Sample records for obtaining accurate values

  1. Obtaining Accurate Probabilities Using Classifier Calibration

    ERIC Educational Resources Information Center

    Pakdaman Naeini, Mahdi

    2016-01-01

    Learning probabilistic classification and prediction models that generate accurate probabilities is essential in many prediction and decision-making tasks in machine learning and data mining. One way to achieve this goal is to post-process the output of classification models to obtain more accurate probabilities. These post-processing methods are…
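
The post-processing this record alludes to can be illustrated with Platt scaling, one standard calibration technique (the abstract does not say which methods the thesis actually proposes, so this is a generic sketch with made-up scores): fit a sigmoid that maps raw classifier scores to probabilities by minimizing log-loss.

```python
import math

def platt_scale(scores, labels, lr=0.01, epochs=2000):
    """Fit p(y=1|s) = 1 / (1 + exp(A*s + B)) by gradient descent on log-loss.

    scores: raw classifier outputs; labels: 0/1 ground truth.
    Returns a function mapping a new score to a calibrated probability.
    """
    A, B = 0.0, 0.0
    n = len(scores)
    for _ in range(epochs):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * s + B))
            # gradient of log-loss w.r.t. A and B for this sample
            gA += (p - y) * (-s)
            gB += (p - y) * (-1.0)
        A -= lr * gA / n
        B -= lr * gB / n
    return lambda s: 1.0 / (1.0 + math.exp(A * s + B))
```

Fit on a held-out calibration set; the returned function then maps any new score to a probability in (0, 1).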

  2. A precise and accurate acupoint location obtained on the face using consistency matrix pointwise fusion method.

    PubMed

    Yang, Xuming; Ye, Yijun; Xia, Yong; Wei, Xuanzhong; Wang, Zheyu; Ni, Hongmei; Zhu, Ying; Xu, Lingyu

    2015-02-01

    To develop a more precise and accurate acupoint location method, and to identify a procedure for verifying whether an acupoint has been correctly located. On the face, we collected acupoint locations from different acupuncture experts and obtained the most precise and accurate acupoint location values using a consistency information fusion algorithm, within a virtual simulation of the facial orientation coordinate system. Because each expert's original data contained inconsistencies, systematic error had to be accounted for in the general weight calculation. First, we corrected each expert's systematic acupoint location error to obtain a rational quantification of the consistency support degree of each expert's acupoint location, yielding pointwise variable-precision fusion results and reducing each expert's acupoint location fusion error to pointwise variable precision. We could then make more effective use of the measured characteristics of the different experts' acupoint locations, improving the measurement information utilization efficiency as well as the precision and accuracy of acupoint location. By applying the consistency matrix pointwise fusion method to the experts' acupoint location values, each expert's acupoint location information could be calculated, and the most precise and accurate values of each expert's acupoint location could be obtained.
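
The consistency-weighted fusion described above can be illustrated, in simplified form, by inverse-variance weighting: experts whose repeated locations are more self-consistent (lower variance) receive larger weights. This is a generic stand-in for the idea, not the paper's consistency-matrix algorithm, and the numbers are made up.

```python
def fuse(measurements, variances):
    """Inverse-variance weighted fusion of one coordinate.

    Experts with more self-consistent (lower-variance) measurements
    get higher weight; the fused variance is smaller than any input.
    """
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * m for w, m in zip(weights, measurements)) / total
    fused_variance = 1.0 / total
    return estimate, fused_variance

# Three hypothetical experts locating one acupoint x-coordinate (mm):
est, var = fuse([10.2, 10.0, 10.8], [0.04, 0.04, 0.36])
```

The fused estimate is pulled toward the two consistent experts, and its variance is below the best individual expert's.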

  3. Development of a Method to Obtain More Accurate General and Oral Health Related Information Retrospectively

    PubMed Central

    A, Golkari; A, Sabokseir; D, Blane; A, Sheiham; RG, Watt

    2017-01-01

    Statement of Problem: Early childhood is a crucial period of life as it affects one’s future health. However, precise data on adverse events during this period is usually hard to access or collect, especially in developing countries. Objectives: This paper first reviews the existing methods for retrospective data collection in health and social sciences, and then introduces a new method/tool for obtaining more accurate general and oral health related information from early childhood retrospectively. Materials and Methods: The Early Childhood Events Life-Grid (ECEL) was developed to collect information on the type and time of health-related adverse events during the early years of life, by questioning the parents. The validity of ECEL and the accuracy of information obtained by this method were assessed in a pilot study and in a main study of 30 parents of 8 to 11 year old children from Shiraz (Iran). Responses obtained from parents using the final ECEL were compared with the recorded health insurance documents. Results: There was an almost perfect agreement between the health insurance and ECEL data sets (Kappa value=0.95 and p < 0.001). Interviewees remembered the important events more accurately (100% exact timing match in case of hospitalization). Conclusions: The Early Childhood Events Life-Grid method proved to be highly accurate when compared with recorded medical documents. PMID:28959773
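
The agreement statistic the study reports can be reproduced mechanically: Cohen's kappa compares observed agreement with the agreement expected by chance from the marginals. A sketch with an illustrative 2x2 table (not the study's actual counts):

```python
def cohen_kappa(table):
    """Cohen's kappa from a square agreement table.

    table[i][j] = count of items rated category i by source A
    (e.g. ECEL interviews) and category j by source B (e.g. insurance records).
    """
    k = len(table)
    n = sum(sum(row) for row in table)
    observed = sum(table[i][i] for i in range(k)) / n
    row_marg = [sum(row) / n for row in table]
    col_marg = [sum(table[i][j] for i in range(k)) / n for j in range(k)]
    expected = sum(r * c for r, c in zip(row_marg, col_marg))
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement gives kappa = 1, chance-level agreement gives kappa = 0; the study's 0.95 is close to the former.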

  4. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    NASA Astrophysics Data System (ADS)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, natural resources have declined, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors offers a practical and cost-effective means for good environmental management. In this context, improvements in the quality of the available information are needed in order to obtain reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of this study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the resulting thematic maps as sources of accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain), Teide National Park, was chosen, and WorldView-2 high-resolution imagery was employed. The classes of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen to improve the data quality for analyzing the vegetation classes. Different classification algorithms were then applied with pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier with the object-based approach to the image fused by Weighted Wavelet `à trous' through Fractal Dimension Maps. Finally, we highlight the difficulty of classification in the Teide ecosystem due to its heterogeneity and the small size of the species. Accurate thematic maps remain important for further studies in the management and conservation of natural resources.

  5. A More Accurate and Efficient Technique Developed for Using Computational Methods to Obtain Helical Traveling-Wave Tube Interaction Impedance

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    1999-01-01

    The phenomenal growth of commercial communications has created a great demand for traveling-wave tube (TWT) amplifiers. Although the helix slow-wave circuit remains the mainstay of the TWT industry because of its exceptionally wide bandwidth, until recently it has been impossible to accurately analyze a helical TWT using its exact dimensions because of the complexity of its geometrical structure. For the first time, an accurate three-dimensional helical model was developed that allows accurate prediction of TWT cold-test characteristics including operating frequency, interaction impedance, and attenuation. This computational model, which was developed at the NASA Lewis Research Center, allows TWT designers to obtain a more accurate value of interaction impedance than is possible using experimental methods. Obtaining helical slow-wave circuit interaction impedance is an important part of the design process for a TWT because it is related to the gain and efficiency of the tube. This impedance cannot be measured directly; thus, conventional methods involve perturbing a helical circuit with a cylindrical dielectric rod placed on the central axis of the circuit and obtaining the difference in resonant frequency between the perturbed and unperturbed circuits. A mathematical relationship has been derived between this frequency difference and the interaction impedance (ref. 1). However, because of the complex configuration of the helical circuit, deriving this relationship involves several approximations. In addition, this experimental procedure is time-consuming and expensive, but until recently it was widely accepted as the most accurate means of determining interaction impedance. The advent of an accurate three-dimensional helical circuit model (ref. 2) made it possible for Lewis researchers to fully investigate standard approximations made in deriving the relationship between measured perturbation data and interaction impedance. 
The most prominent approximations made

  6. Probabilistic techniques for obtaining accurate patient counts in Clinical Data Warehouses

    PubMed Central

    Myers, Risa B.; Herskovic, Jorge R.

    2011-01-01

    Proposal and execution of clinical trials, computation of quality measures and discovery of correlations between medical phenomena are all applications where an accurate count of patients is needed. However, existing sources of this type of patient information, including Clinical Data Warehouses (CDW), may be incomplete or inaccurate. This research explores applying probabilistic techniques, supported by the MayBMS probabilistic database, to obtain accurate patient counts from a clinical data warehouse containing synthetic patient data. We present a synthetic clinical data warehouse (CDW), and populate it with simulated data using a custom patient data generation engine. We then implement, evaluate and compare different techniques for obtaining patient counts. We model billing as a test for the presence of a condition. We compute billing’s sensitivity and specificity both by conducting a “Simulated Expert Review”, where a representative sample of records is reviewed and labeled by experts, and by obtaining the ground truth for every record. We compute the posterior probability of a patient having a condition through a “Bayesian Chain”, using Bayes’ Theorem to calculate the probability of a patient having a condition after each visit. The second method is a “one-shot” approach that computes the probability of a patient having a condition based on whether the patient is ever billed for the condition. Our results demonstrate the utility of probabilistic approaches, which improve on the accuracy of raw counts. In particular, the simulated review paired with a single application of Bayes’ Theorem produces the best results, with an average error rate of 2.1% compared to 43.7% for the straightforward billing counts. Overall, this research demonstrates that Bayesian probabilistic approaches improve patient counts on simulated patient populations. We believe that total patient counts based on billing data are one of the many possible applications of our
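
The "Bayesian Chain" can be sketched as a repeated Bayes'-theorem update in which billing acts as an imperfect diagnostic test; the sensitivity and specificity values below are placeholders, not the paper's measured ones.

```python
def bayes_update(prior, billed, sensitivity, specificity):
    """One Bayes update of P(condition) given whether a visit billed for it.

    Billing is treated as a test: sensitivity = P(billed | condition),
    specificity = P(not billed | no condition).
    """
    if billed:
        like_pos = sensitivity           # P(billed | condition)
        like_neg = 1.0 - specificity     # P(billed | no condition)
    else:
        like_pos = 1.0 - sensitivity
        like_neg = specificity
    numerator = like_pos * prior
    return numerator / (numerator + like_neg * (1.0 - prior))

def chain(prior, visits, sensitivity=0.9, specificity=0.95):
    """Apply the update once per visit (True = billed for the condition)."""
    p = prior
    for billed in visits:
        p = bayes_update(p, billed, sensitivity, specificity)
    return p
```

Two billed visits push a 10% prior above 90%, while an unbilled visit pushes it down; summing these per-patient posteriors gives a probabilistic patient count.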

  7. Accurately Decoding Visual Information from fMRI Data Obtained in a Realistic Virtual Environment

    DTIC Science & Technology

    2015-06-09

    Floren, Andrew; Naylor, Bruce; Miikkulainen, Risto; Ress, David. Accurately decoding visual information from fMRI data obtained in a realistic virtual environment. Front. Hum. Neurosci. 9:327. doi: 10.3389/fnhum.2015.00327. Center for Learning and Memory, The University of Texas at Austin, 100 E 24th Street, Stop C7000, Austin, TX 78712, USA. afloren@utexas.edu

  8. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, Mark W.; George, William A.

    1987-01-01

    A process for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg desired in an electrolyte solution comprised of glacial acetic acid and H.sub.2 O. The mercuric ions are then electrolytically reduced and plated onto a cathode producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg.sub.2 Cl.sub.2. The method for doing this involves dissolving a precise amount of Hg.sub.2 Cl.sub.2 in an electrolyte solution comprised of concentrated HCl and H.sub.2 O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire producing the required, pre-determined quantity of Hg.
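
The stoichiometry behind "dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg" is a molar-mass ratio, and the charge needed to reduce the mercuric ions follows from Faraday's law (assuming 100% current efficiency, which is an idealization, not a claim of the patent).

```python
M_HG = 200.59            # g/mol, mercury
M_O = 16.00              # g/mol, oxygen
M_HGO = M_HG + M_O       # g/mol, mercuric oxide
F = 96485.0              # C/mol, Faraday constant

def hgo_mass_for_hg(target_hg_g):
    """Mass of HgO (g) to dissolve to obtain target_hg_g grams of Hg."""
    return target_hg_g * M_HGO / M_HG

def coulombs_to_plate(target_hg_g, electrons_per_ion=2):
    """Charge (C) to reduce Hg(2+) -> Hg at ideal 100% current efficiency."""
    moles = target_hg_g / M_HG
    return moles * electrons_per_ion * F
```

For 1 g of Hg this gives about 1.08 g of HgO and roughly 960 C of charge (e.g. 0.96 A for 1000 s) at the ideal limit.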

  9. Obtaining accurate amounts of mercury from mercury compounds via electrolytic methods

    DOEpatents

    Grossman, M.W.; George, W.A.

    1987-07-07

    A process is described for obtaining pre-determined, accurate amounts of mercury. In one embodiment, predetermined, precise amounts of Hg are separated from HgO and plated onto a cathode wire. The method for doing this involves dissolving a precise amount of HgO which corresponds to a pre-determined amount of Hg desired in an electrolyte solution comprised of glacial acetic acid and H[sub 2]O. The mercuric ions are then electrolytically reduced and plated onto a cathode producing the required pre-determined quantity of Hg. In another embodiment, pre-determined, precise amounts of Hg are obtained from Hg[sub 2]Cl[sub 2]. The method for doing this involves dissolving a precise amount of Hg[sub 2]Cl[sub 2] in an electrolyte solution comprised of concentrated HCl and H[sub 2]O. The mercurous ions in solution are then electrolytically reduced and plated onto a cathode wire producing the required, pre-determined quantity of Hg. 1 fig.

  10. Guidelines and techniques for obtaining water samples that accurately represent the water chemistry of an aquifer

    USGS Publications Warehouse

    Claassen, Hans C.

    1982-01-01

    Obtaining ground-water samples that accurately represent the water chemistry of an aquifer is a complex task. Before a ground-water sampling program can be started, an understanding of the kind of chemical data needed and the potential changes in water chemistry resulting from various drilling, well-completion, and sampling techniques is needed. This report provides a basis for such an evaluation and permits a choice of techniques that will result in obtaining the best possible data for the time and money allocated.

  11. Glucose Meters: A Review of Technical Challenges to Obtaining Accurate Results

    PubMed Central

    Tonyushkina, Ksenia; Nichols, James H.

    2009-01-01

    , anemia, hypotension, and other disease states. This article reviews the challenges involved in obtaining accurate glucose meter results. PMID:20144348

  12. Measuring the value of accurate link prediction for network seeding.

    PubMed

    Wei, Yijin; Spencer, Gwen

    2017-01-01

    The influence-maximization literature seeks small sets of individuals whose structural placement in the social network can drive large cascades of behavior. Optimization efforts to find the best seed set often assume perfect knowledge of the network topology. Unfortunately, social network links are rarely known in an exact way. When do seeding strategies based on less-than-accurate link prediction provide valuable insight? We introduce optimized-against-a-sample ([Formula: see text]) performance to measure the value of optimizing seeding based on a noisy observation of a network. Our computational study investigates [Formula: see text] under several threshold-spread models in synthetic and real-world networks. Our focus is on measuring the value of imprecise link information. The level of investment in link prediction that is strategic appears to depend closely on spread model: in some parameter ranges investments in improving link prediction can pay substantial premiums in cascade size. For other ranges, such investments would be wasted. Several trends were remarkably consistent across topologies.
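
A threshold-spread model of the kind the study simulates can be sketched as follows: each inactive node activates once the fraction of its already-active neighbors reaches a threshold, and the final cascade size measures how good the seed set was. This is a generic sketch of the model family, not the authors' code.

```python
def cascade_size(adj, seeds, threshold=0.5):
    """Deterministic threshold spread on an undirected graph.

    adj: dict mapping node -> list of neighbor nodes.
    seeds: initially active nodes.
    A node activates once the fraction of its neighbors already
    active reaches `threshold`. Returns the final number of active nodes.
    """
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for node, neighbors in adj.items():
            if node in active or not neighbors:
                continue
            frac = sum(1 for n in neighbors if n in active) / len(neighbors)
            if frac >= threshold:
                active.add(node)
                changed = True
    return len(active)
```

Running this on a noisy observation of the network versus the true network is exactly the comparison the optimized-against-a-sample measure formalizes: a seed set chosen from imprecise links may yield a much smaller cascade on the real graph.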

  13. Cumulative atomic multipole moments complement any atomic charge model to obtain more accurate electrostatic properties

    NASA Technical Reports Server (NTRS)

    Sokalski, W. A.; Shibata, M.; Ornstein, R. L.; Rein, R.

    1992-01-01

    The quality of several atomic charge models based on different definitions has been analyzed using cumulative atomic multipole moments (CAMM). This formalism can generate higher atomic moments starting from any atomic charges, while preserving the corresponding molecular moments. The atomic charge contribution to the higher molecular moments, as well as to the electrostatic potentials, has been examined for CO and HCN molecules at several different levels of theory. The results clearly show that the electrostatic potential obtained from CAMM expansion is convergent up to R-5 term for all atomic charge models used. This illustrates that higher atomic moments can be used to supplement any atomic charge model to obtain more accurate description of electrostatic properties.
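
The lowest-order quantity that any atomic charge model already reproduces is the molecular dipole, the first term preserved by the CAMM construction: mu = sum_i q_i * r_i. A minimal sketch with arbitrary illustrative charges, not values from the paper:

```python
def dipole(charges, coords):
    """Molecular dipole vector from atomic point charges: mu = sum_i q_i * r_i.

    charges: per-atom charges; coords: per-atom (x, y, z) positions.
    Units are whatever the inputs use (e.g. atomic units).
    """
    return tuple(
        sum(q * r[k] for q, r in zip(charges, coords))
        for k in range(3)
    )

# Two opposite illustrative charges 2.0 apart along z:
mu = dipole([0.5, -0.5], [(0.0, 0.0, 0.0), (0.0, 0.0, 2.0)])
```

CAMM then adds higher atomic moments on top of each charge so that the quadrupole, octupole, etc. of the molecule are also reproduced.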

  14. Accurate Gaussian basis sets for atomic and molecular calculations obtained from the generator coordinate method with polynomial discretization.

    PubMed

    Celeste, Ricardo; Maringolo, Milena P; Comar, Moacyr; Viana, Rommel B; Guimarães, Amanda R; Haiduke, Roberto L A; da Silva, Albérico B F

    2015-10-01

    Accurate Gaussian basis sets for atoms from H to Ba were obtained by means of the generator coordinate Hartree-Fock (GCHF) method based on a polynomial expansion to discretize the Griffin-Wheeler-Hartree-Fock equations (GWHF). The discretization of the GWHF equations in this procedure is based on a mesh of points not equally distributed in contrast with the original GCHF method. The results of atomic Hartree-Fock energies demonstrate the capability of these polynomial expansions in designing compact and accurate basis sets to be used in molecular calculations and the maximum error found when compared to numerical values is only 0.788 mHartree for indium. Some test calculations with the B3LYP exchange-correlation functional for N2, F2, CO, NO, HF, and HCN show that total energies within 1.0 to 2.4 mHartree compared to the cc-pV5Z basis sets are attained with our contracted bases with a much smaller number of polarization functions (2p1d and 2d1f for hydrogen and heavier atoms, respectively). Other molecular calculations performed here are also in very good accordance with experimental and cc-pV5Z results. The most important point to be mentioned here is that our generator coordinate basis sets required only a tiny fraction of the computational time when compared to B3LYP/cc-pV5Z calculations.

  15. Obtaining high g-values with low degree expansion of the phasefunction

    NASA Astrophysics Data System (ADS)

    Rinzema, Kees; ten Bosch, Jaap J.; Ferwerda, Hedzer A.; Hoenders, Bernhard J.

    1994-02-01

    Analytic theory of anisotropic random flight requires the expansion of phase functions in spherical harmonics. The number of terms should be limited, while the g value obtained should be as high as possible. We describe how such a phase function can be constructed for a given number N of spherical harmonic components, while obtaining a maximum value of the asymmetry parameter g.
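
In a Legendre expansion p(mu) = sum_l (2l+1) a_l P_l(mu) with a_0 = 1, the asymmetry parameter is simply g = a_1, by orthogonality of the Legendre polynomials. The sketch below checks this numerically, using truncated Henyey-Greenstein coefficients a_l = g^l as an example phase function (one common choice, not necessarily the construction of this paper).

```python
def legendre(l, x):
    """P_l(x) via the Bonnet recurrence:
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    p0, p1 = 1.0, x
    if l == 0:
        return p0
    for n in range(1, l):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1

def asymmetry(coeffs, steps=2000):
    """g = (1/2) * integral_{-1}^{1} mu * p(mu) dmu by the trapezoid rule,
    for p(mu) = sum_l (2l+1) * a_l * P_l(mu). By orthogonality this is a_1."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps + 1):
        mu = -1.0 + i * h
        p = sum((2 * l + 1) * a * legendre(l, mu) for l, a in enumerate(coeffs))
        weight = 0.5 if i in (0, steps) else 1.0
        total += weight * mu * p * h
    return 0.5 * total

# Truncated Henyey-Greenstein with g = 0.9, kept to N = 3 terms:
g = asymmetry([1.0, 0.9, 0.81, 0.729])   # close to 0.9
```

Truncation changes the shape of the phase function but not its l = 1 moment, which is why a low-degree expansion can in principle retain a high g; the paper's contribution is constructing expansions that stay physical (non-negative) while doing so.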

  16. Obtaining the Iodine Value of Various Oils via Bromination with Pyridinium Tribromide

    ERIC Educational Resources Information Center

    Simurdiak, Michael; Olukoga, Olushola; Hedberg, Kirk

    2016-01-01

    A laboratory exercise was devised that allows students to rapidly and fairly accurately determine the iodine value of oleic acid. This method utilizes the addition of elemental bromine to the unsaturated bonds in oleic acid, due to bromine's relatively fast reaction rate compared to that of the traditional Wijs solution method. This method also…
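
The quantity being measured has a simple theoretical value for a pure compound: the iodine value is the grams of I2 absorbed per 100 g of fat, with one mole of I2 consumed per C=C double bond. For oleic acid (one double bond, M ≈ 282.46 g/mol) this works out to about 90.

```python
M_I2 = 253.81   # g/mol, molecular iodine

def theoretical_iodine_value(molar_mass, double_bonds):
    """Theoretical iodine value: grams of I2 absorbed per 100 g of compound,
    assuming one mole of I2 adds across each C=C double bond."""
    return 100.0 * double_bonds * M_I2 / molar_mass

iv_oleic = theoretical_iodine_value(282.46, 1)      # ~89.9 for oleic acid
iv_linoleic = theoretical_iodine_value(280.45, 2)   # ~181 for linoleic acid
```

Students' bromination results can be compared against these theoretical values to judge the accuracy of the pyridinium tribromide method.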

  17. Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data

    PubMed Central

    Kang, D.-K.; Silvescu, A.; Honavar, V.

    2009-01-01

    In many application domains, there is a need for learning algorithms that can effectively exploit attribute value taxonomies (AVT)—hierarchical groupings of attribute values—to learn compact, comprehensible and accurate classifiers from data—including data that are partially specified. This paper describes AVT-NBL, a natural generalization of the naïve Bayes learner (NBL), for learning classifiers from AVT and data. Our experimental results show that AVT-NBL is able to generate classifiers that are substantially more compact and more accurate than those produced by NBL on a broad range of data sets with different percentages of partially specified values. We also show that AVT-NBL is more efficient in its use of training data: AVT-NBL produces classifiers that outperform those produced by NBL using substantially fewer training examples. PMID:20351793

  18. The identification of complete domains within protein sequences using accurate E-values for semi-global alignment

    PubMed Central

    Kann, Maricel G.; Sheetlin, Sergey L.; Park, Yonil; Bryant, Stephen H.; Spouge, John L.

    2007-01-01

    The sequencing of complete genomes has created a pressing need for automated annotation of gene function. Because domains are the basic units of protein function and evolution, a gene can be annotated from a domain database by aligning domains to the corresponding protein sequence. Ideally, complete domains are aligned to protein subsequences, in a ‘semi-global alignment’. Local alignment, which aligns pieces of domains to subsequences, is common in high-throughput annotation applications, however. It is a mature technique, with the heuristics and accurate E-values required for screening large databases and evaluating the screening results. Hidden Markov models (HMMs) provide an alternative theoretical framework for semi-global alignment, but their use is limited because they lack heuristic acceleration and accurate E-values. Our new tool, GLOBAL, overcomes some limitations of previous semi-global HMMs: it has accurate E-values and the possibility of the heuristic acceleration required for high-throughput applications. Moreover, according to a standard of truth based on protein structure, two semi-global HMM alignment tools (GLOBAL and HMMer) had comparable performance in identifying complete domains, but distinctly outperformed two tools based on local alignment. When searching for complete protein domains, therefore, GLOBAL avoids disadvantages commonly associated with HMMs, yet maintains their superior retrieval performance. PMID:17596268

  19. A new look on anomalous thermal gradient values obtained in South Portugal

    NASA Astrophysics Data System (ADS)

    Duque, M. R.; Malico, I.

    2012-04-01

    It is well known that soil temperatures can be altered by water circulation. In this paper, we study this effect numerically by simulating some aquifers occurring in South Portugal. At this location, the thermal gradient values obtained in boreholes less than 200 m deep range between 22 and 30 °C km-1. However, it is easy to find places there where temperatures are around 30 °C at depths of 100 m. The obtained thermal gradient values show an increase one day after raining and a decrease during the dry season. Additionally, the curve of temperature as a function of depth showed no hot water inlet in the hole. The region studied shows a smooth topography due to intensive erosion, but it was affected by the Alpine and Hercynian orogenies. As a result, a high topography at depth, with folds and wrinkles, is present. The space between adjacent folds is now filled by small sedimentary basins. Aquifers existing in this region can reach considerable depths and return to depths near the surface, but hot springs in the area are scarce. Water temperature rises with depth, and when the flow speed is high enough, high temperatures near the surface, due to water circulation, can be found. The ability of the fluid to flow through the system depends on topographic relief, rock permeability and basal heat flow. In this study, steady-state fluid flow and heat transfer by conduction and advection are modeled. Fractures in the medium are simulated by an equivalent porous medium saturated with liquid. Thermal conductivity values for the water and the rocks can vary in space. Porosities used have high values in the region of the aquifer, low values in the lower region of the model and intermediate values in the upper regions. The results obtained show that temperature anomaly values

  20. Latest Developments on Obtaining Accurate Measurements with Pitot Tubes in ZPG Turbulent Boundary Layers

    NASA Astrophysics Data System (ADS)

    Nagib, Hassan; Vinuesa, Ricardo

    2013-11-01

    The ability of available Pitot tube corrections to provide accurate mean velocity profiles in ZPG boundary layers is re-examined following the recent work by Bailey et al. Measurements by Bailey et al., carried out with probes of diameters ranging from 0.2 to 1.89 mm, together with new data taken with larger diameters up to 12.82 mm, show deviations with respect to available high-quality datasets and hot-wire measurements in the same Reynolds number range. These deviations are significant in the buffer region around y+ = 30 - 40, and lead to disagreement in the von Kármán coefficient κ extracted from the profiles. New forms for the shear, near-wall and turbulence corrections are proposed, highlighting the importance of the last of these. Improved agreement in mean velocity profiles is obtained with the new forms, where the shear and near-wall corrections contribute around 85%, and the remaining 15% of the total correction comes from the turbulence correction. Finally, available algorithms to correct wall position in profile measurements of wall-bounded flows are tested, using as benchmark the corrected Pitot measurements with artificially simulated probe shifts and blockage effects. We develop a new scheme, κB - Musker, which is able to accurately locate the wall position.
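
All of these corrections are adjustments on top of the basic incompressible Pitot relation, u = sqrt(2*(p0 - p)/rho), which converts the measured total/static pressure difference into a velocity:

```python
import math

def pitot_velocity(p_total, p_static, rho):
    """Incompressible flow speed from Pitot readings:
    u = sqrt(2 * (p_total - p_static) / rho).

    p_total, p_static in Pa; rho (fluid density) in kg/m^3; returns m/s.
    """
    return math.sqrt(2.0 * (p_total - p_static) / rho)

# 60 Pa dynamic pressure in air (rho ~ 1.2 kg/m^3) -> 10 m/s
u = pitot_velocity(101385.0, 101325.0, 1.2)
```

Shear, near-wall, and turbulence corrections then perturb either the effective probe position or the effective dynamic pressure around this baseline.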

  1. Toward accurate prediction of pKa values for internal protein residues: the importance of conformational relaxation and desolvation energy.

    PubMed

    Wallace, Jason A; Wang, Yuhang; Shi, Chuanyin; Pastoor, Kevin J; Nguyen, Bao-Linh; Xia, Kai; Shen, Jana K

    2011-12-01

    Proton uptake or release controls many important biological processes, such as energy transduction, virus replication, and catalysis. Accurate pK(a) prediction informs about proton pathways, thereby revealing detailed acid-base mechanisms. Physics-based methods in the framework of molecular dynamics simulations not only offer pK(a) predictions but also inform about the physical origins of pK(a) shifts and provide details of ionization-induced conformational relaxation and large-scale transitions. One such method is the recently developed continuous constant pH molecular dynamics (CPHMD) method, which has been shown to be an accurate and robust pK(a) prediction tool for naturally occurring titratable residues. To further examine the accuracy and limitations of CPHMD, we blindly predicted the pK(a) values for 87 titratable residues introduced in various hydrophobic regions of staphylococcal nuclease and variants. The predictions gave a root-mean-square deviation of 1.69 pK units from experiment, and there were only two pK(a)'s with errors greater than 3.5 pK units. Analysis of the conformational fluctuation of titrating side-chains in the context of the errors of calculated pK(a) values indicates that explicit treatment of conformational flexibility and the associated dielectric relaxation gives CPHMD a distinct advantage. Analysis of the sources of errors suggests that more accurate pK(a) predictions can be obtained for the most deeply buried residues by improving the accuracy in calculating desolvation energies. Furthermore, it is found that the generalized Born implicit-solvent model underlying the current CPHMD implementation slightly distorts the local conformational environment such that the inclusion of an explicit-solvent representation may offer improvement of accuracy. Copyright © 2011 Wiley-Liss, Inc.
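
The practical meaning of a pKa value follows from the Henderson-Hasselbalch relation, pH = pKa + log10([A-]/[HA]): it fixes the protonation fraction of the residue at any given pH, so a shifted pKa changes the residue's charge state. A minimal sketch:

```python
def protonated_fraction(pka, ph):
    """Fraction of an acidic group in the protonated (HA) form at a given pH,
    from Henderson-Hasselbalch: pH = pKa + log10([A-]/[HA])."""
    ratio = 10.0 ** (ph - pka)   # [A-]/[HA]
    return 1.0 / (1.0 + ratio)
```

At pH = pKa the group is half protonated; three pH units above the pKa it is essentially fully deprotonated, which is why a few-unit pKa prediction error can flip the predicted charge of a buried residue at physiological pH.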

  2. Total sperm per ejaculate of men: obtaining a meaningful value or a mean value with appropriate precision.

    PubMed

    Amann, Rupert P; Chapman, Phillip L

    2009-01-01

    We retrospectively mined and modeled data to answer 3 questions. 1) Relative to an estimate based on approximately 20 semen samples, how imprecise is an estimate of an individual's total sperm per ejaculate (TSperm) based on 1 sample? 2) What is the impact of abstinence interval on TSperm and TSperm/h? 3) How many samples are needed to provide a meaningful estimate of an individual's mean TSperm or TSperm/h? Data were for 18-20 consecutive masturbation samples from each of 48 semen donors. Modeling exploited the gamma distribution of values for TSperm and a unique approach to project to future samples. Answers: 1) Within-individual coefficients of variation were similar for TSperm and TSperm/h and ranged from 17% to 51%, averaging approximately 34%. TSperm or TSperm/h in any individual sample from a given donor was between -20% and +20% of the mean value in 48% of 18-20 samples per individual. 2) For a majority of individuals, TSperm increased in a nearly linear manner through approximately 72 hours of abstinence. TSperm and TSperm/h after 18-36 hours' abstinence are high. To obtain meaningful values for diagnostic purposes and maximize distinction of individuals with relatively low or high sperm production, the requested abstinence should be 42-54 hours with an upper limit of 64 hours. For individuals producing few sperm, 7 days or more of abstinence might be appropriate to obtain sperm for insemination. 3) At least 3 samples from a hypothetical future subject are recommended for most applications. Assuming 60 hours' abstinence, 80% confidence limits for TSperm/h for 1, 3, or 6 samples would be 70%-163%, 80%-130%, or 85%-120% of the mean for observed values. In only approximately 50% of cases would TSperm/h for a single sample be within -16% and +30% of the true mean value for that subject. Pooling values for TSperm in samples obtained after 18-36 or 72-168 hours' abstinence with values for TSperm obtained after 42-64 hours is inappropriate. Reliance on
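
The narrowing of confidence limits with more samples can be approximated crudely, using a normal approximation rather than the authors' gamma model, as a relative half-width of z * CV / sqrt(n):

```python
import math

def rel_ci_halfwidth(cv, n, z=1.282):
    """Approximate relative confidence half-width for the mean of n samples.

    Normal approximation: half-width = z * CV / sqrt(n); z = 1.282 gives
    an 80% interval. This is a rough stand-in for the paper's gamma-based
    (and therefore asymmetric) limits.
    """
    return z * cv / math.sqrt(n)
```

With the reported average CV of 34%, one sample gives roughly a ±44% half-width, broadly consistent in scale with the paper's asymmetric 70%-163% limits for a single sample; six samples shrink it below ±18%.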

  3. Compensation method for obtaining accurate, sub-micrometer displacement measurements of immersed specimens using electronic speckle interferometry.

    PubMed

    Fazio, Massimo A; Bruno, Luigi; Reynaud, Juan F; Poggialini, Andrea; Downs, J Crawford

    2012-03-01

    We proposed and validated a compensation method that accounts for the optical distortion inherent in measuring displacements on specimens immersed in aqueous solution. A spherically-shaped rubber specimen was mounted and pressurized on a custom apparatus, with the resulting surface displacements recorded using electronic speckle pattern interferometry (ESPI). Point-to-point light direction computation is achieved by a ray-tracing strategy coupled with customized B-spline-based analytical representation of the specimen shape. The compensation method reduced the mean magnitude of the displacement error induced by the optical distortion from 35% to 3%, and ESPI displacement measurement repeatability showed a mean variance of 16 nm at the 95% confidence level for immersed specimens. The ESPI interferometer and numerical data analysis procedure presented herein provide reliable, accurate, and repeatable measurement of sub-micrometer deformations obtained from pressurization tests of spherically-shaped specimens immersed in aqueous salt solution. This method can be used to quantify small deformations in biological tissue samples under load, while maintaining the hydration necessary to ensure accurate material property assessment.
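
The optical distortion being compensated arises from refraction at the air/solution interface; the per-ray direction computation rests on Snell's law, n1*sin(theta1) = n2*sin(theta2). A minimal sketch (the refractive index below assumes a water-like aqueous solution, not a value from the paper):

```python
import math

def refracted_angle(theta_incident_deg, n1=1.000, n2=1.333):
    """Snell's law at a flat interface: n1*sin(t1) = n2*sin(t2).

    Defaults model an air (n1) to water-like solution (n2) interface.
    Returns the refracted angle in degrees.
    """
    s = n1 * math.sin(math.radians(theta_incident_deg)) / n2
    return math.degrees(math.asin(s))
```

A ray arriving at 30 degrees in air continues at about 22 degrees in the solution; accumulating such bends along each camera ray is what the B-spline-based ray-tracing compensation corrects for.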

  4. Accuracy Evaluation of the Unified P-Value from Combining Correlated P-Values

    PubMed Central

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Meta-analysis methods that combine p-values into a single unified p-value are frequently employed to improve confidence in hypothesis testing. An assumption made by most meta-analysis methods is that the p-values to be combined are independent, which may not always be true. To investigate the accuracy of the unified p-value from combining correlated p-values, we have evaluated a family of statistical methods that combine: independent, weighted independent, correlated, and weighted correlated p-values. Statistical accuracy evaluation by combining simulated correlated p-values showed that correlation among p-values can have a significant effect on the accuracy of the combined p-value obtained. Among the statistical methods evaluated, those that weight p-values compute more accurate combined p-values than those that do not. Also, statistical methods that utilize the correlation information have the best performance, producing significantly more accurate combined p-values. In our study we have demonstrated that statistical methods that combine p-values based on the assumption of independence can produce inaccurate p-values when combining correlated p-values, even when the p-values are only weakly correlated. Therefore, to prevent drawing false conclusions during hypothesis testing, our study advises caution when interpreting a p-value obtained from combining p-values of unknown correlation. However, when the correlation information is available, the weighting-capable statistical method, first introduced by Brown and recently modified by Hou, seems to perform the best amongst the methods investigated. PMID:24663491
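
The simplest of the independence-assuming combiners in this family is Fisher's method: X = -2 * sum(ln p_i) follows a chi-square distribution with 2k degrees of freedom under the null. For even degrees of freedom the chi-square survival function has a closed form, so the method fits in a few standard-library lines:

```python
import math

def fisher_combine(pvalues):
    """Fisher's method for combining k independent p-values.

    X = -2 * sum(ln p_i) is chi-square with 2k degrees of freedom under
    the null; for even df the survival function has the closed form
    sf(x; 2k) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!.
    """
    k = len(pvalues)
    x = -2.0 * sum(math.log(p) for p in pvalues)
    half = x / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total
```

Combining two p-values of 0.1 yields about 0.056; the study's point is that applying this formula to correlated p-values (where Brown's and Hou's corrections are needed) gives misleadingly small results.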

  5. Accurate collision-induced line-coupling parameters for the fundamental band of CO in He - Close coupling and coupled states scattering calculations

    NASA Technical Reports Server (NTRS)

    Green, Sheldon; Boissoles, J.; Boulet, C.

    1988-01-01

    The first accurate theoretical values for off-diagonal (i.e., line-coupling) pressure-broadening cross sections are presented. Calculations were done for CO perturbed by He at thermal collision energies using an accurate ab initio potential energy surface. Converged close coupling, i.e., numerically exact values, were obtained for coupling to the R(0) and R(2) lines. These were used to test the coupled states (CS) and infinite order sudden (IOS) approximate scattering methods. CS was found to be of quantitative accuracy (a few percent) and has been used to obtain coupling values for lines to R(10). IOS values are less accurate, but, owing to their simplicity, may nonetheless prove useful as has been recently demonstrated.

  6. [Value of liquid-based cytology of brushing specimens obtained via fiberoptic bronchoscopy for the diagnosis of lung cancer].

    PubMed

    Zhao, Huan; Guo, Huiqin; Zhang, Chuanxin; Zhao, Linlin; Cao, Jian; Pan, Qinjing

    2015-06-01

To investigate the value of liquid-based cytology (LBC) of brushing specimens obtained via fiberoptic bronchoscopy for the clinical diagnosis of lung cancer, we retrospectively analyzed the LBC cases in our hospital from January 2011 to May 2012 and evaluated their role in the diagnosis of lung cancer. The clinical data of a total of 4 380 cases were reviewed, and 3 763 of them had histopathological or clinical follow-up results (3 306 lung cancer cases and 457 benign lesion cases). The sensitivity, specificity, and accuracy of LBC diagnosis for lung cancer were 72.4% (2 392/3 306), 99.3% (454/457) and 75.6% (2 846/3 763), respectively. Of the 1 992 lung cancer cases diagnosed by brushing LBC, 528 cases (26.5%) had no forceps biopsy taken and 113 cases (5.7%) showed negative forceps biopsy results. The accuracy of LBC subtyping between non-small cell carcinoma and small cell carcinoma was 99.0% (1 487/1 502) (P < 0.001). Taking resection histopathology as the gold standard, the accuracy of subtyping squamous cell carcinoma, adenocarcinoma and small cell carcinoma by LBC was 95.6% (351/367), 95.6% (351/367) and 100% (367/367), respectively (P < 0.001). The accuracy of subtyping squamous cell carcinoma, adenocarcinoma and small cell carcinoma by forceps biopsy was 97.0% (293/302), 97.4% (294/302) and 99.7% (301/302), respectively (Kappa = 0.895, P < 0.001). There was no significant difference in subtyping between forceps biopsy and brushing LBC (P > 0.05). Fiberoptic bronchoscopic brushing liquid-based cytology can significantly improve the detection rate of lung cancer, and it has high specificity and subtyping accuracy. It is an effective tool for the diagnosis and subtyping of lung cancer.
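The sensitivity, specificity, and accuracy figures above follow directly from the reported counts; a small illustrative sketch (Python assumed, function name hypothetical):

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Standard screening-test metrics from a 2x2 confusion table."""
    sensitivity = tp / (tp + fn)          # detected cancers / all cancers
    specificity = tn / (tn + fp)          # correct negatives / all benign lesions
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

# Counts reported in the abstract: 2392 of 3306 cancers detected,
# 454 of 457 benign lesions correctly called negative.
sens, spec, acc = diagnostic_metrics(tp=2392, fn=3306 - 2392, tn=454, fp=457 - 454)
# -> 72.4%, 99.3%, 75.6%, matching the reported values
```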

  7. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    PubMed

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis of transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
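The rank product statistic itself is simple; what is expensive is its p-value. A hedged sketch (Python illustration, not the paper's algorithm) of the statistic and the naive permutation baseline that the bounds and approximation replace:

```python
import math
import random

def rank_product(ranks):
    """Geometric mean of a molecule's ranks across k replicates."""
    k = len(ranks)
    return math.exp(sum(math.log(r) for r in ranks) / k)

def permutation_pvalue(observed_ranks, n_molecules, n_perm=10000, seed=0):
    """Naive permutation p-value: fraction of random rank vectors whose
    rank product is at most the observed one.  This is the computationally
    burdensome baseline; the paper's bounds avoid it entirely."""
    rng = random.Random(seed)
    k = len(observed_ranks)
    obs = rank_product(observed_ranks)
    hits = sum(
        1 for _ in range(n_perm)
        if rank_product([rng.randint(1, n_molecules) for _ in range(k)]) <= obs
    )
    return (hits + 1) / (n_perm + 1)  # add-one smoothing avoids p = 0
```

Note the tail problem the abstract describes: a molecule ranked near the top in every replicate has a true p-value far below 1/n_perm, which a feasible number of permutations cannot resolve.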

  8. Accurate determinations of alpha(s) from realistic lattice QCD.

    PubMed

    Mason, Q; Trottier, H D; Davies, C T H; Foley, K; Gray, A; Lepage, G P; Nobes, M; Shigemitsu, J

    2005-07-29

We obtain a new value for the QCD coupling constant by combining lattice QCD simulations with experimental data for hadron masses. Our lattice analysis is the first to (1) include vacuum polarization effects from all three light-quark flavors (using MILC configurations), (2) include third-order terms in perturbation theory, (3) systematically estimate fourth- and higher-order terms, (4) use an unambiguous lattice spacing, and (5) use an O(a²)-accurate QCD action. We use 28 different (but related) short-distance quantities to obtain alpha_MS^(5)(M_Z) = 0.1170(12).

  9. Third-Order Incremental Dual-Basis Set Zero-Buffer Approach: An Accurate and Efficient Way To Obtain CCSD and CCSD(T) Energies.

    PubMed

    Zhang, Jun; Dolg, Michael

    2013-07-09

An efficient way to obtain accurate CCSD and CCSD(T) energies for large systems, i.e., the third-order incremental dual-basis set zero-buffer approach (inc3-db-B0), has been developed and tested. This approach combines the powerful incremental scheme with the dual-basis set method, and along with the newly proposed K-means clustering (KM) method and zero-buffer (B0) approximation, can obtain very accurate absolute and relative energies efficiently. We tested the approach for 10 systems of different chemical nature, i.e., intermolecular interactions including hydrogen bonding, dispersion interaction, and halogen bonding; an intramolecular rearrangement reaction; aliphatic and conjugated hydrocarbon chains; three compact covalent molecules; and a water cluster. The results show that the errors in relative energies are <1.94 kJ/mol (or 0.46 kcal/mol), and in absolute energies <0.0026 hartree. By parallelization, our approach can be applied to molecules of more than 30 atoms and more than 100 correlated electrons with high-quality basis sets such as cc-pVDZ or cc-pVTZ, saving computational cost by a factor of more than 10-20 compared to a traditional implementation. The physical reasons for the success of the inc3-db-B0 approach are also analyzed.

  10. Clinical utility of apparent diffusion coefficient values obtained using high b-value when diagnosing prostate cancer using 3 tesla MRI: comparison between ultra-high b-value (2000 s/mm²) and standard high b-value (1000 s/mm²).

    PubMed

    Kitajima, Kazuhiro; Takahashi, Satoru; Ueno, Yoshiko; Yoshikawa, Takeshi; Ohno, Yoshiharu; Obara, Makoto; Miyake, Hideaki; Fujisawa, Masato; Sugimura, Kazuro

    2012-07-01

To determine whether the apparent diffusion coefficient (ADC) obtained using b = 2000 s/mm² on 3 Tesla (T) diffusion-weighted MRI is superior to b = 1000 s/mm² for discriminating malignant from normal prostate tissue and predicting the aggressiveness of prostate cancer, using histopathological findings of radical prostatectomy as a reference. Eighty prostate cancer patients underwent preoperative 3T MRI including diffusion-weighted imaging with b-values of 0, 1000, and 2000 s/mm². ADCs were measured for malignant lesions and normal sites on three sets of ADC maps calculated with monoexponential fitting between b = 0 and 1000, 0 and 2000, and 1000 and 2000, respectively. The relationship between the ADC and Gleason score was evaluated. The areas under the ROC curves for b = 0,1000, b = 0,2000, and b = 1000,2000 were 0.896, 0.937, and 0.857, respectively, in the peripheral zone (PZ) and 0.877, 0.889, and 0.731, respectively, in the transition zone (TZ). The difference between b = 0,1000 and b = 0,2000 was significant in the PZ (P = 0.033), but not in the TZ (P = 0.84). Weak but significant negative correlations were identified between ADCs and Gleason score in both PZ and TZ cancer at b = 0,1000 and b = 0,2000 (r = -0.323 to -0.341). For 3T MRI, ADCs using b = 0,2000 are more accurate than b = 0,1000 for diagnosing PZ cancer, and as accurate for TZ cancer. Copyright © 2012 Wiley Periodicals, Inc.
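The monoexponential ADC fit between any two b-values reduces to a log-ratio of signals; a minimal sketch (Python illustration, not the study's software):

```python
import math

def adc(s_b1, s_b2, b1, b2):
    """Apparent diffusion coefficient (mm^2/s) from the monoexponential
    model S(b) = S0 * exp(-b * ADC), fit between two b-values (s/mm^2)."""
    return math.log(s_b1 / s_b2) / (b2 - b1)
```

With three acquired b-values (0, 1000, 2000 s/mm²), the three possible pairs give exactly the three ADC map sets compared in the study (b = 0,1000; b = 0,2000; b = 1000,2000). A synthetic signal decaying with a known ADC is recovered exactly by this formula.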

  11. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops.

    PubMed

    Bengochea-Guevara, José M; Andújar, Dionisio; Sanchez-Sardana, Francisco L; Cantuña, Karla; Ribeiro, Angela

    2017-12-24

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, "on ground crop inspection" potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. "On ground monitoring" is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows.

  12. A Low-Cost Approach to Automatically Obtain Accurate 3D Models of Woody Crops

    PubMed Central

    Andújar, Dionisio; Sanchez-Sardana, Francisco L.; Cantuña, Karla

    2017-01-01

    Crop monitoring is an essential practice within the field of precision agriculture since it is based on observing, measuring and properly responding to inter- and intra-field variability. In particular, “on ground crop inspection” potentially allows early detection of certain crop problems or precision treatment to be carried out simultaneously with pest detection. “On ground monitoring” is also of great interest for woody crops. This paper explores the development of a low-cost crop monitoring system that can automatically create accurate 3D models (clouds of coloured points) of woody crop rows. The system consists of a mobile platform that allows the easy acquisition of information in the field at an average speed of 3 km/h. The platform, among others, integrates an RGB-D sensor that provides RGB information as well as an array with the distances to the objects closest to the sensor. The RGB-D information plus the geographical positions of relevant points, such as the starting and the ending points of the row, allow the generation of a 3D reconstruction of a woody crop row in which all the points of the cloud have a geographical location as well as the RGB colour values. The proposed approach for the automatic 3D reconstruction is not limited by the size of the sampled space and includes a method for the removal of the drift that appears in the reconstruction of large crop rows. PMID:29295536

  13. Accurate oscillator strengths for interstellar ultraviolet lines of Cl I

    NASA Technical Reports Server (NTRS)

    Schectman, R. M.; Federman, S. R.; Beideck, D. J.; Ellis, D. J.

    1993-01-01

    Analyses on the abundance of interstellar chlorine rely on accurate oscillator strengths for ultraviolet transitions. Beam-foil spectroscopy was used to obtain f-values for the astrophysically important lines of Cl I at 1088, 1097, and 1347 A. In addition, the line at 1363 A was studied. Our f-values for 1088, 1097 A represent the first laboratory measurements for these lines; the values are f(1088)=0.081 +/- 0.007 (1 sigma) and f(1097) = 0.0088 +/- 0.0013 (1 sigma). These results resolve the issue regarding the relative strengths for 1088, 1097 A in favor of those suggested by astronomical measurements. For the other lines, our results of f(1347) = 0.153 +/- 0.011 (1 sigma) and f(1363) = 0.055 +/- 0.004 (1 sigma) are the most precisely measured values available. The f-values are somewhat greater than previous experimental and theoretical determinations.

  14. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 41 Public Contracts and Property Management 3 2010-07-01 2010-07-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  15. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 41 Public Contracts and Property Management 3 2012-01-01 2012-01-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  16. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 41 Public Contracts and Property Management 3 2014-01-01 2014-01-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  17. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 41 Public Contracts and Property Management 3 2013-07-01 2013-07-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  18. 41 CFR 102-75.305 - What type of appraisal value must be obtained for real property disposal transactions?

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 41 Public Contracts and Property Management 3 2011-01-01 2011-01-01 false What type of appraisal value must be obtained for real property disposal transactions? 102-75.305 Section 102-75.305 Public...-75.305 What type of appraisal value must be obtained for real property disposal transactions? For all...

  19. Accurate determination of the charge transfer efficiency of photoanodes for solar water splitting.

    PubMed

    Klotz, Dino; Grave, Daniel A; Rothschild, Avner

    2017-08-09

The oxygen evolution reaction (OER) at the surface of semiconductor photoanodes is critical for photoelectrochemical water splitting. This reaction involves photo-generated holes that oxidize water via charge transfer at the photoanode/electrolyte interface. However, a certain fraction of the holes that reach the surface recombine with electrons from the conduction band, giving rise to the surface recombination loss. The charge transfer efficiency, η_t, defined as the ratio between the flux of holes that contribute to the water oxidation reaction and the total flux of holes that reach the surface, is an important parameter that helps to distinguish between bulk and surface recombination losses. However, accurate determination of η_t by conventional voltammetry measurements is complicated because only the total current is measured and it is difficult to discern between different contributions to the current. Chopped light measurement (CLM) and hole scavenger measurement (HSM) techniques are widely employed to determine η_t, but they often lead to errors resulting from instrumental as well as fundamental limitations. Intensity modulated photocurrent spectroscopy (IMPS) is better suited for accurate determination of η_t because it provides direct information on both the total photocurrent and the surface recombination current. However, careful analysis of IMPS measurements at different light intensities is required to account for nonlinear effects. This work compares the η_t values obtained by these methods using heteroepitaxial thin-film hematite photoanodes as a case study. We show that a wide spread of η_t values is obtained by different analysis methods, and even within the same method different values may be obtained depending on instrumental and experimental conditions such as the light source and light intensity. Statistical analysis of the results obtained for our model hematite photoanode shows good correlation between different methods for
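In the rate-constant picture the definition of the charge transfer efficiency reduces to a simple ratio, and IMPS commonly estimates it from its two frequency-limit intercepts; a hedged sketch (a Python illustration of the standard relations, with hypothetical names, not the paper's analysis code):

```python
def eta_t_from_rates(k_t, k_r):
    """Charge transfer efficiency from surface rate constants:
    charge transfer (k_t) competing with surface recombination (k_r)."""
    return k_t / (k_t + k_r)

def eta_t_from_imps(j_low_freq, j_high_freq):
    """IMPS estimate: the high-frequency intercept reflects the total
    hole flux reaching the surface, the low-frequency intercept the
    steady-state (transferred) photocurrent, so their ratio gives eta_t."""
    return j_low_freq / j_high_freq
```

The spread reported above can then be read as different experimental routes to the same underlying ratio, each with its own instrumental biases.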

  20. Accurate Determination of the Values of Fundamental Physical Constants: The Basis of the New "Quantum" SI Units

    NASA Astrophysics Data System (ADS)

    Karshenboim, S. G.

    2018-03-01

    The metric system appeared as the system of units designed for macroscopic (laboratory scale) measurements. The progress in accurate determination of the values of quantum constants (such as the Planck constant) in SI units shows that the capabilities in high-precision measurement of microscopic and macroscopic quantities in terms of the same units have increased substantially recently. At the same time, relative microscopic measurements (for example, the comparison of atomic transition frequencies or atomic masses) are often much more accurate than relative measurements of macroscopic quantities. This is the basis for the strategy to define units in microscopic phenomena and then use them on the laboratory scale, which plays a crucial role in practical methodological applications determined by everyday life and technologies. The international CODATA task group on fundamental constants regularly performs an overall analysis of the precision world data (the so-called Adjustment of the Fundamental Constants) and publishes their recommended values. The most recent evaluation was based on the data published by the end of 2014; here, we review the corresponding data and results. The accuracy in determination of the Boltzmann constant has increased, the consistency of the data on determination of the Planck constant has improved; it is these two dimensional constants that will be used in near future as the basis for the new definition of the kelvin and kilogram, respectively. The contradictions in determination of the Rydberg constant and the proton charge radius remain. The accuracy of determination of the fine structure constant and relative atomic weight of the electron has improved. Overall, we give a detailed review of the state of the art in precision determination of the values of fundamental constants. The mathematical procedure of the Adjustment, the new data and results are considered in detail. The limitations due to macroscopic properties of material

  1. The determination of accurate dipole polarizabilities alpha and gamma for the noble gases

    NASA Technical Reports Server (NTRS)

    Rice, Julia E.; Taylor, Peter R.; Lee, Timothy J.; Almlof, Jan

    1991-01-01

Accurate static dipole polarizabilities alpha and gamma of the noble gases He through Xe were determined using wave functions of similar quality for each system. Good agreement with experimental data for the static polarizability gamma was obtained for Ne and Xe, but not for Ar and Kr. Calculations suggest that the experimental values for these latter gases are too low.

  2. A carbon CT system: how to obtain accurate stopping power ratio using a Bragg peak reduction technique

    NASA Astrophysics Data System (ADS)

    Lee, Sung Hyun; Sunaguchi, Naoki; Hirano, Yoshiyuki; Kano, Yosuke; Liu, Chang; Torikoshi, Masami; Ohno, Tatsuya; Nakano, Takashi; Kanai, Tatsuaki

    2018-02-01

    In this study, we investigate the performance of the Gunma University Heavy Ion Medical Center’s ion computed tomography (CT) system, which measures the residual range of a carbon-ion beam using a fluoroscopy screen, a charge-coupled-device camera, and a moving wedge absorber and collects CT reconstruction images from each projection angle. Each 2D image was obtained by changing the polymethyl methacrylate (PMMA) thickness, such that all images for one projection could be expressed as the depth distribution in PMMA. The residual range as a function of PMMA depth was related to the range in water through a calibration factor, which was determined by comparing the PMMA-equivalent thickness measured by the ion CT system to the water-equivalent thickness measured by a water column. Aluminium, graphite, PMMA, and five biological phantoms were placed in a sample holder, and the residual range for each was quantified simultaneously. A novel method of CT reconstruction to correct for the angular deflection of incident carbon ions in the heterogeneous region utilising the Bragg peak reduction (BPR) is also introduced in this paper, and its performance is compared with other methods present in the literature such as the decomposition and differential methods. Stopping power ratio values derived with the BPR method from carbon-ion CT images matched closely with the true water-equivalent length values obtained from the validation slab experiment.

  3. Determining accurate distances to nearby galaxies

    NASA Astrophysics Data System (ADS)

    Bonanos, Alceste Zoe

    2005-11-01

Determining accurate distances to nearby or distant galaxies is a conceptually very simple, yet in practice complicated, task. Presently, distances to nearby galaxies are only known to an accuracy of 10-15%. The current anchor galaxy of the extragalactic distance scale is the Large Magellanic Cloud, which has large (10-15%) systematic uncertainties associated with it, because of its morphology, its non-uniform reddening and the unknown metallicity dependence of the Cepheid period-luminosity relation. This work aims to determine accurate distances to some nearby galaxies, and subsequently help reduce the error in the extragalactic distance scale and the Hubble constant H_0. In particular, this work presents the first distance determination of the DIRECT Project to M33 with detached eclipsing binaries. DIRECT aims to obtain a new anchor galaxy for the extragalactic distance scale by measuring direct, accurate (to 5%) distances to two Local Group galaxies, M31 and M33, with detached eclipsing binaries. It involves a massive variability survey of these galaxies and subsequent photometric and spectroscopic follow-up of the detached binaries discovered. In this work, I also present a catalog of variable stars discovered in one of the DIRECT fields, M31Y, which includes 41 eclipsing binaries. Additionally, we derive the distance to the Draco Dwarf Spheroidal galaxy, using ~100 RR Lyrae stars found in our first CCD variability study of this galaxy. A "hybrid" method of discovering Cepheids with ground-based telescopes is described next. It involves applying the image subtraction technique on the images obtained from ground-based telescopes and then following them up with the Hubble Space Telescope to derive Cepheid period-luminosity distances. By re-analyzing ESO Very Large Telescope data on M83 (NGC 5236), we demonstrate that this method is much more powerful for detecting variability, especially in crowded fields.
I finally present photometry for the Wolf-Rayet binary WR 20a

  4. NNLOPS accurate associated HW production

    NASA Astrophysics Data System (ADS)

    Astill, William; Bizon, Wojciech; Re, Emanuele; Zanderighi, Giulia

    2016-06-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross section Working Group.

  5. Obtaining Approximate Values of Exterior Orientation Elements of Multi-Intersection Images Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, S. W.

    2012-07-01

In this paper, an efficient global optimization algorithm from the field of artificial intelligence, named Particle Swarm Optimization (PSO), is introduced into close range photogrammetric data processing. PSO can be applied to obtain the approximate values of exterior orientation elements under the condition that multi-intersection photography and a small portable plane control frame are used. PSO, put forward by the American social psychologist J. Kennedy and the electrical engineer R.C. Eberhart, is a stochastic global optimization method based on swarm intelligence, inspired by the social behavior of bird flocking and fish schooling. The strategy for obtaining the approximate values of exterior orientation elements using PSO is as follows: in terms of the observed image coordinates and the space coordinates of a few control points, equations for calculating the image coordinate residual errors can be written. The image coordinate residual error is defined as the difference between the observed image coordinate and the image coordinate computed through the collinearity condition equations, and the sum of the absolute values of these residual errors is taken as the objective function to be minimized. First, a coarse search region for the exterior orientation elements is given, and the parameters are then adjusted so that the particles fly within this region. After a certain number of iterations, satisfactory approximate values of the exterior orientation elements are obtained. By doing so, procedures such as positioning and measuring space control points in close range photogrammetry can be avoided. Obviously, this method can improve surveying efficiency greatly and at the same time decrease the surveying cost. During such a process, only one small portable control frame with a couple of control points is employed, and there are no strict requirements on the spatial distribution of the control points.
In order to verify the effectiveness of this algorithm, two experiments are
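The particle update described above can be sketched generically; a minimal stdlib-only version (a Python illustration; in the paper the objective would be the sum of absolute image-coordinate residuals from the collinearity equations, here replaced by a toy quadratic):

```python
import random

def pso(objective, bounds, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Canonical PSO: each particle is pulled toward its personal best
    (cognitive term, c1) and the swarm's global best (social term, c2),
    with inertia weight w damping the previous velocity."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=pbest_val.__getitem__)
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(n_iter):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy stand-in for the photogrammetric objective: a quadratic bowl.
best, best_val = pso(lambda x: sum(xi * xi for xi in x), [(-5.0, 5.0)] * 2)
```

The coarse search region mentioned in the abstract corresponds to the `bounds` used to seed the initial particle positions.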

  6. Estimating patient dose from CT exams that use automatic exposure control: Development and validation of methods to accurately estimate tube current values.

    PubMed

    McMillan, Kyle; Bostani, Maryam; Cagnon, Christopher H; Yu, Lifeng; Leng, Shuai; McCollough, Cynthia H; McNitt-Gray, Michael F

    2017-08-01

    The vast majority of body CT exams are performed with automatic exposure control (AEC), which adapts the mean tube current to the patient size and modulates the tube current either angularly, longitudinally or both. However, most radiation dose estimation tools are based on fixed tube current scans. Accurate estimates of patient dose from AEC scans require knowledge of the tube current values, which is usually unavailable. The purpose of this work was to develop and validate methods to accurately estimate the tube current values prescribed by one manufacturer's AEC system to enable accurate estimates of patient dose. Methods were developed that took into account available patient attenuation information, user selected image quality reference parameters and x-ray system limits to estimate tube current values for patient scans. Methods consistent with AAPM Report 220 were developed that used patient attenuation data that were: (a) supplied by the manufacturer in the CT localizer radiograph and (b) based on a simulated CT localizer radiograph derived from image data. For comparison, actual tube current values were extracted from the projection data of each patient. Validation of each approach was based on data collected from 40 pediatric and adult patients who received clinically indicated chest (n = 20) and abdomen/pelvis (n = 20) scans on a 64 slice multidetector row CT (Sensation 64, Siemens Healthcare, Forchheim, Germany). For each patient dataset, the following were collected with Institutional Review Board (IRB) approval: (a) projection data containing actual tube current values at each projection view, (b) CT localizer radiograph (topogram) and (c) reconstructed image data. Tube current values were estimated based on the actual topogram (actual-topo) as well as the simulated topogram based on image data (sim-topo). Each of these was compared to the actual tube current values from the patient scan. 
In addition, to assess the accuracy of each method in estimating
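Both of the attenuation-based approaches described above rest on the AAPM Report 220 patient-attenuation metric; a sketch of its core quantity, the water-equivalent diameter of an axial slice (a Python illustration in a minimal form that ignores field-of-view truncation, not the authors' implementation):

```python
import math

def water_equivalent_diameter(hu_values, pixel_area_mm2):
    """AAPM Report 220 water-equivalent diameter from an axial CT slice:
    each pixel contributes a water-equivalent area of
    (HU/1000 + 1) * pixel_area, and D_w is the diameter of the circle
    with the summed water-equivalent area."""
    a_w = sum((hu / 1000.0 + 1.0) * pixel_area_mm2 for hu in hu_values)
    return 2.0 * math.sqrt(a_w / math.pi)
```

As a sanity check, a uniform water phantom (all pixels at 0 HU) returns its own geometric diameter.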

  7. Comparing the rankings obtained from two biodiversity indices: the Fair Proportion Index and the Shapley Value.

    PubMed

    Wicke, Kristina; Fischer, Mareike

    2017-10-07

    The Shapley Value and the Fair Proportion Index of phylogenetic trees have been frequently discussed as prioritization tools in conservation biology. Both indices rank species according to their contribution to total phylogenetic diversity, allowing for a simple conservation criterion. While both indices have their specific advantages and drawbacks, it has recently been shown that both values are closely related. However, as different authors use different definitions of the Shapley Value, the specific degree of relatedness depends on the specific version of the Shapley Value - it ranges from a high correlation index to equality of the indices. In this note, we first give an overview of the different indices. Then we turn our attention to the mere ranking order provided by either of the indices. We compare the rankings obtained from different versions of the Shapley Value for a phylogenetic tree of European amphibians and illustrate their differences. We then undertake further analyses on simulated data and show that even though the chance of two rankings being exactly identical (when obtained from different versions of the Shapley Value) decreases with an increasing number of taxa, the distance between the two rankings converges to zero, i.e., the rankings are becoming more and more alike. Moreover, we introduce our freely available software package FairShapley, which was implemented in Perl and with which all calculations have been performed. Copyright © 2017 Elsevier Ltd. All rights reserved.
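The Fair Proportion Index compared above is simple to state: each edge's length is divided equally among the leaves descending from it, and a species' score is the sum of its shares. A minimal sketch (a Python illustration with our own node encoding, unrelated to the authors' FairShapley package):

```python
def fair_proportion(node):
    """Fair Proportion Index on a rooted tree.  A node is a pair
    (branch_length, name) for a leaf, or (branch_length, [children])
    for an internal node.  Each edge's length is split equally among
    the leaves below it; a leaf's score sums its shares up to the root."""
    length, content = node
    if isinstance(content, str):
        return {content: length}  # a leaf gets its pendant edge in full
    scores = {}
    for child in content:
        scores.update(fair_proportion(child))
    share = length / len(scores)  # this edge split among the leaves below
    return {leaf: s + share for leaf, s in scores.items()}
```

A useful property for checking implementations: the scores sum to the total branch length of the tree, i.e., the Fair Proportion Index exactly apportions total phylogenetic diversity among the taxa.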

  8. Highly accurate surface maps from profilometer measurements

    NASA Astrophysics Data System (ADS)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains low-order form error only, the first 36 Zernike terms. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley value. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements: the part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, using an indicator, but the part must be edged to a clean diameter.

  9. Accurate assessment and identification of naturally occurring cellular cobalamins.

    PubMed

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V; Moreira, Edward S; Brasch, Nicola E; Jacobsen, Donald W

    2008-01-01

    Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo beta-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Experiments were designed to: 1) assess beta-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable beta-axial ligands. The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., "cold trapping"), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in the extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. This demonstrates that beta-ligand exchange occurs with non-covalently bound beta-ligands. The exception to this observation is cyanocobalamin, with a non-exchangeable CN- group. It is now possible to obtain accurate profiles of cellular cobalamins.

  10. Accurate assessment and identification of naturally occurring cellular cobalamins

    PubMed Central

    Hannibal, Luciana; Axhemi, Armend; Glushchenko, Alla V.; Moreira, Edward S.; Brasch, Nicola E.; Jacobsen, Donald W.

    2009-01-01

    Background Accurate assessment of cobalamin profiles in human serum, cells, and tissues may have clinical diagnostic value. However, non-alkyl forms of cobalamin undergo β-axial ligand exchange reactions during extraction, which leads to inaccurate profiles having little or no diagnostic value. Methods Experiments were designed to: 1) assess β-axial ligand exchange chemistry during the extraction and isolation of cobalamins from cultured bovine aortic endothelial cells, human foreskin fibroblasts, and human hepatoma HepG2 cells, and 2) establish extraction conditions that would provide a more accurate assessment of endogenous forms containing both exchangeable and non-exchangeable β-axial ligands. Results The cobalamin profile of cells grown in the presence of [57Co]-cyanocobalamin as a source of vitamin B12 shows that the following derivatives are present: [57Co]-aquacobalamin, [57Co]-glutathionylcobalamin, [57Co]-sulfitocobalamin, [57Co]-cyanocobalamin, [57Co]-adenosylcobalamin, [57Co]-methylcobalamin, as well as other yet unidentified corrinoids. When the extraction is performed in the presence of excess cold aquacobalamin acting as a scavenger cobalamin (i.e., “cold trapping”), the recovery of both [57Co]-glutathionylcobalamin and [57Co]-sulfitocobalamin decreases to low but consistent levels. In contrast, the [57Co]-nitrocobalamin observed in extracts prepared without excess aquacobalamin is undetectable in extracts prepared with cold trapping. Conclusions This demonstrates that β-ligand exchange occurs with non-covalently bound β-ligands. The exception to this observation is cyanocobalamin, with a non-covalent but non-exchangeable CN− group. It is now possible to obtain accurate profiles of cellular cobalamins. PMID:18973458

  11. How to obtain accurate resist simulations in very low-k1 era?

    NASA Astrophysics Data System (ADS)

    Chiou, Tsann-Bim; Park, Chan-Ha; Choi, Jae-Seung; Min, Young-Hong; Hansen, Steve; Tseng, Shih-En; Chen, Alek C.; Yim, Donggyu

    2006-03-01

    A procedure for calibrating a resist model iteratively adjusts appropriate parameters until the simulations of the model match the experimental data. The tunable parameters may include the shape of the illuminator, the geometry and transmittance/phase of the mask, light source and scanner-related parameters that affect imaging quality, resist process control and most importantly the physical/chemical factors in the resist model. The resist model can be accurately calibrated by measuring critical dimensions (CD) of a focus-exposure matrix (FEM) and the technique has been demonstrated to be very successful in predicting lithographic performance. However, resist model calibration is more challenging in the low k1 (<0.3) regime because numerous uncertainties, such as mask and resist CD metrology errors, are becoming too large to be ignored. This study demonstrates a resist model calibration procedure for a 0.29 k1 process using a 6% halftone mask containing 2D brickwall patterns. The influence of different scanning electron microscopes (SEM) and their wafer metrology signal analysis algorithms on the accuracy of the resist model is evaluated. As an example of the metrology issue of the resist pattern, the treatment of a sidewall angle is demonstrated for the resist line ends where the contrast is relatively low. Additionally, the mask optical proximity correction (OPC) and corner rounding are considered in the calibration procedure that is based on captured SEM images. Accordingly, the average root-mean-square (RMS) error, which is the difference between simulated and experimental CDs, can be improved by considering the metrological issues. Moreover, a weighting method and a measured CD tolerance are proposed to handle the different CD variations of the various edge points of the wafer resist pattern. After the weighting method is implemented and the CD selection criteria applied, the RMS error can be further suppressed. Therefore, the resist CD and process window can

  12. Partial volume correction and image segmentation for accurate measurement of standardized uptake value of grey matter in the brain.

    PubMed

    Bural, Gonca; Torigian, Drew; Basu, Sandip; Houseni, Mohamed; Zhuge, Ying; Rubello, Domenico; Udupa, Jayaram; Alavi, Abass

    2015-12-01

    Our aim was to explore a novel quantitative method [based upon an MRI-based image segmentation that allows actual calculation of grey matter, white matter and cerebrospinal fluid (CSF) volumes] for overcoming the difficulties associated with conventional techniques for measuring actual metabolic activity of the grey matter. We included four patients with normal brain MRI and fluorine-18 fluorodeoxyglucose (18F-FDG)-PET scans (two women and two men; mean age 46±14 years) in this analysis. The time interval between the two scans was 0-180 days. We calculated the volumes of grey matter, white matter and CSF by using a novel segmentation technique applied to the MRI images. We measured the mean standardized uptake value (SUV) representing the whole metabolic activity of the brain from the 18F-FDG-PET images. We also calculated the white matter SUV from the upper transaxial slices (centrum semiovale) of the 18F-FDG-PET images. The whole brain volume was calculated by summing up the volumes of the white matter, grey matter and CSF. The global cerebral metabolic activity was calculated by multiplying the mean SUV with total brain volume. The whole brain white matter metabolic activity was calculated by multiplying the mean SUV for the white matter by the white matter volume. The global cerebral metabolic activity only reflects those of the grey matter and the white matter, whereas that of the CSF is zero. We subtracted the global white matter metabolic activity from that of the whole brain, resulting in the global grey matter metabolism alone. We then divided the grey matter global metabolic activity by grey matter volume to accurately calculate the SUV for the grey matter alone. The brain volumes ranged between 1546 and 1924 ml. The mean SUV for total brain was 4.8-7. Total metabolic burden of the brain ranged from 5565 to 9617. The mean SUV for white matter was 2.8-4.1. On the basis of these measurements we generated the grey matter SUV, which ranged from 8.1 to 11.3. The
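The subtraction scheme described above reduces to simple arithmetic once the segmented volumes and mean SUVs are in hand. A worked sketch with invented values inside the reported ranges (not the study's patient data):

```python
# Grey matter SUV via the segmentation-based subtraction method.
grey_vol, white_vol, csf_vol = 700.0, 600.0, 300.0  # ml, illustrative
whole_brain_suv = 5.5   # mean SUV over the whole brain
white_suv = 3.0         # mean SUV of white matter (centrum semiovale)

brain_vol = grey_vol + white_vol + csf_vol          # 1600 ml
global_activity = whole_brain_suv * brain_vol       # total metabolic burden
white_activity = white_suv * white_vol              # white matter burden
grey_activity = global_activity - white_activity    # CSF contributes zero
grey_suv = grey_activity / grey_vol

print(grey_suv)  # 10.0, inside the reported 8.1-11.3 range
```

The point of the method is visible here: the grey matter SUV (10.0) is roughly double the whole-brain mean (5.5), which is diluted by white matter and CSF.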

  13. Purification of pharmaceutical preparations using thin-layer chromatography to obtain mass spectra with Direct Analysis in Real Time and accurate mass spectrometry.

    PubMed

    Wood, Jessica L; Steiner, Robert R

    2011-06-01

    Forensic analysis of pharmaceutical preparations requires a comparative analysis with a standard of the suspected drug in order to identify the active ingredient. Purchasing analytical standards can be expensive or unattainable from the drug manufacturers. Direct Analysis in Real Time (DART™) is a novel, ambient ionization technique, typically coupled with a JEOL AccuTOF™ (accurate mass) mass spectrometer. While a fast and easy technique to perform, a drawback of using DART™ is the lack of component separation of mixtures prior to ionization. Various in-house pharmaceutical preparations were purified using thin-layer chromatography (TLC) and mass spectra were subsequently obtained using the AccuTOF™-DART™ technique. Utilizing TLC prior to sample introduction provides a simple, low-cost solution to acquiring mass spectra of the purified preparation. Each spectrum was compared against an in-house molecular formula list to confirm the accurate mass elemental compositions. Spectra of purified ingredients of known pharmaceuticals were added to an in-house library for use as comparators for casework samples. Resolving isomers from one another can be accomplished using collision-induced dissociation after ionization. Challenges arose when the pharmaceutical preparation required an optimized TLC solvent to achieve proper separation and purity of the standard. Purified spectra were obtained for 91 preparations and included in an in-house drug standard library. Primary standards would only need to be purchased when pharmaceutical preparations not previously encountered are submitted for comparative analysis. TLC prior to DART™ analysis demonstrates a time-efficient and cost-saving technique for the forensic drug analysis community. Copyright © 2011 John Wiley & Sons, Ltd.

  14. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    PubMed Central

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS), where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinates and replaces them with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existing cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present two such nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than existing cloaking algorithms, do not require all users to report their locations all the time, and can generate smaller ASRs. PMID:24605060
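A toy version of the grid-ID idea: each user reports only the ID of the grid cell they occupy, and the anonymizer grows a square of cells around the querying user until at least K reported users are covered. This is a hypothetical minimal sketch, not the paper's algorithms:

```python
from collections import Counter

# Grid-based cloaking sketch: users report cell IDs, never coordinates.
GRID = 8  # 8x8 grid; a cell ID is the tuple (row, col)

def cloak(user_cell, reported_cells, k):
    """Expand a square of cells around user_cell until >= k users fall inside."""
    counts = Counter(reported_cells)
    r, c = user_cell
    radius = 0
    while True:
        cells = [(i, j) for i in range(r - radius, r + radius + 1)
                        for j in range(c - radius, c + radius + 1)
                        if 0 <= i < GRID and 0 <= j < GRID]
        if sum(counts[cell] for cell in cells) >= k:
            return cells
        radius += 1

# Five reports (including the querying user's own cell, twice occupied).
reports = [(2, 2), (2, 3), (3, 2), (5, 5), (2, 2)]
asr = cloak((2, 2), reports, k=4)
print(len(asr))  # the 3x3 block around (2, 2) already covers 4 users
```

Because only cell IDs are exchanged, no party ever learns a position more precise than one grid cell, which is the nonexposure property the abstract emphasizes.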

  15. Do pattern deviation values accurately estimate glaucomatous visual field damage in eyes with glaucoma and cataract?

    PubMed

    Matsuda, Aya; Hara, Takeshi; Miyata, Kazunori; Matsuo, Hiroshi; Murata, Hiroshi; Mayama, Chihiro; Asaoka, Ryo

    2015-09-01

    To study the efficacy of pattern deviation (PD) values in the estimation of visual field compensating the influence of cataract in eyes with glaucoma. The study subjects comprised 48 eyes of 37 glaucoma patients. The mean total deviation (mTD) value on the Humphrey Field Analyzer after cataract surgery was compared with the mean PD (mPD) value before surgery. Visual field measurements were carried out ≤6 months before (VF(pre)) and following (VF(post)) successful cataract surgery. The difference between the mPD or mTD values in the VF(pre) and mTD values in the VF(post) (denoted as εmPD/ΔmTD) was calculated, and the influence of the extent of 'true' glaucomatous visual field damage or cataract (as represented by εmPD and ΔmTD, respectively) on this difference was also investigated. There was a significant difference between mTD in the VF(pre) and mTD in the VF(post) (p<0.001, repeated measures analysis of variance). There was no significant difference between mPD in the VF(pre) and mTD in the VF(post) (p=0.06); however, εmPD was significantly correlated with the mTD in VF(post) and also ΔmTD (R(2)=0.56 and 0.27, p<0.001, Pearson's correlation). The accurate prediction of the mTD in the VF(post) can be achieved using the pattern standard deviation (PSD), mTD and also visual acuity before surgery. Clinicians should be very careful when reviewing the VF of a patient with glaucoma and cataract since PD values may underestimate glaucomatous VF damage in patients with advanced disease and also overestimate glaucomatous VF damage in patients with early to moderate cataract. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://group.bmj.com/group/rights-licensing/permissions.

  16. Susceptibility patterns for amoxicillin/clavulanate tests mimicking the licensed formulations and pharmacokinetic relationships: do the MIC obtained with 2:1 ratio testing accurately reflect activity against beta-lactamase-producing strains of Haemophilus influenzae and Moraxella catarrhalis?

    PubMed

    Pottumarthy, Sudha; Sader, Helio S; Fritsche, Thomas R; Jones, Ronald N

    2005-11-01

    Amoxicillin/clavulanate has recently undergone formulation changes (XR and ES-600) that represent 14:1 and 16:1 ratios of amoxicillin/clavulanate. These ratios greatly differ from the 2:1 ratio used in initial formulations and in vitro susceptibility testing. The objective of this study was to determine if the reference method using a 2:1 ratio accurately reflects the susceptibility to the various clinically used amoxicillin/clavulanate formulations and their respective serum concentration ratios. A collection of 330 Haemophilus influenzae strains (300 beta-lactamase-positive and 30 beta-lactamase-negative) and 40 Moraxella catarrhalis strains (30 beta-lactamase-positive and 10 beta-lactamase-negative) were tested by the broth microdilution method against eight amoxicillin/clavulanate combinations (4:1, 5:1, 7:1, 9:1, 14:1, and 16:1 ratios; 0.5 and 2 µg/mL fixed clavulanate concentrations) and the minimum inhibitory concentration (MIC) results were compared with those obtained with the reference 2:1 ratio testing. For the beta-lactamase-negative strains of both genera, there was no demonstrable change in the MIC values obtained for all ratios analyzed (2:1 to 16:1). For the beta-lactamase-positive strains of H. influenzae and M. catarrhalis, at ratios ≥4:1 there was a shift in the central tendency of the MIC scatterplot compared with the results of testing the 2:1 ratio. As a result, there was a 2-fold dilution increase in the MIC(50) and MIC(90) values, most evident for H. influenzae and BRO-1-producing M. catarrhalis strains. For beta-lactamase-positive strains of H. influenzae, the shift resulted in a change in the interpretive result for 3 isolates (1.0%) from susceptible using the reference method (2:1 ratio) to resistant (8/4 µg/mL; very major error) at the 16:1 ratio. In addition, the number of isolates with MIC values at or 1 dilution lower than the breakpoint (4/2 µg/mL) increased from 5% at the 2:1 ratio to 32-33% for ratios 14:1 and 16:1. Our

  17. Methods for the accurate estimation of confidence intervals on protein folding ϕ-values

    PubMed Central

    Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.

    2006-01-01

    ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
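The dependence the authors highlight can be made concrete with a small Monte Carlo: draw the transition-state and equilibrium free-energy changes as correlated normal variables and read off an interval for the ratio ϕ = ΔΔG‡/ΔΔGeq. All numbers below are illustrative, and this is a simulation sketch rather than the authors' analytical formula:

```python
import math
import random

# Monte Carlo CI for phi = ddG_TS / ddG_eq with *correlated* errors.
random.seed(1)
ddg_ts, ddg_eq = 1.2, 2.0        # kcal/mol, illustrative estimates
sd_ts, sd_eq, corr = 0.15, 0.20, 0.7

phis = []
for _ in range(100_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    ts = ddg_ts + sd_ts * z1
    eq = ddg_eq + sd_eq * (corr * z1 + math.sqrt(1 - corr**2) * z2)
    phis.append(ts / eq)

phis.sort()
lo, hi = phis[2_500], phis[97_500]   # central ~95% interval
print(round(lo, 3), round(hi, 3))
```

Setting corr = 0 recovers the independence assumption the abstract criticizes and visibly widens the interval, illustrating how ignoring the correlation understates the precision of ϕ.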

  18. Obtaining accurate glucose measurements from wild animals under field conditions: comparing a hand held glucometer with a standard laboratory technique in grey seals

    PubMed Central

    Turner, Lucy M.; Millward, Sebastian; Moss, Simon E. W.; Hall, Ailsa J.

    2017-01-01

    Abstract Glucose is an important metabolic fuel and circulating levels are tightly regulated in most mammals, but can drop when body fuel reserves become critically low. Glucose is mobilized rapidly from liver and muscle during stress in response to increased circulating cortisol. Blood glucose levels can thus be of value in conservation as an indicator of nutritional status and may be a useful, rapid assessment marker for acute or chronic stress. However, seals show unusual glucose regulation: circulating levels are high and insulin sensitivity is limited. Accurate blood glucose measurement is therefore vital to enable meaningful health and physiological assessments in captive, wild or rehabilitated seals and to explore its utility as a marker of conservation relevance in these animals. Point-of-care devices are simple, portable, relatively cheap and use less blood compared with traditional sampling approaches, making them useful in conservation-related monitoring. We investigated the accuracy of a hand-held glucometer for ‘instant’ field measurement of blood glucose, compared with blood drawing followed by laboratory testing, in wild grey seals (Halichoerus grypus), a species used as an indicator for Good Environmental Status in European waters. The glucometer showed high precision, but low accuracy, relative to laboratory measurements, and was least accurate at extreme values. It did not provide a reliable alternative to plasma analysis. Poor correlation between methods may be due to suboptimal field conditions, greater and more variable haematocrit, faster erythrocyte settling rate and/or lipaemia in seals. Glucometers must therefore be rigorously tested before use in new species and demographic groups. Sampling, processing and glucose determination methods have major implications for conclusions regarding glucose regulation, and health assessment in seals generally, which is important in species of conservation concern and in development of circulating

  19. 3D surface voxel tracing corrector for accurate bone segmentation.

    PubMed

    Guo, Haoyan; Song, Sicong; Wang, Jinke; Guo, Maozu; Cheng, Yuanzhi; Wang, Yadong; Tamura, Shinichi

    2018-06-18

    For extremely close bones, their boundaries are weak and diffused due to strong interaction between adjacent surfaces. These factors prevent the accurate segmentation of bone structure. To alleviate these difficulties, we propose an automatic method for accurate bone segmentation. The method is based on a consideration of the 3D surface normal direction, which is used to detect the bone boundary in 3D CT images. Our segmentation method is divided into three main stages. Firstly, we consider a surface tracing corrector combined with Gaussian standard deviation [Formula: see text] to improve the estimation of normal direction. Secondly, we determine an optimal value of [Formula: see text] for each surface point during this normal direction correction. Thirdly, we construct the 1D signal and refine the rough boundary along the corrected normal direction. The value of [Formula: see text] is used in the first directional derivative of the Gaussian to refine the location of the edge point along the accurate normal direction. Because the normal direction is corrected and the value of [Formula: see text] is optimized, our method is robust to noisy images and narrow joint spaces caused by joint degeneration. We applied our method to 15 wrists and 50 hip joints for evaluation. In the wrist segmentation, a Dice overlap coefficient (DOC) of [Formula: see text]% was obtained by our method. In the hip segmentation, fivefold cross-validations were performed for two state-of-the-art methods: 40 hip joints were used for training, and 10 hip joints were used for testing and comparison. DOCs of [Formula: see text], [Formula: see text]%, and [Formula: see text]% were achieved by our method for the pelvis, the left femoral head and the right femoral head, respectively. Our method was shown to improve segmentation accuracy for several specific challenging cases. The results demonstrate that our approach achieved a superior accuracy over two

  20. Geometric constraints in semiclassical initial value representation calculations in Cartesian coordinates: accurate reduction in zero-point energy.

    PubMed

    Issack, Bilkiss B; Roy, Pierre-Nicholas

    2005-08-22

    An approach for the inclusion of geometric constraints in semiclassical initial value representation calculations is introduced. An important aspect of the approach is that Cartesian coordinates are used throughout. We devised an algorithm for the constrained sampling of initial conditions through the use of multivariate Gaussian distribution based on a projected Hessian. We also propose an approach for the constrained evaluation of the so-called Herman-Kluk prefactor in its exact log-derivative form. Sample calculations are performed for free and constrained rare-gas trimers. The results show that the proposed approach provides an accurate evaluation of the reduction in zero-point energy. Exact basis set calculations are used to assess the accuracy of the semiclassical results. Since Cartesian coordinates are used, the approach is general and applicable to a variety of molecular and atomic systems.

  1. Accurate interatomic force fields via machine learning with covariant kernels

    NASA Astrophysics Data System (ADS)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.
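The finite-group construction mentioned at the end can be sketched directly: sum a rotation-invariant scalar base kernel over the elements of a finite rotation group, weighting each term by the rotation matrix itself, to get a matrix-valued kernel with the covariance property. A toy 2D example with the C4 group and made-up parameters (not the paper's materials kernels):

```python
import numpy as np

# Covariant matrix-valued kernel built by summing a scalar squared-
# exponential base kernel over a finite rotation group (C4 in 2D).
def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

GROUP = [rot(k * np.pi / 2) for k in range(4)]  # C4: 0, 90, 180, 270 deg

def base_kernel(x, y, ell=1.0):
    d = x - y
    return np.exp(-d @ d / (2 * ell**2))

def covariant_kernel(x, y):
    # K(x, y) = sum_R k(x, R y) R : a 2x2 matrix-valued kernel.
    return sum(base_kernel(x, R @ y) * R for R in GROUP)

x = np.array([1.0, 0.5])
y = np.array([0.3, -0.2])
K = covariant_kernel(x, y)

# Covariance check: K(Sx, Sy) == S K(x, y) S^T for S in the group,
# so predicted forces rotate with the configuration.
S = GROUP[1]
assert np.allclose(covariant_kernel(S @ x, S @ y), S @ K @ S.T)
print(K.shape)  # (2, 2)
```

Replacing the finite sum with an integral over SO(d) gives the analytic kernels discussed in the abstract; the finite-group sum is the cheap approximation the authors show is often sufficient.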

  2. Optimizing Methods of Obtaining Stellar Parameters for the H3 Survey

    NASA Astrophysics Data System (ADS)

    Ivory, KeShawn; Conroy, Charlie; Cargile, Phillip

    2018-01-01

    The Stellar Halo at High Resolution with Hectochelle Survey (H3) is in the process of observing and collecting stellar parameters for stars in the Milky Way's halo. With a goal of measuring radial velocities for fainter stars, it is crucial that we have optimal methods of obtaining this and other parameters from the data for these stars. The method currently developed is The Payne, named after Cecilia Payne-Gaposchkin, a code that uses neural networks and Markov Chain Monte Carlo methods to utilize both spectra and photometry to obtain values for stellar parameters. This project investigated the benefit of fitting both spectra and spectral energy distributions (SEDs). Mock spectra using the parameters of the Sun were created, and noise was inserted at various signal-to-noise values. The Payne then fit each mock spectrum with and without a mock SED also generated from solar parameters. The result was that at high signal to noise, the spectrum dominated and the effect of fitting the SED was minimal. But at low signal to noise, the addition of the SED greatly decreased the standard deviation of the data and resulted in more accurate values for temperature and metallicity.

  3. Comparison of oxygen saturation values obtained from fingers on physically restrained or unrestrained sides of the body.

    PubMed

    Korhan, Esra Akin; Yönt, Gülendam Hakverdioğlu; Khorshid, Leyla

    2011-01-01

    The aim of this study was to compare semiexperimentally the pulse oximetry values obtained from a finger on restrained or unrestrained sides of the body. The pulse oximeter provides a noninvasive measurement of the oxygen saturation of hemoglobin in arterial blood. One of the procedures most frequently applied to patients in intensive care units is the application of physical restraint. Circulation problems are the most important complication in patients who are physically restrained. Evaluation of oxygen saturation from body parts in which circulation is impeded or has deteriorated can cause false results. The research sample consisted of 30 hospitalized patients who participated in the study voluntarily and who were concordant with the inclusion criteria of the study. Patient information and patient follow-up forms were used for data collection. Pulse oximetry values were measured simultaneously using OxiMax Nellcor finger sensors from fingers on the restrained and unrestrained sides of the body. Numeric and percentile distributions were used in evaluating the sociodemographic properties of patients. A significant difference was found between the oxygen saturation values obtained from a finger of an arm that had been physically restrained and a finger of an arm that had not been physically restrained. The mean oxygen saturation value measured from a finger of an arm that had been physically restrained was found to be 93.40 (SD, 2.97), and the mean oxygen saturation value measured from a finger of an arm that had not been physically restrained was found to be 95.53 (SD, 2.38). The results of this study indicate that nurses should use a finger of an arm that is not physically restrained when evaluating oxygen saturation values to evaluate them correctly.
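The comparison reported here is a classic paired design, since both readings come from the same patient at the same time. A minimal sketch with invented SpO2 readings (not the study data), computing the mean paired difference and its t statistic by hand:

```python
import math

# Paired comparison: restrained-side vs unrestrained-side finger SpO2 (%),
# one pair of simultaneous readings per patient (illustrative values).
restrained   = [92, 94, 93, 95, 91, 94, 92, 96, 93, 90]
unrestrained = [95, 96, 94, 97, 94, 96, 95, 97, 96, 93]

diffs = [u - r for r, u in zip(restrained, unrestrained)]
n = len(diffs)
mean_d = sum(diffs) / n
sd_d = math.sqrt(sum((d - mean_d) ** 2 for d in diffs) / (n - 1))
t = mean_d / (sd_d / math.sqrt(n))   # paired t statistic, df = n - 1
print(round(mean_d, 2), round(t, 2))
```

A mean difference of about 2 percentage points with a large t statistic mirrors the direction of the study's finding: saturation reads lower on the restrained side.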

  4. Accurate Projection Methods for the Incompressible Navier–Stokes Equations

    DOE PAGES

    Brown, David L.; Cortez, Ricardo; Minion, Michael L.

    2001-04-10

    This paper considers the accuracy of projection method approximations to the initial–boundary-value problem for the incompressible Navier–Stokes equations. The issue of how to correctly specify numerical boundary conditions for these methods has been outstanding since the birth of the second-order methodology a decade and a half ago. It has been observed that while the velocity can be reliably computed to second-order accuracy in time and space, the pressure is typically only first-order accurate in the L ∞-norm. Here, we identify the source of this problem in the interplay of the global pressure-update formula with the numerical boundary conditions and present an improved projection algorithm which is fully second-order accurate, as demonstrated by a normal mode analysis and numerical experiments. In addition, a numerical method based on a gauge variable formulation of the incompressible Navier–Stokes equations, which provides another option for obtaining fully second-order convergence in both velocity and pressure, is discussed. The connection between the boundary conditions for projection methods and the gauge method is explained in detail.

  5. The use of multiple imputation for the accurate measurements of individual feed intake by electronic feeders.

    PubMed

    Jiao, S; Tiezzi, F; Huang, Y; Gray, K A; Maltecca, C

    2016-02-01

    Obtaining accurate individual feed intake records is the key first step in achieving genetic progress toward more efficient nutrient utilization in pigs. Feed intake records collected by electronic feeding systems contain errors (erroneous and abnormal values exceeding certain cutoff criteria), which are due to feeder malfunction or animal-feeder interaction. In this study, we examined the use of a novel data-editing strategy involving multiple imputation to minimize the impact of errors and missing values on the quality of feed intake data collected by an electronic feeding system. Accuracy of feed intake data adjustment obtained from the conventional linear mixed model (LMM) approach was compared with 2 alternative implementations of multiple imputation by chained equation, denoted as MI (multiple imputation) and MICE (multiple imputation by chained equation). The 3 methods were compared under 3 scenarios, where 5, 10, and 20% feed intake error rates were simulated. Each of the scenarios was replicated 5 times. Accuracy of the alternative error adjustment was measured as the correlation between the true daily feed intake (DFI; daily feed intake in the testing period) or true ADFI (the mean DFI across testing period) and the adjusted DFI or adjusted ADFI. In the editing process, error cutoff criteria are used to define if a feed intake visit contains errors. To investigate the possibility that the error cutoff criteria may affect any of the 3 methods, the simulation was repeated with 2 alternative error cutoff values. Multiple imputation methods outperformed the LMM approach in all scenarios with mean accuracies of 96.7, 93.5, and 90.2% obtained with MI and 96.8, 94.4, and 90.1% obtained with MICE compared with 91.0, 82.6, and 68.7% using LMM for DFI. Similar results were obtained for ADFI. Furthermore, multiple imputation methods consistently performed better than LMM regardless of the cutoff criteria applied to define errors. In conclusion, multiple imputation
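The editing pipeline described above (flag visits outside cutoff criteria as errors, then impute rather than discard) can be sketched in a few lines. The toy below substitutes a single regression-on-day imputation for the full MI/MICE machinery, with invented numbers:

```python
import numpy as np

# Toy feed-intake editing: censor values outside cutoff criteria,
# then impute the flagged days from the animal's remaining records.
rng = np.random.default_rng(0)
days = np.arange(30)
true_dfi = 2.0 + 0.02 * days                  # kg/day, slowly rising
observed = true_dfi + rng.normal(0, 0.05, 30)

# Inject feeder errors on 3 days, then flag them with cutoff criteria.
observed[[4, 12, 20]] = [9.0, 0.0, 7.5]
mask = (observed < 0.5) | (observed > 5.0)    # error cutoff criteria
dfi = np.where(mask, np.nan, observed)

# Impute: fit a line to the clean days, fill the flagged ones.
ok = ~np.isnan(dfi)
slope, intercept = np.polyfit(days[ok], dfi[ok], 1)
dfi[~ok] = intercept + slope * days[~ok]

adfi_true = true_dfi.mean()                   # true average daily feed intake
adfi_adj = dfi.mean()                         # adjusted ADFI after imputation
print(round(abs(adfi_adj - adfi_true), 3))    # small residual error
```

A chained-equations approach would iterate this fill-in step across several predictor variables and pool multiple imputed datasets; the point of the sketch is only that imputing flagged visits preserves the ADFI far better than leaving errors in.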

  6. Comparison of internal dose estimates obtained using organ-level, voxel S value, and Monte Carlo techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grimes, Joshua, E-mail: grimes.joshua@mayo.edu; Celler, Anna

    2014-09-15

    Purpose: The authors’ objective was to compare internal dose estimates obtained using the Organ Level Dose Assessment with Exponential Modeling (OLINDA/EXM) software, the voxel S value technique, and Monte Carlo simulation. Monte Carlo dose estimates were used as the reference standard to assess the impact of patient-specific anatomy on the final dose estimate. Methods: Six patients injected with 99mTc-hydrazinonicotinamide-Tyr3-octreotide were included in this study. A hybrid planar/SPECT imaging protocol was used to estimate 99mTc time-integrated activity coefficients (TIACs) for kidneys, liver, spleen, and tumors. Additionally, TIACs were predicted for 131I, 177Lu, and 90Y assuming the same biological half-lives as the 99mTc labeled tracer. The TIACs were used as input for OLINDA/EXM for organ-level dose calculation and voxel level dosimetry was performed using the voxel S value method and Monte Carlo simulation. Dose estimates for 99mTc, 131I, 177Lu, and 90Y distributions were evaluated by comparing (i) organ-level S values corresponding to each method, (ii) total tumor and organ doses, (iii) differences in right and left kidney doses, and (iv) voxelized dose distributions calculated by Monte Carlo and the voxel S value technique. Results: The S values for all investigated radionuclides used by OLINDA/EXM and the corresponding patient-specific S values calculated by Monte Carlo agreed within 2.3% on average for self-irradiation, and differed by as much as 105% for cross-organ irradiation. Total organ doses calculated by OLINDA/EXM and the voxel S value technique agreed with Monte Carlo results within approximately ±7%. Differences between right and left kidney doses determined by Monte Carlo were as high as 73%. Comparison of the Monte Carlo and voxel S value dose distributions showed that each method produced similar dose volume histograms with a minimum dose covering 90% of the volume
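Mechanically, the voxel S value technique is a convolution: the time-integrated activity map is convolved with a radionuclide-specific dose kernel giving dose per unit activity at each voxel offset. The 1-D toy kernel and activity values below are invented for illustration; real VSV kernels are 3-D and tabulated per radionuclide and voxel size.

```python
import numpy as np

# time-integrated activity per voxel (arbitrary units), 1-D toy example
activity = np.array([0.0, 0.0, 5.0, 10.0, 5.0, 0.0, 0.0])

# toy voxel S value kernel: dose per unit activity at offsets -1, 0, +1
# (self-dose dominates; neighbouring voxels receive cross-dose)
s_kernel = np.array([0.1, 1.0, 0.1])

# voxel dose map = activity convolved with the S value kernel
dose = np.convolve(activity, s_kernel, mode="same")
```

The peak voxel here receives 10*1.0 of self-dose plus 5*0.1 of cross-dose from each neighbour, which is exactly the kind of cross-organ contribution the abstract reports as the largest source of disagreement.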

  7. 7 CFR 356.4 - Property valued at $10,000 or less; notice of seizure administrative action to obtain forfeiture.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 5 2010-01-01 2010-01-01 false Property valued at $10,000 or less; notice of seizure... PROCEDURES § 356.4 Property valued at $10,000 or less; notice of seizure administrative action to obtain... notice of seizure and proposed forfeiture as provided in paragraph (c)(1) of this section, by posting for...

  8. Accurate determination of the geoid undulation N

    NASA Astrophysics Data System (ADS)

    Lambrou, E.; Pantazis, G.; Balodimos, D. D.

    2003-04-01

    This work, related to the activities of the CERGOP Study Group "Geodynamics of the Balkan Peninsula", presents a method for the determination of the variation ΔN and, indirectly, of the geoid undulation N with an accuracy of a few millimeters. It is based on the determination of the components ξ, η of the deflection of the vertical using modern geodetic instruments (digital total station and GPS receiver). An analysis of the method is given. Accuracy of the order of 0.01 arcsec in the estimated values of the astronomical coordinates Φ and Λ is achieved. The result of applying the proposed method in an area around Athens is presented. In this test application, a system is used which takes advantage of the capabilities of modern geodetic instruments. The GPS receiver permits the determination of the geodetic coordinates in a chosen reference system and, in addition, provides accurate timing information. The astronomical observations are performed with a digital total station with electronic registering of angles and time. The required accuracy of the values of the coordinates is achieved in about four hours of fieldwork. In addition, the instrumentation is lightweight, easily transportable and can be set up in the field very quickly. Combined with a streamlined data reduction procedure and the use of up-to-date astrometric data, the values of the components ξ, η of the deflection of the vertical and, eventually, the changes ΔN of the geoid undulation are determined easily and accurately. In conclusion, this work demonstrates that it is quite feasible to create an accurate map of the geoid undulation, especially in areas that present large geoid variations and where other methods are not capable of giving accurate and reliable results.
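The link between the deflection components and the geoid is astrogeodetic levelling: along a line of length s and azimuth α, the undulation change is approximately ΔN ≈ −∫ (ξ cos α + η sin α) ds, with the deflections averaged between the endpoints. A minimal numeric sketch (the station values are invented; arcseconds are converted to radians before integrating):

```python
import math

ARCSEC = math.pi / (180 * 3600)  # radians per arcsecond

def delta_N(xi1, eta1, xi2, eta2, azimuth_deg, distance_m):
    """Astrogeodetic levelling between two stations: project the
    deflection of the vertical onto the line, average, and integrate."""
    a = math.radians(azimuth_deg)
    eps1 = xi1 * math.cos(a) + eta1 * math.sin(a)   # arcsec
    eps2 = xi2 * math.cos(a) + eta2 * math.sin(a)   # arcsec
    eps_mean = 0.5 * (eps1 + eps2) * ARCSEC         # radians
    return -eps_mean * distance_m                   # metres

# hypothetical stations ~5 km apart with deflections of a few arcsec
dN = delta_N(4.1, 2.3, 3.9, 2.6, azimuth_deg=45.0, distance_m=5000.0)
```

With ~4.5 arcsec of along-line deflection over 5 km this gives a ΔN of roughly −0.11 m, which illustrates why 0.01 arcsec accuracy in the astronomical coordinates is needed for millimetre-level ΔN.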

  9. Mass spectrometry-based protein identification with accurate statistical significance assignment.

    PubMed

    Alves, Gelio; Yu, Yi-Kuo

    2015-03-01

    Assigning statistical significance accurately has become increasingly important as metadata of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of metadata at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry-based proteomics, even though accurate statistics for peptide identification can now be achieved, accurate protein level statistics remain challenging. We have constructed a protein ID method that combines peptide evidences of a candidate protein based on a rigorous formula derived earlier; in this formula the database P-value of every peptide is weighted, prior to the final combination, according to the number of proteins it maps to. We have also shown that this protein ID method provides accurate protein level E-value, eliminating the need of using empirical post-processing methods for type-I error control. Using a known protein mixture, we find that this protein ID method, when combined with the Sorić formula, yields accurate values for the proportion of false discoveries. In terms of retrieval efficacy, the results from our method are comparable with other methods tested. The source code, implemented in C++ on a linux system, is available for download at ftp://ftp.ncbi.nlm.nih.gov/pub/qmbp/qmbp_ms/RAId/RAId_Linux_64Bit. Published by Oxford University Press 2014. This work is written by US Government employees and is in the public domain in the US.
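The Sorić formula mentioned for the false-discovery check is simple enough to state in code: if m hypotheses are tested at level α and R are declared significant, the expected proportion of false discoveries is bounded by mα/R. The numbers below are invented, not the paper's data.

```python
def soric_fdr_bound(m, alpha, r):
    """Soric upper bound on the expected proportion of false discoveries
    among r rejections out of m tests at significance level alpha."""
    if r == 0:
        return 0.0
    return min(1.0, m * alpha / r)

# e.g. 10,000 candidate proteins tested at threshold 0.01 with 400
# reported: at most ~25% are expected to be false discoveries
bound = soric_fdr_bound(10000, 0.01, 400)
```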

  10. On the accurate estimation of gap fraction during daytime with digital cover photography

    NASA Astrophysics Data System (ADS)

    Hwang, Y. R.; Ryu, Y.; Kimm, H.; Macfarlane, C.; Lang, M.; Sonnentag, O.

    2015-12-01

    Digital cover photography (DCP) has emerged as an indirect method to obtain gap fraction accurately. Thus far, however, the intervention of subjectivity, such as determining the camera relative exposure value (REV) and the threshold in the histogram, has hindered computing accurate gap fraction. Here we propose a novel method that enables us to measure gap fraction accurately during daytime under various sky conditions by DCP. The novel method computes gap fraction using a single unsaturated DCP raw image which is corrected for scattering effects by canopies and a sky image reconstructed from the raw format image. To test the sensitivity of the gap fraction derived with the novel method to diverse REVs, solar zenith angles and canopy structures, we took photos at one-hour intervals between sunrise and midday under dense and sparse canopies with REV 0 to -5. The novel method showed little variation of gap fraction across different REVs in both dense and sparse canopies across a diverse range of solar zenith angles. The perforated panel experiment, which was used to test the accuracy of the estimated gap fraction, confirmed that the novel method produced accurate and consistent gap fractions across different hole sizes, gap fractions and solar zenith angles. These findings highlight that the novel method opens new opportunities to estimate gap fraction accurately during daytime from sparse to dense canopies, which will be useful in monitoring LAI precisely and validating satellite remote sensing LAI products efficiently.
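At its core, gap fraction from cover photography is the share of sky pixels in the image: classify each pixel as sky or canopy and divide by the pixel count. The sketch below uses a simple brightness threshold on an invented toy array; the paper's contribution is precisely to avoid such a manual threshold by reconstructing the sky image from the raw data.

```python
import numpy as np

def gap_fraction(image, sky_threshold):
    """Fraction of pixels classified as sky (gaps) in a grayscale image."""
    sky = image >= sky_threshold
    return sky.sum() / image.size

# toy 4x4 "image": bright values are sky, dark values are canopy
img = np.array([[250, 240,  30,  20],
                [245,  25,  28, 235],
                [ 22,  26, 248, 252],
                [ 24, 238,  27,  29]], dtype=float)
gf = gap_fraction(img, sky_threshold=128)
```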

  11. PyVCI: A flexible open-source code for calculating accurate molecular infrared spectra

    NASA Astrophysics Data System (ADS)

    Sibaev, Marat; Crittenden, Deborah L.

    2016-06-01

    The PyVCI program package is a general purpose open-source code for simulating accurate molecular spectra, based upon force field expansions of the potential energy surface in normal mode coordinates. It includes harmonic normal coordinate analysis and vibrational configuration interaction (VCI) algorithms, implemented primarily in Python for accessibility but with time-consuming routines written in C. Coriolis coupling terms may be optionally included in the vibrational Hamiltonian. Non-negligible VCI matrix elements are stored in sparse matrix format to alleviate the diagonalization problem. CPU and memory requirements may be further controlled by algorithmic choices and/or numerical screening procedures, and recommended values are established by benchmarking using a test set of 44 molecules for which accurate analytical potential energy surfaces are available. Force fields in normal mode coordinates are obtained from the PyPES library of high quality analytical potential energy surfaces (to 6th order) or by numerical differentiation of analytic second derivatives generated using the GAMESS quantum chemical program package (to 4th order).

  12. Accurate radiative transfer calculations for layered media.

    PubMed

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics.

  13. An automated A-value measurement tool for accurate cochlear duct length estimation.

    PubMed

    Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K

    2018-01-22

    There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochleae specimens were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High resolution micro-CT images of the same 19 specimens were used to measure the gold standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold standard micro-CT A-values (r = 0.70, p < 0.01 and r = 0.69, p < 0.01, respectively). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual
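Once the atlas fiducials have been mapped into a target image by the registration, the A-value itself is just the Euclidean distance between the two transformed landmarks. A minimal sketch with an invented rigid (rotation plus translation) transform standing in for the rigid-affine stage of the pipeline:

```python
import numpy as np

def a_value(round_window, basal_turn_point):
    """A-value: distance from the round window to the furthest point
    on the basal turn (both in image coordinates, mm)."""
    return float(np.linalg.norm(np.asarray(basal_turn_point) -
                                np.asarray(round_window)))

# atlas fiducials (mm) and a hypothetical rigid registration transform
atlas_rw = np.array([0.0, 0.0, 0.0])
atlas_bt = np.array([9.0, 0.0, 0.0])          # atlas A-value = 9.0 mm
theta = np.radians(30.0)
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([12.0, -3.0, 5.0])

# apply the registration to the fiducials, then measure
target_rw = R @ atlas_rw + t
target_bt = R @ atlas_bt + t
A = a_value(target_rw, target_bt)
```

A rigid transform preserves the distance; it is the non-rigid B-spline stage that adapts the fiducials to each patient's anatomy and changes the measured A-value.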

  14. Accurate Valence Ionization Energies from Kohn-Sham Eigenvalues with the Help of Potential Adjustors.

    PubMed

    Thierbach, Adrian; Neiss, Christian; Gallandi, Lukas; Marom, Noa; Körzdörfer, Thomas; Görling, Andreas

    2017-10-10

    An accurate yet computationally very efficient and formally well justified approach to calculate molecular ionization potentials is presented and tested. The first as well as higher ionization potentials are obtained as the negatives of the Kohn-Sham eigenvalues of the neutral molecule after adjusting the eigenvalues by a recently introduced [Görling, Phys. Rev. B 2015, 91, 245120] potential adjustor for exchange-correlation potentials. Technically the method is very simple. Besides a Kohn-Sham calculation of the neutral molecule, only a second Kohn-Sham calculation of the cation is required. The eigenvalue spectrum of the neutral molecule is shifted such that the negative of the eigenvalue of the highest occupied molecular orbital equals the energy difference of the total electronic energies of the cation minus the neutral molecule. For the first ionization potential this simply amounts to a ΔSCF calculation. Then, the higher ionization potentials are obtained as the negatives of the correspondingly shifted Kohn-Sham eigenvalues. Importantly, this shift of the Kohn-Sham eigenvalue spectrum is not just ad hoc. In fact, it is formally necessary for the physically correct energetic adjustment of the eigenvalue spectrum as it results from ensemble density-functional theory. An analogous approach for electron affinities is equally well obtained and justified. To illustrate the practical benefits of the approach, we calculate the valence ionization energies of test sets of small- and medium-sized molecules and photoelectron spectra of medium-sized electron acceptor molecules using a typical semilocal (PBE) and two typical global hybrid functionals (B3LYP and PBE0). The potential-adjusted B3LYP and PBE0 eigenvalues yield valence ionization potentials that are in very good agreement with experimental values, reaching an accuracy that is as good as the best G0W0 methods, however, at much lower computational costs. The potential-adjusted PBE eigenvalues result in
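The adjustment described above reduces to shifting the whole occupied eigenvalue spectrum so that −ε_HOMO matches the ΔSCF first ionization potential; higher IPs are then read off the shifted spectrum. The eigenvalues and total energies below are invented round numbers, purely for illustration.

```python
def adjusted_ionization_potentials(eigenvalues, e_neutral, e_cation):
    """Shift occupied Kohn-Sham eigenvalues so that -eps_HOMO equals the
    Delta-SCF first IP, then return the negatives as IPs (eV).
    `eigenvalues` lists occupied-orbital energies with the HOMO last."""
    ip1 = e_cation - e_neutral          # Delta-SCF first ionization potential
    shift = -ip1 - eigenvalues[-1]      # makes -(eps_HOMO + shift) == ip1
    return [-(e + shift) for e in eigenvalues]

# hypothetical occupied eigenvalues (eV) and total energies (eV)
eps = [-25.0, -15.0, -9.0]             # HOMO at -9.0 eV
ips = adjusted_ionization_potentials(eps, e_neutral=-1000.0, e_cation=-989.5)
```

Here the ΔSCF IP is 10.5 eV, so every eigenvalue is shifted down by 1.5 eV and the higher IPs follow from the same shift.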

  15. Numerical solution of sixth-order boundary-value problems using Legendre wavelet collocation method

    NASA Astrophysics Data System (ADS)

    Sohaib, Muhammad; Haq, Sirajul; Mukhtar, Safyan; Khan, Imad

    2018-03-01

    An efficient method is proposed to approximate sixth-order boundary value problems. The proposed method is based on a Legendre wavelet in which the Legendre polynomial is used. The mechanism of the method is to use collocation points to convert the differential equation into a system of algebraic equations. For validation, two test problems are discussed. The results obtained from the proposed method are quite accurate, close to the exact solution, and in agreement with other methods. The proposed method is computationally more effective and leads to more accurate results compared to other methods from the literature.
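The collocation mechanism (enforce the ODE at a set of points to turn the boundary value problem into a linear algebraic system) can be shown on a much simpler second-order problem than the sixth-order case treated here. This sketch uses a plain Legendre-series basis via numpy's `legendre` module rather than the authors' Legendre wavelets.

```python
import numpy as np
from numpy.polynomial import legendre as L

# Solve u'' = 2 on [-1, 1] with u(-1) = u(1) = 0 (exact: u = x^2 - 1)
n = 6                                   # number of Legendre coefficients
# interior collocation points (Chebyshev-spaced, all distinct)
x_col = np.cos(np.pi * (np.arange(n - 2) + 0.5) / (n - 2))

A = np.zeros((n, n))
b = np.zeros(n)
for k in range(n):
    c = np.zeros(n); c[k] = 1.0
    d2 = L.legder(c, 2)                 # 2nd derivative of basis function k
    A[:n - 2, k] = L.legval(x_col, d2)  # ODE rows at collocation points
    A[n - 2, k] = L.legval(-1.0, c)     # boundary row u(-1) = 0
    A[n - 1, k] = L.legval(1.0, c)      # boundary row u(1) = 0
b[:n - 2] = 2.0                         # right-hand side of u'' = 2

coef = np.linalg.solve(A, b)
u0 = L.legval(0.0, coef)                # exact solution gives u(0) = -1
```

The sixth-order case follows the same pattern with six boundary rows and a sixth derivative of the basis; the wavelet basis changes the matrix entries, not the mechanism.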

  16. Yes, one can obtain better quality structures from routine X-ray data collection.

    PubMed

    Sanjuan-Szklarz, W Fabiola; Hoser, Anna A; Gutmann, Matthias; Madsen, Anders Østergaard; Woźniak, Krzysztof

    2016-01-01

    Single-crystal X-ray diffraction structural results for benzidine dihydrochloride, hydrated and protonated N,N,N,N-peri(dimethylamino)naphthalene chloride, triptycene, dichlorodimethyltriptycene and decamethylferrocene have been analysed. A critical discussion of the dependence of structural and thermal parameters on resolution for these compounds is presented. Results of refinements against X-ray data, cut off to different resolutions from the high-resolution data files, are compared to structural models derived from neutron diffraction experiments. The Independent Atom Model (IAM) and the Transferable Aspherical Atom Model (TAAM) are tested. The average differences between the X-ray and neutron structural parameters (with the exception of valence angles defined by H atoms) decrease with increasing 2θmax angle. The scale of differences between X-ray and neutron geometrical parameters can be significantly reduced when data are collected to higher than commonly used 2θmax diffraction angles (for Mo Kα, 2θmax > 65°). The final structural and thermal parameters obtained for the studied compounds using TAAM refinement are in better agreement with the neutron values than the IAM results for all resolutions and all compounds. By using TAAM, it is still possible to obtain accurate results even from low-resolution X-ray data. This is particularly important as TAAM is easy to apply and can routinely be used to improve the quality of structural investigations [Dominiak (2015). LSDB from UBDB. University of Buffalo, USA]. We can recommend that, in order to obtain more adequate (more accurate and precise) structural and displacement parameters during IAM model refinement, data should be collected up to larger diffraction angles, at least, for Mo Kα radiation, to 2θmax = 65° (sin θmax/λ < 0.75 Å⁻¹). The TAAM approach is a very good option to obtain more adequate results even using data collected to lower 2θmax angles. Also

  17. A Weibull statistics-based lignocellulose saccharification model and a built-in parameter accurately predict lignocellulose hydrolysis performance.

    PubMed

    Wang, Mingyu; Han, Lijuan; Liu, Shasha; Zhao, Xuebing; Yang, Jinghua; Loh, Soh Kheang; Sun, Xiaomin; Zhang, Chenxi; Fang, Xu

    2015-09-01

    Renewable energy from lignocellulosic biomass has been deemed an alternative to depleting fossil fuels. In order to improve this technology, we aim to develop robust mathematical models for the enzymatic lignocellulose degradation process. By analyzing 96 groups of previously published and newly obtained lignocellulose saccharification results and fitting them to the Weibull distribution, we discovered that Weibull statistics can accurately predict lignocellulose saccharification data, regardless of the type of substrates, enzymes and saccharification conditions. A mathematical model for enzymatic lignocellulose degradation was subsequently constructed based on Weibull statistics. Further analysis of the mathematical structure of the model and experimental saccharification data showed the significance of the two parameters in this model. In particular, the λ value, defined as the characteristic time, represents the overall performance of the saccharification system. This suggestion was further supported by statistical analysis of experimental saccharification data and analysis of the glucose production levels when λ and n values change. In conclusion, the constructed Weibull statistics-based model can accurately predict lignocellulose hydrolysis behavior and we can use the λ parameter to assess the overall performance of enzymatic lignocellulose degradation. Advantages and potential applications of the model and the λ value in saccharification performance assessment are discussed. Copyright © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
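A two-parameter Weibull saccharification curve, y(t) = y_max · (1 − exp(−(t/λ)^n)), can be fitted by the classical linearization log(−log(1 − y/y_max)) = n·log t − n·log λ, so an ordinary linear fit recovers n and the characteristic time λ. The synthetic "hydrolysis" data below are generated from the curve itself, not taken from the paper.

```python
import numpy as np

# synthetic saccharification data following y = ymax*(1 - exp(-(t/lam)^n))
t = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])   # time points (h)
lam_true, n_true, ymax = 12.0, 0.8, 1.0
y = ymax * (1.0 - np.exp(-(t / lam_true) ** n_true))

# Weibull linearization: log(-log(1 - y/ymax)) = n*log t - n*log lam
lhs = np.log(-np.log(1.0 - y / ymax))
slope, intercept = np.polyfit(np.log(t), lhs, 1)
n_fit = slope                        # shape parameter n
lam_fit = np.exp(-intercept / slope) # characteristic time lambda
```

With noise-free data the fit recovers λ and n exactly; with real data λ serves as the single summary of overall saccharification performance, as the abstract argues.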

  18. 7 CFR 356.4 - Property valued at $10,000 or less; notice of seizure administrative action to obtain forfeiture.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 5 2012-01-01 2012-01-01 false Property valued at $10,000 or less; notice of seizure administrative action to obtain forfeiture. 356.4 Section 356.4 Agriculture Regulations of the Department of Agriculture (Continued) ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE FORFEITURE...

  19. Accurate and fast multiple-testing correction in eQTL studies.

    PubMed

    Sul, Jae Hoon; Raj, Towfique; de Jong, Simone; de Bakker, Paul I W; Raychaudhuri, Soumya; Ophoff, Roel A; Stranger, Barbara E; Eskin, Eleazar; Han, Buhm

    2015-06-04

    In studies of expression quantitative trait loci (eQTLs), it is of increasing interest to identify eGenes, the genes whose expression levels are associated with variation at a particular genetic variant. Detecting eGenes is important for follow-up analyses and prioritization because genes are the main entities in biological processes. To detect eGenes, one typically focuses on the genetic variant with the minimum p value among all variants in cis with a gene and corrects for multiple testing to obtain a gene-level p value. For performing multiple-testing correction, a permutation test is widely used. Because of the growing sample sizes of eQTL studies, however, the permutation test has become a computational bottleneck. In this paper, we propose an efficient approach for correcting for multiple testing and assessing eGene p values by utilizing a multivariate normal distribution. Our approach properly takes into account the linkage-disequilibrium structure among variants, and its time complexity is independent of sample size. By applying our small-sample correction techniques, our method achieves high accuracy in both small and large studies. We have shown that our method consistently produces extremely accurate p values (accuracy > 98%) for three human eQTL datasets with different sample sizes and SNP densities: the Genotype-Tissue Expression pilot dataset, the multi-region brain dataset, and the HapMap 3 dataset. Copyright © 2015 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
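The idea of replacing permutation with a multivariate normal can be sketched directly: simulate correlated null z-scores using the variants' LD (correlation) matrix, take the distribution of the maximum |z| across the gene's cis variants, and convert the observed best per-variant statistic into a gene-level p value. The 3-variant correlation matrix below is invented, and the paper's closed-form and small-sample refinements are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(42)

# LD (correlation) matrix for 3 cis variants (hypothetical)
ld = np.array([[1.0, 0.8, 0.3],
               [0.8, 1.0, 0.4],
               [0.3, 0.4, 1.0]])

# simulate null z-scores with this LD and record the max |z| per draw
z = rng.multivariate_normal(np.zeros(3), ld, size=200_000)
max_abs_z = np.abs(z).max(axis=1)

def gene_level_p(best_variant_z):
    """P(max |z| over the gene's variants exceeds the observed best z)."""
    return float((max_abs_z >= abs(best_variant_z)).mean())

p_gene = gene_level_p(2.5)
```

Because the variants are correlated, the gene-level p value lands between the single-variant p value and the Bonferroni bound, which is exactly the effect a permutation test captures at far higher cost.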

  20. Noniterative accurate algorithm for the exact exchange potential of density-functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cinal, M.; Holas, A.

    2007-10-15

    An algorithm for determination of the exchange potential is constructed and tested. It represents a one-step procedure based on the equations derived by Krieger, Li, and Iafrate (KLI) [Phys. Rev. A 46, 5453 (1992)], implemented already as an iterative procedure by Kuemmel and Perdew [Phys. Rev. Lett. 90, 043004 (2003)]. Due to a suitable transformation of the KLI equations, we can solve them avoiding iterations. Our algorithm is applied to the closed-shell atoms, from Be up to Kr, within the DFT exchange-only approximation. Using pseudospectral techniques for representing orbitals, we obtain extremely accurate values of total and orbital energies with errors at least four orders of magnitude smaller than known in the literature.

  1. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    PubMed Central

    Chen, Shengyong; Xiao, Gang; Li, Xiaoli

    2014-01-01

    This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on a human head and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations of the multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can be further applied to EEG source localization applications on the human brain. PMID:24803954

  2. On accurate determination of contact angle

    NASA Technical Reports Server (NTRS)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  3. Method for accurate determination of dissociation constants of optical ratiometric systems: chemical probes, genetically encoded sensors, and interacting molecules.

    PubMed

    Pomorski, Adam; Kochańczyk, Tomasz; Miłoch, Anna; Krężel, Artur

    2013-12-03

    Ratiometric chemical probes and genetically encoded sensors are of high interest for both analytical chemists and molecular biologists. Their high sensitivity toward the target ligand and the ability to obtain quantitative results without a known sensor concentration have made them a very useful tool in both in vitro and in vivo assays. Although ratiometric sensors are widely used in many applications, their successful and accurate usage depends on how they are characterized in terms of sensing target molecules. The most important feature of probes and sensors, besides their optical parameters, is the affinity constant toward the analyzed molecules. The literature shows that different analytical approaches are used to determine the stability constants, with the ratio approach being most popular. However, oversimplification and lack of attention to detail result in inaccurate determination of stability constants, which in turn affects the results obtained using these sensors. Here, we present a new method where the ratio signal is calibrated for borderline values of the intensities of both wavelengths, instead of borderline ratio values that generate errors in many studies. At the same time, the equation takes into account the cooperativity factor or fluorescence artifacts and therefore can be used to characterize systems with various stoichiometries and experimental conditions. Accurate determination of stability constants is demonstrated utilizing four known optical ratiometric probes and sensors, together with a discussion regarding other, currently used methods.

  4. ROCK I Has More Accurate Prognostic Value than MET in Predicting Patient Survival in Colorectal Cancer.

    PubMed

    Li, Jian; Bharadwaj, Shruthi S; Guzman, Grace; Vishnubhotla, Ramana; Glover, Sarah C

    2015-06-01

    Colorectal cancer remains the second leading cause of death in the United States despite improvements in incidence rates and advancements in screening. The present study evaluated the prognostic value of two tumor markers, MET and ROCK I, which have been noted in other cancers to provide more accurate prognoses of patient outcomes than tumor staging alone. We constructed a tissue microarray from surgical specimens of adenocarcinomas from 108 colorectal cancer patients. Using immunohistochemistry, we examined the expression levels of tumor markers MET and ROCK I, with a pathologist blinded to patient identities and clinical outcomes providing the scoring of MET and ROCK I expression. We then used retrospective analysis of patients' survival data to provide correlations with expression levels of MET and ROCK I. Both MET and ROCK I were significantly over-expressed in colorectal cancer tissues, relative to the unaffected adjacent mucosa. Kaplan-Meier survival analysis revealed that patients' 5-year survival was inversely correlated with levels of expression of ROCK I. In contrast, MET was less strongly correlated with five-year survival. ROCK I provides better efficacy in predicting patient outcomes, compared to either tumor staging or MET expression. As a result, ROCK I may provide a less invasive method of assessing patient prognoses and directing therapeutic interventions. Copyright© 2015 International Institute of Anticancer Research (Dr. John G. Delinassios), All rights reserved.

  5. In-vitro evaluation of the accuracy of conventional and digital methods of obtaining full-arch dental impressions.

    PubMed

    Ender, Andreas; Mehl, Albert

    2015-01-01

    To investigate the accuracy of conventional and digital impression methods used to obtain full-arch impressions by using an in-vitro reference model. Eight different conventional (polyether, POE; vinylsiloxanether, VSE; direct scannable vinylsiloxanether, VSES; and irreversible hydrocolloid, ALG) and digital (CEREC Bluecam, CER; CEREC Omnicam, OC; Cadent iTero, ITE; and Lava COS, LAV) full-arch impressions were obtained from a reference model with a known morphology, using a highly accurate reference scanner. The impressions obtained were then compared with the original geometry of the reference model and within each test group. A point-to-point measurement of the surface of the model using the signed nearest neighbour method resulted in a mean (10%-90%)/2 percentile value for the difference between the impression and original model (trueness) as well as the difference between impressions within a test group (precision). Trueness values ranged from 11.5 μm (VSE) to 60.2 μm (POE), and precision ranged from 12.3 μm (VSE) to 66.7 μm (POE). Among the test groups, VSE, VSES, and CER showed the highest trueness and precision. The deviation pattern varied with the impression method. Conventional impressions showed high accuracy across the full dental arch in all groups, except POE and ALG. Conventional and digital impression methods show differences regarding full-arch accuracy. Digital impression systems reveal higher local deviations of the full-arch model. Digital intraoral impression systems do not show superior accuracy compared to highly accurate conventional impression techniques. However, they provide excellent clinical results within their indications applying the correct scanning technique.
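The deviation metric used in this line of work is computed from point-to-point distances between two surface scans: for each point of one scan, take the distance to the nearest neighbour in the other, then summarize the distribution with (90th percentile − 10th percentile)/2. A toy point-cloud sketch with brute-force nearest neighbours and invented coordinates (real comparisons use signed distances on registered meshes):

```python
import numpy as np

def deviation_metric(points_a, points_b):
    """(P90 - P10)/2 of nearest-neighbour distances from each point of
    scan A to scan B (unsigned here for simplicity)."""
    diffs = points_a[:, None, :] - points_b[None, :, :]
    nn = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return (np.percentile(nn, 90) - np.percentile(nn, 10)) / 2.0

rng = np.random.default_rng(1)
reference = rng.uniform(0, 10, size=(300, 3))           # "reference scan" (mm)
impression = reference + rng.normal(0, 0.02, (300, 3))  # scan with ~20 um noise
trueness = deviation_metric(impression, reference)
```

Comparing each test impression against the reference model gives trueness; comparing impressions within a group against each other gives precision.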

  6. Estimating the Value of New Technologies That Provide More Accurate Drug Adherence Information to Providers for Their Patients with Schizophrenia.

    PubMed

    Shafrin, Jason; Schwartz, Taylor T; Lakdawalla, Darius N; Forma, Felicia M

    2016-11-01

    Nonadherence to antipsychotic medication among patients with schizophrenia results in poor symptom management and increased health care and other costs. Despite its health impact, medication adherence remains difficult to accurately assess. New technologies offer the possibility of real-time patient monitoring data on adherence, which may in turn improve clinical decision making. However, the economic benefit of accurate patient drug adherence information (PDAI) has yet to be evaluated. To quantify how more accurate PDAI can generate value to payers by improving health care provider decision making in the treatment of patients with schizophrenia. A 3-step decision tree modeling framework was used to measure the effect of PDAI on annual costs (2016 U.S. dollars) for patients with schizophrenia who initiated therapy with an atypical antipsychotic. The first step classified patients using 3 attributes: adherence to antipsychotic medication, medication tolerance, and response to therapy conditional on medication adherence. The prevalence of each characteristic was determined from claims database analysis and literature reviews. The second step modeled the effect of PDAI on provider treatment decisions based on health care providers' survey responses to schizophrenia case vignettes. In the survey, providers were randomized to vignettes with access to PDAI and with no access. In the third step, the economic implications of alternative provider decisions were identified from published peer-reviewed studies. The simulation model calculated the total economic value of PDAI as the difference between expected annual patient total cost corresponding to provider decisions made with or without PDAI. In claims data, 75.3% of patients with schizophrenia were found to be nonadherent to their antipsychotic medications. Review of the literature revealed that 7% of patients cannot tolerate medication, and 72.9% would respond to antipsychotic medication if adherent. 
Survey responses by
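The three-step framework can be sketched as an expected-cost calculation. Only the 75.3% nonadherence prevalence below comes from the abstract; the cost figures and the effect of PDAI on provider decisions are hypothetical placeholders, not the study's numbers:

```python
# Illustrative sketch of the 3-step decision-tree framework.
# P_NONADHERENT is from the abstract's claims-data analysis; all costs and
# decision effects below are hypothetical placeholders.

P_NONADHERENT = 0.753  # prevalence of nonadherence (from claims data)

# Hypothetical expected annual cost (USD) by (nonadherent?, provider has PDAI?)
COST = {
    (True, True): 18_000,   # nonadherence detected -> adherence intervention
    (True, False): 24_000,  # nonadherence misread as non-response -> med switch
    (False, True): 12_000,
    (False, False): 12_500,
}

def expected_cost(with_pdai: bool) -> float:
    """Expected annual cost over the adherent/nonadherent patient mix."""
    return (P_NONADHERENT * COST[(True, with_pdai)]
            + (1 - P_NONADHERENT) * COST[(False, with_pdai)])

# Value of PDAI = cost of uninformed decisions minus cost of informed ones.
value_of_pdai = expected_cost(False) - expected_cost(True)
print(f"Expected value of PDAI per patient-year: ${value_of_pdai:,.2f}")
```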

  7. DNA barcode data accurately assign higher spider taxa

    PubMed Central

    Coddington, Jonathan A.; Agnarsson, Ingi; Cheng, Ren-Chung; Čandek, Klemen; Driskell, Amy; Frick, Holger; Gregorič, Matjaž; Kostanjšek, Rok; Kropf, Christian; Kweskin, Matthew; Lokovšek, Tjaša; Pipan, Miha; Vidergar, Nina

    2016-01-01

    The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear, loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However, the quality of
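The heuristic thresholds reported above (PIdent > 95 for genus-level assignment, PIdent ≥ 91 for family-level) amount to a simple decision rule on the top BLAST hit; a minimal sketch, with hypothetical hit data:

```python
# Sketch of the threshold rule suggested above: assign the query to the top
# hit's genus when percent identity exceeds 95, and to its family when it is
# at least 91. The example taxa and PIdent values are hypothetical.

def assign_taxon(top_hit_family: str, top_hit_genus: str, pident: float):
    """Return the most specific rank supported by the PIdent thresholds."""
    if pident > 95:
        return ("genus", top_hit_genus)
    if pident >= 91:
        return ("family", top_hit_family)
    return ("unassigned", None)

print(assign_taxon("Araneidae", "Araneus", 97.2))      # genus-level
print(assign_taxon("Salticidae", "Habronattus", 92.5)) # family-level only
print(assign_taxon("Lycosidae", "Pardosa", 84.0))      # below both thresholds
```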

  8. High accurate time system of the Low Latitude Meridian Circle.

    NASA Astrophysics Data System (ADS)

    Yang, Jing; Wang, Feng; Li, Zhiming

In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS accurate time system was developed which includes GPS, a 1 MC frequency source, and a self-made clock system. The second signal of GPS is used synchronously in the clock system, and the information can be collected by a computer automatically. The difficulty of the cancellation of the time keeper can be overcome by using this system.

  9. Proton dissociation properties of arylphosphonates: Determination of accurate Hammett equation parameters.

    PubMed

    Dargó, Gergő; Bölcskei, Adrienn; Grün, Alajos; Béni, Szabolcs; Szántó, Zoltán; Lopata, Antal; Keglevich, György; Balogh, György T

    2017-09-05

Determination of the proton dissociation constants of several arylphosphonic acid derivatives was carried out to investigate the accuracy of the Hammett equations available for this family of compounds. For the measurement of the pKa values, modern, accurate methods such as differential potentiometric titration and NMR-pH titration were used. We found our results significantly different from the pKa values reported before (pKa1: MAE = 0.16; pKa2: MAE = 0.59). Based on our recently measured pKa values, refined Hammett equations were determined that might be used for predicting highly accurate ionization constants of newly synthesized compounds (pKa1 = 1.70 - 0.894σ, pKa2 = 6.92 - 0.934σ). Copyright © 2017 Elsevier B.V. All rights reserved.
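The refined Hammett equations quoted in the abstract can be applied directly; a small sketch (the σ value passed in would come from standard substituent-constant tables, and σ = 0 for the unsubstituted parent compound):

```python
# The refined Hammett equations from the abstract:
#   pKa1 = 1.70 - 0.894 * sigma
#   pKa2 = 6.92 - 0.934 * sigma
# sigma is the Hammett substituent constant (0.0 for the parent compound).

def pka1(sigma: float) -> float:
    return 1.70 - 0.894 * sigma

def pka2(sigma: float) -> float:
    return 6.92 - 0.934 * sigma

# Unsubstituted arylphosphonic acid (sigma = 0):
print(pka1(0.0), pka2(0.0))  # 1.7 6.92
```

Electron-withdrawing substituents (positive σ) lower both predicted pKa values, as the negative slopes imply.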

  10. Utilization of coffee by-products obtained from semi-washed process for production of value-added compounds.

    PubMed

    Bonilla-Hermosa, Verónica Alejandra; Duarte, Whasley Ferreira; Schwan, Rosane Freitas

    2014-08-01

    The semi-dry processing of coffee generates significant amounts of coffee pulp and wastewater. This study evaluated the production of bioethanol and volatile compounds of eight yeast strains cultivated in a mixture of these residues. Hanseniaspora uvarum UFLA CAF76 showed the best fermentation performance; hence it was selected to evaluate different culture medium compositions and inoculum size. The best results were obtained with 12% w/v of coffee pulp, 1 g/L of yeast extract and 0.3 g/L of inoculum. Using these conditions, fermentation in 1 L of medium was carried out, achieving higher ethanol yield, productivity and efficiency with values of 0.48 g/g, 0.55 g/L h and 94.11% respectively. Twenty-one volatile compounds corresponding to higher alcohols, acetates, terpenes, aldehydes and volatile acids were identified by GC-FID. Such results indicate that coffee residues show an excellent potential as substrates for production of value-added compounds. H. uvarum demonstrated high fermentative capacity using these residues. Copyright © 2014 Elsevier Ltd. All rights reserved.

  11. Computation of convex bounds for present value functions with random payments

    NASA Astrophysics Data System (ADS)

    Ahcan, Ales; Darkiewicz, Grzegorz; Goovaerts, Marc; Hoedemakers, Tom

    2006-02-01

    In this contribution we study the distribution of the present value function of a series of random payments in a stochastic financial environment. Such distributions occur naturally in a wide range of applications within fields of insurance and finance. We obtain accurate approximations by developing upper and lower bounds in the convex-order sense for present value functions. Technically speaking, our methodology is an extension of the results of Dhaene et al. [Insur. Math. Econom. 31(1) (2002) 3-33, Insur. Math. Econom. 31(2) (2002) 133-161] to the case of scalar products of mutually independent random vectors.
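A minimal sketch of the comonotonic (convex-order) upper bound that underlies the Dhaene et al. methodology: the quantile function of the comonotonic sum is the sum of the marginal quantile functions. The normal payment distributions and 3% discount rate below are illustrative assumptions, not taken from the paper:

```python
# Comonotonic upper bound for a present value of random payments:
# the p-quantile of the upper bound S^c is the sum of the p-quantiles of the
# discounted payments. Payment distributions and the discount rate are
# illustrative assumptions.
from statistics import NormalDist

payments = [NormalDist(mu=100, sigma=20) for _ in range(5)]  # annual payments
v = 1 / 1.03  # one-year discount factor at a 3% rate

def upper_bound_quantile(p: float) -> float:
    """p-quantile of the comonotonic upper bound of the present value."""
    return sum(v ** (i + 1) * d.inv_cdf(p) for i, d in enumerate(payments))

print(upper_bound_quantile(0.95))  # conservative 95th-percentile bound
```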

  12. Accurately estimating PSF with straight lines detected by Hough transform

    NASA Astrophysics Data System (ADS)

    Wang, Ruichen; Xu, Liangpeng; Fan, Chunxiao; Li, Yong

    2018-04-01

This paper presents an approach to estimating the point spread function (PSF) from low resolution (LR) images. Existing techniques usually rely on accurate detection of the ending points of the profile normal to edges. In practice, however, it is often a great challenge to accurately localize edge profiles in a LR image, which leads to a poor estimate of the PSF of the lens that produced the LR image. To estimate the PSF precisely, this paper proposes first estimating a 1-D PSF kernel from straight lines, and then robustly obtaining the 2-D PSF from the 1-D kernel by least squares techniques and random sample consensus. The Canny operator is applied to the LR image to obtain edges, and the Hough transform is then used to extract straight lines of all orientations. Estimating the 1-D PSF kernel from straight lines effectively alleviates the influence of inaccurate edge detection on PSF estimation. The proposed method is evaluated on both natural and synthetic images. Experimental results show that it outperforms the state-of-the-art and does not rely on accurate edge detection.
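The core idea behind recovering a 1-D PSF kernel from a straight edge can be sketched in a few lines: the intensity profile across a blurred step edge is the cumulative integral of the line-spread function, so finite differences of the profile recover the 1-D kernel. The profile below is synthetic, and the paper's full pipeline (Canny, Hough transform, RANSAC, least squares) is not reproduced here:

```python
# Minimal sketch: recover an (unnormalized) 1-D PSF kernel from the intensity
# profile across a blurred step edge. The profile is synthetic; a real
# pipeline would sample it perpendicular to a Hough-detected line.

profile = [0.0, 0.0, 0.1, 0.3, 0.7, 0.9, 1.0, 1.0]  # across a blurred edge

# Finite differences of the edge profile give the line-spread function:
lsf = [profile[i + 1] - profile[i] for i in range(len(profile) - 1)]

# Normalize so the kernel sums to 1:
total = sum(lsf)
kernel_1d = [x / total for x in lsf]
print(kernel_1d)  # peaks at the edge location
```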

  13. 7 CFR 356.3 - Property valued at greater than $10,000; notice of seizure and civil action to obtain forfeiture.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

Property valued at greater than $10,000; notice of seizure and civil action to obtain forfeiture. 356.3 Section 356.3 Agriculture Regulations of the Department of Agriculture (Continued) ANIMAL AND PLANT HEALTH INSPECTION SERVICE, DEPARTMENT OF AGRICULTURE...

  14. Evaluation of marginal and internal gaps of metal ceramic crowns obtained from conventional impressions and casting techniques with those obtained from digital techniques.

    PubMed

    Rai, Rathika; Kumar, S Arun; Prabhu, R; Govindan, Ranjani Thillai; Tanveer, Faiz Mohamed

    2017-01-01

Accuracy of fit of a cast metal restoration has always been one of the primary factors determining the success of the restoration. A well-fitting restoration needs to be accurate both along its margin and with regard to its internal surface. The aim of the study was to compare the marginal fit of metal ceramic crowns obtained from a conventional inlay casting wax pattern using a conventional impression with that of metal ceramic crowns obtained by computer-aided design and computer-aided manufacturing (CAD/CAM) using direct and indirect optical scanning. This in vitro study on preformed custom-made stainless steel models with a former assembly resembling prepared tooth surfaces of standardized dimensions comprised three groups: the first group included ten samples of metal ceramic crowns fabricated with the conventional technique, the second group included CAD/CAM-milled direct metal laser sintering (DMLS) crowns using indirect scanning, and the third group included DMLS crowns fabricated by direct scanning of the stainless steel model. The vertical marginal gap and the internal gap were evaluated with a stereomicroscope (Zoomstar 4); the post hoc Tukey's test was used for statistical analysis. One-way analysis of variance was used to compare the mean values. Metal ceramic crowns obtained from direct optical scanning showed the smallest marginal and internal gaps when compared to the castings obtained from inlay casting wax and indirect optical scanning. Indirect and direct optical scanning yielded results within the clinically acceptable range.

  15. The Defense Logistics Agency Properly Awarded Power Purchase Agreements and the Army Obtained Fair Market Value for Leases Supporting Power Purchase Agreements

    DTIC Science & Technology

    2016-09-28

Fair Market Value for Leases Supporting Power Purchase Agreements. September 28, 2016. Objective: We determined whether the Department of the Army properly awarded and obtained fair market value for leases supporting energy production projects. We conducted this audit in

  16. An accurate method of extracting fat droplets in liver images for quantitative evaluation

    NASA Astrophysics Data System (ADS)

    Ishikawa, Masahiro; Kobayashi, Naoki; Komagata, Hideki; Shinoda, Kazuma; Yamaguchi, Masahiro; Abe, Tokiya; Hashiguchi, Akinori; Sakamoto, Michiie

    2015-03-01

The steatosis in liver pathological tissue images is a promising indicator of nonalcoholic fatty liver disease (NAFLD) and the possible risk of hepatocellular carcinoma (HCC). The resulting values are also important for ensuring the automatic and accurate classification of HCC images, because the existence of many fat droplets is likely to create errors in quantifying the morphological features used in the process. In this study we propose a method that can automatically detect and exclude regions with many fat droplets by using feature values of color, shape, and the arrangement of cell nuclei. We implement the method and confirm that it can accurately detect fat droplets and quantify the fat droplet ratio of actual images. This investigation also clarifies the effective characteristics that contribute to accurate detection.

  17. Accurate Diabetes Risk Stratification Using Machine Learning: Role of Missing Value and Outliers.

    PubMed

    Maniruzzaman, Md; Rahman, Md Jahanur; Al-MehediHasan, Md; Suri, Harman S; Abedin, Md Menhazul; El-Baz, Ayman; Suri, Jasjit S

    2018-04-10

Diabetes mellitus is a group of metabolic diseases in which blood sugar levels are too high. About 8.8% of the world's population was diabetic in 2017, and this is projected to reach nearly 10% by 2045. The major challenge is that applying machine-learning-based classifiers to such data sets for risk stratification leads to lower performance. Thus, our objective was to develop an optimized and robust machine learning (ML) system under the assumption that replacing missing values or outliers with a median configuration will yield higher risk-stratification accuracy. This ML-based risk stratification was designed, optimized, and evaluated, where the features were extracted and optimized from six feature-selection techniques (random forest, logistic regression, mutual information, principal component analysis, analysis of variance, and Fisher discriminant ratio) and combined with ten different types of classifiers (linear discriminant analysis, quadratic discriminant analysis, naïve Bayes, Gaussian process classification, support vector machine, artificial neural network, Adaboost, logistic regression, decision tree, and random forest) under the hypothesis that both missing values and outliers, when replaced by computed medians, would improve risk-stratification accuracy. The Pima Indian diabetes dataset (768 patients: 268 diabetic and 500 controls) was used. Our results demonstrate that replacing the missing values and outliers with group-median and median values, respectively, and then using the combination of random forest feature selection and random forest classification yields an accuracy, sensitivity, specificity, positive predictive value, negative predictive value, and area under the curve of 92.26%, 95.96%, 79.72%, 91.14%, 91.20%, and 0.93, respectively. This is an improvement of 10% over previously developed techniques published in the literature. The system was validated for its stability and reliability. The RF-based model showed the best
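The median-replacement preprocessing can be sketched as follows; the group-median imputation shown here is a simplified reading of the step described above, with toy data standing in for the Pima features:

```python
# Sketch of group-median imputation: missing entries (None) are replaced by
# the median of the same-label group, as in the preprocessing described
# above. The glucose values and labels below are toy data.
from statistics import median

def impute_group_median(values, labels):
    """Replace None entries with the median of the matching label group."""
    out = []
    for v, y in zip(values, labels):
        if v is None:
            group = [u for u, l in zip(values, labels) if l == y and u is not None]
            v = median(group)
        out.append(v)
    return out

glucose = [148, None, 183, 89, None, 137]
labels  = [1,   1,    1,   0,  0,    1]
print(impute_group_median(glucose, labels))  # [148, 148, 183, 89, 89, 137]
```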

  18. Time-Accurate Numerical Simulations of Synthetic Jet Quiescent Air

    NASA Technical Reports Server (NTRS)

    Rupesh, K-A. B.; Ravi, B. R.; Mittal, R.; Raju, R.; Gallas, Q.; Cattafesta, L.

    2007-01-01

    The unsteady evolution of three-dimensional synthetic jet into quiescent air is studied by time-accurate numerical simulations using a second-order accurate mixed explicit-implicit fractional step scheme on Cartesian grids. Both two-dimensional and three-dimensional calculations of synthetic jet are carried out at a Reynolds number (based on average velocity during the discharge phase of the cycle V(sub j), and jet width d) of 750 and Stokes number of 17.02. The results obtained are assessed against PIV and hotwire measurements provided for the NASA LaRC workshop on CFD validation of synthetic jets.

  19. Accurate and consistent automatic seismocardiogram annotation without concurrent ECG.

    PubMed

    Laurin, A; Khosrow-Khavar, F; Blaber, A P; Tavakolian, Kouhyar

    2016-09-01

Seismocardiography (SCG) is the measurement of vibrations in the sternum caused by the beating of the heart. Precise cardiac mechanical timings, which are easily obtained from SCG, are critically dependent on accurate identification of fiducial points. So far, SCG annotation has relied on concurrent ECG measurements. We designed an algorithm capable of annotating SCG without the use of any other concurrent measurement. We subjected 18 participants to graded lower body negative pressure. We collected ECG and SCG, obtained R peaks from the former, and annotated the latter by hand using the identified peaks. We also annotated the SCG automatically. We compared the isovolumic moment timings obtained by hand to those obtained using our algorithm. Mean ± confidence interval of the percentage of accurately annotated cardiac cycles were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for levels of negative pressure 0, -20, -30, -40, and -50 mmHg. LF/HF ratios, the relative power of low-frequency to high-frequency variations in heart-beat intervals, obtained from isovolumic moments were also compared to those obtained from R peaks. The mean differences ± confidence interval were [Formula: see text], [Formula: see text], [Formula: see text], [Formula: see text], and [Formula: see text] for increasing levels of negative pressure. The accuracy and consistency of the algorithm enable the use of SCG as a stand-alone heart-monitoring tool in healthy individuals at rest, and could serve as a basis for an eventual application in pathological cases.

  20. Accurate control of a liquid-crystal display to produce a homogenized Fourier transform for holographic memories.

    PubMed

    Márquez, Andrés; Gallego, Sergi; Méndez, David; Alvarez, Mariela L; Fernández, Elena; Ortuño, Manuel; Neipp, Cristian; Beléndez, Augusto; Pascual, Inmaculada

    2007-09-01

    We show an accurate procedure to obtain a Fourier transform (FT) with no dc term using a commercial twisted-nematic liquid-crystal display. We focus on the application to holographic storage of binary data pages, where a drastic decrease of the dc term in the FT is highly desirable. Two different codification schemes are considered: binary pi radians phase modulation and hybrid ternary modulation. Any deviation in the values of the amplitude and phase shift generates the appearance of a strong dc term. Experimental results confirm that the calculated configurations provide a FT with no dc term, thus showing the effectiveness of the proposal.

  1. Ensemble MD simulations restrained via crystallographic data: Accurate structure leads to accurate dynamics

    PubMed Central

    Xue, Yi; Skrynnikov, Nikolai R

    2014-01-01

    Currently, the best existing molecular dynamics (MD) force fields cannot accurately reproduce the global free-energy minimum which realizes the experimental protein structure. As a result, long MD trajectories tend to drift away from the starting coordinates (e.g., crystallographic structures). To address this problem, we have devised a new simulation strategy aimed at protein crystals. An MD simulation of protein crystal is essentially an ensemble simulation involving multiple protein molecules in a crystal unit cell (or a block of unit cells). To ensure that average protein coordinates remain correct during the simulation, we introduced crystallography-based restraints into the MD protocol. Because these restraints are aimed at the ensemble-average structure, they have only minimal impact on conformational dynamics of the individual protein molecules. So long as the average structure remains reasonable, the proteins move in a native-like fashion as dictated by the original force field. To validate this approach, we have used the data from solid-state NMR spectroscopy, which is the orthogonal experimental technique uniquely sensitive to protein local dynamics. The new method has been tested on the well-established model protein, ubiquitin. The ensemble-restrained MD simulations produced lower crystallographic R factors than conventional simulations; they also led to more accurate predictions for crystallographic temperature factors, solid-state chemical shifts, and backbone order parameters. The predictions for 15N R1 relaxation rates are at least as accurate as those obtained from conventional simulations. Taken together, these results suggest that the presented trajectories may be among the most realistic protein MD simulations ever reported. In this context, the ensemble restraints based on high-resolution crystallographic data can be viewed as protein-specific empirical corrections to the standard force fields. PMID:24452989

  2. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X

    NASA Astrophysics Data System (ADS)

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A.; Sondhi, S. L.

    2017-10-01

    We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much bigger than the direct, many-body, exact diagonalization in the spin variables is able to access. We take advantage of the underlying free fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm for interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'.

  3. Accurate determination of the binding energy of the formic acid dimer: The importance of geometry relaxation

    NASA Astrophysics Data System (ADS)

    Kalescky, Robert; Kraka, Elfi; Cremer, Dieter

    2014-02-01

    The formic acid dimer in its C2h-symmetrical cyclic form is stabilized by two equivalent H-bonds. The currently accepted interaction energy is 18.75 kcal/mol whereas the experimental binding energy D0 value is only 14.22 ±0.12 kcal/mol [F. Kollipost, R. W. Larsen, A. V. Domanskaya, M. Nörenberg, and M. A. Suhm, J. Chem. Phys. 136, 151101 (2012)]. Calculation of the binding energies De and D0 at the CCSD(T) (Coupled Cluster with Single and Double excitations and perturbative Triple excitations)/CBS (Complete Basis Set) level of theory, utilizing CCSD(T)/CBS geometries and the frequencies of the dimer and monomer, reveals that there is a 3.2 kcal/mol difference between interaction energy and binding energy De, which results from (i) not relaxing the geometry of the monomers upon dissociation of the dimer and (ii) approximating CCSD(T) correlation effects with MP2. The most accurate CCSD(T)/CBS values obtained in this work are De = 15.55 and D0 = 14.32 kcal/mol where the latter binding energy differs from the experimental value by 0.1 kcal/mol. The necessity of employing augmented VQZ and VPZ calculations and relaxing monomer geometries of H-bonded complexes upon dissociation to obtain reliable binding energies is emphasized.
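The energies quoted in the abstract fix the size of the two corrections directly; the following simply reproduces that arithmetic (all values in kcal/mol, taken from the abstract):

```python
# Arithmetic implied by the abstract's energies (kcal/mol).
interaction_energy = 18.75  # monomers frozen at the dimer geometry
De = 15.55                  # binding energy with relaxed monomers, CCSD(T)/CBS
D0 = 14.32                  # De corrected for zero-point vibrational energy
D0_experiment = 14.22       # Kollipost et al. (2012)

gap = interaction_energy - De       # geometry-relaxation + correlation effects
delta_zpve = De - D0                # zero-point vibrational correction
error_vs_experiment = D0 - D0_experiment

print(f"relaxation/correlation gap: {gap:.2f} kcal/mol")        # 3.20
print(f"ZPVE correction:            {delta_zpve:.2f} kcal/mol") # 1.23
print(f"deviation from experiment:  {error_vs_experiment:.2f} kcal/mol")  # 0.10
```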

  4. Accurately measuring volcanic plume velocity with multiple UV spectrometers

    USGS Publications Warehouse

    Williams-Jones, Glyn; Horton, Keith A.; Elias, Tamar; Garbeil, Harold; Mouginis-Mark, Peter J; Sutton, A. Jeff; Harris, Andrew J. L.

    2006-01-01

    A fundamental problem with all ground-based remotely sensed measurements of volcanic gas flux is the difficulty in accurately measuring the velocity of the gas plume. Since a representative wind speed and direction are used as proxies for the actual plume velocity, there can be considerable uncertainty in reported gas flux values. Here we present a method that uses at least two time-synchronized simultaneously recording UV spectrometers (FLYSPECs) placed a known distance apart. By analyzing the time varying structure of SO2 concentration signals at each instrument, the plume velocity can accurately be determined. Experiments were conducted on Kīlauea (USA) and Masaya (Nicaragua) volcanoes in March and August 2003 at plume velocities between 1 and 10 m s−1. Concurrent ground-based anemometer measurements differed from FLYSPEC-measured plume speeds by up to 320%. This multi-spectrometer method allows for the accurate remote measurement of plume velocity and can therefore greatly improve the precision of volcanic or industrial gas flux measurements.
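The two-spectrometer principle reduces to finding the time lag that maximizes the cross-correlation between the upwind and downwind SO2 signals; combined with the known instrument separation, the lag gives the plume speed. A sketch with synthetic signals (the separation and sampling interval are illustrative, not the deployment's values):

```python
# Plume speed from two time-synchronized SO2 records: find the lag that
# maximizes cross-correlation, then divide known separation by the delay.
# Signals, separation, and sampling interval are synthetic/illustrative.

def best_lag(upwind, downwind, max_lag):
    """Lag (in samples) maximizing the cross-correlation of the two series."""
    def xcorr(lag):
        n = len(upwind) - lag
        return sum(upwind[i] * downwind[i + lag] for i in range(n))
    return max(range(max_lag + 1), key=xcorr)

upwind   = [0, 1, 4, 9, 4, 1, 0, 0, 0, 0]  # SO2 concentration pulse
downwind = [0, 0, 0, 0, 0, 1, 4, 9, 4, 1]  # same pulse, 4 samples later

separation_m = 40.0  # known distance between the spectrometers
sample_dt_s = 1.0    # sampling interval

lag = best_lag(upwind, downwind, max_lag=6)  # lag = 4
print(f"plume speed = {separation_m / (lag * sample_dt_s):.1f} m/s")
```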

  5. Accurate, noninvasive continuous monitoring of cardiac output by whole-body electrical bioimpedance.

    PubMed

    Cotter, Gad; Moshkovitz, Yaron; Kaluski, Edo; Cohen, Amram J; Miller, Hilton; Goor, Daniel; Vered, Zvi

    2004-04-01

Cardiac output (CO) is measured only sparingly due to limitations in its measurement technique (i.e., right-heart catheterization). Yet, in recent years it has been suggested that CO may be of value in the diagnosis, risk stratification, and treatment titration of cardiac patients, especially those with congestive heart failure (CHF). We examine the use of a new noninvasive, continuous whole-body bioimpedance system (NICaS; NI Medical; Hod-Hasharon, Israel) for measuring CO. The aim of the present study was to test the validity of this noninvasive cardiac output system/monitor (NICO) in a cohort of cardiac patients. Prospective, double-blind comparison of the NICO and thermodilution CO determinations. We enrolled 122 patients in three different groups: during cardiac catheterization (n = 40); before, during, and after coronary bypass surgery (n = 51); and while being treated for acute congestive heart failure (CHF) exacerbation (n = 31). MEASUREMENTS AND INTERVENTION: In all patients, CO measurements were obtained by two independent blinded operators. CO was measured by both techniques three times, and an average was determined for each time point. CO was measured at one time point in patients undergoing coronary catheterization; before, during, and after bypass surgery in patients undergoing coronary bypass surgery; and before and during vasodilator treatment in patients treated for acute heart failure. Overall, 418 paired CO measurements were obtained. The overall correlation between the NICO cardiac index (CI) and the thermodilution CI was r = 0.886, with a small bias (0.0009 +/- 0.684 L) [mean +/- 2 SD], and this finding was consistent within each group of patients. Thermodilution readings were 15% higher than NICO when CI was < 1.5 L/min/m(2), and 5% lower than NICO when CI was > 3 L/min/m(2). The NICO also accurately detected CI changes during coronary bypass operation and vasodilator administration for acute CHF. The results of the present study indicate

  6. Remote measurement of water color in coastal waters. [spectral radiance data used to obtain quantitative values for chlorophyll and turbidity

    NASA Technical Reports Server (NTRS)

    Weldon, J. W.

    1973-01-01

An investigation was conducted to develop a procedure for obtaining quantitative values for chlorophyll and turbidity in coastal waters by observing changes in the spectral radiance of the backscattered spectrum. The technique under consideration consists of examining Exotech model 20-D spectral radiometer data and determining which radiance ratios best correlate with chlorophyll and turbidity measurements obtained from analyses of water samples and secchi visibility readings. Preliminary results indicate that there is a correlation between backscattered light and chlorophyll concentration and secchi visibility. The tests were conducted with the spectrometer mounted in a light aircraft over the Mississippi Sound at altitudes of 2.5K, 2.8K, and 10K feet.

  7. The reliability and validity of a three-camera foot image system for obtaining foot anthropometrics.

    PubMed

    O'Meara, Damien; Vanwanseele, Benedicte; Hunt, Adrienne; Smith, Richard

    2010-08-01

    The purpose was to develop a foot image capture and measurement system with web cameras (the 3-FIS) to provide reliable and valid foot anthropometric measures with efficiency comparable to that of the conventional method of using a handheld anthropometer. Eleven foot measures were obtained from 10 subjects using both methods. Reliability of each method was determined over 3 consecutive days using the intraclass correlation coefficient and root mean square error (RMSE). Reliability was excellent for both the 3-FIS and the handheld anthropometer for the same 10 variables, and good for the fifth metatarsophalangeal joint height. The RMSE values over 3 days ranged from 0.9 to 2.2 mm for the handheld anthropometer, and from 0.8 to 3.6 mm for the 3-FIS. The RMSE values between the 3-FIS and the handheld anthropometer were between 2.3 and 7.4 mm. The 3-FIS required less time to collect and obtain the final variables than the handheld anthropometer. The 3-FIS provided accurate and reproducible results for each of the foot variables and in less time than the conventional approach of a handheld anthropometer.

  8. Time-Accurate Solutions of Incompressible Navier-Stokes Equations for Potential Turbopump Applications

    NASA Technical Reports Server (NTRS)

    Kiris, Cetin; Kwak, Dochan

    2001-01-01

Two numerical procedures, one based on the artificial compressibility method and the other on the pressure projection method, are outlined for obtaining time-accurate solutions of the incompressible Navier-Stokes equations. The performance of the two methods is compared by obtaining unsteady solutions for the evolution of twin vortices behind a flat plate. Calculated results are compared with experimental and other numerical results. For an unsteady flow which requires a small physical time step, the pressure projection method was found to be computationally efficient since it does not require any subiteration procedure. It was observed that the artificial compressibility method requires a fast convergence scheme at each physical time step in order to satisfy the incompressibility condition. This was obtained by using a GMRES-ILU(0) solver in our computations. When a line-relaxation scheme was used, the time accuracy was degraded and time-accurate computations became very expensive.

  9. The Effect of Starspots on Accurate Radius Determination of the Low-Mass Double-Lined Eclipsing Binary Gu Boo

    NASA Astrophysics Data System (ADS)

    Windmiller, G.; Orosz, J. A.; Etzel, P. B.

    2010-04-01

GU Boo is one of only a relatively small number of well-studied double-lined eclipsing binaries that contain low-mass stars. López-Morales & Ribas present a comprehensive analysis of multi-color light and radial velocity curves for this system. The GU Boo light curves presented by López-Morales & Ribas had substantial asymmetries, which were attributed to large spots. In spite of the asymmetry, López-Morales & Ribas derived masses and radii accurate to ≈2%. We obtained additional photometry of GU Boo using both a CCD and a single-channel photometer and modeled the light curves with the ELC software to determine if the large spots in the light curves give rise to systematic errors at the few percent level. We also modeled the original light curves from the work of López-Morales & Ribas using models with and without spots. We derived a radius of the primary of 0.6329 ± 0.0026 R sun, 0.6413 ± 0.0049 R sun, and 0.6373 ± 0.0029 R sun from the CCD, photoelectric, and López-Morales & Ribas data, respectively. Each of these measurements agrees with the value reported by López-Morales & Ribas (R 1 = 0.623 ± 0.016 R sun) at the level of ≈2%. In addition, the spread in these values is ≈1%-2% from the mean. For the secondary, we derive radii of 0.6074 ± 0.0035 R sun, 0.5944 ± 0.0069 R sun, and 0.5976 ± 0.0059 R sun from the three respective data sets. The López-Morales & Ribas value is R 2 = 0.620 ± 0.020 R sun, which is ≈2%-3% larger than each of the three values we found. The spread in these values is ≈2% from the mean. The systematic difference between our three determinations of the secondary radius and that of López-Morales & Ribas might be attributed to differences in the modeling process and codes used. Our own fits suggest that, for GU Boo at least, using accurate spot modeling of a single set of multi-color light curves results in radii determinations accurate at the ≈2% level.

  10. THE EFFECT OF STARSPOTS ON ACCURATE RADIUS DETERMINATION OF THE LOW-MASS DOUBLE-LINED ECLIPSING BINARY GU Boo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Windmiller, G.; Orosz, J. A.; Etzel, P. B., E-mail: windmill@rohan.sdsu.ed, E-mail: orosz@sciences.sdsu.ed, E-mail: etzel@sciences.sdsu.ed

    2010-04-01

GU Boo is one of only a relatively small number of well-studied double-lined eclipsing binaries that contain low-mass stars. López-Morales & Ribas present a comprehensive analysis of multi-color light and radial velocity curves for this system. The GU Boo light curves presented by López-Morales & Ribas had substantial asymmetries, which were attributed to large spots. In spite of the asymmetry, López-Morales & Ribas derived masses and radii accurate to ≈2%. We obtained additional photometry of GU Boo using both a CCD and a single-channel photometer and modeled the light curves with the ELC software to determine if the large spots in the light curves give rise to systematic errors at the few percent level. We also modeled the original light curves from the work of López-Morales & Ribas using models with and without spots. We derived a radius of the primary of 0.6329 ± 0.0026 R⊙, 0.6413 ± 0.0049 R⊙, and 0.6373 ± 0.0029 R⊙ from the CCD, photoelectric, and López-Morales & Ribas data, respectively. Each of these measurements agrees with the value reported by López-Morales & Ribas (R1 = 0.623 ± 0.016 R⊙) at the level of ≈2%. In addition, the spread in these values is ≈1%-2% from the mean. For the secondary, we derive radii of 0.6074 ± 0.0035 R⊙, 0.5944 ± 0.0069 R⊙, and 0.5976 ± 0.0059 R⊙ from the three respective data sets. The López-Morales & Ribas value is R2 = 0.620 ± 0.020 R⊙, which is ≈2%-3% larger than each of the three values we found. The spread in these values is ≈2% from the mean. The systematic difference between our three determinations of the secondary radius and that of López-Morales & Ribas might be attributed to differences in the modeling process and codes used. Our own fits suggest that, for GU Boo at least, using accurate spot modeling of a single set of multi-color light curves results in radii determinations accurate at the ≈2% level.

  11. [Reference values for lead levels in blood for the urban population].

    PubMed

    Paolielo, M M; Gutierrez, P R; Turini, C A; Matsuo, T; Mezzaroba, L; Barbosa, D S; Alvarenga, A L; Carvalho, S R; Figueiroa, G A; Leite, V G; Gutierrez, A C; Nogueira, K B; Inamine, W A; Zavatti, A M

    1997-04-01

The lead reference values for blood used in Brazil come from studies conducted in other countries, where socioeconomic, clinical, nutritional and occupational conditions are significantly different. To guarantee accurate biomonitoring of the population occupationally exposed to lead, a major health concern of the studied community, reference values were established for individuals who are not occupationally exposed and who live in the southern region of the city. The sample was composed of 206 subjects of at least 15 years of age. Various strategies were employed to assure good-quality sampling. Subjects who presented values outside clinical or laboratory norms were excluded, as were those whose specific activities might interfere with the results. Lead reference values for blood were found to be from 2.40 to 16.6 μg/dL, obtained from the interval x ± 2s (where x is the mean and s is the standard deviation of the observed values); the median was 7.9 μg/dL.
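The x ± 2s interval used in this record is straightforward to reproduce. A minimal sketch with hypothetical blood-lead values (the actual study data are not available here), using only the Python standard library:

```python
import statistics

def reference_interval(values, k=2.0):
    """Reference interval as mean +/- k standard deviations (here k = 2)."""
    m = statistics.mean(values)
    s = statistics.stdev(values)  # sample standard deviation
    return m - k * s, m + k * s

# Hypothetical blood-lead values (ug/dL) for non-exposed adults
leads = [5.1, 7.9, 6.4, 9.2, 8.8, 10.5, 4.3, 7.0, 12.1, 6.7]
low, high = reference_interval(leads)
print(f"reference interval: {low:.2f}-{high:.2f} ug/dL")
print(f"median: {statistics.median(leads):.1f} ug/dL")
```

Note that a mean ± 2s interval implicitly assumes approximately normal data; for skewed biomarker distributions, percentile-based intervals are often preferred.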

  12. Accurate registration of temporal CT images for pulmonary nodules detection

    NASA Astrophysics Data System (ADS)

    Yan, Jichao; Jiang, Luan; Li, Qiang

    2017-02-01

Interpretation of temporal CT images could help radiologists to detect subtle interval changes in sequential examinations. The purpose of this study was to develop a fully automated scheme for accurate registration of temporal CT images for pulmonary nodule detection. Our method consisted of three major registration steps. Firstly, affine transformation was applied in the segmented lung region to obtain a coarse global registration. Secondly, B-splines based free-form deformation (FFD) was used to refine the coarse registration. Thirdly, the Demons algorithm was performed to align the feature points extracted from the registered images in the second step and the reference images. Our database consisted of 91 temporal CT cases obtained from Beijing 301 Hospital and Shanghai Changzheng Hospital. The preliminary results showed that approximately 96.7% of cases could obtain accurate registration based on subjective observation. The subtraction images of the reference images and the rigid and non-rigid registered images could effectively remove the normal structures (e.g., blood vessels) and retain the abnormalities (e.g., pulmonary nodules). This would be useful for the screening of lung cancer in our future study.

  13. Accurate computation of survival statistics in genome-wide studies.

    PubMed

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J; Upfal, Eli

    2015-05-01

A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications: the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, the Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known associations to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations.
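ExaLT computes the exact null distribution of the log-rank statistic combinatorially. As a rough illustration of why the asymptotic χ² approximation can be replaced by an empirical null, the sketch below computes the two-group log-rank statistic and a Monte Carlo permutation p-value on hypothetical survival data; this is a stand-in for the idea, not the ExaLT algorithm itself, and all data are invented:

```python
import random

def logrank_stat(times, events, groups):
    """Chi-square-style log-rank statistic (O - E)^2 / V for two groups.
    times: event/censoring times; events: 1 = event, 0 = censored; groups: 0/1."""
    O = E = V = 0.0
    for t in sorted({t for t, e in zip(times, events) if e}):
        n = sum(1 for ti in times if ti >= t)                       # at risk
        n1 = sum(1 for ti, g in zip(times, groups) if ti >= t and g == 1)
        d = sum(1 for ti, e in zip(times, events) if ti == t and e)  # events at t
        d1 = sum(1 for ti, e, g in zip(times, events, groups)
                 if ti == t and e and g == 1)
        O += d1
        E += d * n1 / n
        if n > 1:
            V += d * (n1 / n) * (1 - n1 / n) * (n - d) / (n - 1)
    return (O - E) ** 2 / V if V > 0 else 0.0

def permutation_pvalue(times, events, groups, n_perm=2000, seed=0):
    """Empirical p-value: permute group labels, keep (time, event) pairs fixed."""
    rng = random.Random(seed)
    observed = logrank_stat(times, events, groups)
    labels = list(groups)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(labels)
        if logrank_stat(times, events, labels) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Hypothetical data: group 1 (e.g. mutation carriers) has much shorter survival
times  = [2, 3, 4, 5, 6, 7, 10, 12, 14, 16, 18, 20]
events = [1, 1, 1, 1, 1, 0,  1,  1,  1,  0,  1,  1]
groups = [1, 1, 1, 1, 1, 1,  0,  0,  0,  0,  0,  0]
p = permutation_pvalue(times, events, groups)
```

A Monte Carlo permutation null still has resolution limited by the number of permutations, which is exactly why very small p-values motivate an exact combinatorial computation as in ExaLT.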

  14. Accurate Computation of Survival Statistics in Genome-Wide Studies

    PubMed Central

    Vandin, Fabio; Papoutsaki, Alexandra; Raphael, Benjamin J.; Upfal, Eli

    2015-01-01

A key challenge in genomics is to identify genetic variants that distinguish patients with different survival times following diagnosis or treatment. While the log-rank test is widely used for this purpose, nearly all implementations of the log-rank test rely on an asymptotic approximation that is not appropriate in many genomics applications: the two populations determined by a genetic variant may have very different sizes, and the evaluation of many possible variants demands highly accurate computation of very small p-values. We demonstrate this problem for cancer genomics data, where the standard log-rank test leads to many false positive associations between somatic mutations and survival time. We develop and analyze a novel algorithm, the Exact Log-rank Test (ExaLT), that accurately computes the p-value of the log-rank statistic under an exact distribution that is appropriate for populations of any size. We demonstrate the advantages of ExaLT on data from published cancer genomics studies, finding significant differences from the reported p-values. We analyze somatic mutations in six cancer types from The Cancer Genome Atlas (TCGA), finding mutations with known associations to survival as well as several novel associations. In contrast, standard implementations of the log-rank test report dozens to hundreds of likely false positive associations as more significant than these known associations. PMID:25950620

  15. Accurate age determinations of several nearby open clusters containing magnetic Ap stars

    NASA Astrophysics Data System (ADS)

    Silaj, J.; Landstreet, J. D.

    2014-06-01

Context. To study the time evolution of magnetic fields, chemical abundance peculiarities, and other characteristics of magnetic Ap and Bp stars during their main-sequence lives, a sample of these stars in open clusters has been obtained, as such stars can be assumed to have the same ages as the clusters to which they belong. However, in exploring age determinations in the literature, we find a large dispersion among different age determinations, even for bright, nearby clusters. Aims: Our aim is to obtain ages that are as accurate as possible for the seven nearby open clusters α Per, Coma Ber, IC 2602, NGC 2232, NGC 2451A, NGC 2516, and NGC 6475, each of which contains at least one magnetic Ap or Bp star. Simultaneously, we test the current calibrations of Teff and luminosity for the Ap/Bp star members, and clearly identify blue stragglers in the clusters studied. Methods: We explore the possibility that isochrone fitting in the theoretical Hertzsprung-Russell diagram (i.e. log (L/L⊙) vs. log Teff), rather than in the conventional colour-magnitude diagram, can provide more precise and accurate cluster ages, with well-defined uncertainties. Results: Well-defined ages are found for all the clusters studied. For the nearby clusters studied, the derived ages are not very sensitive to the small uncertainties in distance, reddening, membership, metallicity, or choice of isochrones. Our age determinations are all within the range of previously determined values, but the associated uncertainties are considerably smaller than the spread in recent age determinations from the literature. Furthermore, examination of proper motions and HR diagrams confirms that the Ap stars identified in these clusters are members, and that the presently accepted temperature scale and bolometric corrections for Ap stars are approximately correct. We show that in these theoretical HR diagrams blue stragglers are particularly easy to identify. Conclusions: Constructing the theoretical HR diagram

  16. Numerical analysis of the asymptotic two-point boundary value solution for N-body trajectories.

    NASA Technical Reports Server (NTRS)

    Lancaster, J. E.; Allemann, R. A.

    1972-01-01

    Previously published asymptotic solutions for lunar and interplanetary trajectories have been modified and combined to formulate a general analytical boundary value solution applicable to a broad class of trajectory problems. In addition, the earlier first-order solutions have been extended to second-order to determine if improved accuracy is possible. Comparisons between the asymptotic solution and numerical integration for several lunar and interplanetary trajectories show that the asymptotic solution is generally quite accurate. Also, since no iterations are required, a solution to the boundary value problem is obtained in a fraction of the time required for numerically integrated solutions.

  17. A non-contact method based on multiple signal classification algorithm to reduce the measurement time for accurately heart rate detection

    NASA Astrophysics Data System (ADS)

    Bechet, P.; Mitran, R.; Munteanu, M.

    2013-08-01

Non-contact methods for the assessment of vital signs are of great interest to specialists due to the benefits obtained in both medical and special applications, such as surveillance, monitoring, and search and rescue. This paper investigates the possibility of implementing a digital processing algorithm based on MUSIC (Multiple Signal Classification) parametric spectral estimation in order to reduce the observation time needed to accurately measure the heart rate. It demonstrates that, by properly dimensioning the signal subspace, the MUSIC algorithm can be optimized to accurately assess the heart rate during an 8-28 s time interval. The validation of the processing algorithm's performance was achieved by minimizing the mean error of the heart rate after performing simultaneous comparative measurements on several subjects. To calculate the error, the reference heart rate was measured using a classic direct-contact measurement system.

  18. An accurate and inexpensive color-based assay for detecting severe anemia in a limited-resource setting

    PubMed Central

    McGann, Patrick T.; Tyburski, Erika A.; de Oliveira, Vysolela; Santos, Brigida; Ware, Russell E.; Lam, Wilbur A.

    2016-01-01

    Severe anemia is an important cause of morbidity and mortality among children in resource-poor settings, but laboratory diagnostics are often limited in these locations. To address this need, we developed a simple, inexpensive, and color-based point-of-care (POC) assay to detect severe anemia. The purpose of this study was to evaluate the accuracy of this novel POC assay to detect moderate and severe anemia in a limited-resource setting. The study was a cross-sectional study conducted on children with sickle cell anemia in Luanda, Angola. The hemoglobin concentrations obtained by the POC assay were compared to reference values measured by a calibrated automated hematology analyzer. A total of 86 samples were analyzed (mean hemoglobin concentration 6.6 g/dL). There was a strong correlation between the hemoglobin concentrations obtained by the POC assay and reference values obtained from an automated hematology analyzer (r=0.88, P<0.0001). The POC assay demonstrated excellent reproducibility (r=0.93, P<0.0001) and the reagents appeared to be durable in a tropical setting (r=0.93, P<0.0001). For the detection of severe anemia that may require blood transfusion (hemoglobin <5 g/dL), the POC assay had sensitivity of 88.9% and specificity of 98.7%. These data demonstrate that an inexpensive (<$0.25 USD) POC assay accurately estimates low hemoglobin concentrations and has the potential to become a transformational diagnostic tool for severe anemia in limited-resource settings. PMID:26317494
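The sensitivity and specificity quoted for the hemoglobin < 5 g/dL cut-off follow from a simple confusion-matrix count. A minimal sketch on hypothetical paired readings (not the study's data):

```python
def sens_spec(pairs, threshold=5.0):
    """pairs: (poc_hb, reference_hb) in g/dL. Positive = severe anemia,
    i.e. reference hemoglobin below the threshold."""
    tp = fp = tn = fn = 0
    for poc, ref in pairs:
        predicted = poc < threshold   # POC assay calls severe anemia
        actual = ref < threshold      # analyzer ground truth
        if predicted and actual:
            tp += 1
        elif predicted and not actual:
            fp += 1
        elif not predicted and actual:
            fn += 1
        else:
            tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Hypothetical (POC, analyzer) hemoglobin readings in g/dL
pairs = [(4.2, 4.5), (4.8, 4.9), (5.4, 4.8), (6.1, 6.0),
         (7.2, 7.0), (5.2, 5.5), (8.0, 8.3), (4.6, 4.4)]
sens, spec = sens_spec(pairs)
```

With these invented readings one POC value (5.4 vs 4.8) misses a true severe case, so sensitivity is below 1 while specificity is perfect; the study's 88.9%/98.7% figures arise from the same counting on 86 real samples.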

  19. Obtaining highly excited eigenstates of the localized XX chain via DMRG-X.

    PubMed

    Devakul, Trithep; Khemani, Vedika; Pollmann, Frank; Huse, David A; Sondhi, S L

    2017-12-13

We benchmark a variant of the recently introduced density matrix renormalization group (DMRG)-X algorithm against exact results for the localized random-field XX chain. We find that the eigenstates obtained via DMRG-X exhibit a highly accurate l-bit description for system sizes much larger than direct many-body exact diagonalization in the spin variables is able to access. We take advantage of the underlying free-fermion description of the XX model to accurately test the strengths and limitations of this algorithm for large system sizes. We discuss the theoretical constraints on the performance of the algorithm arising from the entanglement properties of the eigenstates, and its actual performance at different values of disorder. A small but significant improvement to the algorithm is also presented, which helps significantly with convergence. We find that, at high entanglement, DMRG-X shows a bias towards eigenstates with low entanglement, but this can be improved with increased bond dimension. This result suggests that one must be careful when applying the algorithm to interacting many-body localized spin models near a transition. This article is part of the themed issue 'Breakdown of ergodicity in quantum systems: from solids to synthetic matter'. © 2017 The Author(s).

  20. Quantitative Phase Microscopy for Accurate Characterization of Microlens Arrays

    NASA Astrophysics Data System (ADS)

    Grilli, Simonetta; Miccio, Lisa; Merola, Francesco; Finizio, Andrea; Paturzo, Melania; Coppola, Sara; Vespini, Veronica; Ferraro, Pietro

    Microlens arrays are of fundamental importance in a wide variety of applications in optics and photonics. This chapter deals with an accurate digital holography-based characterization of both liquid and polymeric microlenses fabricated by an innovative pyro-electrowetting process. The actuation of liquid and polymeric films is obtained through the use of pyroelectric charges generated into polar dielectric lithium niobate crystals.

  1. Unenhanced breast MRI (STIR, T2-weighted TSE, DWIBS): An accurate and alternative strategy for detecting and differentiating breast lesions.

    PubMed

    Telegrafo, Michele; Rella, Leonarda; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe; Moschetta, Marco

    2015-10-01

To assess the role of STIR, T2-weighted TSE and DWIBS sequences for detecting and characterizing breast lesions, and to compare unenhanced (UE)-MRI results with contrast-enhanced (CE)-MRI and histological findings, having the latter as the reference standard. Two hundred eighty consecutive patients (age range, 27-73 years; mean age ± standard deviation (SD), 48.8 ± 9.8 years) underwent MR examination with a diagnostic protocol including STIR, T2-weighted TSE, THRIVE and DWIBS sequences. Two radiologists blinded to both dynamic sequences and histological findings evaluated in consensus the STIR, T2-weighted TSE and DWIBS sequences and, after two weeks, the CE-MRI images, searching for breast lesions. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for UE-MRI and CE-MRI were calculated. UE-MRI results were also compared with CE-MRI. UE-MRI sequences obtained sensitivity, specificity, diagnostic accuracy, PPV and NPV values of 94%, 79%, 86%, 79% and 94%, respectively. CE-MRI sequences obtained sensitivity, specificity, diagnostic accuracy, PPV and NPV values of 98%, 83%, 90%, 84% and 98%, respectively. No statistically significant difference between UE-MRI and CE-MRI was found. Breast UE-MRI could represent an accurate diagnostic tool and a valid alternative to CE-MRI for evaluating breast lesions. STIR and DWIBS sequences allow detection of breast lesions, while T2-weighted TSE sequences and ADC values could be useful for lesion characterization. Copyright © 2015 Elsevier Inc. All rights reserved.

  2. Accurate oscillator strengths for ultraviolet lines of Ar I - Implications for interstellar material

    NASA Technical Reports Server (NTRS)

    Federman, S. R.; Beideck, D. J.; Schectman, R. M.; York, D. G.

    1992-01-01

    Analysis of absorption from interstellar Ar I in lightly reddened lines of sight provides information on the warm and hot components of the interstellar medium near the sun. The details of the analysis are limited by the quality of the atomic data. Accurate oscillator strengths for the Ar I lines at 1048 and 1067 A and the astrophysical implications are presented. From lifetimes measured with beam-foil spectroscopy, an f-value for 1048 A of 0.257 +/- 0.013 is obtained. Through the use of a semiempirical formalism for treating singlet-triplet mixing, an oscillator strength of 0.064 +/- 0.003 is derived for 1067 A. Because of the accuracy of the results, the conclusions of York and colleagues from spectra taken with the Copernicus satellite are strengthened. In particular, for interstellar gas in the solar neighborhood, argon has a solar abundance, and the warm, neutral material is not pervasive.
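For a resonance line whose upper level decays essentially only through the measured transition, the absorption f-value follows from the beam-foil lifetime via the standard relation A_ki = 6.6702 × 10^15 f_ik (g_i/g_k)/λ² (λ in Å). A sketch with illustrative numbers; the lifetime and level degeneracies below are assumptions for the purpose of the example, not the paper's measured values:

```python
def absorption_f_value(wavelength_angstrom, lifetime_s, g_lower, g_upper):
    """f_ik from the Einstein A coefficient, assuming the upper level decays
    only through this one transition, so A_ki = 1/tau.
    Standard relation: A_ki = 6.6702e15 * f_ik * (g_i/g_k) / lambda^2,
    with lambda in Angstrom and A in s^-1."""
    A = 1.0 / lifetime_s
    return wavelength_angstrom**2 * g_upper * A / (g_lower * 6.6702e15)

# Illustrative only: Ar I ground state 1S0 (g_i = 1), a J = 1 upper level
# (g_k = 3), and a hypothetical lifetime of 1.9 ns
f = absorption_f_value(1048.2, 1.9e-9, g_lower=1, g_upper=3)
```

With these assumed inputs the formula yields an f-value of order 0.26, the same order as the 0.257 ± 0.013 reported in the record; branching to other lower levels, when present, reduces the effective A for the measured line.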

  3. Accurate Determination of the Q Quality Factor in Magnetoelastic Resonant Platforms for Advanced Biological Detection

    PubMed Central

    Lopes, Ana Catarina; Sagasti, Ariane; Lasheras, Andoni; Muto, Virginia; Gutiérrez, Jon; Kouzoudis, Dimitris; Barandiarán, José Manuel

    2018-01-01

    The main parameters of magnetoelastic resonators in the detection of chemical (i.e., salts, gases, etc.) or biological (i.e., bacteria, phages, etc.) agents are the sensitivity S (or external agent change magnitude per Hz change in the resonance frequency) and the quality factor Q of the resonance. We present an extensive study on the experimental determination of the Q factor in such magnetoelastic resonant platforms, using three different strategies: (a) analyzing the real and imaginary components of the susceptibility at resonance; (b) numerical fitting of the modulus of the susceptibility; (c) using an exact mathematical expression for the real part of the susceptibility. Q values obtained by the three methods are analyzed and discussed, aiming to establish the most adequate one to accurately determine the quality factor of the magnetoelastic resonance. PMID:29547578

  4. Accurate Determination of the Q Quality Factor in Magnetoelastic Resonant Platforms for Advanced Biological Detection.

    PubMed

    Lopes, Ana Catarina; Sagasti, Ariane; Lasheras, Andoni; Muto, Virginia; Gutiérrez, Jon; Kouzoudis, Dimitris; Barandiarán, José Manuel

    2018-03-16

    The main parameters of magnetoelastic resonators in the detection of chemical (i.e., salts, gases, etc.) or biological (i.e., bacteria, phages, etc.) agents are the sensitivity S (or external agent change magnitude per Hz change in the resonance frequency) and the quality factor Q of the resonance. We present an extensive study on the experimental determination of the Q factor in such magnetoelastic resonant platforms, using three different strategies: (a) analyzing the real and imaginary components of the susceptibility at resonance; (b) numerical fitting of the modulus of the susceptibility; (c) using an exact mathematical expression for the real part of the susceptibility. Q values obtained by the three methods are analyzed and discussed, aiming to establish the most adequate one to accurately determine the quality factor of the magnetoelastic resonance.
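Strategy (b), fitting the modulus of the susceptibility, can be approximated very simply by locating the two half-power points of the resonance curve, since Q = f_r/Δf. A sketch on a synthetic Lorentzian-shaped resonance (the resonance frequency and Q value are invented, and this interpolation shortcut is not the paper's fitting procedure):

```python
import math

def q_from_half_power(freqs, amplitude):
    """Estimate Q = f_r / bandwidth, where the bandwidth is measured between
    the two points at peak / sqrt(2) (half-power points), using linear
    interpolation between samples."""
    peak = max(amplitude)
    i_pk = amplitude.index(peak)
    target = peak / math.sqrt(2)

    def crossing(i, j, step):
        # walk away from the peak until amplitude drops to target, interpolate
        while 0 < j < len(freqs) - 1 and amplitude[j] > target:
            i, j = j, j + step
        frac = (amplitude[i] - target) / (amplitude[i] - amplitude[j])
        return freqs[i] + frac * (freqs[j] - freqs[i])

    f_lo = crossing(i_pk, i_pk - 1, -1)
    f_hi = crossing(i_pk, i_pk + 1, +1)
    return freqs[i_pk] / (f_hi - f_lo)

# Synthetic resonance: f_r = 58 kHz, true Q = 250
f_r, q_true = 58_000.0, 250.0
freqs = [f_r - 2000 + 4.0 * k for k in range(1001)]
amplitude = [1.0 / math.sqrt(1.0 + (2 * q_true * (f - f_r) / f_r) ** 2)
             for f in freqs]
q_est = q_from_half_power(freqs, amplitude)
```

On clean data this recovers Q accurately; on noisy measurements, fitting the full lineshape (strategies (b) and (c) in the record) is more robust than reading off two points.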

  5. Magnetic gaps in organic tri-radicals: From a simple model to accurate estimates.

    PubMed

    Barone, Vincenzo; Cacelli, Ivo; Ferretti, Alessandro; Prampolini, Giacomo

    2017-03-14

The calculation of the energy gap between the magnetic states of organic poly-radicals still represents a challenging playground for quantum chemistry, and high-level techniques are required to obtain accurate estimates. On these grounds, the aim of the present study is twofold. On the one hand, it shows that, thanks to recent algorithmic and technical improvements, we are able to compute reliable quantum mechanical results for systems of current fundamental and technological interest. On the other hand, proper parameterization of a simple Hubbard Hamiltonian allows for a sound rationalization of magnetic gaps in terms of basic physical effects, unraveling the role played by electron delocalization, Coulomb repulsion, and effective exchange in tuning the magnetic character of the ground state. As case studies, we have chosen three prototypical organic tri-radicals, namely, 1,3,5-trimethylenebenzene, 1,3,5-tridehydrobenzene, and 1,2,3-tridehydrobenzene, which differ in either geometric or electronic structure. After discussing the differences among the three species and their consequences on the magnetic properties in terms of the simple model mentioned above, accurate and reliable values for the energy gap between the lowest quartet and doublet states are computed by means of the so-called difference dedicated configuration interaction (DDCI) technique, and the final results are discussed and compared to both available experimental and computational estimates.

  6. Accurate Measurements of the Local Deuterium Abundance from HST Spectra

    NASA Technical Reports Server (NTRS)

    Linsky, Jeffrey L.

    1996-01-01

    An accurate measurement of the primordial value of D/H would provide a critical test of nucleosynthesis models for the early universe and the baryon density. I briefly summarize the ongoing HST observations of the interstellar H and D Lyman-alpha absorption for lines of sight to nearby stars and comment on recent reports of extragalactic D/H measurements.

  7. A new automatic blood pressure kit auscultates for accurate reading with a smartphone

    PubMed Central

    Wu, Hongjun; Wang, Bingjian; Zhu, Xinpu; Chu, Guang; Zhang, Zhi

    2016-01-01

Abstract The widely used oscillometric automated blood pressure (BP) monitor has repeatedly been questioned on its accuracy. A novel BP kit named Accutension, which adopts the Korotkoff auscultation method, was therefore devised. Accutension works with a miniature microphone, a pressure sensor, and a smartphone. The BP values are automatically displayed on the smartphone screen through the installed App. Data recorded in the phone can be played back and reconfirmed after measurement, and can also be uploaded and saved to the iCloud. The accuracy and consistency of this novel electronic auscultatory sphygmomanometer were preliminarily verified here. Thirty-two subjects were included and 82 qualified readings were obtained. The mean differences ± SD for systolic and diastolic BP readings between Accutension and the mercury sphygmomanometer were 0.87 ± 2.86 and −0.94 ± 2.93 mm Hg. Agreements between Accutension and the mercury sphygmomanometer were highly significant for systolic (ICC = 0.993, 95% confidence interval (CI): 0.989-0.995) and diastolic (ICC = 0.987, 95% CI: 0.979-0.991) readings. In conclusion, Accutension worked accurately based on our pilot study data. The difference was acceptable. ICC and Bland-Altman plot charts showed good agreement with manual measurements. Systolic readings of Accutension were slightly higher than those of manual measurement, while diastolic readings were slightly lower. One possible reason is that Accutension captures the first and last Korotkoff sounds more sensitively than the human ear during manual measurement and avoids missed sounds, so it might be more accurate than the traditional mercury sphygmomanometer. By documenting and analyzing the trend of BP values, Accutension helps the management of hypertension and therefore contributes to mobile health services. PMID:27512876
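The mean difference ± SD agreement figures reported here are the ingredients of a Bland-Altman analysis. A minimal sketch on hypothetical paired readings (not the study's data):

```python
import statistics

def limits_of_agreement(device, reference):
    """Bland-Altman summary: mean difference (bias) and the
    bias +/- 1.96 SD limits of agreement."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired systolic readings (mm Hg): device vs mercury column
device    = [118, 126, 131, 142, 109, 135, 121, 128]
reference = [117, 125, 133, 140, 110, 133, 122, 127]
bias, (lo, hi) = limits_of_agreement(device, reference)
```

A small bias with narrow limits of agreement, as in the record's 0.87 ± 2.86 mm Hg systolic result, indicates the two methods can be used interchangeably for most clinical purposes.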

  8. Discussion on Comprehensive Utilization Value of Scutellaria Baicalensis Flower

    NASA Astrophysics Data System (ADS)

    Song, Yagang; Miao, Mingsan

    2018-01-01

The chemical constituents of Scutellaria baicalensis flower include flavonoids, volatile oils, and melanin. It has anti-tumor, anti-inflammatory, antioxidant, anti-angiogenic, and antithrombotic pharmacological effects, as well as the traditional effect of clearing away heat and relieving lung fire. Scutellaria baicalensis flower is abundant, cheap, easy to obtain, and reliably effective in the prevention and treatment of a variety of diseases. In this paper, the historical applications, chemical constituents, pharmacological actions, and comprehensive utilization of Scutellaria baicalensis flower are reviewed. The purpose of this study was to explore its development and utilization value, so as to provide a reference for the comprehensive utilization of Scutellaria baicalensis flower.

  9. Accurate beacon positioning method for satellite-to-ground optical communication.

    PubMed

    Wang, Qiang; Tong, Ling; Yu, Siyuan; Tan, Liying; Ma, Jing

    2017-12-11

In satellite laser communication systems, accurate positioning of the beacon is essential for establishing a steady laser communication link. For satellite-to-ground optical communication, the main factors influencing the acquisition of the beacon are background noise and atmospheric turbulence. In this paper, we consider the influence of background noise and atmospheric turbulence on the beacon in satellite-to-ground optical communication, and propose a new locating algorithm for the beacon, which uses the correlation coefficients obtained by curve fitting of the image data as weights. By performing a long-distance laser communication experiment (11.16 km), we verified the feasibility of this method. Both simulation and experiment showed that the new algorithm can accurately obtain the position of the centroid of the beacon. Furthermore, for a light spot distorted by atmospheric turbulence, the locating accuracy of the new algorithm was 50% higher than that of the conventional gray centroid algorithm. This new approach will be beneficial for the design of satellite-to-ground optical communication systems.
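The conventional gray (intensity-weighted) centroid that the new algorithm is compared against is simple to state; the paper's refinement additionally weights the data by curve-fit correlation coefficients, which is not reproduced here. A sketch of the baseline only, on a synthetic spot:

```python
def gray_centroid(image):
    """Intensity-weighted centroid of a 2D image (list of rows).
    This is the conventional gray-centroid baseline from the record."""
    total = x_sum = y_sum = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            x_sum += v * x
            y_sum += v * y
    return x_sum / total, y_sum / total

# Symmetric synthetic spot centered at (x, y) = (2, 1)
image = [
    [0, 1, 2, 1, 0],
    [1, 3, 8, 3, 1],
    [0, 1, 2, 1, 0],
]
cx, cy = gray_centroid(image)
```

Because every pixel contributes with weight equal to its raw intensity, uniform background noise and turbulence-induced speckle pull this estimate off-center, which is the failure mode the weighted algorithm in the record addresses.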

  10. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
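For tracts fitted with 2nd-order polynomials, the curvature follows in closed form from κ = |y''|/(1 + y'²)^(3/2). A sketch; the radius below is illustrative, chosen so that κ lands inside the 3.8-15.3 m⁻¹ range simulated in the record:

```python
def quadratic_curvature(a, b, x):
    """Curvature of y = a*x**2 + b*x + c at position x:
    kappa = |y''| / (1 + y'**2)**1.5, with y' = 2*a*x + b and y'' = 2*a."""
    return abs(2 * a) / (1 + (2 * a * x + b) ** 2) ** 1.5

# A parabola osculating a circle of radius R at its vertex: y = x**2 / (2*R).
# At the vertex the curvature must equal 1/R.
R = 0.125  # metres, i.e. an expected kappa of 8 m^-1
kappa = quadratic_curvature(a=1 / (2 * R), b=0.0, x=0.0)
```

Fitting smooths out voxel-level noise before this formula is applied, which is why the fitted tracts in the study recover physiologically reasonable κ while raw tracts do not.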

  11. Accurate interlaminar stress recovery from finite element analysis

    NASA Technical Reports Server (NTRS)

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of equilibrium equations to obtain accurate interlaminar shear stresses. The test problem is a simply supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  12. Reliable and accurate extraction of Hamaker constants from surface force measurements.

    PubMed

    Miklavcic, S J

    2018-08-15

A simple and accurate closed-form expression for the Hamaker constant that best represents experimental surface force data is presented. Numerical comparisons are made with the current standard least-squares approach, which falsely assumes error-free separation measurements, and with a nonlinear version assuming independent measurements of force and separation are subject to error. The comparisons demonstrate that the proposed formula is not only easily implemented but also considerably more accurate. This option is appropriate for any value of the Hamaker constant, high or low, and certainly for any interacting system exhibiting an inverse-square distance-dependent van der Waals force. Copyright © 2018 Elsevier Inc. All rights reserved.
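For the sphere-plane van der Waals geometry, F(D) = −AR/(6D²), and the standard least-squares estimate of A (the one the record criticizes for assuming error-free separations) has a simple closed form. A sketch on synthetic noise-free data, shown only to fix notation; all numbers are invented and this is not the paper's improved estimator:

```python
def fit_hamaker(D, F, R):
    """Ordinary least-squares Hamaker constant for the sphere-plane
    van der Waals force F(D) = -A * R / (6 * D**2).
    Minimising sum (F_i + (A*R/6) * x_i)**2 with x_i = 1/D_i**2 gives
    A = -6 * sum(F_i * x_i) / (R * sum(x_i**2))."""
    xs = [1.0 / d**2 for d in D]
    num = sum(f * x for f, x in zip(F, xs))
    den = sum(x * x for x in xs)
    return -6.0 * num / (R * den)

# Synthetic noise-free data generated with A_true = 1e-20 J, probe R = 10 nm
A_true, R = 1e-20, 10e-9
D = [d * 1e-9 for d in (2, 3, 4, 5, 8, 12)]          # separations in metres
F = [-A_true * R / (6 * d**2) for d in D]            # attractive forces in N
A_fit = fit_hamaker(D, F, R)
```

On noise-free data this recovers A exactly; the record's point is that once the separations D themselves carry measurement error, this estimator becomes biased, motivating the proposed alternative.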

  13. Obtaining value prior to pulping with diethyl oxalate and oxalic acid

    Treesearch

    W.R. Kenealy; E. Horn; C.J. Houtman; J. Laplaza; T.W. Jeffries

    2007-01-01

    In pulping, wood is converted to paper products with yields of paper dependent on the wood and the process used. Even with high-yield pulps there are conversion losses, and with chemical pulps the yields approach 50%. The portions of the wood that do not provide product are either combusted to generate power and steam or incur a cost in waste water treatment. Value prior...

  14. GHM method for obtaining rational solutions of nonlinear differential equations.

    PubMed

    Vazquez-Leal, Hector; Sarmiento-Reyes, Arturo

    2015-01-01

    In this paper, we propose the application of the general homotopy method (GHM) to obtain rational solutions of nonlinear differential equations. It delivers a high-precision representation of the nonlinear differential equation using a few linear algebraic terms. In order to assess the benefits of this proposal, three nonlinear problems are solved and compared against other semi-analytic or numerical methods. The obtained results show that GHM is a powerful tool, capable of generating highly accurate rational solutions. AMS subject classification 34L30.

  15. A systematic review of the angular values obtained by computerized photogrammetry in sagittal plane: a proposal for reference values.

    PubMed

    Krawczky, Bruna; Pacheco, Antonio G; Mainenti, Míriam R M

    2014-05-01

    Reference values for postural alignment in the coronal plane, as measured by computerized photogrammetry, have been established, but not for the sagittal plane. The objective of this study is to propose reference values for angular measurements used for postural analysis in the sagittal plane for healthy adults. Electronic databases (PubMed, BVS, Cochrane, Scielo, and Science Direct) were searched using the following key words: evaluation, posture, photogrammetry, and software. Articles published between 2006 and 2012 that used the PAS/SAPO (postural assessment software) were selected. Another inclusion criterion was the presentation of at least one of the following measurements: head horizontal alignment, pelvic horizontal alignment, hip angle, vertical alignment of the body, thoracic kyphosis, and lumbar lordosis. Angle samples of the selected articles were grouped 2 by 2 in relation to an overall average, which made possible total average, variance, and SD calculations. Six articles were included, and the following average angular values were found: 51.42° ± 4.87° (head horizontal alignment), -12.26° ± 5.81° (pelvic horizontal alignment), -6.40° ± 3.86° (hip angle), and 1.73° ± 0.94° (vertical alignment of the body). None of the articles contained measurements for thoracic kyphosis or lumbar lordosis. These angular values can be adopted as references for postural assessment in future research, provided the same anatomical points are considered. Copyright © 2014 National University of Health Sciences. Published by Mosby, Inc. All rights reserved.

  16. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    PubMed

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting rates, and/or injected activity limits were established for each scanner. Scatter correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  17. Health values and prospect theory.

    PubMed

    Treadwell, J R; Lenert, L A

    1999-01-01

    Health values are important components of medical decisions. Experimental data suggest that people value health in complex and dynamic ways. Prospect theory is a descriptive theory of choice that may accurately characterize how people assign values to health states. The authors first provide background on prospect theory and how it can be applied to health values. Next, they review the relevant health research and find mixed support for prospect theory. Last, they discuss implications of prospect theory for cost-effectiveness analysis. The application of prospect theory to health deserves further research because it may help clarify the link between health and values.

  18. [VALUE OF SMART PHONE Scoliometer SOFTWARE IN OBTAINING OPTIMAL LUMBAR LORDOSIS DURING L4-S1 FUSION SURGERY].

    PubMed

    Yu, Weibo; Liang, De; Ye, Linqiang; Jiang, Xiaobing; Yao, Zhensong; Tang, Jingjing; Tang, Yongchao

    2015-10-01

    To investigate the value of smart phone Scoliometer software in obtaining optimal lumbar lordosis (LL) during L4-S1 fusion surgery. Between November 2014 and February 2015, 20 patients scheduled for L4-S1 fusion surgery were prospectively enrolled in the study. There were 8 males and 12 females, aged 41-65 years (mean, 52.3 years). The disease duration ranged from 6 months to 6 years (mean, 3.4 years). Before operation, the pelvic incidence (PI) and Cobb angle of L4-S1 (CobbL4-S1) were measured on lateral X-ray films of the lumbosacral spine with the PACS system; the ideal CobbL4-S1 was then calculated according to previously published methods [(PI + 9 degrees) x 70%]. Subsequently, intraoperative CobbL4-S1 was monitored with the Scoliometer software and was defined as optimal when it differed from the ideal CobbL4-S1 by less than 5 degrees. Finally, the CobbL4-S1 was measured with the PACS system after operation, and the consistency between the Scoliometer software and the PACS system was compared to evaluate the accuracy of the software. In addition, the value of this method in obtaining optimal LL was validated by comparing the deviation between the ideal and preoperative CobbL4-S1 with that between the ideal and postoperative CobbL4-S1. The CobbL4-S1 was (36.17 ± 1.53) degrees for the ideal value, (22.57 ± 5.50) degrees preoperatively, (32.25 ± 1.46) degrees intraoperatively as measured by the Scoliometer software, and (34.43 ± 1.72) degrees postoperatively. The observed intraclass correlation coefficient (ICC) was excellent [ICC = 0.96, 95% confidence interval (0.93, 0.97)] and the mean absolute difference (MAD) was low (MAD = 1.23) between the Scoliometer software and the PACS system. The deviation between the ideal and postoperative CobbL4-S1 was (2.31 ± 0.23) degrees, significantly lower than the deviation between the ideal and preoperative CobbL4-S1, (13.60 ± 1.85) degrees (t = 6.065, P = 0.001). The Scoliometer software can help the surgeon obtain optimal LL during L4-S1 fusion surgery.
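
    The ideal-angle rule [(PI + 9 degrees) x 70%] and the 5-degree tolerance described above are simple enough to compute directly; the pelvic incidence below is a made-up illustration, not patient data:

```python
def ideal_cobb_l4s1(pelvic_incidence_deg):
    """Ideal L4-S1 Cobb angle from the rule cited in the abstract:
    (PI + 9 degrees) x 70%."""
    return (pelvic_incidence_deg + 9.0) * 0.70

def is_optimal(intraop_cobb_deg, ideal_deg, tol_deg=5.0):
    """Intraoperative lordosis counts as optimal when it is within
    5 degrees of the ideal value."""
    return abs(intraop_cobb_deg - ideal_deg) < tol_deg

ideal = ideal_cobb_l4s1(42.7)  # hypothetical pelvic incidence of 42.7 degrees
```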

  19. An accurate density functional theory based estimation of pK(a) values of polar residues combined with experimental data: from amino acids to minimal proteins.

    PubMed

    Matsui, Toru; Baba, Takeshi; Kamiya, Katsumasa; Shigeta, Yasuteru

    2012-03-28

    We report a scheme for estimating the acid dissociation constant (pK(a)) based on quantum-chemical calculations combined with a polarizable continuum model, where a parameter is determined for small reference molecules. We calculated the pK(a) values of variously sized molecules ranging from an amino acid to a protein consisting of 300 atoms. This scheme enabled us to derive a semiquantitative pK(a) value of specific chemical groups and discuss the influence of the surroundings on the pK(a) values. As applications, we have derived the pK(a) value of the side chain of an amino acid and almost reproduced the experimental value. By using our computing schemes, we showed the influence of hydrogen bonds on the pK(a) values in the case of tripeptides, which decreases the pK(a) value by 3.0 units for serine in comparison with those of the corresponding monopeptides. Finally, with some assumptions, we derived the pK(a) values of tyrosines and serines in chignolin and a tryptophan cage. We obtained quite different pK(a) values of adjacent serines in the tryptophan cage; the pK(a) value of the OH group of Ser13 exposed to bulk water is 14.69, whereas that of Ser14 not exposed to bulk water is 20.80 because of the internal hydrogen bonds.
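
    The thermodynamic link used by such schemes, relating a computed deprotonation free energy in solution to pK(a), is (in its standard textbook form, leaving out the paper's reference-molecule correction):

```latex
\mathrm{HA} \;\longrightarrow\; \mathrm{H}^{+} + \mathrm{A}^{-},
\qquad
\mathrm{p}K_a \;=\; \frac{\Delta G^{*}_{\mathrm{deprot}}}{RT \ln 10}
```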

  20. Applying the Expectancy-Value Model to understand health values.

    PubMed

    Zhang, Xu-Hao; Xie, Feng; Wee, Hwee-Lin; Thumboo, Julian; Li, Shu-Chuen

    2008-03-01

    % and 28% in separate MLR models (P < 0.05). When data were analyzed for each health state, variances in health values became small and the explanatory power of the EVM was reduced to a range between 8% and 23%. The EVM was useful in explaining variances of health values and predicting important factors. Its power to explain small variances might be restricted by the limitations of a 7-point Likert scale in measuring AAs accurately. With further improvement and validation of a compatible continuous scale for more accurate measurement, the EVM is expected to explain health values to a larger extent.

  1. Radiometrically accurate scene-based nonuniformity correction for array sensors.

    PubMed

    Ratliff, Bradley M; Hayat, Majeed M; Tyo, J Scott

    2003-10-01

    A novel radiometrically accurate scene-based nonuniformity correction (NUC) algorithm is described. The technique combines absolute calibration with a recently reported algebraic scene-based NUC algorithm. The technique is based on the following principle: First, detectors that are along the perimeter of the focal-plane array are absolutely calibrated; then the calibration is transported to the remaining uncalibrated interior detectors through the application of the algebraic scene-based algorithm, which utilizes pairs of image frames exhibiting arbitrary global motion. The key advantage of this technique is that it can obtain radiometric accuracy during NUC without disrupting camera operation. Accurate estimates of the bias nonuniformity can be achieved with relatively few frames, which can be fewer than ten frame pairs. Advantages of this technique are discussed, and a thorough performance analysis is presented with use of simulated and real infrared imagery.
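
    The absolute-calibration step for the perimeter detectors can be sketched as a standard two-point (gain/offset) calibration; the algebraic scene-based transport to the interior detectors is specific to the cited algorithm and is not reproduced here. All quantities below are simulated:

```python
import numpy as np

def two_point_calibration(resp_cold, resp_hot, L_cold, L_hot):
    """Per-detector gain and offset mapping raw response to radiance,
    obtained from views of two uniform reference sources (e.g. cold and
    hot blackbody radiances L_cold and L_hot)."""
    gain = (L_hot - L_cold) / (resp_hot - resp_cold)
    offset = L_cold - gain * resp_cold
    return gain, offset

# Simulated perimeter detectors with fixed-pattern gain/offset nonuniformity
rng = np.random.default_rng(1)
a = 1.0 + 0.1 * rng.standard_normal(8)   # detector responsivities
c = 0.5 * rng.standard_normal(8)         # detector offsets
L_cold, L_hot = 10.0, 50.0               # reference radiances (arbitrary units)
gain, offset = two_point_calibration(a * L_cold + c, a * L_hot + c, L_cold, L_hot)

# A calibrated detector now reports scene radiance regardless of its a and c
L_scene = 30.0
L_est = gain * (a * L_scene + c) + offset
```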

  2. Collateral missing value imputation: a new robust missing value estimation algorithm for microarray data.

    PubMed

    Sehgal, Muhammad Shoaib B; Gondal, Iqbal; Dooley, Laurence S

    2005-05-15

    Microarray data are used in a range of application areas in biology, although they often contain considerable numbers of missing values. These missing values can significantly affect subsequent statistical analysis and machine learning algorithms, so there is a strong motivation to estimate them as accurately as possible before such algorithms are applied. While many imputation algorithms have been proposed, more robust techniques need to be developed so that further analysis of biological data can be accurately undertaken. In this paper, an innovative missing value imputation algorithm called collateral missing value estimation (CMVE) is presented, which uses multiple covariance-based imputation matrices for the final prediction of missing values. The matrices are computed and optimized using least-squares regression and linear programming methods. The new CMVE algorithm has been compared with existing estimation techniques including Bayesian principal component analysis imputation (BPCA), least square impute (LSImpute) and K-nearest neighbour (KNN). All these methods were rigorously tested to estimate missing values in three separate non-time series (ovarian cancer) datasets and one time series (yeast sporulation) dataset. Each method was quantitatively analyzed using the normalized root mean square (NRMS) error measure, covering a wide range of randomly introduced missing value probabilities from 0.01 to 0.2. Experiments were also undertaken on the yeast dataset, which comprised 1.7% actual missing values, to test the hypothesis that CMVE performed better not only for randomly occurring but also for a real distribution of missing values. The results confirmed that CMVE consistently demonstrated superior and robust estimation of missing values compared with the other methods for both types of data, for the same order of computational complexity. A concise theoretical framework has also been formulated to validate the improved performance of the CMVE algorithm.
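
    The NRMS error measure used for such comparisons can be written down directly; this sketch uses one common definition (RMS imputation error normalized by the standard deviation of the true values), which may differ in detail from the paper's:

```python
import numpy as np

def nrms_error(y_true, y_imputed):
    """Normalized root-mean-square error: RMS of the imputation error
    divided by the standard deviation of the true values."""
    y_true = np.asarray(y_true, float)
    y_imputed = np.asarray(y_imputed, float)
    return np.sqrt(np.mean((y_true - y_imputed) ** 2)) / np.std(y_true)

err = nrms_error([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```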

  3. Valuing inter-sectoral costs and benefits of interventions in the healthcare sector: methods for obtaining unit prices.

    PubMed

    Drost, Ruben M W A; Paulus, Aggie T G; Ruwaard, Dirk; Evers, Silvia M A A

    2017-02-01

    There is a lack of knowledge about methods for valuing health intervention-related costs and monetary benefits in the education and criminal justice sectors, also known as 'inter-sectoral costs and benefits' (ICBs). The objective of this study was to develop methods for obtaining unit prices for the valuation of ICBs. By conducting an exploratory literature study and expert interviews, several generic methods were developed. The methods' feasibility was assessed through application in the Netherlands. Results were validated in an expert meeting, which was attended by policy makers, public health experts, health economists and HTA-experts, and discussed at several international conferences and symposia. The study resulted in four methods, including the opportunity cost method (A) and valuation using available unit prices (B), self-constructed unit prices (C) or hourly labor costs (D). The methods developed can be used internationally and are valuable for the broad international field of HTA.

  4. A review of the liquid metal diffusion data obtained from the space shuttle endeavour mission STS-47 and the space shuttle columbia mission STS-52

    NASA Astrophysics Data System (ADS)

    Shirkhanzadeh, Morteza

    Accurate data on liquid-phase solute diffusion coefficients are required to validate condensed-matter physics theories. However, the data accuracy required to discriminate between competing theoretical models is 1 to 2 percent (1). Smith and Scott (2) have recently used the measured values of diffusion coefficients for Pb-Au in microgravity to validate the theoretical values of the diffusion coefficients derived from molecular dynamics simulations and several Enskog hard-sphere models. The microgravity data used were obtained from the liquid diffusion experiments conducted on board the Space Shuttle Endeavour (mission STS-47) and the Space Shuttle Columbia (mission STS-52). Based on the analysis of the results, it was claimed that the measured values of diffusion coefficients were consistent with the theoretical results and that the data fit a linear relationship with a slope slightly greater than predicted by the molecular dynamics simulations. These conclusions, however, contradict the claims made in previous publications (3-5), where it was reported that the microgravity data obtained from the shuttle experiments fit the fluctuation theory (D proportional to T2). A thorough analysis of data will be presented to demonstrate that the widely reported microgravity results obtained from shuttle experiments are neither reliable nor sufficiently accurate to discriminate between competing theoretical models. References: 1. J.P. Garandet, G. Mathiak, V. Botton, P. Lehmann and A. Griesche, Int. J. Thermophysics, 25, 249 (2004). 2. P.J. Scott and R.W. Smith, J. Appl. Physics 104, 043706 (2008). 3. R.W. Smith, Microgravity Sci. Technol. XI (2) 78-84 (1998). 4. Smith et al., Ann. N.Y. Acad. Sci. 974:56-67 (2002) (retracted). 5. R.A. Herring et al., J. Jpn. Soc. Microgravity Appl., Vol. 16, 234-244 (1999).

  5. Haematological and biochemical values in horses naturally infected with Strongylus vulgaris.

    PubMed

    Bailey, M; Kent, J; Martin, S C; Lloyd, S; Soulsby, E J

    1984-08-18

    The concentrations of serum proteins (β1, β2, γ, α1, α2 globulins and albumin) and absolute numbers of eosinophils, neutrophils and lymphocytes were examined in 64 naturally infected horses and ponies in which the number of larvae of Strongylus vulgaris in the cranial mesenteric artery and the severity of the lesion of verminous arteritis could be determined. The horses were grouped according to the number of larvae found and the severity of the arteritis. The results demonstrated that, although some significant deviation from a random distribution occurred in certain of the values (χ2 test), there was considerable individual variation in the values obtained for individual animals within groups and overlap of the range of values between groups. Also, the number of larvae present in the artery did not necessarily accurately reflect the severity of the arterial lesion. Thus, the parameters examined could not be used reliably to estimate the intensity of infection with S vulgaris in an individual animal.

  6. Shrinkage regression-based methods for microarray missing value imputation.

    PubMed

    Wang, Hsiuying; Chiu, Chia-Chun; Wu, Yi-Ching; Wu, Wei-Sheng

    2013-01-01

    Missing values commonly occur in microarray data, which usually contain more than 5% missing values with up to 90% of genes affected. Inaccurate missing value estimation reduces the power of downstream microarray data analyses. Many types of methods have been developed to estimate missing values. Among them, the regression-based methods are very popular and have been shown to perform better than the other types of methods in many testing microarray datasets. To further improve the performance of the regression-based methods, we propose shrinkage regression-based methods. Our methods take advantage of the correlation structure in the microarray data and select similar genes for the target gene by Pearson correlation coefficients. In addition, our methods incorporate the least-squares principle, utilize a shrinkage estimation approach to adjust the coefficients of the regression model, and then use the new coefficients to estimate missing values. Simulation results show that the proposed methods provide more accurate missing value estimation in six testing microarray datasets than the existing regression-based methods do. Imputation of missing values is a very important aspect of microarray data analyses because most of the downstream analyses require a complete dataset. Therefore, exploring accurate and efficient methods for estimating missing values has become an essential issue. Since our proposed shrinkage regression-based methods can provide accurate missing value estimation, they are competitive alternatives to the existing regression-based methods.
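
    A minimal sketch of the approach: pick the genes most correlated with the target, fit a least-squares model on the observed entries, shrink the coefficients, and predict the missing entry. The proportional (ridge-like) shrinkage here is an illustrative stand-in for the paper's estimator, and the data are synthetic:

```python
import numpy as np

def impute_missing(data, target_row, missing_col, k=5, shrink=0.1):
    """Estimate data[target_row, missing_col] from the k most correlated
    other rows, via least squares with proportional coefficient shrinkage
    (illustrative stand-in for the paper's shrinkage estimator)."""
    obs = [c for c in range(data.shape[1]) if c != missing_col]
    target = data[target_row, obs]
    others = np.delete(data, target_row, axis=0)
    corr = np.array([abs(np.corrcoef(target, row[obs])[0, 1]) for row in others])
    top = np.argsort(corr)[::-1][:k]                 # most similar genes
    A = np.column_stack([others[top][:, obs].T, np.ones(len(obs))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    coef[:-1] *= 1.0 - shrink                        # shrink slopes toward zero
    return float(np.append(others[top][:, missing_col], 1.0) @ coef)

# Synthetic demo: six highly correlated "genes" across 12 arrays
rng = np.random.default_rng(7)
base = rng.standard_normal(12)
data = np.vstack([base + 0.01 * rng.standard_normal(12) for _ in range(6)])
est_ls = impute_missing(data, 0, 3, k=3, shrink=0.0)
est_shrunk = impute_missing(data, 0, 3, k=3, shrink=0.05)
```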

  7. THE EVALUATION OF METHODS FOR CREATING DEFENSIBLE, REPEATABLE, OBJECTIVE AND ACCURATE TOLERANCE VALUES

    EPA Science Inventory

    In the field of bioassessment, tolerance has traditionally referred to the degree to which organisms can withstand environmental degradation. This concept has been around for many years and its use is widespread. In numerous cases, tolerance values (TVs) have been assigned to i...

  8. Prediction of anaerobic power values from an abbreviated WAnT protocol.

    PubMed

    Stickley, Christopher D; Hetzler, Ronald K; Kimura, Iris F

    2008-05-01

    The traditional 30-second Wingate anaerobic test (WAnT) is a widely used anaerobic power assessment protocol. An abbreviated protocol has been shown to decrease the mild to severe physical discomfort often associated with the WAnT. Therefore, the purpose of this study was to determine whether a 20-second WAnT protocol could be used to accurately predict power values of a standard 30-second WAnT. In 96 college females, anaerobic power variables were assessed using a standard 30-second WAnT protocol. Maximum power values as well as instantaneous power at 10, 15, and 20 seconds were recorded. Based on these results, stepwise regression analysis was performed to determine the accuracy with which mean power, minimum power, 30-second power, and percentage of fatigue for a standard 30-second WAnT could be predicted from values obtained during the first 20 seconds of testing. Mean power values showed the highest level of predictability (R2 = 0.99) from the 20-second values. Minimum power, 30-second power, and percentage of fatigue also showed high levels of predictability (R2 = 0.91, 0.84, and 0.84, respectively) using only values obtained during the first 20 seconds of the protocol. An abbreviated (20-second) WAnT protocol appears to effectively predict results of a standard 30-second WAnT in college-age females, allowing for comparison of data to published norms. A shortened test may allow for a decrease in unwanted side effects associated with the traditional WAnT protocol.
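
    The prediction step amounts to an ordinary least-squares regression of the 30-second outcome on values available at 20 seconds. The data and coefficients below are synthetic, used only to show the mechanics; the paper's fitted equations are not reproduced:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 96
# Synthetic instantaneous power at 10, 15 and 20 s (watts), loosely mimicking
# a decaying Wingate power profile; illustrative values, not study data.
p10 = 600 + 50 * rng.standard_normal(n)
p15 = 0.9 * p10 + 10 * rng.standard_normal(n)
p20 = 0.85 * p15 + 10 * rng.standard_normal(n)
mean30 = 0.5 * p10 + 0.3 * p15 + 0.2 * p20 + 5 * rng.standard_normal(n)

# Fit 30-s mean power from the 20-s values by ordinary least squares
X = np.column_stack([p10, p15, p20, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, mean30, rcond=None)
pred = X @ beta
ss_res = np.sum((mean30 - pred) ** 2)
ss_tot = np.sum((mean30 - mean30.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot   # coefficient of determination
```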

  9. The importance of accurate interaction potentials in the melting of argon nanoclusters

    NASA Astrophysics Data System (ADS)

    Pahl, E.; Calvo, F.; Schwerdtfeger, P.

    The melting temperatures of argon clusters ArN (N = 13, 55, 147, 309, 561, and 923) and of bulk argon have been obtained from exchange Monte Carlo simulations and are compared using different two-body interaction potentials, namely the standard Lennard-Jones (LJ), Aziz and extended Lennard-Jones (ELJ) potentials. The latter potential has many advantages: while maintaining the computational efficiency of the commonly used LJ potential, it is as accurate as the Aziz potential, and the computer time scales more favorably with increasing cluster size. By applying the ELJ form and extrapolating the cluster data to the infinite system, we are able to extract a melting point of argon already in good agreement with experimental measurements. By considering the additional Axilrod-Teller three-body contribution as well, we calculate a melting temperature of Tmelt(ELJ) = 84.7 K, compared to the experimental value of Tmelt(exp) = 83.85 K, whereas the LJ potential underestimates the melting point by more than 7 K. Thus melting temperatures within 1 K accuracy are now feasible.
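
    For reference, the standard 12-6 LJ pair potential and the generic shape of an extended-LJ expansion (a sum of inverse powers) can be sketched as follows. The ε and σ values are common textbook argon parameters, and the ELJ coefficients shown are placeholders, not the fitted values from this work:

```python
import numpy as np

def lennard_jones(r, epsilon, sigma):
    """Standard 12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def extended_lj(r, coeffs):
    """Extended LJ form: U(r) = sum_n c_n * r**(-n). The coefficient
    dictionary is a placeholder for a fitted expansion."""
    return sum(c * r ** (-n) for n, c in coeffs.items())

eps, sig = 0.0104, 3.40           # approx. argon LJ parameters (eV, Angstrom)
r_min = 2.0 ** (1.0 / 6.0) * sig  # location of the LJ minimum, depth -eps
```

    Choosing c12 = 4εσ^12 and c6 = -4εσ^6 recovers the plain LJ potential; additional inverse-power terms refine the well shape while keeping the pairwise-sum cost that makes the ELJ form scale favorably with cluster size.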

  10. Flexible, fast and accurate sequence alignment profiling on GPGPU with PaSWAS.

    PubMed

    Warris, Sven; Yalcin, Feyruz; Jackson, Katherine J L; Nap, Jan Peter

    2015-01-01

    To obtain large-scale sequence alignments in a fast and flexible way is an important step in the analyses of next generation sequencing data. Applications based on the Smith-Waterman (SW) algorithm are often either not fast enough, limited to dedicated tasks or not sufficiently accurate due to statistical issues. Current SW implementations that run on graphics hardware do not report the alignment details necessary for further analysis. With the Parallel SW Alignment Software (PaSWAS) it is possible (a) to have easy access to the computational power of NVIDIA-based general purpose graphics processing units (GPGPUs) to perform high-speed sequence alignments, and (b) to retrieve relevant information such as score, number of gaps and mismatches. The software reports multiple hits per alignment. The added value of the new SW implementation is demonstrated with two test cases: (1) tag recovery in next generation sequence data and (2) isotype assignment within an immunoglobulin 454 sequence data set. Both cases show the usability and versatility of the new parallel Smith-Waterman implementation.
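
    For orientation, the scoring recurrence at the heart of any SW implementation, shown here as a plain CPU reference with a linear gap penalty; the paper's contribution is the parallel GPGPU version and its hit reporting, which this sketch does not attempt:

```python
def smith_waterman_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local alignment score via the Smith-Waterman recurrence
    (linear gap penalty; simple CPU reference, not the GPU version)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            # Local alignment: scores are clamped at zero
            H[i][j] = max(0, H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```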

  11. Accurate electric multipole moment, static polarizability and hyperpolarizability derivatives for N2

    NASA Astrophysics Data System (ADS)

    Maroulis, George

    2003-02-01

    We report accurate values of the electric moments, static polarizabilities, hyperpolarizabilities and their respective derivatives for N2. Our values have been extracted from finite-field Møller-Plesset perturbation theory and coupled cluster calculations performed with carefully designed basis sets. A large [15s12p9d7f] basis set consisting of 290 CGTF is expected to provide reference self-consistent-field values of near-Hartree-Fock quality for all properties. The Hartree-Fock limit for the mean hyperpolarizability is estimated at γ¯ = 715 ± 4 e^4 a_0^4 E_h^-3 at the experimental bond length Re = 2.07432 a_0. Accurate estimates of the electron correlation effects were obtained with a [10s7p6d4f] basis set. Our best values are Θ = -1.1258 e a_0^2 for the quadrupole and Φ = -6.75 e a_0^4 for the hexadecapole moment, α¯ = 11.7709 and Δα = 4.6074 e^2 a_0^2 E_h^-1 for the mean and the anisotropy of the dipole polarizability, C¯ = 41.63 e^2 a_0^4 E_h^-1 for the mean quadrupole polarizability and γ¯ = 927 e^4 a_0^4 E_h^-3 for the dipole hyperpolarizability. The latter value is quite close to Shelton's experimental estimate of 917 ± 5 e^4 a_0^4 E_h^-3 [D. P. Shelton, Phys. Rev. A 42, 2578 (1990)]. The R dependence of all properties has been calculated with a [7s5p4d2f] basis set. At the CCSD(T) level of theory the dipole polarizability varies around Re as α¯(R)/e^2 a_0^2 E_h^-1 = 11.8483 + 6.1758(R - Re) + 0.9191(R - Re)^2 - 0.8212(R - Re)^3 - 0.0006(R - Re)^4 and Δα(R)/e^2 a_0^2 E_h^-1 = 4.6032 + 7.0301(R - Re) + 1.9340(R - Re)^2 - 0.5708(R - Re)^3 + 0.1949(R - Re)^4. For the Cartesian components and the mean of γαβγδ, (dγzzzz/dR)e = 1398, (dγxxxx/dR)e = 867, (dγxxzz/dR)e = 317, and (dγ¯/dR)e = 994 e^4 a_0^3 E_h^-3. For the quadrupole polarizability Cαβ,γδ, we report (dCzz,zz/dR)e = 19.20, (dCxz,xz/dR)e = 16.55, (dCxx,xx/dR)e = 10.20, and (dC¯/dR)e = 23.31 e^2 a_0^3 E_h^-1. At the MP2 level of theory, for the components of the dipole-octopole polarizability (Eα,βγδ) and the mean dipole-dipole-octopole hyperpolarizability B¯ we have obtained (dEz,zzz/dR)e = 36.71 and (dEx,xxx/dR)e = -12.94 e^2 a_0^3 E_h^-1, and

  12. A discrete choice experiment to obtain a tariff for valuing informal care situations measured with the CarerQol instrument.

    PubMed

    Hoefman, Renske J; van Exel, Job; Rose, John M; van de Wetering, E J; Brouwer, Werner B F

    2014-01-01

    Economic evaluations adopting a societal perspective need to include informal care whenever relevant. However, in practice, informal care is often neglected, because there are few validated instruments to measure and value informal care for inclusion in economic evaluations. The CarerQol, which is such an instrument, measures the impact of informal care on 7 important burden dimensions (CarerQol-7D) and values this in terms of general quality of life (CarerQol-VAS). The objective of the study was to calculate utility scores based on relative utility weights for the CarerQol-7D. These tariffs will facilitate inclusion of informal care in economic evaluations. The CarerQol-7D tariff was derived with a discrete choice experiment conducted as an Internet survey among the general adult population in the Netherlands (N = 992). The choice set contained 2 unlabeled alternatives described in terms of the 7 CarerQol-7D dimensions (level range: "no," "some," and "a lot"). An efficient experimental design with priors obtained from a pilot study (N = 104) was used. Data were analyzed with a panel mixed multinomial parameter model including main and interaction effects of the attributes. The utility attached to informal care situations was significantly higher when this situation was more attractive in terms of fewer problems and more fulfillment or support. The interaction term between the CarerQol-7D dimensions physical health and mental health problems also significantly explained this utility. The tariff was constructed by adding up the relative utility weights per category of all CarerQol-7D dimensions and the interaction term. We obtained a tariff providing standard utility scores for caring situations described with the CarerQol-7D. This facilitates the inclusion of informal care in economic evaluations.
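
    Once the relative utility weights per dimension level are estimated from the choice model, the tariff for a described caring situation is just the sum of the applicable weights plus the interaction term. The weights below are invented placeholders to show the mechanics, NOT the published CarerQol-7D tariff:

```python
# Illustrative placeholder weights per CarerQol-7D dimension level;
# NOT the published tariff values.
WEIGHTS = {
    "fulfilment":       {"no": 0.0,  "some": 5.0, "a lot": 10.0},
    "relational":       {"no": 8.0,  "some": 4.0, "a lot": 0.0},
    "mental_health":    {"no": 9.0,  "some": 5.0, "a lot": 0.0},
    "daily_activities": {"no": 7.0,  "some": 3.0, "a lot": 0.0},
    "financial":        {"no": 6.0,  "some": 3.0, "a lot": 0.0},
    "support":          {"no": 0.0,  "some": 4.0, "a lot": 8.0},
    "physical_health":  {"no": 10.0, "some": 5.0, "a lot": 0.0},
}

def tariff(levels, interaction=0.0):
    """Sum the per-dimension weights for a caring situation; `interaction`
    stands in for the physical-by-mental-health interaction term."""
    return sum(WEIGHTS[d][lvl] for d, lvl in levels.items()) + interaction

best = tariff({d: max(w, key=w.get) for d, w in WEIGHTS.items()})
worst = tariff({d: min(w, key=w.get) for d, w in WEIGHTS.items()})
```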

  13. Agreement between arterial partial pressure of carbon dioxide and saturation of hemoglobin with oxygen values obtained by direct arterial blood measurements versus noninvasive methods in conscious healthy and ill foals.

    PubMed

    Wong, David M; Alcott, Cody J; Wang, Chong; Bornkamp, Jennifer L; Young, Jessica L; Sponseller, Brett A

    2011-11-15

    To determine agreement between indirect measurements of end-tidal partial pressure of carbon dioxide (PetCO(2)) and saturation of hemoglobin with oxygen as measured by pulse oximetry (SpO(2)) with direct measurements of PaCO(2) and calculated saturation of hemoglobin with oxygen in arterial blood (SaO(2)) in conscious healthy and ill foals. Validation study. 10 healthy and 21 ill neonatal foals. Arterial blood gas analysis was performed on healthy and ill foals examined at a veterinary teaching hospital to determine direct measurements of PaCO(2) and PaO(2) along with SaO(2). Concurrently, PetCO(2) was measured with a capnograph inserted into a naris, and SpO(2) was measured with a reflectance probe placed at the base of the tail. Paired values were compared by use of Pearson correlation coefficients, and level of agreement was assessed with the Bland-Altman method. Mean ± SD difference between PaCO(2) and PetCO(2) was 0.1 ± 5.0 mm Hg. There was significant strong correlation (r = 0.779) and good agreement between PaCO(2) and PetCO(2). Mean ± SD difference between SaO(2) and SpO(2) was 2.5 ± 3.5%. There was significant moderate correlation (r = 0.499) and acceptable agreement between SaO(2) and SpO(2). Both PetCO(2) obtained by use of nasal capnography and SpO(2) obtained with a reflectance probe are clinically applicable and accurate indirect methods of estimating and monitoring PaCO(2) and SaO(2) in neonatal foals. Indirect methods should not replace periodic direct measurement of corresponding parameters.
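
    The agreement assessment reduces to the Bland-Altman bias and 95% limits of agreement; the paired readings below are illustrative numbers, not the study's foal data:

```python
import numpy as np

def bland_altman(x, y):
    """Bias (mean difference) and 95% limits of agreement between two
    paired measurement methods (Bland-Altman method)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Illustrative paired readings (mm Hg)
paco2  = np.array([45.0, 50.0, 48.0, 52.0, 47.0])
petco2 = np.array([44.0, 51.0, 47.0, 53.0, 46.0])
bias, (lo, hi) = bland_altman(paco2, petco2)
```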

  14. Accurate RNA 5-methylcytosine site prediction based on heuristic physical-chemical properties reduction and classifier ensemble.

    PubMed

    Zhang, Ming; Xu, Yan; Li, Lei; Liu, Zi; Yang, Xibei; Yu, Dong-Jun

    2018-06-01

    RNA 5-methylcytosine (m5C) is an important post-transcriptional modification that plays an indispensable role in biological processes. The accurate identification of m5C sites from primary RNA sequences is especially useful for deeply understanding the mechanisms and functions of m5C. Due to the difficulty and expensive costs of identifying m5C sites with wet-lab techniques, developing fast and accurate machine-learning-based prediction methods is urgently needed. In this study, we proposed a new m5C site predictor, called M5C-HPCR, by introducing a novel heuristic nucleotide physicochemical property reduction (HPCR) algorithm and classifier ensemble. HPCR extracts multiple reducts of physical-chemical properties for encoding discriminative features, while the classifier ensemble is applied to integrate multiple base predictors, each of which is trained based on a separate reduct of the physical-chemical properties obtained from HPCR. Rigorous jackknife tests on two benchmark datasets demonstrate that M5C-HPCR outperforms state-of-the-art m5C site predictors, with the highest values of MCC (0.859) and AUC (0.962). We also implemented the webserver of M5C-HPCR, which is freely available at http://cslab.just.edu.cn:8080/M5C-HPCR/. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Accurate Measurements of the Dielectric Constant of Seawater at L Band

    NASA Technical Reports Server (NTRS)

    Lang, Roger; Zhou, Yiwen; Utku, Cuneyt; Le Vine, David

    2016-01-01

    This paper describes measurements of the dielectric constant of seawater at a frequency of 1.413 GHz, the center of the protected band (i.e., passive use only) used in the measurement of sea surface salinity from space. The objective of the measurements is to accurately determine the complex dielectric constant of seawater as a function of salinity and temperature. A resonant cylindrical microwave cavity in transmission mode has been employed to make the measurements. The measurements are made using standard seawater at salinities of 30, 33, 35, and 38 practical salinity units over a range of temperatures from 0 °C to 35 °C in 5 °C intervals. Repeated measurements have been made at each temperature and salinity. Mean values and standard deviations are then computed. The total error budget indicates that the real and imaginary parts of the dielectric constant have a combined standard uncertainty of about 0.3 over the range of salinities and temperatures considered. The measurements are compared with the dielectric constants obtained from the model functions of Klein and Swift and those of Meissner and Wentz. The largest differences occur at the low and high ends of the temperature range.

  16. Accurate Critical Stress Intensity Factor Griffith Crack Theory Measurements by Numerical Techniques

    PubMed Central

    Petersen, Richard C.

    2014-01-01

    Critical stress intensity factor (KIc) has been an approximation for fracture toughness using only load-cell measurements. However, artificial man-made cracks several orders of magnitude longer and wider than natural flaws have required a correction factor term (Y) that can be up to about 3 times the recorded experimental value [1-3]. In fact, over 30 years ago a National Academy of Sciences advisory board stated that empirical KIc testing was of serious concern and requested that an accurate bulk fracture toughness method be found [4]. Now that fracture toughness can be calculated accurately by numerical integration of the load/deflection curve as resilience, work of fracture (WOF), and strain energy release (SIc) [5, 6], KIc appears to be unnecessary. However, the large body of previous KIc experimental test results found in the literature offers the opportunity for continued meta-analysis with other more practical and accurate fracture toughness results using energy methods and numerical integration. Therefore, KIc is derived here from classical Griffith crack theory [6] to include SIc as a more accurate term for the strain energy release rate (𝒢Ic), along with crack surface energy (γ), crack length (a), modulus (E), applied stress (σ), Y, the crack-tip plastic zone defect region (rp), and yield strength (σys), all of which can be determined from load and deflection data. Polymer matrix discontinuous quartz fiber-reinforced composites, chosen to accentuate toughness differences, were prepared for flexural mechanical testing, comprising 3 mm fibers at volume percentages from 0-54.0 vol% and, at 28.2 vol%, fiber lengths from 0.0-6.0 mm. Results provided a new correction factor and regression analyses between several numerical-integration fracture toughness test methods to support KIc results. Further, accurate bulk KIc experimental values are compared with empirical test results found in the literature. Also, several fracture toughness mechanisms
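    For orientation, the classical Griffith-type relations mentioned above, K_Ic = Y·σ·√(πa) and the plane-stress conversion G_Ic = K_Ic²/E, can be evaluated directly. The numbers below are purely illustrative, not the paper's measurements:

```python
import math

def k_ic(sigma, a, Y=1.0):
    """Critical stress intensity factor K_Ic = Y * sigma * sqrt(pi * a).
    With sigma in MPa and crack length a in m, K_Ic is in MPa*sqrt(m)."""
    return Y * sigma * math.sqrt(math.pi * a)

def g_ic(k, E):
    """Plane-stress strain energy release rate G_Ic = K_Ic^2 / E.
    With K in MPa*sqrt(m) and E in MPa, the result is in MPa*m
    (multiply by 1e6 for J/m^2)."""
    return k ** 2 / E

# Illustrative values: applied stress, crack length, correction factor, modulus
sigma, a, Y, E = 80.0, 2e-3, 1.12, 15e3   # MPa, m, dimensionless, MPa
k = k_ic(sigma, a, Y)
print(f"K_Ic = {k:.3f} MPa*sqrt(m), G_Ic = {g_ic(k, E) * 1e6:.0f} J/m^2")
```

    The paper's point is that the energy terms (resilience, WOF, SIc) come straight from numerical integration of load/deflection data, so K_Ic becomes a derived quantity rather than the primary measurement.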

  17. Hartree-Fock theory of the inhomogeneous electron gas at a jellium metal surface: Rigorous upper bounds to the surface energy and accurate work functions

    NASA Astrophysics Data System (ADS)

    Sahni, V.; Ma, C. Q.

    1980-12-01

    The inhomogeneous electron gas at a jellium metal surface is studied in the Hartree-Fock approximation by Kohn-Sham density functional theory. Rigorous upper bounds to the surface energy are derived by application of the Rayleigh-Ritz variational principle for the energy, the surface kinetic, electrostatic, and nonlocal exchange energy functionals being determined exactly for the accurate linear-potential model electronic wave functions. The densities obtained by the energy minimization constraint are then employed to determine work-function results via the variationally accurate "displaced-profile change-in-self-consistent-field" expression. The theoretical basis of this non-self-consistent procedure, and its demonstrated accuracy for the fully correlated system (as treated within the local-density approximation for exchange and correlation), leads us to conclude that these results for the surface energies and work functions are essentially exact. Work-function values are also determined by the Koopmans'-theorem expression, both for these densities and for those obtained by satisfying the constraint set on the electrostatic potential by the Budd-Vannimenus theorem. The use of the Hartree-Fock results in the accurate estimation of correlation-effect contributions to these surface properties of the nonuniform electron gas is also indicated. In addition, the original work and approximations made by Bardeen in his attempt at a solution of the Hartree-Fock problem are briefly reviewed in order to contrast them with the present work.

  18. An easy way to measure accurately the direct magnetoelectric voltage coefficient of thin film devices

    NASA Astrophysics Data System (ADS)

    Poullain, Gilles; More-Chevalier, Joris; Cibert, Christophe; Bouregba, Rachid

    2017-01-01

    TbxDy1-xFe2/Pt/Pb(Zrx, Ti1-x)O3 thin films were grown on a Pt/TiO2/SiO2/Si substrate by multi-target sputtering. The magnetoelectric voltage coefficient αΗΜΕ was determined at room temperature using a lock-in amplifier. By adding a capacitor with the same capacitance as the device under test in series in the circuit, we demonstrated that the magnetoelectric device behaves as a voltage source. Furthermore, a simple way to subtract the stray voltage arising from the flow of eddy currents in the measurement set-up is proposed. This allows easy and accurate determination of the true magnetoelectric voltage coefficient. A large αΗΜΕ of 8.3 V/(cm·Oe) was thus obtained for a Terfenol-D/Pt/PZT thin film device, without a DC magnetic field or mechanical resonance.

  19. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    NASA Technical Reports Server (NTRS)

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. This new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against this predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
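    The predicted spectral pattern from an unbalanced Michelson interferometer is a simple cosine in 1/λ set by the optical path difference (OPD). A sketch of that model, with a hypothetical factory pixel-to-wavelength map and an assumed 20 µm path difference:

```python
import numpy as np

def interferogram(wavelengths_nm, opd_nm):
    """Predicted spectral transmission of an unbalanced Michelson
    interferometer: I(lambda) = 0.5 * (1 + cos(2*pi*OPD/lambda))."""
    lam = np.asarray(wavelengths_nm, dtype=float)
    return 0.5 * (1.0 + np.cos(2.0 * np.pi * opd_nm / lam))

# Assumed pixel-to-wavelength map from a factory calibration (hypothetical)
pix = np.arange(2048)
lam_factory = 400.0 + 0.25 * pix        # nm
opd = 20_000.0                          # 20 um path difference (assumed)

model = interferogram(lam_factory, opd)
# Comparing `model` against the measured spectrum pixel by pixel exposes
# small wavelength-assignment errors; minimizing the mismatch over a
# per-pixel wavelength shift refines the calibration.
```

    The fringe spacing shrinks toward short wavelengths, which is what makes the pattern a sensitive wavelength ruler.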

  20. Ultra-accurate collaborative information filtering via directed user similarity

    NASA Astrophysics Data System (ADS)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge in collaborative filtering (CF) information filtering is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than those in the opposite direction, the large-degree users' selections are recommended extensively by traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to suppress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm to specifically address the challenge of accuracy and diversity in CF algorithms. Numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, diversity, precision, and recall are also greatly enhanced. Without relying on any context-specific information, tuning the similarity direction of CF algorithms yields accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
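    The central idea, that similarity from user i to user j need not equal the similarity from j to i, can be sketched with a degree-normalized overlap measure. The exact HDCF similarity and its second-order weighting differ from this, so the snippet is illustrative only:

```python
import numpy as np

def directed_similarity(R):
    """Directed user similarity on a binary user-item matrix R:
    s[i, j] = |items(i) & items(j)| / degree(i).
    Normalizing by the *source* user's degree makes s asymmetric,
    which is the directionality the abstract refers to."""
    R = np.asarray(R, dtype=float)
    overlap = R @ R.T                    # co-selected item counts
    deg = R.sum(axis=1, keepdims=True)   # each user's degree
    with np.errstate(divide="ignore", invalid="ignore"):
        S = np.where(deg > 0, overlap / deg, 0.0)
    np.fill_diagonal(S, 0.0)
    return S

# Toy matrix: 3 users x 4 items
R = [[1, 1, 1, 0],
     [1, 0, 0, 0],
     [1, 1, 0, 1]]
S = directed_similarity(R)
print(S)  # S[1,0] differs from S[0,1]: direction matters
```

    Here the small-degree user 1 is maximally similar to user 0, but not vice versa, which is exactly the asymmetry the algorithm exploits to depress mainstream dominance.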

  1. Analysis shear wave velocity structure obtained from surface wave methods in Bornova, Izmir

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pamuk, Eren, E-mail: eren.pamuk@deu.edu.tr; Akgün, Mustafa, E-mail: mustafa.akgun@deu.edu.tr; Özdağ, Özkan Cevdet, E-mail: cevdet.ozdag@deu.edu.tr

    2016-04-18

    Properties of the soil down to bedrock must be described accurately and reliably to reduce earthquake damage, because seismic waves change their amplitude and frequency content owing to the acoustic impedance difference between soil and bedrock. First, the shear wave velocity and thickness of the layers above bedrock are needed to detect this change. Shear wave velocity can be obtained by inversion of Rayleigh wave dispersion curves obtained from surface wave methods (MASW, the Multichannel Analysis of Surface Waves; ReMi, Refraction Microtremor; SPAC, Spatial Autocorrelation). While investigation depth is limited in active-source studies, passive-source methods reach depths that active-source methods cannot. The ReMi method is used to determine layer thickness and velocity down to 100 m using seismic refraction measurement systems, while SPAC, which is easily deployed under the constraints that restrict seismic surveys in cities, can reach the desired depth depending on the array radius. Vs profiles, which are required to calculate deformations under static and dynamic loads, can be obtained with high resolution by combining Rayleigh wave dispersion curves from active and passive source methods. In this study, surface wave data were collected using MASW, ReMi, and SPAC measurements in the Bornova region of İzmir. Dispersion curves obtained from the surface wave methods were combined over a wide frequency band, and Vs-depth profiles were obtained by inversion. The reliability of the resulting soil profiles was assessed by comparing the theoretical transfer function obtained from the soil parameters with the observed soil transfer function from the Nakamura technique and by examining the fit between these functions. Vs values range between 200-830 m/s, and the engineering bedrock (Vs>760 m/s) depth is approximately 150 m.

  2. Protocol to determine accurate absorption coefficients for iron containing transferrins

    PubMed Central

    James, Nicholas G.; Mason, Anne B.

    2008-01-01

    An accurate protein concentration is an essential component of most biochemical experiments. The simplest method to determine a protein concentration is to measure the A280, use an absorption coefficient (ε), and apply the Beer-Lambert law. For some metalloproteins (including all transferrin family members) difficulties arise because metal binding contributes to the A280 in a non-linear manner. The Edelhoch method is based on the assumption that the ε of a denatured protein in 6 M guanidine-HCl can be calculated from the number of tryptophan, tyrosine, and cystine residues. We extend this method to derive ε values for both apo- and iron-bound transferrins. The absorbance of an identical amount of iron-containing protein is measured in: 1) 6 M guanidine-HCl (denatured, no iron); 2) pH 7.4 buffer (non-denatured, with iron); and 3) pH 5.6 (or lower) buffer with a chelator (non-denatured, without iron). Since the iron-free apo-protein has an identical A280 under non-denaturing conditions, the difference between the reading at pH 7.4 and that at the lower pH directly reports the contribution of the iron. The method is fast and consumes ~1 mg of sample. The ability to determine accurate ε values for transferrin mutants that bind iron with a wide range of affinities has proven very useful; furthermore, a similar approach could easily be followed to determine ε values for other metalloproteins in which metal binding contributes to the A280. PMID:18471984
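    The Edelhoch-style calculation can be sketched in a few lines, using the commonly cited per-residue coefficients (Trp 5500, Tyr 1490, cystine 125 M⁻¹cm⁻¹ at 280 nm). The residue counts and absorbance readings below are hypothetical, not values from the paper:

```python
def epsilon_denatured(n_trp, n_tyr, n_cystine):
    """Molar absorption coefficient (M^-1 cm^-1) at 280 nm for a protein
    denatured in 6 M guanidine-HCl, using commonly cited Edelhoch-type
    residue coefficients (Trp 5500, Tyr 1490, cystine 125)."""
    return 5500 * n_trp + 1490 * n_tyr + 125 * n_cystine

# Hypothetical transferrin-like composition (illustrative only)
eps_denat = epsilon_denatured(n_trp=8, n_tyr=21, n_cystine=19)

# Transfer to the native protein: with A280 readings of the *same* sample
# measured denatured and native, eps_native = eps_denat * (A_native / A_denat).
A_denat, A_native = 0.72, 0.78          # assumed absorbance readings
eps_native = eps_denat * (A_native / A_denat)
print(f"eps(denatured) = {eps_denat}, eps(native) = {eps_native:.0f}")
```

    The iron contribution then falls out as the difference between the native readings at pH 7.4 (iron bound) and at low pH with chelator (iron removed), as the abstract describes.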

  3. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    NASA Technical Reports Server (NTRS)

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs in contiguous leads. This RAZ score had an area under the receiver operating characteristic (ROC) curve of 0.91 and was 88% sensitive, 82% specific, and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC curve = 0.77 for both) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under the ROC curve = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥40 points and ≥445 ms, respectively. In conclusion, 12-lead HF QRS ECG employing

  4. Discrete sensors distribution for accurate plantar pressure analyses.

    PubMed

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor layout (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any other activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  5. Accurate modelling of unsteady flows in collapsible tubes.

    PubMed

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

    The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers understand physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended to flow in collapsible tubes such as veins. The main difference from cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method, equipped with an approximate Roe's Riemann solver and a slope-limiting procedure, allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference or finite volume methods. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and a pathological subject. We compare our results with experimental simulations and discuss the sensitivity of our model to its parameters.

  6. Measuring the value of healthcare business assets.

    PubMed

    Evans, C J

    2000-04-01

    Healthcare organizations obtain valuations of business assets for many reasons, including to support decisions regarding potential mergers, sale of business components, or financing; for tax assessments; and for defense against lawsuits. If compliance with regulations may be an issue, such as when a not-for-profit organization is involved in a transaction, healthcare organizations should seek an independent appraisal to ensure that applicable legal standards are met. Whether or not regulatory issues are involved, however, an accurate and useful valuation of business assets depends on many factors. Financial managers must understand the purpose and function of the valuation, the choice of appropriate valuation techniques, proper assessment of intangible value, use of realistic growth rates, appropriate emphasis on key focus areas of the valuation (e.g., risk and future income streams), and an accounting of physician compensation.

  7. Accurate high-speed liquid handling of very small biological samples.

    PubMed

    Schober, A; Günther, R; Schwienhorst, A; Döring, M; Lindemann, B F

    1993-08-01

    Molecular biology techniques require the accurate pipetting of buffers and solutions with volumes in the microliter range. Traditionally, hand-held pipetting devices are used to fulfill these requirements, but many laboratories have also introduced robotic workstations for the handling of liquids. Piston-operated pumps are commonly used in both manually and automatically operated pipettors. These devices cannot meet the demands for extremely accurate pipetting of very small volumes at the high speed that would be necessary for certain applications (e.g., in high-throughput sequencing projects). In this paper we describe a technique for the accurate microdispensation of biochemically relevant solutions and suspensions with the aid of a piezoelectric transducer. It is suitable for liquids with viscosities between 0.5 and 500 millipascal-seconds. The obtainable drop sizes range from 5 picoliters to a few nanoliters, with up to 10,000 drops per second. Liquids can be dispensed in single or accumulated drops to handle a wide volume range. The system proved to be well suited to the handling of biological samples. It did not show any detectable negative impact on the biological function of dissolved or suspended molecules or particles.

  8. Accurate measurements of the thermal diffusivity of thin filaments by lock-in thermography

    NASA Astrophysics Data System (ADS)

    Salazar, Agustín; Mendioroz, Arantza; Fuente, Raquel; Celorrio, Ricardo

    2010-02-01

    In lock-in (modulated) thermography, the lateral thermal diffusivity can be obtained from the slope of the linear relation between the phase of the surface temperature and the distance to the heating spot. However, this slope is greatly affected by heat losses, leading to an overestimation of the thermal diffusivity, especially for thin samples of poorly conducting materials. In this paper, we present a complete theoretical model to calculate the surface temperature of filaments heated by a focused and modulated laser beam. All heat losses have been included: conduction to the gas, convection, and radiation. Monofilaments and coated wires have been studied. Conduction to the gas has been identified as the most disturbing effect preventing the direct use of the slope method to measure the thermal diffusivity. As a result, by keeping the sample in vacuum, a slope method combining amplitude and phase can be used to obtain an accurate diffusivity value. Measurements performed on a wide variety of filaments confirm the validity of this conclusion. On the other hand, in the case of coated wires, the slope method gives an effective thermal diffusivity, which verifies the in-parallel thermal resistor model. As an application, the slope method has been used to retrieve the thermal conductivity of thin tubes by filling them with a liquid of known thermal properties.
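    The slope method itself is compact: in the ideal loss-free 1-D case, both the phase and the logarithm of the amplitude fall linearly with distance with slope -√(πf/α), and combining the two slopes as a product is one common way to compensate losses. A sketch on synthetic loss-free data (the paper's full model with gas conduction, convection, and radiation is far more involved):

```python
import numpy as np

def diffusivity_from_slopes(freq_hz, slope_phase, slope_ln_amp):
    """Thermal diffusivity from lock-in thermography slopes.
    Ideal 1-D case: phase and ln(amplitude) both fall linearly with
    distance with slope -sqrt(pi*f/alpha); a combined estimate uses
    alpha = pi * f / (|m_phase| * |m_amp|).  Sketch only."""
    return np.pi * freq_hz / (abs(slope_phase) * abs(slope_ln_amp))

# Synthetic check: generate ideal data for alpha = 1e-5 m^2/s at f = 1 Hz
alpha_true, f = 1e-5, 1.0
m = -np.sqrt(np.pi * f / alpha_true)        # ideal common slope
x = np.linspace(0.001, 0.01, 20)            # distances from the spot (m)
phase = m * x
ln_amp = m * x
m_p = np.polyfit(x, phase, 1)[0]
m_a = np.polyfit(x, ln_amp, 1)[0]
print(diffusivity_from_slopes(f, m_p, m_a))  # recovers ~1e-5
```

    With heat losses the two slopes separate, which is why using either one alone overestimates the diffusivity, as the abstract notes.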

  9. A highly accurate dynamic contact angle algorithm for drops on inclined surface based on ellipse-fitting.

    PubMed

    Xu, Z N; Wang, S Y

    2015-02-01

    To improve the accuracy of dynamic contact angle calculation for drops on inclined surfaces, a large number of numerical drop profiles with different inclination angles, drop volumes, and contact angles are generated based on the finite difference method, and a least-squares ellipse-fitting algorithm is used to calculate the dynamic contact angle. The influences of the above three factors are systematically investigated. The results reveal that the dynamic contact angle errors, including the errors of the left and right contact angles, evaluated by the ellipse-fitting algorithm tend to increase with inclination angle, drop volume, and contact angle. If the drop volume and the solid substrate are fixed, the errors of the left and right contact angles increase with inclination angle. After extensive computation, the critical dimensionless drop volumes corresponding to the critical contact angle error are obtained. Based on the values of the critical volumes, a highly accurate dynamic contact angle algorithm is proposed and fully validated. Within nearly the whole hydrophobicity range, it can reduce the dynamic contact angle error of the inclined plane method to below a given threshold, even for different types of liquids.
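    The ellipse-fitting core can be sketched as an ordinary least-squares fit of a general conic to profile points; the paper's algorithm and its critical-volume corrections go well beyond this, so the snippet shows only the fitting step, on synthetic points:

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a general conic
    A x^2 + B xy + C y^2 + D x + E y = 1 to drop-profile points."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    M = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coeffs  # A, B, C, D, E

# Synthetic points on the ellipse x^2/4 + y^2 = 1 (illustrative)
t = np.linspace(0.1, np.pi - 0.1, 50)
x, y = 2.0 * np.cos(t), np.sin(t)
A, B, C, D, E = fit_conic(x, y)
# The contact angle follows from the tangent slope of the fitted conic at
# the contact point: dy/dx = -(2*A*x + B*y + D) / (B*x + 2*C*y + E).
```

    On the synthetic ellipse the fit recovers A = 0.25, C = 1 and near-zero B, D, E; with real drop profiles the tangent at the contact line gives the left and right contact angles.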

  10. Can Value-Added Measures of Teacher Performance Be Trusted?

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.

    2015-01-01

    We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…

  11. Accurate analytical periodic solution of the elliptical Kepler equation using the Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Alshaery, Aisha; Ebaid, Abdelhalim

    2017-11-01

    Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrate a rapid convergence of the obtained approximate solutions, which are displayed in tables and graphs. It is also shown that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun, as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature to seven decimal places at some values of the time parameter and to nine decimal places at others. Moreover, the absolute error approaches zero using the nine-term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with well-known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool for solving Kepler's equation for elliptical orbits.
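    As a point of comparison for the ADM series, Kepler's equation E - e*sin(E) = M is routinely solved by Newton iteration. A standard baseline sketch (not the paper's method), with Earth-like orbital values:

```python
import math

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation E - e*sin(E) = M for the eccentric
    anomaly E by Newton iteration (a standard baseline solver,
    not the Adomian decomposition used in the paper)."""
    E = M if e < 0.8 else math.pi      # common starting guess
    for _ in range(max_iter):
        dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

# Earth-like eccentricity; radial distance r = a * (1 - e*cos(E))
e, a = 0.0167, 149.6e6                 # eccentricity, semi-major axis (km)
E = solve_kepler(1.2, e)
print(f"E = {E:.10f} rad, r = {a * (1 - e * math.cos(E)):.0f} km")
```

    With these values r ranges between a(1-e) and a(1+e), i.e., roughly the 147 and 152.5 million km perihelion and aphelion distances quoted above.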

  12. Calculating accurate aboveground dry weight biomass of herbaceous vegetation in the Great Plains: A comparison of three calculations to determine the least resource intensive and most accurate method

    Treesearch

    Ben Butler

    2007-01-01

    Obtaining accurate biomass measurements is often a resource-intensive task. Data collection crews often spend large amounts of time in the field clipping, drying, and weighing grasses to calculate the biomass of a given vegetation type. Such a problem is currently occurring in the Great Plains region of the Bureau of Indian Affairs. A study looked at six reservations...

  13. Comparing capacity value estimation techniques for photovoltaic solar power

    DOE PAGES

    Madaeni, Seyed Hossein; Sioshansi, Ramteen; Denholm, Paul

    2012-09-28

    In this paper, we estimate the capacity value of photovoltaic (PV) solar plants in the western U.S. Our results show that PV plants have capacity values that range between 52% and 93%, depending on location and sun-tracking capability. We further compare more robust but data- and computationally-intensive reliability-based estimation techniques with simpler approximation methods. We show that if implemented properly, these techniques provide accurate approximations of reliability-based methods. Overall, methods that are based on the weighted capacity factor of the plant provide the most accurate estimate. As a result, we also examine the sensitivity of PV capacity value to the inclusion of sun-tracking systems.
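    A weighted-capacity-factor approximation of the kind compared here can be sketched as a risk-weighted average of normalized plant output over high-load hours. The weights, hours, and plant numbers below are hypothetical, and the paper's exact formulation may differ:

```python
import numpy as np

def weighted_capacity_factor(pv_output, risk_weights, capacity):
    """Capacity-value approximation as a weighted capacity factor:
    hourly plant output, normalized by nameplate capacity, averaged
    with each hour weighted by its contribution to system risk
    (e.g., loss-of-load probability).  Sketch only."""
    pv = np.asarray(pv_output, dtype=float)
    w = np.asarray(risk_weights, dtype=float)
    return float((w * pv / capacity).sum() / w.sum())

# Hypothetical: five high-risk hours for a 100 MW plant
output  = [62.0, 75.0, 80.0, 51.0, 40.0]   # MW
weights = [0.30, 0.25, 0.20, 0.15, 0.10]   # relative risk weights
print(f"capacity value = {weighted_capacity_factor(output, weights, 100.0):.1%}")
```

    The reliability-based alternatives instead recompute loss-of-load expectation with and without the plant, which is what makes them data- and computation-heavy.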

  14. Comparison of the T2-star Values of Placentas Obtained from Pre-eclamptic Patients with Those of a Control Group: an Ex-vivo Magnetic Resonance Imaging Study.

    PubMed

    Yurttutan, Nursel; Bakacak, Murat; Kızıldağ, Betül

    2017-09-29

    Endothelial dysfunction, vasoconstriction, and oxidative stress are described in the pathophysiology of pre-eclampsia, but its aetiology has not been clearly established. To examine whether there is a difference between the placentas of pre-eclamptic pregnant women and those of a control group in terms of their T2 star values. Case-control study. Twenty patients diagnosed with pre-eclampsia and 22 healthy controls were included in this study. The placentas obtained after births performed via Caesarean section were taken to the magnetic resonance imaging suite in plastic bags within the first postnatal hour, and imaging was performed with a modified DIXON-Quant sequence. Average values were obtained by performing T2 star measurements at four locations on each placenta. T2 star values measured in the placentas of the control group were found to be significantly lower than those in the pre-eclampsia group (p<0.01). While the mean T2 star value in the pre-eclamptic group was 37.48 ms (standard deviation ± 11.3), this value was 28.74 ms (standard deviation ± 8.08) in the control group. The cut-off value for the T2 star value, maximising the accuracy of diagnosis, was 28.59 ms (area under curve: 0.741; 95% confidence interval: 0.592-0.890); sensitivity and specificity were 70% and 63.6%, respectively. In this study, the T2 star value, which is an indicator of iron amount, was found to be significantly lower in the control group than in the pre-eclampsia group. This may be related to the reduction in blood flow to the placenta due to endothelial dysfunction and vasoconstriction, which are important in pre-eclampsia pathophysiology.

  15. Correlation of Lactic Acid and Base Deficit Values Obtained From Arterial and Peripheral Venous Samples in a Pediatric Population During Intraoperative Care.

    PubMed

    Bordes, Brianne M; Walia, Hina; Sebastian, Roby; Martin, David; Tumin, Dmitry; Tobias, Joseph D

    2017-12-01

    Lactic acid and base deficit (BD) values are frequently monitored in the intensive care unit and operating room setting to evaluate oxygenation, ventilation, cardiac output, and peripheral perfusion. Although generally obtained from an arterial cannula, such access may not always be available. The current study prospectively investigates the correlation of arterial and peripheral venous values of BD and lactic acid. The study cohort included 48 patients. Arterial BD values ranged from -8 to 4 mEq/L and peripheral venous BD values ranged from -8 to 4 mEq/L. Arterial lactic acid values ranged from 0.36 to 2.45 μmol/L and peripheral venous lactic acid values ranged from 0.38 to 4 μmol/L. The arterial BD (-0.4 ± 2.2 mEq/L) was not significantly different from the peripheral venous BD (-0.6 ± 2.2 mEq/L). The arterial lactic acid (1.0 ± 0.5 μmol/L) was not significantly different from the peripheral venous lactic acid (1.1 ± 0.6 μmol/L). Pearson correlation coefficients demonstrated a very high correlation between arterial and peripheral venous BD (r = .88, P < .001) and between arterial and peripheral venous lactic acid (r = .67, P < .001). Bland-Altman plots of both pairs of measures showed that the majority of observations fell within the 95% limits of agreement. Least-squares regression indicated that a 1-unit increase in arterial BD corresponded to a 0.9-unit increase in peripheral venous BD (95% confidence interval [CI]: 0.7-1.0; P < .001) and a 1-unit increase in arterial lactic acid corresponded to a 0.9-unit increase in peripheral venous lactic acid (95% CI: 0.6-1.2; P < .001). These data demonstrate a clinically useful correlation between arterial and peripheral venous lactic acid and BD values.
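    The two statistics used in the study, Pearson's r and a least-squares regression slope, can be computed in a few lines. The paired readings below are invented for illustration, not study data:

```python
import numpy as np

def pearson_and_slope(x, y):
    """Pearson correlation plus least-squares regression slope and
    intercept for paired measurements (e.g., arterial vs. peripheral
    venous values of the same quantity)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    slope, intercept = np.polyfit(x, y, 1)
    return r, slope, intercept

# Illustrative paired base-deficit readings (mEq/L), invented
arterial = [-4.0, -2.0, 0.0, 1.0, 3.0]
venous   = [-4.3, -2.5, -0.4, 0.6, 2.4]
r, slope, b = pearson_and_slope(arterial, venous)
print(f"r = {r:.3f}, venous = {slope:.2f} * arterial + {b:.2f}")
```

    A slope near 1 with high r is what supports substituting the venous value for the arterial one clinically; the Bland-Altman limits (reported separately in the study) then bound the expected disagreement for an individual patient.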

  16. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  17. Fast and accurate calculation of dilute quantum gas using Uehling–Uhlenbeck model equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yano, Ryosuke, E-mail: ryosuke.yano@tokiorisk.co.jp

The Uehling–Uhlenbeck (U–U) model equation is studied for the fast and accurate calculation of a dilute quantum gas. In particular, the direct simulation Monte Carlo (DSMC) method is used to solve the U–U model equation. DSMC analysis based on the U–U model equation is expected to enable the thermalization to be accurately obtained using a small number of sample particles and the dilute quantum gas dynamics to be calculated in a practical time. Finally, the applicability of DSMC analysis based on the U–U model equation to the fast and accurate calculation of a dilute quantum gas is confirmed by calculating the viscosity coefficient of a Bose gas on the basis of the Green–Kubo expression and the shock layer of a dilute Bose gas around a cylinder.

  18. Using the Climbing Drum Peel (CDP) Test to Obtain a G(sub IC) value for Core/Facesheet Bonds

    NASA Technical Reports Server (NTRS)

    Nettles, A. T.; Gregory, Elizabeth D.; Jackson, Justin R.

    2006-01-01

A method of measuring the Mode I fracture toughness of core/facesheet bonds in sandwich structures is desired, particularly with the widespread use of models that need this data as input. This study examined whether a critical strain energy release rate, G(sub IC), can be obtained from the climbing drum peel (CDP) test. The CDP test is relatively simple to perform and does not rely on measuring small crack lengths as is required by the double cantilever beam (DCB) test. Simple energy methods were used to calculate G(sub IC) from CDP test data on composite facesheets bonded to a honeycomb core. Facesheet thicknesses from 2 to 5 plies were tested to examine the upper and lower bounds on facesheet thickness requirements. Results from the study suggest that the CDP test, with certain provisions, can be used to find the G(sub IC) value of a core/facesheet bond.

  19. Accurate energy levels for singly ionized platinum (Pt II)

    NASA Technical Reports Server (NTRS)

    Reader, Joseph; Acquista, Nicolo; Sansonetti, Craig J.; Engleman, Rolf, Jr.

    1988-01-01

    New observations of the spectrum of Pt II have been made with hollow-cathode lamps. The region from 1032 to 4101 A was observed photographically with a 10.7-m normal-incidence spectrograph. The region from 2245 to 5223 A was observed with a Fourier-transform spectrometer. Wavelength measurements were made for 558 lines. The uncertainties vary from 0.0005 to 0.004 A. From these measurements and three parity-forbidden transitions in the infrared, accurate values were determined for 28 even and 72 odd energy levels of Pt II.

  20. Kinetic determinations of accurate relative oxidation potentials of amines with reactive radical cations.

    PubMed

    Gould, Ian R; Wosinska, Zofia M; Farid, Samir

    2006-01-01

    Accurate oxidation potentials for organic compounds are critical for the evaluation of thermodynamic and kinetic properties of their radical cations. Except when using a specialized apparatus, electrochemical oxidation of molecules with reactive radical cations is usually an irreversible process, providing peak potentials, E(p), rather than thermodynamically meaningful oxidation potentials, E(ox). In a previous study on amines with radical cations that underwent rapid decarboxylation, we estimated E(ox) by correcting the E(p) from cyclic voltammetry with rate constants for decarboxylation obtained using laser flash photolysis. Here we use redox equilibration experiments to determine accurate relative oxidation potentials for the same amines. We also describe an extension of these experiments to show how relative oxidation potentials can be obtained in the absence of equilibrium, from a complete kinetic analysis of the reversible redox kinetics. The results provide support for the previous cyclic voltammetry/laser flash photolysis method for determining oxidation potentials.

  1. A Multiscale Red Blood Cell Model with Accurate Mechanics, Rheology, and Dynamics

    PubMed Central

    Fedosov, Dmitry A.; Caswell, Bruce; Karniadakis, George Em

    2010-01-01

Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. PMID:20483330

  2. Toward Accurate and Quantitative Comparative Metagenomics

    PubMed Central

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  3. 47 CFR 54.615 - Obtaining services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... provided under § 54.621, that the requester cannot obtain toll-free access to an Internet service provider... thing of value; (6) If the service or services are being purchased as part of an aggregated purchase...

  4. Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies

    PubMed Central

    Zhang, Yu; Liu, Jun S.

    2011-01-01

Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in their scopes. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adapted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
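
The Bonferroni baseline that the abstract calls too conservative is easy to state in code. A minimal sketch with toy p-values (illustrative only, nowhere near GWAS scale):

```python
import numpy as np

def bonferroni_adjust(pvals):
    """Bonferroni-adjusted p-values: multiply each p by the number of tests, cap at 1."""
    p = np.asarray(pvals, float)
    return np.minimum(p * p.size, 1.0)

# Four toy p-values standing in for millions of SNP tests
adj = bonferroni_adjust([1e-8, 2e-4, 0.03, 0.4])
# When SNPs are correlated through LD, the effective number of independent
# tests is smaller than p.size, which is exactly why this bound is conservative.
```

Methods like the Poisson de-clumping approximation above aim to replace the raw test count with something closer to that effective number.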

  5. Accurate and self-consistent procedure for determining pH in seawater desalination brines and its manifestation in reverse osmosis modeling.

    PubMed

    Nir, Oded; Marvin, Esra; Lahav, Ori

    2014-11-01

Measuring and modeling pH in concentrated aqueous solutions in an accurate and consistent manner is of paramount importance to many R&D and industrial applications, including RO desalination. Nevertheless, unified definitions and standard procedures have yet to be developed for solutions with ionic strength higher than ∼0.7 M, while implementation of conventional pH determination approaches may lead to significant errors. In this work a systematic yet simple methodology for measuring pH in concentrated solutions (dominated by Na(+)/Cl(-)) was developed and evaluated, with the aim of achieving consistency with the Pitzer ion-interaction approach. Results indicate that the addition of 0.75 M of NaCl to NIST buffers, followed by assigning a new standard pH (calculated based on the Pitzer approach), enabled reducing measured errors to below 0.03 pH units in seawater RO brines (ionic strength up to 2 M). To facilitate its use, the method was developed to be both conceptually and practically analogous to the conventional pH measurement procedure. The method was used to measure the pH of seawater RO retentates obtained at varying recovery ratios. The results better matched the pH values predicted by an accurate RO transport model. Calibrating the model by the measured pH values enabled better boron transport prediction. A Donnan-induced phenomenon, affecting pH in both retentate and permeate streams, was identified and quantified. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. MR evaluation of breast lesions obtained by diffusion-weighted imaging with background body signal suppression (DWIBS) and correlations with histological findings.

    PubMed

    Moschetta, Marco; Telegrafo, Michele; Rella, Leonarda; Capolongo, Arcangela; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe

    2014-07-01

Diffusion imaging represents a new imaging tool for the diagnosis of breast cancer. This study aims to investigate the role of diffusion-weighted MRI with background body signal suppression (DWIBS) for evaluating breast lesions. 90 patients were prospectively evaluated by MRI with STIR, TSE-T2, contrast enhanced THRIVE-T1 and DWIBS sequences. DWIBS images were analyzed searching for the presence of breast lesions and calculating the ADC value. ADC values of ≤1.44×10(-3)mm(2)/s were considered suspicious for malignancy. This analysis was then compared with the histological findings. Sensitivity, specificity, diagnostic accuracy (DA), positive predictive value (PPV) and negative predictive value (NPV) were calculated. In 53/90 (59%) patients, DWIBS indicated the presence of breast lesions, 16 (30%) with ADC values of >1.44 and 37 (70%) with ADC≤1.44. The comparison with histology showed 25 malignant and 28 benign lesions. DWIBS sequences obtained sensitivity, specificity, DA, PPV and NPV values of 100, 82, 87, 68 and 100%, respectively. DWIBS can be proposed in the MRI breast protocol representing an accurate diagnostic complement. Copyright © 2014 Elsevier Inc. All rights reserved.
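
The five diagnostic figures reported above follow from a standard 2×2 confusion table. The counts below are one reconstruction consistent with the abstract's numbers (25 true positives among the 37 suspicious lesions, no false negatives, the remaining 53 of 90 patients as true negatives), not counts the abstract reports directly:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, diagnostic accuracy, PPV and NPV from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Reconstructed counts (assumption, see lead-in): 25 TP, 12 FP, 53 TN, 0 FN
m = diagnostic_metrics(tp=25, fp=12, tn=53, fn=0)
# Rounded to whole percent these reproduce the reported 100/82/87/68/100%.
```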

  7. Establishing traceability of photometric absorbance values for accurate measurements of the haemoglobin concentration in blood

    NASA Astrophysics Data System (ADS)

    Witt, K.; Wolf, H. U.; Heuck, C.; Kammel, M.; Kummrow, A.; Neukammer, J.

    2013-10-01

    Haemoglobin concentration in blood is one of the most frequently measured analytes in laboratory medicine. Reference and routine methods for the determination of the haemoglobin concentration in blood are based on the conversion of haeme, haemoglobin and haemiglobin species into uniform end products. The total haemoglobin concentration in blood is measured using the absorbance of the reaction products. Traceable absorbance measurement values on the highest metrological level are a prerequisite for the calibration and evaluation of procedures with respect to their suitability for routine measurements and their potential as reference measurement procedures. For this purpose, we describe a procedure to establish traceability of spectral absorbance measurements for the haemiglobincyanide (HiCN) method and for the alkaline haematin detergent (AHD) method. The latter is characterized by a higher stability of the reaction product. In addition, the toxic hazard of cyanide, which binds to the iron ion of the haem group and thus inhibits the oxygen transport, is avoided. Traceability is established at different wavelengths by applying total least-squares analysis to derive the conventional quantity values for the absorbance from the measured values. Extrapolation and interpolation are applied to get access to the spectral regions required to characterize the Q-absorption bands of the HiCN and AHD methods, respectively. For absorbance values between 0.3 and 1.8, the contributions of absorbance measurements to the total expanded uncertainties (95% level of confidence) of absorbance measurements range from 1% to 0.4%.
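
Converting a measured absorbance of the uniform end product into a haemoglobin concentration is a Beer-Lambert calculation. A minimal sketch, assuming illustrative values: a HiCN molar absorptivity of about 11.0 L mmol⁻¹ cm⁻¹ at 540 nm, a 1 cm path and a 1:251 sample dilution (none of these numbers are taken from the record above):

```python
def hb_concentration(absorbance, molar_absorptivity, path_cm=1.0, dilution=1.0):
    """Beer-Lambert law, c = A / (epsilon * l), scaled back by the sample dilution."""
    return absorbance / (molar_absorptivity * path_cm) * dilution

# Assumed inputs (illustrative only); result is mmol/L of haem groups
c = hb_concentration(absorbance=0.42, molar_absorptivity=11.0, dilution=251.0)
```

The traceability work described above is what anchors the absorbance value `A` in such a calculation to the SI.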

  8. EpHLA software: a timesaving and accurate tool for improving identification of acceptable mismatches for clinical purposes.

    PubMed

    Filho, Herton Luiz Alves Sales; da Mata Sousa, Luiz Claudio Demes; von Glehn, Cristina de Queiroz Carrascosa; da Silva, Adalberto Socorro; dos Santos Neto, Pedro de Alcântara; do Nascimento, Ferraz; de Castro, Adail Fonseca; do Nascimento, Liliane Machado; Kneib, Carolina; Bianchi Cazarote, Helena; Mayumi Kitamura, Daniele; Torres, Juliane Roberta Dias; da Cruz Lopes, Laiane; Barros, Aryela Loureiro; da Silva Edlin, Evelin Nildiane; de Moura, Fernanda Sá Leal; Watanabe, Janine Midori Figueiredo; do Monte, Semiramis Jamil Hadad

    2012-06-01

    The HLAMatchmaker algorithm, which allows the identification of “safe” acceptable mismatches (AMMs) for recipients of solid organ and cell allografts, is rarely used in part due to the difficulty in using it in the current Excel format. The automation of this algorithm may universalize its use to benefit the allocation of allografts. Recently, we have developed a new software called EpHLA, which is the first computer program automating the use of the HLAMatchmaker algorithm. Herein, we present the experimental validation of the EpHLA program by showing the time efficiency and the quality of operation. The same results, obtained by a single antigen bead assay with sera from 10 sensitized patients waiting for kidney transplants, were analyzed either by conventional HLAMatchmaker or by automated EpHLA method. Users testing these two methods were asked to record: (i) time required for completion of the analysis (in minutes); (ii) number of eplets obtained for class I and class II HLA molecules; (iii) categorization of eplets as reactive or non-reactive based on the MFI cutoff value; and (iv) determination of AMMs based on eplets' reactivities. We showed that although both methods had similar accuracy, the automated EpHLA method was over 8 times faster in comparison to the conventional HLAMatchmaker method. In particular the EpHLA software was faster and more reliable but equally accurate as the conventional method to define AMMs for allografts. The EpHLA software is an accurate and quick method for the identification of AMMs and thus it may be a very useful tool in the decision-making process of organ allocation for highly sensitized patients as well as in many other applications.

  9. 19 CFR 145.11 - Declarations of value and invoices.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 19 Customs Duties 2 2010-04-01 2010-04-01 false Declarations of value and invoices. 145.11 Section... value and invoices. (a) Customs declaration. A clear and complete Customs declaration on the form provided by the foreign post office, giving a full and accurate description of the contents and value of...

  10. WE-G-18C-02: Estimation of Optimal B-Value Set for Obtaining Apparent Diffusion Coefficient Free From Perfusion in Non-Small Cell Lung Cancer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karki, K; Hugo, G; Ford, J

    2014-06-15

Purpose: Diffusion-weighted MRI (DW-MRI) is increasingly being investigated for radiotherapy planning and response assessment. Selection of a limited number of b-values in DW-MRI is important to keep geometrical variations low and imaging time short. We investigated various b-value sets to determine an optimal set for obtaining monoexponential apparent diffusion coefficient (ADC) close to perfusion-insensitive intravoxel incoherent motion (IVIM) model ADC (ADC IVIM) in non-small cell lung cancer. Methods: Seven patients had 27 DW-MRI scans before and during radiotherapy in a 1.5T scanner. Respiratory triggering was applied to the echo-planar DW-MRI with TR=4500 ms approximately, TE=74 ms, pixel size=1.98×1.98 mm², slice thickness=4–6 mm and 7 axial slices. Diffusion gradients were applied to all three axes producing trace-weighted images with eight b-values of 0–1000 μs/μm². Monoexponential model ADC values using various b-value sets were compared to ADC IVIM using all b-values. To compare the relative noise in ADC maps, intra-scan coefficient of variation (CV) of active tumor volumes was computed. Results: ADC IVIM, perfusion coefficient and perfusion fraction for tumor volumes were in the range of 880-1622 μm²/s, 8119-33834 μm²/s and 0.104–0.349, respectively. ADC values using sets of 250, 800 and 1000; 250, 650 and 1000; and 250–1000 μs/μm² only were not significantly different from ADC IVIM (p>0.05, paired t-test). Error in ADC values for 0–1000, 50–1000, 100–1000, 250–1000, 500–1000, and three b-value sets- 250, 500 and 1000; 250, 650 and 1000; and 250, 800 and 1000 μs/μm² were 15.0, 9.4, 5.6, 1.4, 11.7, 3.7, 2.0 and 0.2% relative to the reference-standard ADC IVIM, respectively. Mean intra-scan CV was 20.2, 20.9, 21.9, 24.9, 32.6, 25.8, 25.4 and 24.8%, respectively, whereas that for ADC IVIM was 23.3%. Conclusion: ADC values of two 3 b-value sets (250, 650 and 1000; and 250, 800 and 1000 μs/μm²
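
The monoexponential model compared above is S(b) = S0·exp(-b·ADC), which becomes a straight line in ln S versus b. A minimal sketch of fitting ADC from a 3-b-value set, using noise-free synthetic signals for an assumed ADC (not the study's data):

```python
import numpy as np

def fit_adc(b_values, signals):
    """Monoexponential ADC via a log-linear least-squares fit:
    S(b) = S0 * exp(-b * ADC)  =>  ln S = ln S0 - ADC * b."""
    slope, _intercept = np.polyfit(np.asarray(b_values, float),
                                   np.log(np.asarray(signals, float)), 1)
    return -slope  # ADC in inverse b-value units

# Synthetic signals for an assumed ADC of 1.2e-3 um^2/us (i.e. 1200 um^2/s,
# within the tumor range reported above); b-values from one of the 3-value sets
b = np.array([250.0, 650.0, 1000.0])  # us/um^2
s = 100.0 * np.exp(-b * 1.2e-3)
adc = fit_adc(b, s)
```

With noisy clinical data the same log-linear fit is used, and the choice of b-value set then trades perfusion contamination (low b) against noise (few points).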

  11. Improved Algorithms for Accurate Retrieval of UV - Visible Diffuse Attenuation Coefficients in Optically Complex, Inshore Waters

    NASA Technical Reports Server (NTRS)

    Cao, Fang; Fichot, Cedric G.; Hooker, Stanford B.; Miller, William L.

    2014-01-01

Photochemical processes driven by high-energy ultraviolet radiation (UVR) in inshore, estuarine, and coastal waters play an important role in global biogeochemical cycles and biological systems. A key to modeling photochemical processes in these optically complex waters is an accurate description of the vertical distribution of UVR in the water column, which can be obtained using the diffuse attenuation coefficients of downwelling irradiance (Kd(λ)). The SeaUV/SeaUVc algorithms (Fichot et al., 2008) can accurately retrieve Kd(λ) (λ = 320, 340, 380, 412, 443 and 490 nm) in oceanic and coastal waters using multispectral remote sensing reflectances (Rrs(λ), SeaWiFS bands). However, SeaUV/SeaUVc algorithms are currently not optimized for use in optically complex, inshore waters, where they tend to severely underestimate Kd(λ). Here, a new training data set of optical properties collected in optically complex, inshore waters was used to re-parameterize the published SeaUV/SeaUVc algorithms, resulting in improved Kd(λ) retrievals for turbid, estuarine waters. Although the updated SeaUV/SeaUVc algorithms perform best in optically complex waters, the published SeaUV/SeaUVc models still perform well in most coastal and oceanic waters. Therefore, we propose a composite set of SeaUV/SeaUVc algorithms, optimized for Kd(λ) retrieval in almost all marine systems, ranging from oceanic to inshore waters. The composite algorithm set can retrieve Kd from ocean color with good accuracy across this wide range of water types (e.g., within 13% mean relative error for Kd(340)). A validation step using three independent, in situ data sets indicates that the composite SeaUV/SeaUVc can generate accurate Kd values from 320 to 490 nm using satellite imagery on a global scale. Taking advantage of the inherent benefits of our statistical methods, we pooled the validation data with the training set, obtaining an optimized composite model for estimating Kd(λ) in UV wavelengths for almost all marine waters. This
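
The quantity being retrieved has a simple operational definition: under Ed(z) = Ed(0)·exp(-Kd·z), Kd is the negative slope of ln Ed with depth, so two depths suffice for an estimate. A minimal sketch with assumed irradiance values (illustrative, not retrieval output):

```python
import math

def kd_from_two_depths(z1, e1, z2, e2):
    """Kd = ln(Ed(z1)/Ed(z2)) / (z2 - z1), assuming Ed(z) = Ed(0)*exp(-Kd*z)."""
    return math.log(e1 / e2) / (z2 - z1)

# Assumed downwelling irradiances at 1 m and 2 m depth (arbitrary units)
kd = kd_from_two_depths(1.0, 20.0, 2.0, 9.0)  # per metre
```

The algorithms above invert this relationship the other way, predicting Kd(λ) from above-water reflectances instead of in-water profiles.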

  12. Individuals Achieve More Accurate Results with Meters That Are Codeless and Employ Dynamic Electrochemistry

    PubMed Central

    Rao, Anoop; Wiley, Meg; Iyengar, Sridhar; Nadeau, Dan; Carnevale, Julie

    2010-01-01

    Background Studies have shown that controlling blood glucose can reduce the onset and progression of the long-term microvascular and neuropathic complications associated with the chronic course of diabetes mellitus. Improved glycemic control can be achieved by frequent testing combined with changes in medication, exercise, and diet. Technological advancements have enabled improvements in analytical accuracy of meters, and this paper explores two such parameters to which that accuracy can be attributed. Methods Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Results Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. The analysis depicted in the International Organization for Standardization table format indicates that the devices with dynamic electrochemistry and the codeless feature had the highest proportion of acceptable results overall (System A, 101/103). Results were significant when compared at the 10% bias level with meters that were codeless and utilized static electrochemistry (p = .017) or systems that had static electrochemistry but needed coding (p = .008). Conclusions Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding. PMID:20167178

  13. Individuals achieve more accurate results with meters that are codeless and employ dynamic electrochemistry.

    PubMed

    Rao, Anoop; Wiley, Meg; Iyengar, Sridhar; Nadeau, Dan; Carnevale, Julie

    2010-01-01

    Studies have shown that controlling blood glucose can reduce the onset and progression of the long-term microvascular and neuropathic complications associated with the chronic course of diabetes mellitus. Improved glycemic control can be achieved by frequent testing combined with changes in medication, exercise, and diet. Technological advancements have enabled improvements in analytical accuracy of meters, and this paper explores two such parameters to which that accuracy can be attributed. Four blood glucose monitoring systems (with or without dynamic electrochemistry algorithms, codeless or requiring coding prior to testing) were evaluated and compared with respect to their accuracy. Altogether, 108 blood glucose values were obtained for each system from 54 study participants and compared with the reference values. The analysis depicted in the International Organization for Standardization table format indicates that the devices with dynamic electrochemistry and the codeless feature had the highest proportion of acceptable results overall (System A, 101/103). Results were significant when compared at the 10% bias level with meters that were codeless and utilized static electrochemistry (p = .017) or systems that had static electrochemistry but needed coding (p = .008). Analytical performance of these blood glucose meters differed significantly depending on their technologic features. Meters that utilized dynamic electrochemistry and did not require coding were more accurate than meters that used static electrochemistry or required coding. 2010 Diabetes Technology Society.

  14. Accurate coarse-grained models for mixtures of colloids and linear polymers under good-solvent conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    D’Adamo, Giuseppe, E-mail: giuseppe.dadamo@sissa.it; Pelissetto, Andrea, E-mail: andrea.pelissetto@roma1.infn.it; Pierleoni, Carlo, E-mail: carlo.pierleoni@aquila.infn.it

    2014-12-28

A coarse-graining strategy, previously developed for polymer solutions, is extended here to mixtures of linear polymers and hard-sphere colloids. In this approach, groups of monomers are mapped onto a single pseudoatom (a blob) and the effective blob-blob interactions are obtained by requiring the model to reproduce some large-scale structural properties in the zero-density limit. We show that an accurate parametrization of the polymer-colloid interactions is obtained by simply introducing pair potentials between blobs and colloids. For the coarse-grained (CG) model in which polymers are modelled as four-blob chains (tetramers), the pair potentials are determined by means of the iterative Boltzmann inversion scheme, taking full-monomer (FM) pair correlation functions at zero-density as targets. For a larger number n of blobs, pair potentials are determined by using a simple transferability assumption based on the polymer self-similarity. We validate the model by comparing its predictions with full-monomer results for the interfacial properties of polymer solutions in the presence of a single colloid and for thermodynamic and structural properties in the homogeneous phase at finite polymer and colloid density. The tetramer model is quite accurate for q ≲ 1 (q = Rg/Rc, where Rg is the zero-density polymer radius of gyration and Rc is the colloid radius) and reasonably good also for q = 2. For q = 2, an accurate coarse-grained description is obtained by using the n = 10 blob model. We also compare our results with those obtained by using single-blob models with state-dependent potentials.
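
The iterative Boltzmann inversion scheme mentioned above has a standard update rule: the trial potential is corrected by kT·ln of the ratio between the model and target pair correlation functions. A minimal sketch on a toy grid (the grids and values are illustrative, not from the paper):

```python
import numpy as np

def ibi_step(u, g_model, g_target, kT=1.0, alpha=1.0):
    """One iterative Boltzmann inversion update of a tabulated pair potential:
    U_{n+1}(r) = U_n(r) + alpha * kT * ln( g_n(r) / g_target(r) )."""
    return u + alpha * kT * np.log(np.asarray(g_model, float) /
                                   np.asarray(g_target, float))

# Toy values: where the model g(r) overshoots the target, the potential is
# pushed up (more repulsive); where it undershoots, pulled down.
u0 = np.zeros(4)
g_model = np.array([0.5, 1.2, 1.0, 1.0])
g_target = np.array([0.6, 1.0, 1.0, 1.0])
u1 = ibi_step(u0, g_model, g_target)
```

Iterating until g_model matches g_target (at which point the update vanishes) yields the blob-colloid pair potentials described above.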

  15. In vitro comparison between the image obtained using PSP plates and Kodak E-speed films.

    PubMed

    Petel, R; Yaroslavsky, L; Kaffe, I

    2014-07-01

The aim of this study was to compare the intra-oral radiographic images obtained by a PSP digital radiography system ("Orex", Israel) with those obtained using Kodak Ultra speed films in terms of image quality, radiation dosage and diagnostic value. The physical measurement of image quality was conducted with an aluminum step-wedge. Radiation dosage was measured with a dosimeter. Fog and base levels were measured by developing unexposed films and scanning unexposed PSP plates. The in vitro model included preparation and radiographic evaluation of approximal artificial lesions in premolars and molars in depths ranging from 0.25 mm to 1.00 mm. Radiographs were evaluated for the existence of a lesion and its size by 8 experienced clinicians. Relative contrast was similar in both methods. The resolving power of the digital system was lower than that of the E-speed film. As for the subjective evaluation of artificial lesions, there was no significant difference between the two methods excluding those tooth images without lesions, where the analog method was found to be more accurate. The PSP system ("Orex") provides good image quality and diagnostic information with reduced exposure when compared with E-speed film.

  16. Accurate Arabic Script Language/Dialect Classification

    DTIC Science & Technology

    2014-01-01

Army Research Laboratory report ARL-TR-6761, January 2014: Accurate Arabic Script Language/Dialect Classification, by Stephen C. Tratz, Computational and Information Sciences Directorate. Approved for public release.

  17. Comprehensive theoretical study towards the accurate proton affinity values of naturally occurring amino acids

    NASA Astrophysics Data System (ADS)

    Dinadayalane, T. C.; Sastry, G. Narahari; Leszczynski, Jerzy

Systematic quantum chemical studies of Hartree-Fock (HF) and second-order Møller-Plesset (MP2) methods, and B3LYP functional, with a range of basis sets were employed to evaluate proton affinity values of all naturally occurring amino acids. The B3LYP and MP2 in conjunction with 6-311+G(d,p) basis set provide the proton affinity values that are in very good agreement with the experimental results, with an average deviation of ~1 kcal/mol. The number and the relative strength of intramolecular hydrogen bonding play a key role in the proton affinities of amino acids. The computational exploration of the conformers reveals that the global minima conformations of the neutral and protonated amino acids are different in eight cases. The present study reveals that B3LYP/6-311+G(d,p) is a very good choice of technique to evaluate the proton affinities of amino acids and the compounds derived from them reliably and economically.

  18. Reliability of cystometrically obtained intravesical pressures in patients with neurogenic bladders.

    PubMed

    Hess, Marika J; Lim, Lance; Yalla, Subbarao V

    2002-01-01

Urodynamic studies in patients with neurogenic bladder detect and categorize neurourodynamic states, identify the risk for urologic sequelae, and determine the necessity for interventions. Because urodynamic studies serve as a prognostic indicator and guide patient management, pressure measurements during the study must accurately represent bladder function under physiologic conditions. Because nonphysiologic bladder filling used during conventional urodynamic studies may alter the bladder's accommodative properties, we studied how closely the intravesical pressures obtained before filling cystometry resembled those obtained during the filling phase of the cystometrogram. Twenty-two patients (21 men, 1 woman) with neurogenic bladders underwent standard urodynamic studies. A 16F triple-lumen catheter was inserted into the bladder, and the intravesical pressures were recorded (physiologic volume-specific pressures, PVSP). After emptying the bladder, an equal volume of normal saline solution was reinfused, and the pressures were recorded again (cystometric volume-specific pressure, CVSP). All patients underwent routine fluoroscopically assisted urodynamic testing. The PVSP and the CVSP were compared using the Wilcoxon signed ranks test. A P value of .05 was considered significant. The mean PVSP was 14.5 cmH2O (range, 4-42 cmH2O) and mean CVSP was 20.6 cmH2O (range, 6-70 cmH2O). The CVSP was significantly higher than the PVSP (P = .01). Filling pressures during cystometry (CVSP) were significantly higher than the pressures measured at rest (PVSP). This study also suggests a strong correlation between PVSP and CVSP.
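
The Wilcoxon signed-rank comparison used above ranks the absolute paired differences and sums the ranks of positive and negative differences separately. A minimal stdlib sketch (zero differences dropped, no tie correction for equal absolute differences), with hypothetical pressures rather than the study's measurements:

```python
def wilcoxon_w(before, after):
    """Wilcoxon signed-rank statistic W = min(W+, W-) for paired samples."""
    diffs = [b - a for a, b in zip(before, after) if b != a]
    ranked = sorted(diffs, key=abs)
    w_pos = sum(rank for rank, d in enumerate(ranked, start=1) if d > 0)
    w_neg = sum(rank for rank, d in enumerate(ranked, start=1) if d < 0)
    return min(w_pos, w_neg)

# Hypothetical paired pressures in cmH2O (illustrative only)
pvsp = [10, 14, 8, 12, 20, 15, 9, 11]
cvsp = [14, 18, 10, 15, 28, 19, 12, 14]
w = wilcoxon_w(pvsp, cvsp)  # every CVSP exceeds its PVSP here, so W = 0
```

A small W (all differences pointing one way) is what drives a small P value, as in the study's finding that CVSP exceeded PVSP.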

  19. Intraocular pressure values obtained by ocular response analyzer, dynamic contour tonometry, and Goldmann tonometry in keratoconic corneas.

    PubMed

    Bayer, Atilla; Sahin, Afsun; Hürmeriç, Volkan; Ozge, Gökhan

    2010-01-01

    To determine the agreement between the dynamic contour tonometer (DCT), Goldmann applanation tonometer (GAT), and Ocular Response Analyzer (ORA) in keratoconic corneas, and to determine the effect of corneal biomechanics on intraocular pressure (IOP) measurements obtained by these devices. IOP was measured with the ORA, DCT, and GAT in random order in 120 eyes of 61 keratoconus patients. Central corneal thickness (CCT) and keratometry were measured after all IOP determinations had been made. The mean IOP measurements by the ORA and DCT were compared with the measurement by the GAT using the Student t test. Bland-Altman analysis was performed to assess the clinical agreement between these methods. The effects of corneal hysteresis (CH), corneal resistance factor (CRF), and CCT on measured IOP were explored by multiple backward stepwise linear regression analysis. The mean±SD patient age was 30.6±11.2 years. The mean±SD IOP measurements obtained with GAT, ORA Goldmann-correlated IOP (IOPg), ORA corneal-compensated IOP (IOPcc), and DCT were 10.96±2.8, 10.23±3.5, 14.65±2.8, and 15.42±2.7 mm Hg, respectively. The mean±SD CCT was 464.08±58.4 microns. The mean differences between IOPcc and GAT (P<0.0001), IOPcc and DCT (P<0.001), GAT and DCT (P<0.0001), IOPg and GAT (P<0.002), and IOPg and DCT (P<0.0001) were highly statistically significant. In multivariable regression analysis, DCT IOP and GAT IOP measurements were significantly associated with CH and CRF (P<0.0001 for both). DCT appeared to be affected by CH and CRF, and its IOP values tended to be higher than those of GAT. ORA-measured IOPcc was found to be independent of CCT and comparable to the DCT in keratoconic eyes.
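
    Bland-Altman agreement analysis of the kind used here reduces to a bias and 95% limits of agreement between two methods. A minimal sketch with hypothetical GAT/DCT readings (the per-eye data are not given in the abstract):

```python
import numpy as np

def bland_altman(m1, m2):
    """Return the bias and 95% limits of agreement between two measurement methods."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    diff = m1 - m2
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical GAT vs DCT readings (mm Hg); illustration only.
gat = [10.5, 12.0, 9.8, 11.2, 13.4, 10.0, 8.9, 12.8]
dct = [15.0, 16.2, 14.1, 15.8, 17.9, 14.6, 13.2, 17.0]
bias, (lo, hi) = bland_altman(gat, dct)
print(f"bias = {bias:.2f} mm Hg, LoA = ({lo:.2f}, {hi:.2f})")
```

A large systematic bias with narrow limits of agreement, as in this toy data, is the pattern the abstract describes: DCT reads consistently higher than GAT.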

  20. On canonical cylinder sections for accurate determination of contact angle in microgravity

    NASA Technical Reports Server (NTRS)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  1. Toward Accurate and Quantitative Comparative Metagenomics.

    PubMed

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Accurate mass measurement by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry. I. Measurement of positive radical ions using porphyrin standard reference materials.

    PubMed

    Griffiths, Nia W; Wyatt, Mark F; Kean, Suzanna D; Graham, Andrew E; Stein, Bridget K; Brenton, A Gareth

    2010-06-15

    A method for the accurate mass measurement of positive radical ions by matrix-assisted laser desorption/ionisation time-of-flight mass spectrometry (MALDI-TOFMS) is described. Initial use of a conjugated oligomeric calibration material was rejected in favour of a series of meso-tetraalkyl/tetraalkylaryl-functionalised porphyrins, from which the two calibrants required for a particular accurate mass measurement were chosen. All measurements of monoisotopic species were within +/-5 ppm, and the method was rigorously validated using chemometrics; mean values of five measurements were used for extra confidence in the generation of potential elemental formulae. Potential difficulties encountered when measuring compounds containing multi-isotopic elements, where the monoisotopic peak is no longer the lowest mass peak, are discussed, and a simple mass-correction solution can be applied. The method requires no significant expertise to implement, but care and attention are required to obtain valid measurements. The method is operationally simple and will prove useful to the analytical chemistry community. Copyright (c) 2010 John Wiley & Sons, Ltd.

  3. The impact of reliable prebolus T1 measurements or a fixed T1 value in the assessment of glioma patients with dynamic contrast enhancing MRI.

    PubMed

    Tietze, Anna; Mouridsen, Kim; Mikkelsen, Irene Klærke

    2015-06-01

    Accurate quantification of hemodynamic parameters using dynamic contrast-enhanced (DCE) MRI requires a measurement of tissue T1 prior to contrast injection (prebolus T1). We (i) evaluate T1 estimation using the variable flip angle (VFA) and saturation recovery (SR) techniques and (ii) investigate whether accurate estimation of DCE parameters outperforms a time-saving approach with a predefined T1 value when differentiating high- from low-grade gliomas. The accuracy and precision of T1 measurements acquired by VFA and SR were investigated by computer simulations and in glioma patients using an equivalence test (p > 0.05 indicating significant difference). The permeability measure Ktrans, cerebral blood flow (CBF), and plasma volume, Vp, were calculated in 42 glioma patients, using either a fixed T1 of 1500 ms or an individual T1 measurement obtained with SR. The areas under the receiver operating characteristic curves (AUCs) were used as measures of accuracy in differentiating tumor grade. The T1 values obtained by VFA showed larger variation than those obtained using SR, both in the digital phantom and in the human data (p > 0.05). Although a fixed T1 introduced a bias into the DCE calculation, this had only minor impact on the accuracy of differentiating high-grade from low-grade gliomas (AUCfix = 0.906 and AUCind = 0.884 for Ktrans; AUCfix = 0.863 and AUCind = 0.856 for Vp; p for AUC comparison > 0.05). T1 measurements by VFA were less precise, and the SR method is preferable when accurate parameter estimation is required. Semiquantitative DCE values based on a predefined T1 value were sufficient to perform tumor grading in our study.

  4. A Cost-Effective Transparency-Based Digital Imaging for Efficient and Accurate Wound Area Measurement

    PubMed Central

    Li, Pei-Nan; Li, Hong; Wu, Mo-Li; Wang, Shou-Yu; Kong, Qing-You; Zhang, Zhen; Sun, Yuan; Liu, Jia; Lv, De-Cheng

    2012-01-01

    Wound measurement is an objective and direct way to trace the course of wound healing and to evaluate therapeutic efficacy. Nevertheless, the accuracy and efficiency of the current measurement methods need to be improved. Taking advantage of the reliability of transparency tracing and the accuracy of computer-aided digital imaging, a transparency-based digital imaging approach is established, by which data from 340 wound tracings were collected from 6 experimental groups (8 rats/group) at 8 experimental time points (Day 1, 3, 5, 7, 10, 12, 14 and 16) and orderly archived onto a transparency model sheet. This sheet was scanned and its image was saved in JPG form. Since a set of standard area units from 1 mm2 to 1 cm2 was integrated into the sheet, the traced areas in the JPG image were measured directly, using the “Magnetic lasso tool” in the Adobe Photoshop program. The pixel values (PVs) of individual outlined regions were obtained and recorded at an average speed of 27 seconds/region. All PV data were saved in an Excel file and their corresponding areas were calculated simultaneously by the formula Y (PV of the outlined region)/X (PV of the standard area unit) × Z (area of the standard unit). It took a researcher less than 3 hours to finish the area calculation of 340 regions. In contrast, over 3 hours were expended by three skillful researchers to accomplish the same work with the traditional transparency-based method. Moreover, unlike the results obtained traditionally, little variation was found among the data calculated by different persons or with standard area units of different sizes and shapes. Given its accurate, reproducible and efficient properties, this transparency-based digital imaging approach should be of significant value in basic wound healing research and clinical practice. PMID:22666449
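
    The area formula in the abstract is simple proportional scaling from pixel counts to physical units. A sketch, with hypothetical pixel counts:

```python
# Convert a traced region's pixel count to area using a scanned standard area
# unit, following the abstract's formula: area = (PV_region / PV_unit) * unit_area.
def region_area_mm2(pv_region, pv_unit, unit_area_mm2=1.0):
    """pv_region: pixel count of the outlined wound; pv_unit: pixel count of a
    standard unit of known area (e.g. 1 mm^2) scanned on the same sheet."""
    return pv_region / pv_unit * unit_area_mm2

# Example: a 1 mm^2 standard covers 400 px and the wound tracing covers 9200 px.
print(region_area_mm2(9200, 400))  # -> 23.0 (mm^2)
```

Because both counts come from the same scan, the scanner resolution cancels out of the ratio, which is why differently sized or shaped standard units gave consistent results.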

  5. Obtaining orthotropic elasticity tensor using entries zeroing method.

    NASA Astrophysics Data System (ADS)

    Gierlach, Bartosz; Danek, Tomasz

    2017-04-01

    A generally anisotropic elasticity tensor obtained from measurements can be represented by a tensor belonging to one of eight material symmetry classes. Knowledge of the symmetry class and orientation is helpful for describing the physical properties of a medium. For each non-trivial symmetry class except isotropy this problem is nonlinear. A common method of obtaining an effective tensor is choosing its non-trivial symmetry class and minimizing the Frobenius norm between the measured and effective tensors in the same coordinate system. A global optimization algorithm has to be used to determine the best rotation of the tensor. In this contribution, we propose a new approach to obtaining the optimal tensor, under the assumption that it is orthotropic (or at least has a shape similar to the orthotropic one). In orthotropic form, 24 of the tensor's 36 entries are zero. The idea is to minimize the sum of squared entries which are supposed to equal zero, through a rotation calculated with an optimization algorithm - in this case the Particle Swarm Optimization (PSO) algorithm. Quaternions were used to parametrize rotations in 3D space to improve computational efficiency. To avoid settling on a local minimum, we apply PSO several times and only when we obtain similar results three times do we accept the value and finish computations. To analyze the obtained results, the Monte-Carlo method was used. After thousands of single runs of PSO optimization, we obtained values of the quaternion parts and plotted them. The points concentrate at several locations on the graph, following a regular pattern, which suggests the existence of a more complex symmetry in the analyzed tensor. Then thousands of realizations of the generally anisotropic tensor were generated - each tensor entry was replaced with a random value drawn from a normal distribution with mean equal to the measured tensor entry and standard deviation equal to that of the measurement. Each of these tensors was then subjected to PSO-based optimization, delivering a quaternion for the optimal rotation.
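
    A minimal sketch of the entries-zeroing idea: expand a Voigt matrix to the full stiffness tensor, rotate it with a quaternion-parametrized rotation, and minimize the sum of squared off-pattern entries. SciPy's differential evolution stands in for the paper's PSO here, and all tensor values are invented for illustration:

```python
import numpy as np
from scipy.optimize import differential_evolution

VOIGT = {(0,0):0, (1,1):1, (2,2):2, (1,2):3, (2,1):3,
         (0,2):4, (2,0):4, (0,1):5, (1,0):5}

def full_tensor(V):
    """Expand a 6x6 Voigt matrix into the full 3x3x3x3 stiffness tensor."""
    C = np.empty((3, 3, 3, 3))
    for i in range(3):
        for j in range(3):
            for k in range(3):
                for l in range(3):
                    C[i, j, k, l] = V[VOIGT[(i, j)], VOIGT[(k, l)]]
    return C

def voigt(C):
    """Contract a 3x3x3x3 tensor back to 6x6 Voigt form."""
    idx = [(0,0), (1,1), (2,2), (1,2), (0,2), (0,1)]
    return np.array([[C[i, j, k, l] for (k, l) in idx] for (i, j) in idx])

def quat_to_rot(q):
    """Rotation matrix of a (normalized) quaternion."""
    q = q / np.linalg.norm(q)
    w, x, y, z = q
    return np.array([
        [1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
        [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
        [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

# Orthotropic zero pattern in Voigt form: only the upper 3x3 block and the
# diagonal shear terms C44, C55, C66 may be nonzero; the other 24 entries are zero.
ZERO_MASK = np.ones((6, 6), bool)
ZERO_MASK[:3, :3] = False
for d in (3, 4, 5):
    ZERO_MASK[d, d] = False

def off_pattern_energy(q, C):
    """Sum of squared entries that should be zero after rotation by q."""
    R = quat_to_rot(q)
    Crot = np.einsum('ia,jb,kc,ld,abcd->ijkl', R, R, R, R, C)
    return (voigt(Crot)[ZERO_MASK] ** 2).sum()

# Demo: take a made-up orthotropic tensor, rotate it by a known quaternion,
# then recover a frame in which the off-pattern entries vanish.
V0 = np.zeros((6, 6))
V0[:3, :3] = [[198, 70, 68], [70, 180, 72], [68, 72, 175]]
for d, v in zip((3, 4, 5), (52, 48, 55)):
    V0[d, d] = v
C_true = full_tensor(V0)
R_hidden = quat_to_rot(np.array([0.9, 0.3, 0.2, 0.1]))
C_meas = np.einsum('ia,jb,kc,ld,abcd->ijkl',
                   R_hidden, R_hidden, R_hidden, R_hidden, C_true)

res = differential_evolution(off_pattern_energy, [(-1, 1)] * 4,
                             args=(C_meas,), seed=0, maxiter=300, tol=1e-10)
print(f"residual off-pattern energy: {res.fun:.3e}")
```

In its natural frame the orthotropic tensor evaluates to zero off-pattern energy exactly; a global optimizer over the four quaternion components recovers such a frame from the rotated tensor.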

  6. Identification of Microorganisms by High Resolution Tandem Mass Spectrometry with Accurate Statistical Significance

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Suffredini, Anthony F.; Sacks, David B.; Yu, Yi-Kuo

    2016-02-01

    Correct and rapid identification of microorganisms is the key to the success of many important applications in health and safety, including, but not limited to, infection treatment, food safety, and biodefense. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is challenging correct microbial identification because of the large number of choices present. To properly disentangle candidate microbes, one needs to go beyond apparent morphology or simple 'fingerprinting'; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptidome profiles of microbes to better separate them and by designing an analysis method that yields accurate statistical significance. Here, we present an analysis pipeline that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using MS/MS data of 81 samples, each composed of a single known microorganism, that the proposed pipeline can correctly identify microorganisms at least at the genus and species levels. We have also shown that the proposed pipeline computes accurate statistical significances, i.e., E-values for identified peptides and unified E-values for identified microorganisms. The proposed analysis pipeline has been implemented in MiCId, a freely available software for Microorganism Classification and Identification. MiCId is available for download at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  7. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method will be used in digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines of several tens of kilometers. The method can accurately measure the difference in delay between two wavelength signals caused by the chromatic dispersion of the fiber, which limits conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this delay difference and then show that a delay-measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 micrometers and 1.55 micrometers along a 50 km fiber using the proposed method. Sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 micrometer range is also shown.

  8. Accurate color synthesis of three-dimensional objects in an image

    NASA Astrophysics Data System (ADS)

    Xin, John H.; Shen, Hui-Liang

    2004-05-01

    Our study deals with color synthesis of a three-dimensional object in an image; i.e., given a single image, a target color can be accurately mapped onto the object such that the color appearance of the synthesized object closely resembles that of the actual one. As it is almost impossible to acquire the complete geometric description of the surfaces of an object in an image, this study attempted to recover the implicit description of geometry for the color synthesis. The description was obtained from either a series of spectral reflectances or the RGB signals at different surface positions on the basis of the dichromatic reflection model. The experimental results showed that this implicit image-based representation is related to the object geometry and is sufficient for accurate color synthesis of three-dimensional objects in an image. The method established is applicable to the color synthesis of both rigid and deformable objects and should contribute to color fidelity in virtual design, manufacturing, and retailing.

  9. An algorithm to extract more accurate stream longitudinal profiles from unfilled DEMs

    NASA Astrophysics Data System (ADS)

    Byun, Jongmin; Seong, Yeong Bae

    2015-08-01

    Morphometric features observed from a stream longitudinal profile (SLP) reflect channel responses to lithological variation and changes in uplift or climate; therefore, they constitute essential indicators in studies of the dynamics between tectonics, climate, and surface processes. The widespread availability of digital elevation models (DEMs) and their processing enable semi-automatic extraction of SLPs as well as additional stream profile parameters, thus reducing the time spent extracting them and simultaneously allowing regional-scale studies of SLPs. However, careful consideration is required to extract SLPs directly from a DEM, because the DEM must be altered by a depression-filling process to ensure the continuity of flows across it. Such alteration inevitably introduces distortions into the SLP, such as stair steps, biased elevation values, and inaccurate stream paths. This paper proposes a new algorithm, called the maximum depth tracing algorithm (MDTA), to extract more accurate SLPs from depression-unfilled DEMs. The MDTA supposes that depressions in DEMs are not necessarily artifacts to be removed, and that elevation values within them are useful for representing the real landscape more accurately. To ensure the continuity of flows even across the unfilled DEM, the MDTA first determines the outlet of each depression and then reverses the flow directions of the cells on the line of maximum depth within each depression, beginning from the outlet and moving toward the sink. It also calculates flow accumulation without disruption across the unfilled DEM. Comparative analysis with the profiles extracted by the hydrologic functions implemented in ArcGIS™ was performed to illustrate the benefits of the MDTA. It shows that the MDTA provides more accurate stream paths in depression areas, and consequently reduces distortions of the SLPs derived from those paths, such as the exaggerated elevation values and negatively biased slopes commonly observed in the SLPs.

  10. Rapid and accurate peripheral nerve detection using multipoint Raman imaging (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Kumamoto, Yasuaki; Minamikawa, Takeo; Kawamura, Akinori; Matsumura, Junichi; Tsuda, Yuichiro; Ukon, Juichiro; Harada, Yoshinori; Tanaka, Hideo; Takamatsu, Tetsuro

    2017-02-01

    Nerve-sparing surgery is essential to avoid functional deficits of the limbs and organs. Raman scattering, a label-free, minimally invasive, and accurate modality, is one of the best candidate technologies for detecting nerves in nerve-sparing surgery. However, Raman scattering imaging is too time-consuming to be employed in surgery. Here we present a rapid and accurate nerve visualization method using a multipoint Raman imaging technique that enables simultaneous spectral measurement at different locations (n=32) on a sample. Five seconds are sufficient for measuring n=32 spectra with good S/N from a given tissue. Principal component regression discriminant analysis discriminated spectra obtained from peripheral nerves (n=863 from n=161 myelinated nerves) and connective tissue (n=828 from n=121 tendons) with a sensitivity and specificity of 88.3% and 94.8%, respectively. Because the spatial information in a multipoint-Raman-derived tissue discrimination image is too sparse to visualize nerve arrangement, we compensated with morphological information obtained from a bright-field image. When merged with the sparse tissue discrimination image, a morphological image of a sample shows what portion of the Raman measurement points in an arbitrary structure is determined to be nerve. Setting the nerve detection criterion at 40% or more "nerve" points within a structure, myelinated nerves (n=161) and tendons (n=121) were discriminated with a sensitivity and specificity of 97.5%. The presented technique, utilizing a sparse multipoint Raman image and a bright-field image, enables rapid, safe, and accurate detection of peripheral nerves.

  11. Rapid and Accurate Multiple Testing Correction and Power Estimation for Millions of Correlated Markers

    PubMed Central

    Han, Buhm; Kang, Hyun Min; Eskin, Eleazar

    2009-01-01

    With the development of high-throughput sequencing and genotyping technologies, the number of markers collected in genetic association studies is growing rapidly, increasing the importance of methods for correcting for multiple hypothesis testing. The permutation test is widely considered the gold standard for accurate multiple testing correction, but it is often computationally impractical for these large datasets. Recently, several studies proposed efficient alternative approaches to the permutation test based on the multivariate normal distribution (MVN). However, they cannot accurately correct for multiple testing in genome-wide association studies for two reasons. First, these methods require partitioning of the genome into many disjoint blocks and ignore all correlations between markers from different blocks. Second, the true null distribution of the test statistic often fails to follow the asymptotic distribution at the tails of the distribution. We propose an accurate and efficient method for multiple testing correction in genome-wide association studies—SLIDE. Our method accounts for all correlation within a sliding window and corrects for the departure of the true null distribution of the statistic from the asymptotic distribution. In simulations using the Wellcome Trust Case Control Consortium data, the error rate of SLIDE's corrected p-values is more than 20 times smaller than the error rate of the previous MVN-based methods' corrected p-values, while SLIDE is orders of magnitude faster than the permutation test and other competing methods. We also extend the MVN framework to the problem of estimating the statistical power of an association study with correlated markers and propose an efficient and accurate power estimation method SLIP. SLIP and SLIDE are available at http://slide.cs.ucla.edu. PMID:19381255
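
    SLIDE's sliding-window machinery is beyond a short example, but the underlying MVN idea - correcting the smallest p-value against the joint null of correlated test statistics rather than treating them as independent - can be sketched by Monte Carlo. The correlation values below are hypothetical:

```python
import numpy as np

def mvn_corrected_p(z_obs, corr, n_draws=200_000, seed=0):
    """Corrected p-value for the most significant of m correlated z-scores:
    P(max_i |Z_i| >= z_obs) under Z ~ N(0, corr), estimated by Monte Carlo."""
    rng = np.random.default_rng(seed)
    L = np.linalg.cholesky(corr)                      # corr must be positive definite
    z = rng.standard_normal((n_draws, corr.shape[0])) @ L.T
    return np.mean(np.abs(z).max(axis=1) >= z_obs)

# Three strongly correlated markers vs. three independent ones: correlation
# shrinks the effective number of tests, hence the corrected p-value.
high = np.full((3, 3), 0.95)
np.fill_diagonal(high, 1.0)
p_corr = mvn_corrected_p(2.5, high)
p_indep = mvn_corrected_p(2.5, np.eye(3))
print(p_corr, p_indep)
```

A Bonferroni-style correction would treat both cases like the independent one, which is exactly the conservatism that MVN-based methods such as SLIDE avoid.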

  12. The Application of FT-IR Spectroscopy for Quality Control of Flours Obtained from Polish Producers

    PubMed Central

    Ceglińska, Alicja; Reder, Magdalena; Ciemniewska-Żytkiewicz, Hanna

    2017-01-01

    Samples of wheat, spelt, rye, and triticale flours produced by different Polish mills were studied by both classic chemical methods and FT-IR MIR spectroscopy. An attempt was made to statistically correlate FT-IR spectral data with reference data regarding the content of various components, for example, proteins, fats, ash, and fatty acids, as well as properties such as moisture, falling number, and energetic value. This correlation resulted in calibrated and validated statistical models for versatile evaluation of unknown flour samples. The calibration data set was used to construct calibration models using the CSR and PLS methods with the leave-one-out cross-validation technique. The calibrated models were validated with a validation data set. The results obtained confirmed that application of statistical models based on MIR spectral data is a robust, accurate, precise, rapid, inexpensive, and convenient methodology for determining flour characteristics, as well as for detecting the content of selected flour ingredients. The obtained models' characteristics were as follows: R2 = 0.97, PRESS = 2.14; R2 = 0.96, PRESS = 0.69; R2 = 0.95, PRESS = 1.27; R2 = 0.94, PRESS = 0.76, for content of proteins, lipids, ash, and moisture level, respectively. The best results for CSR models were obtained for protein, ash, and crude fat (R2 = 0.86, 0.82, and 0.78, resp.). PMID:28243483

  13. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  14. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  15. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  16. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  17. 38 CFR 4.46 - Accurate measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... RATING DISABILITIES Disability Ratings The Musculoskeletal System § 4.46 Accurate measurement. Accurate... indispensable in examinations conducted within the Department of Veterans Affairs. Muscle atrophy must also be...

  18. Spectral multivariate calibration without laboratory prepared or determined reference analyte values.

    PubMed

    Ottaway, Josh; Farrell, Jeremy A; Kalivas, John H

    2013-02-05

    An essential part to calibration is establishing the analyte calibration reference samples. These samples must characterize the sample matrix and measurement conditions (chemical, physical, instrumental, and environmental) of any sample to be predicted. Calibration usually requires measuring spectra for numerous reference samples in addition to determining the corresponding analyte reference values. Both tasks are typically time-consuming and costly. This paper reports on a method named pure component Tikhonov regularization (PCTR) that does not require laboratory prepared or determined reference values. Instead, an analyte pure component spectrum is used in conjunction with nonanalyte spectra for calibration. Nonanalyte spectra can be from different sources including pure component interference samples, blanks, and constant analyte samples. The approach is also applicable to calibration maintenance when the analyte pure component spectrum is measured in one set of conditions and nonanalyte spectra are measured in new conditions. The PCTR method balances the trade-offs between calibration model shrinkage and the degree of orthogonality to the nonanalyte content (model direction) in order to obtain accurate predictions. Using visible and near-infrared (NIR) spectral data sets, the PCTR results are comparable to those obtained using ridge regression (RR) with reference calibration sets. The flexibility of PCTR also allows including reference samples if such samples are available.
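
    The constrained-Tikhonov idea behind PCTR - shrink the model vector while holding it orthogonal to nonanalyte variation and forcing a unit response to the analyte pure-component spectrum - admits a closed-form sketch. The spectra below are synthetic Gaussians, not data from the paper, and this is only one plausible formulation of the trade-off described:

```python
import numpy as np

def pctr_like(s, N, lam):
    """Model vector b minimizing ||N b||^2 + lam*||b||^2 subject to s @ b = 1,
    where s is the analyte pure-component spectrum and the rows of N are
    nonanalyte spectra (interferents, blanks).  Closed form via a Lagrange
    multiplier: b = A^{-1} s / (s^T A^{-1} s) with A = N^T N + lam*I."""
    A = N.T @ N + lam * np.eye(len(s))
    v = np.linalg.solve(A, s)
    return v / (s @ v)

rng = np.random.default_rng(1)
w = np.linspace(0, 1, 120)                         # pseudo-wavelength axis
s = np.exp(-((w - 0.4) / 0.05) ** 2)               # analyte pure spectrum
interf = np.exp(-((w - 0.7) / 0.08) ** 2)          # interferent spectrum
N = np.outer(rng.uniform(0.5, 2.0, 30), interf)    # nonanalyte samples

b = pctr_like(s, N, lam=1e-3)
# A mixture of 0.8 analyte + 1.5 interferent should predict close to 0.8,
# since b is nearly orthogonal to the interferent.
x = 0.8 * s + 1.5 * interf
print(round(float(x @ b), 3))
```

No reference analyte values appear anywhere in the calibration: only the pure-component spectrum and nonanalyte spectra, which is the point of the method.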

  19. A multiscale red blood cell model with accurate mechanics, rheology, and dynamics.

    PubMed

    Fedosov, Dmitry A; Caswell, Bruce; Karniadakis, George Em

    2010-05-19

    Red blood cells (RBCs) have highly deformable viscoelastic membranes exhibiting complex rheological response and rich hydrodynamic behavior governed by special elastic and bending properties and by the external/internal fluid and membrane viscosities. We present a multiscale RBC model that is able to predict RBC mechanics, rheology, and dynamics in agreement with experiments. Based on an analytic theory, the modeled membrane properties can be uniquely related to the experimentally established RBC macroscopic properties without any adjustment of parameters. The RBC linear and nonlinear elastic deformations match those obtained in optical-tweezers experiments. The rheological properties of the membrane are compared with those obtained in optical magnetic twisting cytometry, membrane thermal fluctuations, and creep followed by cell recovery. The dynamics of RBCs in shear and Poiseuille flows is tested against experiments and theoretical predictions, and the applicability of the latter is discussed. Our findings clearly indicate that a purely elastic model for the membrane cannot accurately represent the RBC's rheological properties and its dynamics, and therefore accurate modeling of a viscoelastic membrane is necessary. Copyright 2010 Biophysical Society. Published by Elsevier Inc. All rights reserved.

  20. Feedback about More Accurate versus Less Accurate Trials: Differential Effects on Self-Confidence and Activation

    ERIC Educational Resources Information Center

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-01-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On Day 1, participants performed a golf putting task under one of…

  1. Differences in liver stiffness values obtained with new ultrasound elastography machines and Fibroscan: A comparative study.

    PubMed

    Piscaglia, Fabio; Salvatore, Veronica; Mulazzani, Lorenzo; Cantisani, Vito; Colecchia, Antonio; Di Donato, Roberto; Felicani, Cristina; Ferrarini, Alessia; Gamal, Nesrine; Grasso, Valentina; Marasco, Giovanni; Mazzotta, Elena; Ravaioli, Federico; Ruggieri, Giacomo; Serio, Ilaria; Sitouok Nkamgho, Joules Fabrice; Serra, Carla; Festi, Davide; Schiavone, Cosima; Bolondi, Luigi

    2017-07-01

    Whether Fibroscan thresholds can be immediately adopted for none, some or all other shear wave elastography techniques has not been tested. The aim of the present study was to test the concordance of findings obtained from 7 of the most recent ultrasound elastography machines with respect to Fibroscan. Sixteen patients with hepatitis C virus-related fibrosis ≥2 and reliable results at Fibroscan were investigated in two intercostal spaces using 7 different elastography machines. Coefficients of both precision (an index of data dispersion) and accuracy (an index of bias correction expressing the magnitude of change with respect to the reference) were calculated. Median stiffness values differed among the different machines, as did coefficients of precision (range 0.54-0.72) and accuracy (range 0.28-0.87). When the average of the measurements from two intercostal spaces was considered, coefficients of precision increased significantly with all machines (range 0.72-0.90), whereas coefficients of accuracy improved less consistently and by a smaller degree (range 0.40-0.99). The present results showed only moderate concordance between the majority of elastography machines and the Fibroscan results, ruling out the immediate universal adoption of Fibroscan thresholds for defining liver fibrosis staging on all new machines. Copyright © 2017 Editrice Gastroenterologica Italiana S.r.l. Published by Elsevier Ltd. All rights reserved.

  2. Uncertainties in obtaining high reliability from stress-strength models

    NASA Technical Reports Server (NTRS)

    Neal, Donald M.; Matthews, William T.; Vangel, Mark G.

    1992-01-01

    There has been recent interest in determining high statistical reliability in the risk assessment of aircraft components. The potential consequences of incorrectly assuming a particular statistical distribution for the stress or strength data used in obtaining high reliability values are identified. The reliability is defined as the probability that the strength exceeds the stress over the range of stress values; this method is often referred to as the stress-strength model. A sensitivity analysis comparing reliability results was performed in order to evaluate the effects of assuming specific statistical distributions. Both known population distributions, and distributions that differed slightly from the known ones, were considered. Results showed substantial differences in reliability estimates even for almost nondetectable differences in the assumed distributions. These differences represent a potential problem in using the stress-strength model for high reliability computations, since in practice it is impossible to ever know the exact (population) distribution. An alternative reliability computation procedure is examined, involving determination of a lower bound on the reliability values using extreme value distributions. This procedure reduces the possibility of obtaining nonconservative reliability estimates. Results indicated the method can provide conservative bounds when computing high reliability.
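
    The stress-strength reliability P(strength > stress) has a closed form when both variables are normal, and the abstract's sensitivity point - that a barely detectable change of distribution moves high-reliability estimates substantially - can be illustrated by Monte Carlo. All parameter values below are hypothetical:

```python
import numpy as np
from math import erf, sqrt

def reliability_normal(mu_s, sd_s, mu_l, sd_l):
    """P(strength > stress) when both are normal:
    Phi((mu_s - mu_l) / sqrt(sd_s^2 + sd_l^2))."""
    z = (mu_s - mu_l) / sqrt(sd_s ** 2 + sd_l ** 2)
    return 0.5 * (1 + erf(z / sqrt(2)))

rng = np.random.default_rng(0)
n = 1_000_000
stress = rng.normal(50, 5, n)

# Normal strength model vs. a Student-t strength model scaled to the SAME
# mean and variance: nearly indistinguishable in the bulk, heavier in the tails.
r_normal = reliability_normal(80, 5, 50, 5)
strength_t = 80 + 5 * rng.standard_t(df=4, size=n) / np.sqrt(2)
r_t = np.mean(strength_t > stress)
print(f"normal model: {r_normal:.6f}, heavy-tailed model: {r_t:.6f}")
```

Because high reliability lives entirely in the tails, the two models disagree by orders of magnitude in failure probability while agreeing on mean and variance, which is the hazard the abstract warns about.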

  3. Measuring Value-at-Risk and Expected Shortfall of crude oil portfolio using extreme value theory and vine copula

    NASA Astrophysics Data System (ADS)

    Yu, Wenhua; Yang, Kun; Wei, Yu; Lei, Likun

    2018-01-01

    Volatility of crude oil prices has important impacts on the steady and sustainable development of the world real economy. Thus it is of great academic and practical significance to model and measure the volatility and risk of crude oil markets accurately. This paper aims to measure the Value-at-Risk (VaR) and Expected Shortfall (ES) of a portfolio consisting of four crude oil assets by using GARCH-type models, extreme value theory (EVT) and vine copulas. The backtesting results show that the combination of GARCH-type-EVT models and vine copula methods can produce accurate risk measures of the oil portfolio. The mixed R-vine copula is more flexible and superior to other vine copulas. Different GARCH-type models, which can capture the long-memory and/or leverage effects of oil price volatility, nevertheless yield similar marginal distributions of the oil returns.
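    The two risk measures named above have simple definitions that a plain historical-simulation sketch makes concrete. The returns below are simulated heavy-tailed placeholders, not oil data, and the paper's GARCH-EVT-vine-copula machinery is deliberately not reproduced here; only VaR and ES themselves are illustrated.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical daily returns for four assets (Student-t, heavy tails)
returns = rng.standard_t(df=4, size=(2000, 4)) * 0.01
weights = np.full(4, 0.25)            # equally weighted portfolio
port = returns @ weights

alpha = 0.05
var = -np.quantile(port, alpha)       # 95% Value-at-Risk (reported positive)
es = -port[port <= -var].mean()       # Expected Shortfall: mean loss beyond VaR
```

    By construction ES is at least as large as VaR, since it averages the losses in the tail beyond the VaR threshold.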

  4. The influence of attention on value integration.

    PubMed

    Kunar, Melina A; Watson, Derrick G; Tsetsos, Konstantinos; Chater, Nick

    2017-08-01

    People often have to make decisions based on many pieces of information. Previous work has found that people are able to integrate values presented in a rapid serial visual presentation (RSVP) stream to make informed judgements on the overall stream value (Tsetsos et al. Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659-9664, 2012). It is also well known that attentional mechanisms influence how people process information. However, it is unknown how attentional factors impact value judgements of integrated material. The current study is the first of its kind to investigate whether value judgements are influenced by attentional processes when assimilating information. Experiments 1-3 examined whether the attentional salience of an item within an RSVP stream affected judgements of overall stream value. The results showed that the presence of an irrelevant high or low value salient item biased people to judge the stream as having a higher or lower overall mean value, respectively. Experiments 4-7 directly tested Tsetsos et al.'s (Proceedings of the National Academy of Sciences of the United States of America, 109(24), 9659-9664, 2012) theory examining whether extreme values in an RSVP stream become over-weighted, thereby capturing attention more than other values in the stream. The results showed that the presence of both a high (Experiments 4, 6 and 7) and a low (Experiment 5) value outlier captures attention, leading to less accurate reporting of subsequent items in the stream. Taken together, the results showed that valuations can be influenced by attentional processes, which can lead to less accurate subjective judgements.

  5. Time-Accurate Numerical Prediction of Free Flight Aerodynamics of a Finned Projectile

    DTIC Science & Technology

    2005-09-01

    develop (with fewer dollars) more lethal and effective munitions. The munitions must stay abreast of the latest technology available to our...consuming. Computer simulations can and have provided an effective means of determining the unsteady aerodynamics and flight mechanics of guided projectile...Recently, the time-accurate technique was used to obtain improved results for Magnus moment and roll damping moment of a spinning projectile at transonic

  6. A new automatic blood pressure kit auscultates for accurate reading with a smartphone: A diagnostic accuracy study.

    PubMed

    Wu, Hongjun; Wang, Bingjian; Zhu, Xinpu; Chu, Guang; Zhang, Zhi

    2016-08-01

    The accuracy of the widely used oscillometric automated blood pressure (BP) monitor has been repeatedly questioned. A novel BP kit named Accutension, which adopts the Korotkoff auscultation method, was therefore devised. Accutension works with a miniature microphone, a pressure sensor, and a smartphone. The BP values are automatically displayed on the smartphone screen through the installed App. Data recorded in the phone can be played back and reconfirmed after measurement. They can also be uploaded and saved to the iCloud. The accuracy and consistency of this novel electronic auscultatory sphygmomanometer were preliminarily verified here. Thirty-two subjects were included and 82 qualified readings were obtained. The mean differences ± SD for systolic and diastolic BP readings between Accutension and the mercury sphygmomanometer were 0.87 ± 2.86 and -0.94 ± 2.93 mm Hg. Agreements between Accutension and the mercury sphygmomanometer were highly significant for systolic (ICC = 0.993, 95% confidence interval (CI): 0.989-0.995) and diastolic (ICC = 0.987, 95% CI: 0.979-0.991) readings. In conclusion, Accutension worked accurately based on our pilot study data. The difference was acceptable. ICC and Bland-Altman plot charts showed good agreement with manual measurements. Systolic readings of Accutension were slightly higher than those of manual measurement, while diastolic readings were slightly lower. One possible reason is that Accutension captured the first and the last Korotkoff sounds more sensitively than the human ear during manual measurement and avoided sound missing, so that it might be more accurate than the traditional mercury sphygmomanometer. By documenting and analyzing trends in BP values, Accutension supports the management of hypertension and therefore contributes to mobile health services.
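    The agreement statistics quoted above (mean difference ± SD, plus the Bland-Altman limits behind the plots) are straightforward to compute. The paired readings below are simulated to roughly match the reported systolic bias; they are synthetic placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical paired systolic readings (mm Hg): device vs. mercury reference
reference = rng.normal(125.0, 15.0, size=82)
device = reference + rng.normal(0.87, 2.86, size=82)  # bias/SD as reported

diff = device - reference
bias = diff.mean()                      # mean difference (device - reference)
sd = diff.std(ddof=1)                   # SD of the differences
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # 95% limits of agreement
```

    A Bland-Altman plot is simply `diff` plotted against the pairwise means, with horizontal lines at `bias` and the two limits of agreement.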

  7. [Gas Concentration Measurement Based on the Integral Value of Absorptance Spectrum].

    PubMed

    Liu, Hui-jun; Tao, Shao-hua; Yang, Bing-chu; Deng, Hong-gui

    2015-12-01

    of the absorptance spectrum varied with temperature, and the calculation error for the integral value fluctuates with the temperature range. Therefore, in gas measurements using integral values of the absorptance spectrum, a suitable temperature variation range should be selected to obtain a more accurate measurement result.

  8. Multifractal Value at Risk model

    NASA Astrophysics Data System (ADS)

    Lee, Hojin; Song, Jae Wook; Chang, Woojin

    2016-06-01

    In this paper, a new Value at Risk (VaR) model is proposed and investigated. We consider the multifractal property of financial time series and develop a multifractal Value at Risk (MFVaR) model. The MFVaR introduced in this paper is analytically tractable and not based on simulation. An empirical study showed that MFVaR can provide more stable and accurate forecasting performance in volatile financial markets where large losses can be incurred. This implies that our multifractal VaR works well for the risk measurement of extreme credit events.

  9. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2002-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  10. Accurate Thermal Stresses for Beams: Normal Stress

    NASA Technical Reports Server (NTRS)

    Johnson, Theodore F.; Pilkey, Walter D.

    2003-01-01

    Formulations for a general theory of thermoelasticity to generate accurate thermal stresses for structural members of aeronautical vehicles were developed in 1954 by Boley. The formulation also provides three normal stresses and a shear stress along the entire length of the beam. The Poisson effect of the lateral and transverse normal stresses on a thermally loaded beam is taken into account in this theory by employing an Airy stress function. The Airy stress function enables the reduction of the three-dimensional thermal stress problem to a two-dimensional one. Numerical results from the general theory of thermoelasticity are compared to those obtained from strength of materials. It is concluded that the theory of thermoelasticity for prismatic beams proposed in this paper can be used instead of strength of materials when precise stress results are desired.

  11. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    NASA Astrophysics Data System (ADS)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is of the order of several arcsec. This raw attitude is the starting point for further attitude reconstruction. The OGA will use as inputs the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimate than the pure star tracker measurement. Therefore, the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimate. Finally, these different attitude determination techniques will be compared in terms of accuracy, robustness, speed, and memory required, in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.

  12. The contribution of an asthma diagnostic consultation service in obtaining an accurate asthma diagnosis for primary care patients: results of a real-life study.

    PubMed

    Gillis, R M E; van Litsenburg, W; van Balkom, R H; Muris, J W; Smeenk, F W

    2017-05-19

    Previous studies showed that general practitioners have problems in diagnosing asthma accurately, resulting in both under- and overdiagnosis. To support general practitioners in their diagnostic process, an asthma diagnostic consultation service was set up. We evaluated the performance of this asthma diagnostic consultation service by analysing the (dis)concordance between the general practitioners' working hypotheses and the asthma diagnostic consultation service diagnoses, and the possible consequences this had on the patients' pharmacotherapy. In total, 659 patients were included in this study. At this service the patients' medical history was taken and a physical examination and a histamine challenge test were carried out. We compared the general practitioners' working hypotheses with the asthma diagnostic consultation service diagnoses and the change in medication that was incurred. In 52% (n = 340) an asthma diagnosis was excluded. The diagnosis was confirmed in 42% (n = 275). Furthermore, chronic rhinitis was diagnosed in 40% (n = 261) of the patients, whereas this was noted in 25% (n = 163) by their general practitioner. The adjusted diagnosis resulted in a change of medication for more than half of all patients. In 10% (n = 63) medication was started because of a new asthma diagnosis. The 'one-stop-shop' principle was met for 53% of patients, and 91% (n = 599) were referred back to their general practitioner, mostly within 6 months. Only 6% (n = 41) remained under the control of the asthma diagnostic consultation service because of severe unstable asthma. In conclusion, the asthma diagnostic consultation service helped general practitioners significantly in setting accurate diagnoses for their patients with an asthma hypothesis. This may help diminish the problem of over- and underdiagnosis and may result in more appropriate treatment regimens.

  13. Accurate determination of the fine-structure intervals in the 3P ground states of C-13 and C-12 by far-infrared laser magnetic resonance

    NASA Technical Reports Server (NTRS)

    Cooksy, A. L.; Saykally, R. J.; Brown, J. M.; Evenson, K. M.

    1986-01-01

    Accurate values are presented for the fine-structure intervals in the 3P ground state of neutral atomic C-12 and C-13 as obtained from laser magnetic resonance spectroscopy. The rigorous analysis of C-13 hyperfine structure, the measurement of resonant fields for C-12 transitions at several additional far-infrared laser frequencies, and the increased precision of the C-12 measurements, permit significant improvement in the evaluation of these energies relative to earlier work. These results will expedite the direct and precise measurement of these transitions in interstellar sources and should assist in the determination of the interstellar C-12/C-13 abundance ratio.

  14. Machine Learning of Accurate Energy-Conserving Molecular Force Fields

    NASA Astrophysics Data System (ADS)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert; GDML Collaboration

    Efficient and accurate access to the Born-Oppenheimer potential energy surface (PES) is essential for long time scale molecular dynamics (MD) simulations. Using conservation of energy - a fundamental property of closed classical and quantum mechanical systems - we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio MD trajectories (AIMD). The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules including benzene, toluene, naphthalene, malonaldehyde, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative MD simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.

  15. The Data Evaluation for Obtaining Accuracy and Reliability

    NASA Astrophysics Data System (ADS)

    Kim, Chang Geun; Chae, Kyun Shik; Lee, Sang Tae; Bhang, Gun Woong

    2012-11-01

    Numerous scientific measurement results flood in from papers, data books, etc., with the rapid growth of the internet. We encounter many different measurement results for the same measurand, and at that moment must choose the most reliable one among them. But choosing accurate and reliable data is not as easy as picking a flavor at an ice cream parlor. Even expert users find it difficult to distinguish accurate and reliable scientific data from the huge volume of measurement results. For this reason, data evaluation is becoming more important with the rapid growth of the internet and globalization. Furthermore, the expression of measurement results is not standardized. To address these needs, international efforts have been strengthened. As a first step, globally harmonized terminology for use in metrology and the expression of uncertainty in measurement were published by ISO. These methods have spread widely across many areas of science to obtain accuracy and reliability in measurement. In this paper, the GUM, SRD, and data evaluation of atomic collisions are introduced.
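    A minimal sketch of the kind of data evaluation the GUM formalizes: several measurement results of the same measurand, each with a standard uncertainty, combined by inverse-variance weighting, with a chi-squared consistency check. The values below are hypothetical, not from any evaluated data set.

```python
import math

# Hypothetical measurements of one measurand with standard uncertainties
values = [9.81, 9.79, 9.83]
u = [0.02, 0.05, 0.03]

# Inverse-variance weights: more precise results count more
w = [1.0 / ui**2 for ui in u]
mean = sum(wi * xi for wi, xi in zip(w, values)) / sum(w)
u_mean = 1.0 / math.sqrt(sum(w))    # standard uncertainty of the weighted mean

# Consistency check: reduced chi-squared (~1 means uncertainties are plausible)
chi2 = sum(wi * (xi - mean) ** 2 for wi, xi in zip(w, values)) / (len(values) - 1)
```

    A reduced chi-squared far above 1 would signal that the stated uncertainties are mutually inconsistent, which is exactly the situation a data evaluation has to flag.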

  16. Accurate Semilocal Density Functional for Condensed-Matter Physics and Quantum Chemistry.

    PubMed

    Tao, Jianmin; Mo, Yuxiang

    2016-08-12

    Most density functionals have been developed by imposing the known exact constraints on the exchange-correlation energy, or by a fit to a set of properties of selected systems, or by both. However, accurate modeling of the conventional exchange hole presents a great challenge, due to the delocalization of the hole. Making use of the property that the hole can be made localized under a general coordinate transformation, here we derive an exchange hole from the density matrix expansion, while the correlation part is obtained by imposing the low-density limit constraint. From the hole, a semilocal exchange-correlation functional is calculated. Our comprehensive test shows that this functional can achieve remarkable accuracy for diverse properties of molecules, solids, and solid surfaces, substantially improving upon the nonempirical functionals proposed in recent years. Accurate semilocal functionals based on their associated holes are physically appealing and practically useful for developing nonlocal functionals.

  17. Benchmarking singlet and triplet excitation energies of molecular semiconductors for singlet fission: Tuning the amount of HF exchange and adjusting local correlation to obtain accurate functionals for singlet-triplet gaps

    NASA Astrophysics Data System (ADS)

    Brückner, Charlotte; Engels, Bernd

    2017-01-01

    Vertical and adiabatic singlet and triplet excitation energies of molecular p-type semiconductors calculated with various DFT functionals and wave-function based approaches are benchmarked against MS-CASPT2/cc-pVTZ reference values. A special focus lies on the singlet-triplet gaps that are very important in the process of singlet fission. Singlet fission has the potential to boost device efficiencies of organic solar cells, but the scope of existing singlet-fission compounds is still limited. A computational prescreening of candidate molecules could enlarge it; yet it requires efficient methods that accurately predict singlet and triplet excitation energies. Different DFT formulations (Tamm-Dancoff approximation, linear response time-dependent DFT, Δ-SCF) and spin scaling schemes along with several ab initio methods (CC2, ADC(2)/MP2, CIS(D), CIS) are evaluated. While wave-function based methods yield rather reliable singlet-triplet gaps, many DFT functionals are shown to systematically underestimate triplet excitation energies. To gain insight, the impact of exact exchange and correlation is addressed in detail.

  18. Accurate segmentation framework for the left ventricle wall from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Sliman, H.; Khalifa, F.; Elnakib, A.; Soliman, A.; Beache, G. M.; Gimel'farb, G.; Emam, A.; Elmaghraby, A.; El-Baz, A.

    2013-10-01

    We propose a novel, fast, robust, bi-directional coupled parametric deformable model to segment the left ventricle (LV) wall borders using first- and second-order visual appearance features. These features are embedded in a new stochastic external force that preserves the topology of LV wall to track the evolution of the parametric deformable models control points. To accurately estimate the marginal density of each deformable model control point, the empirical marginal grey level distributions (first-order appearance) inside and outside the boundary of the deformable model are modeled with adaptive linear combinations of discrete Gaussians (LCDG). The second order visual appearance of the LV wall is accurately modeled with a new rotationally invariant second-order Markov-Gibbs random field (MGRF). We tested the proposed segmentation approach on 15 data sets in 6 infarction patients using the Dice similarity coefficient (DSC) and the average distance (AD) between the ground truth and automated segmentation contours. Our approach achieves a mean DSC value of 0.926±0.022 and AD value of 2.16±0.60 compared to two other level set methods that achieve 0.904±0.033 and 0.885±0.02 for DSC; and 2.86±1.35 and 5.72±4.70 for AD, respectively.
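    The Dice similarity coefficient (DSC) used to score the segmentation above has a one-line definition: twice the overlap of two binary masks divided by the sum of their sizes. The masks below are toy placeholders, not cardiac data.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a = a.astype(bool)
    b = b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 1-D "masks" standing in for ground-truth and automated contours
gt = np.array([0, 1, 1, 1, 1, 0, 0, 0])
pred = np.array([0, 0, 1, 1, 1, 1, 0, 0])
score = dice(gt, pred)
```

    A DSC of 1.0 means perfect overlap; the reported 0.926±0.022 therefore indicates near-complete agreement with the manual contours.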

  19. The high cost of accurate knowledge.

    PubMed

    Sutcliffe, Kathleen M; Weber, Klaus

    2003-05-01

    Many business thinkers believe it's the role of senior managers to scan the external environment to monitor contingencies and constraints, and to use that precise knowledge to modify the company's strategy and design. As these thinkers see it, managers need accurate and abundant information to carry out that role. According to that logic, it makes sense to invest heavily in systems for collecting and organizing competitive information. Another school of pundits contends that, since today's complex information often isn't precise anyway, it's not worth going overboard with such investments. In other words, it's not the accuracy and abundance of information that should matter most to top executives--rather, it's how that information is interpreted. After all, the role of senior managers isn't just to make decisions; it's to set direction and motivate others in the face of ambiguities and conflicting demands. Top executives must interpret information and communicate those interpretations--they must manage meaning more than they must manage information. So which of these competing views is the right one? Research conducted by academics Sutcliffe and Weber found that how accurate senior executives are about their competitive environments is indeed less important for strategy and corresponding organizational changes than the way in which they interpret information about their environments. Investments in shaping those interpretations, therefore, may create a more durable competitive advantage than investments in obtaining and organizing more information. And what kinds of interpretations are most closely linked with high performance? Their research suggests that high performers respond positively to opportunities, yet they aren't overconfident in their abilities to take advantage of those opportunities.

  20. Accurate phylogenetic classification of DNA fragments based onsequence composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.

  1. Accurate measurement of junctional conductance between electrically coupled cells with dual whole-cell voltage-clamp under conditions of high series resistance.

    PubMed

    Hartveit, Espen; Veruki, Margaret Lin

    2010-03-15

    Accurate measurement of the junctional conductance (G(j)) between electrically coupled cells can provide important information about the functional properties of coupling. With the development of tight-seal, whole-cell recording, it became possible to use dual, single-electrode voltage-clamp recording from pairs of small cells to measure G(j). Experiments that require reduced perturbation of the intracellular environment can be performed with high-resistance pipettes or the perforated-patch technique, but an accompanying increase in series resistance (R(s)) compromises voltage-clamp control and reduces the accuracy of G(j) measurements. Here, we present a detailed analysis of methodologies available for accurate determination of steady-state G(j) and related parameters under conditions of high R(s), using continuous or discontinuous single-electrode voltage-clamp (CSEVC or DSEVC) amplifiers to quantify the parameters of different equivalent electrical circuit model cells. Both types of amplifiers can provide accurate measurements of G(j), with errors less than 5% for a wide range of R(s) and G(j) values. However, CSEVC amplifiers need to be combined with R(s)-compensation or mathematical correction for the effects of nonzero R(s) and finite membrane resistance (R(m)). R(s)-compensation is difficult for higher values of R(s) and leads to instability that can damage the recorded cells. Mathematical correction for R(s) and R(m) yields highly accurate results, but depends on accurate estimates of R(s) throughout an experiment. DSEVC amplifiers display very accurate measurements over a larger range of R(s) values than CSEVC amplifiers and have the advantage that knowledge of R(s) is unnecessary, suggesting that they are preferable for long-duration experiments and/or recordings with high R(s). Copyright (c) 2009 Elsevier B.V. All rights reserved.
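    The bias the abstract attributes to nonzero series resistance can be seen in a toy linear circuit, solved with Kirchhoff's current law. This is not the authors' analysis or equivalent-circuit model; all component values are hypothetical, and the point is only that the conventional estimate G(j) = -ΔI2/ΔV1 underestimates the true junctional conductance when R(s) and finite R(m) are ignored.

```python
import numpy as np

# Hypothetical two-cell circuit: each pipette clamps through a series
# resistance Rs, each cell leaks to ground through Rm, cells couple via Rj.
Rs1 = Rs2 = 20e6      # series resistance (ohm)
Rm1 = Rm2 = 500e6     # membrane resistance (ohm)
Rj = 1e9              # junctional resistance (ohm) -> true Gj = 1 nS
dV = 10e-3            # voltage step applied to cell 1 (V)

def solve(vc1, vc2):
    """Node voltages (U1, U2) from Kirchhoff's current law at both cells."""
    A = np.array([[1/Rs1 + 1/Rm1 + 1/Rj, -1/Rj],
                  [-1/Rj, 1/Rs2 + 1/Rm2 + 1/Rj]])
    b = np.array([vc1 / Rs1, vc2 / Rs2])
    return np.linalg.solve(A, b)

# Protocol: hold both cells at 0 mV, step cell 1 by dV, and read the
# current change in pipette 2: I2 = (Vc2 - U2) / Rs2.
_, u2 = solve(dV, 0.0)
dI2 = (0.0 - u2) / Rs2
gj_naive = -dI2 / dV   # conventional estimate; ignores Rs and Rm
```

    With these numbers the naive estimate comes out roughly 10% below the true 1 nS; shrinking Rs toward zero recovers the true value, which is the motivation for the R(s)-compensation and mathematical-correction schemes discussed above.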

  2. Joint profiling of greenhouse gases, isotopes, thermodynamic variables, and wind from space by combined microwave and IR laser occultation: the ACCURATE concept

    NASA Astrophysics Data System (ADS)

    Kirchengast, G.; Schweitzer, S.

    2008-12-01

    aerosol extinction, cloud layering, and turbulence are obtained. All profiles come with accurate height knowledge (< 10 m uncertainty), since measuring height as a function of time is intrinsic to the MW occultation part of ACCURATE. The presentation will introduce ACCURATE along the lines above, with emphasis on the climate science value and the new IR laser occultation capability. The focus will then be on retrieval performance analysis results obtained so far, in particular regarding the profiles of GHGs, isotopes, and wind. The results provide evidence that the GHG and isotope profiles can generally be retrieved within 5-35 km outside clouds with < 1% to 5% rms accuracy at 1-2 km vertical resolution, and wind with < 2 m/s accuracy. Monthly mean climatological profiles, assuming ~40 profiles per climatologic grid box per month, are found unbiased (free of time-varying biases) and at < 0.2% to 0.5% rms accuracy. These encouraging results are discussed in light of the potential of the ACCURATE technique to provide benchmark data for future monitoring of climate, GHGs, and chemistry variability and change. European science and demonstration activities are outlined, including international participation opportunities.

  3. Anchoring the Population II Distance Scale: Accurate Ages for Globular Clusters

    NASA Technical Reports Server (NTRS)

    Chaboyer, Brian C.; Chaboyer, Brian C.; Carney, Bruce W.; Latham, David W.; Dunca, Douglas; Grand, Terry; Layden, Andy; Sarajedini, Ataollah; McWilliam, Andrew; Shao, Michael

    2004-01-01

    The metal-poor stars in the halo of the Milky Way galaxy were among the first objects formed in our Galaxy. These Population II stars are the oldest objects in the universe whose ages can be accurately determined. Age determinations for these stars allow us to set a firm lower limit to the age of the universe and to probe the early formation history of the Milky Way. The age of the universe determined from studies of Population II stars may be compared to the expansion age of the universe and used to constrain cosmological models. The largest uncertainty in estimates for the ages of stars in our halo is due to the uncertainty in the distance scale to Population II objects. We propose to obtain accurate parallaxes to a number of Population II objects (globular clusters and field stars in the halo), resulting in a significant improvement in the Population II distance scale and greatly reducing the uncertainty in the estimated ages of the oldest stars in our galaxy. At the present time, the oldest stars are estimated to be 12.8 Gyr old, with an uncertainty of approx. 15%. The SIM observations obtained by this key project, combined with the supporting theoretical research and ground based observations outlined in this proposal, will reduce the estimated uncertainty in the age estimates to 5%.

  4. Determination of optimal cutoff value to accurately identify glucose-6-phosphate dehydrogenase-deficient heterozygous female neonates.

    PubMed

    Miao, Jing-Kun; Chen, Qi-Xiong; Bao, Li-Ming; Huang, Yi; Zhang, Juan; Wan, Ke-Xing; Yi, Jing; Wang, Shi-Yi; Zou, Lin; Li, Ting-Yu

    2013-09-23

    Conventional screening tests to assess G6PD deficiency use a low cutoff value of 2.10 U/g Hb, which may not be adequate for detecting females with heterozygous deficiency. The aim of the present study was to determine an appropriate cutoff value with increased sensitivity in identifying G6PD-deficient heterozygous females. G6PD activity analysis was performed on 51,747 neonates using a semi-quantitative fluorescent spot test. Neonates suspected of G6PD deficiency were further analyzed using a quantitative enzymatic assay and tested for common G6PD mutations. The cutoff values of G6PD activity were estimated using the receiver operating characteristic (ROC) curve. Our results demonstrated that using 2.10 U/g Hb as a cutoff, the sensitivity of the assay to detect female neonates with G6PD heterozygous deficiency was 83.3%, as compared with 97.6% using 2.55 U/g Hb as a cutoff. The high cutoff identified 21% (8/38) of the female neonates with partial G6PD deficiency which were not detected with 2.10 U/g Hb. Our study found that high cutoffs, 2.35 and 2.55 U/g Hb, would increase the assay's sensitivity to identify male and female G6PD-deficient neonates, respectively. We established a reliable cutoff value of G6PD activity with increased sensitivity in identifying female newborns with partial G6PD deficiency. Copyright © 2013 Elsevier B.V. All rights reserved.
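    Choosing a cutoff from an ROC curve is often done by maximizing Youden's J (sensitivity + specificity - 1) over candidate thresholds. The sketch below applies that generic method to hypothetical activity values; it is not the study's data or its exact cutoff-selection procedure. Note the decision direction: *low* enzyme activity indicates deficiency.

```python
import numpy as np

def best_cutoff(scores, labels):
    """Cutoff maximizing Youden's J = sensitivity + specificity - 1.
    Low scores (enzyme activity) indicate the positive class (deficient)."""
    best_j, best_c = -1.0, None
    for c in np.unique(scores):
        pred = scores <= c                 # call "deficient" at or below cutoff
        tp = np.sum(pred & (labels == 1))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        fp = np.sum(pred & (labels == 0))
        j = tp / (tp + fn) + tn / (tn + fp) - 1.0
        if j > best_j:
            best_j, best_c = j, c
    return best_c, best_j

# Toy G6PD activities (U/g Hb); 1 = deficient, 0 = normal (hypothetical)
scores = np.array([1.2, 1.8, 2.0, 2.3, 2.4, 2.6, 2.9, 3.1, 3.4, 3.6])
labels = np.array([1,   1,   1,   1,   0,   0,   0,   0,   0,   0])
cutoff, j = best_cutoff(scores, labels)
```

    On real screening data the classes overlap, so raising the cutoff trades false positives for sensitivity, which is exactly the 2.10 versus 2.55 U/g Hb trade-off the study quantifies.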

  5. Life Support Baseline Values and Assumptions Document

    NASA Technical Reports Server (NTRS)

    Anderson, Molly S.; Ewert, Michael K.; Keener, John F.; Wagner, Sandra A.

    2015-01-01

    The Baseline Values and Assumptions Document (BVAD) provides analysts, modelers, and other life support researchers with a common set of values and assumptions which can be used as a baseline in their studies. This baseline, in turn, provides a common point of origin from which many studies in the community may depart, making research results easier to compare and providing researchers with reasonable values to assume for areas outside their experience. With the ability to accurately compare different technologies' performance for the same function, managers will be able to make better decisions regarding technology development.

  6. The value of innovation under value-based pricing.

    PubMed

    Moreno, Santiago G; Ray, Joshua A

    2016-01-01

    The role of cost-effectiveness analysis (CEA) in incentivizing innovation is controversial. Critics of CEA argue that its use for pricing purposes disregards the 'value of innovation' reflected in new drug development, whereas supporters of CEA highlight that the value of innovation is already accounted for. Our objective in this article is to outline the limitations of the conventional CEA approach, while proposing an alternative method of evaluation that captures the value of innovation more accurately. The adoption of a new drug benefits present and future patients (with cost implications) for as long as the drug is part of clinical practice. Incident patients and off-patent prices are identified as two key missing features that prevent the conventional CEA approach from capturing 1) the benefit to future patients and 2) future savings from off-patent prices. The proposed CEA approach incorporates these two features to derive the total lifetime value of an innovative drug (i.e., the value of innovation). The conventional CEA approach tends to underestimate the value of innovative drugs by disregarding the benefit to future patients and savings from off-patent prices. As a result, innovative drugs are underpriced, only allowing manufacturers to capture approximately 15% of the total value of innovation during the patent protection period. In addition to including the incident population and the off-patent price, the alternative approach proposes pricing new drugs by first negotiating the share of the value of innovation to be appropriated by the manufacturer (>15%?) and payer (<85%?), in order to then identify the drug price that satisfies this condition. We argue for a modification to the conventional CEA approach that integrates the total lifetime value of innovative drugs into CEA, by taking into account off-patent pricing and future patients. The proposed approach derives a price that allows manufacturers to capture an agreed share of this value, thereby incentivizing innovation.

  7. Small-time Scale Network Traffic Prediction Based on Complex-valued Neural Network

    NASA Astrophysics Data System (ADS)

    Yang, Bin

    2017-07-01

    Accurate models play an important role in capturing the significant characteristics of network traffic, analyzing network dynamics, and improving forecasting accuracy for system dynamics. In this study, a complex-valued neural network (CVNN) model is proposed to further improve the accuracy of small-time scale network traffic forecasting. An artificial bee colony (ABC) algorithm is used to optimize the complex-valued and real-valued parameters of the CVNN model. Small-time scale traffic measurement data, namely TCP traffic data, are used to test the performance of the CVNN model. Experimental results reveal that the CVNN model forecasts the small-time scale network traffic measurement data very accurately.

  8. Accurate quantification of creatinine in serum by coupling a measurement standard to extractive electrospray ionization mass spectrometry

    NASA Astrophysics Data System (ADS)

    Huang, Keke; Li, Ming; Li, Hongmei; Li, Mengwan; Jiang, You; Fang, Xiang

    2016-01-01

    Ambient ionization (AI) techniques have been widely used in chemistry, medicine, material science, environmental science, and forensic science. AI takes advantage of direct desorption/ionization of chemicals in raw samples under ambient environmental conditions with minimal or no sample preparation. However, its quantitative accuracy is restricted by matrix effects during the ionization process. To improve the quantitative accuracy of AI, a matrix reference material, which is a particular form of measurement standard, was coupled to an AI technique in this study. Consequently, the analyte concentration in a complex matrix can be easily quantified with high accuracy. As a demonstration, this novel method was applied to the accurate quantification of creatinine in serum by using extractive electrospray ionization (EESI) mass spectrometry. Over the concentration range investigated (0.166-1.617 μg/mL), a calibration curve was obtained with satisfactory linearity (R2 = 0.994) and acceptable relative standard deviations (RSD) of 4.6-8.0% (n = 6). Finally, the creatinine concentration of a serum sample was determined to be 36.18 ± 1.08 μg/mL, which is in excellent agreement with the certified value of 35.16 ± 0.39 μg/mL.
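
    The quantification step described above amounts to fitting a linear calibration curve and inverting it for an unknown; a minimal sketch with invented standard concentrations and responses (not the paper's measurements):

```python
import numpy as np

# Sketch: fit a linear calibration curve, check linearity (R^2), and invert it
# to quantify an unknown. Concentrations and responses below are made up.
def calibrate(conc, response):
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    pred = slope * conc + intercept
    ss_res = np.sum((response - pred) ** 2)
    ss_tot = np.sum((response - response.mean()) ** 2)
    return slope, intercept, 1.0 - ss_res / ss_tot

conc = [0.166, 0.4, 0.8, 1.2, 1.617]   # ug/mL, hypothetical standards
resp = [0.35, 0.81, 1.58, 2.43, 3.20]  # hypothetical instrument signal ratios
slope, intercept, r2 = calibrate(conc, resp)

unknown_signal = 1.10
estimated_conc = (unknown_signal - intercept) / slope  # invert the curve
```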

  9. The amount effect and marginal value.

    PubMed

    Rachlin, Howard; Arfer, Kodi B; Safin, Vasiliy; Yen, Ming

    2015-07-01

    The amount effect of delay discounting (by which the value of larger reward amounts is discounted by delay at a lower rate than that of smaller amounts) strictly implies that value functions (value as a function of amount) are steeper at greater delays than they are at lesser delays. That is, the amount effect and the difference in value functions at different delays are actually a single empirical finding. Amount effects of delay discounting are typically found with choice experiments. Value functions for immediate rewards have been empirically obtained by direct judgment. (Value functions for delayed rewards have not been previously obtained.) The present experiment obtained value functions for both immediate and delayed rewards by direct judgment and found them to be steeper when the rewards were delayed--hence, finding an amount effect with delay discounting. © Society for the Experimental Analysis of Behavior.
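
    The amount effect can be illustrated with the standard hyperbolic discounting form V = A/(1 + kD), where the discount rate k is smaller for larger amounts; the k values below are hypothetical:

```python
# Hyperbolic discounting sketch: V = A / (1 + k*D). The amount effect means the
# rate k is smaller for larger amounts; the k values here are hypothetical.
def discounted_value(amount, delay, k):
    return amount / (1.0 + k * delay)

k_small, k_large = 0.05, 0.01  # per day: smaller amounts discounted more steeply
small_ratio = discounted_value(100, 365, k_small) / discounted_value(100, 0, k_small)
large_ratio = discounted_value(10000, 365, k_large) / discounted_value(10000, 0, k_large)
# The larger amount retains a greater fraction of its value over the same delay,
# which is equivalent to saying its value function is steeper at the longer delay.
```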

  10. An automatic scaling method for obtaining the trace and parameters from oblique ionogram based on hybrid genetic algorithm

    NASA Astrophysics Data System (ADS)

    Song, Huan; Hu, Yaogai; Jiang, Chunhua; Zhou, Chen; Zhao, Zhengyu; Zou, Xianjian

    2016-12-01

    Scaling the oblique ionogram plays an important role in obtaining the ionospheric structure at the midpoint of the oblique sounding path. This paper proposes an automatic scaling method to extract the trace and parameters of an oblique ionogram based on a hybrid genetic algorithm (HGA). The 10 extracted parameters come from the F2 layer and Es layer, such as the maximum observation frequency, critical frequency, and virtual height. The method adopts the quasi-parabolic (QP) model to describe the F2 layer's electron density profile, which is used to synthesize the trace. It utilizes the secant theorem, Martyn's equivalent path theorem, image processing technology, and the echoes' characteristics to determine best-fit values for seven parameters, and initial values for the three QP-model parameters used to set up their search spaces, which are the input data required by the HGA. The HGA then searches for the three parameters' best-fit values within their search spaces based on the fitness between the synthesized trace and the real trace. To verify the performance of the method, 240 oblique ionograms were scaled and the results compared with manual scaling results and with the inversion results of the corresponding vertical ionograms. The comparison shows that the scaling results are accurate, or at least adequate, 60-90% of the time.

  11. Comparison of methods for accurate end-point detection of potentiometric titrations

    NASA Astrophysics Data System (ADS)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using purpose-written code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods will be compared and presented in this paper.
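
    A minimal sketch of the traditional second-derivative end-point technique on a simulated sigmoidal titration curve (idealized tanh shape, invented parameters); the Levenberg-Marquardt alternative would instead fit a sigmoid model to the same data:

```python
import numpy as np

# Second-derivative end-point detection: the end point is where the second
# derivative of potential vs. titrant volume crosses zero at the steepest
# part of the curve. Curve parameters below are illustrative.
def endpoint_second_derivative(volume, potential):
    d1 = np.gradient(potential, volume)
    d2 = np.gradient(d1, volume)
    i = int(np.argmax(d1))  # steepest point anchors the search
    # bracket the zero crossing of d2 around the steepest point
    lo, hi = (i - 1, i) if d2[i] <= 0 else (i, i + 1)
    frac = d2[lo] / (d2[lo] - d2[hi])  # linear interpolation of the crossing
    return volume[lo] + frac * (volume[hi] - volume[lo])

v = np.linspace(0.0, 20.0, 401)                      # mL of titrant
true_ep = 12.5                                       # simulated equivalence volume
e = 300.0 + 250.0 * np.tanh((v - true_ep) / 0.3)     # mV, idealized sigmoid
estimate = endpoint_second_derivative(v, e)
```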

  12. Accurate spectroscopic redshift of the multiply lensed quasar PSOJ0147 from the Pan-STARRS survey

    NASA Astrophysics Data System (ADS)

    Lee, C.-H.

    2017-09-01

    Context. The gravitational lensing time delay method provides a one-step determination of the Hubble constant (H0) with an uncertainty level on par with the cosmic distance ladder method. However, to further investigate the nature of the dark energy, a H0 estimate down to the 1% level is greatly needed. This requires dozens of strongly lensed quasars that are yet to be delivered by ongoing and forthcoming all-sky surveys. Aims: In this work we aim to determine the spectroscopic redshift of PSOJ0147, the first strongly lensed quasar candidate found in the Pan-STARRS survey. The main goal of our work is to derive an accurate redshift estimate of the background quasar for cosmography. Methods: To obtain timely spectroscopic follow-up, we took advantage of the fast-track service programme that is carried out by the Nordic Optical Telescope. Using a grism covering 3200-9600 Å, we identified prominent emission line features, such as Lyα, N V, O I, C II, Si IV, C IV, and [C III] in the spectra of the background quasar of the PSOJ0147 lens system. This enables us to accurately determine the redshift of the background quasar. Results: The spectrum of the background quasar exhibits prominent absorption features bluewards of the strong emission lines, such as Lyα, N V, and C IV. These blue absorption lines indicate that the background source is a broad absorption line (BAL) quasar. Unfortunately, the BAL features hamper an accurate determination of redshift using the above-mentioned strong emission lines. Nevertheless, we are able to determine a redshift of 2.341 ± 0.001 from three of the four lensed quasar images with the clean forbidden line [C III]. In addition, we also derive a maximum outflow velocity of 9800 km s-1 from the broad absorption features bluewards of the C IV emission line. This value of maximum outflow velocity is in good agreement with other BAL quasars.
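
    The redshift and outflow-velocity determinations reduce to simple wavelength ratios; a sketch in which the rest wavelengths are the standard laboratory values, while the observed wavelengths are invented to roughly match the reported results:

```python
# Sketch: redshift from an identified emission line and outflow velocity from a
# blueshifted absorption trough (non-relativistic approximation). Observed
# wavelengths below are illustrative, chosen to mimic z = 2.341 and ~9800 km/s.
C_KM_S = 299792.458

def redshift(lambda_obs, lambda_rest):
    return lambda_obs / lambda_rest - 1.0

def outflow_velocity(lambda_trough_obs, lambda_rest, z_sys):
    # velocity of the absorbing gas relative to the quasar systemic frame
    lambda_systemic = lambda_rest * (1.0 + z_sys)
    return C_KM_S * (lambda_systemic - lambda_trough_obs) / lambda_systemic

z = redshift(6377.1, 1908.73)                 # [C III] 1908.73 A observed ~6377 A
v_out = outflow_velocity(5006.0, 1549.06, z)  # C IV trough blueward of emission
```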

  13. Enhance the Value of a Research Paper: Choosing the Right References and Writing them Accurately.

    PubMed

    Bavdekar, Sandeep B

    2016-03-01

    References help readers identify and locate sources used for justifying the need for conducting the research study, verify methods employed in the study, and discuss the interpretation of results and implications of the study. It is extremely essential that references are accurate and complete. This article provides suggestions regarding choosing references and writing the reference list. References are a list of sources that are selected by authors to represent the best documents concerning the research study.1 They constitute the foundation of any research paper. Although generally written towards the end of the article-writing process, they are nevertheless extremely important. They provide the context for the hypothesis and help justify the need for conducting the research study. Authors use references to inform readers about the techniques used for conducting the study and convince them about the appropriateness of the methodology used. References help provide the appropriate perspective in which the research findings should be seen and interpreted. This communication will discuss the purpose of citations, how to select quality sources for citing, and the importance of accuracy while writing the reference list. © Journal of the Association of Physicians of India 2011.

  14. An Accurate Transmitting Power Control Method in Wireless Communication Transceivers

    NASA Astrophysics Data System (ADS)

    Zhang, Naikang; Wen, Zhiping; Hou, Xunping; Bi, Bo

    2018-01-01

    Power control circuits are widely used in transceivers aiming at stabilizing the transmitted signal power to a specified value, thereby reducing power consumption and interference to other frequency bands. In order to overcome the shortcomings of traditional modes of power control, this paper proposes an accurate signal power detection method by multiplexing the receiver and realizes transmitting power control in the digital domain. The simulation results show that this novel digital power control approach has advantages of small delay, high precision and simplified design procedure. The proposed method is applicable to transceivers working at large frequency dynamic range, and has good engineering practicability.

  15. Accurate quantum yields by laser gain vs absorption spectroscopy - Investigation of Br/Br* channels in photofragmentation of Br2 and IBr

    NASA Technical Reports Server (NTRS)

    Haugen, H. K.; Weitz, E.; Leone, S. R.

    1985-01-01

    Various techniques have been used to study photodissociation dynamics of the halogens and interhalogens. The quantum yields obtained by these techniques differ widely. The present investigation is concerned with a qualitatively new approach for obtaining highly accurate quantum yields for electronically excited states. This approach makes it possible to obtain an accuracy of 1 percent to 3 percent. It is shown that measurement of the initial transient gain/absorption vs the final absorption in a single time-resolved signal is a very accurate technique in the study of absolute branching fractions in photodissociation. The new technique is found to be insensitive to pulse and probe laser characteristics, molecular absorption cross sections, and absolute precursor density.

  16. New reference values must be established for the Alberta Infant Motor Scales for accurate identification of infants at risk for motor developmental delay in Flanders.

    PubMed

    De Kegel, A; Peersman, W; Onderbeke, K; Baetens, T; Dhooge, I; Van Waelvelde, H

    2013-03-01

    The Alberta Infant Motor Scale (AIMS) is a reliable and valid assessment tool to evaluate motor performance from birth to independent walking. This study aimed to determine whether the Canadian reference values on the AIMS from 1990-1992 are still useful for Flemish infants assessed in 2007-2010. Additionally, the association between motor performance and sleep and play positioning was examined. A total of 270 Flemish infants between 0 and 18 months, recruited through formal day care services, were assessed with the AIMS by four trained physiotherapists. Information about sleep and play positioning was collected by means of a questionnaire. Flemish infants perform significantly lower on the AIMS compared with the reference values (P < 0.001). In particular, infants in the age groups of 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 and 15 months showed significantly lower scores. From the information collected by parental questionnaires, the lower motor scores appear to be related to sleep position and to the amount of play time spent in prone, in supine and in a sitting device. Infants who were exposed often to frequently to prone while awake showed significantly higher motor performance than infants who were exposed to prone less (<6 m: P = 0.002; >6 m: P = 0.013). Infants who were placed often to frequently in a sitting device in the first 6 months of life (P = 0.010), or in supine after 6 months (P = 0.001), performed significantly lower than those placed in these positions less. Flemish infants recruited through formal day care services show significantly lower motor scores than the Canadian norm population. New reference values should be established for the AIMS for accurate identification of infants at risk. Prevention of sudden infant death syndrome by promoting the supine sleep position should go together with promotion of tummy time when awake and avoidance of too much time spent in sitting devices when awake. © 2012 Blackwell Publishing Ltd.

  17. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    NASA Astrophysics Data System (ADS)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. Previous strategies, however, provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially when it arrives with a delay. Travelers prefer the route reported to be in the best condition, yet delayed information reflects past rather than current traffic conditions. Travelers therefore make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and a deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful in improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
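
    The boundedly rational route choice described above can be sketched as follows (the threshold logic is from the abstract; the function signature is invented for illustration):

```python
import random

# Minimal sketch of boundedly rational route choice: travelers take the route
# with the shorter reported travel time only when the difference exceeds the
# threshold BR; otherwise they are indifferent and pick either route at random.
def choose_route(reported_times, br_threshold, rng=random.random):
    t0, t1 = reported_times
    if abs(t0 - t1) <= br_threshold:
        return 0 if rng() < 0.5 else 1  # indifferent: equal probability
    return 0 if t0 < t1 else 1          # clear difference: take the faster route
```

    With BR = 0 this reduces to the accurate-feedback strategy the abstract criticizes; a positive BR damps the herding that delayed feedback would otherwise amplify.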

  18. Can Value-Added Measures of Teacher Performance Be Trusted? Working Paper #18

    ERIC Educational Resources Information Center

    Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.

    2012-01-01

    We investigate whether commonly used value-added estimation strategies can produce accurate estimates of teacher effects. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. No one method accurately captures true teacher effects in all scenarios,…

  19. ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS

    PubMed Central

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-01-01

    The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
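
    The ACME algorithm itself is beyond an abstract-level sketch, but the underlying idea of solving a truncated master equation directly can be shown on a single birth-death process with a finite buffer (hypothetical rates; for constant birth rate b and death rate d·i, the exact steady state is Poisson with mean b/d):

```python
import numpy as np

# Sketch (not the ACME algorithm): direct steady-state solution of a truncated
# master equation for one birth-death process on states 0..n. Rates are made up.
def birth_death_steady_state(birth, death, n):
    """Solve pi @ Q = 0 with sum(pi) = 1 on the finite buffer 0..n."""
    Q = np.zeros((n + 1, n + 1))
    for i in range(n + 1):
        if i < n:
            Q[i, i + 1] = birth        # zeroth-order synthesis
        if i > 0:
            Q[i, i - 1] = death * i    # first-order degradation
        Q[i, i] = -Q[i].sum()
    A = Q.T.copy()
    A[-1, :] = 1.0                     # replace one balance eq. by normalization
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# With birth=5, death=1 the buffer n=40 captures essentially all probability mass.
pi = birth_death_steady_state(birth=5.0, death=1.0, n=40)
```

    The truncation question the abstract emphasizes shows up here as the choice of n: too small a buffer loses probability mass, which is exactly the error ACME bounds a priori.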

  20. Accurate chemical master equation solution using multi-finite buffers

    DOE PAGES

    Cao, Youfang; Terebus, Anna; Liang, Jie

    2016-06-29

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  1. Accurate chemical master equation solution using multi-finite buffers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cao, Youfang; Terebus, Anna; Liang, Jie

    Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.

  2. A comparative study of charge transfer inefficiency value and trap parameter determination techniques making use of an irradiated ESA-Euclid prototype CCD

    NASA Astrophysics Data System (ADS)

    Prod'homme, Thibaut; Verhoeve, P.; Kohley, R.; Short, A.; Boudin, N.

    2014-07-01

    The science objectives of space missions using CCDs to carry out accurate astronomical measurements are put at risk by the radiation-induced increase in charge transfer inefficiency (CTI) that results from trapping sites in the CCD silicon lattice. A variety of techniques are used to obtain CTI values and derive trap parameters; however, their results often differ. To identify and understand these differences, we take advantage of an on-going comprehensive characterisation of an irradiated Euclid prototype CCD including the following techniques: X-ray, trap pumping, flat-field extended pixel edge response, and first pixel response. We then carry out a comparative analysis of the results obtained.

  3. Spatial and Temporal Distribution of Initial 230TH/232TH in Sumatran Corals and its Influence on the Accurate Dating of Young Corals

    NASA Astrophysics Data System (ADS)

    Chiang, H.; Shen, C.; Meltzner, A. J.; Philibosian, B.; WU, C.; Sieh, K. E.; Wang, X.

    2012-12-01

    Accurate and precise determination of initial 230Th/232Th (230Th/232Th0) is important in dating young fossil corals, and it can significantly influence our understanding of paleoclimate, paleoceanographic and paleoseismic histories. A total of 47 unpublished and published isochrons (Shen et al., 2008; Meltzner et al., 2010, 2012; Philibosian et al., 2012), covering most of the Sumatran outer-arc islands, provide a more robust estimate of the 230Th/232Th0 variability in the region. The weighted average of 230Th/232Th0 atomic values is 4.7 (+5.5/-4.7) × 10-6 (2σ), consistent with the previously reported value of 6.5 ± 6.5 × 10-6 obtained from a handful of samples from the southern part of the Sumatran outer-arc. Specifically, the calculated 230Th/232Th0 values in the north and south are identical. The weighted mean of 3.5 (+7.0/-3.5) × 10-6 for fossil corals 300-2000 yr old is slightly lower than the value of 5.4 ± 4.5 × 10-6 obtained from corals younger than 300 yr B.P. For corals containing less than 2 ppb of thorium, however, the age offset from using different 230Th/232Th0 values will be less than 10 yr, which is acceptable for most studies. We hereby recommend an updated 230Th/232Th0 value of 4.7 (+5.5/-4.7) × 10-6 for corals throughout the Sumatran outer-arc islands. For very high-precision age determination (<10 yr), coral samples with low Th concentration (< 2 ppb) are preferred.

  4. Accurate and efficient seismic data interpolation in the principal frequency wavenumber domain

    NASA Astrophysics Data System (ADS)

    Wang, Benfeng; Lu, Wenkai

    2017-12-01

    Seismic data irregularity, caused by economic limitations, acquisition environmental constraints or bad-trace elimination, can degrade the performance of downstream multi-channel algorithms such as surface-related multiple elimination (SRME), even though some of them can partially overcome irregularity defects. Accurate interpolation to provide the necessary complete data is therefore a prerequisite, but its wide application is constrained by the large computational burden of huge data volumes, especially in 3D exploration. For accurate and efficient interpolation, the curvelet transform (CT) based projection onto convex sets (POCS) method in the principal frequency wavenumber (PFK) domain is introduced. The complex-valued PF components can characterize the original signal with high accuracy while being at most half its size, which helps provide a reasonable efficiency improvement. The irregularity of the observed data is transformed into incoherent noise in the PFK domain, and curvelet coefficients may be sparser when the CT is performed on PFK-domain data, enhancing the interpolation accuracy. The performance of the POCS-based algorithms using the complex-valued CT in the time space (TX), principal frequency space, and PFK domains is compared. Numerical examples on synthetic and field data demonstrate the validity and effectiveness of the proposed method. With a smaller computational burden, the proposed method achieves a better interpolation result, and it can easily be extended to higher dimensions.
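
    A simplified POCS iteration can be sketched in the Fourier domain (standing in for the paper's complex-valued curvelet transform on principal-frequency data); the decaying-threshold schedule and the plane-wave demo signal are illustrative assumptions:

```python
import numpy as np

# Simplified POCS interpolation sketch: iteratively threshold the 2-D spectrum
# to promote sparsity, then re-insert the observed traces. This shows only the
# projection loop; the paper uses complex-valued curvelets on PFK-domain data.
def pocs_interpolate(data, mask, n_iter=50):
    x = data * mask
    max_amp = np.abs(np.fft.fft2(x)).max()
    for k in range(n_iter):
        spec = np.fft.fft2(x)
        thresh = max_amp * (1.0 - (k + 1) / n_iter)  # linearly decaying threshold
        spec[np.abs(spec) < thresh] = 0.0            # sparsity projection
        recon = np.real(np.fft.ifft2(spec))
        x = data * mask + recon * (1.0 - mask)       # data-consistency projection
    return x

# Demo: a plane-wave section with ~30% of traces (columns) removed at random.
nt = nx = 64
tt, xx = np.meshgrid(np.arange(nt), np.arange(nx), indexing="ij")
section = np.cos(2.0 * np.pi * (3.0 * xx + 5.0 * tt) / nt)
rng = np.random.default_rng(0)
mask = np.ones((nt, nx))
mask[:, rng.random(nx) < 0.3] = 0.0
recovered = pocs_interpolate(section, mask)
```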

  5. An Accurate Co-registration Method for Airborne Repeat-pass InSAR

    NASA Astrophysics Data System (ADS)

    Dong, X. T.; Zhao, Y. H.; Yue, X. J.; Han, C. M.

    2017-10-01

    Interferometric Synthetic Aperture Radar (InSAR) technology plays a significant role in topographic mapping and surface deformation detection. Compared with spaceborne repeat-pass InSAR, airborne repeat-pass InSAR solves the problems of long revisit times and low-resolution images. Owing to its flexibility, accuracy, and speed in obtaining abundant information, airborne repeat-pass InSAR is significant for deformation monitoring of shallow ground. To obtain precise ground elevation information and the interferometric coherence needed for deformation monitoring from master and slave images, accurate co-registration must be ensured. Because of the side-looking geometry, repeated observation paths, and long baselines, the initial slant ranges and flight heights differ considerably between repeat flight paths. These differences cause pixels located at identical coordinates in the master and slave images to correspond to ground resolution cells of different sizes. The mismatch is most obvious in the long-slant-range parts of the master and slave images. To resolve the differing pixel sizes and obtain accurate co-registration results, a new method is proposed based on the Range-Doppler (RD) imaging model. VV-polarization C-band airborne repeat-pass InSAR images were used in the experiment. The experimental result shows that the proposed method leads to superior co-registration accuracy.

  6. The value of pathogen information in treating clinical mastitis.

    PubMed

    Cha, Elva; Smith, Rebecca L; Kristensen, Anders R; Hertl, Julia A; Schukken, Ynte H; Tauer, Loren W; Welcome, Frank L; Gröhn, Yrjö T

    2016-11-01

    The objective of this study was to determine the economic value of obtaining timely and more accurate clinical mastitis (CM) test results for optimal treatment of cows. Typically, CM is first identified when the farmer observes recognisable outward signs. Further information on whether the pathogen causing CM is Gram-positive, Gram-negative or other (including no growth) can be determined by using on-farm culture methods. The most detailed level of information for mastitis diagnostics is obtainable by sending milk samples for culture to an external laboratory. Knowing the exact pathogen permits the treatment method to be specifically targeted to the causative pathogen, resulting in less discarded milk. The disadvantages are the additional waiting time to receive test results, which delays treating cows, and the cost of the culture test. Net returns per year (NR) for various levels of information were estimated using a dynamic programming model. The Value of Information (VOI) was then calculated as the difference in NR using a specific level of information as compared to more detailed information on the CM causative agent. The highest VOI was observed where the farmer assumed the pathogen causing CM was the one with the highest incidence in the herd and no pathogen-specific CM information was obtained. The VOI of pathogen-specific information, compared with non-optimal treatment of Staphylococcus aureus where recurrence and spread occurred due to lack of treatment efficacy, was $20.43 when the same incorrect treatment was applied to recurrent cases, and $30.52 when recurrent cases were assumed to be caused by the next-highest-incidence pathogen and treated accordingly. This indicates that the negative consequences associated with choosing the wrong CM treatment can make additional information cost-effective if pathogen identification is assessed at the generic information level and if the pathogen can spread to other cows when not treated appropriately.
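
    The VOI calculation reduces to a difference in expected net returns with and without pathogen identification; a toy sketch with invented returns and incidence probabilities (not the study's dynamic programming model):

```python
# Toy Value of Information sketch: expected net return when one treatment must
# be chosen blindly vs. when the pathogen is known. All numbers are invented.
def expected_net_return(treatment_returns, pathogen_probs):
    """Best single treatment chosen without knowing the pathogen."""
    return max(
        sum(p * treatment_returns[t][path] for path, p in pathogen_probs.items())
        for t in treatment_returns
    )

def expected_net_return_informed(treatment_returns, pathogen_probs):
    """Pathogen known: the best treatment is picked for each pathogen."""
    return sum(
        p * max(treatment_returns[t][path] for t in treatment_returns)
        for path, p in pathogen_probs.items()
    )

probs = {"gram_pos": 0.5, "gram_neg": 0.3, "other": 0.2}
returns = {
    "antibiotic_A": {"gram_pos": 120.0, "gram_neg": 60.0, "other": 80.0},
    "antibiotic_B": {"gram_pos": 70.0, "gram_neg": 110.0, "other": 85.0},
    "no_treatment": {"gram_pos": 40.0, "gram_neg": 50.0, "other": 100.0},
}
voi = expected_net_return_informed(returns, probs) - expected_net_return(returns, probs)
```

    Culturing is worthwhile whenever this difference exceeds the cost of the test plus the losses incurred while waiting for the result.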

  7. Self-Expression on Social Media: Do Tweets Present Accurate and Positive Portraits of Impulsivity, Self-Esteem, and Attachment Style?

    PubMed

    Orehek, Edward; Human, Lauren J

    2017-01-01

    Self-expression values are at an all-time high, and people are increasingly relying upon social media platforms to express themselves positively and accurately. We examined whether self-expression on the social media platform Twitter elicits positive and accurate social perceptions. Eleven perceivers rated 128 individuals (targets; total dyadic impressions = 1,408) on their impulsivity, self-esteem, and attachment style, based solely on the information provided in targets' 10 most recent tweets. Targets were on average perceived normatively and with distinctive self-other agreement, indicating both positive and accurate social perceptions. There were also individual differences in how positively and accurately targets were perceived, which exploratory analyses indicated may be partially driven by differential word usage, such as the use of positive emotion words and self- versus other-focus. This study demonstrates that self-expression on social media can elicit both positive and accurate perceptions and begins to shed light on how to curate such perceptions.

  8. An attempt to obtain a detailed declination chart from the United States magnetic anomaly map

    USGS Publications Warehouse

    Alldredge, L.R.

    1989-01-01

    Modern declination charts of the United States show almost no details. It was hoped that declination details could be derived from the information contained in the existing magnetic anomaly map of the United States. This could be realized only if all of the survey data were corrected to a common epoch, at which time a main-field vector model was known, before the anomaly values were computed. Because this was not done, accurate declination values cannot be determined. In spite of this conclusion, declination values were computed using a common main-field model for the entire United States to see how well they compared with observed values. The computed detailed declination values were found to compare less favourably with observed values of declination than declination values computed from the IGRF 1985 model itself.

  9. A 5-trial adjusting delay discounting task: Accurate discount rates in less than 60 seconds

    PubMed Central

    Koffarnus, Mikhail N.; Bickel, Warren K.

    2014-01-01

    Individuals who discount delayed rewards at a high rate are more likely to engage in substance abuse, overeating, or problem gambling. Findings such as these suggest the value of methods to obtain an accurate and fast measurement of discount rate that can be easily deployed in a variety of settings. In the present study, we developed and evaluated the 5-trial adjusting delay task, a novel method of obtaining discount rate in less than one minute. We hypothesized that discount rates from the 5-trial adjusting delay task would be similar to and correlated with discount rates from a lengthier task we have used previously, and that four known effects relating to delay discounting would be replicable with this novel task. To test these hypotheses, the 5-trial adjusting delay task was administered to 111 college students six times to obtain discount rates for six different commodities, along with a lengthier adjusting amount discounting task. We found that discount rates were similar and correlated between the 5-trial adjusting delay task and the adjusting amount task. Each of the four known effects relating to delay discounting was replicated with the 5-trial adjusting delay task to varying degrees. First, discount rates were inversely correlated with amount. Second, discount rates between past and future outcomes were correlated. Third, discount rates were greater for consumable rewards than for money, although we did not control for amount in this comparison. Fourth, discount rates were lower when zero amounts opposing the chosen time point were explicitly described. Results indicate that the 5-trial adjusting delay task is a viable, rapid method to assess discount rate. PMID:24708144

  10. A 5-trial adjusting delay discounting task: accurate discount rates in less than one minute.

    PubMed

    Koffarnus, Mikhail N; Bickel, Warren K

    2014-06-01

    Individuals who discount delayed rewards at a high rate are more likely to engage in substance abuse, overeating, or problem gambling. Such findings suggest the value of methods to obtain an accurate and fast measurement of discount rate that can be easily deployed in a variety of settings. In the present study, we developed and evaluated the 5-trial adjusting delay task, a novel method of obtaining a discount rate in less than 1 min. We hypothesized that discount rates from the 5-trial adjusting delay task would be similar to and would correlate with discount rates from a lengthier task we have used previously, and that 4 known effects relating to delay discounting would be replicable with this novel task. To test these hypotheses, the 5-trial adjusting delay task was administered to 111 college students 6 times to obtain discount rates for 6 different commodities, along with a lengthier adjusting amount discounting task. We found that discount rates were similar and correlated between the 5-trial adjusting delay task and the adjusting amount task. Each of the 4 known effects relating to delay discounting was replicated with the 5-trial adjusting delay task to varying degrees. First, discount rates were inversely correlated with amount. Second, discount rates between past and future outcomes were correlated. Third, discount rates were greater for consumable rewards than for money, although we did not control for amount in this comparison. Fourth, discount rates were lower when $0 amounts opposing the chosen time point were explicitly described. Results indicate that the 5-trial adjusting delay task is a viable, rapid method to assess discount rate. PsycINFO Database Record (c) 2014 APA, all rights reserved.
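
    The adjusting logic of such a task can be sketched as a five-step staircase over a ladder of delays. Everything below (the geometric delay ladder, the simulated hyperbolic chooser, and the k = 1/ED50 conversion) is an illustrative assumption, not the published task's exact parameters:

```python
def five_trial_ed50(prefers_delayed, delays):
    """Staircase over a sorted 31-rung delay ladder: choosing the delayed
    reward moves to a longer delay, otherwise to a shorter one; the step
    size halves each trial, so 5 choices pin down one rung (the ED50)."""
    assert len(delays) == 31
    idx, step = 15, 8
    for _ in range(5):
        if prefers_delayed(delays[idx]):
            idx = min(idx + step, 30)
        else:
            idx = max(idx - step, 0)
        step = max(step // 2, 1)
    return delays[idx]

delays = [2 ** (i / 3) for i in range(31)]      # ~1 day .. ~1024 days (assumed ladder)
k_true = 0.05                                   # per day, simulated "participant"
agent = lambda d: 1 / (1 + k_true * d) > 0.5    # hyperbolic: delayed worth > half now
ed50 = five_trial_ed50(agent, delays)
k_est = 1 / ed50                                # under V = A/(1+kD), k = 1/ED50
```

    For a hyperbolic discounter the delayed reward is worth half its amount exactly at D = 1/k, so the staircase converges on the ladder rung nearest that indifference delay.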

  11. Accurate Drift Time Determination by Traveling Wave Ion Mobility Spectrometry: The Concept of the Diffusion Calibration.

    PubMed

    Kune, Christopher; Far, Johann; De Pauw, Edwin

    2016-12-06

    Ion mobility spectrometry (IMS) is a gas phase separation technique, which relies on differences in collision cross section (CCS) of ions. Ionic clouds of unresolved conformers overlap if the CCS difference is below the instrumental resolution expressed as CCS/ΔCCS. The experimental arrival time distribution (ATD) peak is then a superimposition of the various contributions weighted by their relative intensities. This paper introduces a strategy for accurate drift time determination of poorly resolved or unresolved conformers using traveling wave ion mobility spectrometry (TWIMS). The method uses a calibration procedure to establish the link between the peak full width at half-maximum (fwhm) and the drift time of model compounds over a wide range of wave-height and wave-velocity settings. We modified a Gaussian equation to deconvolute ATD peaks with the fwhm fixed according to this calibration. The new Gaussian fitting equation depends on only two parameters: the apex of the peak (A) and the mean drift time value (μ). The standard deviation parameter (correlated to fwhm) becomes a function of the drift time. This correlation function between μ and fwhm is obtained using the TWIMS calibration procedure, which determines the maximum instrumental ion beam diffusion under limited and controlled space charge effect using ionic compounds that are detected as single conformers in the gas phase. This deconvolution process has been used to highlight the presence of poorly resolved conformers of crown ether complexes and peptides, leading to more accurate CCS determinations in better agreement with quantum chemistry predictions.
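
    The core idea, fitting overlapping ATD peaks with Gaussians whose widths are not free parameters but are tied to drift time through a calibration function, can be sketched numerically. The linear sigma(mu) "calibration" and all peak values below are invented for illustration:

```python
import numpy as np

def sigma_from_mu(mu):
    """Assumed (invented) linear diffusion calibration: width vs. drift time."""
    return 0.10 + 0.05 * mu

def gaussian(t, mu):
    """Unit-apex Gaussian whose sigma is fixed by the calibration, so only
    the drift time mu (and, linearly, the apex A) remain free."""
    s = sigma_from_mu(mu)
    return np.exp(-0.5 * ((t - mu) / s) ** 2)

# Synthetic ATD: two poorly resolved conformers (~2 sigma apart).
t = np.linspace(3.0, 9.0, 601)
signal = 1.0 * gaussian(t, 5.0) + 0.6 * gaussian(t, 5.8)

# With widths constrained, deconvolution is a search over (mu1, mu2);
# for each candidate pair the apexes are solved by linear least squares.
candidates = np.arange(4.0, 7.0, 0.05)
best = None
for i, mu1 in enumerate(candidates):
    for mu2 in candidates[i + 1:]:
        G = np.column_stack([gaussian(t, mu1), gaussian(t, mu2)])
        amps, *_ = np.linalg.lstsq(G, signal, rcond=None)
        resid = np.sum((G @ amps - signal) ** 2)
        if best is None or resid < best[0]:
            best = (resid, mu1, mu2, amps)
_, mu1_fit, mu2_fit, amps_fit = best
```

    Because the widths are pinned by the calibration, the fit cannot absorb the overlap into one artificially broad peak, which is what makes the underlying drift times recoverable.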

  12. An accurate, fast, and scalable solver for high-frequency wave propagation

    NASA Astrophysics Data System (ADS)

    Zepeda-Núñez, L.; Taus, M.; Hewett, R.; Demanet, L.

    2017-12-01

    In many science and engineering applications, solving time-harmonic high-frequency wave propagation problems quickly and accurately is of paramount importance. For example, in geophysics, particularly in oil exploration, such problems can be the forward problem in an iterative process for solving the inverse problem of subsurface inversion. It is important to solve these wave propagation problems accurately in order to efficiently obtain meaningful solutions of the inverse problems: low-order forward modeling can hinder convergence. Additionally, due to the volume of data and the iterative nature of most optimization algorithms, the forward problem must be solved many times. Therefore, a fast solver is necessary to make solving the inverse problem feasible. For time-harmonic high-frequency wave propagation, obtaining both speed and accuracy is historically challenging. Recently, there have been many advances in the development of fast solvers for such problems, including methods which have linear complexity with respect to the number of degrees of freedom. While most methods scale optimally only in the context of low-order discretizations and smooth wave speed distributions, the method of polarized traces has been shown to retain optimal scaling for high-order discretizations, such as hybridizable discontinuous Galerkin methods, and for highly heterogeneous (and even discontinuous) wave speeds. The resulting fast and accurate solver is consequently highly attractive for geophysical applications. To date, this method relies on a layered domain decomposition together with a preconditioner applied in a sweeping fashion, which has limited straightforward parallelization. In this work, we introduce a new version of the method of polarized traces which reveals more parallel structure than previous versions while preserving all of its other advantages.
We achieve this by further decomposing each layer and applying the preconditioner to these new components separately and

  13. Reference values for respiratory rate in the first 3 years of life.

    PubMed

    Rusconi, F; Castagneto, M; Gagliardi, L; Leo, G; Pellegatta, A; Porta, N; Razon, S; Braga, M

    1994-09-01

    Raised respiratory rate is a useful sign to diagnose lower respiratory infections in childhood. However, the normal range for respiratory rate has not been defined in a proper, large sample. To assess the respiratory rate in a large number of infants and young children in order to construct percentile curves by age; to determine the repeatability of the assessment using a stethoscope and compare it with observation. Respiratory rate was recorded for 1 minute with a stethoscope in 618 infants and children, aged 15 days to 3 years, without respiratory infections or any other severe disease, when awake and calm and when asleep. In 50 subjects we compared respiratory rates taken 30 to 60 minutes apart to assess repeatability, and in 50 others we compared simultaneous counts obtained by stethoscope versus observation. Repeatability was good, as the standard deviation of differences was 2.5 breaths/minute in awake and 1.7 breaths/minute in asleep children. Respiratory rate obtained with a stethoscope was systematically higher than that obtained by observation (mean difference 2.6 breaths/minute in awake and 1.8 breaths/minute in asleep children; P = .015 and P < .001, respectively). A decrease in respiratory rate with age was seen for both states; it was fastest in the first few months of life, when a greater dispersion of values was also observed. A second-degree polynomial curve accurately fitted the data. Reference percentile values were developed from these data. The repeatability of respiratory rate measured with a stethoscope was good. Percentile curves would be particularly helpful in the first months of life, when the decline in respiratory rate is very rapid and precludes the use of cutoff values for defining "normality."
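
    The second-degree polynomial fit underlying such percentile curves can be sketched as follows. The data here are synthetic, generated to mimic the reported rapid early decline, not the study's measurements:

```python
import numpy as np

# Synthetic (invented) respiratory-rate data: a steep decline over the first
# months of life flattening out toward age 3, plus measurement noise.
rng = np.random.default_rng(0)
age_months = rng.uniform(0.5, 36.0, 600)
true_rate = 55.0 - 1.8 * age_months + 0.03 * age_months ** 2
rate = true_rate + rng.normal(0.0, 3.0, age_months.size)

# Second-degree polynomial fit of rate on age; percentile curves would be
# built around this central tendency (e.g. from residual quantiles).
coeffs = np.polyfit(age_months, rate, deg=2)   # [a2, a1, a0]
fitted = np.polyval(coeffs, age_months)
```

    A curve of this form, rather than a single cutoff, is what lets "raised" be judged relative to age in the first months, where the normal rate itself is changing quickly.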

  14. Accurate quantum wave packet calculations for the F + HCl → Cl + HF reaction on the ground 1(2)A' potential energy surface.

    PubMed

    Bulut, Niyazi; Kłos, Jacek; Alexander, Millard H

    2012-03-14

    We present converged exact quantum wave packet calculations of reaction probabilities, integral cross sections, and thermal rate coefficients for the title reaction. Calculations have been carried out on the ground 1(2)A' global adiabatic potential energy surface of Deskevich et al. [J. Chem. Phys. 124, 224303 (2006)]. Converged wave packet reaction probabilities at selected values of the total angular momentum up to a partial wave of J = 140, with the HCl reagent initially selected in the v = 0, j = 0-16 rovibrational states, have been obtained for the collision energy range from threshold up to 0.8 eV. The present calculations confirm an important enhancement of reactivity with rotational excitation of the HCl molecule. Accurate integral cross sections and rate constants have also been calculated and compared with the available experimental data.

  15. The determination of accurate dipole polarizabilities alpha and gamma for the noble gases

    NASA Technical Reports Server (NTRS)

    Rice, Julia E.; Taylor, Peter R.; Lee, Timothy J.; Almloef, Jan

    1989-01-01

    The static dipole polarizabilities alpha and gamma for the noble gases helium through xenon were determined using large flexible one-particle basis sets in conjunction with high-level treatments of electron correlation. The electron correlation methods include single and double excitation coupled-cluster theory (CCSD), an extension of CCSD that includes a perturbational estimate of connected triple excitations, CCSD(T), and second order perturbation theory (MP2). The computed alpha and gamma values are estimated to be accurate to within a few percent. Agreement with experimental data for the static hyperpolarizability gamma is good for neon and xenon, but for argon and krypton the differences are larger than the combined theoretical and experimental uncertainties. Based on our calculations, we suggest that the experimental value of gamma for argon is too low; adjusting this value would bring the experimental value of gamma for krypton into better agreement with our computed result. The MP2 values for the polarizabilities of neon, argon, krypton and xenon are in reasonable agreement with the CCSD and CCSD(T) values, suggesting that this less expensive method may be useful in studies of polarizabilities for larger systems.

  16. Estimation of Gutenberg-Richter b-value using instrumental earthquake catalog from the southern Korean Peninsula

    NASA Astrophysics Data System (ADS)

    Lee, H.; Sheen, D.; Kim, S.

    2013-12-01

    The b-value in the Gutenberg-Richter relation is an important parameter, widely used not only in the interpretation of regional tectonic structure but also in seismic hazard analysis. In this study, we tested four methods for estimating a stable b-value from a small number of events using the Monte Carlo method. One is the Least-Squares method (LSM), which minimizes the observation error. The others are based on the Maximum Likelihood method (MLM), which maximizes the likelihood function: Utsu's (1965) method for continuous magnitudes and an infinite maximum magnitude, Page's (1968) for continuous magnitudes and a finite maximum magnitude, and Weichert's (1980) for interval magnitudes and a finite maximum magnitude. A synthetic parent population of one million events, with magnitudes from 2.0 to 7.0 in intervals of 0.1, was generated for the Monte Carlo simulation. Samples, the number of which was increased from 25 to 1000, were extracted from the parent population randomly. The resampling procedure was applied 1000 times with different random seed numbers. The mean and the standard deviation of the b-value were estimated for each sample group with the same number of samples. As expected, the more samples were used, the more stable the b-value obtained. However, for a small number of events, the LSM generally gave a low b-value with a large standard deviation, while the MLMs gave more accurate and stable values. Utsu's (1965) method was found to give the most accurate and stable b-value even for a small number of events. It was also found that the selection of the minimum magnitude could be critical for estimating the correct b-value with Utsu's (1965) and Page's (1968) methods if magnitudes were binned into intervals. Therefore, we applied Utsu's (1965) method to estimate the b-value using two instrumental earthquake catalogs, which contain events that occurred around the southern part of the Korean Peninsula from 1978 to 2011.
By a careful choice of the minimum magnitude, the b-values
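
    For continuous magnitudes, Utsu's maximum-likelihood estimator has the closed form b = log10(e) / (M̄ − Mmin). A sketch on a synthetic Gutenberg-Richter catalog; the half-bin shift shown is the standard correction applied when magnitudes are binned:

```python
import math
import random

def b_value_utsu(mags, m_min, bin_width=0.0):
    """Aki/Utsu maximum-likelihood b-value: b = log10(e) / (mean(M) - M0).
    For magnitudes binned in intervals of bin_width, the standard correction
    shifts the reference magnitude down by half a bin (M0 = m_min - dM/2)."""
    m0 = m_min - bin_width / 2.0
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - m0)

# Synthetic catalog with true b = 1.0: under Gutenberg-Richter, magnitudes
# above m_min are exponentially distributed with rate beta = b * ln(10).
random.seed(42)
b_true = 1.0
beta = b_true * math.log(10)
mags = [2.0 + random.expovariate(beta) for _ in range(5000)]
b_est = b_value_utsu(mags, m_min=2.0)
```

    The estimator's sensitivity to m_min is visible directly in the formula: choosing m_min above the catalog's true completeness magnitude shifts M̄ − M0 and biases b.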

  17. Accurate Singular Values and Differential QD Algorithms

    DTIC Science & Technology

    1992-07-01

    Contents: the Cholesky algorithm; the quotient-difference (qd) algorithm; incorporation of shifts (shifted qd algorithms); effects of finite precision (error analysis overview; high relative accuracy in the presence of ...). The report shows that it was preferable to replace the DK zero-shift QR transform by two steps of zero-shift LR implemented in a qd (quotient-difference) format.

  18. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning

    PubMed Central

    Silva, Susana F.; Domingues, José Paulo

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed. PMID:29599938

  19. Accurate Rapid Lifetime Determination on Time-Gated FLIM Microscopy with Optical Sectioning.

    PubMed

    Silva, Susana F; Domingues, José Paulo; Morgado, António Miguel

    2018-01-01

    Time-gated fluorescence lifetime imaging microscopy (FLIM) is a powerful technique to assess the biochemistry of cells and tissues. When applied to living thick samples, it is hampered by the lack of optical sectioning and the need of acquiring many images for an accurate measurement of fluorescence lifetimes. Here, we report on the use of processing techniques to overcome these limitations, minimizing the acquisition time, while providing optical sectioning. We evaluated the application of the HiLo and the rapid lifetime determination (RLD) techniques for accurate measurement of fluorescence lifetimes with optical sectioning. HiLo provides optical sectioning by combining the high-frequency content from a standard image, obtained with uniform illumination, with the low-frequency content of a second image, acquired using structured illumination. Our results show that HiLo produces optical sectioning on thick samples without degrading the accuracy of the measured lifetimes. We also show that instrument response function (IRF) deconvolution can be applied with the RLD technique on HiLo images, greatly improving the accuracy of the measured lifetimes. These results open the possibility of using the RLD technique with pulsed diode laser sources to determine accurately fluorescence lifetimes in the subnanosecond range on thick multilayer samples, provided that offline processing is allowed.
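
    The standard two-gate RLD estimator at the heart of the technique is closed-form: for a mono-exponential decay sampled by two equal-width gates whose start times differ by Δt, τ = Δt / ln(D1/D2). A minimal sketch (the nanosecond gate timings are illustrative):

```python
import math

def rld_lifetime(d1, d2, gate_separation):
    """Two-gate rapid lifetime determination: tau = dt / ln(D1/D2),
    exact for a mono-exponential decay and two equal-width gates."""
    return gate_separation / math.log(d1 / d2)

# Simulate the gate integrals of I(t) = exp(-t / tau) over [t0, t0 + width].
tau_true, width, dt = 2.0, 1.0, 1.0   # ns (illustrative values)
gate = lambda t0: tau_true * (math.exp(-t0 / tau_true)
                              - math.exp(-(t0 + width) / tau_true))
tau_est = rld_lifetime(gate(0.0), gate(dt), dt)
```

    Because only two gated images are needed per lifetime, this is what keeps the acquisition time low; the IRF deconvolution discussed above corrects the bias this simple estimator incurs when the excitation pulse is not ideally short.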

  20. Hardwood log grading and lumber value

    Treesearch

    Allen W. Bratton

    1948-01-01

    For practically every commodity - beef, wool, grains, hides, lard, milk, eggs, etc. - there are standard grades based on quality. A price based on these standards is a fairly accurate measure of value. A person who deals with one of these commodities can use such price quotations in his business operations; in fact, he must. These quality specifications are essential...

  1. The value of body weight measurement to assess dehydration in children.

    PubMed

    Pruvost, Isabelle; Dubos, François; Chazard, Emmanuel; Hue, Valérie; Duhamel, Alain; Martinot, Alain

    2013-01-01

    Dehydration secondary to gastroenteritis is one of the most common reasons for office visits and hospital admissions. The indicator most commonly used to estimate dehydration status is acute weight loss. Post-illness weight gain is considered as the gold-standard to determine the true level of dehydration and is widely used to estimate weight loss in research. To determine the value of post-illness weight gain as a gold standard for acute dehydration, we conducted a prospective cohort study in which 293 children, aged 1 month to 2 years, with acute diarrhea were followed for 7 days during a 3-year period. The main outcome measures were an accurate pre-illness weight (if available within 8 days before the diarrhea), post-illness weight, and theoretical weight (predicted from the child's individual growth chart). Post-illness weight was measured for 231 (79%) and both theoretical and post-illness weights were obtained for 111 (39%). Only 62 (21%) had an accurate pre-illness weight. The correlation between post-illness and theoretical weight was excellent (0.978), but bootstrapped linear regression analysis showed that post-illness weight underestimated theoretical weight by 0.48 kg (95% CI: 0.06-0.79, p<0.02). The mean difference in the fluid deficit calculated was 4.0% of body weight (95% CI: 3.2-4.7, p<0.0001). Theoretical weight overestimated accurate pre-illness weight by 0.21 kg (95% CI: 0.08-0.34, p = 0.002). Post-illness weight underestimated pre-illness weight by 0.19 kg (95% CI: 0.03-0.36, p = 0.02). The prevalence of 5% dehydration according to post-illness weight (21%) was significantly lower than the prevalence estimated by either theoretical weight (60%) or clinical assessment (66%, p<0.0001). These data suggest that post-illness weight is of little value as a gold standard to determine the true level of dehydration. The performance of dehydration signs or scales determined by using post-illness weight as a gold standard has to be reconsidered.

  2. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations.

    PubMed

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-15

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  3. Filter method without boundary-value condition for simultaneous calculation of eigenfunction and eigenvalue of a stationary Schrödinger equation on a grid.

    PubMed

    Nurhuda, M; Rouf, A

    2017-09-01

    The paper presents a method for simultaneous computation of eigenfunction and eigenvalue of the stationary Schrödinger equation on a grid, without imposing boundary-value condition. The method is based on the filter operator, which selects the eigenfunction from wave packet at the rate comparable to δ function. The efficacy and reliability of the method are demonstrated by comparing the simulation results with analytical or numerical solutions obtained by using other methods for various boundary-value conditions. It is found that the method is robust, accurate, and reliable. Further prospects of the filter method for simulating the Schrödinger equation in higher-dimensional spaces will also be highlighted.

  4. WR 20a Is an Eclipsing Binary: Accurate Determination of Parameters for an Extremely Massive Wolf-Rayet System

    NASA Astrophysics Data System (ADS)

    Bonanos, A. Z.; Stanek, K. Z.; Udalski, A.; Wyrzykowski, L.; Żebruń, K.; Kubiak, M.; Szymański, M. K.; Szewczyk, O.; Pietrzyński, G.; Soszyński, I.

    2004-08-01

    We present a high-precision I-band light curve for the Wolf-Rayet binary WR 20a, obtained as a subproject of the Optical Gravitational Lensing Experiment. Rauw et al. have recently presented spectroscopy for this system, strongly suggesting extremely large minimum masses of 70.7+/-4.0 and 68.8+/-3.8 Msolar for the component stars of the system, with the exact values depending strongly on the period of the system. We detect deep eclipses of about 0.4 mag in the light curve of WR 20a, confirming and refining the suspected period of P=3.686 days and deriving an inclination angle of i=74.5d+/-2.0d. Using these photometric data and the radial velocity data of Rauw et al., we derive the masses for the two components of WR 20a to be 83.0+/-5.0 and 82.0+/-5.0 Msolar. Therefore, WR 20a is confirmed to consist of two extremely massive stars and to be the most massive binary known with an accurate mass determination. Based on observations obtained with the 1.3 m Warsaw telescope at Las Campanas Observatory, which is operated by the Carnegie Institute of Washington.

  5. 7 CFR 765.353 - Determining market value.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 7 2010-01-01 2010-01-01 false Determining market value. 765.353 Section 765.353... Determining market value. (a) Security proposed for disposition. (1) The Agency will obtain an appraisal of... estimated value is less than $25,000. (b) Security remaining after disposition. The Agency will obtain an...

  6. The value of innovation under value-based pricing

    PubMed Central

    Moreno, Santiago G.; Ray, Joshua A.

    2016-01-01

    Objective The role of cost-effectiveness analysis (CEA) in incentivizing innovation is controversial. Critics of CEA argue that its use for pricing purposes disregards the ‘value of innovation’ reflected in new drug development, whereas supporters of CEA highlight that the value of innovation is already accounted for. Our objective in this article is to outline the limitations of the conventional CEA approach, while proposing an alternative method of evaluation that captures the value of innovation more accurately. Method The adoption of a new drug benefits present and future patients (with cost implications) for as long as the drug is part of clinical practice. Incident patients and off-patent prices are identified as two key missing features preventing the conventional CEA approach from capturing 1) benefit to future patients and 2) future savings from off-patent prices. The proposed CEA approach incorporates these two features to derive the total lifetime value of an innovative drug (i.e., the value of innovation). Results The conventional CEA approach tends to underestimate the value of innovative drugs by disregarding the benefit to future patients and savings from off-patent prices. As a result, innovative drugs are underpriced, only allowing manufacturers to capture approximately 15% of the total value of innovation during the patent protection period. In addition to including the incidence population and off-patent price, the alternative approach proposes pricing new drugs by first negotiating the share of value of innovation to be appropriated by the manufacturer (>15%?) and payer (<85%?), in order to then identify the drug price that satisfies this condition. Conclusion We argue for a modification to the conventional CEA approach that integrates the total lifetime value of innovative drugs into CEA, by taking into account off-patent pricing and future patients. The proposed approach derives a price that allows manufacturers to capture an agreed share

  7. Accurate Binding Free Energy Predictions in Fragment Optimization.

    PubMed

    Steinbrecher, Thomas B; Dahlgren, Markus; Cappel, Daniel; Lin, Teng; Wang, Lingle; Krilov, Goran; Abel, Robert; Friesner, Richard; Sherman, Woody

    2015-11-23

    Predicting protein-ligand binding free energies is a central aim of computational structure-based drug design (SBDD)--improved accuracy in binding free energy predictions could significantly reduce costs and accelerate project timelines in lead discovery and optimization. The recent development and validation of advanced free energy calculation methods represents a major step toward this goal. Accurately predicting the relative binding free energy changes of modifications to ligands is especially valuable in the field of fragment-based drug design, since fragment screens tend to deliver initial hits of low binding affinity that require multiple rounds of synthesis to gain the requisite potency for a project. In this study, we show that a free energy perturbation protocol, FEP+, which was previously validated on drug-like lead compounds, is suitable for the calculation of relative binding strengths of fragment-sized compounds as well. We study several pharmaceutically relevant targets with a total of more than 90 fragments and find that the FEP+ methodology, which uses explicit solvent molecular dynamics and physics-based scoring with no parameters adjusted, can accurately predict relative fragment binding affinities. The calculations afford R(2)-values on average greater than 0.5 compared to experimental data and RMS errors of ca. 1.1 kcal/mol overall, demonstrating significant improvements over the docking and MM-GBSA methods tested in this work and indicating that FEP+ has the requisite predictive power to impact fragment-based affinity optimization projects.

  8. Accurate potential drop sheet resistance measurements of laser-doped areas in semiconductors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Heinrich, Martin (E-mail: mh.seris@gmail.com; NUS Graduate School for Integrative Science and Engineering, National University of Singapore, Singapore 117456); Kluska, Sven

    2014-10-07

    It is investigated how potential drop sheet resistance measurements of areas formed by laser-assisted doping in crystalline Si wafers are affected by typically occurring experimental factors like sample size, inhomogeneities, surface roughness, or coatings. Measurements are obtained with a collinear four point probe setup and a modified transfer length measurement setup to measure sheet resistances of laser-doped lines. Inhomogeneities in doping depth are observed from scanning electron microscope images and electron beam induced current measurements. It is observed that influences from sample size, inhomogeneities, surface roughness, and coatings can be neglected if certain preconditions are met. Guidelines are given on how to obtain accurate potential drop sheet resistance measurements on laser-doped regions.

  9. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y.; Drake, Steven K.; Gucek, Marjan; Sacks, David B.; Yu, Yi-Kuo

    2018-06-01

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 MS/MS publicly available data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.

  10. Rapid Classification and Identification of Multiple Microorganisms with Accurate Statistical Significance via High-Resolution Tandem Mass Spectrometry.

    PubMed

    Alves, Gelio; Wang, Guanghui; Ogurtsov, Aleksey Y; Drake, Steven K; Gucek, Marjan; Sacks, David B; Yu, Yi-Kuo

    2018-06-05

    Rapid and accurate identification and classification of microorganisms is of paramount importance to public health and safety. With the advance of mass spectrometry (MS) technology, the speed of identification can be greatly improved. However, the increasing number of microbes sequenced is complicating correct microbial identification even in a simple sample due to the large number of candidates present. To properly untwine candidate microbes in samples containing one or more microbes, one needs to go beyond apparent morphology or simple "fingerprinting"; to correctly prioritize the candidate microbes, one needs to have accurate statistical significance in microbial identification. We meet these challenges by using peptide-centric representations of microbes to better separate them and by augmenting our earlier analysis method that yields accurate statistical significance. Here, we present an updated analysis workflow that uses tandem MS (MS/MS) spectra for microbial identification or classification. We have demonstrated, using 226 publicly available MS/MS data files (each containing from 2500 to nearly 100,000 MS/MS spectra) and 4000 additional MS/MS data files, that the updated workflow can correctly identify multiple microbes at the genus and often the species level for samples containing more than one microbe. We have also shown that the proposed workflow computes accurate statistical significances, i.e., E values for identified peptides and unified E values for identified microbes. Our updated analysis workflow MiCId, a freely available software for Microorganism Classification and Identification, is available for download at https://www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads.html.
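
The E-values reported by the workflow can be related to p-values under the usual Poisson model of random database matches; the conversion below is a generic illustration of that relation, not code from MiCId:

```python
import math

def evalue_to_pvalue(evalue):
    """Generic E-value -> p-value conversion under a Poisson model of
    random-match counts: p = 1 - exp(-E). For small E, p is close to E."""
    return 1.0 - math.exp(-evalue)

print(evalue_to_pvalue(0.001))   # ≈ 0.0009995
print(evalue_to_pvalue(5.0))     # ≈ 0.9933 (many random matches expected)
```

For highly significant identifications (E much less than 1) the two quantities are numerically interchangeable, which is why E-values are often quoted directly.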

  11. Prognostic Value of Coronary Flow Reserve Obtained on Dobutamine Stress Echocardiography and its Correlation with Target Heart Rate.

    PubMed

    Abreu, José Sebastião de; Rocha, Eduardo Arrais; Machado, Isadora Sucupira; Parahyba, Isabelle O; Rocha, Thais Brito; Paes, Fernando José Villar Nogueira; Diogenes, Tereza Cristina Pinheiro; Abreu, Marília Esther Benevides de; Farias, Ana Gardenia Liberato Ponte; Carneiro, Marcia Maria; Paes, José Nogueira

    2017-05-01

    Normal coronary flow velocity reserve (CFVR) (≥ 2) obtained in the left anterior descending coronary artery (LAD) from transthoracic echocardiography is associated with a good prognosis, but there is no study correlating CFVR with submaximal target heart rate (HR). To evaluate the prognostic value of CFVR obtained in the LAD of patients with preserved (>50%) left ventricular ejection fraction (LVEF) who completed dobutamine stress echocardiography (DSE), considering target HR. Prospective study of patients with preserved LVEF and CFVR obtained in the LAD who completed DSE. In Group I (GI = 31), normal CFVR was obtained before achieving target HR, and, in Group II (GII = 28), after that. Group III (GIII = 24) reached target HR, but CFVR was abnormal. Death, acute coronary insufficiency, coronary intervention, coronary angiography without further intervention, and hospitalization were considered events. In 28 ± 4 months, there were 18 (21.6%) events: 6% (2/31) in GI, 18% (5/28) in GII, and 46% (11/24) in GIII. There were 4 (4.8%) deaths, 6 (7.2%) coronary interventions and 8 (9.6%) coronary angiographies without further intervention. In event-free survival by regression analysis, GIII had more events than GI (p < 0.001) and GII (p < 0.045), with no difference between GI and GII (p = 0.160). After adjustment, the only difference was between GIII and GI (p = 0.012). In patients with preserved LVEF who completed their DSE, normal CFVR obtained before achieving target HR was associated with better prognosis.

  12. How Accurate Are Transition States from Simulations of Enzymatic Reactions?

    PubMed Central

    2015-01-01

    The rate expression of traditional transition state theory (TST) assumes no recrossing of the transition state (TS) and thermal quasi-equilibrium between the ground state and the TS. Currently, it is not well understood to what extent these assumptions influence the nature of the activated complex obtained in traditional TST-based simulations of processes in the condensed phase in general and in enzymes in particular. Here we scrutinize these assumptions by characterizing the TSs for hydride transfer catalyzed by the enzyme Escherichia coli dihydrofolate reductase obtained using various simulation approaches. Specifically, we compare the TSs obtained with common TST-based methods and a dynamics-based method. Using a recently developed accurate hybrid quantum mechanics/molecular mechanics potential, we find that the TST-based and dynamics-based methods give considerably different TS ensembles. This discrepancy, which could be due to equilibrium solvation effects and the nature of the reaction coordinate employed and its motion, raises major questions about how to interpret the TSs determined by common simulation methods. We conclude that further investigation is needed to characterize the impact of various TST assumptions on the TS phase-space ensemble and on the reaction kinetics. PMID:24860275
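
The TST rate expression whose assumptions this record scrutinizes is, in its standard Eyring form (textbook material, not taken from the paper itself):

```latex
k_{\mathrm{TST}} \;=\; \kappa\,\frac{k_{\mathrm{B}}T}{h}\,
\exp\!\left(-\frac{\Delta G^{\ddagger}}{k_{\mathrm{B}}T}\right)
```

Traditional TST sets the transmission coefficient κ = 1, which is precisely the no-recrossing assumption; the thermal quasi-equilibrium between reactant and TS is what licenses the Boltzmann factor.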

  13. Accurate Determination of the Frequency Response Function of Submerged and Confined Structures by Using PZT-Patches†.

    PubMed

    Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias

    2017-03-22

    To accurately determine the dynamic response of a structure is of relevant interest in many engineering applications. Particularly, it is of paramount importance to determine the Frequency Response Function (FRF) for structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems as reported in many cases in the past. The utilization of classical and calibrated exciters such as instrumented hammers or shakers to determine the FRF in such structures can be very complex due to the confinement of the structure and because their use can disturb the boundary conditions affecting the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, could be a very good option. Nevertheless, the main drawback of these exciters is that their calibration as dynamic force transducers (relationship voltage/force) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These parameters, which have been experimentally determined, are then introduced in a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally proves the best excitation characteristics to also obtain the damping ratios, and proposes a procedure to fully determine the FRF. The method proposed here has been validated for the structure vibrating
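
The paper's characteristic FRF parameters come from a model updated with measurements; the generic H1 spectral estimator below shows the standard way an FRF is estimated from excitation and response records (function and signal names here are hypothetical, not the authors' code):

```python
import numpy as np
from scipy import signal

def frf_h1(x, y, fs, nperseg=1024):
    """H1 estimate of a frequency response function: the cross-spectral
    density of excitation x and response y divided by the auto-spectral
    density of the excitation."""
    f, pxy = signal.csd(x, y, fs=fs, nperseg=nperseg)
    _, pxx = signal.welch(x, fs=fs, nperseg=nperseg)
    return f, pxy / pxx

# Sanity check: a pure gain of 2 should give |H| = 2 at every frequency.
rng = np.random.default_rng(0)
x = rng.standard_normal(16384)
f, h = frf_h1(x, 2.0 * x, fs=1000.0)
print(np.allclose(np.abs(h), 2.0))  # True
```

The H1 estimator averages out response-side noise, which is why it is the usual choice when the excitation signal is clean.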

  14. Seven Golden Rules for heuristic filtering of molecular formulas obtained by accurate mass spectrometry

    PubMed Central

    Kind, Tobias; Fiehn, Oliver

    2007-01-01

    Background Structure elucidation of unknown small molecules by mass spectrometry is a challenge despite advances in instrumentation. The first crucial step is to obtain correct elemental compositions. In order to automatically constrain the thousands of possible candidate structures, rules need to be developed to select the most likely and chemically correct molecular formulas. Results An algorithm for filtering molecular formulas is derived from seven heuristic rules: (1) restrictions for the number of elements, (2) LEWIS and SENIOR chemical rules, (3) isotopic patterns, (4) hydrogen/carbon ratios, (5) element ratio of nitrogen, oxygen, phosphorus, and sulphur versus carbon, (6) element ratio probabilities and (7) presence of trimethylsilylated compounds. Formulas are ranked according to their isotopic patterns and subsequently constrained by presence in public chemical databases. The seven rules were developed on 68,237 existing molecular formulas and were validated in four experiments. First, 432,968 formulas covering five million PubChem database entries were checked for consistency. Only 0.6% of these compounds did not pass all rules. Next, the rules were shown to effectively reduce the complement of all eight billion theoretically possible C, H, N, S, O, P-formulas up to 2000 Da to only 623 million most probable elemental compositions. Third, 6,000 pharmaceutical, toxic and natural compounds were selected from DrugBank, TSCA and DNP databases. The correct formulas were retrieved as top hit at 80–99% probability when assuming data acquisition with complete resolution of unique compounds, 5% absolute isotope ratio deviation and 3 ppm mass accuracy. Last, some exemplary compounds were analyzed by Fourier transform ion cyclotron resonance mass spectrometry and by gas chromatography-time of flight mass spectrometry. In each case, the correct formula was ranked as top hit when combining the seven rules with database queries. Conclusion The seven rules enable an
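
Rules 4 and 5 (hydrogen/carbon and heteroatom/carbon ratio checks) lend themselves to a direct sketch. The numeric limits below are approximate "common range" values in the spirit of the paper and may differ from the published thresholds:

```python
# Sketch of rules 4 and 5 (element-ratio checks). The ranges are
# approximate common-range limits, assumed for illustration.
COMMON_RANGES = {
    "H/C": (0.2, 3.1),
    "N/C": (0.0, 1.3),
    "O/C": (0.0, 1.2),
    "P/C": (0.0, 0.3),
    "S/C": (0.0, 0.8),
}

def passes_ratio_rules(formula):
    """formula: dict of element counts, e.g. {'C': 6, 'H': 12, 'O': 6}."""
    c = formula.get("C", 0)
    if c == 0:
        return False
    for elem in ("H", "N", "O", "P", "S"):
        lo, hi = COMMON_RANGES[f"{elem}/C"]
        if not (lo <= formula.get(elem, 0) / c <= hi):
            return False
    return True

print(passes_ratio_rules({"C": 6, "H": 12, "O": 6}))  # glucose: True
print(passes_ratio_rules({"C": 1, "H": 20}))          # implausible H/C: False
```

A real implementation would layer the remaining rules (element counts, LEWIS/SENIOR valence checks, isotopic patterns) on top of this filter before any database lookup.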

  15. A Hermite WENO reconstruction for fourth order temporal accurate schemes based on the GRP solver for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Du, Zhifang; Li, Jiequan

    2018-02-01

    This paper develops a new fifth order accurate Hermite WENO (HWENO) reconstruction method for hyperbolic conservation schemes in the framework of the two-stage fourth order accurate temporal discretization in Li and Du (2016) [13]. Instead of computing the first moment of the solution additionally in the conventional HWENO or DG approach, we can directly take the interface values, which are already available in the numerical flux construction using the generalized Riemann problem (GRP) solver, to approximate the first moment. The resulting scheme is fourth order temporal accurate by only invoking the HWENO reconstruction twice so that it becomes more compact. Numerical experiments show that such compactness makes significant impact on the resolution of nonlinear waves.
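
The two-stage fourth-order temporal discretization of Li and Du on which the reconstruction is built advances u_t = L(u) in two steps; the coefficients below are sketched from the standard form of that scheme and should be checked against the cited paper:

```latex
u^{n+1/2} = u^{n} + \frac{\Delta t}{2}\,L(u^{n}) + \frac{\Delta t^{2}}{8}\,\partial_t L(u^{n}),
\qquad
u^{n+1} = u^{n} + \Delta t\,L(u^{n})
        + \frac{\Delta t^{2}}{6}\Bigl[\partial_t L(u^{n}) + 2\,\partial_t L(u^{n+1/2})\Bigr],
```

where the time derivative of the spatial operator, ∂_t L, is exactly what the GRP solver supplies; this is why only two stages suffice for fourth-order accuracy.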

  16. An approach for accurate simulation of liquid mixing in a T-shaped micromixer.

    PubMed

    Matsunaga, Takuya; Lee, Ho-Joon; Nishino, Koichi

    2013-04-21

    In this paper, we propose a new computational method for efficient evaluation of the fluid mixing behaviour in a T-shaped micromixer with a rectangular cross section at high Schmidt number under steady state conditions. Our approach enables a low-cost high-quality simulation based on tracking of fluid particles for convective fluid mixing and posterior solving of a model of the species equation for molecular diffusion. The examined parameter range is Re = 1.33 × 10⁻² to 240 at Sc = 3600. The proposed method is shown to simulate well the mixing quality even in the engulfment regime, where the ordinary grid-based simulation is not able to obtain accurate solutions with affordable mesh sizes due to the numerical diffusion at high Sc. The obtained results agree well with a backward random-walk Monte Carlo simulation, by which the accuracy of the proposed method is verified. For further investigation of the characteristics of the proposed method, the Sc dependency is examined in a wide range of Sc from 10 to 3600 at Re = 200. The study reveals that the model discrepancy error emerges more significantly in the concentration distribution at lower Sc, while the resulting mixing quality is accurate over the entire range.
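
The random-walk Monte Carlo used for verification moves fluid particles by a convective step plus a Brownian increment whose variance encodes molecular diffusion; a minimal 1-D sketch of the diffusive part (parameter values are hypothetical):

```python
import numpy as np

def random_walk_diffusion(n_particles, d_coeff, dt, n_steps, seed=0):
    """1-D random walk: each step adds a Gaussian displacement with
    variance 2*D*dt, the discrete analogue of molecular diffusion."""
    rng = np.random.default_rng(seed)
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        x += np.sqrt(2.0 * d_coeff * dt) * rng.standard_normal(n_particles)
    return x

# After time t = n_steps * dt, the particle variance should approach 2*D*t.
x = random_walk_diffusion(100_000, d_coeff=1e-9, dt=1e-3, n_steps=100)
t = 100 * 1e-3
print(abs(x.var() / (2.0 * 1e-9 * t) - 1.0) < 0.05)  # True (within sampling error)
```

Because the Brownian step is exact in distribution, this approach has no numerical diffusion, which is why it serves as a reference at high Schmidt number where grid methods struggle.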

  17. The metallicity of M4: Accurate spectroscopic fundamental parameters for four giants

    NASA Technical Reports Server (NTRS)

    Drake, J. J.; Smith, V. V.; Suntzeff, N. B.

    1994-01-01

    High-quality spectra, covering the wavelength range 5480 to 7080 A, have been obtained for four giant stars in the intermediate-metallicity CN-bimodal globular cluster M4 (NGC 6121). We have employed a model atmosphere analysis that is entirely independent from cluster parameters, such as distance, age, and reddening, in order to derive accurate values for the stellar parameters effective temperature, surface gravity, and microturbulence, and for the abundance of iron relative to the Sun, [Fe/H], and of calcium, [Ca/H], for each of the four stars. Detailed radiative transfer and statistical equilibrium calculations carried out for iron and calcium suggest that departures from local thermodynamic equilibrium are not significant for the purposes of our analysis. The spectroscopically derived effective temperatures for our program stars are hotter by about 200 K than existing photometric calibrations suggest. We conclude that this is due partly to the uncertain reddening of M4 and to the existing photometric temperature calibration for red giants being too cool by about 100 K. Comparison of our spectroscopic and existing photometric temperatures supports the presence of a significant east-west gradient in the reddening across M4. Our derived iron abundances are slightly higher than previous high-resolution studies suggested; the differences are most probably due to the different temperature scale and choice of microturbulent velocities adopted by earlier workers. The resulting value for the metallicity of M4 is [Fe/H]_M4 = -1.05 ± 0.15. Based on this result, we suggest that metallicities derived in previous high-dispersion globular cluster abundance analyses could be too low by 0.2 to 0.3 dex. Our calcium abundances suggest an enhancement of calcium, an alpha element, over iron, relative to the Sun, in M4 of [Ca/Fe] = 0.23.
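
The bracket notation used in this record is the standard logarithmic abundance relative to the Sun:

```latex
[\mathrm{Fe}/\mathrm{H}] \;=\;
\log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\star}
\;-\;
\log_{10}\!\left(\frac{N_{\mathrm{Fe}}}{N_{\mathrm{H}}}\right)_{\!\odot}
```

so [Fe/H] = -1.05 corresponds to an iron abundance roughly 10^1.05 ≈ 11 times below solar, and a 0.2 to 0.3 dex shift is a factor of about 1.6 to 2.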

  18. An accurate automated technique for quasi-optics measurement of the microwave diagnostics for fusion plasma

    NASA Astrophysics Data System (ADS)

    Hu, Jianqiang; Liu, Ahdi; Zhou, Chu; Zhang, Xiaohui; Wang, Mingyuan; Zhang, Jin; Feng, Xi; Li, Hong; Xie, Jinlin; Liu, Wandong; Yu, Changxuan

    2017-08-01

    A new integrated technique for fast and accurate measurement of the quasi-optics, especially for the microwave/millimeter wave diagnostic systems of fusion plasma, has been developed. Using the LabVIEW-based comprehensive scanning system, we can realize not only automatic but also fast and accurate measurement, which will help to eliminate the effects of temperature drift and standing wave/multi-reflection. With the Matlab-based asymmetric two-dimensional Gaussian fitting method, all the desired parameters of the microwave beam can be obtained. This technique can be used in the design and testing of microwave diagnostic systems such as reflectometers and the electron cyclotron emission imaging diagnostic systems of the Experimental Advanced Superconducting Tokamak.
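
An asymmetric two-dimensional Gaussian fit of the kind described (theirs is Matlab-based) can be sketched in Python with `scipy.optimize.curve_fit`; the axis-aligned model and the beam parameters below are illustrative assumptions, and the real method may include a rotation angle:

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(xy, amp, x0, y0, wx, wy):
    """Axis-aligned asymmetric 2-D Gaussian beam profile with different
    1/e^2 half-widths wx and wy (no rotation term)."""
    x, y = xy
    return amp * np.exp(-2 * ((x - x0) / wx) ** 2 - 2 * ((y - y0) / wy) ** 2)

# Synthetic noiseless beam profile on an 81x81 grid.
x, y = np.meshgrid(np.linspace(-10, 10, 81), np.linspace(-10, 10, 81))
true = (1.0, 0.5, -0.3, 3.0, 1.5)
z = gauss2d((x.ravel(), y.ravel()), *true)

popt, _ = curve_fit(gauss2d, (x.ravel(), y.ravel()), z,
                    p0=(0.8, 0.0, 0.0, 2.0, 2.0))
print(np.allclose(popt, true, atol=1e-4))  # True
```

Fitting the full 2-D surface, rather than two 1-D cuts, is what makes the width estimates robust to beam asymmetry.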

  19. Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation

    PubMed Central

    Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon

    2005-01-01

    Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRET-induced signal, and for differences in the detection efficiency and quantum yield of the probes. ALEX yields accurate FRET independent of instrumental factors, such as excitation intensity or detector alignment. Using DNA fragments, we showed that ALEX-based distances agree well with predictions from a cylindrical model of DNA; ALEX-based distances fit better to theory than distances obtained at the ensemble level. Distance measurements within transcription complexes agreed well with ensemble-FRET measurements, and with structural models based on ensemble-FRET and x-ray crystallography. ALEX can benefit structural analysis of biomolecules, especially when such molecules are inaccessible to conventional structural methods due to heterogeneity or transient nature. PMID:15653725
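
The corrections described in this record (donor leakage, direct acceptor excitation, and the detection-efficiency/quantum-yield factor γ) enter the accurate FRET efficiency in the generic form below; the coefficient values are hypothetical, not the paper's calibrations:

```python
def corrected_fret(f_dem_dex, f_aem_dex, f_aem_aex, leakage, direct, gamma):
    """Accurate FRET efficiency from ALEX-type photon streams: subtract
    donor leakage and acceptor direct excitation from the FRET-induced
    signal, then weight the donor signal by gamma."""
    f_fret = f_aem_dex - leakage * f_dem_dex - direct * f_aem_aex
    return f_fret / (f_fret + gamma * f_dem_dex)

# With no crosstalk and gamma = 1, equal donor/acceptor counts give E = 0.5.
print(corrected_fret(1000, 1000, 500, 0.0, 0.0, 1.0))  # 0.5
```

Without these corrections the quantity measured is only a proximity ratio, which depends on the instrument; the corrected E is what can be converted to a distance via the Förster radius.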

  20. A hybrid method for accurate star tracking using star sensor and gyros.

    PubMed

    Lu, Jiazhen; Yang, Lie; Zhang, Hao

    2017-10-01

    Star tracking is the primary operating mode of star sensors. To improve tracking accuracy and efficiency, a hybrid method using a star sensor and gyroscopes is proposed in this study. In this method, the dynamic conditions of an aircraft are determined first by the estimated angular acceleration. Under low dynamic conditions, the star sensor is used to measure the star vector and the vector difference method is adopted to estimate the current angular velocity. Under high dynamic conditions, the angular velocity is obtained by the calibrated gyros. The star position is predicted based on the estimated angular velocity and calibrated gyros using the star vector measurements. The results of the semi-physical experiment show that this hybrid method is accurate and feasible. In contrast with the star vector difference and gyro-assisted methods, the star position prediction result of the hybrid method is verified to be more accurate in two different cases under the given random noise of the star centroid.
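
For small time steps, a vector-difference estimate of angular velocity can be posed as a linear least-squares problem over the observed star unit vectors; this is one plausible small-angle formulation, not necessarily the authors' exact algorithm:

```python
import numpy as np

def skew(v):
    """3x3 skew-symmetric matrix so that skew(a) @ b == np.cross(a, b)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def angular_velocity_from_stars(v0, v1, dt):
    """Small-angle least squares: dv ≈ (omega × v) * dt for each star
    unit vector, which is linear in omega."""
    A = np.vstack([-skew(v) * dt for v in v0])   # -[v]x * dt maps omega to dv
    b = (v1 - v0).ravel()
    omega, *_ = np.linalg.lstsq(A, b, rcond=None)
    return omega

# Synthetic check: propagate three star vectors with a known rate.
omega_true = np.array([0.01, -0.02, 0.005])           # rad/s (hypothetical)
dt = 0.01
v0 = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
v1 = v0 + np.cross(omega_true, v0) * dt               # first-order propagation
print(np.allclose(angular_velocity_from_stars(v0, v1, dt), omega_true))  # True
```

At least two non-parallel star vectors are needed, since each star constrains only the component of ω perpendicular to its line of sight.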

  1. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
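
A pairwise maximum entropy model of the kind fitted here assigns each binary activity pattern a Boltzmann probability determined by region-specific biases h_i and pairwise couplings J_ij. A toy sketch with exact enumeration (hypothetical parameters; real fits estimate h and J from the fMRI data, e.g. by gradient methods):

```python
import itertools
import numpy as np

def pairwise_mem_probabilities(h, J):
    """Exact Boltzmann distribution of the pairwise maximum entropy
    (Ising-type) model: E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j,
    with s_i in {-1, +1} and J symmetric with zero diagonal."""
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    energies = -(states @ h) - 0.5 * np.einsum("ki,ij,kj->k", states, J, states)
    weights = np.exp(-energies)
    return states, weights / weights.sum()

# Toy example with 3 "regions" (hypothetical parameters).
h = np.array([0.1, -0.2, 0.0])
J = np.array([[0.0, 0.5, 0.0],
              [0.5, 0.0, -0.3],
              [0.0, -0.3, 0.0]])
states, p = pairwise_mem_probabilities(h, J)
print(np.isclose(p.sum(), 1.0))  # True
```

Exact enumeration is only feasible for small N (2^N states); for whole-brain parcellations, fitting relies on sampling or mean-field approximations.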

  2. Development of improved enzyme-based and lateral flow immunoassays for rapid and accurate serodiagnosis of canine brucellosis.

    PubMed

    Cortina, María E; Novak, Analía; Melli, Luciano J; Elena, Sebastián; Corbera, Natalia; Romero, Juan E; Nicola, Ana M; Ugalde, Juan E; Comerci, Diego J; Ciocchini, Andrés E

    2017-09-01

    Brucellosis is a widespread zoonotic disease caused by Brucella spp. Brucella canis is the etiological agent of canine brucellosis, a disease that can lead to sterility in bitches and dogs causing important economic losses in breeding kennels. Early and accurate diagnosis of canine brucellosis is central to control the disease and lower the risk of transmission to humans. Here, we develop and validate enzyme and lateral flow immunoassays for improved serodiagnosis of canine brucellosis using as antigen the B. canis rough lipopolysaccharide (rLPS). The method used to obtain the rLPS allowed us to produce more homogeneous batches of the antigen that facilitated the standardization of the assays. To validate the assays, 284 serum samples obtained from naturally infected dogs and healthy animals were analyzed. For the B. canis-iELISA and B. canis-LFIA the diagnostic sensitivity was of 98.6%, and the specificity 99.5% and 100%, respectively. We propose the implementation of the B. canis-LFIA as a screening test in combination with the highly accurate laboratory g-iELISA. The B. canis-LFIA is a rapid, accurate and easy to use test, characteristics that make it ideal for the serological surveillance of canine brucellosis in the field or veterinary laboratories. Finally, a blind study including 1040 serum samples obtained from urban dogs showed a prevalence higher than 5% highlighting the need of new diagnostic tools for a more effective control of the disease in dogs and therefore to reduce the risk of transmission of this zoonotic pathogen to humans.
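
The reported diagnostic sensitivity and specificity are simple ratios over the validation panel; the counts below are hypothetical, chosen only to be consistent with the 284-sample panel and the reported 98.6% sensitivity:

```python
def sens_spec(tp, fn, tn, fp):
    """Diagnostic sensitivity and specificity from assay counts:
    sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical split: 71 of 72 infected dogs test positive,
# 212 of 212 healthy dogs test negative.
se, sp = sens_spec(tp=71, fn=1, tn=212, fp=0)
print(round(se, 3), sp)  # 0.986 1.0
```

Pairing a high-sensitivity field test with a high-specificity laboratory test, as the authors propose, is the standard screen-then-confirm design.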

  3. Accurate sub-millimetre rest frequencies for HOCO+ and DOCO+ ions

    NASA Astrophysics Data System (ADS)

    Bizzocchi, L.; Lattanzi, V.; Laas, J.; Spezzano, S.; Giuliano, B. M.; Prudenzano, D.; Endres, C.; Sipilä, O.; Caselli, P.

    2017-06-01

    Context. HOCO+ is a polar molecule that represents a useful proxy for its parent molecule CO2, which is not directly observable in the cold interstellar medium. This cation has been detected towards several lines of sight, including massive star forming regions, protostars, and cold cores. Despite the obvious astrochemical relevance, protonated CO2 and its deuterated variant, DOCO+, still lack an accurate spectroscopic characterisation. Aims: The aim of this work is to extend the study of the ground-state pure rotational spectra of HOCO+ and DOCO+ well into the sub-millimetre region. Methods: Ground-state transitions have been recorded in the laboratory using a frequency-modulation absorption spectrometer equipped with a free-space glow-discharge cell. The ions were produced in a low-density, magnetically confined plasma generated in a suitable gas mixture. The ground-state spectra of HOCO+ and DOCO+ have been investigated in the 213-967 GHz frequency range; 94 new rotational transitions have been detected. Additionally, 46 line positions taken from the literature have been accurately remeasured. Results: The newly measured lines have significantly enlarged the available data sets for HOCO+ and DOCO+, thus enabling the determination of highly accurate rotational and centrifugal distortion parameters. Our analysis shows that all HOCO+ lines with Ka ≥ 3 are perturbed by a ro-vibrational interaction that couples the ground state with the v5 = 1 vibrationally excited state. This resonance has been explicitly treated in the analysis in order to obtain molecular constants with clear physical meaning. Conclusions: The improved sets of spectroscopic parameters provide enhanced lists of very accurate sub-millimetre rest frequencies of HOCO+ and DOCO+ for astrophysical applications. These new data challenge a recent tentative identification of DOCO+ towards a pre-stellar core. Supplementary tables are only available at the CDS via anonymous ftp to http

  4. Accurate predictions of iron redox state in silicate glasses: A multivariate approach using X-ray absorption spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dyar, M. Darby; McCanta, Molly; Breves, Elly

    2016-03-01

    Pre-edge features in the K absorption edge of X-ray absorption spectra are commonly used to predict Fe3+ valence state in silicate glasses. However, this study shows that using the entire spectral region from the pre-edge into the extended X-ray absorption fine-structure region provides more accurate results when combined with multivariate analysis techniques. The least absolute shrinkage and selection operator (lasso) regression technique yields %Fe3+ values that are accurate to ±3.6% absolute when the full spectral region is employed. This method can be used across a broad range of glass compositions, is easily automated, and is demonstrated to yield accurate results from different synchrotrons. It will enable future studies involving X-ray mapping of redox gradients on standard thin sections at 1 × 1 μm pixel sizes.
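
The lasso regression named above minimizes a squared-error term plus an L1 penalty that drives most spectral weights to zero. A minimal coordinate-descent sketch on synthetic data (not the authors' pipeline; the function and data are illustrative):

```python
import numpy as np

def lasso_cd(X, y, alpha, n_iter=200):
    """Coordinate descent for (1/2n)*||y - Xw||^2 + alpha*||w||_1,
    using the soft-thresholding update for each coordinate."""
    n, p = X.shape
    w = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_iter):
        for j in range(p):
            resid = y - X @ w + X[:, j] * w[j]   # residual excluding feature j
            rho = X[:, j] @ resid
            w[j] = np.sign(rho) * max(abs(rho) - alpha * n, 0.0) / col_sq[j]
    return w

# Sparse recovery check on noiseless synthetic "spectra".
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 10))
w_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0, 0, 0])
y = X @ w_true
w = lasso_cd(X, y, alpha=0.01)
print(np.abs(w - w_true).max() < 0.05)  # True
```

The sparsity is the point for spectroscopy: of thousands of energy channels, only the informative ones retain nonzero weight, which also makes the model transferable between synchrotrons.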

  6. Validity and Reliability of Scores Obtained on Multiple-Choice Questions: Why Functioning Distractors Matter

    ERIC Educational Resources Information Center

    Ali, Syed Haris; Carr, Patrick A.; Ruit, Kenneth G.

    2016-01-01

    Plausible distractors are important for accurate measurement of knowledge via multiple-choice questions (MCQs). This study demonstrates the impact of higher distractor functioning on validity and reliability of scores obtained on MCQs. Free-response (FR) and MCQ versions of a neurohistology practice exam were given to four cohorts of Year 1 medical…

  7. Economic Goods and Services: Economic and Non-Economic Methods for Valuing

    EPA Science Inventory

    One of the greatest problems that global society faces in the 21st century is to accurately determine the value of the work contributions that the environment makes to support society. This work can be valued by economic methods, both market and nonmarket, as well as by accounti...

  8. Critical assessment of pediatric neurosurgery patient/parent educational information obtained via the Internet.

    PubMed

    Garcia, Michael; Daugherty, Christopher; Ben Khallouq, Bertha; Maugans, Todd

    2018-05-01

    OBJECTIVE The Internet is used frequently by patients and family members to acquire information about pediatric neurosurgical conditions. The sources, nature, accuracy, and usefulness of this information have not been examined recently. The authors analyzed the results from searches of 10 common pediatric neurosurgical terms using a novel scoring test to assess the value of the educational information obtained. METHODS Google and Bing searches were performed for 10 common pediatric neurosurgical topics (concussion, craniosynostosis, hydrocephalus, pediatric brain tumor, pediatric Chiari malformation, pediatric epilepsy surgery, pediatric neurosurgery, plagiocephaly, spina bifida, and tethered spinal cord). The first 10 "hits" obtained with each search engine were analyzed using the Currency, Relevance, Authority, Accuracy, and Purpose (CRAAP) test, which assigns a numerical score in each of 5 domains. Agreement between results was assessed for 1) concurrent searches with Google and Bing; 2) Google searches over time (6 months apart); 3) Google searches using mobile and PC platforms concurrently; and 4) searches using privacy settings. Readability was assessed with an online analytical tool. RESULTS Google and Bing searches yielded information with similar CRAAP scores (mean 72% and 75%, respectively), but with frequently differing results (58% concordance/matching results). There was a high level of agreement (72% concordance) over time for Google searches and also between searches using general and privacy settings (92% concordance). Government sources scored the best in both CRAAP score and readability. Hospitals and universities were the most prevalent sources, but these sources had the lowest CRAAP scores, due in part to an abundance of self-marketing. The CRAAP scores for mobile and desktop platforms did not differ significantly (p = 0.49). CONCLUSIONS Google and Bing searches yielded useful educational information, using either mobile or PC platforms. Most

  9. Experimental and theoretical oscillator strengths of Mg I for accurate abundance analysis

    NASA Astrophysics Data System (ADS)

    Pehlivan Rhodin, A.; Hartman, H.; Nilsson, H.; Jönsson, P.

    2017-02-01

    Context. With the aid of stellar abundance analysis, it is possible to study the galactic formation and evolution. Magnesium is an important element to trace the α-element evolution in our Galaxy. For chemical abundance analysis, such as magnesium abundance, accurate and complete atomic data are essential. Inaccurate atomic data lead to uncertain abundances and prevent discrimination between different evolution models. Aims: We study the spectrum of neutral magnesium from laboratory measurements and theoretical calculations. Our aim is to improve the oscillator strengths (f-values) of Mg I lines and to create a complete set of accurate atomic data, particularly for the near-IR region. Methods: We derived oscillator strengths by combining the experimental branching fractions with radiative lifetimes reported in the literature and computed in this work. A hollow cathode discharge lamp was used to produce free atoms in the plasma and a Fourier transform spectrometer recorded the intensity-calibrated high-resolution spectra. In addition, we performed theoretical calculations using the multiconfiguration Hartree-Fock program ATSP2K. Results: This project provides a set of experimental and theoretical oscillator strengths. We derived 34 experimental oscillator strengths. Except for the Mg I optical triplet lines (3p ³P°₀,₁,₂–4s ³S₁), these oscillator strengths are measured for the first time. The theoretical oscillator strengths are in very good agreement with the experimental data and complement the missing transitions of the experimental data up to n = 7 from even and odd parity terms. We present an evaluated set of oscillator strengths, gf, with uncertainties as small as 5%. The new oscillator strength values for the Mg I optical triplet lines (3p ³P°₀,₁,₂–4s ³S₁) are 0.08 dex larger than the previous measurements.
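
Deriving an f-value from a branching fraction and an upper-level lifetime combines A_ul = BF/τ with the standard A-to-f conversion; the transition parameters below are hypothetical, and only the formula structure is the point:

```python
def oscillator_strength(branching_fraction, lifetime_s, wavelength_angstrom,
                        g_upper, g_lower):
    """f-value from a branching fraction and an upper-level lifetime:
    A_ul = BF / tau, then f_lu = 1.4992e-16 * lambda[A]^2 * (g_u/g_l) * A_ul."""
    a_ul = branching_fraction / lifetime_s          # transition probability, s^-1
    return 1.4992e-16 * wavelength_angstrom ** 2 * (g_upper / g_lower) * a_ul

# Hypothetical transition: BF = 0.8, tau = 10 ns, 5000 A, g_u/g_l = 3.
f = oscillator_strength(0.8, 10e-9, 5000.0, 3, 1)
print(round(f, 3))
```

This is why the experimental uncertainty of an f-value is driven by both the branching-fraction measurement and the lifetime it is anchored to.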

  10. Research on the Rapid and Accurate Positioning and Orientation Approach for Land Missile-Launching Vehicle

    PubMed Central

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-01-01

Rapidly obtaining a land vehicle’s accurate position, azimuth and attitude is significant for the combat effectiveness of vehicle-based weapons. In this paper, a new approach to acquiring a vehicle’s accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle’s accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and from no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), so it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm’s iterative initial value, so it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system’s working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″ respectively in less than 3 min. PMID:26492249

  11. Research on the rapid and accurate positioning and orientation approach for land missile-launching vehicle.

    PubMed

    Li, Kui; Wang, Lei; Lv, Yanhong; Gao, Pengyu; Song, Tianxiao

    2015-10-20

Rapidly obtaining a land vehicle's accurate position, azimuth and attitude is significant for the combat effectiveness of vehicle-based weapons. In this paper, a new approach to acquiring a vehicle's accurate position and orientation is proposed. It uses a biaxial optical detection platform (BODP) to aim at and lock onto no fewer than three pre-set cooperative targets, whose accurate positions are measured beforehand. It then calculates the vehicle's accurate position, azimuth and attitude from the rough position and orientation provided by vehicle-based navigation systems and from no fewer than three pairs of azimuth and pitch angles measured by the BODP. The proposed approach does not depend on the Global Navigation Satellite System (GNSS), so it is autonomous and difficult to interfere with. Meanwhile, it only needs a rough position and orientation as the algorithm's iterative initial value, so it does not impose high performance requirements on the Inertial Navigation System (INS), odometer and other vehicle-based navigation systems, even in high-precision applications. This paper describes the system's working procedure, presents the theoretical derivation of the algorithm, and then verifies its effectiveness through simulation and vehicle experiments. The simulation and experimental results indicate that the proposed approach can achieve positioning and orientation accuracies of 0.2 m and 20″ respectively in less than 3 min.
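The iterative scheme described (a rough initial pose refined against angle observations to known targets) can be illustrated with a simplified 2-D analogue: position and azimuth recovered from azimuth-only observations of three known targets by Gauss-Newton. All coordinates here are invented; the paper's full 3-D algorithm with pitch angles and attitude is more involved.

```python
import numpy as np

# Known cooperative target positions (hypothetical, in metres).
targets = np.array([[100.0, 0.0], [0.0, 120.0], [-90.0, -60.0]])

def predicted_azimuths(x, y, heading):
    """Azimuth of each target as seen from pose (x, y, heading)."""
    return np.arctan2(targets[:, 1] - y, targets[:, 0] - x) - heading

truth = (3.0, -2.0, 0.1)                   # ground-truth pose
meas = predicted_azimuths(*truth)          # noise-free "measurements"

# Gauss-Newton from a rough initial value (the approach above likewise
# needs only a coarse INS/odometer estimate to start iterating).
est = np.array([0.0, 0.0, 0.0])
for _ in range(20):
    r = meas - predicted_azimuths(*est)
    r = np.arctan2(np.sin(r), np.cos(r))   # wrap residuals to (-pi, pi]
    dx = targets[:, 0] - est[0]
    dy = targets[:, 1] - est[1]
    d2 = dx**2 + dy**2
    # Jacobian of predicted azimuth w.r.t. (x, y, heading).
    J = np.column_stack([dy / d2, -dx / d2, -np.ones(len(targets))])
    est += np.linalg.lstsq(J, r, rcond=None)[0]

print(np.round(est, 6))  # converges to the true (3.0, -2.0, 0.1)
```

With three targets the 2-D problem has as many equations as unknowns; the real system adds pitch observations, giving redundancy that a least-squares step absorbs the same way.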

  12. Accurately determining log and bark volumes of saw logs using high-resolution laser scan data

    Treesearch

    R. Edward Thomas; Neal D. Bennett

    2014-01-01

    Accurately determining the volume of logs and bark is crucial to estimating the total expected value recovery from a log. Knowing the correct size and volume of a log helps to determine which processing method, if any, should be used on a given log. However, applying volume estimation methods consistently can be difficult. Errors in log measurement and oddly shaped...

  13. Hematological reference values of healthy Malaysian population.

    PubMed

    Roshan, T M; Rosline, H; Ahmed, S A; Rapiaah, M; Wan Zaidah, A; Khattak, M N

    2009-10-01

Health and disease can only be distinguished by accurate and reliable reference values for a particular laboratory test. It is well established that hematology reference intervals vary considerably depending on demographic and preanalytical variables. There is evidence that the values provided by manufacturers are not appropriate for all populations. Moreover, the reference ranges provided by different laboratory manuals and books do not solve this problem either. We present here the normal reference ranges of the Malaysian population. These values were determined using Sysmex XE-2100 and ACL 9000 hematology and coagulation analyzers. The results of this study showed considerable differences between the reference values from manufacturers, western populations or laboratory manuals and those of the local population.

  14. An accurate redetermination of the 118Sn binding energy

    NASA Astrophysics Data System (ADS)

    Borzakov, S. B.; Chrien, R. E.; Faikow-Stanczyk, H.; Grigoriev, Yu. V.; Panteleev, Ts. Ts.; Pospisil, S.; Smotritsky, L. M.; Telezhnikov, S. A.

    2002-03-01

The energy of the well-known strong γ line from 198Au, the "gold standard", has been revised in the light of new adjustments in the fundamental constants, and the value 411.80176(12) keV was determined, which is 0.29 eV lower than the latest 1999 value. An energy calibration procedure for determining the neutron binding energy, Bn, from complicated (n, γ) spectra has been developed. A mathematically simple minimization function is used, consisting only of terms whose parameters are the coefficients of the energy calibration curve (a polynomial). A priori information about the relationships among the energies of different peaks in the spectrum is taken into account by a Monte-Carlo simulation. The procedure was used to obtain Bn for 118Sn. The γ-ray spectrum from thermal neutron radiative capture by 117Sn was measured on the IBR-2 pulsed reactor; γ-rays were detected by a 72 cm³ HPGe detector. For a better determination of Bn for 118Sn it was important to determine Bn for 64Cu. This value was obtained from two γ-spectra: one measured on the IBR-2 with the same detector, the other measured with a pair spectrometer at the Brookhaven High Flux Beam Reactor. From these two spectra, Bn for 64Cu was determined to be 7915.52(8) keV. This result differs substantially from the previous value of 7915.96(11) keV. The mean of the two most precise results for Bn of 118Sn was determined to be 9326.35(9) keV. Bn for 57Fe was determined to be 7646.08(9) keV.

  15. Accurate finite difference methods for time-harmonic wave propagation

    NASA Technical Reports Server (NTRS)

    Harari, Isaac; Turkel, Eli

    1994-01-01

Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
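The weighted-average idea can be illustrated in one dimension: a Numerov-type fourth-order scheme for the homogeneous Helmholtz equation, in which the zeroth-order term is a weighted average over the stencil rather than a pointwise value. Parameters below are chosen for illustration and are not taken from the paper.

```python
import numpy as np

# Fourth-order weighted-average (Numerov-type) scheme for u'' + k^2 u = 0
# on [0, 1] with u(0) = 0, u(1) = sin(k); exact solution u = sin(k x).
k, n = 20.0, 200           # wavenumber and number of cells (illustrative)
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)

# Interior stencil:  (u[j-1] - 2 u[j] + u[j+1]) / h^2
#                  + k^2 (u[j-1] + 10 u[j] + u[j+1]) / 12 = 0
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0] = 1.0                        # u(0) = 0
A[n, n] = 1.0
b[n] = np.sin(k)                     # u(1) = sin(k)
off = 1.0 / h**2 + k**2 / 12.0
diag = -2.0 / h**2 + 10.0 * k**2 / 12.0
for j in range(1, n):
    A[j, j - 1] = off
    A[j, j] = diag
    A[j, j + 1] = off

u = np.linalg.solve(A, b)
err = np.max(np.abs(u - np.sin(k * x)))
print(err)   # small fourth-order dispersion error at kh = 0.1
```

Replacing the weighted average (1, 10, 1)/12 by the pointwise value u[j] recovers the standard second-order scheme, whose dispersion error at the same resolution is orders of magnitude larger.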

  16. A time-accurate finite volume method valid at all flow velocities

    NASA Technical Reports Server (NTRS)

    Kim, S.-W.

    1993-01-01

A finite volume method to solve the Navier-Stokes equations at all flow velocities (e.g., incompressible, subsonic, transonic, supersonic and hypersonic flows) is presented. The numerical method is based on a finite volume method that incorporates a pressure-staggered mesh and an incremental pressure equation for the conservation of mass. A comparison of three generally accepted time-advancing schemes, i.e., the Simplified Marker-and-Cell (SMAC), Pressure-Implicit-Splitting of Operators (PISO), and Iterative-Time-Advancing (ITA) schemes, is made by solving a lid-driven polar cavity flow and self-sustained oscillatory flows over circular and square cylinders. The calculated results show that the ITA is numerically the most stable and yields the most accurate results. The SMAC is computationally the most efficient and is as stable as the ITA. The PISO is shown to be the most weakly convergent and to exhibit an undesirable strong dependence on the time-step size. The degraded numerical results obtained using the PISO are attributed to its second corrector step, which causes the numerical results to deviate further from a divergence-free velocity field. The accurate numerical results obtained using the ITA are attributed to its capability to resolve the nonlinearity of the Navier-Stokes equations. The present numerical method incorporating the ITA is used to solve an unsteady transitional flow over an oscillating airfoil and a chemically reacting flow of hydrogen in a vitiated supersonic airstream. The turbulence fields in these flow cases are described using multiple-time-scale turbulence equations. For the unsteady transitional flow over an oscillating airfoil, the fluid flow is described using ensemble-averaged Navier-Stokes equations defined on Lagrangian-Eulerian coordinates. It is shown that the numerical method successfully predicts the large dynamic stall vortex (DSV) and the trailing edge vortex (TEV) that are periodically generated by the oscillating airfoil.

  17. Preference-based measures to obtain health state utility values for use in economic evaluations with child-based populations: a review and UK-based focus group assessment of patient and parent choices.

    PubMed

    Wolstenholme, Jane L; Bargo, Danielle; Wang, Kay; Harnden, Anthony; Räisänen, Ulla; Abel, Lucy

    2018-03-21

No current guidance is available in the UK on the choice of preference-based measure (PBM) that should be used in obtaining health-related quality of life from children. The aim of this study is to review the current usage of PBMs for obtaining health state utility values in child and adolescent populations, and to obtain information on patient and parent-proxy respondent preferences in completing PBMs in the UK. A literature review was conducted to determine which instrument is most frequently used for child-based economic evaluations and whether child or proxy responses are used. Instruments were compared on dimensions, severity levels, elicitation and valuation methods, availability of value sets and validation studies, and the range of utility values generated. Additionally, a series of focus groups of parents and young people (11-20 years) were convened to determine patient and proxy preferences. Five PBMs suitable for child populations were identified, although only the Health Utilities Index 2 (HUI2) and Child Health Utility 9D (CHU-9D) have UK value sets. 45 papers used PBMs in this population, but many used non-child-specific PBMs. Most respondents were parent proxies, even in adolescent populations. Reported missing data ranged from 0.5 to 49.3%. The focus groups reported their experiences with the EQ-5D-Y and CHU-9D. Both the young persons' group and the parent/proxy group felt that the CHU-9D was more comprehensive but may be harder for a proxy to complete. Some younger children had difficulty understanding the CHU-9D questions, but the young persons' group nonetheless preferred responding directly. The use of PBMs in child populations is increasing, but many studies use PBMs that do not have appropriate value sets. Parent proxies are the most common respondents, but the focus group responses suggest it would be preferred, and may be more informative, for older children to self-report or for child-parent dyads to respond.

  18. Profitable capitation requires accurate costing.

    PubMed

    West, D A; Hicks, L L; Balas, E A; West, T D

    1996-01-01

In the name of costing accuracy, nurses are asked to track inventory use on a per-treatment basis, while more significant costs, such as general overhead and nursing salaries, are usually allocated to patients or treatments on an average-cost basis. Accurate treatment costing and financial viability require analysis of all resources actually consumed in treatment delivery, including nursing services and inventory. More precise costing information enables more profitable decisions, as is demonstrated by comparing the ratio-of-cost-to-treatment method (aggregate costing) with alternative activity-based costing (ABC) methods. Nurses must participate in this costing process to ensure that capitation bids are based upon accurate costs rather than simple averages.
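The contrast between aggregate and activity-based allocation can be shown with a toy example; all figures are invented, and nursing minutes serve as the (hypothetical) cost driver.

```python
# Aggregate (ratio-of-cost-to-treatment) costing vs activity-based
# costing (ABC) for two treatment types. All numbers are illustrative.
overhead = 90_000.0                      # overhead pool to allocate ($)
counts = {"A": 600, "B": 300}            # treatments delivered
nursing_min = {"A": 20, "B": 80}         # nursing minutes per treatment

# Aggregate costing: every treatment gets the same average share.
total = sum(counts.values())
aggregate = {k: overhead / total for k in counts}

# ABC: allocate in proportion to the nursing-time driver each
# treatment actually consumes.
driver_total = sum(counts[k] * nursing_min[k] for k in counts)
abc = {k: overhead * nursing_min[k] / driver_total for k in counts}

print(aggregate)   # {'A': 100.0, 'B': 100.0}
print(abc)         # {'A': 50.0, 'B': 200.0}
```

Under averaging both treatments appear to cost $100 of overhead, while ABC reveals that B consumes four times the resources of A; a capitation bid built on the averages would underprice B.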

  19. Implications of Weltanschauungen for Value Formulation by Guidance Personnel

    ERIC Educational Resources Information Center

    Weinstock, Henry R.; O'Dowd, Peter S.

    1970-01-01

Examines two divergent world-views (empiricism and rationalism) of human values, presenting definitions from many philosophers. Anticipates a possible confluence of beliefs on the nature of man which may offer guidance personnel more accurate tools for direction. (CJ)

  20. Radiometer for accurate (+ or - 1%) measurement of solar irradiance equal to 10,000 solar constants

    NASA Technical Reports Server (NTRS)

    Kendall, J. M., Sr.

    1981-01-01

The 10,000 solar constant radiometer was developed for the accurate (±1%) measurement of the irradiance produced in the image formed by a parabolic reflector or by a multiple-mirror solar installation. This radiometer is water cooled, weighs about 1 kg, and is 5 cm (2 in.) in diameter by 10 cm (4 in.) long. A sting is provided for mounting the radiometer in the solar installation. Capable of measuring irradiances as high as 20,000 solar constants, the instrument is self-calibrating. Its accuracy depends on the accurate determination of the cavity aperture, the absorptivity of the cavity, and accurate electrical measurements. The spectral response is flat over the entire spectrum from far UV to far IR. The radiometer comes within 99.7% of the final value of a measurement within 8 s. During a measurement of a 10,000 solar constant irradiance, the temperature rise of the cooling water is about 20°C. The radiometer has a perfect cosine response up to 60 deg off the radiometer axis.

  1. Improved patient size estimates for accurate dose calculations in abdomen computed tomography

    NASA Astrophysics Data System (ADS)

    Lee, Chang-Lae

    2017-07-01

The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict actual patient doses for different human body sizes because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
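One common attenuation-based size metric of this family is the water-equivalent diameter computed from an axial CT slice; the paper's own projection-based method differs in detail, so the sketch below (with an invented water-cylinder phantom) only illustrates the general idea of converting attenuation into an effective patient size.

```python
import numpy as np

# Water-equivalent diameter from a synthetic axial CT slice of a
# 300 mm water cylinder. Pixel spacing and threshold are assumptions.
pixel_mm = 1.0
r = 150.0                                           # cylinder radius, mm
yy, xx = np.mgrid[-256:256, -256:256] * pixel_mm
hu = np.where(xx**2 + yy**2 <= r**2, 0.0, -1000.0)  # water vs air, in HU

body = hu > -400                                    # crude body mask
area_mm2 = body.sum() * pixel_mm**2
mean_hu = hu[body].mean()

# Water-equivalent area rescales the masked area by mean attenuation:
# pure water (0 HU) keeps the geometric area, denser tissue enlarges it.
aw = (mean_hu / 1000.0 + 1.0) * area_mm2
dw = 2.0 * np.sqrt(aw / np.pi)
print(round(dw, 1))   # close to the true 300 mm diameter
```

For the pure-water phantom the metric reproduces the geometric diameter; for a real patient the HU-weighted area captures how strongly the anatomy actually attenuates the beam, which is what dose estimation needs.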

  2. Improved image quality in pinhole SPECT by accurate modeling of the point spread function in low magnification systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pino, Francisco; Roé, Nuria; Aguiar, Pablo, E-mail: pablo.aguiar.fernandez@sergas.es

    2015-02-15

Purpose: Single photon emission computed tomography (SPECT) has become an important noninvasive imaging technique in small-animal research. Due to the high resolution required in small-animal SPECT systems, the spatially variant system response needs to be included in the reconstruction algorithm. Accurate modeling of the system response should result in a major improvement in the quality of reconstructed images. The aim of this study was to quantitatively assess the impact that an accurate modeling of spatially variant collimator/detector response has on image-quality parameters, using a low magnification SPECT system equipped with a pinhole collimator and a small gamma camera. Methods: Three methods were used to model the point spread function (PSF). For the first, only the geometrical pinhole aperture was included in the PSF. For the second, the septal penetration through the pinhole collimator was added. In the third method, the measured intrinsic detector response was incorporated. Tomographic spatial resolution was evaluated and contrast, recovery coefficients, contrast-to-noise ratio, and noise were quantified using a custom-built NEMA NU 4-2008 image-quality phantom. Results: A high correlation was found between the experimental data corresponding to intrinsic detector response and the fitted values obtained by means of an asymmetric Gaussian distribution. For all PSF models, resolution improved as the distance from the point source to the center of the field of view increased and when the acquisition radius diminished. An improvement of resolution was observed after a minimum of five iterations when the PSF modeling included more corrections. Contrast, recovery coefficients, and contrast-to-noise ratio were better for the same level of noise in the image when more accurate models were included. Ring-type artifacts were observed when the number of iterations exceeded 12. Conclusions: Accurate modeling of the PSF improves resolution, contrast, and

  3. Meaning That Social Studies Teacher Candidates Give to Value Concept and Their Value Rankings

    ERIC Educational Resources Information Center

    Aysegül, Tural

    2018-01-01

This work examines the role that values education plays in shaping people's personal and social lives. The research aims to put forward the meaning that social studies teacher candidates give to the value concept and their value rankings. To achieve this aim, the opinions of 12 social studies teacher candidates were obtained. During the data collection…

  4. Fuzzy Reasoning to More Accurately Determine Void Areas on Optical Micrographs of Composite Structures

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Tate, Lanetra C.; Wright, M. Clara; Caraccio, Anne

    2013-01-01

Accomplishing the best-performing composite matrix (resin) requires that not only the processing method but also the cure cycle generate low-void-content structures. If voids are present, the performance of the composite matrix is significantly reduced, usually seen as significant reductions in matrix-dominated properties such as compression and shear strength. Voids in composite materials are areas absent of the composite components: matrix and fibers. Accurately characterizing and estimating voids is critical for high-performance composite structures. One widely used method of performing void analysis on a composite structure sample is to acquire optical micrographs or Scanning Electron Microscope (SEM) images of lateral sides of the sample and retrieve the void areas within the micrographs/images using an image analysis technique. Segmentation for the retrieval and subsequent computation of void areas is challenging because the gray-scale values of the void areas are close to those of the matrix, which otherwise forces the segmentation to be performed manually from the histogram of the micrographs/images. An algorithm developed by NASA and based on Fuzzy Reasoning (FR) proved able to overcome the difficulty of differentiating void and matrix image areas with similar gray-scale values, leading not only to a more accurate estimation of void areas on composite matrix micrographs but also to a faster void analysis process, as the algorithm is fully autonomous.
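The general idea of replacing a hard gray-level threshold with soft memberships can be sketched with a standard fuzzy c-means clustering of pixel intensities. This is only a generic fuzzy-segmentation sketch on synthetic data, not NASA's actual Fuzzy Reasoning algorithm, and the intensity distributions are invented.

```python
import numpy as np

# Synthetic gray levels for overlapping "void" and "matrix" populations.
rng = np.random.default_rng(0)
pixels = np.concatenate([rng.normal(80.0, 6.0, 800),     # void-like
                         rng.normal(120.0, 6.0, 1600)])  # matrix-like

m = 2.0                                  # fuzzifier
centers = np.array([70.0, 130.0])        # rough initial cluster centers
for _ in range(50):
    d = np.abs(pixels[:, None] - centers[None, :]) + 1e-12
    w = d ** (-2.0 / (m - 1.0))
    u = w / w.sum(axis=1, keepdims=True)          # fuzzy memberships
    centers = (u ** m).T @ pixels / (u ** m).sum(axis=0)

# Soft memberships defer the void/matrix decision until the end,
# instead of committing to a single histogram threshold up front.
void_fraction = float(np.mean(u[:, 0] > 0.5))
print(np.round(centers, 1), round(void_fraction, 3))
```

With overlapping distributions, the memberships quantify how ambiguous each pixel is, which is the property that makes a fuzzy approach more robust than a single manual threshold.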

  5. Accurate Determination of the Frequency Response Function of Submerged and Confined Structures by Using PZT-Patches †

    PubMed Central

    Presas, Alexandre; Valentin, David; Egusquiza, Eduard; Valero, Carme; Egusquiza, Mònica; Bossio, Matias

    2017-01-01

Accurately determining the dynamic response of a structure is of interest in many engineering applications. In particular, it is of paramount importance to determine the Frequency Response Function (FRF) for structures subjected to dynamic loads in order to avoid resonance and fatigue problems that can drastically reduce their useful life. One challenging case is the experimental determination of the FRF of submerged and confined structures, such as hydraulic turbines, which are greatly affected by dynamic problems, as reported in many cases in the past. The use of classical calibrated exciters such as instrumented hammers or shakers to determine the FRF of such structures can be very complex due to the confinement of the structure and because their use can disturb the boundary conditions, affecting the experimental results. For such cases, Piezoelectric Patches (PZTs), which are very light, thin and small, could be a very good option. Nevertheless, the main drawback of these exciters is that their calibration as dynamic force transducers (the voltage/force relationship) has not been successfully obtained in the past. Therefore, in this paper, a method to accurately determine the FRF of submerged and confined structures by using PZTs is developed and validated. The method consists of experimentally determining some characteristic parameters that define the FRF, with an uncalibrated PZT exciting the structure. These experimentally determined parameters are then introduced into a validated numerical model of the tested structure. In this way, the FRF of the structure can be estimated with good accuracy. With respect to previous studies, where only the natural frequencies and mode shapes were considered, this paper discusses and experimentally establishes the best excitation characteristics for also obtaining the damping ratios, and proposes a procedure to fully determine the FRF.
The method proposed here has been validated for the structure vibrating

  6. Numerical Solutions of the Mean-Value Theorem: New Methods for Downward Continuation of Potential Fields

    NASA Astrophysics Data System (ADS)

    Zhang, Chong; Lü, Qingtian; Yan, Jiayong; Qi, Guang

    2018-04-01

Downward continuation can enhance small-scale sources and improve resolution. Nevertheless, the common methods have disadvantages in obtaining optimal results because of divergence and instability. We derive the mean-value theorem for potential fields, which can serve as the theoretical basis of some data processing and interpretation. Based on numerical solutions of the mean-value theorem, we present convergent and stable downward continuation methods that use the first-order vertical derivatives and their upward continuation. By applying one of our methods to both synthetic and real cases, we show that it is stable, convergent and accurate. Meanwhile, compared with the fast Fourier transform Taylor series method and the integrated second vertical derivative Taylor series method, our method has very little boundary effect and remains stable in the presence of noise. We find that the features of fading anomalies emerge properly in our downward continuation with respect to the original fields at lower heights.
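The instability that motivates such methods is easy to demonstrate on a 1-D profile: upward continuation multiplies each wavenumber component by exp(-|k|h) and is smoothing, while naive spectral downward continuation divides by that factor and amplifies short-wavelength noise exponentially. The profile and noise level below are invented for illustration.

```python
import numpy as np

# Synthetic potential-field profile: a smooth anomaly on a 1-D line.
n, dx, h = 512, 1.0, 10.0          # samples, spacing, continuation height
x = (np.arange(n) - n // 2) * dx
field = np.exp(-(x / 30.0) ** 2)
k = np.abs(np.fft.fftfreq(n, dx)) * 2.0 * np.pi

# Upward continuation: multiply spectrum by exp(-|k| h)  (stable).
up = np.fft.ifft(np.fft.fft(field) * np.exp(-k * h)).real

# Naive downward continuation of the upward-continued data, with a tiny
# amount of measurement noise added: divide by exp(-|k| h)  (unstable).
noisy = up + 1e-6 * np.random.default_rng(1).standard_normal(n)
down = np.fft.ifft(np.fft.fft(noisy) * np.exp(k * h)).real

print(np.max(np.abs(up - field)))    # bounded smoothing error
print(np.max(np.abs(down - field)))  # noise amplified by exp(k_max * h)
```

Even noise at the 10⁻⁶ level destroys the naive reconstruction, which is why stable formulations (iterative schemes, or derivative-based ones like the method above) are needed in practice.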

  7. On the value of the phenotypes in the genomic era.

    PubMed

    Gonzalez-Recio, O; Coffey, M P; Pryce, J E

    2014-12-01

    Genetic improvement programs around the world rely on the collection of accurate phenotypic data. These phenotypes have an inherent value that can be estimated as the contribution of an additional record to genetic gain. Here, the contribution of phenotypes to genetic gain was calculated using traditional progeny testing (PT) and 2 genomic selection (GS) strategies that, for simplicity, included either males or females in the reference population. A procedure to estimate the theoretical economic contribution of a phenotype to a breeding program is described for both GS and PT breeding programs through the increment in genetic gain per unit of increase in estimated breeding value reliability obtained when an additional phenotypic record is added. The main factors affecting the value of a phenotype were the economic value of the trait, the number of phenotypic records already available for the trait, and its heritability. Furthermore, the value of a phenotype was affected by several other factors, including the cost of establishing the breeding program and the cost of phenotyping and genotyping. The cost of achieving a reliability of 0.60 was assessed for different reference populations for GS. Genomic reference populations of more sires with small progeny group sizes (e.g., 20 equivalent daughters) had a lower cost than those reference populations with either large progeny group sizes for fewer genotyped sires, or female reference populations, unless the heritability was large and the cost of phenotyping exceeded a few hundred dollars; then, female reference populations were preferable from an economic perspective. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  8. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    PubMed Central

    Moore, Michael J; Dhingra, Amit; Soltis, Pamela S; Shaw, Regina; Farmerie, William G; Folta, Kevin M; Soltis, Douglas E

    2006-01-01

    Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy observed in the GS 20 plastid

  9. The Wagner-Nelson method can generate an accurate gastric emptying flow curve from CO2 data obtained by a 13C-labeled substrate breath test.

    PubMed

    Sanaka, Masaki; Yamamoto, Takatsugu; Ishii, Tarou; Kuyama, Yasushi

    2004-01-01

In pharmacokinetics, the Wagner-Nelson (W-N) method can accurately estimate the rate of drug absorption from its urinary elimination rate. A stable isotope (13C) breath test attempts to estimate the rate of absorption of 13C, as an index of gastric emptying rate, from the rate of pulmonary elimination of 13CO2. The time-gastric emptying curve determined by the breath test is quite different from that determined by scintigraphy or ultrasonography. In this report, we show that the W-N method can adjust for the difference. The W-N equation for estimating gastric emptying from breath data is: fractional cumulative amount of gastric contents emptied by time t = Abreath(t)/Abreath(∞) + (1/0.65) · d[Abreath(t)/Abreath(∞)]/dt, where Abreath(t) is the cumulative recovery of 13CO2 in breath by time t and Abreath(∞) is the ultimate cumulative 13CO2 recovery. The emptying flow curve generated by ultrasonography was compared with that generated by the W-N-adjusted breath test in 6 volunteers. The emptying curves obtained by the W-N method were almost identical to those obtained by ultrasound. The W-N method can generate an accurate emptying flow curve from 13CO2 data, and it can adjust for the difference between ultrasonography and the breath test. Copyright 2004 S. Karger AG, Basel

  10. Valuing hydrological forecasts for a pumped storage assisted hydro facility

    NASA Astrophysics Data System (ADS)

    Zhao, Guangzhi; Davison, Matt

    2009-07-01

Summary: This paper estimates the value of a perfectly accurate short-term hydrological forecast to the operator of a hydroelectricity generating facility that can sell its power at time-varying but predictable prices. The expected value of a less accurate forecast will be smaller. We assume a simple random model for water inflows, and that the costs of operating the facility, including water charges, are the same whether or not its operator has inflow forecasts. Thus, the improvement in value from better hydrological prediction results from the increased ability of the forecast-using facility to sell its power at high prices. The value of the forecast is therefore the difference between the sales of a facility operated over some time horizon with a perfect forecast and the sales of a similar facility operated over the same time horizon with similar water inflows which, though governed by the same random model, cannot be forecast. This paper shows that the value of the forecast is an increasing function of the inflow process variance and quantifies how much the value of this perfect forecast increases with the variance of the water inflow process. Because the lifetime of hydroelectric facilities is long, the small increase observed here can lead to an increase in the profitability of hydropower investments.
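The qualitative conclusion can be reproduced with a toy Monte Carlo model: hourly prices are known, daily inflow is random, a perfect forecast lets the operator dispatch all water into the highest-price hours, while an unforecast operator commits a schedule for the mean inflow (shortfalls cut the cheapest scheduled hours, surplus spills). Prices, capacity and the inflow model are all invented; the paper's model differs in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.array([30.0, 35.0, 40.0, 55.0, 70.0, 60.0, 45.0, 35.0])
c = 1.0                                  # turbine capacity per hour (MWh)

def best_revenue(w):
    """Optimal dispatch of w units of water across the known prices."""
    # Fill the highest-price hours first, up to capacity c per hour.
    hours = np.minimum(c, np.maximum(0.0, w - np.arange(len(prices)) * c))
    return float(np.sort(prices)[::-1] @ hours)

values = []
for sd in (0.0, 0.5, 1.0, 1.5):          # inflow standard deviations
    inflow = np.clip(rng.normal(4.0, sd, 20000), 0.0, len(prices) * c)
    with_fc = np.mean([best_revenue(w) for w in inflow])
    without = np.mean([best_revenue(min(w, 4.0)) for w in inflow])
    values.append(with_fc - without)

print(np.round(values, 2))   # forecast value rises with inflow variance
```

With zero variance the forecast is worthless, and its value grows with the inflow spread, matching the paper's result that forecast value is an increasing function of inflow-process variance.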

  11. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite volume approach in developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two dimensional extension is proposed for the Euler equation of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two dimensional primitive function is employed in order to extract high-order pointwise values on cell interfaces. These points are appropriately chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, at each point having calculated a flux contribution in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) that is required for this pointwise flux computation is achieved using Roe's approximate Riemann solver. Issues to be considered in this two dimensional extension include the implementation of boundary conditions and application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.
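The reconstruction step the abstract describes — recovering high-order pointwise interface values from cell averages with an adaptively chosen stencil — can be sketched in 1-D with a second-order ENO variant (the paper works at higher order and in 2-D, so this is only the core idea on an invented smooth profile).

```python
import numpy as np

# Cell averages of sin(x) on a uniform periodic grid over [0, 2*pi].
n = 100
h = 2.0 * np.pi / n
xc = (np.arange(n) + 0.5) * h                          # cell centres
ubar = (np.cos(xc - h / 2) - np.cos(xc + h / 2)) / h   # exact averages

# ENO stencil selection: for each cell, take whichever one-sided
# difference is smaller in magnitude (the "smoother" stencil), so the
# reconstruction never differences across a discontinuity.
left = ubar - np.roll(ubar, 1)
right = np.roll(ubar, -1) - ubar
slope = np.where(np.abs(left) < np.abs(right), left, right)

# Pointwise value at each cell's right interface.
u_face = ubar + 0.5 * slope

err = np.max(np.abs(u_face - np.sin(xc + h / 2)))
print(err)   # second-order small on this smooth profile
```

In smooth regions either stencil is acceptable and the reconstruction is high-order; at a jump the selection automatically falls back to the side that does not cross it, which is what keeps ENO schemes essentially non-oscillatory.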

  12. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    PubMed

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules, 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using more expensive W1-F12 and W2-F12 methods on amino acids and G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b) in concurrence with previous CBH studies, proves that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.

  13. Multidimensional gas chromatography in combination with accurate mass, tandem mass spectrometry, and element-specific detection for identification of sulfur compounds in tobacco smoke.

    PubMed

    Ochiai, Nobuo; Mitsui, Kazuhisa; Sasamoto, Kikuo; Yoshimura, Yuta; David, Frank; Sandra, Pat

    2014-09-05

    A method is developed for identification of sulfur compounds in tobacco smoke extract. The method is based on large volume injection (LVI) of 10 μL of tobacco smoke extract followed by selectable one-dimensional ((1)D) or two-dimensional ((2)D) gas chromatography (GC) coupled to a hybrid quadrupole time-of-flight mass spectrometer (Q-TOF-MS) using electron ionization (EI) and positive chemical ionization (PCI), with parallel sulfur chemiluminescence detection (SCD). In order to identify each individual sulfur compound, sequential heart-cuts of 28 sulfur fractions from (1)D GC to (2)D GC were performed with the three MS detection modes (SCD/EI-TOF-MS, SCD/PCI-TOF-MS, and SCD/PCI-Q-TOF-MS). Thirty sulfur compounds were positively identified by MS library search, linear retention indices (LRI), molecular mass determination using PCI accurate mass spectra, formula calculation using EI and PCI accurate mass spectra, and structure elucidation using collision activated dissociation (CAD) of the protonated molecule. Additionally, 11 molecular formulas were obtained for unknown sulfur compounds. The determined values of the identified and unknown sulfur compounds were in the range of 10-740 ng/mg total particulate matter (TPM) (RSD: 1.2-12%, n=3). Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.

  14. An Accurate ab initio Quartic Force Field and Vibrational Frequencies for CH4 and Isotopomers

    NASA Technical Reports Server (NTRS)

    Lee, Timothy J.; Martin, Jan M. L.; Taylor, Peter R.

    1995-01-01

    A very accurate ab initio quartic force field for CH4 and its isotopomers is presented. The quartic force field was determined with the singles and doubles coupled-cluster procedure that includes a quasiperturbative estimate of the effects of connected triple excitations, CCSD(T), using the correlation consistent polarized valence triple zeta, cc-pVTZ, basis set. Improved quadratic force constants were evaluated with the correlation consistent polarized valence quadruple zeta, cc-pVQZ, basis set. Fundamental vibrational frequencies are determined using second-order perturbation theory anharmonic analyses. All fundamentals of CH4 and isotopomers for which accurate experimental values exist, and for which there is not a large Fermi resonance, are predicted to within +/- 6 cm(exp -1). It is thus concluded that our predictions for the harmonic frequencies and the anharmonic constants are the most accurate estimates available. It is also shown that using cubic and quartic force constants determined with the correlation consistent polarized double zeta, cc-pVDZ, basis set in conjunction with the cc-pVQZ quadratic force constants and equilibrium geometry leads to accurate predictions for the fundamental vibrational frequencies of methane, suggesting that this approach may be a viable alternative for larger molecules. Using CCSD(T), core correlation is found to reduce the CH4 r(e) by 0.0015 A. Our best estimate for r(e) is 1.0862 +/- 0.0005 A.

  15. Accurate mode characterization of two-mode optical fibers by in-fiber acousto-optics.

    PubMed

    Alcusa-Sáez, E; Díez, A; Andrés, M V

    2016-03-07

    Acousto-optic interaction in optical fibers is exploited for the accurate and broadband characterization of two-mode optical fibers. Coupling between the LP01 and LP1m modes is produced over a broadband wavelength range. Differences in effective indices, group indices, and chromatic dispersions between the guided modes are obtained from experimental measurements. Additionally, we show that the technique is suitable for investigating the fine mode structure of LP modes, and some other intriguing features related to the modes' cut-offs.

  16. Simple and accurate sum rules for highly relativistic systems

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2005-03-01

    In this paper, I consider the Bethe and Thomas-Reiche-Kuhn sum rules, which together form the foundation of Bethe's theory of energy loss from fast charged particles to matter. For nonrelativistic target systems, the use of closure leads directly to simple expressions for these quantities. In the case of relativistic systems, on the other hand, the calculation of sum rules is fraught with difficulties. Various perturbative approaches have been used over the years to obtain relativistic corrections, but these methods fail badly when the system in question is very strongly bound. Here, I present an approach that leads to relatively simple expressions yielding accurate sums, even for highly relativistic many-electron systems. I also offer an explanation for the difference between relativistic and nonrelativistic sum rules in terms of the Zitterbewegung of the electrons.

  17. Normal Values of Tissue-Muscle Perfusion Indexes of Lower Limbs Obtained with a Scintigraphic Method.

    PubMed

    Manevska, Nevena; Stojanoski, Sinisa; Pop Gjorceva, Daniela; Todorovska, Lidija; Miladinova, Daniela; Zafirova, Beti

    2017-09-01

    Introduction: Muscle perfusion is a physiologic process that can undergo quantitative assessment and thus define the range of normal values of perfusion indexes and perfusion reserve. The investigation of the microcirculation has a crucial role in determining muscle perfusion. Materials and method: The study included 30 examinees, 24-74 years of age, without a history of confirmed peripheral artery disease, all of whom had normal findings on Doppler ultrasonography and pedo-brachial index (PBI) of the lower extremities. 99mTc-MIBI tissue muscle perfusion scintigraphy of the lower limbs evaluates tissue perfusion at rest ("rest study") and after workload ("stress study") through quantitative parameters: the inter-extremity indexes left thigh/right thigh (LT/RT) and left calf/right calf (LC/RC) for both studies, and the perfusion reserve (PR) for both thighs and calves. Results: In our investigated group we assessed the normal values of these quantitative perfusion indexes. LT/RT ranged from 0.91 to 1.05 in the rest study and from 0.92 to 1.04 in the stress study; LC/RC ranged from 0.93 to 1.07 at rest and from 0.93 to 1.09 under stress. Examinees older than 50 years had an insignificantly lower perfusion reserve than those younger than 50: LC (p=0.98) and RC (p=0.6). Conclusion: This non-invasive scintigraphic method makes it possible, in individuals without peripheral artery disease, to determine the range of normal values of muscle perfusion at rest and under stress and to implement them clinically in the evaluation of patients with peripheral artery disease, differentiating those with normal from those with impaired lower-limb circulation.

  18. Accurate HLA type inference using a weighted similarity graph.

    PubMed

    Xie, Minzhu; Li, Jing; Jiang, Tao

    2010-12-14

    The human leukocyte antigen system (HLA) contains many highly variable genes. HLA genes play an important role in the human immune system, and HLA gene matching is crucial for the success of human organ transplantations. Numerous studies have demonstrated that variation in HLA genes is associated with many autoimmune, inflammatory and infectious diseases. However, typing HLA genes by serology or PCR is time-consuming and expensive, which limits large-scale studies involving HLA genes. Since it is much easier and cheaper to obtain single nucleotide polymorphism (SNP) genotype data, accurate computational algorithms to infer HLA gene types from SNP genotype data are needed. To infer HLA types from SNP genotypes, the first step is to infer SNP haplotypes from genotypes. However, for the same SNP genotype data set, the haplotype configurations inferred by different methods are usually inconsistent, and it is often difficult to decide which one is true. In this paper, we design an accurate HLA gene type inference algorithm by utilizing SNP genotype data from pedigrees, known HLA gene types of some individuals and the relationship between inferred SNP haplotypes and HLA gene types. Given a set of haplotypes inferred from the genotypes of a population consisting of many pedigrees, the algorithm first constructs a weighted similarity graph based on a new haplotype similarity measure and derives constraint edges from known HLA gene types. Based on the principle that different HLA gene alleles should have different background haplotypes, the algorithm searches for an optimal labeling of all the haplotypes with unknown HLA gene types such that the total weight among the same HLA gene types is maximized. To deal with ambiguous haplotype solutions, we use a genetic algorithm to select haplotype configurations that tend to maximize the same optimization criterion. Our experiments on a previously typed subset of the HapMap data show that the algorithm is highly accurate
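    The labeling step — assigning types to unlabeled nodes so that the similarity weight among same-typed nodes is maximized — can be caricatured with a greedy local search in place of the paper's genetic algorithm. The similarity matrix and class names below are invented for illustration.

```python
def propagate_labels(W, labels, n_rounds=20):
    """Greedily assign labels to nodes marked None so as to increase the
    total similarity weight between same-labeled nodes. W is a symmetric
    similarity matrix (list of lists); known labels stay fixed."""
    labels = list(labels)
    fixed = [l is not None for l in labels]
    classes = sorted({l for l in labels if l is not None})
    for _ in range(n_rounds):
        changed = False
        for i in range(len(labels)):
            if fixed[i]:
                continue
            # score each class by summed similarity to nodes carrying it
            scores = {c: sum(W[i][j] for j, l in enumerate(labels) if l == c)
                      for c in classes}
            best = max(scores, key=scores.get)
            if best != labels[i] and scores[best] > 0:
                labels[i] = best
                changed = True
        if not changed:
            break
    return labels
```

    On a two-cluster toy graph with one known label per cluster, the unknown nodes inherit the label of the cluster they are connected to.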

  19. Comparison of the biometric values obtained by two different A-mode ultrasound devices (Eye Cubed vs. PalmScan): A Transversal, descriptive, and comparative study

    PubMed Central

    2010-01-01

    Background To assess the reliability of the measurements obtained with the PalmScan™, when compared with another standardized A-mode ultrasound device, and assess the consistency and correlation between the two methods. Methods Transversal, descriptive, and comparative study. We recorded the axial length (AL), anterior chamber depth (ACD) and lens thickness (LT) obtained with two A-mode ultrasounds (PalmScan™ A2000 and Eye Cubed™) using an immersion technique. We compared the measurements with a two-sample t-test. Agreement between the two devices was assessed with Bland-Altman plots and 95% limits of agreement. Results 70 eyes of 70 patients were enrolled in this study. The measurements with the Eye Cubed™ of AL and ACD were shorter than the measurements taken by the PalmScan™. The differences were not statistically significant regarding AL (p < 0.4) but significant regarding ACD (p < 0.001). The highest agreement between the two devices was obtained during LT measurement. The PalmScan™ measurements were shorter, but not statistically significantly so (p < 0.2). Conclusions The values of AL and LT, obtained with both devices, are not identical but are within the limits of agreement. The agreement is not affected by the magnitude of the ocular dimensions (but only within the range of 20 mm to 27 mm of AL and 3.5 mm to 5.7 mm of LT). A correction of about 0.5 D could be considered if an intraocular lens is being calculated. However, due to the large variability of the results, the authors recommend discretion in using this conversion factor, and to adjust the power of the intraocular lenses based upon the personal experience of the surgeon. PMID:20334670

  20. Comparison of the biometric values obtained by two different A-mode ultrasound devices (Eye Cubed vs. PalmScan): a transversal, descriptive, and comparative study.

    PubMed

    Velez-Montoya, Raul; Shusterman, Eugene Mark; López-Miranda, Miriam Jessica; Mayorquin-Ruiz, Mariana; Salcedo-Villanueva, Guillermo; Quiroz-Mercado, Hugo; Morales-Cantón, Virgilio

    2010-03-24

    To assess the reliability of the measurements obtained with the PalmScan, when compared with another standardized A-mode ultrasound device, and assess the consistency and correlation between the two methods. Transversal, descriptive, and comparative study. We recorded the axial length (AL), anterior chamber depth (ACD) and lens thickness (LT) obtained with two A-mode ultrasounds (PalmScan A2000 and Eye Cubed) using an immersion technique. We compared the measurements with a two-sample t-test. Agreement between the two devices was assessed with Bland-Altman plots and 95% limits of agreement. 70 eyes of 70 patients were enrolled in this study. The measurements with the Eye Cubed of AL and ACD were shorter than the measurements taken by the PalmScan. The differences were not statistically significant regarding AL (p < 0.4) but significant regarding ACD (p < 0.001). The highest agreement between the two devices was obtained during LT measurement. The PalmScan measurements were shorter, but not statistically significantly so (p < 0.2). The values of AL and LT, obtained with both devices, are not identical but are within the limits of agreement. The agreement is not affected by the magnitude of the ocular dimensions (but only within the range of 20 mm to 27 mm of AL and 3.5 mm to 5.7 mm of LT). A correction of about 0.5 D could be considered if an intraocular lens is being calculated. However, due to the large variability of the results, the authors recommend discretion in using this conversion factor, and to adjust the power of the intraocular lenses based upon the personal experience of the surgeon.
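    The agreement analysis used in this study can be reproduced in a few lines. This is the standard Bland-Altman bias and 95% limits-of-agreement computation; the paired device readings in the example are invented for illustration.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias (mean difference) and 95% limits of agreement between two
    sets of paired measurements a and b (e.g. two ultrasound devices)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)              # sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

    A constant offset between devices shows up as a nonzero bias with degenerate limits; real data spread the limits around the bias.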

  1. Comparison with IRI-PLUS and IRI-2012-TEC values of GPS-TEC values

    NASA Astrophysics Data System (ADS)

    Atıcı, Ramazan; Saǧır, Selçuk

    2016-07-01

    This study compares Total Electron Content (TEC) values obtained from the Ankara station (39.7 N; 32.76 E) of the Global Positioning System (GPS) network of Turkey with IRI-PLUS and IRI-2012 TEC values on the equinox and solstice days of the year 2009. For all days, it is observed that GPS-TEC values are greater than IRI-2012-TEC values, while IRI-PLUS-TEC values are very close to GPS-TEC values. When the GPS-TEC values for the two equinoxes are compared, TEC values on the September equinox are greater than those on the March equinox. Similarly, GPS-TEC values on the June solstice are greater than those on the December solstice. The relationship between GPS-TEC values and geomagnetic indexes is also investigated.

  2. Accurate estimation of sigma(exp 0) using AIRSAR data

    NASA Technical Reports Server (NTRS)

    Holecz, Francesco; Rignot, Eric

    1995-01-01

    During recent years signature analysis, classification, and modeling of Synthetic Aperture Radar (SAR) data as well as estimation of geophysical parameters from SAR data have received a great deal of interest. An important requirement for the quantitative use of SAR data is the accurate estimation of the backscattering coefficient sigma(exp 0). In terrain with relief variations, radar signals are distorted due to the projection of the scene topography into the slant range-Doppler plane. The effect of these variations is to change the physical size of the scattering area, leading to errors in the radar backscatter values and incidence angle. For this reason the local incidence angle, derived from sensor position and Digital Elevation Model (DEM) data, must always be considered. Especially in the airborne case, the antenna gain pattern can be an additional source of radiometric error, because the radar look angle is not known precisely as a result of the aircraft motions and the local surface topography. Consequently, radiometric distortions due to the antenna gain pattern must also be corrected for each resolution cell, by taking into account aircraft displacements (position and attitude) and the position of the backscatter element, defined by the DEM data. In this paper, a method to derive an accurate estimation of the backscattering coefficient using NASA/JPL AIRSAR data is presented. The results are evaluated in terms of geometric accuracy, radiometric variations of sigma(exp 0), and precision of the estimated forest biomass.
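    One piece of the correction chain — renormalizing the backscatter with the DEM-derived local incidence angle — can be sketched as follows. The beta-nought-to-sigma-nought conversion via sin(theta) is the standard textbook relation, not AIRSAR-specific code; antenna-pattern and aircraft-motion corrections are omitted here.

```python
import numpy as np

def sigma0_db(beta0_db, theta_loc_deg):
    """Convert radar brightness (beta nought, dB) to the backscattering
    coefficient sigma nought (dB) using the local incidence angle,
    e.g. derived from a DEM rather than the flat-ellipsoid angle."""
    beta0 = 10.0 ** (np.asarray(beta0_db, float) / 10.0)   # dB -> linear
    sigma0 = beta0 * np.sin(np.radians(theta_loc_deg))     # area projection
    return 10.0 * np.log10(sigma0)                         # linear -> dB
```

    At 90° incidence the two quantities coincide; shallower local angles shrink the effective scattering area and lower sigma nought.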

  3. Accurate method for preoperative estimation of the right graft volume in adult-to-adult living donor liver transplantation.

    PubMed

    Khalaf, H; Shoukri, M; Al-Kadhi, Y; Neimatallah, M; Al-Sebayel, M

    2007-06-01

    Accurate estimation of graft volume is crucial to avoid small-for-size syndrome following adult-to-adult living donor liver transplantation (AALDLT). Herein, we combined radiological and mathematical approaches for preoperative assessment of right graft volume. The right graft volume was preoperatively estimated in 31 live donors using two methods: first, the radiological graft volume (RGV) by computed tomography (CT) volumetry and second, a calculated graft volume (CGV) obtained by multiplying the standard liver volume by the percentage of the right graft volume (given by CT). Both methods were compared to the actual graft volume (AGV) measured during surgery. The graft recipient weight ratio (GRWR) was also calculated using all three volumes (RGV, CGV, and AGV). Lin's concordance correlation coefficient (CCC) was used to assess the agreement between AGV and both RGV and CGV. This was repeated using the GRWR measurements. The mean percentage of right graft volume was 62.4% (range, 55%-68%; SD +/- 3.27%). The CCC between AGV and RGV versus CGV was 0.38 and 0.66, respectively. The CCC between GRWR using AGV and RGV versus CGV was 0.63 and 0.88, respectively (P < .05). According to the Landis and Koch benchmark, the CGV correlated better with AGV when compared to RGV. The better correlation became even more apparent when applied to GRWR. In our experience, CGV showed a better correlation with AGV compared with the RGV. Using CGV in conjunction with RGV may be of value for a more accurate estimation of right graft volume for AALDLT.
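    Lin's concordance correlation coefficient, used above to compare the volume estimates, can be computed directly in its population-moment form; the sample values in the example are invented.

```python
import numpy as np

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two raters or
    methods: penalizes both poor correlation and location/scale shifts."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()            # population variances
    sxy = ((x - mx) * (y - my)).mean()   # population covariance
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)
```

    Unlike Pearson's r, the CCC drops below 1 for a constant offset between methods even when the correlation is perfect.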

  4. 77 FR 3800 - Accurate NDE & Inspection, LLC; Confirmatory Order

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-01-25

    ... In the Matter of Accurate NDE & Docket: 150-00017, General Inspection, LLC Broussard, Louisiana... an attempt to resolve issues associated with this matter. In response, on August 9, 2011, Accurate NDE requested ADR to resolve this matter with the NRC. On September 28, 2011, the NRC and Accurate NDE...

  5. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation.

    PubMed

    Saatchi, Mahdi; McClure, Mathew C; McKay, Stephanie D; Rolf, Megan M; Kim, JaeWoo; Decker, Jared E; Taxis, Tasia M; Chapple, Richard H; Ramey, Holly R; Northcutt, Sally L; Bauck, Stewart; Woodward, Brent; Dekkers, Jack C M; Fernando, Rohan L; Schnabel, Robert D; Garrick, Dorian J; Taylor, Jeremy F

    2011-11-28

    Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. These results suggest that genomic estimates of genetic merit can be produced in beef cattle at a young age but
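    The grouping step can be sketched with a plain k-means run on the rows of a pedigree relationship matrix, so that closely related animals land in the same cross-validation fold. The initialization and the toy two-family matrix below are simplified illustrations, not the study's pipeline.

```python
import numpy as np

def kmeans_groups(A, k, n_iter=100):
    """Cluster animals into k cross-validation groups by k-means on the
    rows of a relationship matrix A (row i = animal i's relationships).
    Simple init with the first k rows; k-means++ would be more robust."""
    X = np.asarray(A, float)
    centers = X[:k].copy()
    lab = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        # squared Euclidean distance of every row to every center
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        lab = d.argmin(axis=1)
        new = np.array([X[lab == j].mean(axis=0) if np.any(lab == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return lab
```

    On a block relationship matrix with two half-sib families, the two folds recover the families, which is exactly the "high within-group, low between-group relationship" structure the study aimed for.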

  6. Accuracies of genomic breeding values in American Angus beef cattle using K-means clustering for cross-validation

    PubMed Central

    2011-01-01

    Background Genomic selection is a recently developed technology that is beginning to revolutionize animal breeding. The objective of this study was to estimate marker effects to derive prediction equations for direct genomic values for 16 routinely recorded traits of American Angus beef cattle and quantify corresponding accuracies of prediction. Methods Deregressed estimated breeding values were used as observations in a weighted analysis to derive direct genomic values for 3570 sires genotyped using the Illumina BovineSNP50 BeadChip. These bulls were clustered into five groups using K-means clustering on pedigree estimates of additive genetic relationships between animals, with the aim of increasing within-group and decreasing between-group relationships. All five combinations of four groups were used for model training, with cross-validation performed in the group not used in training. Bivariate animal models were used for each trait to estimate the genetic correlation between deregressed estimated breeding values and direct genomic values. Results Accuracies of direct genomic values ranged from 0.22 to 0.69 for the studied traits, with an average of 0.44. Predictions were more accurate when animals within the validation group were more closely related to animals in the training set. When training and validation sets were formed by random allocation, the accuracies of direct genomic values ranged from 0.38 to 0.85, with an average of 0.65, reflecting the greater relationship between animals in training and validation. The accuracies of direct genomic values obtained from training on older animals and validating in younger animals were intermediate to the accuracies obtained from K-means clustering and random clustering for most traits. The genetic correlation between deregressed estimated breeding values and direct genomic values ranged from 0.15 to 0.80 for the traits studied. Conclusions These results suggest that genomic estimates of genetic merit can be

  7. Study on the initial value for the exterior orientation of the mobile version

    NASA Astrophysics Data System (ADS)

    Yu, Zhi-jing; Li, Shi-liang

    2011-10-01

    The single mobile vision coordinate measurement system uses a single camera body and a notebook computer at the measurement site to obtain three-dimensional coordinates. Obtaining accurate approximate values of the exterior orientation for the follow-up calculation is very important in the measurement process. This is a typical space resection problem, and studies on this topic have been widely conducted. Single-image space resection mainly follows two approaches: methods based on the co-angular constraint, represented by the camera co-angular constraint pose estimation algorithm and the cone angle law; and the direct linear transformation (DLT). A common drawback of both methods is that CCD lens distortion is not considered. When the initial value is calculated with the direct linear transformation method, relatively high demands are placed on the distribution and abundance of the control points: they cannot all lie in the same plane, and at least six non-coplanar control points are required. This limits the method's usefulness. The initial value directly influences the convergence and convergence speed of the calculation. This paper linearizes the nonlinear collinearity equations, including distortion terms, by Taylor series expansion in order to calculate the initial value of the camera exterior orientation. Finally, experiments show that the resulting initial value is better.

  8. Cloud Type Classification (cldtype) Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flynn, Donna; Shi, Yan; Lim, K-S

    The Cloud Type (cldtype) value-added product (VAP) provides an automated cloud type classification based on macrophysical quantities derived from vertically pointing lidar and radar. Up to 10 layers of clouds are classified into seven cloud types based on predetermined and site-specific thresholds of cloud top, base and thickness. Examples of thresholds for selected U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) Climate Research Facility sites are provided in Tables 1 and 2. Inputs for the cldtype VAP include lidar and radar cloud boundaries obtained from the Active Remotely Sensed Cloud Location (ARSCL) and Surface Meteorological Systems (MET) data. Rain rates from MET are used to determine when radar signal attenuation precludes accurate cloud detection. Temporal resolution and vertical resolution for cldtype are 1 minute and 30 m respectively and match the resolution of ARSCL. The cldtype classification is an initial step for further categorization of clouds. It was developed for use by the Shallow Cumulus VAP to identify potential periods of interest to the LASSO model and is intended to find clouds of interest for a variety of users.
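    The threshold logic for a single layer can be sketched as below. The kilometre thresholds and type names here are illustrative placeholders in the spirit of the VAP, not the predetermined site-specific ARM values from Tables 1 and 2.

```python
def classify_layer(base_km, top_km):
    """Toy single-layer cloud-type rule from cloud base and top heights.
    Thresholds are illustrative placeholders, not ARM site values."""
    thick = top_km - base_km
    if base_km < 3.0:                     # low-based clouds
        if thick < 1.0:
            return "shallow"
        return "congestus" if top_km < 7.0 else "deep convection"
    if base_km < 7.0:                     # mid-level clouds
        return "altocumulus" if thick < 2.0 else "altostratus"
    return "cirrus"                       # high clouds
```

    A real implementation would apply such a rule per detected layer (up to 10 in cldtype) and mask periods flagged by the MET rain-rate check.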

  9. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear Layer

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Berkman, Mert E.

    2001-01-01

    A detailed computational aeroacoustic analysis of a high-lift flow field is performed. Time-accurate Reynolds Averaged Navier-Stokes (RANS) computations simulate the free shear layer that originates from the slat cusp. Both unforced and forced cases are studied. Preliminary results show that the shear layer is a good amplifier of disturbances in the low to mid-frequency range. The Ffowcs-Williams and Hawkings equation is solved to determine the acoustic field using the unsteady flow data from the RANS calculations. The noise radiated from the excited shear layer has a spectral shape qualitatively similar to that obtained from measurements in a corresponding experimental study of the high-lift system.

  10. Accurate position estimation methods based on electrical impedance tomography measurements

    NASA Astrophysics Data System (ADS)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object’s position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations it is possible to use them in real-time applications without requiring high-performance computers.

  11. Accuracy of electron densities obtained via Koopmans-compliant hybrid functionals

    NASA Astrophysics Data System (ADS)

    Elmaslmane, A. R.; Wetherell, J.; Hodgson, M. J. P.; McKenna, K. P.; Godby, R. W.

    2018-04-01

    We evaluate the accuracy of electron densities and quasiparticle energy gaps given by hybrid functionals by directly comparing these to the exact quantities obtained from solving the many-electron Schrödinger equation. We determine the admixture of Hartree-Fock exchange to approximate exchange-correlation in our hybrid functional via one of several physically justified constraints, including the generalized Koopmans' theorem. We find that hybrid functionals yield strikingly accurate electron densities and gaps in both exchange-dominated and correlated systems. We also discuss the role of the screened Fock operator in the success of hybrid functionals.

  12. Inappropriate fiddling with statistical analyses to obtain a desirable p-value: tests to detect its presence in published literature.

    PubMed

    Gadbury, Gary L; Allison, David B

    2012-01-01

    Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a "near significant p-value" to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called "fiddling") in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000.
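    The flavor of such a detection method can be sketched with a sign test on p-values falling just below versus just above the 0.05 threshold. This caricature assumes the p-value density is locally flat near the threshold, so under no fiddling the two counts are symmetric; it is not the authors' actual procedure.

```python
from math import comb

def fiddling_test(pvals, t=0.05, w=0.005):
    """One-sided sign test for an excess of p-values just below t versus
    just above it. Under the no-fiddling null (locally flat density),
    the count below is Binomial(n, 1/2) given the total n in the window."""
    below = sum(t - w <= p < t for p in pvals)
    above = sum(t <= p < t + w for p in pvals)
    n = below + above
    # exact tail probability P(X >= below) for X ~ Binomial(n, 1/2)
    pval = sum(comb(n, k) for k in range(below, n + 1)) / 2 ** n
    return below, above, pval
```

    With 8 of 10 near-threshold p-values landing just below 0.05, the one-sided tail probability is 56/1024, i.e. suggestive but not conclusive, matching the intuition that power grows with the number of p-values examined.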

  13. Autism spectrum disorder etiology: Lay beliefs and the role of cultural values and social axioms.

    PubMed

    Qi, Xin; Zaroff, Charles M; Bernardo, Allan Bi

    2016-08-01

    Recent research examining the explanations given by the public (i.e. lay beliefs) for autism spectrum disorder often reveals a reasonably accurate understanding of the biogenetic basis of the disorder. However, lay beliefs often manifest aspects of culture, and much of this work has been conducted in western cultures. In this study, 215 undergraduate university students in Macau, a Special Administrative Region of China, completed self-report measures assessing two beliefs concerning autism spectrum disorder etiology: (1) a belief in parental factors and (2) a belief in genetic factors. Potential correlates of lay beliefs were sought in culture-specific values, and more universal social axioms. Participants were significantly more likely to endorse parenting, relative to genetic factors, as etiological. A perceived parental etiology was predicted by values of mind-body holism. Beliefs in a parental etiology were not predicted by values assessing collectivism, conformity to norms, a belief in a family's ability to obtain recognition through a child's achievement, or interpersonal harmony, nor by the social axioms measured (e.g. social cynicism, reward for application, social complexity, fate control, and religiosity). Beliefs in a genetic etiology were not predicted by either culture-specific values or social axioms. Implications of the current results are discussed. © The Author(s) 2015.

  14. The Value of Body Weight Measurement to Assess Dehydration in Children

    PubMed Central

    Pruvost, Isabelle; Dubos, François; Chazard, Emmanuel; Hue, Valérie; Duhamel, Alain; Martinot, Alain

    2013-01-01

    Dehydration secondary to gastroenteritis is one of the most common reasons for office visits and hospital admissions. The indicator most commonly used to estimate dehydration status is acute weight loss. Post-illness weight gain is considered the gold standard to determine the true level of dehydration and is widely used to estimate weight loss in research. To determine the value of post-illness weight gain as a gold standard for acute dehydration, we conducted a prospective cohort study in which 293 children, aged 1 month to 2 years, with acute diarrhea were followed for 7 days during a 3-year period. The main outcome measures were an accurate pre-illness weight (if available within 8 days before the diarrhea), post-illness weight, and theoretical weight (predicted from the child’s individual growth chart). Post-illness weight was measured for 231 (79%) and both theoretical and post-illness weights were obtained for 111 (39%). Only 62 (21%) had an accurate pre-illness weight. The correlation between post-illness and theoretical weight was excellent (0.978), but bootstrapped linear regression analysis showed that post-illness weight underestimated theoretical weight by 0.48 kg (95% CI: 0.06–0.79, p<0.02). The mean difference in the fluid deficit calculated was 4.0% of body weight (95% CI: 3.2–4.7, p<0.0001). Theoretical weight overestimated accurate pre-illness weight by 0.21 kg (95% CI: 0.08–0.34, p = 0.002). Post-illness weight underestimated pre-illness weight by 0.19 kg (95% CI: 0.03–0.36, p = 0.02). The prevalence of 5% dehydration according to post-illness weight (21%) was significantly lower than the prevalence estimated by either theoretical weight (60%) or clinical assessment (66%, p<0.0001). These data suggest that post-illness weight is of little value as a gold standard to determine the true level of dehydration. The performance of dehydration signs or scales determined by using post-illness weight as a gold standard has to be

  15. A new coarse-grained model for E. coli cytoplasm: accurate calculation of the diffusion coefficient of proteins and observation of anomalous diffusion.

    PubMed

    Hasnain, Sabeeha; McClendon, Christopher L; Hsu, Monica T; Jacobson, Matthew P; Bandyopadhyay, Pradipta

    2014-01-01

    A new coarse-grained model of the E. coli cytoplasm is developed by describing the proteins of the cytoplasm as flexible units consisting of one or more spheres that follow Brownian dynamics (BD), with hydrodynamic interactions (HI) accounted for by a mean-field approach. Extensive BD simulations were performed to calculate the diffusion coefficients of three different proteins in the cellular environment. The results are in close agreement with experimental or previously simulated values, where available. Control simulations without HI showed that use of HI is essential to obtain accurate diffusion coefficients. Anomalous diffusion inside the crowded cellular medium was investigated with Fractional Brownian motion analysis, and found to be present in this model. By running a series of control simulations in which various forces were removed systematically, it was found that repulsive interactions (volume exclusion) are the main cause for anomalous diffusion, with a secondary contribution from HI.

  16. Dynamic characteristics of laser Doppler flowmetry signals obtained in response to a local and progressive pressure applied on diabetic and healthy subjects

    NASA Astrophysics Data System (ADS)

    Humeau, Anne; Koitka, Audrey; Abraham, Pierre; Saumet, Jean-Louis; L'Huillier, Jean-Pierre

    2004-09-01

    In the biomedical field, the laser Doppler flowmetry (LDF) technique is a non-invasive method to monitor skin perfusion. On the skin of healthy humans, LDF signals present a significant transient increase in response to a local and progressive pressure application. This vasodilatory reflex response may have important implications for cutaneous pathologies involved in various neurological diseases and in the pathophysiology of decubitus ulcers. The present work analyses the dynamic characteristics of these signals in young type 1 diabetic patients and in healthy age-matched subjects. To obtain accurate dynamic characteristic values, a de-noising wavelet-based algorithm is first applied to the LDF signals. All the de-noised signals are then normalised to the same value. The blood flow peak and the time to reach this peak are then calculated for each computed signal. The results show a large vasodilation in the signals of healthy subjects, with the mean peak occurring at a pressure of approximately 3.2 kPa. In type 1 diabetic patients, however, the vasodilation is of limited amplitude, with the maximum value occurring, on average, at a pressure of 1.1 kPa. The inability of diabetic patients to substantially increase their cutaneous blood flow may help explain the development of foot ulcers.

  17. Derivation of guideline values for gold (III) ion toxicity limits to protect aquatic ecosystems.

    PubMed

    Nam, Sun-Hwa; Lee, Woo-Mi; Shin, Yu-Jin; Yoon, Sung-Ji; Kim, Shin Woong; Kwak, Jin Il; An, Youn-Joo

    2014-01-01

    This study focused on estimating the toxicity values of various aquatic organisms exposed to the gold (III) ion (Au(3+)) and on proposing maximum guideline values for Au(3+) toxicity that protect the aquatic ecosystem. A comparative assessment of methods developed in Australia and New Zealand versus the European Community (EC) was conducted. The test species used in this study included two bacteria (Escherichia coli and Bacillus subtilis), one alga (Pseudokirchneriella subcapitata), one euglena (Euglena gracilis), three cladocerans (Daphnia magna, Moina macrocopa, and Simocephalus mixtus), and two fish (Danio rerio and Oryzias latipes). Au(3+) induced growth inhibition, mortality, immobilization, and/or developmental malformations in all test species, with responses being concentration-dependent. According to the moderate reliability method of Australia and New Zealand, guideline values of 0.006 and 0.075 mg/L for Au(3+) were obtained by dividing the HC5 and HC50 values of 0.33 and 4.46 mg/L from the species sensitivity distribution (SSD) by a Final Acute to Chronic Ratio (FACR) of 59.09. In contrast, the EC method uses an assessment factor (AF): the 0.0006 mg/L guideline value for Au(3+) was obtained by dividing the 48-h EC50 value of 0.60 mg/L (the lowest toxicity value obtained from short-term results) by an AF of 1000. The Au(3+) guideline value derived using an AF was more stringent than that derived from the SSD. More toxicity data from various bioassays are required to develop more accurate ecological risk assessments. More chronic/long-term exposure studies on sensitive endpoints using additional fish species and invertebrates not included in the current dataset will be needed to use other derivation methods (e.g., US EPA and Canadian Type A) or the "High Reliability Method" from Australia/New Zealand. Such research would facilitate the establishment of guideline values for various pollutants that reflect the universal effects of various pollutants in aquatic ecosystems. To
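
The arithmetic behind the two derivations can be reproduced directly from the values quoted in the abstract; the variable names below are ours, not the authors':

```python
# Values taken from the abstract: HC5/HC50 from the SSD, the FACR, and
# the lowest 48-h EC50 with an assessment factor (AF) of 1000.
HC5, HC50, FACR = 0.33, 4.46, 59.09   # mg/L, mg/L, dimensionless
lowest_ec50, AF = 0.60, 1000          # mg/L, dimensionless

ssd_guideline_low = HC5 / FACR    # moderate-reliability method, ~0.006 mg/L
ssd_guideline_high = HC50 / FACR  # ~0.075 mg/L
af_guideline = lowest_ec50 / AF   # EC method, 0.0006 mg/L
```

The AF-based value (0.0006 mg/L) is an order of magnitude below the SSD-based lower value (~0.006 mg/L), which is the sense in which the abstract calls it more stringent.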

  18. Accuracy of apparent diffusion coefficient value measurement on PACS workstation: A comparative analysis.

    PubMed

    El Kady, Reem M; Choudhary, Arabinda Kumar; Tappouni, Rafel

    2011-03-01

    The purpose of this article is to evaluate the accuracy of apparent diffusion coefficient (ADC) measurements made with a PACS workstation compared with measurements made with a dedicated workstation, which is currently considered the reference standard. A retrospective review was performed of liver lesions from 79 patients using three MRI platforms. The final diagnosis was established by liver biopsy in 31 patients and by dynamic MRI and follow-up, both clinical and radiologic as indicated, in 48 patients. Each lesion that was clearly demonstrable on the ADC map was measured with a commercial dedicated postprocessing workstation and again with a PACS system. A two-sample t test was used to determine the statistically significant differences between the two ADC measurements. A total of 79 patients with 120 liver lesions were included. ADC values measured on the workstation were 0.4-4.38 × 10(-3) mm(2)/s. The ADC values measured on the PACS were 0.42-4.35 × 10(-3) mm(2)/s. The t value was -1.113, with 119 degrees of freedom, and the significance level was 0.268, indicating no significant difference between the two measuring systems across all pathologic abnormalities and MRI scanners used. ADC values measured on a routine PACS workstation are as accurate as the values obtained on a dedicated specialized workstation. ADC value measurement on the routine PACS will save time and lead to increased utilization, which, in turn, will lead to an improved understanding of the different disease processes and their clinical management.
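
The reported 119 degrees of freedom over 120 lesions are consistent with a paired comparison of the two measurements per lesion. A stdlib sketch of such a t statistic, run on synthetic stand-in data (the study's actual ADC values are not reproduced here), might look like:

```python
import math
import random
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic and degrees of freedom (n - 1) for two
    measurements of the same lesions."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n)), n - 1

# Synthetic stand-in data: 120 lesions measured on both systems, with
# only small random scatter between the two measurements.
random.seed(1)
workstation = [random.uniform(0.4, 4.4) for _ in range(120)]  # x10^-3 mm^2/s
pacs = [v + random.gauss(0, 0.02) for v in workstation]

t, df = paired_t(workstation, pacs)
```

When the two systems differ only by measurement noise, the statistic stays small, as in the study's non-significant result.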

  19. The Utility of Maze Accurate Response Rate in Assessing Reading Comprehension in Upper Elementary and Middle School Students

    ERIC Educational Resources Information Center

    McCane-Bowling, Sara J.; Strait, Andrea D.; Guess, Pamela E.; Wiedo, Jennifer R.; Muncie, Eric

    2014-01-01

    This study examined the predictive utility of five formative reading measures: words correct per minute, number of comprehension questions correct, reading comprehension rate, number of maze correct responses, and maze accurate response rate (MARR). Broad Reading cluster scores obtained via the Woodcock-Johnson III (WJ III) Tests of Achievement…

  20. Use of an inertial navigation system for accurate track recovery and coastal oceanographic measurements

    NASA Technical Reports Server (NTRS)

    Oliver, B. M.; Gower, J. F. R.

    1977-01-01

    A data acquisition system using a Litton LTN-51 inertial navigation unit (INU) was tested and used for aircraft track recovery and for location and tracking from the air of targets at sea. The characteristic position drift of the INU is compensated for by sighting landmarks of accurately known position at discrete time intervals using a visual sighting system in the transparent nose of the Beechcraft 18 aircraft used. For an aircraft altitude of about 300 m, theoretical and experimental tests indicate that calculated aircraft and/or target positions obtained from the interpolated INU drift curve will be accurate to within 10 m for landmarks spaced approximately every 15 minutes in time. For applications in coastal oceanography, such as surface current mapping by tracking artificial targets, the system allows a broad area to be covered without use of high altitude photography and its attendant needs for large targets and clear weather.
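
The drift-curve interpolation described above can be sketched as simple linear interpolation of the position error between landmark fixes; the fix times and error values below are hypothetical:

```python
def drift_correction(fixes, t):
    """Linearly interpolate the INU position error at time t between
    landmark fixes given as (time_s, east_error_m, north_error_m)."""
    fixes = sorted(fixes)
    for (t0, e0, n0), (t1, e1, n1) in zip(fixes, fixes[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)
            return e0 + f * (e1 - e0), n0 + f * (n1 - n0)
    raise ValueError("t lies outside the interval covered by the fixes")

# Landmark sightings roughly 15 min (900 s) apart; error values invented.
fixes = [(0, 0.0, 0.0), (900, 40.0, -20.0), (1800, 70.0, -35.0)]
east, north = drift_correction(fixes, 450)  # halfway: (20.0, -10.0)
```

Subtracting the interpolated error from each raw INU position yields the corrected aircraft/target track between sightings.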

  1. A novel method for pair-matching using three-dimensional digital models of bone: mesh-to-mesh value comparison.

    PubMed

    Karell, Mara A; Langstaff, Helen K; Halazonetis, Demetrios J; Minghetti, Caterina; Frelat, Mélanie; Kranioti, Elena F

    2016-09-01

    The commingling of human remains often hinders forensic/physical anthropologists during the identification process, as there are limited methods to accurately sort these remains. This study investigates a new method for pair-matching, a common individualization technique, which uses digital three-dimensional models of bone: mesh-to-mesh value comparison (MVC). The MVC method digitally compares the entire three-dimensional geometry of two bones at once to produce a single value to indicate their similarity. Two different versions of this method, one manual and the other automated, were created and then tested for how well they accurately pair-matched humeri. Each version was assessed using sensitivity and specificity. The manual mesh-to-mesh value comparison method was 100 % sensitive and 100 % specific. The automated mesh-to-mesh value comparison method was 95 % sensitive and 60 % specific. Our results indicate that the mesh-to-mesh value comparison method overall is a powerful new tool for accurately pair-matching commingled skeletal elements, although the automated version still needs improvement.

  2. Pelvic orientation for total hip arthroplasty in lateral decubitus: can it be accurately measured?

    PubMed

    Sykes, Alice M; Hill, Janet C; Orr, John F; Gill, Harinderjit S; Salazar, Jose J; Humphreys, Lee D; Beverland, David E

    2016-05-16

    During total hip arthroplasty (THA), accurately predicting acetabular cup orientation remains a key challenge, in great part because of uncertainty about pelvic orientation. This pilot study aimed to develop and validate a technique to measure pelvic orientation; establish its accuracy in locating anatomical landmarks; and, subsequently, investigate whether limb movement during a simulated surgical procedure alters pelvic orientation. The developed technique measured the 3-D orientation of an isolated Sawbone pelvis; it was then implemented to measure pelvic orientation in lateral decubitus with post-THA patients (n = 20) using a motion capture system. Orientation of the isolated Sawbone pelvis was accurately measured, as demonstrated by high correlations with angular data from a coordinate measurement machine (R-squared values close to 1 for all pelvic axes). When the technique was applied to volunteer subjects, the largest movements occurred about the longitudinal pelvic axis (internal and external pelvic rotation). Rotations about the anteroposterior axis, which directly affect inclination angles, showed that >75% of participants had movement within ±5° of neutral (0°). The technique accurately measured orientation of the isolated bony pelvis. This was not the case in a simulated theatre environment: soft tissue landmarks were difficult to palpate repeatedly. These findings have direct clinical relevance, as landmark registration in lateral decubitus is a potential source of error, contributing here to large ranges in measured movement. Surgeons must be aware that present techniques using bony landmarks to reference pelvic orientation for cup implantation, both computer-based and mechanical, may not be sufficiently accurate.

  3. Highly Accurate Analytical Approximate Solution to a Nonlinear Pseudo-Oscillator

    NASA Astrophysics Data System (ADS)

    Wu, Baisheng; Liu, Weijia; Lim, C. W.

    2017-07-01

    A second-order Newton method is presented to construct analytical approximate solutions to a nonlinear pseudo-oscillator in which the restoring force is inversely proportional to the dependent variable. The nonlinear equation is first expressed in a specific form, and it is then solved in two steps, a predictor and a corrector step. In each step, the harmonic balance method is used in an appropriate manner to obtain a set of linear algebraic equations. With only one simple second-order Newton iteration step, a short, explicit, and highly accurate analytical approximate solution can be derived. The approximate solutions are valid for all amplitudes of the pseudo-oscillator. Furthermore, the method incorporates second-order Taylor expansion in a natural way, and it is of significant faster convergence rate.

  4. Different top-down approaches to estimate measurement uncertainty of whole blood tacrolimus mass concentration values.

    PubMed

    Rigo-Bonnin, Raül; Blanco-Font, Aurora; Canalias, Francesca

    2018-05-08

    Values of mass concentration of tacrolimus in whole blood are commonly used by clinicians for monitoring the status of a transplant patient and for checking whether the administered dose of tacrolimus is effective. Clinical laboratories must therefore provide results as accurately as possible. Measurement uncertainty can help ensure the reliability of these results. The aim of this study was to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values obtained by UHPLC-MS/MS using two top-down approaches: the single laboratory validation approach and the proficiency testing approach. For the single laboratory validation approach, we estimated the uncertainties associated with the intermediate imprecision (using long-term internal quality control data) and the bias (using a certified reference material). Next, we combined them with the uncertainties related to the calibrator-assigned values to obtain a combined uncertainty and, finally, calculated the expanded uncertainty. For the proficiency testing approach, the uncertainty was estimated in a similar way to the single laboratory validation approach, but data from internal and external quality control schemes were used to estimate the uncertainty related to the bias. The estimated expanded uncertainties for the single laboratory validation approach and for proficiency testing using internal and external quality control schemes were 11.8%, 13.2%, and 13.0%, respectively. After performing the two top-down approaches, we observed that their uncertainty results were quite similar. This would confirm that either approach could be used to estimate the measurement uncertainty of whole blood tacrolimus mass concentration values in clinical laboratories. Copyright © 2018 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
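
The combination of uncertainty components follows the usual root-sum-of-squares rule with a coverage factor; a minimal sketch, using illustrative component values rather than the paper's, is:

```python
import math

def expanded_uncertainty(u_imprecision, u_bias, u_cal, k=2):
    """Combine relative standard uncertainty components in quadrature
    and expand with coverage factor k (k = 2 for ~95% coverage)."""
    u_combined = math.sqrt(u_imprecision**2 + u_bias**2 + u_cal**2)
    return k * u_combined

# Illustrative component values in percent (not taken from the paper):
U = expanded_uncertainty(4.5, 2.5, 2.0)
```

Components of a few percent each, combined and expanded this way, land in the 11-13% range the study reports.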

  5. Accurate inclusion mass screening: a bridge from unbiased discovery to targeted assay development for biomarker verification.

    PubMed

    Jaffe, Jacob D; Keshishian, Hasmik; Chang, Betty; Addona, Theresa A; Gillette, Michael A; Carr, Steven A

    2008-10-01

    Verification of candidate biomarker proteins in blood is typically done using multiple reaction monitoring (MRM) of peptides by LC-MS/MS on triple quadrupole MS systems. MRM assay development for each protein requires significant time and cost, much of which is likely to be of little value if the candidate biomarker is below the detection limit in blood or a false positive in the original discovery data. Here we present a new technology, accurate inclusion mass screening (AIMS), designed to provide a bridge from unbiased discovery to MS-based targeted assay development. Masses on the software inclusion list are monitored in each scan on the Orbitrap MS system, and MS/MS spectra for sequence confirmation are acquired only when a peptide from the list is detected with both the correct accurate mass and charge state. The AIMS experiment confirms that a given peptide (and thus the protein from which it is derived) is present in the plasma. Throughput of the method is sufficient to qualify up to a hundred proteins/week. The sensitivity of AIMS is similar to MRM on a triple quadrupole MS system using optimized sample preparation methods (low tens of ng/ml in plasma), and MS/MS data from the AIMS experiments on the Orbitrap can be directly used to configure MRM assays. The method was shown to be at least 4-fold more efficient at detecting peptides of interest than undirected LC-MS/MS experiments using the same instrumentation, and relative quantitation information can be obtained by AIMS in case versus control experiments. Detection by AIMS ensures that a quantitative MRM-based assay can be configured for that protein. The method has the potential to qualify large number of biomarker candidates based on their detection in plasma prior to committing to the time- and resource-intensive steps of establishing a quantitative assay.

  6. An algorithm for selecting the most accurate protocol for contact angle measurement by drop shape analysis.

    PubMed

    Xu, Z N

    2014-12-01

    In this study, an error analysis is performed to study real water drop images and the corresponding numerically generated water drop profiles for three widely used static contact angle algorithms: the circle- and ellipse-fitting algorithms and the axisymmetric drop shape analysis-profile (ADSA-P) algorithm. The results demonstrate the accuracy of the numerically generated drop profiles based on the Laplace equation. A significant number of water drop profiles with different volumes, contact angles, and noise levels are generated, and the influences of the three factors on the accuracies of the three algorithms are systematically investigated. The results reveal that the three algorithms are complementary: the circle- and ellipse-fitting algorithms show low errors and are highly resistant to noise for water drops with small/medium volumes and contact angles, while for water drops with large volumes and contact angles only the ADSA-P algorithm can meet the accuracy requirement. However, the ADSA-P algorithm introduces significant errors in the case of small volumes and contact angles because of its high sensitivity to noise. The critical water drop volumes of the circle- and ellipse-fitting algorithms corresponding to a certain contact angle error are obtained through a significant amount of computation. To improve the precision of static contact angle measurement, a more accurate algorithm based on a combination of the three algorithms is proposed. Following a systematic investigation, the algorithm selection rule is described in detail; it maintains the advantages of the three algorithms while overcoming their deficiencies. In general, static contact angles over the entire hydrophobicity range can be accurately evaluated using the proposed algorithm, and erroneous judgments in static contact angle measurements are avoided.
The proposed algorithm is validated by a static contact angle evaluation of real and numerically generated water drop

  7. Magnetic resonance imaging assessment of the rotator cuff: is it really accurate?

    PubMed

    Wnorowski, D C; Levinsohn, E M; Chamberlain, B C; McAndrew, D L

    1997-12-01

    Magnetic resonance imaging (MRI) is used increasingly for evaluating the rotator cuff. This study of 39 shoulders (38 patients) compared the accuracy of MRI interpretation of rotator cuff integrity by a group of community hospital radiologists (clinical community scenario, CCS) with that of a musculoskeletal radiologist (experienced specialist scenario, ESS), relative to arthroscopy. For the CCS subgroup, the sensitivity, specificity, positive predictive value (PV), negative PV, and accuracy were 0%, 68%, 0%, 82%, and 59% for partial tears; 56%, 73%, 36%, 86%, and 69% for complete tears; and 85%, 52%, 50%, 87%, and 64% for all tears combined. For the ESS subgroup, the corresponding values were 20%, 88%, 20%, 88%, and 79% for partial tears; 78%, 83%, 58%, 92%, and 82% for complete tears; and 71%, 71%, 59%, 81%, and 71% for all tears. We concluded that MRI assessment of the rotator cuff was not accurate relative to arthroscopy. MRI was most helpful if the result was negative, and MRI diagnosis of partial tear was of little value. Considering the high cost of shoulder MRI, this study has significant implications for the evaluation of patients with possible rotator cuff pathology.

  8. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    NASA Astrophysics Data System (ADS)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  9. MM-ISMSA: An Ultrafast and Accurate Scoring Function for Protein-Protein Docking.

    PubMed

    Klett, Javier; Núñez-Salgado, Alfonso; Dos Santos, Helena G; Cortés-Cabrera, Álvaro; Perona, Almudena; Gil-Redondo, Rubén; Abia, David; Gago, Federico; Morreale, Antonio

    2012-09-11

    An ultrafast and accurate scoring function for protein-protein docking is presented. It includes (1) a molecular mechanics (MM) part based on a 12-6 Lennard-Jones potential; (2) an electrostatic component based on an implicit solvent model (ISM) with individual desolvation penalties for each partner in the protein-protein complex plus a hydrogen bonding term; and (3) a surface area (SA) contribution to account for the loss of water contacts upon protein-protein complex formation. The accuracy and performance of the scoring function, termed MM-ISMSA, have been assessed by (1) comparing the total binding energies, the electrostatic term, and its components (charge-charge and individual desolvation energies), as well as the per residue contributions, to results obtained with well-established methods such as APBSA or MM-PB(GB)SA for a set of 1242 decoy protein-protein complexes and (2) testing its ability to recognize the docking solution closest to the experimental structure as that providing the most favorable total binding energy. For this purpose, a test set consisting of 15 protein-protein complexes with known 3D structure mixed with 10 decoys for each complex was used. The correlation between the values afforded by MM-ISMSA and those from the other methods is quite remarkable (r(2) ∼ 0.9), and only 0.2-5.0 s (depending on the number of residues) are spent on a single calculation including an all vs all pairwise energy decomposition. On the other hand, MM-ISMSA correctly identifies the best docking solution as that closest to the experimental structure in 80% of the cases. Finally, MM-ISMSA can process molecular dynamics trajectories and reports the results as averaged values with their standard deviations. MM-ISMSA has been implemented as a plugin to the widely used molecular graphics program PyMOL, although it can also be executed in command-line mode. MM-ISMSA is distributed free of charge to nonprofit organizations.

  10. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data from PEF tests performed by patients in their own homes and places of work. Unfortunately, there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort-independent: the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test giving maximal effort to the expiration being measured; if the measurement is performed incorrectly, errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced, and a commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. Investigating current methods of measuring PEF and other pulmonary quantities provided a greater understanding of the limitations of current measurement methods and of the quantities being measured.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. Accurate Acoustic Thermometry I: The Triple Point of Gallium

    NASA Astrophysics Data System (ADS)

    Moldover, M. R.; Trusler, J. P. M.

    1988-01-01

    The speed of sound in argon has been accurately measured in the pressure range 25-380 kPa at the temperature of the triple point of gallium (Tg) and at 340 kPa at the temperature of the triple point of water (Tt). The results are combined with previously published thermodynamic and transport property data to obtain Tg = (302.9169 +/- 0.0005) K on the thermodynamic scale. Among recent determinations of T68 (the temperature on IPTS-68) at the gallium triple point, those with the smallest measurement uncertainty fall in the range 302.923 71 to 302.923 98 K. We conclude that T-T68 = (-6.9 +/- 0.5) mK near 303 K, in agreement with results obtained from other primary thermometers. The speed of sound was measured with a spherical resonator. The volume and thermal expansion of the resonator were determined by weighing the mercury required to fill it at Tt and Tg. The largest part of the standard error in the present determination of Tg is systematic. It results from imperfect knowledge of the thermal expansion of mercury between Tt and Tg. Smaller parts of the error result from imperfections in the measurement of the temperature of the resonator and of the resonance frequencies.
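
The underlying relation is that, for a dilute gas at zero pressure, u0² = γRT/M, so thermodynamic temperature scales as the square of the zero-pressure speed-of-sound ratio against a known reference point. A sketch of that working equation, with a hypothetical sound speed chosen only so the ratio is consistent with the reported Tg (these are not the paper's measured values):

```python
import math

T_TRIPLE_WATER = 273.16  # K, exact by definition

def temperature_from_sound_ratio(u0_T, u0_Tt):
    """Thermodynamic T from the zero-pressure speed-of-sound ratio in an
    ideal gas: u0**2 = gamma*R*T/M implies T proportional to u0**2."""
    return T_TRIPLE_WATER * (u0_T / u0_Tt) ** 2

# Hypothetical zero-pressure speed at the water triple point; only the
# ratio between the two speeds matters for the temperature.
u0_at_Tt = 307.8  # m/s
u0_at_Tg = u0_at_Tt * math.sqrt(302.9169 / 273.16)
T_g = temperature_from_sound_ratio(u0_at_Tg, u0_at_Tt)
```

Because the temperature depends on the squared ratio, relative errors in the sound-speed measurements roughly double in the temperature, which is why the resonator's dimensional stability (the mercury weighing) dominates the error budget.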

  12. Effective scheme to determine accurate defect formation energies and charge transition levels of point defects in semiconductors

    NASA Astrophysics Data System (ADS)

    Yao, Cang Lang; Li, Jian Chen; Gao, Wang; Tkatchenko, Alexandre; Jiang, Qing

    2017-12-01

    We propose an effective method to accurately determine the defect formation energy Ef and charge transition level ɛ of the point defects using exclusively cohesive energy Ecoh and the fundamental band gap Eg of pristine host materials. We find that Ef of the point defects can be effectively separated into geometric and electronic contributions with a functional form: Ef=χ Ecoh+λ Eg , where χ and λ are dictated by the geometric and electronic factors of the point defects (χ and λ are defect dependent). Such a linear combination of Ecoh and Eg reproduces Ef with an accuracy better than 5% for electronic structure methods ranging from hybrid density-functional theory (DFT) to many-body random-phase approximation (RPA) and experiments. Accordingly, ɛ is also determined by Ecoh/Eg and the defect geometric/electronic factors. The identified correlation is rather general for monovacancies and interstitials, which holds in a wide variety of semiconductors covering Si, Ge, phosphorenes, ZnO, GaAs, and InP, and enables one to obtain reliable values of Ef and ɛ of the point defects for RPA and experiments based on semilocal DFT calculations.
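
Given Ef = χEcoh + λEg with defect-dependent χ and λ, the two coefficients can be recovered by an intercept-free least-squares fit over several host materials. The sketch below uses made-up placeholder numbers, not the paper's data:

```python
def fit_chi_lambda(data):
    """Least-squares fit of E_f = chi*E_coh + lam*E_g (no intercept).
    data: list of (E_coh, E_g, E_f) tuples for one defect type."""
    # Normal equations for the two parameters:
    s_cc = sum(c * c for c, g, f in data)
    s_gg = sum(g * g for c, g, f in data)
    s_cg = sum(c * g for c, g, f in data)
    s_cf = sum(c * f for c, g, f in data)
    s_gf = sum(g * f for c, g, f in data)
    det = s_cc * s_gg - s_cg ** 2
    chi = (s_cf * s_gg - s_gf * s_cg) / det
    lam = (s_gf * s_cc - s_cf * s_cg) / det
    return chi, lam

# Synthetic, noise-free data generated with chi = 0.8, lam = 0.5
# (placeholder host values, not real cohesive energies or band gaps):
data = [(c, g, 0.8 * c + 0.5 * g)
        for c, g in [(4.6, 1.1), (3.9, 0.7), (3.3, 2.0)]]
chi, lam = fit_chi_lambda(data)
```

Once χ and λ are fitted for a defect type at a cheap level of theory, the same linear form evaluated with Ecoh and Eg from a higher-level method (hybrid DFT, RPA, or experiment) yields the corresponding Ef, which is the transferability the abstract claims.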

  13. A robust recognition and accurate locating method for circular coded diagonal target

    NASA Astrophysics Data System (ADS)

    Bao, Yunna; Shang, Yang; Sun, Xiaoliang; Zhou, Jiexin

    2017-10-01

    As a category of special control points which can be automatically identified, artificial coded targets have been widely developed in the fields of computer vision, photogrammetry, augmented reality, etc. In this paper, a new circular coded target designed by RockeTech technology Corp. Ltd, called the circular coded diagonal target (CCDT), is analyzed and studied. A novel detection and recognition method with good robustness is proposed and implemented in Visual Studio. In this algorithm, the ellipse features of the center circle are first used for rough positioning. Then, according to the characteristics of the center diagonal target, a circular frequency filter is designed to choose the correct center circle and eliminate non-target noise. The precise positioning of the coded target is done by the correlation coefficient fitting extreme value method. Finally, the coded target recognition is achieved by decoding the binary sequence in the outer ring of the extracted target. To test the proposed algorithm, both simulated and real experiments were carried out. The results show that the CCDT recognition and accurate locating method proposed in this paper can robustly recognize and accurately locate targets against complex and noisy backgrounds.

  14. History and progress on accurate measurements of the Planck constant

    NASA Astrophysics Data System (ADS)

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10-34 J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that reflected the influence of h combined with other physical constants: the elementary charge, e, and the Avogadro constant, NA. As experimental techniques improved, so did the precision of the value of h. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred-year-old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in the definitions of the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties from the watt balance experiments and Avogadro determinations now approach a few parts in 108, its importance has been linked to a proposed redefinition of the kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the improved

  15. Rib biomechanical properties exhibit diagnostic potential for accurate ageing in forensic investigations

    PubMed Central

    Bonicelli, Andrea; Xhemali, Bledar; Kranioti, Elena F.

    2017-01-01

    Age estimation remains one of the most challenging tasks in forensic practice when establishing a biological profile of unknown skeletonised remains. Morphological methods based on developmental markers of bones can provide accurate age estimates at a young age, but become highly unreliable for ages over 35, when all developmental markers disappear. This study explores the changes in the biomechanical properties of bone tissue and matrix, which continue to change with age even after skeletal maturity, and their potential value for age estimation. As a proof of concept we investigated the relationship of 28 variables at the macroscopic and microscopic level in rib autopsy samples from 24 individuals. Stepwise regression analysis produced a number of equations, one of which, with seven variables, showed an R2 = 0.949, a mean residual error of 2.13 yrs ± 0.4 (SD), and a maximum residual error of 2.88 yrs. For forensic purposes, using only bench-top machines in tests that can be carried out within 36 hrs, a set of just 3 variables produced an equation with an R2 = 0.902, a mean residual error of 3.38 yrs ± 2.6 (SD), and a maximum observed residual error of 9.26 yrs. This method outstrips all existing age-at-death methods based on ribs, thus providing a novel lab-based accurate tool in the forensic investigation of human remains. The present application is optimised for fresh (uncompromised by taphonomic conditions) remains, but the potential of the principle and method is vast once the trends of the biomechanical variables are established for other environmental conditions and circumstances. PMID:28520764

  16. Accurate Determination of Coulombic Efficiency for Lithium Metal Anodes and Lithium Metal Batteries

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adams, Brian D.; Zheng, Jianming; Ren, Xiaodi

    Lithium (Li) metal is an ideal anode material for high-energy-density batteries. However, its low Coulombic efficiency (CE) and the formation of dendrites during the plating and stripping processes have hindered its application in rechargeable Li metal batteries. Accurate measurement of Li CE is critical for predicting the cycle life of Li metal batteries, but the measurement is affected by various factors, which often leads to conflicting values reported in the literature. Here, we investigate the factors that affect the measurement of Li CE and propose a more accurate method of determining it. It was also found that the capacity used for cycling greatly affects the stabilization cycles and the average CE: a higher cycling capacity leads to fewer stabilization cycles and a higher average CE. With a proper high-concentration ether-based electrolyte, Li metal can be cycled with a high average CE of 99.5% for over 100 cycles at a high capacity of 6 mAh cm-2 suitable for practical applications.
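As a hedged illustration of the quantity being measured (not the authors' proposed protocol), an average CE over a set of cycles can be computed as total lithium stripped divided by total lithium plated:

```python
def average_ce(plated_mah, stripped_mah):
    """Average Coulombic efficiency over a set of cycles:
    total capacity stripped / total capacity plated."""
    if len(plated_mah) != len(stripped_mah):
        raise ValueError("cycle lists must have equal length")
    return sum(stripped_mah) / sum(plated_mah)

# Illustrative numbers only: 6 mAh/cm^2 plated per cycle, ~99.5% recovered.
plated = [6.0] * 5
stripped = [5.97, 5.96, 5.98, 5.97, 5.97]
ce = average_ce(plated, stripped)
```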

  17. Accurate Determination of Tunneling-Affected Rate Coefficients: Theory Assessing Experiment.

    PubMed

    Zuo, Junxiang; Xie, Changjian; Guo, Hua; Xie, Daiqian

    2017-07-20

    The thermal rate coefficients of a prototypical bimolecular reaction are determined on an accurate ab initio potential energy surface (PES) using ring polymer molecular dynamics (RPMD). It is shown that quantum effects such as tunneling and zero-point energy (ZPE) are of critical importance for the HCl + OH reaction at low temperatures, while the heavier deuterium substitution renders tunneling less facile in the DCl + OH reaction. The calculated RPMD rate coefficients are in excellent agreement with experimental data for the HCl + OH reaction in the entire temperature range of 200-1000 K, confirming the accuracy of the PES. On the other hand, the RPMD rate coefficients for the DCl + OH reaction agree with some, but not all, experimental values. The self-consistency of the theoretical results thus allows a quality assessment of the experimental data.

  18. Partitioned key-value store with atomic memory operations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, John M.; Faibish, Sorin; Grider, Gary

    A partitioned key-value store is provided that supports atomic memory operations. A server performs a memory operation in a partitioned key-value store by receiving a request from an application for at least one atomic memory operation, the atomic memory operation comprising a memory address identifier; and, in response to the atomic memory operation, performing one or more of (i) reading a client-side memory location identified by the memory address identifier and storing one or more key-value pairs from the client-side memory location in a local key-value store of the server; and (ii) obtaining one or more key-value pairs from the local key-value store of the server and writing the obtained one or more key-value pairs into the client-side memory location identified by the memory address identifier. The server can perform functions obtained from a client-side memory location and return a result to the client using one or more of the atomic memory operations.
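A minimal sketch of the two atomic-operation paths the abstract enumerates; the class and method names are invented for illustration (the patent does not specify an API), and real client-side memory access would go through RDMA or similar, not a Python dict:

```python
class KVServer:
    """Toy partitioned key-value store server; a dict stands in for a
    client-side memory location identified by a memory address."""

    def __init__(self):
        self.local_store = {}

    def absorb(self, client_buffer):
        """(i) Read the client-side location and store its pairs locally."""
        self.local_store.update(client_buffer)

    def emit(self, keys, client_buffer):
        """(ii) Write requested pairs from the local store into the
        client-side location."""
        for k in keys:
            if k in self.local_store:
                client_buffer[k] = self.local_store[k]
        return client_buffer
```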

  19. Standardization of a fluconazole bioassay and correlation of results with those obtained by high-pressure liquid chromatography.

    PubMed Central

    Rex, J H; Hanson, L H; Amantea, M A; Stevens, D A; Bennett, J E

    1991-01-01

    An improved bioassay for fluconazole was developed. This assay is sensitive in the clinically relevant range (2 to 40 micrograms/ml) and analyzes plasma, serum, and cerebrospinal fluid specimens; bioassay results correlate with results obtained by high-pressure liquid chromatography (HPLC). Bioassay and HPLC analyses of spiked plasma, serum, and cerebrospinal fluid samples (run as unknowns) gave good agreement with expected values. Analysis of specimens from patients gave equivalent results by both HPLC and bioassay. HPLC had a lower within-run coefficient of variation (less than 2.5% for HPLC versus less than 11% for bioassay) and a lower between-run coefficient of variation (less than 5% versus less than 12% for bioassay) and was more sensitive (lower limit of detection, 0.1 micrograms/ml [versus 2 micrograms/ml for bioassay]). The bioassay is, however, sufficiently accurate and sensitive for clinical specimens, and its relative simplicity, low sample volume requirement, and low equipment cost should make it the technique of choice for analysis of routine clinical specimens. PMID:1854166

  20. The challenge of obtaining information necessary for multi-criteria decision analysis implementation: the case of physiotherapy services in Canada

    PubMed Central

    2013-01-01

    Background As fiscal constraints dominate health policy discussions across Canada and globally, priority-setting exercises are becoming more common to guide the difficult choices that must be made. In this context, it becomes highly desirable to have accurate estimates of the value of specific health care interventions. Economic evaluation is a well-accepted method to estimate the value of health care interventions. However, economic evaluation has significant limitations, which have led to an increase in the use of Multi-Criteria Decision Analysis (MCDA). One key concern with MCDA is the availability of the information necessary for implementation. In the fall of 2011, the Canadian Physiotherapy Association embarked on a project aimed at providing a valuation of physiotherapy services that is both evidence-based and relevant to resource allocation decisions. The framework selected for this project was MCDA. We report on how we addressed the challenge of obtaining some of the information necessary for MCDA implementation. Methods MCDA criteria were selected and areas of physiotherapy practice were identified. Building up the necessary information base was a three-step process. First, a literature review was conducted for each practice area, on each criterion. The next step was to conduct interviews with experts in each of the practice areas to critique the results of the literature review and to fill in gaps where literature was absent or insufficient. Finally, the results of the individual interviews were validated by a national committee to ensure consistency across all practice areas and that a national-level perspective was applied. Results Despite a lack of research evidence on many of the considerations relevant to the estimation of the value of physiotherapy services (the criteria), sufficient information was obtained to facilitate MCDA implementation at the local level.
Conclusions The results of this research project serve two purposes: 1) a method to

  1. Accurate computation and continuation of homoclinic and heteroclinic orbits for singular perturbation problems

    NASA Technical Reports Server (NTRS)

    Vaughan, William W.; Friedman, Mark J.; Monteiro, Anand C.

    1993-01-01

    In earlier papers, Doedel and the authors have developed a numerical method and derived error estimates for the computation of branches of heteroclinic orbits for a system of autonomous ordinary differential equations in R(exp n). The idea of the method is to reduce a boundary value problem on the real line to a boundary value problem on a finite interval by using a local (linear or higher order) approximation of the stable and unstable manifolds. A practical limitation for the computation of homoclinic and heteroclinic orbits has been the difficulty in obtaining starting orbits. Typically these were obtained from a closed form solution or via a homotopy from a known solution. Here we consider extensions of our algorithm which allow us to obtain starting orbits on the continuation branch in a more systematic way as well as make the continuation algorithm more flexible. In applications, we use the continuation software package AUTO in combination with some initial value software. The examples considered include computation of homoclinic orbits in a singular perturbation problem and in a turbulent fluid boundary layer in the wall region problem.

  2. A fast and accurate dihedral interpolation loop subdivision scheme

    NASA Astrophysics Data System (ADS)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. To prevent surface shrinkage, we keep the limit condition unchanged. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly because the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach uses local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We demonstrate the effectiveness of the proposed method on various 3D triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.

  3. Microfabricated fuel heating value monitoring device

    DOEpatents

    Robinson, Alex L [Albuquerque, NM; Manginell, Ronald P [Albuquerque, NM; Moorman, Matthew W [Albuquerque, NM

    2010-05-04

    A microfabricated fuel heating value monitoring device comprises a microfabricated gas chromatography column in combination with a catalytic microcalorimeter. The microcalorimeter can comprise a reference thermal conductivity sensor to provide diagnostics and surety. Using microfabrication techniques, the device can be manufactured in production quantities at a low per-unit cost. The microfabricated fuel heating value monitoring device enables continuous calorimetric determination of the heating value of natural gas with a 1 minute analysis time and 1.5 minute cycle time using air as a carrier gas. This device has applications in remote natural gas mining stations, pipeline switching and metering stations, turbine generators, and other industrial user sites. For gas pipelines, the device can improve gas quality during transfer and blending, and provide accurate financial accounting. For industrial end users, the device can provide continuous feedback of physical gas properties to improve combustion efficiency during use.

  4. System to measure accurate temperature dependence of electric conductivity down to 20 K in ultrahigh vacuum.

    PubMed

    Sakai, C; Takeda, S N; Daimon, H

    2013-07-01

    We have developed a new in situ electrical-conductivity measurement system that operates in ultrahigh vacuum (UHV) with accurate temperature measurement down to 20 K. The system is mainly composed of a new sample-holder fixing mechanism, a new movable conductivity-measurement mechanism, a cryostat, and two receptors for the sample- and four-probe holders. The sample-holder is pressed firmly against the receptor, which is connected to the cryostat, by the new fixing mechanism to obtain high thermal conductivity. Test pieces on the sample-holders were cooled to about 20 K using this fixing mechanism, whereas without it they reached only about 60 K. The four probes can be brought into contact with a sample surface using the new movable conductivity-measurement mechanism, allowing electrical conductivity to be measured after a film is grown on a substrate or a clean surface is obtained by cleavage, flashing, and so on. Accurate temperature measurement is possible because the sample can be transferred with a thermocouple and/or diode attached directly to it. A single crystal of a Bi-based copper oxide high-Tc superconductor (HTSC) was cleaved in UHV to obtain a clean surface, and its superconducting critical temperature was successfully measured in situ. The importance of in situ resistance measurement in UHV was demonstrated for this HTSC before and after cesium (Cs) adsorption on its surface: the Tc onset increased and the Tc offset decreased upon Cs adsorption.

  5. Is self-reported height or arm span a more accurate alternative measure of height?

    PubMed

    Brown, Jean K; Feng, Jui-Ying; Knapp, Thomas R

    2002-11-01

    The purpose of this study was to determine whether self-reported height or arm span is the more accurate alternative measure of height. A sample of 409 people between the ages of 19 and 67 (M = 35.0) participated in this anthropometric study. Height, self-reported height, and arm span were measured by 82 nursing research students. Mean differences from criterion measures were 0.17 cm for the measuring rules, 0.47 cm for arm span, and 0.85 cm and 0.87 cm for heights. Test-retest reliability was r = .997 for both height and arm span. The relationships of height to self-reported height and arm span were r = .97 and .90, respectively. Mean absolute differences were 1.80 cm and 4.29 cm, respectively. These findings support the practice of using self-reported height as an alternative measure of measured height in clinical settings, but arm span is an accurate alternative when neither measured height nor self-reported height is obtainable.

  6. Does ultrasonography accurately diagnose acute cholecystitis? Improving diagnostic accuracy based on a review at a regional hospital

    PubMed Central

    Hwang, Hamish; Marsh, Ian; Doyle, Jason

    2014-01-01

    Background Acute cholecystitis is one of the most common diseases requiring emergency surgery. Ultrasonography is an accurate test for cholelithiasis but has a high false-negative rate for acute cholecystitis. The Murphy sign and laboratory tests performed independently are also not particularly accurate. This study was designed to review the accuracy of ultrasonography for diagnosing acute cholecystitis in a regional hospital. Methods We studied all emergency cholecystectomies performed over a 1-year period. All imaging studies were reviewed by a single radiologist, and all pathology was reviewed by a single pathologist. The reviewers were blinded to each other’s results. Results A total of 107 patients required an emergency cholecystectomy in the study period; 83 of them underwent ultrasonography. Interradiologist agreement was 92% for ultrasonography. For cholelithiasis, ultrasonography had 100% sensitivity, 18% specificity, 81% positive predictive value (PPV) and 100% negative predictive value (NPV). For acute cholecystitis, it had 54% sensitivity, 81% specificity, 85% PPV and 47% NPV. All patients had chronic cholecystitis and 67% had acute cholecystitis on histology. When combined with positive Murphy sign and elevated neutrophil count, an ultrasound showing cholelithiasis or acute cholecystitis yielded a sensitivity of 74%, specificity of 62%, PPV of 80% and NPV of 53% for the diagnosis of acute cholecystitis. Conclusion Ultrasonography alone has a high rate of false-negative studies for acute cholecystitis. However, a higher rate of accurate diagnosis can be achieved using a triad of positive Murphy sign, elevated neutrophil count and an ultrasound showing cholelithiasis or cholecystitis. PMID:24869607

  7. Developing Electronic Health Record Algorithms That Accurately Identify Patients With Systemic Lupus Erythematosus.

    PubMed

    Barnado, April; Casey, Carolyn; Carroll, Robert J; Wheless, Lee; Denny, Joshua C; Crofford, Leslie J

    2017-05-01

    To study systemic lupus erythematosus (SLE) in the electronic health record (EHR), we must accurately identify patients with SLE. Our objective was to develop and validate novel EHR algorithms that use International Classification of Diseases, Ninth Revision (ICD-9), Clinical Modification codes, laboratory testing, and medications to identify SLE patients. We used Vanderbilt's Synthetic Derivative, a de-identified version of the EHR, with 2.5 million subjects. We selected all individuals with at least 1 SLE ICD-9 code (710.0), yielding 5,959 individuals. To create a training set, 200 subjects were randomly selected for chart review. A subject was defined as a case if diagnosed with SLE by a rheumatologist, nephrologist, or dermatologist. Positive predictive values (PPVs) and sensitivity were calculated for combinations of code counts of the SLE ICD-9 code, a positive antinuclear antibody (ANA), ever use of medications, and a keyword of "lupus" in the problem list. The algorithms with the highest PPV were each internally validated using a random set of 100 individuals from the remaining 5,759 subjects. The algorithm with the highest PPV at 95% in the training set and 91% in the validation set was 3 or more counts of the SLE ICD-9 code, ANA positive (≥1:40), and ever use of both disease-modifying antirheumatic drugs and steroids, while excluding individuals with systemic sclerosis and dermatomyositis ICD-9 codes. We developed and validated the first EHR algorithm that incorporates laboratory values and medications with the SLE ICD-9 code to identify patients with SLE accurately. © 2016, American College of Rheumatology.
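The highest-PPV rule summarized above translates into a short predicate. The field names are invented, and the exclusion codes (710.1 for systemic sclerosis, 710.3 for dermatomyositis) are the standard ICD-9 codes for those diagnoses rather than values quoted in the abstract:

```python
def flag_sle(patient):
    """Sketch of the abstract's best algorithm: >=3 counts of ICD-9 710.0,
    ANA titer >= 1:40, ever-use of both a DMARD and a steroid, excluding
    systemic sclerosis (710.1) and dermatomyositis (710.3) codes."""
    codes = patient["icd9_codes"]
    excluded = {"710.1", "710.3"} & set(codes)
    return (
        codes.count("710.0") >= 3
        and patient["ana_titer"] >= 40   # a titer of 1:40 stored as 40
        and patient["ever_dmard"]
        and patient["ever_steroid"]
        and not excluded
    )
```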

  8. Accuracy of genomic breeding values in multibreed beef cattle populations derived from deregressed breeding values and phenotypes.

    PubMed

    Weber, K L; Thallman, R M; Keele, J W; Snelling, W M; Bennett, G L; Smith, T P L; McDaneld, T G; Allan, M F; Van Eenennaam, A L; Kuehn, L A

    2012-12-01

    Genomic selection involves the assessment of genetic merit through prediction equations that allocate genetic variation with dense marker genotypes. It has the potential to provide accurate breeding values for selection candidates at an early age and to facilitate selection for expensive or difficult-to-measure traits. Accurate across-breed prediction would allow genomic selection to be applied on a larger scale in the beef industry, but the limited availability of large populations for the development of prediction equations has so far kept researchers from providing genomic predictions that are accurate across multiple beef breeds. In this study, the accuracy of genomic predictions for 6 growth and carcass traits was derived and evaluated using 2 multibreed beef cattle populations: 3,358 crossbred cattle of the U.S. Meat Animal Research Center Germplasm Evaluation Program (USMARC_GPE) and 1,834 high-accuracy bull sires of the 2,000 Bull Project (2000_BULL) representing influential breeds in the U.S. beef cattle industry. The 2000_BULL EPD were deregressed, scaled, and weighted to adjust for between- and within-breed heterogeneous variance before use in training and validation. Molecular breeding values (MBV) trained in each multibreed population and in Angus and Hereford purebred sires of 2000_BULL were derived using the GenSel BayesCπ function (Fernando and Garrick, 2009) and cross-validated. Less than 10% of large-effect loci were shared between prediction equations trained on USMARC_GPE relative to 2000_BULL, although locus effects were moderately to highly correlated for most traits and the traits themselves were highly correlated between populations. MBV prediction accuracy was low and variable between populations. For growth traits, MBV accounted for up to 18% of genetic variation in a pooled, multibreed analysis and up to 28% in single breeds. For carcass traits, MBV explained up to 8% of genetic variation in a pooled, multibreed analysis and up to 42% in

  9. Inappropriate Fiddling with Statistical Analyses to Obtain a Desirable P-value: Tests to Detect its Presence in Published Literature

    PubMed Central

    Gadbury, Gary L.; Allison, David B.

    2012-01-01

    Much has been written regarding p-values below certain thresholds (most notably 0.05) denoting statistical significance and the tendency of such p-values to be more readily publishable in peer-reviewed journals. Intuition suggests that there may be a tendency to manipulate statistical analyses to push a “near significant p-value” to a level that is considered significant. This article presents a method for detecting the presence of such manipulation (herein called “fiddling”) in a distribution of p-values from independent studies. Simulations are used to illustrate the properties of the method. The results suggest that the method has low type I error and that power approaches acceptable levels as the number of p-values being studied approaches 1000. PMID:23056287
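The intuition behind the paper's test can be illustrated with a toy check (this is not the authors' method): under a smooth p-value distribution, counts in narrow windows just below and just above 0.05 should be comparable, so a large excess just below is suspicious:

```python
def below_above_counts(p_values, threshold=0.05, width=0.01):
    """Count p-values in [threshold-width, threshold) vs [threshold, threshold+width)."""
    below = sum(1 for p in p_values if threshold - width <= p < threshold)
    above = sum(1 for p in p_values if threshold <= p < threshold + width)
    return below, above

# Invented p-values for illustration: a pile-up just under 0.05.
pvals = [0.041, 0.044, 0.048, 0.049, 0.052, 0.12, 0.30]
below, above = below_above_counts(pvals)
```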

  10. Is 50 Hz high enough ECG sampling frequency for accurate HRV analysis?

    PubMed

    Mahdiani, Shadi; Jeyhani, Vala; Peltokangas, Mikko; Vehkaoja, Antti

    2015-01-01

    With the worldwide growth of mobile wireless technologies, healthcare services can be provided anytime and anywhere. The use of wearable wireless physiological monitoring systems has increased extensively during the last decade. These mobile devices can continuously measure, e.g., heart activity and wirelessly transfer the data to the patient's mobile phone. One significant restriction for these devices is energy consumption, which favors low sampling rates. This article investigates the lowest adequate sampling frequency of the ECG signal for achieving sufficiently accurate time-domain heart rate variability (HRV) parameters. For this purpose, ECG signals originally measured at a high 5 kHz sampling rate were down-sampled to simulate measurement at lower sampling rates. Down-sampling loses information and decreases temporal accuracy, which was then partially restored by interpolating the signals back to their original sampling rate. The HRV parameters obtained from the ECG signals at lower sampling rates were compared. The results show that even when the sampling rate of the ECG signal is as low as 50 Hz, the HRV parameters remain accurate to within a reasonable error.
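The temporal-accuracy loss the study quantifies can be illustrated with stdlib Python (the peak times are invented): snapping R-peak times to a 50 Hz grid perturbs each RR interval by at most one sample period, i.e. 20 ms:

```python
def quantize(times_s, fs_hz):
    """Snap event times (s) to the nearest sample of an fs_hz sampling grid."""
    return [round(t * fs_hz) / fs_hz for t in times_s]

def rr_intervals(peaks_s):
    """Successive differences between R-peak times."""
    return [b - a for a, b in zip(peaks_s, peaks_s[1:])]

true_peaks = [0.802, 1.613, 2.431, 3.237]       # hypothetical R-peak times (s)
rr_true = rr_intervals(true_peaks)
rr_50 = rr_intervals(quantize(true_peaks, 50))  # 50 Hz -> 20 ms resolution
max_err = max(abs(a - b) for a, b in zip(rr_true, rr_50))  # bounded by 0.02 s
```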

  11. Accurate approximation of in-ecliptic trajectories for E-sail with constant pitch angle

    NASA Astrophysics Data System (ADS)

    Huo, Mingying; Mengali, Giovanni; Quarta, Alessandro A.

    2018-05-01

    Propellantless continuous-thrust propulsion systems, such as electric solar wind sails, may be successfully used for new space missions, especially those requiring high-energy orbit transfers. When the mass-to-thrust ratio is sufficiently large, the spacecraft trajectory is characterized by long flight times with a number of revolutions around the Sun. The corresponding mission analysis, especially when addressed within an optimal context, requires a significant amount of simulation effort. Analytical trajectories are therefore useful aids in a preliminary phase of mission design, even though exact solutions are very difficult to obtain. The aim of this paper is to present an accurate analytical approximation of the spacecraft trajectory generated by an electric solar wind sail with a constant pitch angle, using the latest mathematical model of the thrust vector. Assuming a heliocentric circular parking orbit and a two-dimensional scenario, the simulation results show that the proposed equations can accurately describe the actual spacecraft trajectory over a long time interval when the propulsive acceleration magnitude is sufficiently small.

  12. Quantitative LC-MS of polymers: determining accurate molecular weight distributions by combined size exclusion chromatography and electrospray mass spectrometry with maximum entropy data processing.

    PubMed

    Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher

    2008-09-15

    We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.

  13. An implicit higher-order spatially accurate scheme for solving time dependent flows on unstructured meshes

    NASA Astrophysics Data System (ADS)

    Tomaro, Robert F.

    1998-07-01

    -order spatially accurate code. The new solutions were compared with those obtained using the second-order spatially accurate scheme. Finally, the increased efficiency of using an implicit solution algorithm in a production Computational Fluid Dynamics flow solver was demonstrated for steady and unsteady flows. A third- and fourth-order spatially accurate scheme has been implemented creating a basis for a state-of-the-art aerodynamic analysis tool.

  14. An Accurate Method for Measuring Airplane-Borne Conformal Antenna's Radar Cross Section

    NASA Astrophysics Data System (ADS)

    Guo, Shuxia; Zhang, Lei; Wang, Yafeng; Hu, Chufeng

    2016-09-01

    The airplane-borne conformal antenna attaches tightly to the airplane skin, so conventional measurement methods cannot determine its contribution to the radar cross section (RCS). This paper uses 2D microwave imaging to isolate and extract the reflectivity distribution of the airplane-borne conformal antenna. The 2D spatial spectrum of the conformal antenna is obtained from the 2D spatial image by a wave-spectral transform. After interpolation from the rectangular coordinate domain to the polar coordinate domain, spectral-domain data describing the variation of the antenna's scattering with frequency and angle are obtained. The experimental results show that the proposed measurement method greatly enhances the accuracy of the airplane-borne conformal antenna's RCS measurement, essentially eliminates the influence of the airplane skin, and more accurately reveals the antenna's RCS scattering properties.

  15. Flavonoid values for USDA survey foods and beverages 2007-2010

    USDA-ARS?s Scientific Manuscript database

    Comprehensive databases of the flavonoid content of foods are needed to more accurately estimate dietary intakes of these compounds. The Flavonoid Values for Survey Foods and Beverages 2007-2010 allows estimation of flavonoid intakes based on all foods and beverages reported in the national survey,...

  16. Fast and accurate automated cell boundary determination for fluorescence microscopy

    NASA Astrophysics Data System (ADS)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet the complexity of cell conformations presents a major barrier to effective determination of cell boundaries, and introduces measurement error that propagates through subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user interaction, prolonged computation time, and specialized training cannot adequately support high-content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that rapidly obtains accurate cell boundaries from digital fluorescence images in an automated format. Hence, this new method has broad applicability in biotechnology.

  17. Conversion of calibration curves for accurate estimation of molecular weight averages and distributions of polyether polyols by conventional size exclusion chromatography.

    PubMed

    Xu, Xiuqing; Yang, Xiuhan; Martin, Steven J; Mes, Edwin; Chen, Junlan; Meunier, David M

    2018-08-17

    Accurate measurement of molecular weight averages (M¯ n, M¯ w, M¯ z ) and molecular weight distributions (MWD) of polyether polyols by conventional SEC (size exclusion chromatography) is not as straightforward as it would appear. Conventional calibration with polystyrene (PS) standards can only provide PS-apparent molecular weights, which do not provide accurate estimates of polyol molecular weights. Using polyethylene oxide/polyethylene glycol (PEO/PEG) for molecular weight calibration could improve the accuracy, but the retention behavior of PEO/PEG is not stable in THF-based (tetrahydrofuran) SEC systems. In this work, two approaches for calibration curve conversion with narrow PS and polyol molecular weight standards were developed. Equations to convert PS-apparent molecular weight to polyol-apparent molecular weight were developed using both a rigorous mathematical analysis and a graphical plot regression method. The conversion equations obtained by the two approaches were in good agreement. Factors influencing the conversion equation were investigated. It was concluded that separation conditions such as column batch and operating temperature did not have a significant impact on the conversion coefficients, and a universal conversion equation could be obtained. With this conversion equation, more accurate estimates of molecular weight averages and MWDs for polyether polyols can be achieved from conventional PS-THF SEC calibration. Moreover, no additional experimentation is required to convert historical PS-equivalent data to reasonably accurate molecular weight results. Copyright © 2018. Published by Elsevier B.V.
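
    The calibration-conversion idea can be sketched numerically. Assuming both calibrations are linear in log10(M) versus retention volume (the coefficients below are invented for illustration, not the paper's fitted values), eliminating the shared retention volume yields a linear map between PS-apparent and polyol-apparent log molecular weights:

    ```python
    import numpy as np

    # Hypothetical linear SEC calibrations: log10(M) = intercept - slope * V_retention.
    ps_calib     = {"intercept": 10.0, "slope": 0.45}  # polystyrene standards
    polyol_calib = {"intercept": 9.2,  "slope": 0.40}  # narrow polyol standards

    def ps_to_polyol(m_ps):
        """Convert a PS-apparent molecular weight to a polyol-apparent one
        by eliminating the shared retention volume between the calibrations."""
        v = (ps_calib["intercept"] - np.log10(m_ps)) / ps_calib["slope"]
        return 10 ** (polyol_calib["intercept"] - polyol_calib["slope"] * v)

    # The composite map is itself linear in log M: log M_polyol = A + B * log M_ps.
    B = polyol_calib["slope"] / ps_calib["slope"]
    A = polyol_calib["intercept"] - B * ps_calib["intercept"]

    m_ps = 5000.0
    print(ps_to_polyol(m_ps))               # direct conversion
    print(10 ** (A + B * np.log10(m_ps)))   # same value via the derived linear form
    ```

    The derived (A, B) pair plays the role of the paper's "conversion equation": once known, it converts any historical PS-apparent result without rerunning the separation.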

  18. On axiomatizations of the Shapley value for bi-cooperative games

    NASA Astrophysics Data System (ADS)

    Meirong, Wu; Shaochen, Cao; Huazhen, Zhu

    2016-06-01

    In bi-cooperative games, each participant has three decisions available, which allows such games to depict real-life situations accurately. This paper studies the Shapley value of bi-cooperative games and completes its unique characterization. Axioms similar to those of classical cooperative games can be used to characterize the Shapley value of bi-cooperative games as well. Meanwhile, the paper introduces a structural axiom and a zero-excluded axiom in place of the efficiency axiom of classical cooperative games.

  19. Rapid and Accurate Diagnosis Based on Real-Time PCR Cycle Threshold Value for the Identification of Campylobacter jejuni, astA Gene-Positive Escherichia coli, and eae Gene-Positive E. coli.

    PubMed

    Kawase, Jun; Asakura, Hiroshi; Kurosaki, Morito; Oshiro, Hitoshi; Etoh, Yoshiki; Ikeda, Tetsuya; Watahiki, Masanori; Kameyama, Mitsuhiro; Hayashi, Fumi; Kawakami, Yuta; Murakami, Yoshiko; Tsunomori, Yoshie

    2018-01-23

    We previously developed a multiplex real-time PCR assay (Rapid Foodborne Bacterial Screening 24 ver.5 [RFBS24 ver.5]) for simultaneous detection of 24 foodborne bacterial targets. Here, to overcome the discrepancy between the results of RFBS24 ver.5 and bacterial culture methods (BC), we analyzed 246 human clinical samples from 49 gastroenteritis outbreaks using RFBS24 ver.5 and evaluated the correlation between the cycle threshold (CT) value of RFBS24 ver.5 and the BC results. The results showed that RFBS24 ver.5 was more sensitive than BC for Campylobacter jejuni and Escherichia coli harboring astA or eae, with positive predictive values (PPVs) of 45.5-87.0% and kappa coefficients (KCs) of 0.60-0.92, respectively. The CTs were significantly different between BC-positive and -negative samples (p < 0.01). All RFBS24 ver.5-positive samples were BC-positive below the lower 95% or 99% confidence interval (CI) limit for the CT of the BC-negative samples. We set the 95% or 99% CI lower limit as the determination CT (d-CT) to discriminate assured BC-positive results (d-CTs: 27.42-30.86), and subsequently the PPVs (94.7%-100.0%) and KCs (0.89-0.95) of the 3 targets increased. Taken together, we concluded that a d-CT-based approach would be a valuable tool for rapid and accurate diagnosis using the RFBS24 ver.5 system.

  20. Methods to achieve accurate projection of regional and global raster databases

    USGS Publications Warehouse

    Usery, E. Lynn; Seong, Jeong Chang; Steinwand, Dan

    2002-01-01

    Modeling regional and global activities of climatic and human-induced change requires accurate geographic data from which we can develop mathematical and statistical tabulations of attributes and properties of the environment. Many of these models depend on data formatted as raster cells or matrices of pixel values. Recently, it has been demonstrated that regional and global raster datasets are subject to significant error from mathematical projection and that these errors are of such magnitude that model results may be jeopardized (Steinwand, et al., 1995; Yang, et al., 1996; Usery and Seong, 2001; Seong and Usery, 2001). There is a need to develop methods of projection that maintain the accuracy of these datasets to support regional and global analyses and modeling.

  1. A generalized operational formula based on total electronic densities to obtain 3D pictures of the dual descriptor to reveal nucleophilic and electrophilic sites accurately on closed-shell molecules.

    PubMed

    Martínez-Araya, Jorge I

    2016-09-30

    By means of conceptual density functional theory, the so-called dual descriptor (DD) has been adapted for use in any closed-shell molecule that presents degeneracy in its frontier molecular orbitals. The latter is of paramount importance because a correct description of local reactivity allows prediction of the most favorable sites on a molecule for nucleophilic or electrophilic attack; on the contrary, an incomplete description of local reactivity might have serious consequences, particularly for experimental chemists who need insight into the reactivity of chemical reagents before using them in synthesis to obtain a new compound. In the present work, the old approach based only on electronic densities of frontier molecular orbitals is replaced by a more accurate procedure that uses total electronic densities, thus keeping consistency with the essential principle of DFT, in which the electronic density is the fundamental variable rather than the molecular orbitals. As a result of the present work, the DD is able to properly describe local reactivities solely in terms of total electronic densities. To test the proposed operational formula, 12 very common molecules were selected for which the original definition of the DD was not able to describe local reactivities properly. The ethylene molecule was additionally used to test the capability of the proposed operational formula to reveal correct local reactivity even in the absence of degeneracy in frontier molecular orbitals. © 2016 Wiley Periodicals, Inc.
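
    For reference, the generic finite-difference working expression for the dual descriptor in conceptual DFT, written in terms of the total electronic densities of the N−1, N, and N+1 electron systems (this is the standard textbook form, not the degeneracy-adapted operational formula proposed in the paper), is

    ```latex
    \Delta f(\mathbf{r}) \;\approx\; \rho_{N+1}(\mathbf{r}) \;-\; 2\,\rho_{N}(\mathbf{r}) \;+\; \rho_{N-1}(\mathbf{r})
    ```

    where regions with Δf > 0 are favored for nucleophilic attack (electrophilic sites) and regions with Δf < 0 are favored for electrophilic attack (nucleophilic sites).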

  2. School system evaluation by value added analysis under endogeneity.

    PubMed

    Manzi, Jorge; San Martín, Ernesto; Van Bellegem, Sébastien

    2014-01-01

    Value added is a common tool in educational research on effectiveness. It is often modeled as a (prediction of a) random effect in a specific hierarchical linear model. This paper shows that this modeling strategy is not valid when endogeneity is present. Endogeneity stems, for instance, from a correlation between the random effect in the hierarchical model and some of its covariates. This paper shows that this phenomenon is far from exceptional and can even be a generic problem when the covariates contain the prior score attainments, a typical situation in value added modeling. Starting from a general, model-free definition of value added, the paper derives an explicit expression of the value added in an endogenous hierarchical linear Gaussian model. Inference on value added is proposed using an instrumental variable approach. The impact of endogeneity on the value added and the estimated value added is calculated accurately. This is also illustrated on a large data set of individual scores of about 200,000 students in Chile.

  3. Measurement of shot noise in magnetic tunnel junction and its utilization for accurate system calibration

    NASA Astrophysics Data System (ADS)

    Tamaru, S.; Kubota, H.; Yakushiji, K.; Fukushima, A.; Yuasa, S.

    2017-11-01

    This work presents a technique to calibrate the spin torque oscillator (STO) measurement system by utilizing the whiteness of shot noise. The raw shot noise spectrum in a magnetic tunnel junction based STO in the microwave frequency range is obtained by first subtracting the baseline noise, and then excluding the field dependent mag-noise components reflecting the thermally excited spin wave resonances. As the shot noise is guaranteed to be completely white, the total gain of the signal path should be proportional to the shot noise spectrum obtained by the above procedure, which allows for an accurate gain calibration of the system and a quantitative determination of each noise power. The power spectral density of the shot noise as a function of bias voltage obtained by this technique was compared with a theoretical calculation, which showed excellent agreement when the Fano factor was assumed to be 0.99.
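
    The calibration logic can be sketched with synthetic data: because shot noise is white, a measured shot-noise spectrum is proportional to the signal-path gain, so normalising it recovers the gain shape. Everything below (the gain model, the flat noise level, and the Gaussian "STO peak") is invented purely for illustration:

    ```python
    import numpy as np

    f = np.linspace(0.1e9, 10e9, 256)             # frequency grid, Hz (illustrative)

    # Unknown frequency-dependent gain of the signal path (to be recovered).
    true_gain = 1.0 + 0.5 * np.sin(f / 1e9)

    S_shot = 2.0                                   # flat (white) shot-noise PSD
    measured_shot = true_gain * S_shot             # shot noise seen after the signal path

    # Whiteness of the shot noise means the measured spectrum is proportional
    # to the gain itself; normalising recovers the gain shape.
    est_gain = measured_shot / measured_shot.mean() * true_gain.mean()

    # Any other measured spectrum can now be gain-corrected:
    measured_sto = true_gain * (1.0 + np.exp(-((f - 5e9) / 0.2e9) ** 2))
    calibrated = measured_sto / est_gain           # flat baseline plus the STO peak
    ```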

  4. An accurate and efficient acoustic eigensolver based on a fast multipole BEM and a contour integral method

    NASA Astrophysics Data System (ADS)

    Zheng, Chang-Jun; Gao, Hai-Feng; Du, Lei; Chen, Hai-Bo; Zhang, Chuanzeng

    2016-01-01

    An accurate numerical solver is developed in this paper for eigenproblems governed by the Helmholtz equation and formulated through the boundary element method. A contour integral method is used to convert the nonlinear eigenproblem into an ordinary eigenproblem, so that eigenvalues can be extracted accurately by solving a set of standard boundary element systems of equations. In order to accelerate the solution procedure, the parameters affecting the accuracy and efficiency of the method are studied and two contour paths are compared. Moreover, a wideband fast multipole method is implemented with a block IDR(s) solver to reduce the overall solution cost of the boundary element systems of equations with multiple right-hand sides. The Burton-Miller formulation is employed to identify the fictitious eigenfrequencies of the interior acoustic problems with multiply connected domains. The actual effect of the Burton-Miller formulation on tackling the fictitious eigenfrequency problem is investigated and the optimal choice of the coupling parameter as α = i/k is confirmed through exterior sphere examples. Furthermore, the numerical eigenvalues obtained by the developed method are compared with the results obtained by the finite element method to show the accuracy and efficiency of the developed method.

  5. A new numerical method for inverse Laplace transforms used to obtain gluon distributions from the proton structure function

    NASA Astrophysics Data System (ADS)

    Block, Martin M.; Durand, Loyal

    2011-11-01

    We recently derived a very accurate and fast new algorithm for numerically inverting the Laplace transforms needed to obtain gluon distributions from the proton structure function F2^{γp}(x, Q^2). We numerically inverted the function g(s), s being the variable in Laplace space, to G(v), where v is the variable in ordinary space. We have since discovered that the algorithm does not work if g(s) → 0 less rapidly than 1/s as s → ∞, e.g., as 1/s^β for 0 < β < 1. In this note, we derive a new numerical algorithm for such cases, which holds for all positive and non-integer negative values of β. The new algorithm is exact if the original function G(v) is given by the product of a power v^{β-1} and a polynomial in v. We test the algorithm numerically for very small positive β, β = 10^{-6}, obtaining numerical results that imitate the Dirac delta function δ(v). We also devolve the published MSTW2008LO gluon distribution at virtuality Q^2 = 5 GeV^2 down to the lower virtuality Q^2 = 1.69 GeV^2. For devolution, β is negative, giving rise to inverse Laplace transforms that are distributions and not proper functions. This requires us to introduce the concept of Hadamard Finite Part integrals, which we discuss in detail.

  6. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations.

    PubMed

    Martínez-Romero, Marcos; O'Connor, Martin J; Shankar, Ravi D; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L; Gevaert, Olivier; Graybeal, John; Musen, Mark A

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository.

  7. Fast and Accurate Metadata Authoring Using Ontology-Based Recommendations

    PubMed Central

    Martínez-Romero, Marcos; O’Connor, Martin J.; Shankar, Ravi D.; Panahiazar, Maryam; Willrett, Debra; Egyedi, Attila L.; Gevaert, Olivier; Graybeal, John; Musen, Mark A.

    2017-01-01

    In biomedicine, high-quality metadata are crucial for finding experimental datasets, for understanding how experiments were performed, and for reproducing those experiments. Despite the recent focus on metadata, the quality of metadata available in public repositories continues to be extremely poor. A key difficulty is that the typical metadata acquisition process is time-consuming and error prone, with weak or nonexistent support for linking metadata to ontologies. There is a pressing need for methods and tools to speed up the metadata acquisition process and to increase the quality of metadata that are entered. In this paper, we describe a methodology and set of associated tools that we developed to address this challenge. A core component of this approach is a value recommendation framework that uses analysis of previously entered metadata and ontology-based metadata specifications to help users rapidly and accurately enter their metadata. We performed an initial evaluation of this approach using metadata from a public metadata repository. PMID:29854196

  8. Coplanar electrode microfluidic chip enabling accurate sheathless impedance cytometry.

    PubMed

    De Ninno, Adele; Errico, Vito; Bertani, Francesca Romana; Businaro, Luca; Bisegna, Paolo; Caselli, Federica

    2017-03-14

    Microfluidic impedance cytometry offers a simple non-invasive method for single-cell analysis. Coplanar electrode chips are especially attractive due to ease of fabrication, yielding miniaturized, reproducible, and ultimately low-cost devices. However, their accuracy is challenged by the dependence of the measured signal on the particle trajectory within the interrogation volume, which manifests itself as an error in the estimated particle size unless some kind of focusing system is used. In this paper, we present an original five-electrode coplanar chip enabling accurate particle sizing without the need for focusing. The chip layout is designed to provide a peculiar signal shape from which a new metric correlating with particle trajectory can be extracted. This metric is exploited to correct the estimated size of polystyrene beads of 5.2, 6 and 7 μm nominal diameter, reaching coefficients of variation lower than the manufacturers' quoted values. The potential impact of the proposed device in the field of life sciences is demonstrated with an application to Saccharomyces cerevisiae yeast.

  9. Accurate reactions open up the way for more cooperative societies

    NASA Astrophysics Data System (ADS)

    Vukov, Jeromos

    2014-09-01

    We consider a prisoner's dilemma model where the interaction neighborhood is defined by a square lattice. Players are equipped with basic cognitive abilities such as being able to distinguish their partners, remember their actions, and react to their strategy. By means of their short-term memory, they can remember not only the last action of their partner but the way they reacted to it themselves. This additional accuracy in the memory enables the handling of different interaction patterns in a more appropriate way and this results in a cooperative community with a strikingly high cooperation level for any temptation value. However, the more developed cognitive abilities can only be effective if the copying process of the strategies is accurate enough. The excessive extent of faulty decisions can deal a fatal blow to the possibility of stable cooperative relations.

  10. Analysis of an Internet Community about Pneumothorax and the Importance of Accurate Information about the Disease.

    PubMed

    Kim, Bong Jun; Lee, Sungsoo

    2018-04-01

    The huge improvements in the speed of data transmission and the increasing amount of data available as the Internet has expanded have made it easy to obtain information about any disease. Since pneumothorax frequently occurs in young adolescents, patients often search the Internet for information on pneumothorax. This study analyzed an Internet community for exchanging information on pneumothorax, with an emphasis on the importance of accurate information and doctors' role in providing such information. This study assessed 599,178 visitors to the Internet community from June 2008 to April 2017. There was an average of 190 visitors, 2.2 posts, and 4.5 replies per day. A total of 6,513 posts were made, and 63.3% of them included questions about the disease. The visitors mostly searched for terms such as 'pneumothorax,' 'recurrent pneumothorax,' 'pneumothorax operation,' and 'obtaining a medical certification of having been diagnosed with pneumothorax.' However, 22% of the pneumothorax-related posts by visitors contained inaccurate information. Internet communities can be an important source of information. However, incorrect information about a disease can be harmful for patients. We, as doctors, should try to provide more in-depth information about diseases to patients and to disseminate accurate information about diseases in Internet communities.

  11. Fractional labelmaps for computing accurate dose volume histograms

    NASA Astrophysics Data System (ADS)

    Sunderland, Kyle; Pinter, Csaba; Lasso, Andras; Fichtinger, Gabor

    2017-03-01

    PURPOSE: In radiation therapy treatment planning systems, structures are represented as parallel 2D contours. For treatment planning algorithms, structures must be converted into labelmap (i.e. 3D image denoting structure inside/outside) representations. This is often done by triangulating a surface from the contours, which is then converted into a binary labelmap. This surface-to-binary-labelmap conversion can cause large errors in small structures. Binary labelmaps are often represented using one byte per voxel, meaning a large amount of memory is unused. Our goal is to develop a fractional labelmap representation containing non-binary values, allowing more information to be stored in the same amount of memory. METHODS: We implemented an algorithm in 3D Slicer which converts surfaces to fractional labelmaps by creating 216 binary labelmaps, changing the labelmap origin on each iteration. The binary labelmap values are summed to create the fractional labelmap. In addition, an algorithm is implemented in the SlicerRT toolkit that calculates dose volume histograms (DVH) using fractional labelmaps. RESULTS: We found that with manually segmented RANDO head and neck structures, fractional labelmaps represented structure volume up to 19.07% (average 6.81%) more accurately than binary labelmaps, while occupying the same amount of memory. When compared to baseline DVH from treatment planning software, DVH from fractional labelmaps had agreement acceptance percent (1% ΔD, 1% ΔV) up to 57.46% higher (average 4.33%) than DVH from binary labelmaps. CONCLUSION: Fractional labelmaps promise to be an effective method for structure representation, allowing considerably more information to be stored in the same amount of memory.
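
    The origin-shifting idea can be sketched in 2D (a hypothetical disk on a pixel grid; the 3D analogue with 6 shifts per axis gives the 6 × 6 × 6 = 216 binary labelmaps described in the record):

    ```python
    import numpy as np

    def fractional_labelmap_2d(center, radius, shape, n=6):
        """Approximate fractional (partial-volume) occupancy of a disk on a
        pixel grid by summing n*n binary rasterizations whose grid origin is
        shifted by sub-pixel offsets, then dividing by the number of maps."""
        acc = np.zeros(shape, dtype=float)
        offsets = (np.arange(n) + 0.5) / n - 0.5      # sub-pixel origin shifts
        yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
        for dy in offsets:
            for dx in offsets:
                inside = (yy + dy - center[0]) ** 2 + (xx + dx - center[1]) ** 2 <= radius ** 2
                acc += inside                          # one binary labelmap per shift
        return acc / (n * n)                           # fractional values in [0, 1]

    frac = fractional_labelmap_2d(center=(16, 16), radius=6.3, shape=(32, 32))
    print(frac.sum())   # approximates the disk area pi * r^2, unlike a single binary map
    ```

    Boundary pixels get intermediate values, which is why the fractional representation recovers small-structure volume more faithfully than a single binary rasterization at the same resolution.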

  12. Patterns of Wildlife Value Orientations

    Treesearch

    Harry C. Zinn; Michael J. Manfredo; Susan C. Barro

    2002-01-01

    Public value orientations toward wildlife may be growing less utilitarian and more protectionist. To better understand one aspect of this trend, we investigated patterns of wildlife value orientations within families. Using a mail survey, we sampled Pennsylvania and Colorado hunting license holders 50 or older, obtaining a 54% response rate (n = 599). Males (94% of...

  13. Accurate mass and velocity functions of dark matter haloes

    NASA Astrophysics Data System (ADS)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ˜1011M⊙ with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ˜515 Gpc3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias, and the halo mass function. We obtain a very accurate (<2 per cent level) model of the distinct halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function up to redshift z < 2.3 to push for the development of halo occupation distribution using Vmax. The data and the analysis code are made publicly available in the Skies and Universes database.

  14. Incidence of Artifacts and Deviating Values in Research Data Obtained from an Anesthesia Information Management System in Children.

    PubMed

    Hoorweg, Anne-Lee J; Pasma, Wietze; van Wolfswinkel, Leo; de Graaff, Jurgen C

    2018-02-01

    Vital parameter data collected in anesthesia information management systems are often used for clinical research. The validity of this type of research is dependent on the number of artifacts. In this prospective observational cohort study, the incidence of artifacts in anesthesia information management system data was investigated in children undergoing anesthesia for noncardiac procedures. Secondary outcomes included the incidence of artifacts among deviating and nondeviating values, among the anesthesia phases, and among different anesthetic techniques. We included 136 anesthetics representing 10,236 min of anesthesia time. The incidence of artifacts was 0.5% for heart rate (95% CI: 0.4 to 0.7%), 1.3% for oxygen saturation (1.1 to 1.5%), 7.5% for end-tidal carbon dioxide (6.9 to 8.0%), 5.0% for noninvasive blood pressure (4.0 to 6.0%), and 7.3% for invasive blood pressure (5.9 to 8.8%). The incidence of artifacts among deviating values was 3.1% for heart rate (2.1 to 4.4%), 10.8% for oxygen saturation (7.6 to 14.8%), 14.1% for end-tidal carbon dioxide (13.0 to 15.2%), 14.4% for noninvasive blood pressure (10.3 to 19.4%), and 38.4% for invasive blood pressure (30.3 to 47.1%). Not all values in anesthesia information management systems are valid. The incidence of artifacts stored in the present pediatric anesthesia practice was low for heart rate and oxygen saturation, whereas noninvasive and invasive blood pressure and end-tidal carbon dioxide had higher artifact incidences. Deviating values are more often artifacts than values in a normal range, and artifacts are associated with the phase of anesthesia and anesthetic technique. Development of (automatic) data validation systems or solutions to deal with artifacts in data is warranted.

  15. Recommended Values of the Fundamental Physical Constants: A Status Report

    PubMed Central

    Taylor, Barry N.; Cohen, E. Richard

    1990-01-01

    We summarize the principal advances made in the fundamental physical constants field since the completion of the 1986 CODATA least-squares adjustment of the constants and discuss their implications for both the 1986 set of recommended values and the next least-squares adjustment. In general, the new results lead to values of the constants with uncertainties 5 to 7 times smaller than the uncertainties assigned the 1986 values. However, the changes in the values themselves are less than twice the 1986 assigned one-standard-deviation uncertainties and thus are not highly significant. Although much new data has become available since 1986, three new results dominate the analysis: a value of the Planck constant obtained from a realization of the watt; a value of the fine-structure constant obtained from the magnetic moment anomaly of the electron; and a value of the molar gas constant obtained from the speed of sound in argon. Because of their dominant role in determining the values and uncertainties of many of the constants, it is highly desirable that additional results of comparable uncertainty that corroborate these three data items be obtained before the next adjustment is carried out. Until then, the 1986 CODATA set of recommended values will remain the set of choice. PMID:28179787

  16. Accurate Mars Express orbits to improve the determination of the mass and ephemeris of the Martian moons

    NASA Astrophysics Data System (ADS)

    Rosenblatt, P.; Lainey, V.; Le Maistre, S.; Marty, J. C.; Dehant, V.; Pätzold, M.; Van Hoolst, T.; Häusler, B.

    2008-05-01

    The determination of the ephemeris of the Martian moons has benefited from observations of their plane-of-sky positions derived from images taken by cameras onboard spacecraft orbiting Mars. Images obtained by the Super Resolution Camera (SRC) onboard Mars Express (MEX) have been used to derive moon positions relative to Mars on the basis of a fit of a complete dynamical model of their motion around Mars. Since these positions are computed from the position of the spacecraft when the images are taken, the spacecraft position needs to be known as accurately as possible. An accurate MEX orbit is obtained by fitting two years of tracking data of the Mars Express Radio Science (MaRS) experiment onboard MEX. The average accuracy of the orbits has been estimated to be around 20-25 m. From these orbits, we have re-derived the positions of Phobos and Deimos at the epoch of the SRC observations and compared them with the positions derived by using the MEX orbits provided by the ESOC navigation team. After fitting the orbital model of Phobos and Deimos, the gain in precision in the Phobos position is roughly 30 m, corresponding to the estimated gain of accuracy of the MEX orbits. A new solution of the GM of the Martian moons has also been obtained from the accurate MEX orbits, which is consistent with previous solutions and, for Phobos, is more precise than the solution from the Mars Global Surveyor (MGS) and Mars Odyssey (ODY) tracking data. It will be further improved with data from closer MEX-Phobos encounters (at a distance less than 300 km). This study also demonstrates the advantage of combining observations of the moon positions from a spacecraft and from the Earth to assess the real accuracy of the spacecraft orbit. In turn, the natural satellite ephemerides can be improved and contribute to a better knowledge of the origin and evolution of the Martian moons.

  17. Accurate Estimate of Some Propagation Characteristics for the First Higher Order Mode in Graded Index Fiber with Simple Analytic Chebyshev Method

    NASA Astrophysics Data System (ADS)

    Dutta, Ivy; Chowdhury, Anirban Roy; Kumbhakar, Dharmadas

    2013-03-01

    Using a Chebyshev power-series approach, accurate descriptions of the first higher-order (LP11) mode of graded-index fibers with three different profile shape functions are presented in this paper and applied to predict their propagation characteristics. These characteristics include the fractional power guided through the core, the excitation efficiency, and the Petermann I and II spot sizes, together with their approximate analytic formulations. We show that whereas two- and three-point Chebyshev approximations of the LP11 mode give fairly accurate results, values calculated with four Chebyshev points match available exact numerical results excellently.

  18. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
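
    The core construction can be sketched with NumPy: for homogeneous constraints C x = 0, the right singular vectors associated with zero singular values form an orthonormal basis N of the null space, and x = N q parameterizes the constrained coordinates. The constraint matrix below is a hypothetical example, not one from the paper:

    ```python
    import numpy as np

    # Hypothetical linear constraint matrix C acting on coordinates x (C @ x = 0),
    # e.g. tying the first three coordinates together.
    C = np.array([[1.0, -1.0,  0.0, 0.0],
                  [0.0,  1.0, -1.0, 0.0]])

    U, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-12 * s.max()))
    N = Vt[rank:].T          # orthonormal null-space basis: columns span {x : C @ x = 0}

    # Independent coordinates q parameterize the constrained motion via x = N @ q,
    # and C @ N = 0 guarantees the constraints hold identically for any q.
    q = np.array([2.0, -1.0])
    x = N @ q
    print(np.allclose(C @ x, 0.0))   # True
    ```

    Substituting x = N q into the equations of motion eliminates the dependent coordinates without the pivoting choices that Gaussian elimination requires.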

  19. A high order accurate finite element algorithm for high Reynolds number flow prediction

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy, and the convergence rate with discretization refinement, are quantified in several error norms through a systematic study of numerical solutions to several nonlinear parabolic equations and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using linear, quadratic and cubic basis functions. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of the truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at non-modest Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is shown to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated, yielding a substantial reduction in both computer storage and execution CPU requirements while retaining solution accuracy.

  20. Fusing Continuous-Valued Medical Labels Using a Bayesian Model.

    PubMed

    Zhu, Tingting; Dunkley, Nic; Behar, Joachim; Clifton, David A; Clifford, Gari D

    2015-12-01

    With the rapid increase in the volume of time series medical data available through wearable devices, there is a need to employ automated algorithms to label data. Examples of labels include interventions, changes in activity (e.g. sleep) and changes in physiology (e.g. arrhythmias). However, automated algorithms tend to be unreliable, resulting in lower-quality care. Expert annotations are scarce, expensive, and prone to significant inter- and intra-observer variance. To address these problems, a Bayesian Continuous-valued Label Aggregator (BCLA) is proposed to provide a reliable estimate of the aggregated label while accurately inferring the precision and bias of each algorithm. The BCLA was applied to QT interval (a pro-arrhythmic indicator) estimation from the electrocardiogram using labels from the 2006 PhysioNet/Computing in Cardiology Challenge database. It was compared to the mean, the median, and a previously proposed Expectation Maximization (EM) label aggregation approach. While accurately predicting each labelling algorithm's bias and precision, the root-mean-square error of the BCLA was 11.78 ± 0.63 ms, significantly outperforming the best Challenge entry (15.37 ± 2.13 ms) as well as the EM, mean, and median voting strategies (14.76 ± 0.52, 17.61 ± 0.55, and 14.43 ± 0.57 ms respectively, with p < 0.0001). The BCLA could therefore provide accurate estimation for medical continuous-valued label tasks in an unsupervised manner, even when the ground truth is not available.
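    The core idea behind weighting annotators by their reliability can be illustrated with a plain inverse-variance fusion (a minimal sketch, not the BCLA itself: the Bayesian model additionally learns each algorithm's bias and precision from the data, whereas the variances and QT values below are hypothetical):

```python
import numpy as np

# Minimal inverse-variance (precision-weighted) fusion of continuous labels.
# A full Bayesian aggregator would also estimate each annotator's bias and
# precision; here the per-annotator variances are assumed known.
def fuse(labels, variances):
    """Return the precision-weighted mean of the labels."""
    labels = np.asarray(labels, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    return float(np.sum(weights * labels) / np.sum(weights))

# Three hypothetical QT-interval estimates (ms) with differing reliabilities:
# the noisier middle annotator (variance 100) is down-weighted.
qt = fuse([400.0, 410.0, 395.0], [25.0, 100.0, 25.0])
```

The fused value sits close to the two reliable annotators' estimates; the EM and Bayesian approaches compared in the abstract refine this by estimating the weights themselves.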

  1. Molecular acidity: An accurate description with information-theoretic approach in density functional reactivity theory.

    PubMed

    Cao, Xiaofang; Rong, Chunying; Zhong, Aiguo; Lu, Tian; Liu, Shubin

    2018-01-15

    Molecular acidity is one of the important physicochemical properties of a molecular system, yet its accurate calculation and prediction are still an unresolved problem in the literature. In this work, we propose to make use of the quantities from the information-theoretic (IT) approach in density functional reactivity theory and provide an accurate description of molecular acidity from a completely new perspective. To illustrate our point, five different categories of acidic series, singly and doubly substituted benzoic acids, singly substituted benzenesulfinic acids, benzeneseleninic acids, phenols, and alkyl carboxylic acids, have been thoroughly examined. We show that using IT quantities such as Shannon entropy, Fisher information, Ghosh-Berkowitz-Parr entropy, information gain, Onicescu information energy, and relative Rényi entropy, one is able to simultaneously predict experimental pKa values of these different categories of compounds. Because of the universality of the quantities employed in this work, which are all density dependent, our approach should be general and applicable to other systems as well. © 2017 Wiley Periodicals, Inc.

  2. Intraocular lens power estimation by accurate ray tracing for eyes that underwent previous refractive surgeries

    NASA Astrophysics Data System (ADS)

    Yang, Que; Wang, Shanshan; Wang, Kai; Zhang, Chunyu; Zhang, Lu; Meng, Qingyu; Zhu, Qiudong

    2015-08-01

    For normal eyes without a history of any ocular surgery, traditional equations for calculating intraocular lens (IOL) power, such as SRK-T, Holladay, Haigis, and SRK-II, are all relatively accurate. However, for eyes that underwent refractive surgeries such as LASIK, or eyes diagnosed with keratoconus, these equations may cause significant postoperative refractive error, leading to poor satisfaction after cataract surgery. Although some methods have been proposed to solve this problem, such as the Haigis-L equation[1] or using preoperative data (from before LASIK) to estimate the K value[2], no precise equations are available for these eyes. Here, we introduce a novel intraocular lens power estimation method based on accurate ray tracing with the optical design software ZEMAX. Instead of using a traditional regression formula, we adopt the measured corneal elevation distribution, central corneal thickness, anterior chamber depth, axial length, and estimated effective lens plane as the input parameters. The calculated intraocular lens powers for a patient with keratoconus and a LASIK postoperative patient agreed well with their visual outcomes after cataract surgery.

  3. History and progress on accurate measurements of the Planck constant.

    PubMed

    Steiner, Richard

    2013-01-01

    The measurement of the Planck constant, h, is entering a new phase. The CODATA 2010 recommended value is 6.626 069 57 × 10(-34) J s, but it has been a long road, and the trip is not over yet. Since its discovery as a fundamental physical constant needed to explain various effects in quantum theory, h has become especially important in defining standards for electrical measurements and, soon, for mass determination. Measuring h in the International System of Units (SI) started as experimental attempts merely to prove its existence. Many decades passed while newer experiments measured physical effects that were the influence of h combined with other physical constants: the elementary charge, e, and the Avogadro constant, N(A). As experimental techniques improved, so did the precision of the value of h. When the Josephson and quantum Hall theories led to new electronic devices, and a hundred-year-old experiment, the absolute ampere, was altered into a watt balance, h not only became vital in definitions of the volt and ohm units, but suddenly it could be measured directly and even more accurately. Finally, as measurement uncertainties now approach a few parts in 10(8) from the watt balance experiments and Avogadro determinations, its importance has been linked to a proposed redefinition of the kilogram unit of mass. The path to higher accuracy in measuring the value of h was not always an example of continuous progress. Since new measurements periodically led to changes in its accepted value and the corresponding SI units, it is helpful to see why there were bumps in the road and where the different branch lines of research joined in the effort. Recalling the bumps along this road will hopefully avoid their repetition in the upcoming SI redefinition debates. This paper begins with a brief history of the methods used to measure a combination of fundamental constants, thus indirectly obtaining the Planck constant. The historical path is followed in the section describing how the

  4. Accurate integration over atomic regions bounded by zero-flux surfaces.

    PubMed

    Polestshuk, Pavel M

    2013-01-30

    An approach for integration over a region bounded by a zero-flux surface is described. This approach, based on a surface triangulation technique, is efficiently realized in the newly developed program TWOE. The elaborated method is tested on several atomic properties, including the source function. TWOE results are compared with those produced by well-known existing programs. Absolute errors in the computed atomic properties are shown to range typically from 10(-6) to 10(-5) au. The demonstrative examples prove that the present realization achieves excellent convergence of atomic properties with increasing angular grid size and allows one to obtain highly accurate data even in the most difficult cases. It is believed that the developed program can be a bridgehead that allows atomic partitioning of any desired molecular property to be implemented with high accuracy. Copyright © 2012 Wiley Periodicals, Inc.

  5. Colocalization analysis in fluorescence micrographs: verification of a more accurate calculation of Pearson's correlation coefficient.

    PubMed

    Barlow, Andrew L; Macleod, Alasdair; Noppen, Samuel; Sanderson, Jeremy; Guérin, Christopher J

    2010-12-01

    One of the most routine uses of fluorescence microscopy is colocalization, i.e., the demonstration of a relationship between pairs of biological molecules. Frequently this is presented simplistically through overlays of red and green images, with areas of yellow indicating colocalization of the molecules. Colocalization data are rarely quantified and can be misleading. Our results from both synthetic and biological datasets demonstrate that computing Pearson's correlation coefficient between pairs of images can overestimate positive correlation and fail to demonstrate negative correlation. We have demonstrated that calculating a thresholded Pearson's correlation coefficient, using only intensity values over a determined threshold in both channels, produces numerical values that more accurately describe both synthetic datasets and biological examples. Its use will bring clarity and accuracy to colocalization studies using fluorescence microscopy.
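    The thresholded coefficient can be sketched as follows (a minimal illustration on synthetic images; the threshold values and image sizes are hypothetical, not taken from the study):

```python
import numpy as np

def thresholded_pearson(red, green, t_red, t_green):
    """Pearson's r computed only over pixels above threshold in BOTH channels."""
    mask = (red > t_red) & (green > t_green)
    r = red[mask].astype(float)
    g = green[mask].astype(float)
    if r.size < 2:
        return float("nan")
    return float(np.corrcoef(r, g)[0, 1])

# Synthetic pair of channels: green tracks red plus noise, so a strong
# positive correlation should survive the thresholding.
rng = np.random.default_rng(0)
red = rng.integers(0, 256, size=(64, 64))
green = np.clip(red + rng.integers(-20, 21, size=(64, 64)), 0, 255)
r_thr = thresholded_pearson(red, green, t_red=50, t_green=50)
```

Restricting the calculation to above-threshold pixels keeps large dark background regions from inflating (or masking) the correlation between the two channels.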

  6. Obtaining short-fiber orientation model parameters using non-lubricated squeeze flow

    NASA Astrophysics Data System (ADS)

    Lambert, Gregory; Wapperom, Peter; Baird, Donald

    2017-12-01

    Accurate models of fiber orientation dynamics during the processing of polymer-fiber composites are needed for the design of important automobile parts. All of the existing models use empirical parameters, but a standard method for obtaining them independent of processing does not exist. This study considers non-lubricated squeeze flow through a rectangular channel as a solution. A two-dimensional finite element simulation of the kinematics and fiber orientation evolution along the centerline of a sample is developed as a first step toward a fully three-dimensional simulation. The model is used to fit orientation data in a short-fiber-reinforced polymer composite after squeezing. Fiber orientation model parameters obtained in this study do not agree well with those obtained for the same material during startup of simple shear. This is attributed to the vastly different rates at which fibers orient during shearing and extensional flows. A stress model is also used to fit experimental closure-force data. Although the model can be tuned to the correct magnitude of the closure force, it does not fully recreate the transient behavior, which is attributed to the lack of any consideration of fiber-fiber interactions.

  7. Point of Care Ultrasound Accurately Distinguishes Inflammatory from Noninflammatory Disease in Patients Presenting with Abdominal Pain and Diarrhea

    PubMed Central

    Novak, Kerri L.; Jacob, Deepti; Kaplan, Gilaad G.; Boyce, Emma; Ghosh, Subrata; Ma, Irene; Lu, Cathy; Wilson, Stephanie; Panaccione, Remo

    2016-01-01

    Background. Approaches to distinguish inflammatory bowel disease (IBD) from noninflammatory disease that are noninvasive, accurate, and readily available are desirable. Such approaches may decrease the time to diagnosis and make better use of limited endoscopic resources. The aim of this study was to evaluate the diagnostic accuracy of gastroenterologist-performed point of care ultrasound (POCUS) in the detection of luminal inflammation relative to gold-standard ileocolonoscopy. Methods. A prospective, single-center study was conducted on a convenience sample of patients presenting with symptoms of diarrhea and/or abdominal pain. Patients were offered POCUS prior to having ileocolonoscopy. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) with 95% confidence intervals (CI), as well as likelihood ratios, were calculated. Results. Fifty-eight patients were included in this study. The overall sensitivity, specificity, PPV, and NPV were 80%, 97.8%, 88.9%, and 95.7%, respectively, with positive and negative likelihood ratios (LR) of 36.8 and 0.20. Conclusion. POCUS can accurately be performed at the bedside to detect transmural inflammation of the intestine. This noninvasive approach may serve to expedite diagnosis, improve the allocation of endoscopic resources, and facilitate the initiation of appropriate medical therapy. PMID:27446838
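    The reported metrics all follow from a standard 2 × 2 contingency table; a minimal sketch (the counts below are hypothetical, chosen only to be consistent with the reported percentages, not taken from the study):

```python
# Hypothetical 2x2 table: POCUS result vs. ileocolonoscopy (gold standard).
tp, fp, fn, tn = 8, 1, 2, 45

sens = tp / (tp + fn)        # sensitivity: P(test+ | disease)
spec = tn / (tn + fp)        # specificity: P(test- | no disease)
ppv = tp / (tp + fp)         # positive predictive value
npv = tn / (tn + fn)         # negative predictive value
lr_pos = sens / (1 - spec)   # positive likelihood ratio
lr_neg = (1 - sens) / spec   # negative likelihood ratio

print(f"sens={sens:.1%} spec={spec:.1%} PPV={ppv:.1%} NPV={npv:.1%}")
print(f"LR+={lr_pos:.1f} LR-={lr_neg:.2f}")
```

Unlike sensitivity and specificity, the predictive values depend on disease prevalence in the sample, which is why all four are reported together with likelihood ratios.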

  8. Accurate mass measurement: terminology and treatment of data.

    PubMed

    Brenton, A Gareth; Godfrey, A Ruth

    2010-11-01

    High-resolution mass spectrometry has become ever more accessible with improvements in instrumentation, such as modern FT-ICR and Orbitrap mass spectrometers. This has resulted in an increase in the number of articles submitted for publication quoting accurate mass data. There is a plethora of terms related to accurate mass analysis that are in current usage, many employed incorrectly or inconsistently. This article is based on a set of notes prepared by the authors for research students and staff in our laboratories as a guide to the correct terminology and basic statistical procedures to apply in relation to mass measurement, particularly for accurate mass measurement. It elaborates on the editorial by Gross in 1994 regarding the use of accurate masses for structure confirmation. We have presented and defined the main terms in use with reference to the International Union of Pure and Applied Chemistry (IUPAC) recommendations for nomenclature and symbolism for mass spectrometry. The correct use of statistics and treatment of data is illustrated as a guide to new and existing mass spectrometry users with a series of examples as well as statistical methods to compare different experimental methods and datasets. Copyright © 2010. Published by Elsevier Inc.

  9. An infrastructure for accurate characterization of single-event transients in digital circuits.

    PubMed

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-11-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure.

  10. An infrastructure for accurate characterization of single-event transients in digital circuits☆

    PubMed Central

    Savulimedu Veeravalli, Varadan; Polzer, Thomas; Schmid, Ulrich; Steininger, Andreas; Hofbauer, Michael; Schweiger, Kurt; Dietrich, Horst; Schneider-Hornstein, Kerstin; Zimmermann, Horst; Voss, Kay-Obbe; Merk, Bruno; Hajek, Michael

    2013-01-01

    We present the architecture and a detailed pre-fabrication analysis of a digital measurement ASIC facilitating long-term irradiation experiments of basic asynchronous circuits, which also demonstrates the suitability of the general approach for obtaining accurate radiation failure models developed in our FATAL project. Our ASIC design combines radiation targets like Muller C-elements and elastic pipelines as well as standard combinational gates and flip-flops with an elaborate on-chip measurement infrastructure. Major architectural challenges result from the fact that the latter must operate reliably under the same radiation conditions the target circuits are exposed to, without wasting precious die area for a rad-hard design. A measurement architecture based on multiple non-rad-hard counters is used, which we show to be resilient against double faults, as well as many triple and even higher-multiplicity faults. The design evaluation is done by means of comprehensive fault injection experiments, which are based on detailed Spice models of the target circuits in conjunction with a standard double-exponential current injection model for single-event transients (SET). To be as accurate as possible, the parameters of this current model have been aligned with results obtained from 3D device simulation models, which have in turn been validated and calibrated using micro-beam radiation experiments at the GSI in Darmstadt, Germany. For the latter, target circuits instrumented with high-speed sense amplifiers have been used for analog SET recording. Together with a probabilistic analysis of the sustainable particle flow rates, based on a detailed area analysis and experimental cross-section data, we can conclude that the proposed architecture will indeed sustain significant target hit rates, without exceeding the resilience bound of the measurement infrastructure. PMID:24748694

  11. Canine and feline hematology reference values for the ADVIA 120 hematology system.

    PubMed

    Moritz, Andreas; Fickenscher, Yvonne; Meyer, Karin; Failing, Klaus; Weiss, Douglas J

    2004-01-01

    The ADVIA 120 is a laser-based hematology analyzer with software applications for animal species. Accurate reference values would be useful for the assessment of new hematologic parameters and for interlaboratory comparisons. The goal of this study was to establish reference intervals for CBC results and new parameters for RBC morphology, reticulocytes, and platelets in healthy dogs and cats using the ADVIA 120 hematology system. The ADVIA 120, with multispecies software (version 1.107-MS), was used to analyze whole blood samples from clinically healthy dogs (n=46) and cats (n=61). Data distribution was determined and reference intervals were calculated as 2.5 to 97.5 percentiles and 25 to 75 percentiles. Most data showed Gaussian or log-normal distribution. The numbers of RBCs falling outside the normocytic-normochromic range were slightly higher in cats than in dogs. Both dogs and cats had reticulocytes with low, medium, and high absorbance. Mean numbers of large platelets and platelet clumps were higher in cats compared with dogs. Reference intervals obtained on the ADVIA 120 provide valuable baseline information for assessing new hematologic parameters and for interlaboratory comparisons. Differences compared with previously published reference values can be attributed largely to differences in methodology.
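    The percentile-based intervals used above reduce to a one-line computation; a minimal sketch on synthetic data (the sample size matches the feline group, but the distribution parameters and units are hypothetical):

```python
import numpy as np

# Synthetic "healthy population" measurements (hypothetical location and spread).
rng = np.random.default_rng(1)
values = rng.normal(loc=7.5, scale=1.2, size=61)

# Nonparametric reference interval (central 95%) and interquartile range,
# taken as the 2.5-97.5 and 25-75 percentiles of the sample.
ri_low, ri_high = np.percentile(values, [2.5, 97.5])
iqr_low, iqr_high = np.percentile(values, [25, 75])
```

Percentile-based intervals make no distributional assumption, which matters here since only some of the ADVIA parameters showed Gaussian or log-normal behavior.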

  12. Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions

    NASA Astrophysics Data System (ADS)

    Dong, Miao L.; Goyal, Kashika G.; Worth, Bradley W.; Makkar, Sorab S.; Calhoun, William R.; Bali, Lalit M.; Bali, Samir

    2013-08-01

    A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, from which the average scatterer particle size is extracted using standard Mie scattering calculations. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, that no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.

  13. Accurate in situ measurement of complex refractive index and particle size in intralipid emulsions.

    PubMed

    Dong, Miao L; Goyal, Kashika G; Worth, Bradley W; Makkar, Sorab S; Calhoun, William R; Bali, Lalit M; Bali, Samir

    2013-08-01

    A first accurate measurement of the complex refractive index in an intralipid emulsion is demonstrated, from which the average scatterer particle size is extracted using standard Mie scattering calculations. Our method is based on measurement and modeling of the reflectance of a divergent laser beam from the sample surface. In the absence of any definitive reference data for the complex refractive index or particle size in highly turbid intralipid emulsions, we base our claim of accuracy on the fact that our work offers several critically important advantages over previously reported attempts. First, our measurements are in situ in the sense that they do not require any sample dilution, thus eliminating dilution errors. Second, our theoretical model does not employ any fitting parameters other than the two quantities we seek to determine, i.e., the real and imaginary parts of the refractive index, thus eliminating ambiguities arising from multiple extraneous fitting parameters. Third, we fit the entire reflectance-versus-incident-angle data curve instead of focusing on only the critical angle region, which is just a small subset of the data. Finally, despite our use of highly scattering opaque samples, our experiment uniquely satisfies a key assumption behind the Mie scattering formalism, namely, that no multiple scattering occurs. Further proof of our method's validity is given by the fact that our measured particle size finds good agreement with the value obtained by dynamic light scattering.

  14. Fatty liver disease in severe obese patients: Diagnostic value of abdominal ultrasound

    PubMed Central

    de Moura Almeida, Alessandro; Cotrim, Helma Pinchemel; Barbosa, Daniel Batista Valente; de Athayde, Luciana Gordilho Matteoni; Santos, Adimeia Souza; Bitencourt, Almir Galvão Vieira; de Freitas, Luiz Antonio Rodrigues; Rios, Adriano; Alves, Erivaldo

    2008-01-01

    AIM: To evaluate the sensitivity and specificity of abdominal ultrasound (US) for the diagnosis of hepatic steatosis in severely obese subjects and their relation to the histological grade of steatosis. METHODS: A consecutive series of obese patients who underwent bariatric surgery from October 2004 to May 2005 was selected. Ultrasonography was performed in all patients as part of the routine preoperative workup, and an intraoperative wedge biopsy was obtained at the beginning of the bariatric surgery. The US and histological findings of steatosis were compared, considering histology as the gold standard. RESULTS: The study included 105 patients. The mean age was 37.2 ± 10.6 years and 75.2% were female. The histological prevalence of steatosis was 89.5%. The sensitivity and specificity of US in the diagnosis of hepatic steatosis were, respectively, 64.9% (95% CI: 54.9-74.3) and 90.9% (95% CI: 57.1-99.5). The positive predictive value and negative predictive value were, respectively, 98.4% (95% CI: 90.2-99.9) and 23.3% (95% CI: 12.3-39.0). The presence of steatosis on US was associated with advanced grades of steatosis on histology (P = 0.016). CONCLUSION: Preoperative abdominal US in our series was not shown to be an accurate method for the diagnosis of hepatic steatosis in severely obese patients. Until another non-invasive method demonstrates better sensitivity and specificity, histological evaluation may be recommended for these patients undergoing bariatric surgery. PMID:18322958

  15. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe.

    PubMed

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-09

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles and/or dye pairs have already been developed for pHi sensing. To date, the biological auto-fluorescence background under UV-Vis excitation and the severe photo-bleaching of dyes are the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and the self-ratiometric reference signal, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which can lead to relatively large uncertainty in the results. Owing to the efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, featuring a response of 3.56 per unit change in pHi over the range 3.0-7.0 with a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.

  16. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    NASA Astrophysics Data System (ADS)

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-01

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles and/or dye pairs have already been developed for pHi sensing. To date, the biological auto-fluorescence background under UV-Vis excitation and the severe photo-bleaching of dyes are the two main factors impeding accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) serve as energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and the self-ratiometric reference signal, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which can lead to relatively large uncertainty in the results. Owing to the efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, featuring a response of 3.56 per unit change in pHi over the range 3.0-7.0 with a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.

  17. Accurate measurements of the true column efficiency and of the instrument band broadening contributions in the presence of a chromatographic column.

    PubMed

    Gritti, Fabrice; Guiochon, Georges

    2014-01-31

    A rapid and simple validated experimental protocol is proposed for the accurate determination of the true intrinsic column efficiency and of the variance contribution of the extra-column volume of the instrument, the latter being obtained without removing the chromatographic column from the HPLC system. This protocol was applied to 2.1 mm × 100 mm columns packed with sub-3 μm (2.7 μm Halo Peptide ES-C18) and sub-2 μm (1.6 μm prototype) core-shell particles. It was validated by observing the linear behavior of the plot of the apparent column plate height versus the reciprocal of (1+k')(2) for at least three homologous compounds, with a linear regression coefficient R(2) larger than 0.999. Irrespective of the contribution of the several different instruments used to the total band broadening, the same column HETP value was obtained within 5%. This new protocol outperforms the classical one, in which the chromatographic column is replaced with a zero-dead-volume (ZDV) union connector to measure the extra-column volume variance, which is then subtracted from the variance measured with the column to obtain the intrinsic HETP. The classical protocol fails because it significantly underestimates the system volume variance. Copyright © 2013 Elsevier B.V. All rights reserved.
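    The linear behavior the protocol relies on can be sketched numerically (a schematic illustration under the assumed model H_app = H_col + a/(1+k')^2, with synthetic numbers rather than the paper's data): regressing the apparent plate height on 1/(1+k')^2 returns the intrinsic column HETP as the intercept, with the extra-column contribution carried by the slope.

```python
import numpy as np

# Synthetic apparent plate heights for four hypothetical homologous compounds:
# H_app = H_col + a / (1 + k')^2, with H_col = 5e-6 m and a = 2e-5 m.
k = np.array([1.0, 2.0, 4.0, 8.0])        # retention factors k'
x = 1.0 / (1.0 + k) ** 2
H_app = 5.0e-6 + 2.0e-5 * x

# Linear regression of H_app on 1/(1+k')^2: the intercept is the intrinsic
# column HETP, free of the instrument's extra-column contribution.
slope, intercept = np.polyfit(x, H_app, 1)
```

Strongly retained compounds (large k') suppress the extra-column term, which is why fitting several homologous compounds lets both contributions be separated without removing the column.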

  18. Probabilities and statistics for backscatter estimates obtained by a scatterometer with applications to new scatterometer design data

    NASA Technical Reports Server (NTRS)

    Pierson, Willard J., Jr.

    1989-01-01

    The values of the Normalized Radar Backscattering Cross Section (NRCS), sigma (o), obtained by a scatterometer are random variables whose variance is a known function of the expected value. The probability density function can be obtained from the normal distribution. Models for the expected value express it as a function of the properties of the waves on the ocean and the winds that generated the waves. Point estimates of the expected value were found from various statistics, given the parameters that define the probability density function for each value. Random intervals were derived with a preassigned probability of containing that value. A statistical test to determine whether or not successive values of sigma (o) are truly independent was derived. The maximum likelihood estimates for wind speed and direction were found, given a model for backscatter as a function of the properties of the waves on the ocean. These estimates are biased as a result of the terms in the equation that involve natural logarithms, and calculations of the point estimates of the maximum likelihood values are used to show that the contributions of the logarithmic terms are negligible and that the terms can be omitted.

  19. Estimating the impact of somatic cell count on the value of milk utilising parameters obtained from the published literature.

    PubMed

    Geary, Una; Lopez-Villalobos, Nicolas; O'Brien, Bernadette; Garrick, Dorian J; Shalloo, Laurence

    2014-05-01

The impact of mastitis on milk value per litre, independent of the effect of mastitis on milk volume, was quantified for Ireland using a meta-analysis and a processing sector model. Changes in raw milk composition, cheese processing and composition associated with increased bulk milk somatic cell count (BMSCC) were incorporated into the model. Processing costs and market values were representative of current industry values. It was assumed that as BMSCC increased (i) milk fat and milk protein increased and milk lactose decreased, (ii) fat and protein recoveries decreased, (iii) cheese protein decreased and cheese moisture increased. Five BMSCC categories were examined from ⩽100 000 to >400 000 cells/ml. The analysis showed that as BMSCC increased, production quantities decreased. An increase in BMSCC from 100 000 to >400 000 cells/ml saw a reduction in net revenue of 3·2% per annum (€51·3 million), which corresponded to a reduction in the value of raw milk of €0·0096 cents/l.

  20. Pre-Licensed Nursing Students Rate Professional Values

    ERIC Educational Resources Information Center

    Garee, Denise L.

    2016-01-01

    Ethical decision making of new nurses relies on professional values and moral development obtained during training. This descriptive, comparative study demonstrated the importance values attributed to the items of the Nurses' Professional Values Scale-Revised (Weis & Schank, 2009), by a sample of senior ADN and BSN students from across the…

  1. Accurate Finite Difference Algorithms

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    1996-01-01

Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single step explicit methods, they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10(exp 6)) periods of propagation with eight grid points per wavelength.

  2. Time-Accurate Simulations and Acoustic Analysis of Slat Free-Shear-Layer. Part 2

    NASA Technical Reports Server (NTRS)

    Khorrami, Mehdi R.; Singer, Bart A.; Lockard, David P.

    2002-01-01

Unsteady computational simulations of a multi-element, high-lift configuration are performed. Emphasis is placed on accurate spatiotemporal resolution of the free shear layer in the slat-cove region. The excessive dissipative effects of the turbulence model, so prevalent in previous simulations, are circumvented by switching off the turbulence-production term in the slat cove region. The justifications and physical arguments for taking such a step are explained in detail. The removal of this excess damping allows the shear layer to amplify large-scale structures, to achieve a proper non-linear saturation state, and to permit vortex merging. The large-scale disturbances are self-excited, and unlike our prior fully turbulent simulations, no external forcing of the shear layer is required. To obtain the farfield acoustics, the Ffowcs Williams and Hawkings equation is evaluated numerically using the simulated time-accurate flow data. The present comparison between the computed and measured farfield acoustic spectra shows much better agreement for the amplitude and frequency content than past calculations. The effects of the angle of attack on the slat's flow features and radiated acoustic field are also simulated and presented.

  3. Interferometric Constraints on Surface Brightness Asymmetries in Long-Period Variable Stars: A Threat to Accurate Gaia Parallaxes

    NASA Astrophysics Data System (ADS)

    Sacuto, S.; Jorissen, A.; Cruzalèbes, P.; Pasquato, E.; Chiavassa, A.; Spang, A.; Rabbia, Y.; Chesneau, O.

    2011-09-01

    A monitoring of surface brightness asymmetries in evolved giants and supergiants is necessary to estimate the threat that they represent to accurate Gaia parallaxes. Closure-phase measurements obtained with AMBER/VISA in a 3-telescope configuration are fitted by a simple model to constrain the photocenter displacement. The results for the C-type star TX Psc show a large deviation of the photocenter displacement that could bias the Gaia parallax.

  4. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.
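A hedged sketch of the interpolant idea described above: sample an image-quality fitness function at a few candidate permittivities, fit a polynomial surrogate, and take its maximizer as the estimated effective permittivity. The fitness values below are hypothetical stand-ins for the paper's image-quality measure, and a simple quadratic replaces the adaptive stochastic-collocation construction.

```python
import numpy as np

# Candidate effective permittivities and (hypothetical) fitness scores
eps_samples = np.array([6.0, 7.0, 8.0, 9.0, 10.0])
fitness = np.array([0.42, 0.61, 0.70, 0.66, 0.50])

# Quadratic surrogate of the fitness function
coeffs = np.polyfit(eps_samples, fitness, 2)

# Vertex of the fitted parabola = most likely permittivity
eps_best = -coeffs[1] / (2.0 * coeffs[0])
```

The real method builds a locally and dimensionally adaptive interpolant; this sketch only shows the "fit, then maximize" principle in one dimension.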

  5. Accurate 3D kinematic measurement of temporomandibular joint using X-ray fluoroscopic images

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takaharu; Matsumoto, Akiko; Sugamoto, Kazuomi; Matsumoto, Ken; Kakimoto, Naoya; Yura, Yoshiaki

    2014-04-01

Accurate measurement and analysis of the 3D kinematics of the temporomandibular joint (TMJ) is very important for assisting clinical diagnosis and treatment in prosthodontics, orthodontics, and oral surgery. This study presents a new 3D kinematic measurement technique for the TMJ using X-ray fluoroscopic images, which can easily obtain TMJ kinematic data in natural motion. In vivo kinematics of the TMJ (maxilla and mandibular bone) is determined using a feature-based 2D/3D registration, which uses bead silhouettes on fluoroscopic images and 3D surface bone models with beads. The 3D surface models of the maxilla and mandibular bone with beads were created from CT scan data of the subject using a mouthpiece with seven strategically placed beads. In order to validate the accuracy of pose estimation for the maxilla and mandibular bone, a computer simulation test was performed using five patterns of synthetic tantalum bead silhouette images. In the clinical application, dynamic movement during jaw opening and closing was recorded, and the relative pose of the mandibular bone with respect to the maxilla was determined. The results of the computer simulation test showed that the root mean square errors were smaller than 1.0 mm and 1.0 degree. In the clinical application, during jaw opening from 0.0 to 36.8 degrees of rotation, the mandibular condyle exhibited 19.8 mm of anterior sliding relative to the maxillary articular fossa, and these measured values were clinically consistent with previous reports. Consequently, the present technique appears suitable for 3D TMJ kinematic analysis.

  6. Accurate structural and spectroscopic characterization of prebiotic molecules: The neutral and cationic acetyl cyanide and their related species.

    PubMed

    Bellili, A; Linguerri, R; Hochlaf, M; Puzzarini, C

    2015-11-14

In an effort to provide an accurate structural and spectroscopic characterization of acetyl cyanide, its two enolic isomers and the corresponding cationic species, state-of-the-art computational methods and approaches have been employed. The coupled-cluster theory including single and double excitations together with a perturbative treatment of triples has been used as the starting point in composite schemes accounting for extrapolation to the complete basis-set limit as well as core-valence correlation effects to determine highly accurate molecular structures, fundamental vibrational frequencies, and rotational parameters. The available experimental data for acetyl cyanide allowed us to assess the reliability of our computations: structural, energetic, and spectroscopic properties have been obtained with an overall accuracy of about, or better than, 0.001 Å, 2 kcal/mol, 1-10 MHz, and 11 cm(-1) for bond distances, adiabatic ionization potentials, rotational constants, and fundamental vibrational frequencies, respectively. We are therefore confident that the highly accurate spectroscopic data provided herein can be useful for guiding future experimental investigations and/or astronomical observations.

  7. Accurate deuterium spectroscopy for fundamental studies

    NASA Astrophysics Data System (ADS)

    Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.

    2018-07-01

    We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α5. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.

  8. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

We propose two approaches to scalable lossless coding of motion video. They achieve an SNR-scalable bitstream up to lossless reconstruction based upon subpixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of our approach include an 'on-the-fly' determination of bit budget distribution between the lossy and the residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that we can maintain the invertibility of MCTF with an arbitrary sub-pixel accuracy even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. Also, they are shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) lossless mode, with the added benefit of bitstream embeddedness.

  9. Accurate, precise, and efficient theoretical methods to calculate anion-π interaction energies in model structures.

    PubMed

    Mezei, Pál D; Csonka, Gábor I; Ruzsinszky, Adrienn; Sun, Jianwei

    2015-01-13

    A correct description of the anion-π interaction is essential for the design of selective anion receptors and channels and important for advances in the field of supramolecular chemistry. However, it is challenging to do accurate, precise, and efficient calculations of this interaction, which are lacking in the literature. In this article, by testing sets of 20 binary anion-π complexes of fluoride, chloride, bromide, nitrate, or carbonate ions with hexafluorobenzene, 1,3,5-trifluorobenzene, 2,4,6-trifluoro-1,3,5-triazine, or 1,3,5-triazine and 30 ternary π-anion-π' sandwich complexes composed from the same monomers, we suggest domain-based local-pair natural orbital coupled cluster energies extrapolated to the complete basis-set limit as reference values. We give a detailed explanation of the origin of anion-π interactions, using the permanent quadrupole moments, static dipole polarizabilities, and electrostatic potential maps. We use symmetry-adapted perturbation theory (SAPT) to calculate the components of the anion-π interaction energies. We examine the performance of the direct random phase approximation (dRPA), the second-order screened exchange (SOSEX), local-pair natural-orbital (LPNO) coupled electron pair approximation (CEPA), and several dispersion-corrected density functionals (including generalized gradient approximation (GGA), meta-GGA, and double hybrid density functional). The LPNO-CEPA/1 results show the best agreement with the reference results. The dRPA method is only slightly less accurate and precise than the LPNO-CEPA/1, but it is considerably more efficient (6-17 times faster) for the binary complexes studied in this paper. For 30 ternary π-anion-π' sandwich complexes, we give dRPA interaction energies as reference values. The double hybrid functionals are much more efficient but less accurate and precise than dRPA. The dispersion-corrected double hybrid PWPB95-D3(BJ) and B2PLYP-D3(BJ) functionals perform better than the GGA and meta
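The reference interaction energies above rely on extrapolation to the complete basis-set (CBS) limit. A minimal sketch of the standard two-point X⁻³ correlation-energy extrapolation (Helgaker's formula) is shown below; the energies are illustrative numbers, not values from the paper.

```python
def cbs_extrapolate(e_x, x, e_y, y):
    """Two-point X^-3 extrapolation of correlation energies.

    e_x, e_y: correlation energies at basis-set cardinal numbers x < y.
    Returns the estimated complete-basis-set correlation energy.
    """
    return (y ** 3 * e_y - x ** 3 * e_x) / (y ** 3 - x ** 3)

# e.g. triple-zeta (X=3) and quadruple-zeta (X=4) correlation energies
# in hartree (hypothetical values):
e_tz, e_qz = -0.512, -0.525
e_cbs = cbs_extrapolate(e_tz, 3, e_qz, 4)
```

The extrapolated energy lies below the largest-basis value, reflecting the residual basis-set incompleteness being removed.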

  10. Identifying Intraplate Mechanism by B-Value Calculations in the South of Java Island

    NASA Astrophysics Data System (ADS)

    Bagus Suananda Y., Ida; Aufa, Irfan; Harlianti, Ulvienin

    2018-03-01

Java is the most populous island in Indonesia, with 50 million people living there. The island geologically formed at the Eurasia plate margin by the subduction of the Australian oceanic crust. In the southern part of Java, besides interplate earthquakes caused by the convergence of the two plates, intraplate earthquakes also occur. Distinguishing these two earthquake types is necessary for estimating the behavior of earthquakes that may occur. The aim of this research is to map the b-value in the south of Java using earthquake data from 1963 until 2008. The research area was divided into clusters based on epicenter mapping results, using events of magnitude greater than 4 in three depth ranges (0-30 km, 30-60 km, 60-100 km). This clustering indicates groups of earthquakes produced by the same structure or mechanism. For some clusters in the south of Java, the b-values obtained are between 0.8 and 1.25, within the 0.72-1.2 range that indicates an intraplate earthquake zone. The final validation determines the mechanism of a segment by correlating the epicenter and b-value plots with the available structural geology data. Based on this research, we find that the earthquakes occurring in Java are not only interplate but also intraplate events. By identifying the mechanism of each segment in the south of Java, the earthquakes that may occur can be characterized to support an accurate earthquake disaster mitigation system.
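A minimal sketch of the b-value estimate underlying this kind of mapping, using Aki's maximum-likelihood formula with Utsu's correction for magnitude binning. The catalog below is synthetic; the completeness magnitude and bin width are assumed.

```python
import math

def b_value(magnitudes, mc, dm=0.1):
    """Aki (1965) maximum-likelihood b-value with binning correction dm.

    mc: magnitude of completeness; only events with M >= mc are used.
    """
    mags = [m for m in magnitudes if m >= mc]
    mean_m = sum(mags) / len(mags)
    return math.log10(math.e) / (mean_m - (mc - dm / 2.0))

# Synthetic cluster catalog (M > 4, as in the study's selection)
mags = [4.0, 4.1, 4.3, 4.2, 4.6, 5.0, 4.4, 4.8, 4.1, 4.5]
b = b_value(mags, mc=4.0)
```

For this synthetic cluster the estimate falls inside the 0.8-1.25 range that the study associates with intraplate seismicity.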

  11. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Batina, John T.; Yang, Henry T. Y.

    1991-01-01

Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. A detailed description is given of the enrichment and coarsening procedures, and comparisons with alternative results and experimental data are presented to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  12. Spatial adaption procedures on unstructured meshes for accurate unsteady aerodynamic flow computation

    NASA Technical Reports Server (NTRS)

    Rausch, Russ D.; Yang, Henry T. Y.; Batina, John T.

    1991-01-01

    Spatial adaption procedures for the accurate and efficient solution of steady and unsteady inviscid flow problems are described. The adaption procedures were developed and implemented within a two-dimensional unstructured-grid upwind-type Euler code. These procedures involve mesh enrichment and mesh coarsening to either add points in high gradient regions of the flow or remove points where they are not needed, respectively, to produce solutions of high spatial accuracy at minimal computational cost. The paper gives a detailed description of the enrichment and coarsening procedures and presents comparisons with alternative results and experimental data to provide an assessment of the accuracy and efficiency of the capability. Steady and unsteady transonic results, obtained using spatial adaption for the NACA 0012 airfoil, are shown to be of high spatial accuracy, primarily in that the shock waves are very sharply captured. The results were obtained with a computational savings of a factor of approximately fifty-three for a steady case and as much as twenty-five for the unsteady cases.

  13. Fast and Accurate Hybrid Stream PCRTMSOLAR Radiative Transfer Model for Reflected Solar Spectrum Simulation in the Cloudy Atmosphere

    NASA Technical Reports Server (NTRS)

    Yang, Qiguang; Liu, Xu; Wu, Wan; Kizer, Susan; Baize, Rosemary R.

    2016-01-01

A hybrid stream PCRTM-SOLAR model has been proposed for fast and accurate radiative transfer simulation. It calculates the reflected solar (RS) radiances in a fast but coarse way and then, with the help of a pre-saved matrix, transforms the results to obtain the desired highly accurate RS spectrum. The methodology has been demonstrated with the hybrid stream discrete ordinate (HSDO) radiative transfer (RT) model. The HSDO method calculates the monochromatic radiances using a 4-stream discrete ordinate method, where only a small number of monochromatic radiances are simulated with both the 4-stream and a larger N-stream (N = 16) discrete ordinate RT algorithm. The accuracy of the obtained channel radiance is comparable to the result from the N-stream moderate resolution atmospheric transmission version 5 (MODTRAN5). The root-mean-square errors are usually less than 5x10(exp -4) mW/sq cm/sr/cm. The computational speed is three to four orders of magnitude faster than the medium speed correlated-k option of MODTRAN5. This method is very efficient for simulating thousands of RS spectra under multi-layer cloud/aerosol and solar radiation conditions for climate change studies and numerical weather prediction applications.
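A hedged sketch of the pre-saved-matrix idea: learn a linear map from coarse (4-stream) radiances to accurate (N-stream) radiances on a small set of paired training simulations, then apply it to correct new coarse runs. The data here are synthetic stand-ins for monochromatic radiances; the real model's transform is built from paired RT simulations, not from a random matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n_train, n_chan = 40, 8
R4 = rng.uniform(0.5, 1.5, (n_train, n_chan))   # coarse 4-stream radiances

# Synthetic "truth": accurate radiances are a fixed linear transform of
# the coarse ones (near-identity, as the coarse run is already close)
true_map = np.eye(n_chan) + 0.05 * rng.standard_normal((n_chan, n_chan))
RN = R4 @ true_map                              # accurate N-stream radiances

# Pre-save the least-squares correction matrix from the training pairs
A, *_ = np.linalg.lstsq(R4, RN, rcond=None)

# Correct a new coarse simulation at coarse-run cost
R4_new = rng.uniform(0.5, 1.5, (1, n_chan))
RN_est = R4_new @ A
```

Because the synthetic relationship is exactly linear, the recovered matrix matches the generating one; in practice the transform absorbs the systematic 4-stream vs N-stream difference.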

  14. 12 CFR 703.11 - Valuing securities.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ...) At least monthly, a Federal credit union must determine the fair value of each security it holds. It may determine fair value by obtaining a price quotation on the security from an industry-recognized... supervisory committee or its external auditor must independently assess the reliability of monthly price...

  15. Zeolite formation from coal fly ash and heavy metal ion removal characteristics of thus-obtained Zeolite X in multi-metal systems.

    PubMed

    Jha, Vinay Kumar; Nagae, Masahiro; Matsuda, Motohide; Miyake, Michihiro

    2009-06-01

Zeolitic materials have been prepared from coal fly ash as well as from a SiO(2)-Al(2)O(3) system upon NaOH fusion treatment, followed by subsequent hydrothermal processing at various NaOH concentrations and reaction times. During the preparation process, the starting material initially decomposed to an amorphous form, and the nucleation process of the zeolite began. The carbon content of the starting material influenced the formation of the zeolite by providing an active surface for nucleation. Zeolite A (Na-A) was transformed into zeolite X (Na-X) with increasing NaOH concentration and reaction time. The adsorption isotherms of the obtained Na-X, based on the characteristics required to remove heavy metal ions such as Ni(2+), Cu(2+), Cd(2+) and Pb(2+), were examined in multi-metal systems. The experimental data thus obtained suggest that the Langmuir and Freundlich models are more accurate compared to the Dubinin-Kaganer-Radushkevich (DKR) model. However, the sorption energy obtained from the DKR model was helpful in elucidating the mechanism of the sorption process. Further, in going from a single- to a multi-metal system, the Freundlich model fit better than the Langmuir model, owing to its underlying assumption of a heterogeneity factor. The Extended-Langmuir model may be used in multi-metal systems, but gives a lower value for equilibrium sorption compared with the Langmuir model.

  16. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm.

    PubMed

    Schmidt, Taly Gilat; Wang, Adam S; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-10-01

The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors.
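A minimal sketch of the validation metric: the mean organ dose is the average of a Monte Carlo dose map over a segmentation mask, and the comparison is a percent error between auto- and expert-segmented masks. The dose map and masks below are synthetic; the real study used patient CT dose maps and nine expertly delineated regions.

```python
import numpy as np

rng = np.random.default_rng(2)
dose_map = rng.uniform(10.0, 20.0, (64, 64))   # synthetic dose map (mGy)

# Expert contour and an automated contour shifted by one pixel
expert = np.zeros((64, 64), dtype=bool)
expert[20:40, 20:40] = True
auto = np.zeros((64, 64), dtype=bool)
auto[21:41, 20:40] = True

def mean_organ_dose(dose, mask):
    """Mean dose over the voxels inside the segmentation mask."""
    return dose[mask].mean()

d_expert = mean_organ_dose(dose_map, expert)
d_auto = mean_organ_dose(dose_map, auto)
err_pct = 100.0 * (d_auto - d_expert) / d_expert
```

The example illustrates the paper's hypothesis: a small boundary error changes only a small fraction of the voxels being averaged, so the mean organ dose shifts far less than the boundary itself.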

  17. Accuracy of patient-specific organ dose estimates obtained using an automated image segmentation algorithm

    PubMed Central

    Schmidt, Taly Gilat; Wang, Adam S.; Coradi, Thomas; Haas, Benjamin; Star-Lack, Josh

    2016-01-01

    Abstract. The overall goal of this work is to develop a rapid, accurate, and automated software tool to estimate patient-specific organ doses from computed tomography (CT) scans using simulations to generate dose maps combined with automated segmentation algorithms. This work quantified the accuracy of organ dose estimates obtained by an automated segmentation algorithm. We hypothesized that the autosegmentation algorithm is sufficiently accurate to provide organ dose estimates, since small errors delineating organ boundaries will have minimal effect when computing mean organ dose. A leave-one-out validation study of the automated algorithm was performed with 20 head-neck CT scans expertly segmented into nine regions. Mean organ doses of the automatically and expertly segmented regions were computed from Monte Carlo-generated dose maps and compared. The automated segmentation algorithm estimated the mean organ dose to be within 10% of the expert segmentation for regions other than the spinal canal, with the median error for each organ region below 2%. In the spinal canal region, the median error was −7%, with a maximum absolute error of 28% for the single-atlas approach and 11% for the multiatlas approach. The results demonstrate that the automated segmentation algorithm can provide accurate organ dose estimates despite some segmentation errors. PMID:27921070

  18. Reliable and accurate point-based prediction of cumulative infiltration using soil readily available characteristics: A comparison between GMDH, ANN, and MLR

    NASA Astrophysics Data System (ADS)

    Rahmati, Mehdi

    2017-08-01

Developing accurate and reliable pedo-transfer functions (PTFs) to predict soil non-readily available characteristics is one of the most important topics in soil science, and selecting appropriate predictors is a crucial factor in PTF development. Group method of data handling (GMDH), which finds an approximate relationship between a set of input and output variables, not only provides an explicit procedure to select the most essential PTF input variables, but also results in more accurate and reliable estimates than other commonly applied methodologies. Therefore, the current research aimed to apply GMDH, in comparison with multivariate linear regression (MLR) and artificial neural networks (ANN), to develop several PTFs to predict soil cumulative infiltration point-wise at specific time intervals (0.5-45 min) using soil readily available characteristics (RACs). In this regard, soil infiltration curves as well as several soil RACs, including soil primary particles (clay (CC), silt (Si), and sand (Sa)), saturated hydraulic conductivity (Ks), bulk (Db) and particle (Dp) densities, organic carbon (OC), wet-aggregate stability (WAS), electrical conductivity (EC), and soil antecedent (θi) and field saturated (θfs) water contents, were measured at 134 different points in Lighvan watershed, northwest of Iran. Then, applying GMDH, MLR, and ANN methodologies, several PTFs were developed to predict cumulative infiltration using two sets of selected soil RACs, including and excluding Ks. According to the test data, results showed that PTFs developed by the GMDH and MLR procedures using all soil RACs including Ks resulted in more accurate (with E values of 0.673-0.963) and reliable (with CV values lower than 11 percent) predictions of cumulative infiltration at different specific time steps. In contrast, the ANN procedure had lower accuracy (with E values of 0.356-0.890) and reliability (with CV values up to 50 percent) compared to GMDH and MLR. The results also revealed
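A sketch of the two evaluation statistics quoted above: the Nash-Sutcliffe efficiency E (accuracy) and a coefficient of variation CV (reliability). The observed/predicted cumulative infiltration values are hypothetical, and the CV definition used here (standard deviation of residuals over the mean observation, in percent) is one common choice, assumed rather than taken from the paper.

```python
import math

def nash_sutcliffe(obs, pred):
    """E = 1 - SS_res/SS_tot; E = 1 means a perfect prediction."""
    mean_obs = sum(obs) / len(obs)
    ss_res = sum((o - p) ** 2 for o, p in zip(obs, pred))
    ss_tot = sum((o - mean_obs) ** 2 for o in obs)
    return 1.0 - ss_res / ss_tot

def cv_pct(obs, pred):
    """Std of residuals relative to the mean observation (percent)."""
    n = len(obs)
    res = [o - p for o, p in zip(obs, pred)]
    mean_r = sum(res) / n
    var = sum((r - mean_r) ** 2 for r in res) / (n - 1)
    return 100.0 * math.sqrt(var) / (sum(obs) / n)

# Hypothetical observed vs predicted cumulative infiltration (cm)
obs = [2.1, 4.0, 5.8, 7.5, 9.1, 10.4]
pred = [2.3, 3.8, 6.0, 7.2, 9.4, 10.1]
E = nash_sutcliffe(obs, pred)
CV = cv_pct(obs, pred)
```

On this synthetic example the PTF would count as both accurate (E close to 1) and reliable (CV well under 11 percent) by the study's thresholds.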

  19. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system

    PubMed Central

    Hinke, Jefferson T.; Perryman, Wayne L.; Goebel, Michael E.; LeRoi, Donald J.

    2017-01-01

    Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinnipeds. Leopard seals (Hydrurga leptonyx) are an apex predator in coastal Antarctica whose body condition could be a valuable indicator of ecosystem health. We aerially surveyed leopard seals of known body size and mass to test the precision and accuracy of photogrammetry from a small UAS. Flights were conducted in January and February of 2013 and 2014 and 50 photogrammetric samples were obtained from 15 unrestrained seals. UAS-derived measurements of standard length were accurate to within 2.01 ± 1.06%, and paired comparisons with ground measurements were statistically indistinguishable. An allometric linear mixed effects model predicted leopard seal mass within 19.40 kg (4.4% error for a 440 kg seal). Photogrammetric measurements from a single, vertical image obtained using UAS provide a noninvasive approach for estimating the mass and body condition of pinnipeds that may be widely applicable. PMID:29186134

  20. An accurate and adaptable photogrammetric approach for estimating the mass and body condition of pinnipeds using an unmanned aerial system.

    PubMed

    Krause, Douglas J; Hinke, Jefferson T; Perryman, Wayne L; Goebel, Michael E; LeRoi, Donald J

    2017-01-01

    Measurements of body size and mass are fundamental to pinniped population management and research. Manual measurements tend to be accurate but are invasive and logistically challenging to obtain. Ground-based photogrammetric techniques are less invasive, but inherent limitations make them impractical for many field applications. The recent proliferation of unmanned aerial systems (UAS) in wildlife monitoring has provided a promising new platform for the photogrammetry of free-ranging pinnipeds. Leopard seals (Hydrurga leptonyx) are an apex predator in coastal Antarctica whose body condition could be a valuable indicator of ecosystem health. We aerially surveyed leopard seals of known body size and mass to test the precision and accuracy of photogrammetry from a small UAS. Flights were conducted in January and February of 2013 and 2014 and 50 photogrammetric samples were obtained from 15 unrestrained seals. UAS-derived measurements of standard length were accurate to within 2.01 ± 1.06%, and paired comparisons with ground measurements were statistically indistinguishable. An allometric linear mixed effects model predicted leopard seal mass within 19.40 kg (4.4% error for a 440 kg seal). Photogrammetric measurements from a single, vertical image obtained using UAS provide a noninvasive approach for estimating the mass and body condition of pinnipeds that may be widely applicable.
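A hedged sketch of the allometric idea behind the mass prediction (not the paper's mixed-effects model): fit log(mass) against log(length) by least squares, then predict mass from a UAS-derived standard length. All lengths and masses below are synthetic illustrations, not leopard seal data from the study.

```python
import math

# Synthetic length (m) / mass (kg) pairs following a power law
lengths = [2.4, 2.6, 2.8, 3.0, 3.2]
masses = [250.0, 310.0, 380.0, 455.0, 540.0]

xs = [math.log(l) for l in lengths]
ys = [math.log(m) for m in masses]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Least-squares slope (allometric exponent) and intercept
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
a = my - b * mx

def predict_mass(length):
    """Mass (kg) predicted from standard length (m)."""
    return math.exp(a + b * math.log(length))
```

The real model additionally includes random effects for repeated measurements of the same seal, which a plain regression like this one ignores.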

  1. Magnetic Resonance Imaging of Intracranial Hypotension: Diagnostic Value of Combined Qualitative Signs and Quantitative Metrics.

    PubMed

    Aslan, Kerim; Gunbey, Hediye Pinar; Tomak, Leman; Ozmen, Zafer; Incesu, Lutfi

The aim of this study was to investigate whether combining quantitative metrics (mamillopontine distance [MPD], pontomesencephalic angle, and mesencephalon anterior-posterior/medial-lateral diameter ratios) with qualitative signs (dural enhancement, subdural collections/hematoma, venous engorgement, pituitary gland enlargement, and tonsillar herniation) provides a more accurate diagnosis of intracranial hypotension (IH). The quantitative metrics and qualitative signs of 34 patients and 34 control subjects were assessed by 2 independent observers. Receiver operating characteristic (ROC) curves were used to evaluate the diagnostic performance of the quantitative metrics and qualitative signs, and optimum cutoff values of the quantitative metrics for the diagnosis of IH were found with ROC analysis. Combined ROC curves were computed for combinations of quantitative metrics and qualitative signs to determine diagnostic accuracy; sensitivity, specificity, and positive and negative predictive values were found, and the best model combination was formed. Whereas MPD and pontomesencephalic angle were significantly lower in patients with IH when compared with the control group (P < 0.001), the mesencephalon anterior-posterior/medial-lateral diameter ratio was significantly higher (P < 0.001). For qualitative signs, the highest individual distinctive power was dural enhancement, with an area under the ROC curve (AUC) of 0.838. For quantitative metrics, the highest individual distinctive power was MPD, with an AUC of 0.947. The best accuracy in the diagnosis of IH was obtained by the combination of dural enhancement, venous engorgement, and MPD, with an AUC of 1.00. This study showed that the combined use of dural enhancement, venous engorgement, and MPD had a diagnostic accuracy of 100% for the diagnosis of IH. Therefore, a more accurate IH diagnosis can be provided by combining quantitative metrics with qualitative signs.
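A minimal sketch of the ROC step: the AUC computed by the rank (Mann-Whitney) method for a score that combines a quantitative metric with binary qualitative signs. The combined score and case/control values below are illustrative, not the study's data; a perfectly separating score reproduces the AUC of 1.00 reported above.

```python
def auc(pos_scores, neg_scores):
    """AUC = probability a random positive outscores a random negative
    (ties count one half), i.e. the Mann-Whitney U statistic normalized."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical combined score: (dural enhancement present) +
# (venous engorgement present) + (MPD below its ROC cutoff)
patients = [3, 3, 2, 3, 2]   # IH cases
controls = [0, 1, 0, 1, 0]   # controls
combined_auc = auc(patients, controls)
```

Because every case scores above every control here, the combined classifier separates the groups perfectly.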

  2. Calibrating GPS With TWSTFT For Accurate Time Transfer

    DTIC Science & Technology

    2008-12-01

    40th Annual Precise Time and Time Interval (PTTI) Meeting 577 CALIBRATING GPS WITH TWSTFT FOR ACCURATE TIME TRANSFER Z. Jiang and...primary time transfer techniques are GPS and TWSTFT (Two-Way Satellite Time and Frequency Transfer, TW for short). 83% of UTC time links are...

  3. Fourier Transform Mass Spectrometry and Nuclear Magnetic Resonance Analysis for the Rapid and Accurate Characterization of Hexacosanoylceramide.

    PubMed

    Ross, Charles W; Simonsick, William J; Bogusky, Michael J; Celikay, Recep W; Guare, James P; Newton, Randall C

    2016-06-28

    Ceramides are a central unit of all sphingolipids, which have been identified as sites of biological recognition on cellular membranes, mediating cell growth and differentiation. Several glycosphingolipids have been isolated that display immunomodulatory and anti-tumor activities. These molecules have generated considerable interest as potential vaccine adjuvants in humans. Accurate analyses of these and related sphingosine analogues are important for the characterization of structure, biological function, and metabolism. We report the complementary use of direct laser desorption ionization (DLDI), sheath-flow electrospray ionization (ESI) Fourier transform ion cyclotron resonance mass spectrometry (FTICR MS), and high-field nuclear magnetic resonance (NMR) analysis for the rapid, accurate identification of hexacosanoylceramide and its starting materials. DLDI does not require stringent sample preparation and yields representative ions. Sheath-flow ESI yields ions of the product and byproducts and was significantly better than monospray ESI due to improved compound solubility. Negative-ion sheath-flow ESI provided data on starting materials and products in a single acquisition, as hexacosanoic acid does not ionize efficiently when ceramides are present. NMR provided characterization of these lipid molecules, complementing the results obtained from MS analyses, and differentiated straight-chain versus branched-chain alkyl groups, a distinction not easily obtained from mass spectrometry.

  4. Mental models accurately predict emotion transitions.

    PubMed

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.
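
    The core quantities in these studies can be sketched compactly: a transition probability matrix estimated from an experience-sampling sequence, compared against participants' rated likelihoods with a correlation coefficient. The emotion labels, toy sequence, and ratings below are all hypothetical stand-ins for the paper's datasets:

```python
from collections import Counter

def transition_matrix(seq, states):
    """Row-stochastic matrix of empirical transition probabilities."""
    counts = {s: Counter() for s in states}
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1
    return {s: {t: counts[s][t] / max(1, sum(counts[s].values()))
                for t in states}
            for s in states}

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Toy emotion state space and experience-sampling sequence (hypothetical).
states = ["calm", "happy", "sad"]
sampled = ["calm", "happy", "happy", "sad", "calm",
           "happy", "sad", "sad", "calm", "happy"]
actual = transition_matrix(sampled, states)

# Hypothetical participant ratings of each transition's likelihood,
# flattened in the same row-major order as the empirical matrix.
rated = [0.1, 0.8, 0.1,   # from "calm"
         0.1, 0.3, 0.6,   # from "happy"
         0.6, 0.1, 0.3]   # from "sad"
flat_actual = [actual[s][t] for s in states for t in states]
model_accuracy = pearson(flat_actual, rated)
```

    The high correlation here simply reflects that the toy ratings were chosen to track the empirical matrix; the studies' finding is that human raters show this kind of agreement with real experience-sampling data.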

  5. Mental models accurately predict emotion transitions

    PubMed Central

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  6. Lunar tidal acceleration obtained from satellite-derived ocean tide parameters

    NASA Technical Reports Server (NTRS)

    Goad, C. C.; Douglas, B. C.

    1978-01-01

    One hundred sets of mean elements of GEOS-3 computed at 2-day intervals yielded observation equations for the M sub 2 ocean tide from the long-periodic variations of the inclination and node of the orbit. The 2nd degree Love number was given the value k sub 2 = 0.30 and the solid tide phase angle was taken to be zero. Combining the equations obtained with results for the satellite 1967-92A gives the M sub 2 ocean tide parameter values. Under the same assumption of zero solid tide phase lag, the lunar tidal acceleration was found to be mostly due to the C sub 22 term in the expansion of the M sub 2 tide, with additional small contributions from the O sub 1 and N sub 2 tides. Using Lambeck's (1975) estimates for the latter, the acceleration in lunar longitude obtained is in excellent agreement with the most recent determinations from ancient and modern astronomical data.

  7. Matrix Effects Originating from Coexisting Minerals and Accurate Determination of Stable Silver Isotopes in Silver Deposits.

    PubMed

    Guo, Qi; Wei, Hai-Zhen; Jiang, Shao-Yong; Hohl, Simon; Lin, Yi-Bo; Wang, Yi-Jing; Li, Yin-Chuan

    2017-12-19

    Beyond extensive studies of core formation and volatile-element depletion processes using radiogenic Ag isotopes (i.e., the Pd-Ag chronometer), recent research has revealed that the mass fractionation of silver isotopes is in principle controlled by physicochemical processes (e.g., evaporation, diffusion, chemical exchange, etc.) during magmatic emplacement and hydrothermal alteration. As these geologic processes only produce very minor variations of δ109Ag, from -0.5 to +1.1‰, more accurate and precise measurements are required. In this work, a robust linear relationship between the instrumental mass discrimination of Ag and Pd isotopes was obtained at an Ag/Pd molar ratio of 1:20. In Au-Ag ore deposits, silver minerals have complex paragenetic relationships with other minerals (e.g., chalcopyrite, sphalerite, galena, pyrite, etc.). It is difficult to remove such abundant impurities completely because the other metals are tens to thousands of times more abundant than silver. Both a quantitative evaluation of matrix effects and a modification of the chemical chromatography were carried out to deal with these problems. Isobaric interferences (e.g., 65Cu40Ar+ on 105Pd, 208Pb2+ on 104Pd, and 67Zn40Ar+ on 107Ag) and space-charge effects dramatically shift the measured δ109Ag values. The selection of alternative Pd isotope pairs is effective in eliminating spectral matrix effects so as to ensure accurate analysis under the largest possible ranges of metal impurities, namely Cu/Ag ≤ 50:1, Fe/Ag ≤ 600:1, Pb/Ag ≤ 10:1, and Zn/Ag ≤ 1:1, respectively. With the modified procedure, we report silver isotope compositions (δ109Ag) in geological standard materials and typical Au-Ag ore deposit samples varying from -0.029 to +0.689‰, with external reproducibility of ±0.009 to ±0.084‰. A systematic survey of δ109Ag (or ε109Ag) variations in rocks, ore deposits, and environmental materials in nature is discussed.

  8. BAsE-Seq: a method for obtaining long viral haplotypes from short sequence reads.

    PubMed

    Hong, Lewis Z; Hong, Shuzhen; Wong, Han Teng; Aw, Pauline P K; Cheng, Yan; Wilm, Andreas; de Sessions, Paola F; Lim, Seng Gee; Nagarajan, Niranjan; Hibberd, Martin L; Quake, Stephen R; Burkholder, William F

    2014-01-01

    We present a method for obtaining long haplotypes, of over 3 kb in length, using a short-read sequencer, Barcode-directed Assembly for Extra-long Sequences (BAsE-Seq). BAsE-Seq relies on transposing a template-specific barcode onto random segments of the template molecule and assembling the barcoded short reads into complete haplotypes. We applied BAsE-Seq on mixed clones of hepatitis B virus and accurately identified haplotypes occurring at frequencies greater than or equal to 0.4%, with >99.9% specificity. Applying BAsE-Seq to a clinical sample, we obtained over 9,000 viral haplotypes, which provided an unprecedented view of hepatitis B virus population structure during chronic infection. BAsE-Seq is readily applicable for monitoring quasispecies evolution in viral diseases.
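
    The essential BAsE-Seq idea, grouping short reads by their template-specific barcode and calling a per-position consensus, can be sketched as follows. The read tuples and barcode names are invented for illustration; the real pipeline also handles read mapping, quality filtering, and assembly:

```python
from collections import Counter, defaultdict

def assemble_haplotypes(reads):
    """Group barcoded short reads by template and call a per-position
    majority-vote consensus, BAsE-Seq's core assembly step.
    `reads` is an iterable of (barcode, position, base) tuples."""
    by_template = defaultdict(lambda: defaultdict(Counter))
    for barcode, pos, base in reads:
        by_template[barcode][pos][base] += 1
    return {barcode: "".join(bases[p].most_common(1)[0][0]
                             for p in sorted(bases))
            for barcode, bases in by_template.items()}

# Invented reads from two template molecules; BC1 carries one
# sequencing error at position 1 that the majority vote corrects.
reads = [
    ("BC1", 0, "A"), ("BC1", 1, "C"), ("BC1", 1, "C"), ("BC1", 1, "T"),
    ("BC1", 2, "G"),
    ("BC2", 0, "A"), ("BC2", 1, "T"), ("BC2", 2, "G"),
]
haps = assemble_haplotypes(reads)
```

    Because all reads sharing a barcode derive from one template molecule, the consensus reconstructs each haplotype even when individual short reads carry errors.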

  9. On the accurate analysis of vibroacoustics in head insert gradient coils.

    PubMed

    Winkler, Simone A; Alejski, Andrew; Wade, Trevor; McKenzie, Charles A; Rutt, Brian K

    2017-10-01

    To accurately analyze vibroacoustics in MR head gradient coils. A detailed theoretical model for gradient coil vibroacoustics, including the first description and modeling of Lorentz damping, is introduced and implemented in a multiphysics software package. Numerical finite-element method simulations were used to establish a highly accurate vibroacoustic model in head gradient coils in detail, including the newly introduced Lorentz damping effect. Vibroacoustic coupling was examined through an additional modal analysis. Thorough experimental studies were used to validate simulations. Average experimental sound pressure levels (SPLs) and accelerations over the 0-3000 Hz frequency range were 97.6 dB, 98.7 dB, and 95.4 dB, as well as 20.6 g, 8.7 g, and 15.6 g for the X-, Y-, and Z-gradients, respectively. A reasonable agreement between simulations and measurements was achieved. Vibroacoustic coupling showed a coupled resonance at 2300 Hz for the Z-gradient that is responsible for a sharp peak and the highest SPL value in the acoustic spectrum. We have developed and used more realistic multiphysics simulation methods to gain novel insights into the underlying concepts for vibroacoustics in head gradient coils, which will permit improved analyses of existing gradient coils and novel SPL reduction strategies for future gradient coil designs. Magn Reson Med 78:1635-1645, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  10. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    PubMed Central

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637
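
    The cross-validation accuracy reported above (93.17±2.44%) comes from repeatedly training on part of the data and testing on the held-out remainder. A minimal sketch with a single hypothetical gene and a midpoint-threshold rule (the actual classifier uses three genes and a learned model; the expression values are invented):

```python
def cross_val_accuracy(xs, ys, k=5):
    """k-fold cross-validation of a midpoint-threshold classifier:
    each fold is held out while the threshold (midpoint of the two
    training-class means) is learned on the remaining samples."""
    folds = [list(range(i, len(xs), k)) for i in range(k)]
    accs = []
    for fold in folds:
        train = [i for i in range(len(xs)) if i not in fold]
        pos = [xs[i] for i in train if ys[i] == 1]
        neg = [xs[i] for i in train if ys[i] == 0]
        thr = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
        correct = sum((xs[i] > thr) == (ys[i] == 1) for i in fold)
        accs.append(correct / len(fold))
    return sum(accs) / len(accs)

# Hypothetical expression of a single ER-associated gene (ER+ high).
expr = [8.1, 7.9, 8.4, 8.0, 7.7, 2.1, 2.5, 1.9, 2.3, 2.0]
status = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
cv_acc = cross_val_accuracy(expr, status, k=5)
```

    Because the held-out samples never inform the threshold, the averaged fold accuracy estimates how the rule would perform on novel tumors, which is the quantity the authors report.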

  11. Obtaining of caffeine from Turkish tea fiber and stalk wastes.

    PubMed

    Gürü, M; Içen, H

    2004-08-01

    The aim of this study was to find a cheap method of obtaining caffeine. Experiments were performed on fiber and stalk wastes of Turkish tea plants, which have no economic value other than being used as low-grade fuel and fodder. Tea stalks and fiber were obtained from tea factories. The parameters affecting caffeine extraction from tea wastes were determined to be mixing rate, water/tea ratio, temperature, time, and particle size. The maximum yields by dried mass from the tea fibers and stalks were 1.16% and 0.92%, respectively.

  12. Postanesthesia patients with large upper arm circumference: is use of an "extra-long" adult cuff or forearm cuff placement accurate?

    PubMed

    Watson, Sheri; Aguas, Marita; Bienapfl, Tracy; Colegrove, Pat; Foisy, Nancy; Jondahl, Bonnie; Yosses, Mary Beth; Yu, Larissa; Anastas, Zoe

    2011-06-01

    The purpose of this study was to determine if blood pressure (BP) measured in the forearm or with an extra-long BP cuff in the upper arm accurately reflects BP measured in the upper arm with an appropriately sized BP cuff in patients with large upper arm circumference. A method-comparison design was used with a convenience sample of 49 PACU patients. Noninvasive blood pressures were obtained in two different locations (forearm; upper arm) and in the upper arm with an extra-long adult and recommended large adult cuff sizes. Data were analyzed by calculating bias and precision for the BP cuff size and location and Student's t-tests, with P < .0125 considered significant. Significantly higher forearm systolic (P < .0001) and diastolic (P < .0002) BP measurements were found compared to BP obtained in the upper arm with the reference standard BP cuff. Significantly higher systolic (t(48df) = 5.38, P < .0001), but not diastolic (t(48df) = 4.11, P < .019), BP differences were found for BP measured with the extra-long cuff at the upper arm site compared to the upper arm, reference standard BP. Findings suggest that the clinical practice of using the forearm or an extra-long cuff in the upper arm for BP measurement in post anesthesia patients with large upper arm circumferences may result in inaccurate BP values. Copyright © 2011 American Society of PeriAnesthesia Nurses. Published by Elsevier Inc. All rights reserved.
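
    A method-comparison design of this kind summarizes agreement between measurement sites with the bias (mean difference between methods) and precision (standard deviation of those differences). A sketch with invented systolic readings in mmHg, not the study's data:

```python
def bias_precision(ref, alt):
    """Method-comparison statistics: bias is the mean difference
    (alternate minus reference) and precision is the sample SD of
    those differences."""
    diffs = [a - r for r, a in zip(ref, alt)]
    n = len(diffs)
    bias = sum(diffs) / n
    var = sum((d - bias) ** 2 for d in diffs) / (n - 1)
    return bias, var ** 0.5

# Invented systolic readings (mmHg): reference-standard upper-arm cuff
# vs. forearm readings in the same patients.
upper_arm = [128, 135, 142, 120, 150]
forearm = [136, 144, 149, 129, 161]
bias, precision = bias_precision(upper_arm, forearm)
```

    A clearly positive bias, as the study found for forearm measurements, indicates the alternate site systematically overestimates the reference BP.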

  13. Cautionary Note on Reporting Eta-Squared Values from Multifactor ANOVA Designs

    ERIC Educational Resources Information Center

    Pierce, Charles A.; Block, Richard A.; Aguinis, Herman

    2004-01-01

    The authors provide a cautionary note on reporting accurate eta-squared values from multifactor analysis of variance (ANOVA) designs. They reinforce the distinction between classical and partial eta-squared as measures of strength of association. They provide examples from articles published in premier psychology journals in which the authors…
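
    The distinction the authors reinforce is arithmetic: classical eta-squared divides an effect's sum of squares by the total sum of squares, whereas partial eta-squared divides by effect-plus-error only, so in multifactor designs the partial value is at least as large. A sketch with hypothetical sums of squares:

```python
def eta_squared(ss_effect, ss_total):
    """Classical eta-squared: effect variance as a share of *total* variance."""
    return ss_effect / ss_total

def partial_eta_squared(ss_effect, ss_error):
    """Partial eta-squared: effect variance relative to effect + error only."""
    return ss_effect / (ss_effect + ss_error)

# Hypothetical sums of squares from a two-factor ANOVA.
ss_a, ss_b, ss_ab, ss_error = 40.0, 25.0, 10.0, 125.0
ss_total = ss_a + ss_b + ss_ab + ss_error  # 200.0

classical_a = eta_squared(ss_a, ss_total)        # 40/200
partial_a = partial_eta_squared(ss_a, ss_error)  # 40/165
```

    Reporting the larger partial value as if it were the classical one inflates the apparent strength of association, which is exactly the reporting error the article cautions against.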

  14. Feasibility study for image guided kidney surgery: assessment of required intraoperative surface for accurate image to physical space registrations

    NASA Astrophysics Data System (ADS)

    Benincasa, Anne B.; Clements, Logan W.; Herrell, S. Duke; Chang, Sam S.; Cookson, Michael S.; Galloway, Robert L.

    2006-03-01

    Currently, the removal of kidney tumor masses uses only direct or laparoscopic visualization, resulting in prolonged procedure and recovery times and reduced clear margins. Applying current image-guided surgery (IGS) techniques, such as those used in liver cases, to kidney resections (nephrectomies) presents a number of complications. Most notable is the limited field of view of the intraoperative kidney surface, which constrains the ability to obtain a surface delineation that is geometrically descriptive enough to drive a surface-based registration. Two different phantom orientations were used to model the laparoscopic and traditional partial nephrectomy views. For the laparoscopic view, fiducial point sets were compiled from a CT image volume using anatomical features such as the renal artery and vein. For the traditional view, markers attached to the phantom set-up were used for fiducials and targets. The fiducial points were used to perform a point-based registration, which then served as a guide for the surface-based registration. Laser range scanner (LRS) obtained surfaces were registered to each phantom surface using a rigid iterative closest point algorithm. Subsets of each phantom's LRS surface were used in a robustness test to determine the predictability of their registrations to transform the entire surface. Results from both orientations suggest that about half of the kidney's surface needs to be obtained intraoperatively for accurate registrations between the image surface and the LRS surface, indicating the obtained kidney surfaces were geometrically descriptive enough to perform accurate registrations. This preliminary work paves the way for further development of kidney IGS systems.

  15. Machine learning of accurate energy-conserving molecular force fields.

    PubMed

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof T; Müller, Klaus-Robert

    2017-05-01

    Using conservation of energy-a fundamental property of closed classical and quantum mechanical systems-we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol-1 for energies and 1 kcal mol-1 Å-1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods.
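
    The conservative-field property GDML enforces means the forces are the exact negative gradient of a single energy surface, so trajectories integrated with them conserve total energy. A toy 1-D sketch of that check, in which a harmonic potential stands in for the learned surface (this is not the GDML model itself):

```python
def energy(x):
    """Toy 1-D potential standing in for a learned energy surface."""
    return 0.5 * 2.0 * x * x  # harmonic well, k = 2.0

def force(x, h=1e-5):
    """Force as the negative gradient of the energy (central difference);
    deriving forces from one scalar energy is what makes the field
    conservative, the property GDML builds in by construction."""
    return -(energy(x + h) - energy(x - h)) / (2 * h)

# Velocity-Verlet integration: with a conservative force field the
# total energy should stay essentially constant over the trajectory.
x, v, m, dt = 1.0, 0.0, 1.0, 1e-3
e0 = energy(x) + 0.5 * m * v * v
for _ in range(5000):
    a = force(x) / m
    x += v * dt + 0.5 * a * dt * dt
    v += 0.5 * (a + force(x) / m) * dt
drift = abs(energy(x) + 0.5 * m * v * v - e0)
```

    Forces fit independently of an energy (a non-conservative field) would show secular energy drift in the same test; the gradient-domain construction rules that out.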

  16. Machine learning of accurate energy-conserving molecular force fields

    PubMed Central

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076

  17. Fast and accurate reference-free alignment of subtomograms.

    PubMed

    Chen, Yuxiang; Pfeffer, Stefan; Hrabe, Thomas; Schuller, Jan Michael; Förster, Friedrich

    2013-06-01

    In cryoelectron tomography, alignment and averaging of subtomograms, each depicting the same macromolecule, improves the resolution compared to the individual subtomograms. Major challenges of subtomogram alignment are noise enhancement due to overfitting, the bias of an initial reference in the iterative alignment process, and the computational cost of processing increasingly large amounts of data. Here, we propose an efficient and accurate alignment algorithm via a generalized convolution theorem, which allows computation of a constrained correlation function using spherical harmonics. This formulation increases the computational speed of rotational matching dramatically compared to rotation search in Cartesian space without sacrificing accuracy, in contrast to other spherical-harmonic-based approaches. Using this sampling method, a reference-free alignment procedure is proposed to tackle reference bias and overfitting, which also includes contrast transfer function correction by Wiener filtering. Application of the method to simulated data allowed us to obtain resolutions near the ground truth. For two experimental datasets, ribosomes from yeast lysate and purified 20S proteasomes, we achieved reconstructions of approximately 20 Å and 16 Å, respectively. The software is ready-to-use and made public to the community. Copyright © 2013 Elsevier Inc. All rights reserved.

  18. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kearny, C.H.

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient ''dry-bucket'' in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: ''The KFM, A Homemade Yet Accurate and Dependable Fallout Meter'' was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these instructions, the builder can verify

  19. Accurate Vehicle Location System Using RFID, an Internet of Things Approach.

    PubMed

    Prinsloo, Jaco; Malekian, Reza

    2016-06-04

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved.

  20. Accurate Vehicle Location System Using RFID, an Internet of Things Approach

    PubMed Central

    Prinsloo, Jaco; Malekian, Reza

    2016-01-01

    Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and the Global system for Mobile communication (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. PMID:27271638

  1. Preliminary results of an attempt to predict over apron occupational exposure of cardiologists from cardiac fluoroscopy procedures based on DAP (dose area product) values.

    PubMed

    Toossi, Mohammad Taghi Bahreyni; Mehrpouyan, Mohammad; Nademi, Hossein; Fardid, Reza

    2015-03-01

    This study is an effort to propose a mathematical relation between the occupational exposure measured by a dosimeter worn on a lead apron in the chest region of a cardiologist and the dose area product (DAP) recorded by a meter attached to the X-ray tube. We aimed to determine factors by which DAP values attributed to patient exposure could be converted to the over-apron entrance surface air kerma incurred by cardiologists during an angiographic procedure. A Rando phantom representing a patient was exposed by an X-ray tube from 77 pre-defined directions. The DAP value for each exposure angle was recorded. Cardiologist exposure was measured by a Radcal ionization chamber 10X5-180 positioned on a second phantom representing the physician. The exposure conversion factor was determined as the quotient of the over-apron exposure by the DAP value. To verify the validity of this method, the over-apron exposure of a cardiologist was measured using the ionization chamber while performing coronary angiography procedures on 45 patients weighing on average 75 ± 5 kg. DAP values for the corresponding procedures were also obtained. Conversion factors obtained from the phantom exposures were applied to the patient DAP values to calculate physician exposure. Mathematical analysis of our results leads us to conclude that a linear relationship exists between the two sets of data: (a) cardiologist exposure measured directly by the Radcal chamber and DAP values recorded by the X-ray machine (R² = 0.88), and (b) directly measured specialist exposure and exposure estimated from DAP values (R² = 0.91). The results demonstrate that cardiologist occupational exposure can be accurately derived from patient DAP data.
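
    The linear relationships reported (R² = 0.88 and 0.91) can be reproduced with an ordinary least-squares fit; the DAP and air-kerma numbers below are invented for illustration and are not the study's measurements:

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = a + b*x and the coefficient of
    determination R**2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sxx = sum((xi - mx) ** 2 for xi in x)
    b = sxy / sxx
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return a, b, 1 - ss_res / ss_tot

# Invented per-procedure data: patient DAP (Gy*cm^2) against the
# cardiologist's measured over-apron air kerma (uGy).
dap = [20.0, 35.0, 50.0, 65.0, 80.0]
kerma = [11.0, 18.5, 24.0, 33.0, 39.5]
a, b, r2 = linear_fit(dap, kerma)
```

    The slope b plays the role of the conversion factor: multiplying a procedure's DAP by b estimates the cardiologist's over-apron exposure without a dedicated dosimeter reading.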

  2. Extracting Time-Accurate Acceleration Vectors From Nontrivial Accelerometer Arrangements.

    PubMed

    Franck, Jennifer A; Blume, Janet; Crisco, Joseph J; Franck, Christian

    2015-09-01

    Sports-related concussions are of significant concern in many impact sports, and their detection relies on accurate measurements of the head kinematics during impact. Among the most prevalent recording technologies are videography, and more recently, the use of single-axis accelerometers mounted in a helmet, such as the HIT system. Successful extraction of the linear and angular impact accelerations depends on an accurate analysis methodology governed by the equations of motion. Current algorithms are able to estimate the magnitude of acceleration and hit location, but make assumptions about the hit orientation and are often limited in the position and/or orientation of the accelerometers. The newly formulated algorithm presented in this manuscript accurately extracts the full linear and rotational acceleration vectors from a broad arrangement of six single-axis accelerometers directly from the governing set of kinematic equations. The new formulation linearizes the nonlinear centripetal acceleration term with a finite-difference approximation and provides a fast and accurate solution for all six components of acceleration over long time periods (>250 ms). The approximation of the nonlinear centripetal acceleration term provides an accurate computation of the rotational velocity as a function of time and allows for reconstruction of a multiple-impact signal. Furthermore, the algorithm determines the impact location and orientation and can distinguish between glancing, high rotational velocity impacts, or direct impacts through the center of mass. Results are shown for ten simulated impact locations on a headform geometry computed with three different accelerometer configurations in varying degrees of signal noise. Since the algorithm does not require simplifications of the actual impacted geometry, the impact vector, or a specific arrangement of accelerometer orientations, it can be easily applied to many impact investigations in which accurate kinematics need

  3. Partial Adaptation of Obtained and Observed Value Signals Preserves Information about Gains and Losses

    PubMed Central

    Baddeley, Michelle; Tobler, Philippe N.; Schultz, Wolfram

    2016-01-01

    Given that the range of rewarding and punishing outcomes of actions is large but neural coding capacity is limited, efficient processing of outcomes by the brain is necessary. One mechanism to increase efficiency is to rescale neural output to the range of outcomes expected in the current context, and process only experienced deviations from this expectation. However, this mechanism comes at the cost of not being able to discriminate between unexpectedly low losses when times are bad versus unexpectedly high gains when times are good. Thus, too much adaptation would result in disregarding information about the nature and absolute magnitude of outcomes, preventing learning about the longer-term value structure of the environment. Here we investigate the degree of adaptation in outcome coding brain regions in humans, for directly experienced outcomes and observed outcomes. We scanned participants while they performed a social learning task in gain and loss blocks. Multivariate pattern analysis showed two distinct networks of brain regions adapt to the most likely outcomes within a block. Frontostriatal areas adapted to directly experienced outcomes, whereas lateral frontal and temporoparietal regions adapted to observed social outcomes. Critically, in both cases, adaptation was incomplete and information about whether the outcomes arose in a gain block or a loss block was retained. Univariate analysis confirmed incomplete adaptive coding in these regions but also detected nonadapting outcome signals. Thus, although neural areas rescale their responses to outcomes for efficient coding, they adapt incompletely and keep track of the longer-term incentives available in the environment. SIGNIFICANCE STATEMENT Optimal value-based choice requires that the brain precisely and efficiently represents positive and negative outcomes. One way to increase efficiency is to adapt responding to the most likely outcomes in a given context. However, too strong adaptation would result…

  4. [Stereotactic biopsy in the accurate diagnosis of lesions in the brain stem and deep brain].

    PubMed

    Qin, F; Huang, Z C; Cai, M Q; Xu, X F; Lu, T T; Dong, Q; Wu, A M; Lu, Z Z; Zhao, C; Guo, Y

    2018-06-12

    Objective: To investigate the value of stereotactic biopsy in the accurate diagnosis of lesions in the brain stem and deep brain. Methods: A total of 29 consecutive patients who underwent stereotactic biopsy of brainstem and deep brain lesions between May 2012 and January 2018 were retrospectively reviewed. The Cosman-Roberts-Wells (CRW) stereotactic frame was installed under local anesthesia. Thin-layer CT and MRI scanning were performed. Target coordinates were calculated by inputting CT-MRI data into the Radionics surgical planning system. The individualized puncture path was designed according to the location of the lesions and the characteristics of the image. Target distributions were as follows: 12 cases in the midbrain or pons, 2 cases in the internal capsule, 3 cases in the thalamus, and 12 cases in the basal ganglia. The biopsy samples were used for further pathological and/or genetic diagnosis. Results: Twenty-eight of the 29 cases (96.6%) were diagnosed accurately by histopathology and genomic examination following stereotactic biopsy. Pathological results were as follows: 8 cases of lymphoma, 7 cases of glioma, 4 cases of demyelination, 2 cases of germ cell tumor, 2 cases of metastatic tumor, 1 case of cerebral sparganosis, 1 case of tuberculous granuloma, 1 case of hereditary prion disease, 1 case of glial hyperplasia, and 1 case of leukemia. The accurate diagnosis of one case required a combination of histopathology and genomic examination. The diagnosis remained undefined in 1 case (3.45%) after biopsy. After biopsy, there were 2 cases (6.9%) of symptomatic slight hemorrhage, 1 case (3.45%) of symptomatic severe hemorrhage, and 1 case (3.45%) of permanent neurological dysfunction. No one died because of surgery or surgical complications. Conclusions: Stereotactic biopsy is fast, safe, and minimally invasive. It is an ideal strategy for the accurate diagnosis of lesions in the brain stem and deep brain.

  5. Accurate reliability analysis method for quantum-dot cellular automata circuits

    NASA Astrophysics Data System (ADS)

    Cui, Huanqing; Cai, Li; Wang, Sen; Liu, Xiaoqiang; Yang, Xiaokuo

    2015-10-01

    The probabilistic transfer matrix (PTM) is a widely used model in circuit reliability research. However, the PTM model cannot reflect the impact of input signals on reliability, so it does not fully conform to the mechanism of the novel field-coupled nanoelectronic device known as quantum-dot cellular automata (QCA). It is therefore difficult to obtain accurate results when the PTM model is used to analyze the reliability of QCA circuits. To solve this problem, we present fault tree models of QCA fundamental devices for different input signals. The binary decision diagram (BDD) is then used to quantitatively investigate the reliability of two QCA XOR gates based on the presented models. By employing the fault tree models, the impact of input signals on reliability can be identified clearly, and the crucial components of a circuit can be located precisely from the importance values (IVs) of the components. This method therefore contributes to the construction of reliable QCA circuits.
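    The flavor of an exact fault-tree evaluation can be sketched generically (a toy structure function and made-up failure probabilities, not the QCA device models from the paper):

```python
# Toy fault-tree evaluation by exhaustive enumeration. Components fail
# independently; the top event "output wrong" here is (c1 AND c2) OR c3,
# an invented structure used purely for illustration.
from itertools import product

def top_event(c1, c2, c3):
    return (c1 and c2) or c3

def failure_probability(p_fail, top):
    prob = 0.0
    for states in product([False, True], repeat=len(p_fail)):
        if top(*states):
            weight = 1.0
            for failed, p in zip(states, p_fail):
                weight *= p if failed else (1.0 - p)
            prob += weight
    return prob

# c3 dominates: P = P(c3) + P(c1)P(c2)(1 - P(c3)) = 0.0010999
p = failure_probability([0.01, 0.01, 0.001], top_event)
```

    Importance values fall out of the same machinery, e.g. by comparing the top-event probability with a component forced to fail versus forced to work.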

  6. Working harder to obtain more snack foods when wanting to eat less.

    PubMed

    Giesen, Janneke C A H; Havermans, Remco C; Nederkoorn, Chantal; Strafaci, Silvana; Jansen, Anita

    2009-01-01

    This study investigates individual differences in the reinforcing value of snack food. More specifically, it was investigated whether differences in restraint status are associated with differences in working for high-caloric snack food. Thirty-six unrestrained non-dieters, twenty restrained non-dieters and fifteen current dieters performed a concurrent schedules task in which they had the option to work for points for either snack food or fruit and vegetables. By progressively increasing the "price" of the snack foods (i.e., the amount of work required to obtain extra snack points) the relative reinforcing value of snack food was determined. As hypothesized, restrained non-dieters worked harder and current dieters worked less hard to obtain snack food as compared to unrestrained non-dieters.

  7. Accurate physical laws can permit new standard units: The two laws F→=ma→ and the proportionality of weight to mass

    NASA Astrophysics Data System (ADS)

    Saslow, Wayne M.

    2014-04-01

    Three common approaches to F→=ma→ are: (1) as an exactly true definition of force F→ in terms of measured inertial mass m and measured acceleration a→; (2) as an exactly true axiom relating measured values of a→, F→ and m; and (3) as an imperfect but accurately true physical law relating measured a→ to measured F→, with m an experimentally determined, matter-dependent constant, in the spirit of the resistance R in Ohm's law. In the third case, the natural units are those of a→ and F→, where a→ is normally specified using distance and time as standard units, and F→ from a spring scale as a standard unit; thus mass units are derived from force, distance, and time units such as newtons, meters, and seconds. The present work develops the third approach when one includes a second physical law (again, imperfect but accurate)—that balance-scale weight W is proportional to m—and the fact that balance-scale measurements of relative weight are more accurate than those of absolute force. When distance and time also are more accurately measurable than absolute force, this second physical law permits a shift to standards of mass, distance, and time units, such as kilograms, meters, and seconds, with the unit of force—the newton—a derived unit. However, were force and distance more accurately measurable than time (e.g., time measured with an hourglass), this second physical law would permit a shift to standards of force, mass, and distance units such as newtons, kilograms, and meters, with the unit of time—the second—a derived unit. Therefore, the choice of the most accurate standard units depends both on what is most accurately measurable and on the accuracy of physical law.
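    The third approach can be made concrete with a short worked derivation (a sketch of the reasoning, not taken from the article):

```latex
% With force, distance, and time as standards (N, m, s), F = ma makes mass
% a derived quantity:
m = \frac{F}{a} \quad\Rightarrow\quad 1\,\mathrm{kg} \equiv 1\,\mathrm{N\,s^{2}/m}.
% The second law, W \propto m, lets a balance compare masses via weights alone:
\frac{m_1}{m_2} = \frac{W_1}{W_2},
% so once a standard mass is adopted (kg, m, s as standards), the unit of
% force becomes derived instead:
1\,\mathrm{N} \equiv 1\,\mathrm{kg\,m/s^{2}}.
```

    Either assignment of standard versus derived units is internally consistent; as the abstract argues, the better choice is dictated by which quantities are most accurately measurable.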

  8. Accurate millimetre and submillimetre rest frequencies for cis- and trans-dithioformic acid, HCSSH

    NASA Astrophysics Data System (ADS)

    Prudenzano, D.; Laas, J.; Bizzocchi, L.; Lattanzi, V.; Endres, C.; Giuliano, B. M.; Spezzano, S.; Palumbo, M. E.; Caselli, P.

    2018-04-01

    Context. A better understanding of sulphur chemistry is needed to solve the interstellar sulphur depletion problem. One way to achieve this goal is to study new S-bearing molecules in the laboratory, obtaining accurate rest frequencies for astronomical searches. We focus on dithioformic acid, HCSSH, the sulphur analogue of formic acid. Aims: The aim of this study is to provide an accurate line list of the trans and cis isomers of HCSSH in their electronic ground state, together with a comprehensive centrifugal distortion analysis based on an extension of the measurements into the millimetre and submillimetre range. Methods: We studied the two isomers in the laboratory using an absorption spectrometer employing the frequency-modulation technique. The molecules were produced directly within a free-space cell by glow discharge of a gas mixture. We measured lines belonging to the electronic ground state up to 478 GHz, with a total of 204 and 139 new rotational transitions for the trans and cis isomers, respectively. The final dataset also includes lines in the centimetre range available from the literature. Results: The extension of the measurements into the mm and submm range leads to an accurate set of rotational and centrifugal distortion parameters. This allows us to predict frequencies with estimated uncertainties as low as 5 kHz at 1 mm wavelength. Hence, the new dataset provided by this study can be used for astronomical searches. Frequency lists are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/612/A56
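    How rotational constants turn into predicted rest frequencies can be illustrated with the simplest case (a linear rigid rotor with an arbitrary B value; HCSSH itself is an asymmetric top and needs a full rotational plus centrifugal-distortion Hamiltonian):

```python
# Toy rigid-rotor line prediction: f(J -> J+1) = 2 * B * (J + 1).
# The B value below is arbitrary and for illustration only.
def line_frequency_mhz(b_mhz, j):
    """Frequency (MHz) of the J -> J+1 rotational transition of a linear rotor."""
    return 2.0 * b_mhz * (j + 1)

# With B = 5000 MHz, the first three lines fall at 10, 20, and 30 GHz:
freqs = [line_frequency_mhz(5000.0, j) for j in range(3)]
```

    Fitting measured lines to such a model (plus distortion terms) is what yields the spectroscopic parameters and the few-kHz prediction uncertainties reported above.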

  9. Real-Time and Accurate Identification of Single Oligonucleotide Photoisomers via an Aerolysin Nanopore.

    PubMed

    Hu, Zheng-Li; Li, Zi-Yuan; Ying, Yi-Lun; Zhang, Junji; Cao, Chan; Long, Yi-Tao; Tian, He

    2018-04-03

    Identification of the configuration of a photoresponsive oligonucleotide plays an important role in the ingenious design of DNA nanomolecules and nanodevices. Due to the limited resolution and sensitivity of present methods, it remains a challenge to determine the accurate configuration of photoresponsive oligonucleotides, much less a precise description of their photoconversion process. Here, we used an aerolysin (AeL) nanopore-based confined space for real-time determination and quantification of the absolute cis/trans configuration of each azobenzene-modified oligonucleotide (Azo-ODN) with single-molecule resolution. The two completely separated current distributions with narrow peak widths at half height (<0.62 pA) are assigned to the cis/trans-Azo-ODN isomers, respectively. Due to the high current sensitivity, each isomer of Azo-ODN could be unambiguously identified, which gives accurate photostationary conversion values of 82.7% for trans-to-cis under UV irradiation and 82.5% for cis-to-trans under vis irradiation. Further real-time kinetic evaluation reveals that the photoresponsive rate constants of Azo-ODN from trans-to-cis and cis-to-trans are 0.43 and 0.20 min-1, respectively. This study will promote the sophisticated design of photoresponsive ODN to achieve an efficient and applicable photocontrollable process.
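    The reported rate constants imply simple first-order conversion kinetics, which can be sketched as follows (an illustrative model consistent with the numbers above, not the authors' analysis code):

```python
import math

# First-order approach to the photostationary state:
# fraction converted at time t is f_pss * (1 - exp(-k * t)).
def converted_fraction(f_pss, k_per_min, t_min):
    return f_pss * (1.0 - math.exp(-k_per_min * t_min))

# Example: 5 min of UV irradiation with f_pss = 0.827 and k = 0.43 min-1
# (values taken from the abstract) gives most of the trans-to-cis conversion.
f = converted_fraction(0.827, 0.43, 5.0)
```

    The same expression with k = 0.20 min-1 and f_pss = 0.825 describes the slower cis-to-trans direction under visible light.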

  10. Robust and Accurate Anomaly Detection in ECG Artifacts Using Time Series Motif Discovery

    PubMed Central

    Sivaraks, Haemwaan

    2015-01-01

    Electrocardiogram (ECG) anomaly detection is an important technique for detecting dissimilar heartbeats, helping to identify abnormal ECGs before the diagnosis process. Currently available ECG anomaly detection methods, ranging from academic research to commercial ECG machines, still suffer from a high false alarm rate because they cannot differentiate ECG artifacts from real ECG signals, especially when the artifacts are similar to ECG signals in terms of shape and/or frequency. This problem leads to high vigilance demands on physicians and a misinterpretation risk for nonspecialists. This work therefore proposes a novel anomaly detection technique that is highly robust and accurate in the presence of ECG artifacts and can effectively reduce the false alarm rate. Expert knowledge from cardiologists and a motif discovery technique are utilized in our design, and every step of the algorithm conforms to the interpretation of cardiologists. Our method can be applied to both single-lead and multilead ECGs. Our experimental results on real ECG datasets were interpreted and evaluated by cardiologists. The proposed algorithm can mostly achieve 100% accuracy of detection (AoD), sensitivity, specificity, and positive predictive value with a 0% false alarm rate. The results demonstrate that our proposed method is highly accurate and robust to artifacts compared with competitive anomaly detection methods. PMID:25688284
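    The core of motif/discord-style anomaly detection can be sketched in a few lines (a generic discord search on a toy signal, not the authors' cardiologist-guided algorithm):

```python
# Discord-style anomaly scoring: each window's score is its distance to the
# nearest non-overlapping window; the highest-scoring window is the anomaly
# candidate. The signal below is a repeating pattern with one corrupted beat.
def discord_index(series, w):
    n = len(series) - w + 1
    windows = [series[i:i + w] for i in range(n)]

    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    scores = []
    for i in range(n):
        # exclude trivial (overlapping) matches when searching for neighbors
        nearest = min(dist(windows[i], windows[j])
                      for j in range(n) if abs(i - j) >= w)
        scores.append(nearest)
    return max(range(n), key=scores.__getitem__)

sig = [0, 1, 0, -1] * 4 + [0, 5, 0, -1] + [0, 1, 0, -1] * 4
idx = discord_index(sig, 4)  # window covering the corrupted sample (index 17)
```

    Real methods add normalization, efficient neighbor search, and domain rules to separate true anomalies from artifacts, but the scoring idea is the same.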

  11. 12 CFR 703.11 - Valuing securities.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 12 Banks and Banking 7 2014-01-01 2014-01-01 false Valuing securities. 703.11 Section 703.11 Banks... DEPOSIT ACTIVITIES § 703.11 Valuing securities. (a) Before purchasing or selling a security, a Federal credit union must obtain either price quotations on the security from at least two broker-dealers or a...

  12. Effects of b-value and number of gradient directions on diffusion MRI measures obtained with Q-ball imaging

    NASA Astrophysics Data System (ADS)

    Schilling, Kurt G.; Nath, Vishwesh; Blaber, Justin; Harrigan, Robert L.; Ding, Zhaohua; Anderson, Adam W.; Landman, Bennett A.

    2017-02-01

    High-angular-resolution diffusion-weighted imaging (HARDI) MRI acquisitions have become common for use with higher order models of diffusion. Despite successes in resolving complex fiber configurations and probing microstructural properties of brain tissue, there is no common consensus on the optimal b-value and number of diffusion directions to use for these HARDI methods. While this question has been addressed by analysis of the diffusion-weighted signal directly, it is unclear how this translates to the information and metrics derived from the HARDI models themselves. Using a high angular resolution data set acquired at a range of b-values, and repeated 11 times on a single subject, we study how the b-value and number of diffusion directions impacts the reproducibility and precision of metrics derived from Q-ball imaging, a popular HARDI technique. We find that Q-ball metrics associated with tissue microstructure and white matter fiber orientation are sensitive to both the number of diffusion directions and the spherical harmonic representation of the Q-ball, and often are biased when under sampled. These results can advise researchers on appropriate acquisition and processing schemes, particularly when it comes to optimizing the number of diffusion directions needed for metrics derived from Q-ball imaging.
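    As background for how the b-value enters the measured data, the simplest diffusion-MRI signal model can be written down directly (a textbook monoexponential relation; Q-ball and other HARDI models build far richer orientation structure on top of this):

```python
import math

# Monoexponential diffusion attenuation: S(b) = S0 * exp(-b * D),
# with b in s/mm^2 and diffusivity D in mm^2/s.
def dwi_signal(s0, b, d):
    return s0 * math.exp(-b * d)

# Typical white-matter diffusivity ~0.7e-3 mm^2/s at b = 1000 s/mm^2
# attenuates the signal to roughly half its b = 0 value.
s = dwi_signal(1.0, 1000.0, 0.7e-3)
```

    Raising b increases angular contrast between fiber populations but lowers the signal-to-noise ratio, which is exactly the trade-off the reproducibility study above quantifies.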

  13. Proteogenomics produces comprehensive and highly accurate protein-coding gene annotation in a complete genome assembly of Malassezia sympodialis

    PubMed Central

    Tellgren-Roth, Christian; Baudo, Charles D.; Kennell, John C.; Sun, Sheng; Billmyre, R. Blake; Schröder, Markus S.; Andersson, Anna; Holm, Tina; Sigurgeirsson, Benjamin; Wu, Guangxi; Sankaranarayanan, Sundar Ram; Siddharthan, Rahul; Sanyal, Kaustuv; Lundeberg, Joakim; Nystedt, Björn; Boekhout, Teun; Dawson, Thomas L.; Heitman, Joseph

    2017-01-01

    Abstract Complete and accurate genome assembly and annotation is a crucial foundation for comparative and functional genomics. Despite this, few complete eukaryotic genomes are available, and genome annotation remains a major challenge. Here, we present a complete genome assembly of the skin commensal yeast Malassezia sympodialis and demonstrate how proteogenomics can substantially improve gene annotation. Through long-read DNA sequencing, we obtained a gap-free genome assembly for M. sympodialis (ATCC 42132), comprising eight nuclear and one mitochondrial chromosome. We also sequenced and assembled four M. sympodialis clinical isolates, and showed their value for understanding Malassezia reproduction by confirming four alternative allele combinations at the two mating-type loci. Importantly, we demonstrated how proteomics data could be readily integrated with transcriptomics data in standard annotation tools. This increased the number of annotated protein-coding genes by 14% (from 3612 to 4113), compared to using transcriptomics evidence alone. Manual curation further increased the number of protein-coding genes by 9% (to 4493). All of these genes have RNA-seq evidence and 87% were confirmed by proteomics. The M. sympodialis genome assembly and annotation presented here are at a quality so far achieved for only a few eukaryotic organisms, and constitute an important reference for future host-microbe interaction studies. PMID:28100699

  14. E/N effects on K0 values revealed by high precision measurements under low field conditions

    NASA Astrophysics Data System (ADS)

    Hauck, Brian C.; Siems, William F.; Harden, Charles S.; McHugh, Vincent M.; Hill, Herbert H.

    2016-07-01

    Ion mobility spectrometry (IMS) is used to detect chemical warfare agents, explosives, and narcotics. While IMS has a low rate of false positives, their occurrence causes the loss of time and money as the alarm is verified. Because numerous variables affect the reduced mobility (K0) of an ion, wide detection windows are required in order to ensure a low false negative response rate. Wide detection windows, however, reduce response selectivity, and interferents with similar K0 values may be mistaken for targeted compounds and trigger a false positive alarm. Detection windows could be narrowed if reference K0 values were accurately known for specific instrumental conditions. Unfortunately, there is a lack of confidence in the literature values due to discrepancies in the reported K0 values and their lack of reported error. This creates the need for accurate control and measurement of each variable affecting ion mobility, as well as for a central, accurate IMS database for reference and calibration. A new ion mobility spectrometer has been built that reduces the error of the measurements affecting K0 by an order of magnitude, to less than ±0.2%. Precise measurements of ±0.002 cm2 V-1 s-1 or better have been produced and, as a result, an unexpected relationship between K0 and the electric field to number density ratio (E/N) has been discovered, in which the K0 values of ions decreased as a function of E/N along a second-degree polynomial trend line towards an apparent asymptote at approximately 4 Td.
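    The reduced mobility itself is obtained by normalizing the measured mobility to standard temperature and pressure (the textbook IMS relation, not code from the paper):

```python
# Reduced-mobility normalization: K0 = K * (273.15 / T) * (P / 760),
# with the measured mobility K in cm^2 V-1 s-1, T in kelvin, P in torr.
def reduced_mobility(k_cm2_vs, temp_k, pressure_torr):
    return k_cm2_vs * (273.15 / temp_k) * (pressure_torr / 760.0)

# Example: K = 2.10 cm^2 V-1 s-1 measured at 298.15 K and 700 torr.
k0 = reduced_mobility(2.10, 298.15, 700.0)
```

    Small errors in T and P propagate directly into K0 through this formula, which is why the instrument described above controls and measures every such variable so tightly.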

  15. Molecular Simulation of the Free Energy for the Accurate Determination of Phase Transition Properties of Molecular Solids

    NASA Astrophysics Data System (ADS)

    Sellers, Michael; Lisal, Martin; Brennan, John

    2015-06-01

    Investigating the ability of a molecular model to accurately represent a real material is crucial to model development and use. When the model simulates materials in extreme conditions, one such property worth evaluating is the phase transition point. However, phase transitions are often overlooked or approximated because of difficulty or inaccuracy when simulating them. Techniques such as super-heating or super-squeezing a material to induce a phase change suffer from inherent timescale limitations leading to "over-driving," and dual-phase simulations require many long-time runs to seek out what frequently results in an inexact location of phase-coexistence. We present a compilation of methods for the determination of solid-solid and solid-liquid phase transition points through the accurate calculation of the chemical potential. The methods are applied to the Smith-Bharadwaj atomistic potential's representation of cyclotrimethylene trinitramine (RDX) to accurately determine its melting point (Tm) and the alpha to gamma solid phase transition pressure. We also determine Tm for a coarse-grain model of RDX, and compare its value to experiment and atomistic counterpart. All methods are employed via the LAMMPS simulator, resulting in 60-70 simulations that total 30-50 ns. Approved for public release. Distribution is unlimited.
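    The final step of such free-energy methods can be sketched simply (an illustration with made-up numbers, not the LAMMPS workflow): the transition point is where the chemical potentials of the two phases cross.

```python
# Locate the melting point as the temperature where mu_solid - mu_liquid
# changes sign, by linear interpolation between two bracketing state points.
def melting_point(t1, t2, dmu1, dmu2):
    """dmu = mu_solid - mu_liquid evaluated at temperatures t1 < t2;
    returns the root of the straight line through the two points."""
    return t1 - dmu1 * (t2 - t1) / (dmu2 - dmu1)

# Solid stable at 450 K (dmu < 0), liquid stable at 500 K (dmu > 0):
tm = melting_point(450.0, 500.0, -0.8, 1.2)  # crossing at 470 K
```

    In practice the chemical potentials come from free-energy calculations at each state point, and tighter brackets or higher-order fits refine the crossing.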

  16. An X-band waveguide measurement technique for the accurate characterization of materials with low dielectric loss permittivity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, Kenneth W., E-mail: kenneth.allen@gtri.gatech.edu; Scott, Mark M.; Reid, David R.

    In this work, we present a new X-band waveguide (WR90) measurement method that permits the broadband characterization of the complex permittivity for low dielectric loss tangent material specimens with improved accuracy. An electrically long polypropylene specimen that partially fills the cross-section is inserted into the waveguide and the transmitted scattering parameter (S21) is measured. The extraction method relies on computational electromagnetic simulations, coupled with a genetic algorithm, to match the experimental S21 measurement. The sensitivity of the technique to sample length was explored by simulating specimen lengths from 2.54 to 15.24 cm, in 2.54 cm increments. Analysis of our simulated data predicts the technique will have the sensitivity to measure loss tangent values on the order of 10^-3 for materials such as polymers with relatively low real permittivity values. The ability to accurately characterize low-loss dielectric material specimens of polypropylene is demonstrated experimentally. The method was validated by excellent agreement with a free-space focused-beam system measurement of a polypropylene sheet. This technique provides the material measurement community with the ability to accurately extract material properties of low-loss material specimens over the entire X-band range. This technique could easily be extended to other frequency bands.
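    The simulation-matching idea can be sketched with a toy inverse problem (everything here is a stand-in: a made-up monotone forward model instead of a waveguide simulation, and a minimal elitist evolutionary search rather than the authors' genetic algorithm):

```python
import random

# Toy permittivity extraction: search for the eps_r whose simulated |S21|
# best matches a "measured" value, using an elitist mutate-and-select loop.
def forward_s21(eps_r):
    return 1.0 / (1.0 + 0.05 * (eps_r - 1.0))  # invented forward model

def fit_permittivity(measured, generations=40, pop=20, seed=1):
    rng = random.Random(seed)
    population = [rng.uniform(1.0, 10.0) for _ in range(pop)]
    for _ in range(generations):
        # rank by mismatch with the measurement, keep the best half,
        # and refill with mutated copies of the survivors
        population.sort(key=lambda e: abs(forward_s21(e) - measured))
        parents = population[:pop // 2]
        population = parents + [max(1.0, p + rng.gauss(0.0, 0.1))
                                for p in parents]
    return population[0]

# Recover a polypropylene-like eps_r ~ 2.25 from its own simulated response:
eps = fit_permittivity(forward_s21(2.25))
```

    A real extraction evaluates the full-wave simulation inside the loop, which is why the genetic algorithm's sample efficiency matters.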

  17. Dipstick spot urine pH does not accurately represent 24-hour urine pH measured by an electrode.

    PubMed

    Omar, Mohamed; Sarkissian, Carl; Jianbo, Li; Calle, Juan; Monga, Manoj

    2016-01-01

    To determine whether spot urine pH measured by dipstick is an accurate representation of 24-hour urine pH measured by an electrode, we retrospectively reviewed the urine pH results of patients who presented to the urology stone clinic. For each patient we recorded the most recent pH result measured by dipstick from a spot urine sample that preceded the result of a 24-hour urine pH measured with a pH electrode. Patients were excluded if there was a change in medications or dietary recommendations or if the two samples were more than 4 months apart. A difference of more than 0.5 pH was considered an inaccurate result. A total of 600 patients were retrospectively reviewed for pH results. The mean difference between the spot urine value and the 24-hour collection value was 0.52±0.45 pH. Higher pH was associated with lower accuracy (p<0.001). The accuracy of spot urine samples in predicting 24-hour pH values was 68.9% for values <5.5, 68.2% for 5.5 to 6.5, and 35% for >6.5. Samples taken more than 75 days apart had only 49% of the accuracy of more recent samples (p<0.002). The overall accuracy was lower than 80% (p<0.001). The influence of diurnal variation was not significant (p=0.588). Spot urine pH by dipstick is not an accurate method for evaluating patients with urolithiasis. Patients with alkaline urine are more prone to error with reliance on spot urine pH.
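    The accuracy criterion used above is easy to state in code (a sketch with invented example readings, not the study's data):

```python
# A dipstick reading counts as accurate when it lies within 0.5 pH units
# of the 24-hour electrode value; accuracy is the fraction of such hits.
def accuracy(dipstick, electrode, tol=0.5):
    hits = sum(1 for d, e in zip(dipstick, electrode) if abs(d - e) <= tol)
    return hits / len(dipstick)

# Example: three of these four paired readings agree within 0.5 pH.
acc = accuracy([5.5, 6.0, 7.0, 6.5], [5.8, 6.9, 7.2, 6.4])  # -> 0.75
```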

  18. Accurate quantum chemical calculations

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  19. Accurate Structural Correlations from Maximum Likelihood Superpositions

    PubMed Central

    Theobald, Douglas L; Wuttke, Deborah S

    2008-01-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method (“PCA plots”) for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology. PMID:18282091
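    The central computation, PCA of an estimated correlation matrix, can be illustrated in the smallest possible case (a closed-form 2x2 example, not the maximum-likelihood machinery of the paper):

```python
import math

# For a 2x2 correlation matrix [[1, r], [r, 1]], the eigenvalues are 1 +/- r;
# the leading mode is (1, 1)/sqrt(2) for r > 0 and (1, -1)/sqrt(2) for r < 0.
def leading_mode_2x2(r):
    eigval = 1.0 + abs(r)
    v = 1.0 / math.sqrt(2.0)
    eigvec = (v, v) if r > 0 else (v, -v)
    return eigval, eigvec

# Two strongly coupled positions (r = 0.8) move together along (1, 1):
val, vec = leading_mode_2x2(0.8)
```

    For real macromolecules the correlation matrix is large and dense, but the principal components extracted are exactly these dominant eigenvector modes, color-coded onto the structure in the "PCA plots".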

  20. Optimization of the parameters for obtaining zirconia-alumina coatings, made by flame spraying from results of numerical simulation

    NASA Astrophysics Data System (ADS)

    Ferrer, M.; Vargas, F.; Peña, G.

    2017-12-01

    The K-Sommerfeld values (K) and the melting percentage (%F) obtained by numerical simulation with the Jets et Poudres software were used to find the spraying parameters for zirconia-alumina coatings deposited by flame spraying, in order to obtain coatings with good morphological and structural properties for use as thermal insulation. The experimental results show the relationship between the Sommerfeld parameter and the porosity of the zirconia-alumina coatings. The lowest porosity is obtained when the K-Sommerfeld value is close to 45 with an oxidant flame; in contrast, when superoxidant flames are used, K values are close to 52, which improves wear resistance.
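    As commonly defined in the droplet-impact and spray literature, the Sommerfeld parameter combines the Weber and Reynolds numbers of the impinging droplet; a hedged sketch of that relation (K = sqrt(We) * Re^(1/4), with placeholder property values, not output from Jets et Poudres):

```python
import math

# Sommerfeld parameter of an impacting droplet from its Weber and Reynolds
# numbers: K = sqrt(We) * Re^(1/4). Inputs are in SI units.
def sommerfeld_k(density, velocity, diameter, surface_tension, viscosity):
    we = density * velocity ** 2 * diameter / surface_tension  # Weber number
    re = density * velocity * diameter / viscosity             # Reynolds number
    return math.sqrt(we) * re ** 0.25
```

    Low K corresponds to gentle deposition and high K to splashing, which is why K correlates with the porosity of the resulting coating.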