Sample records for measurement methods exist

  1. Measuring cognition in teams: a cross-domain review.

    PubMed

    Wildman, Jessica L; Salas, Eduardo; Scott, Charles P R

    2014-08-01

    The purpose of this article is twofold: to provide a critical cross-domain evaluation of team cognition measurement options and to provide novice researchers with practical guidance when selecting a measurement method. A vast selection of measurement approaches exists for measuring team cognition constructs, including team mental models, transactive memory systems, team situation awareness, strategic consensus, and cognitive processes. Empirical studies and theoretical articles were reviewed to identify all of the existing approaches for measuring team cognition. These approaches were evaluated based on the theoretical perspective assumed, constructs studied, resources required, level of obtrusiveness, internal consistency reliability, and predictive validity. The evaluations suggest that all existing methods are viable options from the point of view of reliability and validity, and that there are potential opportunities for cross-domain use. For example, methods traditionally used only to measure mental models may be useful for examining transactive memory and situation awareness. The selection of team cognition measures requires researchers to answer several key questions regarding the theoretical nature of team cognition and the practical feasibility of each method. We provide novice researchers with guidance on how to begin the search for a team cognition measure and suggest several new ideas regarding future measurement research. We provide (1) a broad overview and evaluation of existing team cognition measurement methods, (2) suggestions for new uses of those methods across research domains, and (3) critical guidance for novice researchers looking to measure team cognition.

  2. Improved cosine similarity measures of simplified neutrosophic sets for medical diagnoses.

    PubMed

    Ye, Jun

    2015-03-01

    In pattern recognition and medical diagnosis, the similarity measure is an important mathematical tool. To overcome some disadvantages of existing cosine similarity measures of simplified neutrosophic sets (SNSs) in vector space, this paper proposed improved cosine similarity measures of SNSs based on the cosine function, including single-valued neutrosophic cosine similarity measures and interval neutrosophic cosine similarity measures. Then, weighted cosine similarity measures of SNSs were introduced by taking into account the importance of each element. Further, a medical diagnosis method using the improved cosine similarity measures was proposed to solve medical diagnosis problems with simplified neutrosophic information. The improved measures were compared with existing cosine similarity measures of SNSs by numerical examples to demonstrate their effectiveness and rationality in overcoming some shortcomings of the existing measures in certain cases. In the medical diagnosis method, a proper diagnosis is found from the cosine similarity measures between the symptoms and the considered diseases, each represented by SNSs. The method was then applied to two medical diagnosis problems to show its applicability and effectiveness. Both numerical examples demonstrated that the improved cosine similarity measures of SNSs based on the cosine function can overcome the shortcomings of the existing cosine similarity measures between two vectors in certain cases. In the two diagnosis problems, the various similarity measures of SNSs indicated identical diagnosis results, demonstrating the effectiveness and rationality of the proposed diagnosis method. The improved cosine measures of SNSs can overcome some drawbacks of existing cosine similarity measures of SNSs in vector space, and the resulting diagnosis method is well suited to handling medical diagnosis problems with simplified neutrosophic information. Copyright © 2014 Elsevier B.V. All rights reserved.
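
    For concreteness, here is a minimal sketch of an improved cosine similarity of single-valued neutrosophic sets, assuming the measure takes the cosine of π/2 times the largest componentwise difference of the truth, indeterminacy, and falsity degrees; the paper's exact definition may differ, and the toy symptom/disease data below are invented.

    ```python
    import math

    def improved_cosine_similarity(A, B):
        """Similarity of two single-valued neutrosophic sets A and B, each a
        list of (T, I, F) degrees in [0, 1]. Assumed form: average of
        cos(pi/2 * max componentwise difference) over elements."""
        total = 0.0
        for (ta, ia, fa), (tb, ib, fb) in zip(A, B):
            d = max(abs(ta - tb), abs(ia - ib), abs(fa - fb))
            total += math.cos(math.pi * d / 2.0)
        return total / len(A)

    # Toy diagnosis: choose the disease whose symptom profile is most similar.
    patient = [(0.8, 0.2, 0.1), (0.6, 0.3, 0.1)]
    diseases = {"viral fever": [(0.4, 0.6, 0.0), (0.3, 0.2, 0.5)],
                "malaria":     [(0.7, 0.3, 0.0), (0.7, 0.2, 0.1)]}
    print(max(diseases, key=lambda d: improved_cosine_similarity(patient, diseases[d])))
    ```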

  3. Multiple network alignment via multiMAGNA++.

    PubMed

    Vijayan, Vipin; Milenkovic, Tijana

    2017-08-21

    Network alignment (NA) aims to find a node mapping that identifies topologically or functionally similar network regions between molecular networks of different species. Analogous to genomic sequence alignment, NA can be used to transfer biological knowledge from well- to poorly-studied species between aligned network regions. Pairwise NA (PNA) finds similar regions between two networks, while multiple NA (MNA) can align more than two networks. We focus on MNA. Existing MNA methods aim to maximize total similarity over all aligned nodes (node conservation). They then evaluate alignment quality by measuring the amount of conserved edges, but only after the alignment is constructed. Directly optimizing edge conservation during alignment construction, in addition to node conservation, may result in superior alignments. Thus, we present a novel MNA method called multiMAGNA++ that can achieve this. Indeed, multiMAGNA++ outperforms or is on par with existing MNA methods, while often completing faster. That is, multiMAGNA++ scales well to larger network data and can be parallelized effectively. During method evaluation, we also introduce new MNA quality measures to allow for fairer MNA method comparison than the existing alignment quality measures permit. MultiMAGNA++ code is available on the method's web page at http://nd.edu/~cone/multiMAGNA++/.

  4. Systems and methods for detection of blowout precursors in combustors

    DOEpatents

    Lieuwen, Tim C.; Nair, Suraj

    2006-08-15

    The present invention comprises systems and methods for detecting flame blowout precursors in combustors. The blowout precursor detection system comprises a combustor, a pressure measuring device, and a blowout precursor detection unit. A combustion controller may also be used to control combustor parameters. The methods of the present invention comprise receiving pressure data measured by an acoustic pressure measuring device; performing one or a combination of spectral analysis, statistical analysis, and wavelet analysis on the received pressure data; and determining the existence of a blowout precursor based on such analyses. The spectral, statistical, and wavelet analyses further comprise their respective sub-methods for determining the existence of blowout precursors.
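
    The following sketch illustrates the kind of spectral and statistical analysis of combustor pressure data the patent describes; the frequency band, thresholds, and feature choices are hypothetical placeholders, not the patented sub-methods.

    ```python
    import numpy as np

    def blowout_precursor_flags(pressure, fs, band=(100.0, 500.0),
                                power_thresh=1e-4, kurtosis_thresh=3.5):
        """Flag possible blowout precursors from an acoustic pressure trace:
        a drop in band-limited spectral power and increased intermittency
        (high kurtosis) of the fluctuations. All tuning values here are
        hypothetical."""
        x = pressure - np.mean(pressure)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
        in_band = (freqs >= band[0]) & (freqs <= band[1])
        band_power = np.trapz(psd[in_band], freqs[in_band])
        kurt = np.mean(x ** 4) / np.mean(x ** 2) ** 2
        return band_power < power_thresh, kurt > kurtosis_thresh
    ```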

  5. Measuring Globalization: Existing Methods and Their Implications for Teaching Global Studies and Forecasting

    ERIC Educational Resources Information Center

    Zinkina, Julia; Korotayev, Andrey; Andreev, Aleksey I.

    2013-01-01

    Purpose: The purpose of this paper is to encourage discussions regarding the existing approaches to globalization measurement (taking mainly the form of indices and rankings) and their shortcomings in terms of applicability to developing Global Studies curricula. Another aim is to propose an outline for the globalization measurement methodology…

  6. Physiologic measures of sexual function in women: a review.

    PubMed

    Woodard, Terri L; Diamond, Michael P

    2009-07-01

    To review and describe physiologic measures of assessing sexual function in women. Literature review. Studies that use instruments designed to measure female sexual function. Women participating in studies of female sexual function. Various instruments that measure physiologic features of female sexual function. Appraisal of the various instruments, including their advantages and disadvantages. Many unique physiologic methods of evaluating female sexual function have been developed during the past four decades. Each method has its benefits and limitations. Many physiologic methods exist, but most are not well validated. In addition, there has been an inability to correlate most physiologic measures with subjective measures of sexual arousal. Furthermore, given the complex nature of the sexual response in women, physiologic measures should be considered in the context of other data, including the history, physical examination, and validated questionnaires. Nonetheless, the existence of appropriate physiologic measures is vital to our understanding of female sexual function and dysfunction.

  7. Assessing Species Diversity Using Metavirome Data: Methods and Challenges.

    PubMed

    Herath, Damayanthi; Jayasundara, Duleepa; Ackland, David; Saeed, Isaam; Tang, Sen-Lin; Halgamuge, Saman

    2017-01-01

    Assessing biodiversity is an important step in the study of microbial ecology associated with a given environment. Multiple indices have been used to quantify species diversity, which is a key biodiversity measure. Measuring species diversity of viruses in different environments remains a challenge relative to measuring the diversity of other microbial communities. Metagenomics has played an important role in elucidating viral diversity by conducting metavirome studies; however, metavirome data are of high complexity requiring robust data preprocessing and analysis methods. In this review, existing bioinformatics methods for measuring species diversity using metavirome data are categorised broadly as either sequence similarity-dependent methods or sequence similarity-independent methods. The former includes a comparison of DNA fragments or assemblies generated in the experiment against reference databases for quantifying species diversity, whereas estimates from the latter are independent of the knowledge of existing sequence data. Current methods and tools are discussed in detail, including their applications and limitations. Drawbacks of the state-of-the-art method are demonstrated through results from a simulation. In addition, alternative approaches are proposed to overcome the challenges in estimating species diversity measures using metavirome data.
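
    As a concrete reference for the diversity indices mentioned above, this sketch computes the standard Shannon and Simpson indices from species abundance counts; the abundances are invented.

    ```python
    import math

    def shannon_index(counts):
        """Shannon diversity H' = -sum(p_i * ln p_i) over species abundances."""
        n = sum(counts)
        return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

    def simpson_index(counts):
        """Gini-Simpson diversity 1 - sum(p_i^2): the probability that two
        randomly drawn reads belong to different species."""
        n = sum(counts)
        return 1.0 - sum((c / n) ** 2 for c in counts)

    abundances = [120, 80, 40, 10, 5]  # e.g. contig abundances of viral taxa
    print(shannon_index(abundances), simpson_index(abundances))
    ```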

  8. Broadcasting a Lab Measurement over Existing Conductor Networks

    ERIC Educational Resources Information Center

    Knipp, Peter A.

    2009-01-01

    Students learn about physical laws and the scientific method when they analyze experimental data in a laboratory setting. Three common sources exist for the experimental data that they analyze: (1) "hands-on" measurements by the students themselves, (2) electronic transfer (by downloading a spreadsheet, video, or computer-aided data-acquisition…

  9. A Review of Treatment Adherence Measurement Methods

    ERIC Educational Resources Information Center

    Schoenwald, Sonja K.; Garland, Ann F.

    2013-01-01

    Fidelity measurement is critical for testing the effectiveness and implementation in practice of psychosocial interventions. Adherence is a critical component of fidelity. The purposes of this review were to catalogue adherence measurement methods and assess existing evidence for the valid and reliable use of the scores that they generate and the…

  10. Study on the application of ambient vibration tests to evaluate the effectiveness of seismic retrofitting

    NASA Astrophysics Data System (ADS)

    Liang, Li; Takaaki, Ohkubo; Guang-hui, Li

    2018-03-01

    In recent years, earthquakes have occurred frequently, and the seismic performance of existing school buildings has become particularly important. The main method for improving the seismic resistance of existing buildings is reinforcement; however, few effective methods exist to evaluate the effect of that reinforcement. Ambient vibration measurement experiments were conducted before and after seismic retrofitting using a wireless measurement system, and the changes in vibration characteristics were compared. The changes in acceleration response spectra, natural periods, and vibration modes indicate that the wireless vibration measurement system can be effectively applied to evaluate the effect of seismic retrofitting. The method can evaluate the effect of seismic retrofitting qualitatively; evaluating it quantitatively remains difficult at this stage.

  11. An Analysis of Measured Pressure Signatures From Two Theory-Validation Low-Boom Models

    NASA Technical Reports Server (NTRS)

    Mack, Robert J.

    2003-01-01

    Two wing/fuselage/nacelle/fin concepts were designed to check the validity and the applicability of sonic-boom minimization theory, sonic-boom analysis methods, and the low-boom design methodology in use at the end of the 1980s. Models of these concepts were built, and the pressure signatures they generated were measured in the wind tunnel. The results of these measurements led to three conclusions: (1) the existing methods could adequately predict the sonic-boom characteristics of wing/fuselage/fin(s) configurations if the equivalent area distributions of each component were smooth and continuous; (2) these methods needed revision so that the engine-nacelle volume and the nacelle-wing interference lift disturbances could be accurately predicted; and (3) current nacelle-configuration integration methods had to be updated. With these changes in place, the existing sonic-boom analysis and minimization methods could be effectively applied to supersonic-cruise concepts for acceptable/tolerable sonic-boom overpressures during cruise.

  12. Hybrid recommendation methods in complex networks.

    PubMed

    Fiasconaro, A; Tumminello, M; Nicosia, V; Latora, V; Mantegna, R N

    2015-07-01

    We propose two recommendation methods, based on the appropriate normalization of already existing similarity measures, and on the convex combination of the recommendation scores derived from similarity between users and between objects. We validate the proposed measures on three data sets, and we compare the performance of our methods to other recommendation systems recently proposed in the literature. We show that the proposed similarity measures allow us to attain an improvement of performances of up to 20% with respect to existing nonparametric methods, and that the accuracy of a recommendation can vary widely from one specific bipartite network to another, which suggests that a careful choice of the most suitable method is highly relevant for an effective recommendation on a given system. Finally, we study how an increasing presence of random links in the network affects the recommendation scores, finding that one of the two recommendation algorithms introduced here can systematically outperform the others in noisy data sets.
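
    A minimal sketch of the convex-combination idea, under simplifying assumptions: scores from user-user and object-object similarities are blended with a weight lam. Plain cosine similarity stands in for the paper's normalized measures, and the rating matrix is a toy example.

    ```python
    import numpy as np

    def hybrid_scores(R, lam=0.5):
        """Blend user-based and object-based recommendation scores computed
        on a binary user x object matrix R."""
        def cosine(M):
            norms = np.linalg.norm(M, axis=1, keepdims=True)
            norms[norms == 0] = 1.0
            U = M / norms
            return U @ U.T
        score_u = cosine(R) @ R    # objects liked by users similar to me
        score_o = R @ cosine(R.T)  # objects similar to those I liked
        return lam * score_u + (1.0 - lam) * score_o

    R = np.array([[1, 0, 1, 0],
                  [1, 1, 0, 0],
                  [0, 1, 1, 1]], dtype=float)
    print(hybrid_scores(R))
    ```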

  13. Feature selection using probabilistic prediction of support vector regression.

    PubMed

    Yang, Jian-Bo; Ong, Chong-Jin

    2011-06-01

    This paper presents a new wrapper-based feature selection method for support vector regression (SVR) using its probabilistic predictions. The method computes the importance of a feature by aggregating the difference, over the feature space, of the conditional density functions of the SVR prediction with and without the feature. As the exact computation of this importance measure is expensive, two approximations are proposed. The effectiveness of the measure using these approximations, in comparison to several other existing feature selection methods for SVR, is evaluated on both artificial and real-world problems. The results of the experiments show that the proposed method generally performs better than, or at least as well as, the existing methods, with a notable advantage when the dataset is sparse.

  14. Measuring carbon in forests: current status and future challenges.

    PubMed

    Brown, Sandra

    2002-01-01

    Accurate and precise measurement of carbon in forests is gaining global attention as countries seek to comply with agreements under the UN Framework Convention on Climate Change. Established methods for measuring carbon in forests exist and are best based on permanent sample plots laid out in a statistically sound design. Measurements on trees in these plots can be readily converted to aboveground biomass using either biomass expansion factors or allometric regression equations. A compilation of existing root biomass data for upland forests of the world generated a significant regression equation that can be used to predict root biomass from aboveground biomass alone. Methods for measuring coarse dead wood have been tested in many forest types, but they could be improved if a non-destructive tool for measuring the density of dead wood were developed. Future measurements of carbon storage in forests may rely more on remote sensing data, and new remote data collection technologies are in development.
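
    To make the allometric step concrete, this sketch predicts root biomass from aboveground biomass with a log-log regression of the general form such compilations produce; the coefficients are illustrative placeholders, not the values fitted in the paper.

    ```python
    import math

    def root_biomass(aboveground, a=-1.085, b=0.9256):
        """Predict root biomass (Mg/ha) from aboveground biomass (Mg/ha)
        via ln(root) = a + b * ln(aboveground); a and b are placeholders."""
        return math.exp(a + b * math.log(aboveground))

    for agb in (50.0, 150.0, 300.0):  # Mg/ha
        print(agb, round(root_biomass(agb), 1))
    ```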

  15. A TWO-PROBE METHOD FOR MEASURING WATER CONTENT OF THIN FOREST FLOOR LITTER LAYERS USING TIME DOMAIN REFLECTOMETRY

    EPA Science Inventory

    Few methods exist that allow non-destructive in situ measurement of the water content of forest floor litter layers (Oa, Oe, and Oi horizons). Continuous non-destructive measurement is needed in studies of ecosystem processes because of the relationship between physical structure ...

  16. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but data to validate it did not exist until recently. In this paper, data from repeated ...

  17. Turbulence excited frequency domain damping measurement and truncation effects

    NASA Technical Reports Server (NTRS)

    Soovere, J.

    1976-01-01

    Existing frequency domain modal frequency and damping analysis methods are discussed. The effects of truncation in the Laplace and Fourier transform data analysis methods are described. Methods for eliminating truncation errors from measured damping are presented. Implications of truncation effects in fast Fourier transform analysis are discussed. Limited comparison with test data is presented.
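
    As a concrete example of a frequency-domain damping estimate of the kind discussed here, the sketch below applies the textbook half-power (-3 dB) bandwidth method to a frequency response magnitude; it is a generic baseline, not the report's specific procedure.

    ```python
    import numpy as np

    def half_power_damping(freqs, mag):
        """Estimate the damping ratio of an isolated mode as
        zeta ~= (f2 - f1) / (2 * fn), where f1 and f2 bracket the peak
        at the half-power level |H|peak / sqrt(2)."""
        i_pk = int(np.argmax(mag))
        half = mag[i_pk] / np.sqrt(2.0)
        left = np.where(mag[:i_pk] <= half)[0]
        right = np.where(mag[i_pk:] <= half)[0]
        if len(left) == 0 or len(right) == 0:
            raise ValueError("half-power points not bracketed by the data")
        f1, f2, fn = freqs[left[-1]], freqs[i_pk + right[0]], freqs[i_pk]
        return (f2 - f1) / (2.0 * fn)
    ```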

  18. VALIDATION OF A METHOD FOR ESTIMATING LONG-TERM EXPOSURES BASED ON SHORT-TERM MEASUREMENTS

    EPA Science Inventory

    A method for estimating long-term exposures from short-term measurements is validated using data from a recent EPA study of exposure to fine particles. The method was developed a decade ago but long-term exposure data to validate it did not exist until recently. In this paper, ...

  19. Measure Guideline. Wood Window Repair, Rehabilitation, and Replacement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, P.; Eng, P.

    2012-12-01

    This measure guideline provides information and guidance on rehabilitating, retrofitting, and replacing existing window assemblies in residential construction. The intent is to provide information regarding means and methods to improve the energy and comfort performance of existing wood window assemblies in a way that takes into consideration component durability, in-service operation, and long term performance of the strategies.

  20. Measure Guideline: Window Repair, Rehabilitation, and Replacement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, P.

    2012-12-01

    This measure guideline provides information and guidance on rehabilitating, retrofitting, and replacing existing window assemblies in residential construction. The intent is to provide information regarding means and methods to improve the energy and comfort performance of existing wood window assemblies in a way that takes into consideration component durability, in-service operation, and long term performance of the strategies.

  21. Measuring the performance of livability programs.

    DOT National Transportation Integrated Search

    2013-07-01

    This report analyzes the performance measurement processes adopted by five large livability programs throughout the United States. It compares and contrasts these programs by examining existing research in performance measurement methods. The ...

  22. Methodological Issues in Mobile Computer-Supported Collaborative Learning (mCSCL): What Methods, What to Measure and When to Measure?

    ERIC Educational Resources Information Center

    Song, Yanjie

    2014-01-01

    This study aims to investigate (1) methods utilized in mobile computer-supported collaborative learning (mCSCL) research which focuses on studying, learning and collaboration mediated by mobile devices; (2) whether these methods have examined mCSCL effectively; (3) when the methods are administered; and (4) what methodological issues exist in…

  23. Measurement of plasma unbound unconjugated bilirubin.

    PubMed

    Ahlfors, C E

    2000-03-15

    A method is described for measuring the unconjugated fraction of the unbound bilirubin concentration in plasma by combining the peroxidase method for determining unbound bilirubin with a diazo method for measuring conjugated and unconjugated bilirubin. The accuracy of the unbound bilirubin determination is improved by decreasing sample dilution, eliminating interference by conjugated bilirubin, monitoring changes in bilirubin concentration using diazo derivatives, and correcting for rate-limiting dissociation of bilirubin from albumin. The unbound unconjugated bilirubin concentration by the combined method in plasma from 20 jaundiced newborns was significantly greater than and poorly correlated with the unbound bilirubin determined by the existing peroxidase method (r = 0.7), possibly due to differences in sample dilution between the methods. The unbound unconjugated bilirubin was an unpredictable fraction of the unbound bilirubin in plasma samples from patients with similar total bilirubin concentrations but varying levels of conjugated bilirubin. A bilirubin-binding competitor was readily detected at a sample dilution typically used for the combined test but not at the dilution used for the existing peroxidase method. The combined method is ideally suited to measuring unbound unconjugated bilirubin in jaundiced human newborns or animal models of kernicterus. Copyright 2000 Academic Press.

  24. Development of the psychological impact of tinnitus interview: a clinician-administered measure of tinnitus-related distress.

    PubMed

    Henry, J L; Kangas, M; Wilson, P H

    2001-01-01

    The development of valid and reliable methods for assessing psychological aspects of tinnitus continues to be an important goal of research. Such assessment methods are potentially useful in clinical and research contexts. Existing self-report measures have a number of disadvantages, and so a need exists to develop a form of assessment that is less open to response bias and the effects of experimental demand. A new approach, the Psychological Impact of Tinnitus Interview (PITI), is described, and some preliminary data on its psychometric properties are reported. The results suggest that the PITI is capable of providing a measure of separate, relatively independent dimensions of tinnitus-related distress--namely, sleep difficulties, general distress, mood, suicidal aspects, and avoidance of or interference with normal activities. This method may lead to more refined measures of these dimensions of tinnitus-related psychological difficulties. The PITI should be regarded as a promising assessment tool for use in experimental settings, pending further work on its content, coding method, and administration.

  25. An Early Years Toolbox for Assessing Early Executive Function, Language, Self-Regulation, and Social Development: Validity, Reliability, and Preliminary Norms

    PubMed Central

    Howard, Steven J.; Melhuish, Edward

    2016-01-01

    Several methods of assessing executive function (EF), self-regulation, language development, and social development in young children have been developed over previous decades. Yet new technologies make available methods of assessment not previously considered. In resolving conceptual and pragmatic limitations of existing tools, the Early Years Toolbox (EYT) offers substantial advantages for early assessment of language, EF, self-regulation, and social development. In the current study, results of our large-scale administration of this toolbox to 1,764 preschool and early primary school students indicated very good reliability, convergent validity with existing measures, and developmental sensitivity. Results were also suggestive of better capture of children’s emerging abilities relative to comparison measures. Preliminary norms are presented, showing a clear developmental trajectory across half-year age groups. The accessibility of the EYT, as well as its advantages over existing measures, offers considerably enhanced opportunities for objective measurement of young children’s abilities to enable research and educational applications. PMID:28503022

  26. Compensation of Verdet Constant Temperature Dependence by Crystal Core Temperature Measurement

    PubMed Central

    Petricevic, Slobodan J.; Mihailovic, Pedja M.

    2016-01-01

    Compensation of the temperature dependence of the Verdet constant in a polarimetric extrinsic Faraday sensor is of major importance for applying the magneto-optical effect to AC current measurements and magnetic field sensing. This paper presents a method for compensating the temperature effect on the Faraday rotation in a Bi12GeO20 crystal by sensing its optical activity effect on the polarization of a light beam. The method measures the temperature of the same volume of crystal that effects the beam polarization in a magnetic field or current sensing process. This eliminates the effect of temperature difference found in other indirect temperature compensation methods, thus allowing more accurate temperature compensation for the temperature dependence of the Verdet constant. The method does not require additional changes to an existing Δ/Σ configuration and is thus applicable for improving the performance of existing sensing devices. PMID:27706043

  27. External benefits of natural environments

    Treesearch

    Larry W. Tombaugh

    1971-01-01

    Existing methods of assessing economic benefits arising from certain physical environments left in a relatively natural condition do not include estimates of external benefits. Existence value is one such external benefit that accrues to individuals who have no intention of ever visiting the area in question. A partial measure of the existence value of National Parks...

  28. Proposal on Calculation of Ventilation Threshold Using Non-contact Respiration Measurement with Pattern Light Projection

    NASA Astrophysics Data System (ADS)

    Aoki, Hirooki; Ichimura, Shiro; Fujiwara, Toyoki; Kiyooka, Satoru; Koshiji, Kohji; Tsuzuki, Keishi; Nakamura, Hidetoshi; Fujimoto, Hideo

    We propose a method for calculating the ventilation threshold using non-contact respiration measurement with dot-matrix pattern light projection during pedaling exercise. The validity and effectiveness of the proposed method are examined by simultaneous measurement with an expiration gas analyzer. The experimental results showed that a correlation existed between the quasi ventilation thresholds calculated by the proposed method and the ventilation thresholds calculated by the expiration gas analyzer. This result indicates the possibility of non-contact measurement of the ventilation threshold by the proposed method.

  29. On using summary statistics from an external calibration sample to correct for covariate measurement error.

    PubMed

    Guo, Ying; Little, Roderick J; McConnell, Daniel S

    2012-01-01

    Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
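
    A minimal sketch of the imputation idea, under strong simplifying assumptions: the unobserved covariate X is drawn from a calibration model X ≈ b0 + b1*W whose coefficients and residual SD are summarized from the external sample, the outcome regression is refit in each imputed data set, and point estimates are averaged. Uncertainty in the calibration estimates and the variance-pooling part of Rubin's rules are omitted for brevity.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mi_external_calibration(Y, W, Z, calib_beta, calib_sigma, M=20):
        """Impute X from W using external calibration summaries, fit
        Y ~ 1 + X + Z per imputation, and return the across-imputation
        mean of the coefficients (the pooled point estimate)."""
        n = len(Y)
        betas = []
        for _ in range(M):
            X_imp = calib_beta[0] + calib_beta[1] * W + rng.normal(0.0, calib_sigma, n)
            A = np.column_stack([np.ones(n), X_imp, Z])
            betas.append(np.linalg.lstsq(A, Y, rcond=None)[0])
        return np.mean(betas, axis=0)
    ```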

  30. Phylogenetic rooting using minimal ancestor deviation.

    PubMed

    Tria, Fernando Domingues Kümmel; Landan, Giddy; Dagan, Tal

    2017-06-19

    Ancestor-descendant relations play a cardinal role in evolutionary theory. Those relations are determined by rooting phylogenetic trees. Existing rooting methods are hampered by evolutionary rate heterogeneity or the unavailability of auxiliary phylogenetic information. Here we present a rooting approach, the minimal ancestor deviation (MAD) method, which accommodates heterotachy by using all pairwise topological and metric information in unrooted trees. We demonstrate the performance of the method, in comparison to existing rooting methods, by the analysis of phylogenies from eukaryotes and prokaryotes. MAD correctly recovers the known root of eukaryotes and uncovers evidence for the origin of cyanobacteria in the ocean. MAD is more robust and consistent than existing methods, provides measures of the root inference quality, and is applicable to any tree with branch lengths.

  31. Measuring the Return on Information Technology: A Knowledge-Based Approach for Revenue Allocation at the Process and Firm Level

    DTIC Science & Technology

    2005-07-01

    approach for measuring the return on Information Technology (IT) investments. A review of existing methods suggests the difficulty in adequately...measuring the returns of IT at various levels of analysis (e.g., firm or process level). To address this issue, this study aims to develop a method for...view (KBV), this paper proposes an analytic method for measuring the historical revenue and cost of IT investments by estimating the amount of

  32. Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)

    PubMed Central

    Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K

    2011-01-01

    To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069

  33. Investigating the technical adequacy of curriculum-based measurement in written expression for students who are deaf or hard of hearing.

    PubMed

    Cheng, Shu-Fen; Rose, Susan

    2009-01-01

    This study investigated the technical adequacy of curriculum-based measures of written expression (CBM-W) in terms of writing prompts and scoring methods for deaf and hard-of-hearing students. Twenty-two students at the secondary school-level completed 3-min essays within two weeks, which were scored for nine existing and alternative curriculum-based measurement (CBM) scoring methods. The technical features of the nine scoring methods were examined for interrater reliability, alternate-form reliability, and criterion-related validity. The existing CBM scoring method--number of correct minus incorrect word sequences--yielded the highest reliability and validity coefficients. The findings from this study support the use of the CBM-W as a reliable and valid tool for assessing general writing proficiency with secondary students who are deaf or hard of hearing. The CBM alternative scoring methods that may serve as additional indicators of written expression include correct subject-verb agreements, correct clauses, and correct morphemes.
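
    For readers unfamiliar with CBM-W scoring, this sketch computes the correct-minus-incorrect word sequences score from pre-judged adjacent word pairs; the judging itself requires a trained scorer, and the example judgments are invented.

    ```python
    def cws_minus_iws(pair_judgments):
        """Correct minus incorrect word sequences (CWS - IWS). Each element
        is True if an adjacent word pair (including the sentence-boundary
        pairs) is judged grammatically and semantically acceptable."""
        correct = sum(pair_judgments)
        return correct - (len(pair_judgments) - correct)

    # "^ The dog runned home . $" -> five pairs as judged by a scorer:
    print(cws_minus_iws([True, True, False, False, True]))  # 3 - 2 = 1
    ```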

  34. ABSCISSA ASSESSMENT WITH ALGAE: A COMPARISON OF LOCAL AND LANDSCAPE IMPAIRMENT MEASURES FOR BIOLOGICAL ASSESSMENT USING BENTHIC DIATOMS

    EPA Science Inventory

    The development of rigorous biological assessments is dependent upon well-constructed abscissa, and various methods, both subjective and objective, exist to measure expected impairment at both the landscape and local scale. A new, landscape-scale method has recently been offered...

  35. Measure Guideline: Installing Rigid Foam Insulation on the Interior of Existing Brick Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Natarajan, H.; Klocke, S.; Puttagunta, S.

    2012-06-01

    This measure guideline provides information on an effective method to insulate the interior of existing brick masonry walls with extruded polystyrene (XPS) insulation board. The guide outlines step-by-step design and installation procedures while explaining the benefits and tradeoffs where applicable. The authors intend that this document be useful to a varied audience that includes builders, remodelers, contractors and homeowners.

  36. Measure Guideline. Installing Rigid Foam Insulation on the Interior of Existing Brick Walls

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Natarajan, Hariharan; Klocke, Steve; Puttagunta, Srikanth

    2012-06-01

    This measure guideline provides information on an effective method to insulate the interior of existing brick masonry walls with extruded polystyrene (XPS) insulation board. The guide outlines step-by-step design and installation procedures while explaining the benefits and tradeoffs where applicable. The authors intend that this document be useful to a varied audience that includes builders, remodelers, contractors and homeowners.

  37. Predicting drug-target interaction for new drugs using enhanced similarity measures and super-target clustering.

    PubMed

    Shi, Jian-Yu; Yiu, Siu-Ming; Li, Yiming; Leung, Henry C M; Chin, Francis Y L

    2015-07-15

    Predicting drug-target interaction using computational approaches is an important step in drug discovery and repositioning. To predict whether there will be an interaction between a drug and a target, most existing methods identify similar drugs and targets in the database. The prediction is then made based on the known interactions of these drugs and targets. This idea is promising. However, there are two shortcomings that have not yet been addressed appropriately. Firstly, most of the methods use only 2D chemical structures and protein sequences to measure the similarity of drugs and targets, respectively. However, this information may not fully capture the characteristics determining whether a drug will interact with a target. Secondly, there are very few known interactions, i.e. many interactions are "missing" in the database. Existing approaches are biased towards known interactions and have no good solution for handling possibly missing interactions, which affects the accuracy of the prediction. In this paper, we enhance the similarity measures to include non-structural (and non-sequence-based) information and introduce the concept of a "super-target" to handle the problem of possibly missing interactions. Based on evaluations on real data, we show that our similarity measure is better than the existing measures and our approach is able to achieve higher accuracy than the two best existing algorithms, WNN-GIP and KBMF2K. Our approach is available at http://web.hku.hk/~liym1018/projects/drug/drug.html or http://www.bmlnwpu.org/us/tools/PredictingDTI_S2/METHODS.html. Copyright © 2015 Elsevier Inc. All rights reserved.

  38. Geophysical methods for determining the geotechnical engineering properties of earth materials.

    DOT National Transportation Integrated Search

    2010-03-01

    Surface and borehole geophysical methods exist to measure in-situ properties and structural characteristics of earth materials. Application of such methods has demonstrated cost savings through reduced design uncertainty and lower investigation c...

  39. A Doubly Stochastic Change Point Detection Algorithm for Noisy Biological Signals.

    PubMed

    Gold, Nathan; Frasch, Martin G; Herry, Christophe L; Richardson, Bryan S; Wang, Xiaogang

    2017-01-01

    Experimentally and clinically collected time series data are often contaminated with significant confounding noise, creating short, noisy time series. This noise, due to natural variability and measurement error, poses a challenge to conventional change point detection methods. We propose a novel and robust statistical method for change point detection for noisy biological time sequences. Our method is a significant improvement over traditional change point detection methods, which only examine a potential anomaly at a single time point. In contrast, our method considers all suspected anomaly points and considers the joint probability distribution of the number of change points and the elapsed time between two consecutive anomalies. We validate our method with three simulated time series, a widely accepted benchmark data set, two geological time series, a data set of ECG recordings, and a physiological data set of heart rate variability measurements of fetal sheep model of human labor, comparing it to three existing methods. Our method demonstrates significantly improved performance over the existing point-wise detection methods.
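
    For contrast with the paper's doubly stochastic approach, here is a standard CUSUM detector, representative of the conventional point-wise methods such work improves on; the slack and threshold values are arbitrary.

    ```python
    import numpy as np

    def cusum_changepoints(x, k=0.5, h=5.0):
        """Two-sided CUSUM on a standardized series: raise an alarm when
        the cumulative drift above/below the mean exceeds h; k is the
        slack per step. Both are in units of the series' SD."""
        z = (x - np.mean(x)) / np.std(x)
        s_hi = s_lo = 0.0
        alarms = []
        for i, v in enumerate(z):
            s_hi = max(0.0, s_hi + v - k)
            s_lo = max(0.0, s_lo - v - k)
            if s_hi > h or s_lo > h:
                alarms.append(i)
                s_hi = s_lo = 0.0  # restart after each detection
        return alarms
    ```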

  40. An integrative approach for measuring semantic similarities using gene ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2014-01-01

    Gene Ontology (GO) provides rich information and a convenient way to study gene functional similarity, which has been successfully used in various applications. However, the existing GO-based similarity measurements have limited power because each measure considers only a subset of the information in GO. An appropriate integration of the existing measures that takes more GO information into account is therefore needed. We propose a novel integrative measure called InteGO2 that automatically selects appropriate seed measures and then integrates them using a metaheuristic search method. The experimental results show that InteGO2 significantly improves the performance of gene similarity measurement in human, Arabidopsis, and yeast on both the molecular function and biological process GO categories. InteGO2 computes gene-to-gene similarities more accurately than the tested existing measures and has high robustness. The supplementary document and software are available at http://mlg.hit.edu.cn:8082/.

  41. Wideband characterization of the complex wave number and characteristic impedance of sound absorbers.

    PubMed

    Salissou, Yacoubou; Panneton, Raymond

    2010-11-01

    Several methods for measuring the complex wave number and the characteristic impedance of sound absorbers have been proposed in the literature. These methods can be classified into single-frequency and wideband methods. In this paper, the main existing methods are revisited and discussed. An alternative method that has received little attention in the literature, despite its great potential, is also discussed. This method is essentially an improvement of the wideband method described by Iwase et al., rewritten so that the setup is more compliant with the ISO 10534-2 standard. Glass wool, melamine foam, and acoustical/thermal insulator wool are used to compare the main existing wideband non-iterative methods with this alternative method. It is found that, in the middle and high frequency ranges, the alternative method yields results comparable in accuracy to the classical two-cavity method and the four-microphone transfer-matrix method. In the low frequency range, however, the alternative method appears to be more accurate than the other methods, especially when measuring the complex wave number.

  42. 40 CFR 98.54 - Monitoring and QA/QC requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in paragraphs (b)(1) through (b)(3) of this section. (1) EPA Method 320, Measurement of Vapor Phase...) Direct measurement (such as using flow meters or weigh scales). (2) Existing plant procedures used for accounting purposes. (d) You must conduct all required performance tests according to the methods in § 98.54...

  43. Beyond Instrumentation: Redesigning Measures and Methods for Evaluating the Graduate College Experience

    ERIC Educational Resources Information Center

    Hardré, Patricia L.; Hackett, Shannon

    2015-01-01

    This manuscript chronicles the process and products of a redesign for evaluation of the graduate college experience (GCE) which was initiated by a university graduate college, based on its observed need to reconsider and update its measures and methods for assessing graduate students' experiences. We examined the existing instrumentation and…

  44. A systematic review of health care efficiency measures.

    PubMed

    Hussey, Peter S; de Vries, Han; Romley, John; Wang, Margaret C; Chen, Susan S; Shekelle, Paul G; McGlynn, Elizabeth A

    2009-06-01

    To review and characterize existing health care efficiency measures in order to facilitate a common understanding about the adequacy of these methods. Review of the MedLine and EconLit databases for articles published from 1990 to 2008, as well as search of the "gray" literature for additional measures developed by private organizations. We performed a systematic review for existing efficiency measures. We classified the efficiency measures by perspective, outputs, inputs, methods used, and reporting of scientific soundness. We identified 265 measures in the peer-reviewed literature and eight measures in the gray literature, with little overlap between the two sets of measures. Almost all of the measures did not explicitly consider the quality of care. Thus, if quality varies substantially across groups, which is likely in some cases, the measures reflect only the costs of care, not efficiency. Evidence on the measures' scientific soundness was mostly lacking: evidence on reliability or validity was reported for six measures (2.3 percent) and sensitivity analyses were reported for 67 measures (25.3 percent). Efficiency measures have been subjected to few rigorous evaluations of reliability and validity, and methods of accounting for quality of care in efficiency measurement are not well developed at this time. Use of these measures without greater understanding of these issues is likely to engender resistance from providers and could lead to unintended consequences.

  45. Optical Remote Sensing Method to Determine Strength of Non-point Sources

    DTIC Science & Technology

    2008-09-01

    site due to its location, which is convenient to both USEPA’s RTP campus and the ARCADIS-Durham office. The site also has appropriate NPSs to measure that are of interest to regulators. 3.2.4 Tinker Air Force Base...Existing methodology for measuring NPSs is not directly comparable to the proposed PI-ORS method because the new method provides higher quality and

  46. A novel measure of effect size for mediation analysis.

    PubMed

    Lachowicz, Mark J; Preacher, Kristopher J; Kelley, Ken

    2018-06-01

    Mediation analysis has become one of the most popular statistical methods in the social sciences. However, many currently available effect size measures for mediation have limitations that restrict their use to specific mediation models. In this article, we develop a measure of effect size that addresses these limitations. We show how modification of a currently existing effect size measure results in a novel effect size measure with many desirable properties. We also derive an expression for the bias of the sample estimator for the proposed effect size measure and propose an adjusted version of the estimator. We present a Monte Carlo simulation study conducted to examine the finite sampling properties of the adjusted and unadjusted estimators, which shows that the adjusted estimator is effective at recovering the true value it estimates. Finally, we demonstrate the use of the effect size measure with an empirical example. We provide freely available software so that researchers can immediately implement the methods we discuss. Our developments here extend the existing literature on effect sizes and mediation by developing a potentially useful method of communicating the magnitude of mediation. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
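
    A minimal sketch of the quantity underlying such effect sizes: the indirect effect a*b estimated from the two standard mediation regressions. The paper's proposed measure standardizes quantities like this; the code shows only the basic point estimate.

    ```python
    import numpy as np

    def indirect_effect(X, M, Y):
        """Estimate a*b for simple mediation: a from M ~ 1 + X and
        b from Y ~ 1 + X + M, both by ordinary least squares."""
        def ols(A, y):
            return np.linalg.lstsq(A, y, rcond=None)[0]
        n = len(X)
        a = ols(np.column_stack([np.ones(n), X]), M)[1]
        b = ols(np.column_stack([np.ones(n), X, M]), Y)[2]
        return a * b
    ```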

  47. An efficient closed-form solution for acoustic emission source location in three-dimensional structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xibing; Dong, Longjun

    This paper presents an efficient closed-form solution (ECS) for acoustic emission (AE) source location in three-dimensional structures using time difference of arrival (TDOA) measurements from N receivers, N ≥ 6. The nonlinear TDOA location equations are simplified to linear equations. The unique analytical solution for AE sources in an unknown-velocity system is obtained by solving the linear equations. The proposed ECS method successfully addresses the location errors resulting from measured deviations of velocity, as well as the existence and multiplicity of solutions induced by the square-root calculations in existing closed-form methods.
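
    The sketch below follows the general linearization idea described in the abstract: subtracting one receiver's arrival equation from the others yields equations linear in the source coordinates, the squared velocity, and the velocity-scaled onset time, so N ≥ 6 receivers admit a direct least-squares solution. It illustrates the approach generically; consult the paper for the exact ECS formulation.

    ```python
    import numpy as np

    def locate_ae_source(P, t):
        """Closed-form TDOA location with unknown wave speed. P is an
        (N, 3) array of receiver coordinates and t an (N,) array of
        arrival times. Unknowns: source s, v^2, and v^2 * onset time."""
        p0, t0 = P[0], t[0]
        A = np.column_stack([2.0 * (P[1:] - p0),
                             t[1:] ** 2 - t0 ** 2,
                             -2.0 * (t[1:] - t0)])
        b = np.sum(P[1:] ** 2, axis=1) - np.sum(p0 ** 2)
        u, *_ = np.linalg.lstsq(A, b, rcond=None)
        return u[:3], np.sqrt(max(u[3], 0.0))  # source position, wave speed

    # Synthetic check: recover a known source and speed from 6 receivers.
    rng = np.random.default_rng(1)
    P = rng.uniform(0.0, 10.0, (6, 3))
    src, v, onset = np.array([4.0, 5.0, 2.0]), 3.0, 0.1
    t = onset + np.linalg.norm(P - src, axis=1) / v
    print(locate_ae_source(P, t))
    ```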

  48. Determination of the absorption coefficient of chromophoric dissolved organic matter from underway spectrophotometry.

    PubMed

    Dall'Olmo, Giorgio; Brewin, Robert J W; Nencioli, Francesco; Organelli, Emanuele; Lefering, Ina; McKee, David; Röttgers, Rüdiger; Mitchell, Catherine; Boss, Emmanuel; Bricaud, Annick; Tilstone, Gavin

    2017-11-27

    Measurements of the absorption coefficient of chromophoric dissolved organic matter (ay) are needed to validate existing ocean-color algorithms. In the surface open ocean, these measurements are challenging because of low ay values. Yet, existing global datasets demonstrate that ay could contribute between 30% and 50% of the total absorption budget in the 400-450 nm spectral range, making accurate measurement of ay essential to constrain these uncertainties. In this study, we present a simple way of determining ay using a commercially-available in-situ spectrophotometer operated in underway mode. The obtained ay values were validated using independent collocated measurements. The method is simple to implement, can provide measurements with very high spatio-temporal resolution, and has an accuracy of about 0.0004 m⁻¹ and a precision of about 0.0025 m⁻¹ when compared to independent data (at 440 nm). The only limitation for using this method at sea is that it relies on the availability of relatively large volumes of ultrapure water. Despite this limitation, the method can deliver the ay data needed for validating and assessing uncertainties in ocean-colour algorithms.

  49. End of the chain? Rugosity and fine-scale bathymetry from existing underwater digital imagery using structure-from-motion (SfM) technology

    USGS Publications Warehouse

    Storlazzi, Curt; Dartnell, Peter; Hatcher, Gerry; Gibbs, Ann E.

    2016-01-01

    The rugosity or complexity of the seafloor has been shown to be an important ecological parameter for fish, algae, and corals. Historically, rugosity has been measured either using simple and subjective manual methods such as ‘chain-and-tape’ or complicated and expensive geophysical methods. Here, we demonstrate the application of structure-from-motion (SfM) photogrammetry to generate high-resolution, three-dimensional bathymetric models of a fringing reef from existing underwater video collected to characterize the seafloor. SfM techniques are capable of achieving spatial resolution that can be orders of magnitude greater than large-scale lidar and sonar mapping of coral reef ecosystems. The resulting data provide finer-scale measurements of bathymetry and rugosity that are more applicable to ecological studies of coral reefs than provided by the more expensive and time-consuming geophysical methods. Utilizing SfM techniques for characterizing the benthic habitat proved to be more effective and quantitatively powerful than conventional methods and thus might portend the end of the ‘chain-and-tape’ method for measuring benthic complexity.
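
    As a concrete definition of the quantity being measured, this sketch computes rugosity (3-D surface area divided by planar area) from a gridded bathymetric surface such as SfM produces; the tiny depth grid is invented.

    ```python
    import numpy as np

    def rugosity(z, cell=1.0):
        """Rugosity of a gridded surface: each cell is split into two
        triangles whose 3-D areas are summed and divided by the planar
        footprint. z is a 2-D elevation/depth grid with spacing `cell`."""
        def tri_area(p, q, r):
            return 0.5 * np.linalg.norm(np.cross(q - p, r - p), axis=-1)
        ny, nx = z.shape
        x, y = np.meshgrid(np.arange(nx) * cell, np.arange(ny) * cell)
        V = np.stack([x, y, z], axis=-1)
        a, b, c, d = V[:-1, :-1], V[:-1, 1:], V[1:, :-1], V[1:, 1:]
        surface = np.sum(tri_area(a, b, c)) + np.sum(tri_area(d, b, c))
        planar = (nx - 1) * (ny - 1) * cell ** 2
        return surface / planar

    z = np.array([[0.0, 0.2, 0.1],
                  [0.3, 0.5, 0.2],
                  [0.1, 0.2, 0.0]])
    print(rugosity(z, cell=0.5))
    ```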

  50. A method for surface topography measurement using a new focus function based on dual-tree complex wavelet transform

    NASA Astrophysics Data System (ADS)

    Li, Shimiao; Guo, Tong; Yuan, Lin; Chen, Jinping

    2018-01-01

    Surface topography measurement is an important tool widely used in many fields to determine the characteristics and functionality of a part or material. Among existing methods for this purpose, the focus variation method has demonstrated high performance, particularly in large-slope scenarios. However, its performance depends largely on the effectiveness of the focus function. This paper presents a method for surface topography measurement using a new focus function based on the dual-tree complex wavelet transform. Experiments are conducted on simulated defocused images to demonstrate its high performance in comparison with other traditional approaches. The results showed that the new algorithm has better unimodality and sharpness. The method was also verified by measuring a MEMS micro-resonator structure.

  51. Survey of Manual Methods of Measurements of Asbestos, Beryllium, Lead, Cadmium, Selenium, and Mercury in Stationary Source Emissions. Environmental Monitoring Series.

    ERIC Educational Resources Information Center

    Coulson, Dale M.; And Others

    The purpose of this study is to evaluate existing manual methods for analyzing asbestos, beryllium, lead, cadmium, selenium, and mercury, and from this evaluation to provide the best and most practical set of analytical methods for measuring emissions of these elements from stationary sources. The work in this study was divided into two phases.…

  52. Technological Literacy for Students Aged 6-18: A New Method for Holistic Measuring of Knowledge, Capabilities, Critical Thinking and Decision-Making

    ERIC Educational Resources Information Center

    Avsec, Stanislav; Jamšek, Janez

    2016-01-01

    Technological literacy is identified as a vital achievement of technology- and engineering-intensive education. It guides the design of technology and technical components of educational systems and defines competitive employment in technological society. Existing methods for measuring technological literacy are incomplete or complicated,…

  53. Chapter 8: Whole-Building Retrofit with Consumption Data Analysis Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Agnew, Ken; Goldberg, Mimi

    Whole-building retrofits involve the installation of multiple measures. Whole-building retrofit programs take many forms. With a focus on overall building performance, these programs usually begin with an energy audit to identify cost-effective energy efficiency measures for the home. Measures are then installed, either at no cost to the homeowner or partially paid for by rebates and/or financing. The methods described here may also be applied to evaluation of single-measure retrofit programs. Related methods exist for replace-on-failure programs and for new construction, but are not the subject of this chapter.

  54. An Extraction Method of an Informative DOM Node from a Web Page by Using Layout Information

    NASA Astrophysics Data System (ADS)

    Tsuruta, Masanobu; Masuyama, Shigeru

    We propose a method for extracting the informative DOM node from a Web page as preprocessing for Web content mining. Our proposed method, LM, uses layout data of DOM nodes generated by a generic Web browser, with a learning set consisting of hundreds of Web pages and annotations of the informative DOM nodes of those Web pages. Our method does not require large-scale crawling of the whole Web site to which the target Web page belongs. We design LM to use the information in the learning set more efficiently than the existing method that uses the same learning set. In experiments, we evaluate methods obtained by combining an informative-DOM-node extraction method (either the proposed method or an existing one) with the existing noise elimination methods: Heur, which removes advertisements and link-lists by heuristics, and CE, which removes DOM nodes that also appear in other Web pages of the Web site to which the target page belongs. Experimental results show that 1) LM outperforms the other methods for extracting the informative DOM node, and 2) the combination method (LM, {CE(10), Heur}) based on LM (precision: 0.755, recall: 0.826, F-measure: 0.746) outperforms other combination methods.

  55. PARTNERING TO IMPROVE HUMAN EXPOSURE METHODS

    EPA Science Inventory

    Methods development research is an application-driven scientific area that addresses programmatic needs. The goals are to reduce measurement uncertainties, address data gaps, and improve existing analytical procedures for estimating human exposures. Partnerships have been develop...

  56. Note: Measuring instrument of singlet oxygen quantum yield in photodynamic effects

    NASA Astrophysics Data System (ADS)

    Li, Zhongwei; Zhang, Pengwei; Zang, Lixin; Qin, Feng; Zhang, Zhiguo; Zhang, Hongli

    2017-06-01

    Using diphenylisobenzofuran (C20H14O) as a singlet oxygen (1O2) reporter, a comparison method, which can be used to measure the singlet oxygen quantum yield (ΦΔ) of the photosensitizer quantitatively, is presented in this paper. Based on this method, an automatic measuring instrument of singlet oxygen quantum yield is developed. The singlet oxygen quantum yield of the photosensitizer hermimether and aloe-emodin is measured. It is found that the measuring results are identical to the existing ones, which verifies the validity of the measuring instrument.
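
    The comparative measurement reduces to a ratio calculation; this sketch shows the generic textbook form, scaling a reference photosensitizer's known yield by the ratio of reporter photobleaching rates corrected for the light each sensitizer absorbs. It is not necessarily this instrument's exact working equation, and the numbers are illustrative.

    ```python
    def singlet_oxygen_qy(slope_sample, slope_ref, abs_sample, abs_ref, phi_ref):
        """Relative singlet oxygen quantum yield from DPBF bleaching-rate
        slopes and sensitizer absorbances at the excitation wavelength."""
        def absorbed(A):
            return 1.0 - 10.0 ** (-A)  # fraction of incident light absorbed
        return phi_ref * (slope_sample / slope_ref) * (absorbed(abs_ref) / absorbed(abs_sample))

    # Illustrative values against a reference with a known yield of 0.75:
    print(singlet_oxygen_qy(0.012, 0.020, 0.10, 0.10, 0.75))
    ```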

  57. Generalized Ordinary Differential Equation Models

    PubMed Central

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-01-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method. PMID:25544787

  18. Generalized Ordinary Differential Equation Models.

    PubMed

    Miao, Hongyu; Wu, Hulin; Xue, Hongqi

    2014-10-01

    Existing estimation methods for ordinary differential equation (ODE) models are not applicable to discrete data. The generalized ODE (GODE) model is therefore proposed and investigated for the first time. We develop the likelihood-based parameter estimation and inference methods for GODE models. We propose robust computing algorithms and rigorously investigate the asymptotic properties of the proposed estimator by considering both measurement errors and numerical errors in solving ODEs. The simulation study and application of our methods to an influenza viral dynamics study suggest that the proposed methods have a superior performance in terms of accuracy over the existing ODE model estimation approach and the extended smoothing-based (ESB) method.

  19. Flattening Property and the Existence of Global Attractors in Banach Space

    NASA Astrophysics Data System (ADS)

    Aris, Naimah; Maharani, Sitti; Jusmawati, Massalesse; Nurwahyu, Budi

    2018-03-01

    This paper analyses the existence of a global attractor in an infinite-dimensional system using the flattening property. We first show the existence of the global attractor in a complete metric space using the concept of ω-limit compactness together with measure-of-non-compactness methods. We then show that ω-limit compactness is equivalent to the flattening property in Banach space. If we can prove that an absorbing set exists in the system and that the flattening property holds, then the global attractor exists.

  20. Measuring magnetic field vector by stimulated Raman transitions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Wenli; Wei, Rong, E-mail: weirong@siom.ac.cn; Lin, Jinda

    2016-03-21

    We present a method for measuring the magnetic field vector in an atomic fountain by probing the line strength of stimulated Raman transitions. The relative line strength for a Λ-type level system in an existing magnetic field is theoretically analyzed. The magnetic field vector measured by our proposed method agrees well with that obtained by the traditional bias magnetic field method, with an axial resolution of 6.1 mrad and a radial resolution of 0.16 rad. Dependences of the Raman transitions on laser polarization schemes are also analyzed. Our method offers potential advantages for magnetic field measurement: it requires no additional bias fields, is not limited by magnetic field intensity, and extends the spatial measurement range. The proposed method can be widely used for measuring the magnetic field vector in other precision measurement fields.

  1. Measurement of Physical and Hydraulic Properties of Organic Soil Using Computed Tomographic Imagery

    NASA Astrophysics Data System (ADS)

    Blais, K. E.; Quinton, W. L.; Heck, R. J.; Price, J. S.; Schmidt, M. G.

    2005-12-01

    The Lower Liard River valley is located within the continental northern boreal region and the zone of discontinuous permafrost. Lying in the centre of the Mackenzie basin, this valley is an extensive flat headwater region with a high density of open water and peatlands. Several standard methods of measuring the physical properties of organic soils exist, although many of them have several drawbacks that limit their use. Organic soils, in particular, have unique properties that require special attention to ensure that the measured hydrological characteristics are represented as they exist in nature. The goal of this research was to devise an improved method of analyzing and measuring the physical and hydraulic properties of organic soil using MicroCT imagery. Specifically, this research seeks to determine if two and three-dimensional images of peat can be used to accurately characterize air-filled porosity, active porosity, pore size distribution, pore saturated area and capillarity of porous Sphagnum cells. Results indicate that measurements derived from these images are consistent with current literature. They also suggest that this non-destructive method is a valuable tool for measuring peat physical and hydraulic properties and that there is potential for additional research using CT technology.
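
    Air-filled porosity from CT imagery is typically obtained by thresholding the grey-value volume into air and solid phases and taking the air-voxel fraction. A minimal sketch on a synthetic volume with an illustrative threshold; real work would load scanner slices and calibrate the cutoff:

    ```python
    import numpy as np

    # Hypothetical MicroCT subvolume of peat; in practice this would be
    # loaded from the scanner (e.g., a stack of TIFF slices).
    rng = np.random.default_rng(1)
    volume = rng.normal(loc=-400, scale=300, size=(64, 64, 64))

    air_threshold = -700          # assumed grey-value cutoff: air vs. peat
    air_voxels = volume < air_threshold

    porosity = air_voxels.mean()  # air-filled porosity = air voxels / all voxels
    print(f"Air-filled porosity: {porosity:.3f}")
    ```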

  2. Alternative validation practice of an automated faulting measurement method.

    DOT National Transportation Integrated Search

    2010-03-08

    A number of states have adopted profiler-based systems to automatically measure faulting in jointed concrete pavements. However, little published work exists which documents the validation process used for such automated faulting systems. This p...

  3. A comparative review of measurement instruments to inform and evaluate effectiveness of disability inclusive development.

    PubMed

    Goujon, Nicolas; Devine, Alexandra; Baker, Sally M; Sprunt, Beth; Edmonds, Tanya J; Booth, Jennifer K; Keeffe, Jill E

    2014-01-01

    A review of existing measurement instruments was conducted to examine their suitability to measure disability prevalence and to assess quality of life, protection of disability rights, and community participation by people with disabilities, specifically within the context of development programs in low- and middle-income countries. From a search of PubMed and the grey literature, potentially relevant measurement instruments were identified and examined for their content and psychometric properties, where possible. Criteria for inclusion were: based on the WHO's International Classification of Functioning, Disability and Health (ICF), used quantitative methods, suitable for population-based studies of disability inclusive development, in English, and published after 1990. Characteristics of existing instruments were analysed according to components of the ICF and quality of life domains. Ten instruments were identified and reviewed according to the criteria listed above. Each version of the instruments was analysed separately. Only three instruments included a component on quality of life. Domains from the ICF that were addressed by some but not all instruments included the environment, technology and communication. The measurement instruments reviewed covered the range of elements required to measure disability inclusion within development contexts. However, no single measurement instrument has the capacity to measure both disability prevalence and changes in quality of life according to contemporary disability paradigms. The review of measurement instruments supports the need for developing an instrument specifically intended to measure disability inclusive practice within development programs. Implications for Rehabilitation: Surveys and tools are needed to plan disability inclusive development. Existing measurement tools to determine prevalence of disability, wellbeing, rights and access to the community were reviewed. No single validated tool exists that is suitable for population-based studies, uses quantitative methods, and applies the components of the ICF to measure the prevalence of disability, the well-being of people with disability, and their access to their communities. A measurement tool that reflects the UNCRPD and addresses all components of the ICF is needed to assist in disability inclusive development, especially in low- and mid-resource countries.

  4. Measuring water and sediment discharge from a road plot with a settling basin and tipping bucket

    Treesearch

    Thomas A. Black; Charles H. Luce

    2013-01-01

    A simple empirical method quantifies water and sediment production from a forest road surface, and is well suited for calibration and validation of road sediment models. To apply this quantitative method, the hydrologic technician installs bordered plots on existing typical road segments and measures coarse sediment production in a settling tank. When a tipping bucket...

  5. Pseudorange Measurement Method Based on AIS Signals.

    PubMed

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-05-22

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system.
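
    The fusion of the two timestamp detectors can be illustrated with a simple inverse-variance weighting, which is the minimum mean square error combination of two unbiased, independent estimates; the paper's α difference filter is not reproduced here, and all timestamps and variances below are hypothetical:

    ```python
    C = 299_792_458.0  # speed of light, m/s

    def fuse(est_a, var_a, est_b, var_b):
        """Inverse-variance weighted fusion of two unbiased timestamp estimates."""
        w_a = var_b / (var_a + var_b)
        return w_a * est_a + (1 - w_a) * est_b

    # Hypothetical arrival-time estimates (s) from the two detectors
    t_zero_crossing, var_zc = 0.0123456, (40e-9) ** 2
    t_diff_peak,     var_dp = 0.0123461, (60e-9) ** 2

    t_arrival = fuse(t_zero_crossing, var_zc, t_diff_peak, var_dp)
    t_transmit = 0.0123000  # slot timestamp from the AIS frame (assumed known)

    pseudorange = C * (t_arrival - t_transmit)
    print(f"Pseudorange: {pseudorange:.1f} m")
    ```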

  6. Pseudorange Measurement Method Based on AIS Signals

    PubMed Central

    Zhang, Jingbo; Zhang, Shufang; Wang, Jinpeng

    2017-01-01

    In order to use the existing automatic identification system (AIS) to provide additional navigation and positioning services, a complete pseudorange measurements solution is presented in this paper. Through the mathematical analysis of the AIS signal, the bit-0-phases in the digital sequences were determined as the timestamps. Monte Carlo simulation was carried out to compare the accuracy of the zero-crossing and differential peak, which are two timestamp detection methods in the additive white Gaussian noise (AWGN) channel. Considering the low-speed and low-dynamic motion characteristics of ships, an optimal estimation method based on the minimum mean square error is proposed to improve detection accuracy. Furthermore, the α difference filter algorithm was used to achieve the fusion of the optimal estimation results of the two detection methods. The results show that the algorithm can greatly improve the accuracy of pseudorange estimation under low signal-to-noise ratio (SNR) conditions. In order to verify the effectiveness of the scheme, prototypes containing the measurement scheme were developed and field tests in Xinghai Bay of Dalian (China) were performed. The test results show that the pseudorange measurement accuracy was better than 28 m (σ) without any modification of the existing AIS system. PMID:28531153

  7. Betweenness-Based Method to Identify Critical Transmission Sectors for Supply Chain Environmental Pressure Mitigation.

    PubMed

    Liang, Sai; Qu, Shen; Xu, Ming

    2016-02-02

    To develop industry-specific policies for mitigating environmental pressures, previous studies primarily focus on identifying sectors that directly generate large amounts of environmental pressures (the production-based method) or indirectly drive large amounts of environmental pressures through supply chains (e.g., the consumption-based method). In addition to those sectors that are important environmental pressure producers or drivers, there exist sectors that are also important to environmental pressure mitigation as transmission centers. Economy-wide environmental pressure mitigation might be achieved by improving the production efficiency of these key transmission sectors, that is, using less upstream input to produce unitary output. We develop a betweenness-based method to measure the importance of transmission sectors, borrowing the betweenness concept from network analysis. We quantify the betweenness of sectors by examining supply chain paths, extracted via structural path analysis, that pass through a particular sector. Taking China as an example, we find that the critical transmission sectors identified by the betweenness-based method are not always identifiable by existing methods. This indicates that the betweenness-based method can provide additional insights, beyond those obtainable with existing methods, into the roles individual sectors play in generating economy-wide environmental pressures. The betweenness-based method proposed here can therefore complement existing methods in guiding sector-level environmental pressure mitigation strategies.

  8. Revisiting the Schönbein ozone measurement methodology

    NASA Astrophysics Data System (ADS)

    Ramírez-González, Ignacio A.; Añel, Juan A.; Saiz-López, Alfonso; García-Feal, Orlando; Cid, Antonio; Mejuto, Juan Carlos; Gimeno, Luis

    2017-04-01

    Through the 19th century the Schönbein method gained considerable popularity because of the ease with which it measured tropospheric ozone. Traditionally, Schönbein measurements have been considered too inaccurate to be useful. Detractors of the method argue that it is sensitive to meteorological conditions, the most important being the influence of relative humidity. As a consequence, data obtained by this method have usually been discarded. Here we revisit the method, taking into account that values measured during the 19th century were taken using different measurement papers. We explore several concentrations of starch and potassium iodide, the basis of this measurement method. Our results are compared with previous ones in the literature. The validity of the Schönbein methodology is discussed, taking into account humidity and other meteorological variables.

  9. Global civil aviation black carbon emissions.

    PubMed

    Stettler, Marc E J; Boies, Adam M; Petzold, Andreas; Barrett, Steven R H

    2013-09-17

    Aircraft black carbon (BC) emissions contribute to climate forcing, but few estimates of BC emitted by aircraft at cruise exist. For the majority of aircraft engines the only BC-related measurement available is smoke number (SN)-a filter based optical method designed to measure near-ground plume visibility, not mass. While the first order approximation (FOA3) technique has been developed to estimate BC mass emissions normalized by fuel burn [EI(BC)] from SN, it is shown that it underestimates EI(BC) by >90% in 35% of directly measured cases (R(2) = -0.10). As there are no plans to measure BC emissions from all existing certified engines-which will be in service for several decades-it is necessary to estimate EI(BC) for existing aircraft on the ground and at cruise. An alternative method, called FOX, that is independent of the SN is developed to estimate BC emissions. Estimates of EI(BC) at ground level are significantly improved (R(2) = 0.68), whereas estimates at cruise are within 30% of measurements. Implementing this approach for global civil aviation estimated aircraft BC emissions are revised upward by a factor of ~3. Direct radiative forcing (RF) due to aviation BC emissions is estimated to be ~9.5 mW/m(2), equivalent to ~1/3 of the current RF due to aviation CO2 emissions.

  10. Novel Technique for Making Measurements of SO2 with a Standalone Sonde

    NASA Astrophysics Data System (ADS)

    Flynn, J. H., III; Morris, G. A.; Kotsakis, A.; Alvarez, S. L.

    2017-12-01

    A novel technique has been developed to measure SO2 using the existing electrochemical concentration cell (ECC) ozonesonde technology. An interference in the ozone measurement occurs when SO2 is introduced to the iodide redox reaction causing the signal to decrease and go to zero when [O3] < [SO2]. The original method of measuring SO2 with ozonesondes involves launching two ozonesondes together with one ozonesonde unmodified and one with an SO2 filter [Morris et al, 2010]. By taking the difference between these profiles, the SO2 profile could be determined as long as [O3] > [SO2]. A new method allows for making a direct measurement of SO2 without the need for the dual payload by modifying the existing design. The ultimate goal is to be able to measure SO2 vertical profiles in the atmosphere, such as in plumes from anthropogenic or natural sources (i.e. volcanic eruptions). The benefits of an SO2 sonde include the ability to make measurements where aircraft cannot safely fly, such as in volcanic plumes, and to provide validation of SO2 columns from satellites.

  11. Application of laser differential confocal technique in back vertex power measurement for phoropters

    NASA Astrophysics Data System (ADS)

    Li, Fei; Li, Lin; Ding, Xiang; Liu, Wenli

    2012-10-01

    A phoropter is one of the most popular ophthalmic instruments used in optometry, and the back vertex power (BVP) is one of the most important parameters for evaluating the refraction characteristics of a phoropter. In this paper, a new laser differential confocal vertex-power measurement method, which takes advantage of the outstanding focusing ability of a laser differential confocal (LDC) system, is proposed for measuring the BVP of phoropters. A vertex power measurement system is built. Experimental results are presented and some influence factors are analyzed. It is demonstrated that the method based on the LDC technique has higher measurement precision and stronger environmental anti-interference capability than existing methods. Theoretical analysis and experimental results indicate that the measurement error of the method is about 0.02 m-1.

  12. Reconstruction of Vectorial Acoustic Sources in Time-Domain Tomography

    PubMed Central

    Xia, Rongmin; Li, Xu; He, Bin

    2009-01-01

    A new theory is proposed for the reconstruction of curl-free vector field, whose divergence serves as acoustic source. The theory is applied to reconstruct vector acoustic sources from the scalar acoustic signals measured on a surface enclosing the source area. It is shown that, under certain conditions, the scalar acoustic measurements can be vectorized according to the known measurement geometry and subsequently be used to reconstruct the original vector field. Theoretically, this method extends the application domain of the existing acoustic reciprocity principle from a scalar field to a vector field, indicating that the stimulating vectorial source and the transmitted acoustic pressure vector (acoustic pressure vectorized according to certain measurement geometry) are interchangeable. Computer simulation studies were conducted to evaluate the proposed theory, and the numerical results suggest that reconstruction of a vector field using the proposed theory is not sensitive to variation in the detecting distance. The present theory may be applied to magnetoacoustic tomography with magnetic induction (MAT-MI) for reconstructing current distribution from acoustic measurements. A simulation on MAT-MI shows that, compared to existing methods, the present method can give an accurate estimation on the source current distribution and a better conductivity reconstruction. PMID:19211344

  13. Chlorine measurement in the jet singlet oxygen generator considering the effects of the droplets.

    PubMed

    Goodarzi, Mohamad S; Saghafifar, Hossein

    2016-09-01

    A new method is presented to measure chlorine concentration more accurately than the conventional method in the exhaust gases of a jet-type singlet oxygen generator. One problem in this measurement is the presence of micrometer-sized droplets. In this article, an empirical method is reported to eliminate the effects of the droplets. Two wavelengths from a fiber-coupled LED are adopted and the measurement is made at both selected wavelengths. Chlorine is measured more accurately by the two-wavelength method than by the one-wavelength method because the droplet term is eliminated from the equations. This method is validated without basic hydrogen peroxide injection in the reactor. In this case, a pressure meter reading in the diagnostic cell is compared with the optically calculated pressure obtained by the one-wavelength and two-wavelength methods. It is found that the chlorine measurement by the two-wavelength method agrees closely with the pressure meter, while the one-wavelength method has a significant error due to the droplets.
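
    The droplet-elimination idea can be sketched from the Beer-Lambert law with a wavelength-independent droplet extinction term D: measuring the absorbance at two wavelengths and subtracting cancels D. A minimal sketch with assumed absorption coefficients and absorbances (not values from the paper):

    ```python
    # Beer-Lambert with a wavelength-independent droplet extinction term D:
    #   A(lambda_i) = eps_i * c * L + D
    # Subtracting the two equations cancels D and yields the concentration c.
    eps1, eps2 = 66.0, 12.0   # molar absorption coefficients, L/(mol cm) (assumed)
    L = 10.0                  # optical path length, cm
    A1, A2 = 0.82, 0.31       # measured absorbances at the two wavelengths

    c = (A1 - A2) / ((eps1 - eps2) * L)
    print(f"Chlorine concentration: {c:.4e} mol/L")
    ```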

  14. Simple algorithms for remote determination of mineral abundances and particle sizes from reflectance spectra

    NASA Technical Reports Server (NTRS)

    Johnson, Paul E.; Smith, Milton O.; Adams, John B.

    1992-01-01

    Algorithms were developed, based on Hapke's (1981) equations, for remote determinations of mineral abundances and particle sizes from reflectance spectra. In this method, spectra are modeled as a function of end-member abundances and illumination/viewing geometry. The method was tested on a laboratory data set. It is emphasized that, although more sophisticated models exist, the present algorithms are particularly suited for remotely sensed data, where little opportunity exists to independently measure reflectance versus particle size and phase function.
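
    Abundance retrieval of this kind reduces, in the approximately linear single-scattering-albedo domain, to constrained linear unmixing against end-member spectra. A minimal sketch using non-negative least squares; the end-member matrix and abundances are invented for illustration and are not Hapke-model quantities:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    # Columns of E are end-member spectra (assumed, 5 bands x 3 end-members)
    E = np.array([[0.30, 0.10, 0.55],
                  [0.32, 0.12, 0.50],
                  [0.35, 0.20, 0.45],
                  [0.40, 0.35, 0.42],
                  [0.42, 0.40, 0.40]])

    true_f = np.array([0.6, 0.3, 0.1])          # assumed abundances
    spectrum = E @ true_f                        # observed mixed spectrum

    f, residual = nnls(E, spectrum)              # non-negative least squares
    f /= f.sum()                                 # renormalize to unit sum
    print("Estimated abundances:", np.round(f, 3))
    ```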

  15. Saturation-inversion-recovery: A method for T1 measurement

    NASA Astrophysics Data System (ADS)

    Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao

    2017-01-01

    Spin-lattice relaxation (T1) has always been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing T1 sensitivity is the exponential, e.g., exp(- τ /T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1-dependence than those of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful in improving the contrast between regions of differing T1 in T1-weighted MRI.
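
    The SIR sequence itself is not reproduced here, but the conventional recovery-curve fit that it sharpens can be sketched: T1 is estimated by fitting the standard inversion-recovery model S(τ) = S0(1 - 2e^(-τ/T1)) to the measured signal. A minimal sketch with simulated data:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def inversion_recovery(tau, s0, t1):
        """Standard IR signal model: S(tau) = S0 * (1 - 2*exp(-tau/T1))."""
        return s0 * (1.0 - 2.0 * np.exp(-tau / t1))

    # Hypothetical recovery times (s) and noisy signal; true T1 = 0.8 s
    rng = np.random.default_rng(2)
    tau = np.linspace(0.05, 4.0, 15)
    signal = inversion_recovery(tau, 1.0, 0.8) + rng.normal(0, 0.02, tau.size)

    popt, pcov = curve_fit(inversion_recovery, tau, signal, p0=[1.0, 0.5])
    print(f"Fitted T1: {popt[1]:.3f} s")
    ```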

  16. Dichotomous versus semi-quantitative scoring of ultrasound joint inflammation in rheumatoid arthritis using novel individualized joint selection methods.

    PubMed

    Tan, York Kiat; Allen, John C; Lye, Weng Kit; Conaghan, Philip G; Chew, Li-Ching; Thumboo, Julian

    2017-05-01

    The aim of the study is to compare the responsiveness of two joint inflammation scoring systems (dichotomous scoring (DS) versus semi-quantitative scoring (SQS)) using novel individualized ultrasound joint selection methods and existing ultrasound joint selection methods. Responsiveness, measured by the standardized response mean (SRM), was derived for the DS and SQS systems (for both the novel and the existing ultrasound joint selection methods) from the baseline and 3-month total inflammatory scores of 20 rheumatoid arthritis patients. The relative SRM gain ratios (SRM-Gains) for both scoring systems (DS and SQS), comparing the novel to the existing methods, were computed. Both scoring systems demonstrated substantial SRM-Gains (ranging from 3.31 to 5.67 for the DS system and from 1.82 to 3.26 for the SQS system). The SRMs using the novel methods ranged from 0.94 to 1.36 for the DS system and from 0.89 to 1.11 for the SQS system. The SRMs using the existing methods ranged from 0.24 to 0.32 for the DS system and from 0.34 to 0.49 for the SQS system. The DS system thus appears to achieve responsiveness comparable to SQS for the novel individualized ultrasound joint selection methods.
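
    The responsiveness statistic used here, the standardized response mean, is the mean change between time points divided by the standard deviation of that change. A minimal sketch with hypothetical scores:

    ```python
    import numpy as np

    def srm(baseline, followup):
        """Standardized response mean: mean change / SD of change."""
        change = followup - baseline
        return change.mean() / change.std(ddof=1)

    # Hypothetical total inflammatory scores for 20 patients
    rng = np.random.default_rng(3)
    baseline = rng.normal(30, 5, 20)
    month3 = baseline - rng.normal(6, 3, 20)   # scores improve on average

    srm_novel = srm(baseline, month3)
    print(f"SRM: {srm_novel:.2f}")
    # SRM-Gain would be srm_novel / srm_existing for two joint-selection methods
    ```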

  17. Chapter 1: Introduction. The Uniform Methods Project: Methods for Determining Energy-Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Michael; Haeri, Hossein; Reynolds, Arlis

    This chapter provides a set of model protocols for determining energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are among the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective for the project was to establish easy-to-follow protocols based on commonly accepted methods for a core set of widely deployed energy efficiency measures.

  18. Ocular Chromatic Aberrations and Their Effects on Polychromatic Retinal Image Quality

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaoxiao

    Previous studies of ocular chromatic aberrations have concentrated on the chromatic difference of focus (CDF). Less is known about the chromatic difference of image position (CDP) in the peripheral retina, and no experimental attempt has been made to measure the ocular chromatic difference of magnification (CDM). Consequently, theoretical modelling of human eyes is incomplete. This insufficient knowledge of ocular chromatic aberrations is partially responsible for two unsolved applied vision problems: (1) how can vision be improved by correcting ocular chromatic aberration? and (2) what is the impact of ocular chromatic aberration on the use of isoluminance gratings as a tool in spatial-color vision? Using optical ray tracing methods, MTF analysis methods of image quality, and psychophysical methods, I have developed a more complete model of ocular chromatic aberrations and their effects on vision. The ocular CDM was determined psychophysically by measuring the tilt in the apparent frontal parallel plane (AFPP) induced by an interocular difference in image wavelength. This experimental result was then used to verify a theoretical relationship between the ocular CDM, the ocular CDF, and the entrance pupil of the eye. In the retinal image obtained after correcting the ocular CDF with existing achromatizing methods, two forms of chromatic aberration (CDM and chromatic parallax) were examined. The CDM was predicted by theoretical ray tracing and measured with the same method used to determine the ocular CDM. The chromatic parallax was predicted with a nodal ray model and measured with the two-color vernier alignment method. The influence of these two aberrations on the polychromatic MTF was calculated. Using this improved model of ocular chromatic aberration, luminance artifacts in the images of isoluminance gratings were calculated. The predicted luminance artifacts were then compared with experimental data from previous investigators. The results show that: (1) a simple relationship exists between the two major chromatic aberrations and the location of the pupil; (2) the ocular CDM is measurable and varies among individuals; (3) all existing methods to correct ocular chromatic aberration face another aberration, chromatic parallax, which is inherent in the methodology; and (4) ocular chromatic aberrations have the potential to contaminate psychophysical experimental results on human spatial-color vision.

  19. Optical methods for non-contact measurements of membranes

    NASA Astrophysics Data System (ADS)

    Roose, S.; Stockman, Y.; Rochus, P.; Kuhn, T.; Lang, M.; Baier, H.; Langlois, S.; Casarosa, G.

    2009-11-01

    Structures for space applications very often suffer stringent mass constraints. Lightweight structures are developed for this purpose, through the use of deployable and/or inflatable beams, and thin-film membranes. Their inherent properties (low mass and small thickness) preclude the use of conventional measurement methods (accelerometers and displacement transducers for example) during on-ground testing. In this context, innovative non-contact measurement methods need to be investigated for these stretched membranes. The object of the present project is to review existing measurement systems capable of measuring characteristics of membrane space-structures such as: dot-projection videogrammetry (static measurements), stereo-correlation (dynamic and static measurements), fringe projection (wrinkles) and 3D laser scanning vibrometry (dynamic measurements). Therefore, minimum requirements were given for the study in order to have representative test articles covering a wide range of applications. We present test results obtained with the different methods on our test articles.

  20. Emotional Expression in Children Treated with ADHD Medication: Development of a New Measure

    ERIC Educational Resources Information Center

    Perwien, Amy R.; Kratochvil, Christopher J.; Faries, Douglas; Vaughan, Brigette; Busner, Joan; Saylor, Keith E.; Buermeyer, Curtis M.; Kaplan, Stuart; Swindle, Ralph

    2008-01-01

    Objective: Although existing instruments contain items addressing the effect of ADHD medications on emotional expression, a review of measures did not yield any instruments that thoroughly evaluated positive and negative aspects of emotional expression. Method: The Expression and Emotion Scale for Children (EESC), a parent-report measure, was…

  1. Measurement System Characterization in the Presence of Measurement Errors

    NASA Technical Reports Server (NTRS)

    Commo, Sean A.

    2012-01-01

    In the calibration of a measurement system, data are collected in order to estimate a mathematical model between one or more factors of interest and a response. Ordinary least squares is a method employed to estimate the regression coefficients in the model. The method assumes that the factors are known without error; yet, it is implicitly known that the factors contain some uncertainty. In the literature, this uncertainty is known as measurement error. The measurement error affects both the estimates of the model coefficients and the prediction, or residual, errors. There are some methods, such as orthogonal least squares, that are employed in situations where measurement errors exist, but these methods do not directly incorporate the magnitude of the measurement errors. This research proposes a new method, known as modified least squares, that combines the principles of least squares with knowledge about the measurement errors. This knowledge is expressed in terms of the variance ratio - the ratio of response error variance to measurement error variance.
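
    The modified least squares method itself is not detailed in this abstract; a classical errors-in-variables technique that uses the same variance-ratio idea is Deming regression, whose slope depends on λ, the ratio of response-error variance to measurement-error variance. A minimal sketch with simulated data, offered as an analogy rather than the author's method:

    ```python
    import numpy as np

    def deming_slope(x, y, lam):
        """Errors-in-variables slope given lam = var(y-error)/var(x-error)."""
        sxx = np.var(x, ddof=1)
        syy = np.var(y, ddof=1)
        sxy = np.cov(x, y, ddof=1)[0, 1]
        d = syy - lam * sxx
        return (d + np.sqrt(d * d + 4 * lam * sxy * sxy)) / (2 * sxy)

    rng = np.random.default_rng(4)
    x_true = np.linspace(0, 10, 50)
    x = x_true + rng.normal(0, 0.3, 50)      # factor measured with error
    y = 2.0 * x_true + 1.0 + rng.normal(0, 0.6, 50)

    lam = 0.6**2 / 0.3**2                    # known variance ratio (assumed)
    b1 = deming_slope(x, y, lam)
    b0 = y.mean() - b1 * x.mean()
    print(f"slope={b1:.3f}, intercept={b0:.3f}")
    ```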

  2. Prediction and Validation of Disease Genes Using HeteSim Scores.

    PubMed

    Zeng, Xiangxiang; Liao, Yuanlu; Liu, Yuansheng; Zou, Quan

    2017-01-01

    Deciphering gene-disease associations is an important goal in biomedical research. In this paper, we use a novel relevance measure, called HeteSim, to prioritize candidate disease genes. Two methods based on heterogeneous networks constructed using protein-protein interactions, gene-phenotype associations, and phenotype-phenotype similarity are presented. In HeteSim_MultiPath (HSMP), HeteSim scores of different paths are combined with a constant that dampens the contributions of longer paths. In HeteSim_SVM (HSSVM), HeteSim scores are combined with a machine learning method. Three-fold cross-validation experiments show that our non-machine-learning method HSMP performs better than the existing non-machine-learning methods, and our machine learning method HSSVM obtains accuracy similar to the best existing machine learning method, CATAPULT. From the analysis of the top 10 predicted genes for different diseases, we found that HSSVM avoids a disadvantage of the existing machine-learning-based methods, which tend to predict similar genes for different diseases. The data sets and Matlab code for the two methods are freely available for download at http://lab.malab.cn/data/HeteSim/index.jsp.

  3. Report on Non-invasive acoustic monitoring of D2O concentration Oct 31 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pantea, Cristian; Sinha, Dipen N.; Lakis, Rollin Evan

    There is an urgent need for real-time monitoring of the hydrogen/deuterium ratio (H/D) for heavy water production monitoring. Based upon published literature, sound speed is sensitive to the deuterium content of heavy water and can be measured using existing acoustic methods to determine the deuterium concentration in heavy water solutions. We plan to adapt existing non-invasive acoustic techniques (Swept-Frequency Acoustic Interferometry and the Gaussian-pulse acoustic technique) for the purpose of quantifying H/D ratios in solution. A successful demonstration will provide an easily implemented, low cost, and non-invasive method for remote and unattended H/D ratio measurements with a resolution of less than 0.2% vol.

  4. TMA Vessel Segmentation Based on Color and Morphological Features: Application to Angiogenesis Research

    PubMed Central

    Fernández-Carrobles, M. Milagro; Tadeo, Irene; Bueno, Gloria; Noguera, Rosa; Déniz, Oscar; Salido, Jesús; García-Rojo, Marcial

    2013-01-01

    Given that angiogenesis and lymphangiogenesis are strongly related to prognosis in neoplastic and other pathologies and that many methods exist that provide different results, we aim to construct a morphometric tool allowing us to measure different aspects of the shape and size of vascular vessels in a complete and accurate way. The developed tool presented is based on vessel closing which is an essential property to properly characterize the size and the shape of vascular and lymphatic vessels. The method is fast and accurate improving existing tools for angiogenesis analysis. The tool also improves the accuracy of vascular density measurements, since the set of endothelial cells forming a vessel is considered as a single object. PMID:24489494
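
    The vessel-closing step can be illustrated with standard morphological operations: closing bridges small gaps in the ring of endothelial cells so the vessel counts as a single object, after which its area can be measured. A minimal sketch on a synthetic binary mask (not the paper's pipeline):

    ```python
    import numpy as np
    from scipy import ndimage

    # Hypothetical binary mask of stained endothelial cells forming a vessel ring
    mask = np.zeros((60, 60), dtype=bool)
    yy, xx = np.ogrid[:60, :60]
    ring = (np.hypot(yy - 30, xx - 30) > 12) & (np.hypot(yy - 30, xx - 30) < 16)
    mask[ring] = True
    mask[28:33, 40:48] = False            # simulate a gap in the vessel wall

    # Morphological closing bridges the gap so the vessel is one object
    closed = ndimage.binary_closing(mask, structure=np.ones((9, 9)))
    filled = ndimage.binary_fill_holes(closed)

    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    print(f"{n} vessel object(s), area(s) in pixels: {sizes}")
    ```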

  5. An Analytical Method for Measuring Competence in Project Management

    ERIC Educational Resources Information Center

    González-Marcos, Ana; Alba-Elías, Fernando; Ordieres-Meré, Joaquín

    2016-01-01

    The goal of this paper is to present a competence assessment method in project management that is based on participants' performance and value creation. It seeks to close an existing gap in competence assessment in higher education. The proposed method relies on information and communication technology (ICT) tools and combines Project Management…

  6. Height Measuring System On Video Using Otsu Method

    NASA Astrophysics Data System (ADS)

    Sandy, C. L. M.; Meiyanti, R.

    2017-01-01

    Height measurement compares the magnitude of an object against a standard measuring tool. A problem with existing measurements is the continued use of simple apparatus, such as a meter rule, which requires a relatively long time. To overcome these problems, this research aims to create software that uses image processing to measure height. The captured video image is then tested: the object recorded by the video camera is segmented so that its height can be measured using Otsu's method. The system was built using Delphi 7 with the Vision Lab VCL 4.5 component. To increase the quality of the system in future research, the developed system can be combined with other methods.
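
    Otsu's method selects the grey-level threshold that best separates foreground from background, after which the object's pixel height can be measured and converted with a calibration factor. A minimal sketch on a synthetic frame; the calibration factor is assumed:

    ```python
    import numpy as np
    from skimage.filters import threshold_otsu

    # Hypothetical grayscale frame: bright object on a dark background
    frame = np.full((240, 320), 40, dtype=np.uint8)
    frame[60:200, 140:180] = 200            # the measured object

    # Otsu's method picks the threshold separating the two intensity classes
    t = threshold_otsu(frame)
    binary = frame > t

    rows = np.where(binary.any(axis=1))[0]
    height_px = rows.max() - rows.min() + 1

    cm_per_px = 0.5                          # calibration factor (assumed)
    print(f"Object height: {height_px} px = {height_px * cm_per_px:.1f} cm")
    ```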

  7. Single-Image Distance Measurement by a Smart Mobile Device.

    PubMed

    Chen, Shangwen; Fang, Xianyong; Shen, Jianbing; Wang, Linbo; Shao, Ling

    2017-12-01

    Existing distance measurement methods either require multiple images and special photographing poses or only measure height with a special view configuration. We propose a novel image-based method that can measure various types of distance from a single image captured by a smart mobile device. The embedded accelerometer is used to determine the view orientation of the device. Consequently, pixels can be back-projected to the ground, thanks to an efficient calibration method using two known distances. The distance in pixels is then transformed into a real distance in centimeters with a linear model parameterized by the magnification ratio. Various types of distance specified in the image can be computed accordingly. Experimental results demonstrate the effectiveness of the proposed method.

  8. Photogrammetry and Videogrammetry Methods Development for Solar Sail Structures. Masters Thesis awarded by George Washington Univ.

    NASA Technical Reports Server (NTRS)

    Pappa, Richard S. (Technical Monitor); Black, Jonathan T.

    2003-01-01

    This report discusses the development and application of metrology methods called photogrammetry and videogrammetry that make accurate measurements from photographs. These methods have been adapted for the static and dynamic characterization of gossamer structures, as four specific solar sail applications demonstrate. The applications prove that high-resolution, full-field, non-contact static measurements of solar sails using dot projection photogrammetry are possible as well as full-field, non-contact, dynamic characterization using dot projection videogrammetry. The accuracy of the measurement of the resonant frequencies and operating deflection shapes that were extracted surpassed expectations. While other non-contact measurement methods exist, they are not full-field and require significantly more time to take data.

  9. Design and Characterization of a Microfabricated Hydrogen Clearance Blood Flow Sensor

    PubMed Central

    Walton, Lindsay R.; Edwards, Martin A.; McCarty, Gregory S.; Wightman, R. Mark

    2016-01-01

    Background Modern cerebral blood flow (CBF) detection favors the use of either optical technologies that are limited to cortical brain regions, or expensive magnetic resonance. Decades ago, inhalation gas clearance was the choice method of quantifying CBF, but this suffered from poor temporal resolution. Electrolytic H2 clearance (EHC) generates and collects gas in situ at an electrode pair, which improves temporal resolution, but the probe size has prohibited meaningful subcortical use. New Method We microfabricated EHC electrodes to an order of magnitude smaller than those existing, on the scale of 100 µm, to permit use deep within the brain. Results Novel EHC probes were fabricated. The devices offered exceptional signal-to-noise, achieved high collection efficiencies (40 – 50%) in vitro, and agreed with theoretical modeling. An in vitro chemical reaction model was used to confirm that our devices detected flow rates higher than those expected physiologically. Computational modeling that incorporated realistic noise levels demonstrated devices would be sensitive to physiological CBF rates. Comparison with Existing Method The reduced size of our arrays makes them suitable for subcortical EHC measurements, as opposed to the larger, existing EHC electrodes that would cause substantial tissue damage. Our array can collect multiple CBF measurements per minute, and can thus resolve physiological changes occurring on a shorter timescale than existing gas clearance measurements. Conclusion We present and characterize microfabricated EHC electrodes and an accompanying theoretical model to interpret acquired data. Microfabrication allows for the high-throughput production of reproducible devices that are capable of monitoring deep brain CBF with sub-minute resolution. PMID:27102042

  10. SANA NetGO: a combinatorial approach to using Gene Ontology (GO) terms to score network alignments.

    PubMed

    Hayes, Wayne B; Mamano, Nil

    2018-04-15

    Gene Ontology (GO) terms are frequently used to score alignments between protein-protein interaction (PPI) networks. Methods exist to measure GO similarity between proteins in isolation, but proteins in a network alignment are not isolated: each pairing is dependent on every other via the alignment itself. Existing measures fail to take into account the frequency of GO terms across networks, instead imposing arbitrary rules on when to allow GO terms. Here we develop NetGO, a new measure that naturally weighs infrequent, informative GO terms more heavily than frequent, less informative GO terms, without arbitrary cutoffs, instead downweighting GO terms according to their frequency in the networks being aligned. This is a global measure applicable only to alignments, independent of pairwise GO measures, in the same sense that the edge-based EC or S3 scores are global measures of topological similarity independent of pairwise topological similarities. We demonstrate the superiority of NetGO in alignments of predetermined quality and show that NetGO correlates with alignment quality better than any existing GO-based alignment measures. We also demonstrate that NetGO provides a measure of taxonomic similarity between species, consistent with existing taxonomic measures, a feature not shared with existing GO-based network alignment measures. Finally, we re-score alignments produced by almost a dozen aligners from a previous study and show that NetGO does a better job at separating good alignments from bad ones. Available as part of SANA. whayes@uci.edu. Supplementary data are available at Bioinformatics online.

  11. Identifying Factors that Influence State-Specific Hunger Rates in the U.S.: A Simple Analytic Method for Understanding a Persistent Problem

    ERIC Educational Resources Information Center

    Edwards, Mark Evan; Weber, Bruce; Bernell, Stephanie

    2007-01-01

    An existing measure of food insecurity with hunger in the United States may serve as an effective indicator of quality of life. State level differences in that measure can reveal important differences in quality of life across places. In this study, we advocate and demonstrate two simple methods by which analysts can explore state-specific…

  12. Review: Quantifying animal feeding behaviour with a focus on pigs.

    PubMed

    Maselyne, Jarissa; Saeys, Wouter; Van Nuffel, Annelies

    2015-01-01

    The study of animal feeding behaviour is of interest to understand feeding, to investigate the effect of treatments and conditions or to predict illness. This paper reviews the different steps to undertake when studying animal feeding behaviour, with illustrations for group-housed pigs. First, one must be aware of the mechanisms that control feeding and the various influences that can change feeding behaviour. Satiety is shown to largely influence free feeding (ad libitum and without an operant condition) in animals, but 'free' feeding seems a very fragile process, given the many factors that can influence feeding behaviour. Second, a measurement method must be chosen that is compatible with the goal of the research. Several measurement methods exist, which lead to different experimental set-ups and measurement data. Sensors are available for lab conditions, for research on group-housed pigs and also for on-farm use. Most of these methods result in a record of feeding visits. However, these feeding visits are often found to be clustered into meals. Thus, the third step is to choose which unit of feeding behaviour to use for analysis. Depending on the situation, either meals, feeding visits, other raw data, or a combination thereof can be suitable. Meals are more appropriate for analysing short-term feeding behaviour, but this may not be true for disease detection. Further research is therefore needed. To cluster visits into meals, an appropriate analysis method has to be selected. The last part of this paper provides a review and discussion of the existing methods for meal determination. A variety of methods exist, with the most recent methods based on the influence of satiety on feeding. More thorough validation of the recent methods, including validation from a behavioural point of view and uniformity in the applied methods is therefore necessary. Copyright © 2014 Elsevier Inc. All rights reserved.
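
    The visit-to-meal clustering discussed here is often implemented with a simple meal criterion: visits separated by less than a threshold interval belong to the same meal. A minimal sketch with hypothetical visit times and an assumed 300 s criterion:

    ```python
    import numpy as np

    def visits_to_meals(start_times, end_times, criterion):
        """Group feeding visits into meals: a new meal starts whenever the
        interval since the previous visit exceeds the meal criterion (s)."""
        meals, current = [], [0]
        for i in range(1, len(start_times)):
            if start_times[i] - end_times[i - 1] > criterion:
                meals.append(current)
                current = []
            current.append(i)
        meals.append(current)
        return meals

    # Hypothetical visit start/end times in seconds
    starts = np.array([0, 40, 95, 700, 760, 2500])
    ends   = np.array([30, 80, 140, 730, 800, 2550])

    print(visits_to_meals(starts, ends, criterion=300))
    # -> [[0, 1, 2], [3, 4], [5]]: three meals from six visits
    ```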

  13. A systematic review and synthesis of the strengths and limitations of measuring malaria mortality through verbal autopsy.

    PubMed

    Herrera, Samantha; Enuameh, Yeetey; Adjei, George; Ae-Ngibise, Kenneth Ayuurebobi; Asante, Kwaku Poku; Sankoh, Osman; Owusu-Agyei, Seth; Yé, Yazoume

    2017-10-23

    Lack of valid and reliable data on malaria deaths continues to be a problem that plagues the global health community. To address this gap, the verbal autopsy (VA) method was developed to ascertain cause of death at the population level. Despite the adoption and wide use of VA, there are many recognized limitations of VA tools and methods, especially for measuring malaria mortality. This study synthesizes the strengths and limitations of existing VA tools and methods for measuring malaria mortality (MM) in low- and middle-income countries through a systematic literature review. The authors searched PubMed, Cochrane Library, Popline, WHOLIS, Google Scholar, and INDEPTH Network Health and Demographic Surveillance System sites' websites from 1 January 1990 to 15 January 2016 for articles and reports on MM measurement through VA. Articles were included if they presented results from a VA study in which malaria was a cause of death, or discussed limitations and challenges related to the measurement of MM through VA. Two authors independently searched the databases and websites and conducted a synthesis of articles using a standard matrix. The authors identified 828 publications; 88 were included in the final review. Most publications were VA studies; others were systematic reviews discussing VA tools or methods, editorials or commentaries, and studies using VA data to develop MM estimates. The main limitations were the low sensitivity and specificity of VA tools for measuring MM. Other limitations included the lack of standardized VA tools and methods and the lack of a 'true' gold standard for assessing the accuracy of VA-based malaria mortality measurement. Existing VA tools and methods for measuring MM have limitations. Given the need for data to measure progress toward the World Health Organization's Global Technical Strategy for Malaria 2016-2030 goals, the malaria community should define strategies for improving MM estimates, including exploring whether VA tools and methods could be further improved. Longer-term strategies should focus on improving countries' vital registration systems for more robust and timely cause-of-death data.

  14. A time domain based method for the accurate measurement of Q-factor and resonance frequency of microwave resonators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gyüre, B.; Márkus, B. G.; Bernáth, B.

    2015-09-15

    We present a novel method to determine the resonant frequency and quality factor of microwave resonators which is faster, more stable, and conceptually simpler than existing techniques. The microwave resonator is pumped with microwave radiation at a frequency away from its resonance. It then emits exponentially decaying radiation at its eigenfrequency when the excitation is rapidly switched off. The emitted microwave signal is down-converted with a microwave mixer, digitized, and its Fourier transformation (FT) directly yields the resonance curve in a single shot. Being an FT-based method, this technique possesses the Fellgett (multiplex) and Connes (accuracy) advantages, and it conceptually mimics pulsed nuclear magnetic resonance. We also establish a novel benchmark to compare the accuracy of the different approaches to microwave resonator measurement. This shows that the present method has accuracy similar to the existing ones, which are based on sweeping or modulating the frequency of the microwave radiation.
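
    The single-shot idea can be sketched numerically: the Fourier transform of an exponentially decaying oscillation is a Lorentzian whose center gives the resonant frequency; for an amplitude decay constant τ the FWHM is 1/(πτ), so Q = πf0τ. A minimal sketch with a simulated ring-down (all parameters invented):

    ```python
    import numpy as np

    # Hypothetical down-converted ring-down: f0 = 25 kHz, decay tau = 2 ms
    fs, f0, tau = 1_000_000, 25_000.0, 2e-3
    t = np.arange(0, 0.02, 1 / fs)
    signal = np.exp(-t / tau) * np.cos(2 * np.pi * f0 * t)

    # FFT of the exponential decay gives a Lorentzian resonance curve
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(t.size, 1 / fs)

    f_res = freqs[np.argmax(spectrum)]
    # For amplitude decay exp(-t/tau), FWHM = 1/(pi*tau), so Q = pi*f0*tau
    q_factor = np.pi * f_res * tau
    print(f"f0 = {f_res:.0f} Hz, Q = {q_factor:.0f}")
    ```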

  15. Quadcopter Control Using Speech Recognition

    NASA Astrophysics Data System (ADS)

    Malik, H.; Darma, S.; Soekirno, S.

    2018-04-01

    This research reports a comparison of the success rates of speech recognition systems using two types of databases, an existing database and a new database, implemented on a quadcopter for motion control. The speech recognition system used the Mel frequency cepstral coefficient (MFCC) method for feature extraction and was trained using the recursive neural network (RNN) method. MFCC is one of the feature extraction methods most widely used for speech recognition, with reported success rates of 80%-95%. The existing database was used to measure the success rate of the RNN method. The new database was created in the Indonesian language, and its success rate was compared with the results from the existing database. Sound input from the microphone was processed on a DSP module with the MFCC method to obtain the characteristic values. The characteristic values were then fed to the trained RNN, whose output was a command. The command became a control input to the single-board computer (SBC), whose output was the movement of the quadcopter. On the SBC, we used the robot operating system (ROS) as the operating system.

  16. a Method for the Measurements of Children's Feet

    NASA Astrophysics Data System (ADS)

    Bernard, , M.; Buffevant, B.; Querio, R.; Rigal, R.

    1980-07-01

    The Centre Technique du Cuir (Leather Technical Center) has been entrusted with the task of measuring children's feet. A new piece of equipment has been devised which ensures precise measurements and provides information quickly. The paper presents: 1 - the existing techniques, 2 - the research and analysis methodology, 3 - the CTC apparatus currently used in schools.

  17. Measuring Quality in Online Education: A Meta-Synthesis

    ERIC Educational Resources Information Center

    Esfijani, Azam

    2018-01-01

    This article presents a meta-synthesis review of quality of online education (QOE) measurement approaches. In order to survey the existing body of knowledge, a qualitative method was employed to investigate what quality of online education is comprised of and how the concept has been measured through the literature. To achieve this, a total of 112…

  18. Understanding Unimer Exchange Processes in Block Copolymer Micelles using NMR Diffusometry, Time-Resolved NMR, and SANS

    NASA Astrophysics Data System (ADS)

    Madsen, Louis; Kidd, Bryce; Li, Xiuli; Miller, Katherine; Cooksey, Tyler; Robertson, Megan

    Our team seeks to understand the dynamic behaviors of block copolymer micelles and their interplay with encapsulated cargo molecules. Quantifying unimer and cargo exchange rates in micelles can provide critical information for determining mechanisms of unimer exchange as well as for designing systems with specific cargo release dynamics. We are exploring the utility of NMR spectroscopy and diffusometry techniques as complements to existing SANS and fluorescence methods. One promising new method involves time-resolved NMR spin relaxation measurements, wherein mixing of fully protonated and 2H-labeled PEO-b-PCL micelle solutions shows an increase in spin-lattice relaxation time (T1) with time after mixing. This is due to a weakening of the magnetic environment surrounding 1H spins as 2H-bearing unimers join fully protonated micelles. We are measuring time constants for unimer exchange of minutes to hours, and we expect to resolve times of <1 min. This method can work on any solution NMR spectrometer and with minimal perturbation to chemical structure (unlike dye-labelled fluorescence methods). Multimodal NMR can complement existing characterization tools, expanding and accelerating dynamics measurements for polymer micelle, nanogel, and nanoparticle developers.

  19. PERFORMANCE OF A NEW DIFFUSIVE SAMPLER FOR HG0 DETERMINATION IN THE TROPOSPHERE

    EPA Science Inventory

    Mercury behaves uniquely in the atmosphere due to its volatility and long lifetime. The existing methods for measuring atmospheric mercury are either expensive or labour intensive. The present paper presents a new measurement technique, the diffusive sampler, that is portable, in...

  20. Employment of sawtooth-shaped-function excitation signal and oversampling for improving resistance measurement accuracy

    NASA Astrophysics Data System (ADS)

    Lin, Ling; Li, Shujuan; Yan, Wenjuan; Li, Gang

    2016-10-01

    In order to achieve higher accuracy in routine resistance measurement without increasing the complexity and cost of the system circuit used by existing methods, this paper presents a novel method that exploits a sawtooth-shaped-function excitation signal and oversampling technology. The excitation signal source for resistance measurement is modulated by the sawtooth-shaped-function signal, and oversampling technology is employed to increase the resolution and accuracy of the measurement system. Compared with the traditional method of using a constant-amplitude excitation signal, this method can enhance the measurement accuracy by almost one order of magnitude and reduce the root mean square error by a factor of 3.75 under the same measurement conditions. The experimental results show that the novel method significantly improves the measurement accuracy of resistance without increasing system cost or circuit complexity, which is of significant value for electronic instruments.
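
    The oversampling part of the method follows the usual rule that averaging N independent samples reduces the RMS noise by √N, so each factor of 4 in sample count yields roughly one extra bit of effective resolution. A minimal sketch with an assumed ADC noise level:

    ```python
    import numpy as np

    # Each extra bit of effective resolution needs 4x oversampling:
    # averaging N samples reduces the RMS noise by sqrt(N).
    rng = np.random.default_rng(5)
    true_voltage = 1.2345            # across the measured resistor (assumed)
    adc_noise = 0.01                 # RMS noise of a single reading, V

    single = true_voltage + rng.normal(0, adc_noise)
    oversampled = (true_voltage + rng.normal(0, adc_noise, 256)).mean()

    print(f"single-sample error:    {abs(single - true_voltage):.5f} V")
    print(f"256x oversampled error: {abs(oversampled - true_voltage):.5f} V")
    # 256 = 4**4 samples -> ~4 extra bits, i.e., ~16x lower RMS error
    ```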

  1. Calibration of streamflow gauging stations at the Tenderfoot Creek Experimental Forest

    Treesearch

    Scott W. Woods

    2007-01-01

    We used tracer based methods to calibrate eleven streamflow gauging stations at the Tenderfoot Creek Experimental Forest in western Montana. At six of the stations the measured flows were consistent with the existing rating curves. At Lower and Upper Stringer Creek, Upper Sun Creek and Upper Tenderfoot Creek the published flows, based on the existing rating curves,...
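
    Tracer-based gauging is commonly done by slug-injection salt dilution: discharge is the injected tracer mass divided by the time integral of the above-background concentration at the measurement section. The specific tracer method used in this study is not stated in the abstract; the following is a minimal sketch of the slug-injection variant with a hypothetical breakthrough curve:

    ```python
    import numpy as np

    # Slug-injection tracer gauging: Q = M / integral of (C(t) - C_bg) dt,
    # where M is the injected tracer mass and C(t) the downstream concentration.
    t = np.linspace(0, 600, 121)                 # s
    c_bg = 5.0                                   # background concentration, mg/L
    c = c_bg + 80.0 * np.exp(-((t - 180) / 60.0) ** 2)  # hypothetical curve

    M = 2_000_000.0                              # injected mass, mg (2 kg salt)
    dt = t[1] - t[0]
    area = np.sum(c - c_bg) * dt                 # mg*s/L

    Q = M / area                                 # L/s
    print(f"Discharge: {Q:.1f} L/s")
    ```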

  2. A method for evaluating discoverability and navigability of recommendation algorithms.

    PubMed

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently, no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measures proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with a method to evaluate the discoverability and navigability of recommendation algorithms. The proposed method tackles this by means of first evaluating the discoverability of recommendation algorithms by investigating structural properties of the resulting recommender systems in terms of bow tie structure, and path lengths. Second, the method evaluates navigability by simulating three different models of information seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method to evaluating navigability of arbitrary recommendation algorithms.
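
    The navigability simulation can be sketched as a random-surfer model on the recommendation graph: the success rate is the fraction of start-target pairs a simulated user reaches within a click budget. A minimal sketch, simplifying the paper's three information-seeking models to a single random surfer on a hypothetical graph:

    ```python
    import random
    import networkx as nx

    def navigation_success(graph, trials=1000, max_clicks=10, seed=0):
        """Fraction of random start/target pairs reachable within max_clicks
        by following recommendation links (a simple random-surfer model)."""
        rng = random.Random(seed)
        nodes = list(graph.nodes)
        hits = 0
        for _ in range(trials):
            current, target = rng.sample(nodes, 2)
            for _ in range(max_clicks):
                neighbors = list(graph.successors(current))
                if not neighbors:
                    break
                current = rng.choice(neighbors)  # one click on a recommendation
                if current == target:
                    hits += 1
                    break
        return hits / trials

    # Hypothetical recommendation graph over 200 items
    g = nx.gnp_random_graph(200, 0.02, directed=True, seed=1)
    print(f"Success rate: {navigation_success(g):.3f}")
    ```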

  3. From air to rubber: New techniques for measuring and replicating mouthpieces, bocals, and bores

    NASA Astrophysics Data System (ADS)

    Fuks, Leonardo

    2002-11-01

    The history of musical instruments comprises a long genealogy of models and prototypes that results from combining the copying of existing specimens with changes in constructive parameters and the addition of new devices. In making wind instruments, several techniques have traditionally been employed for extracting the external and internal dimensions of toneholes, air columns, bells, and mouthpieces. In the twentieth century, methods such as pulse reflectometry, x-ray, magnetic resonance, and ultrasound imaging have been made available for bore measurement. Advantages and drawbacks of the existing methods are discussed, and a new method is presented that makes use of the injection and coating of silicone rubber for accurate molding of the instrument. This technique is harmless to all traditional materials, making it suitable also for measurements of historical instruments. The paper presents dimensional data obtained from clarinet and saxophone mouthpieces. A set of replicas of top-quality clarinet and saxophone mouthpieces, trombone bocals, and flute headjoints is shown, with comparative acoustical and performance analyses. The application of such techniques for historical and modern instrument analysis, restoration, and manufacturing is proposed.

  4. Modified T-history method for measuring thermophysical properties of phase change materials at high temperature

    NASA Astrophysics Data System (ADS)

    Omaraa, Ehsan; Saman, Wasim; Bruno, Frank; Liu, Ming

    2017-06-01

    Latent heat storage using phase change materials (PCMs) can store large amounts of energy over a narrow temperature difference during phase transition. The thermophysical properties of PCMs, such as latent heat, specific heat, and melting and solidification temperature, need to be determined with high precision for designing and estimating the cost of latent heat storage systems. The existing laboratory standard methods, such as differential thermal analysis (DTA) and differential scanning calorimetry (DSC), use a small sample size (1-10 mg) to measure thermophysical properties, which makes these methods suitable for homogeneous materials. In addition, such a small sample can have thermophysical properties that differ from those of the bulk material and may be of limited use for evaluating the properties of mixtures. To avoid the drawbacks of existing methods, the temperature-history (T-history) method can be used with bulk quantities of PCM salt mixtures to characterize PCMs. This paper presents a modified T-history setup, designed and built at the University of South Australia, to measure the melting point, heat of fusion, specific heat, degree of supercooling and phase separation of salt mixtures for a temperature range between 200 °C and 400 °C. Sodium nitrate (NaNO3) was used to verify the accuracy of the new setup.

  5. High-precision radius automatic measurement using laser differential confocal technology

    NASA Astrophysics Data System (ADS)

    Jiang, Hongwei; Zhao, Weiqian; Yang, Jiamiao; Guo, Yongkui; Xiao, Yang

    2015-02-01

    A high-precision automatic radius measurement method using laser differential confocal technology is proposed. Based on the bipolar property of the axial intensity curve, whose null point precisely corresponds to the focus of the objective, the method uses composite PID (proportional-integral-derivative) control to ensure steady motor movement during quick-trigger scanning, uses least-squares linear fitting to obtain the cat-eye and confocal positions, and then calculates the radius of curvature of the lens. By setting the number of measurement repetitions, precise automatic repeated measurement of the radius of curvature is achieved. The experiment indicates that the method has a measurement accuracy of better than 2 ppm and a measurement repeatability of better than 0.05 μm. In comparison with the existing manual single measurement, this method has a higher measurement precision, a stronger immunity to environmental interference, and a better measurement repeatability, which is only a tenth of the former's.
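
    The least-squares null-finding step lends itself to a short sketch. This is illustrative only (not the instrument's software, and the names are invented): the differential confocal axial response crosses zero exactly at focus, so fitting a line to samples near the null and solving for the zero crossing locates the cat-eye and confocal positions, whose separation is the radius of curvature.

        import numpy as np

        def null_position(z, intensity):
            """Least-squares linear fit I(z) = a*z + b to samples near the
            null of the differential axial response; returns the zero
            crossing z = -b/a."""
            a, b = np.polyfit(z, intensity, 1)
            return -b / a

        def radius_of_curvature(z_cat, i_cat, z_conf, i_conf):
            # R is the axial separation of the cat-eye and confocal nulls.
            return abs(null_position(z_conf, i_conf) - null_position(z_cat, i_cat))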

  6. 3D shape reconstruction of specular surfaces by using phase measuring deflectometry

    NASA Astrophysics Data System (ADS)

    Zhou, Tian; Chen, Kun; Wei, Haoyun; Li, Yan

    2016-10-01

    The existing estimation methods for recovering height information from surface gradients are mainly divided into Modal and Zonal techniques. Since specular surfaces used in industry often have complex shapes and large areas, consideration must be given both to improving measurement accuracy and to accelerating on-line processing speed, which is beyond the capacity of existing estimation methods. Incorporating the Modal and Zonal approaches into a unifying scheme, we introduce in this paper an improved 3D shape reconstruction method for specular surfaces based on phase measuring deflectometry. The Modal estimation is first implemented to derive coarse height information of the measured surface as initial iteration values. Then the real shape is recovered using a modified Zonal wave-front reconstruction algorithm. By combining the advantages of the Modal and Zonal estimations, the proposed method simultaneously achieves consistently high accuracy and rapid convergence. Moreover, the iterative process, based on an advanced successive over-relaxation technique, shows consistent rejection of measurement errors, guaranteeing stability and robustness in practical applications. Both simulation and experimental measurement demonstrate the validity and efficiency of the proposed method. According to the experimental results, the computation time decreases by approximately 74.92% in contrast to the Zonal estimation, and the surface error is about 6.68 μm for an experimentally measured spherical mirror reconstructed over 391×529 pixels. In general, this method converges quickly with high accuracy, providing an efficient, stable and real-time approach for the shape reconstruction of specular surfaces in practical situations.
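
    A rough sketch of the Zonal step alone follows, under simplifying assumptions (unit pixel spacing, boundaries held fixed; not the authors' algorithm). The height h is iterated with successive over-relaxation (SOR) so that its discrete Laplacian matches the divergence of the measured gradient field (gx, gy); a Modal (polynomial) fit of the gradients would supply the initial h0, which is what gives the fast convergence reported above.

        import numpy as np

        def zonal_sor(gx, gy, h0, omega=1.8, iters=500):
            """Solve the Poisson equation laplacian(h) = div(gx, gy) by SOR,
            starting from the Modal estimate h0. Pure-Python loops are used
            here for clarity, not speed."""
            h = h0.copy()
            # divergence of the gradient field = target Laplacian of h
            f = np.gradient(gx, axis=1) + np.gradient(gy, axis=0)
            for _ in range(iters):
                for i in range(1, h.shape[0] - 1):
                    for j in range(1, h.shape[1] - 1):
                        residual = (h[i-1, j] + h[i+1, j] + h[i, j-1] + h[i, j+1]
                                    - 4.0 * h[i, j] - f[i, j])
                        h[i, j] += omega * residual / 4.0
            return h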

  7. Reliability and Validity of the Alcohol Consequences Expectations Scale

    ERIC Educational Resources Information Center

    Arriola, Kimberly R. Jacob; Usdan, Stuart; Mays, Darren; Weitzel, Jessica Aungst; Cremeens, Jennifer; Martin, Ryan J.; Borba, Christina; Bernhardt, Jay M.

    2009-01-01

    Objectives: To examine the reliability and validity of a new measure of alcohol outcome expectations for college students, the Alcohol Consequences Expectations Scale (ACES). Methods: College students (N = 169) completed the ACES and several other measures. Results: Results support the existence of 5 internally consistent subscales. Additionally,…

  8. Measuring School Psychology Trainee Self-Efficacy

    ERIC Educational Resources Information Center

    Lockwood, Adam B.; Mcclure, John; Sealander, Karen; Baker, Courtney N.

    2017-01-01

    There is an ever-increasing need for school psychology training programs to demonstrate their ability to produce competent practitioners. One method of addressing this need is through the assessment of self-efficacy. However, little research on self-efficacy in school psychology exists likely due to the lack of a psychometrically sound measure of…

  9. An integrative framework for sensor-based measurement of teamwork in healthcare

    PubMed Central

    Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J

    2015-01-01

    There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. PMID:25053579

  10. [Measuring the thickness of facial soft tissues using nuclear magnetic resonance tomography for the purpose of identification].

    PubMed

    Helmer, R; Koschorek, F; Terwey, B; Frauen, T

    1986-01-01

    Nuclear spin tomography has, since its beginnings in the seventies, steadily gained importance as an examination method in medical diagnostics because it produces an image. In forensic medicine, the NMR technique, as applied to anatomic-anthropologic questions such as the identification of skulls, is a valuable supplement to and extension of the existing methods of investigation. The results of measuring the thickness of facial soft tissue in a living test person are shown and compared with measurements obtained by sonography.

  11. Noninvasive vacuum integrity tests on fast warm-up traveling-wave tubes

    NASA Astrophysics Data System (ADS)

    Dallos, A.; Carignan, R. G.

    1989-04-01

    A method of tube vacuum monitoring that uses the tube's existing internal electrodes as an ion gage is discussed. This method has been refined using present-day instrumentation and has proved to be a precise, simple, and fast method of tube vacuum measurement. The method is noninvasive due to operation of the cathode at low temperature, which minimizes pumping or outgassing. Because of the low current levels to be measured, anode insulator leakage must be low, and the leads must be properly shielded to minimize charging effects. A description of the method, instrumentation used, limitations, and data showing results over a period of 600 days are presented.

  12. Stored grain pack factors for wheat: comparison of three methods to field measurements

    USDA-ARS?s Scientific Manuscript database

    Storing grain in bulk storage units results in grain packing from overbearing pressure, which increases grain bulk density and storage-unit capacity. This study compared pack factors of hard red winter (HRW) wheat in vertical storage bins using different methods: the existing packing model (WPACKING...

  13. Multi-Method Assessment of Feeding Problems among Children with Autism Spectrum Disorders

    ERIC Educational Resources Information Center

    Sharp, William G.; Jaquess, David L.; Lukens, Colleen T.

    2013-01-01

    Estimates suggest that atypical eating is pervasive among children with autism spectrum disorders (ASD); however, much remains unknown regarding the nature and prevalence of feeding problems in this population due to methodological limitations, including lack of adequate assessment methods and empirical evaluation of existing measures. In the…

  14. ENERGY-BASED LAND USE PREDICTORS OF PROXIMAL FACTORS AND BENTHIC DIATOM COMPOSITION IN FLORIDA FRESHWATER MARSHES

    EPA Science Inventory

    The development of rigorous biological assessments is dependent upon well-constructed abscissa, and various methods, both subjective and objective, exist to measure expected impairment at both the landscape and local scale. A new, landscape-scale method has recently been offered...

  15. An automated and universal method for measuring mean grain size from a digital image of sediment

    USGS Publications Warehouse

    Buscombe, Daniel D.; Rubin, David M.; Warrick, Jonathan A.

    2010-01-01

    Existing methods for estimating mean grain size of sediment in an image require either complicated sequences of image processing (filtering, edge detection, segmentation, etc.) or statistical procedures involving calibration. We present a new approach which uses Fourier methods to calculate grain size directly from the image without requiring calibration. Based on analysis of over 450 images, we found the accuracy to be within approximately 16% across the full range from silt to pebbles. Accuracy is comparable to, or better than, existing digital methods. The new method, in conjunction with recent advances in technology for taking appropriate images of sediment in a range of natural environments, promises to revolutionize the logistics and speed at which grain-size data may be obtained from the field.
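
    As a hedged illustration of the general idea (the published USGS algorithm differs in detail): the two-dimensional autocorrelation of a sediment image, computed via the FFT, decays over a lag comparable to the grain diameter, so the lag at which the radially sampled autocorrelation drops below a threshold is a calibration-free proxy for mean grain size in pixels. The threshold and the factor of two below are illustrative choices.

        import numpy as np

        def mean_grain_size_px(image, threshold=0.5):
            """Estimate mean grain size (pixels) from the autocorrelation
            decay of a grayscale sediment image (Wiener-Khinchin theorem)."""
            img = image - image.mean()
            spec = np.abs(np.fft.fft2(img)) ** 2
            acf = np.fft.ifft2(spec).real
            acf = np.fft.fftshift(acf) / acf.max()   # normalized, peak centered
            cy, cx = np.array(acf.shape) // 2
            profile = acf[cy, cx:]                    # 1-D cut through the peak
            below = np.nonzero(profile < threshold)[0]
            return 2 * below[0] if below.size else None   # diameter ~ 2 x decay lag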

  16. Adapting a Cancer Literacy Measure for Use among Navajo Women

    PubMed Central

    Yost, Kathleen J.; Bauer, Mark C.; Buki, Lydia P.; Austin-Garrison, Martha; Garcia, Linda V.; Hughes, Christine A.; Patten, Christi A.

    2016-01-01

    Purpose The authors designed a community-based participatory research study to develop and test a family-based behavioral intervention to improve cancer literacy and promote mammography among Navajo women. Methods Using data from focus groups and discussions with a community advisory committee, they adapted an existing questionnaire to assess cancer knowledge, barriers to mammography, and cancer beliefs for use among Navajo women. Questions measuring health literacy, numeracy, self-efficacy, cancer communication, and family support were also adapted. Results The resulting questionnaire was found to have good content validity, and to be culturally and linguistically appropriate for use among Navajo women. Conclusions It is important to consider culture and not just language when adapting existing measures for use with AI/AN populations. English-language versions of existing literacy measures may not be culturally appropriate for AI/AN populations, which could lead to a lack of semantic, technical, idiomatic, and conceptual equivalence, resulting in misinterpretation of study outcomes. PMID:26879319

  17. Exploring a taxonomy for aggression against women: can it aid conceptual clarity?

    PubMed

    Cook, Sarah; Parrott, Dominic

    2009-01-01

    The assessment of aggression against women is demanding primarily because assessment strategies do not share a common language to describe reliably the wide range of forms of aggression women experience. The lack of a common language impairs efforts to describe these experiences, understand causes and consequences of aggression against women, and develop effective intervention and prevention efforts. This review accomplishes two goals. First, it applies a theoretically and empirically based taxonomy to behaviors assessed by existing measurement instruments. Second, it evaluates whether the taxonomy provides a common language for the field. Strengths of the taxonomy include its ability to describe and categorize all forms of aggression found in existing quantitative measures. The taxonomy also classifies numerous examples of aggression discussed in the literature but notably absent from quantitative measures. Although we use existing quantitative measures as a starting place to evaluate the taxonomy, its use is not limited to quantitative methods. Implications for theory, research, and practice are discussed.

  18. Information fusion methods based on physical laws.

    PubMed

    Rao, Nageswara S V; Reister, David B; Barhen, Jacob

    2005-01-01

    We consider systems whose parameters satisfy certain easily computable physical laws. Each parameter is directly measured by a number of sensors, or estimated using measurements, or both. The measurement process may introduce both systematic and random errors which may then propagate into the estimates. Furthermore, the actual parameter values are not known since every parameter is measured or estimated, which makes the existing sample-based fusion methods inapplicable. We propose a fusion method for combining the measurements and estimators based on the least violation of physical laws that relate the parameters. Under fairly general smoothness and nonsmoothness conditions on the physical laws, we show the asymptotic convergence of our method and also derive distribution-free performance bounds based on finite samples. For suitable choices of the fuser classes, we show that for each parameter the fused estimate is probabilistically at least as good as its best measurement as well as best estimate. We illustrate the effectiveness of this method for a practical problem of fusing well-log data in methane hydrate exploration.
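
    The core idea admits a compact sketch under a simplifying assumption the paper does not make, namely a *linear* law A x = b (the paper handles general smooth and nonsmooth laws). Fusing the raw measurements/estimates z then amounts to projecting them onto the set of parameter vectors that violate the law least.

        import numpy as np

        def fuse_with_linear_law(z, A, b):
            """Minimize ||x - z||^2 subject to A x = b; the closed-form
            solution is x = z - A^T (A A^T)^{-1} (A z - b)."""
            z, b = np.asarray(z, float), np.asarray(b, float)
            correction = A.T @ np.linalg.solve(A @ A.T, A @ z - b)
            return z - correction

        # Toy example: two sensors measure masses m1, m2 and a third measures
        # the total; the law m1 + m2 - mtot = 0 reconciles the readings.
        z = [4.1, 5.8, 10.3]
        A = np.array([[1.0, 1.0, -1.0]])
        print(fuse_with_linear_law(z, A, [0.0]))   # -> [4.233, 5.933, 10.167]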

  19. Aerosol single scattering albedo estimated across China from a combination of ground and satellite measurements

    Treesearch

    Kwon Ho Lee; Zhanqing Li; Man Sing Wong; Jinyuan Xin; Wang Yuesi; Wei Min Hao; Fengsheng Zhao

    2007-01-01

    Single scattering albedo (SSA) governs the strength of aerosols in absorbing solar radiation, but few methods are available to directly measure this important quantity. There currently exist many ground-based measurements of spectral transmittance from which aerosol optical thickness (AOT) are retrieved under clear sky conditions. Reflected radiances at the top of the...

  20. Multidirectional four-dimensional shape measurement system

    NASA Astrophysics Data System (ADS)

    Lenar, Janusz; Sitnik, Robert; Witkowski, Marcin

    2012-03-01

    Currently, a lot of different scanning techniques are used for 3D imaging of human body. Most of existing systems are based on static registration of internal structures using MRI or CT techniques as well as 3D scanning of outer surface of human body by laser triangulation or structured light methods. On the other hand there is an existing mature 4D method based on tracking in time the position of retro-reflective markers attached to human body. There are two main drawbacks of this solution: markers are attached to skin (no real skeleton movement is registered) and it gives (x, y, z, t) coordinates only in those points (not for the whole surface). In this paper we present a novel multidirectional structured light measurement system that is capable of measuring 3D shape of human body surface with frequency reaching 60Hz. The developed system consists of two spectrally separated and hardware-synchronized 4D measurement heads. The principle of the measurement is based on single frame analysis. Projected frame is composed from sine-modulated intensity pattern and a special stripe allowing absolute phase measurement. Several different geometrical set-ups will be proposed depending on type of movements that are to be registered.

  1. A novel statistical approach for identification of the master regulator transcription factor.

    PubMed

    Sikdar, Sinjini; Datta, Susmita

    2017-02-02

    Transcription factors are known to play key roles in carcinogenesis and, therefore, are gaining popularity as potential therapeutic targets in drug development. A 'master regulator' transcription factor often appears to control most of the regulatory activities of the other transcription factors and the associated genes. This 'master regulator' transcription factor is at the top of the hierarchy of transcriptomic regulation. Therefore, it is important to identify and target the master regulator transcription factor for proper understanding of the associated disease process and identification of the best therapeutic option. We present a novel two-step computational approach for identification of the master regulator transcription factor in a genome. In the first step of our method, we test whether any master regulator transcription factor exists in the system by evaluating the concordance of two ranked lists of transcription factors using a statistical measure. If the concordance measure is statistically significant, we conclude that there is a master regulator. In the second step, our method identifies the master regulator transcription factor, if one exists. In simulations, our method performs reasonably well in validating the existence of a master regulator when the number of subjects in each treatment group is reasonably large. In application to two real datasets, our method confirms the existence of master regulators and identifies biologically meaningful ones. R code for implementing our method on sample test data can be found at http://www.somnathdatta.org/software . We have developed a screening method for identifying the 'master regulator' transcription factor using only gene expression data. Understanding the regulatory structure and finding the master regulator help narrow the search space for identifying biomarkers for complex diseases such as cancer. In addition to identifying the master regulator, our method provides an overview of the regulatory structure of the transcription factors which control the global gene expression profiles and, consequently, cell functioning.
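
    For illustration only: the abstract does not name its concordance statistic, but a standard choice for testing agreement between two ranked lists is Kendall's tau, sketched below; the authors' exact measure may differ.

        from scipy.stats import kendalltau

        def concordance_test(ranks_a, ranks_b, alpha=0.05):
            """Test whether two rankings of the same transcription factors
            agree more than chance; a significant result would support the
            existence of a master regulator (step one of the approach)."""
            tau, p_value = kendalltau(ranks_a, ranks_b)
            return tau, p_value, p_value < alpha

        tau, p, significant = concordance_test([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])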

  2. Sociometric Indicators of Leadership: An Exploratory Analysis

    DTIC Science & Technology

    2018-01-01

    streamline existing observational protocols and assessment methods. This research provides an initial test of sociometric badges in the context of the U.S...understand, the requirements of the mission. Traditional research and assessment methods focusing on leader and follower interactions require direct...based methods of social network analysis. Novel Measures of Leadership Building on these findings and earlier research, it is apparent that

  3. A Track Initiation Method for the Underwater Target Tracking Environment

    NASA Astrophysics Data System (ADS)

    Li, Dong-dong; Lin, Yang; Zhang, Yao

    2018-04-01

    A novel efficient track initiation method is proposed for the harsh underwater target tracking environment (heavy clutter and large measurement errors): the track splitting, evaluating, pruning and merging method (TSEPM). Track initiation demands that the method determine the existence and initial state of a target quickly and correctly. Heavy clutter and large measurement errors pose additional difficulties and challenges, which deteriorate and complicate track initiation in the harsh underwater target tracking environment. Current track initiation methods have three primary shortcomings when initializing a target: (a) they cannot effectively eliminate the disturbances of clutter; (b) they may yield a high false alarm probability and a low track detection probability; and (c) they cannot correctly estimate the initial state of a newly confirmed track. Based on the multiple hypotheses tracking principle and a modified logic-based track initiation method, track splitting creates a large number of tracks, including the true track originating from the target, in order to increase the track detection probability; and, based on an evaluation mechanism, track pruning and track merging are proposed to reduce false tracks and thereby decrease the false alarm probability. The TSEPM method can deal with the track initiation problems arising from heavy clutter and large measurement errors, determining the target's existence and estimating its initial state with the least squares method. What's more, our method is fully automatic and does not require any kind of manual input for initializing or tuning any parameter. Simulation results indicate that our new method significantly improves the performance of track initiation in the harsh underwater target tracking environment.
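
    The final least-squares estimation step can be sketched for an assumed 2-D constant-velocity target model (an assumption of this illustration, not a statement of the paper's model; names are invented): once a track survives splitting, evaluating, pruning and merging, its initial position and velocity are fitted to the associated measurements.

        import numpy as np

        def initial_state_ls(times, positions):
            """Least-squares fit of position = x0 + v * t to the first
            detections of a confirmed track.
            times: (k,) array; positions: (k, 2) array of (x, y).
            Returns [x0, y0, vx, vy]."""
            t = np.asarray(times, float) - times[0]
            H = np.column_stack([np.ones_like(t), t])
            coef, *_ = np.linalg.lstsq(H, np.asarray(positions, float), rcond=None)
            (x0, y0), (vx, vy) = coef[0], coef[1]
            return np.array([x0, y0, vx, vy])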

  4. Upgraded divertor Thomson scattering system on DIII-D

    NASA Astrophysics Data System (ADS)

    Glass, F.; Carlstrom, T. N.; Du, D.; McLean, A. G.; Taussig, D. A.; Boivin, R. L.

    2016-11-01

    A design to extend the unique divertor Thomson scattering system on DIII-D to allow measurements of electron temperature and density in high triangularity plasmas is presented. Access to this region is selectable on a shot-by-shot basis by redirecting the laser beam of the existing divertor Thomson system inboard — beneath the lower floor using a moveable, high-damage threshold, in-vacuum mirror — and then redirecting again vertically. The currently measured divertor region remains available with this mirror retracted. Scattered light is collected from viewchords near the divertor floor using in-vacuum, high temperature optical elements and relayed through the port window, before being coupled into optical fiber bundles. At higher elevations from the floor, measurements are made by dynamically re-focusing the existing divertor system collection optics. Nd:YAG laser timing, analysis of the scattered light spectrum via polychromators, data acquisition, and calibration are all handled by existing systems or methods of the current multi-pulse Thomson scattering system. Existing filtered polychromators with 7 spectral channels are employed to provide maximum measurement breadth (Te in the range of 0.5 eV to 2 keV, ne in the range of 5 × 10^18 to 1 × 10^21 m^-3) for both low Te in detachment and high Te measurement up beyond the separatrix.

  5. Upgraded divertor Thomson scattering system on DIII-D.

    PubMed

    Glass, F; Carlstrom, T N; Du, D; McLean, A G; Taussig, D A; Boivin, R L

    2016-11-01

    A design to extend the unique divertor Thomson scattering system on DIII-D to allow measurements of electron temperature and density in high triangularity plasmas is presented. Access to this region is selectable on a shot-by-shot basis by redirecting the laser beam of the existing divertor Thomson system inboard - beneath the lower floor using a moveable, high-damage threshold, in-vacuum mirror - and then redirecting again vertically. The currently measured divertor region remains available with this mirror retracted. Scattered light is collected from viewchords near the divertor floor using in-vacuum, high temperature optical elements and relayed through the port window, before being coupled into optical fiber bundles. At higher elevations from the floor, measurements are made by dynamically re-focusing the existing divertor system collection optics. Nd:YAG laser timing, analysis of the scattered light spectrum via polychromators, data acquisition, and calibration are all handled by existing systems or methods of the current multi-pulse Thomson scattering system. Existing filtered polychromators with 7 spectral channels are employed to provide maximum measurement breadth (Te in the range of 0.5 eV to 2 keV, ne in the range of 5 × 10^18 to 1 × 10^21 m^-3) for both low Te in detachment and high Te measurement up beyond the separatrix.

  6. Technical Note: Novel method for water vapour monitoring using wireless communication networks measurements

    NASA Astrophysics Data System (ADS)

    David, N.; Alpert, P.; Messer, H.

    2009-04-01

    We propose a new technique that overcomes the obstacles of the existing methods for monitoring near-surface water vapour by estimating humidity from data collected through existing wireless communication networks. Weather conditions and atmospheric phenomena affect the electromagnetic channel, causing attenuation of the radio signals. Thus, wireless communication networks are in effect built-in environmental monitoring facilities. The wireless microwave links used in these networks are widely deployed by cellular providers for backhaul communication between base stations, a few tens of meters above ground level. As a result, if all available measurements are used, the proposed method can provide moisture observations with high spatial resolution and potentially high temporal resolution. Further, the implementation cost is minimal, since the data used are already collected and saved by the cellular operators. In addition, many of these links are installed in areas where access is difficult, such as orographic terrain and complex topography. As such, our method enables measurements in places that have been hard to measure in the past, or have never been measured before. The technique is restricted to weather conditions that exclude rain, fog or clouds along the propagation path. Strong winds that may cause movement of the link transmitter or receiver (or both) can also interfere with the ability to conduct accurate measurements. We present results from real-data measurements taken from two microwave links used in a backhaul cellular network that show convincing correlation with surface station humidity measurements. The measurements were taken daily at two sites, one in northern Israel (28 measurements), the other in central Israel (29 measurements). The correlations between the microwave link measurements and the humidity gauges were 0.9 and 0.82 for the northern and central sites, respectively, and the root mean square differences (RMSD) were 1.8 g/m^3 and 3.4 g/m^3, respectively.

  7. Tilt measurement using inclinometer based on redundant configuration of MEMS accelerometers

    NASA Astrophysics Data System (ADS)

    Lu, Jiazhen; Liu, Xuecong; Zhang, Hao

    2018-05-01

    Inclinometers are widely used in tilt measurement, and the accuracy required of them is becoming ever higher. Most existing methods work effectively only when the tilt is less than 60°, and their accuracy can still be improved. A redundant configuration of micro-electro-mechanical system accelerometers is proposed in this paper, together with a least squares method and data-processing normalization. A rigorous mathematical derivation is given, and simulation and experiment are used to verify the method's feasibility. The results of a Monte Carlo simulation, repeated 3000 times, and turntable reference experiments show that the tilt measurement range can be expanded to 0°–90° by this method, that the measurement accuracy of θ can be improved by more than 10 times, and that the measurement accuracy of γ can also be improved effectively. The proposed method is shown to be effective and significant in practical application.
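
    A sketch of the least-squares step follows, under assumed axis and angle conventions (the paper's exact definitions of θ and γ, and its normalization, may differ): with n > 3 accelerometer axes whose sensing directions (rows of D) are known, the gravity direction in the body frame is the least-squares solution of D g = y, from which the tilt angles follow.

        import numpy as np

        def tilt_from_redundant_axes(D, y):
            """D: (n, 3) sensing-direction matrix; y: (n,) accelerometer
            outputs. Returns assumed pitch (theta) and roll (gamma) in
            degrees from the least-squares gravity direction."""
            g, *_ = np.linalg.lstsq(np.asarray(D, float),
                                    np.asarray(y, float), rcond=None)
            g /= np.linalg.norm(g)     # normalization of the gravity estimate
            theta = np.degrees(np.arcsin(np.clip(-g[0], -1.0, 1.0)))
            gamma = np.degrees(np.arctan2(g[1], g[2]))
            return theta, gamma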

  8. A salient region detection model combining background distribution measure for indoor robots.

    PubMed

    Li, Na; Xu, Hui; Wang, Zhenhua; Sun, Lining; Chen, Guodong

    2017-01-01

    Vision systems play an important role in the field of indoor robots. Saliency detection methods, which capture regions that are perceived as important, are used to improve the performance of visual perception systems. Most state-of-the-art methods for saliency detection, while performing outstandingly on natural images, cannot work in complicated indoor environments. Therefore, we propose a new method comprising graph-based RGB-D segmentation, a primary saliency measure, a background distribution measure, and their combination. Besides, region roundness is proposed to describe the compactness of a region in order to measure background distribution more robustly. To validate the proposed approach, eleven influential methods are compared on the DSD and ECSSD datasets. Moreover, we build a mobile robot platform for application in an actual environment and design three different kinds of experimental conditions: different viewpoints, illumination variations and partial occlusions. Experimental results demonstrate that our model outperforms existing methods and is useful for indoor mobile robots.

  9. Sine Rotation Vector Method for Attitude Estimation of an Underwater Robot

    PubMed Central

    Ko, Nak Yong; Jeong, Seokki; Bae, Youngchul

    2016-01-01

    This paper describes a method for estimating the attitude of an underwater robot. The method employs a new concept of sine rotation vector and uses both an attitude heading and reference system (AHRS) and a Doppler velocity log (DVL) for the purpose of measurement. First, the acceleration and magnetic-field measurements are transformed into sine rotation vectors and combined. The combined sine rotation vector is then transformed into the differences between the Euler angles of the measured attitude and the predicted attitude; the differences are used to correct the predicted attitude. The method was evaluated according to field-test data and simulation data and compared to existing methods that calculate angular differences directly without a preceding sine rotation vector transformation. The comparison verifies that the proposed method improves the attitude estimation performance. PMID:27490549

  10. LEAKAGE CHARACTERISTICS OF BASE OF RIVERBANK BY SELF POTENTIAL METHOD AND EXAMINATION OF EFFECTIVENESS OF SELF POTENTIAL METHOD TO HEALTH MONITORING OF BASE OF RIVERBANK

    NASA Astrophysics Data System (ADS)

    Matsumoto, Kensaku; Okada, Takashi; Takeuchi, Atsuo; Yazawa, Masato; Uchibori, Sumio; Shimizu, Yoshihiko

    Field measurement by the self-potential method using copper sulfate electrodes was performed at the base of a riverbank of the WATARASE River, which has a leakage problem, in order to examine its leakage characteristics. The measurement results showed the typical S-shape that indicates the existence of flowing groundwater, and they agreed with measurement results from the Ministry of Land, Infrastructure and Transport with good accuracy. Results of 1 m depth ground temperature detection and chain-array detection also agreed well with the results of the self-potential method. The correlation between self-potential value and groundwater velocity was examined in a model experiment, and a clear correlation was found. These results indicate that the self-potential method is an effective way to examine the groundwater characteristics at the base of a riverbank with a leakage problem.

  11. A minimally invasive displacement sensor for measuring brain micromotion in 3D with nanometer scale resolution.

    PubMed

    Vähäsöyrinki, Mikko; Tuukkanen, Tuomas; Sorvoja, Hannu; Pudas, Marko

    2009-06-15

    Electrophysiological recordings from single neurons or populations of neurons are currently the standard method for investigating neural mechanisms with high spatio-temporal resolution. It is often difficult or even impossible to obtain stable recordings because of brain movements generated by cardiac and respiratory functions and/or motor activity. An alternative to extensive surgical procedures aimed at reducing these movements would be a control system capable of compensating for the relative movement between the recording site and the electrode. As a first step towards such a system, an accurate method capable of measuring brain micromotion, preferably in 3D, in a non-invasive manner is required. A wide variety of technical solutions exist for displacement measurement. However, increased measurement sensitivity is often accompanied by strict limitations on sensor handling, implementation and the external environment. In addition, the majority of current methods are limited to measurement along only one axis. We present a novel, minimally invasive, 3D displacement sensor with displacement resolution exceeding 70 nm along each axis. The sensor is based on optoelectronic detection of the movements of a spring-like element with three degrees of freedom. It is remarkably compact, with a needle-like probe, and can be packaged to withstand considerable mishandling, which allows easy integration into existing measurement systems. We quantify the sensor performance and demonstrate its capabilities with an in vivo measurement of blowfly brain micromotion in a preparation commonly used for electrophysiology.

  12. The Impact of Symptoms and Impairments on Overall Health in US National Health Data

    PubMed Central

    Stewart, Susan T.; Woodward, Rebecca M.; Rosen, Allison B.; Cutler, David M.

    2015-01-01

    Objective To assess the effects on overall self-rated health of the broad range of symptoms and impairments that are routinely asked about in national surveys. Data We use data from adults in the nationally representative Medical Expenditure Panel Survey (MEPS) 2002 with validation in an independent sample from MEPS 2000. Methods Regression analysis is used to relate impairments and symptoms to a 100-point self-rating of general health status. The effect of each impairment and symptom on health-related quality of life (HRQOL) is estimated from regression coefficients, accounting for interactions between them. Results Impairments and symptoms most strongly associated with overall health include pain, self-care limitations, and having little or no energy. The most prevalent are moderate pain, severe anxiety, moderate depressive symptoms, and low energy. Effects are stable across different waves of MEPS, and questions cover a broader range of impairments and symptoms than existing health measurement instruments. Conclusions This method makes use of the rich detail on impairments and symptoms in existing national data, quantifying their independent effects on overall health. Given the ongoing availability of these data and the shortcomings of traditional utility methods, it would be valuable to compare existing HRQOL measures to other methods, such as the one presented herein, for use in tracking population health over time. PMID:18725850

  13. Quantum Point Contact Single-Nucleotide Conductance for DNA and RNA Sequence Identification.

    PubMed

    Afsari, Sepideh; Korshoj, Lee E; Abel, Gary R; Khan, Sajida; Chatterjee, Anushree; Nagpal, Prashant

    2017-11-28

    Several nanoscale electronic methods have been proposed for high-throughput single-molecule nucleic acid sequence identification. While many studies display a large ensemble of measurements as "electronic fingerprints" with some promise for distinguishing the DNA and RNA nucleobases (adenine, guanine, cytosine, thymine, and uracil), important metrics such as accuracy and confidence of base calling fall well below the current genomic methods. Issues such as unreliable metal-molecule junction formation, variation of nucleotide conformations, insufficient differences between the molecular orbitals responsible for single-nucleotide conduction, and lack of rigorous base calling algorithms lead to overlapping nanoelectronic measurements and poor nucleotide discrimination, especially at low coverage on single molecules. Here, we demonstrate a technique for reproducible conductance measurements on conformation-constrained single nucleotides and an advanced algorithmic approach for distinguishing the nucleobases. Our quantum point contact single-nucleotide conductance sequencing (QPICS) method uses combed and electrostatically bound single DNA and RNA nucleotides on a self-assembled monolayer of cysteamine molecules. We demonstrate that by varying the applied bias and pH conditions, molecular conductance can be switched ON and OFF, leading to reversible nucleotide perturbation for electronic recognition (NPER). We utilize NPER as a method to achieve >99.7% accuracy for DNA and RNA base calling at low molecular coverage (∼12×) using unbiased single measurements on DNA/RNA nucleotides, which represents a significant advance compared to existing sequencing methods. These results demonstrate the potential for utilizing simple surface modifications and existing biochemical moieties in individual nucleobases for a reliable, direct, single-molecule, nanoelectronic DNA and RNA nucleotide identification method for sequencing.

  14. Load Measurement on Foundations of Rockfall Protection Systems

    PubMed Central

    Volkwein, Axel; Kummer, Peter; Bitnel, Hueseyin; Campana, Lorenzo

    2016-01-01

    Rockfall protection barriers are connected to the ground using steel cables fixed with anchors and foundations for the steel posts. It is common practice to measure the forces in the cables, while to date measurements of forces in the foundations have been inadequately resolved. An overview is presented of existing methods to measure the loads on the post foundations of rockfall protection barriers. Addressing some of the inadequacies of existing approaches, a novel sensor unit is presented that is able to capture the forces acting on post foundations in all six degrees of freedom. The sensor unit consists of four triaxial force sensors placed between two steel plates. To correctly convert the measurements into the directional forces acting on the foundation a special in-situ calibration procedure is proposed that delivers a corresponding conversion matrix. PMID:26840315

  15. Reliability Estimation for Aggregated Data: Applications for Organizational Research.

    ERIC Educational Resources Information Center

    Hart, Roland J.; Bradshaw, Stephen C.

    This report provides the statistical tools necessary to measure the extent of error that exists in organizational record data and group survey data. It is felt that traditional methods of measuring error are inappropriate or incomplete when applied to organizational groups, especially in studies of organizational change when the same variables are…

  16. A Methodology for Zumbo's Third Generation DIF Analyses and the Ecology of Item Responding

    ERIC Educational Resources Information Center

    Zumbo, Bruno D.; Liu, Yan; Wu, Amery D.; Shear, Benjamin R.; Olvera Astivia, Oscar L.; Ark, Tavinder K.

    2015-01-01

    Methods for detecting differential item functioning (DIF) and item bias are typically used in the process of item analysis when developing new measures; adapting existing measures for different populations, languages, or cultures; or more generally validating test score inferences. In 2007 in "Language Assessment Quarterly," Zumbo…

  17. Measurement in Instructional Communication Research: A Decade in Review

    ERIC Educational Resources Information Center

    Mazer, Joseph P.; Graham, Elizabeth E.

    2015-01-01

    Periodic assessment and scrutiny of the discipline's measurement practices, instruments, and research findings are necessary to provide clarity and direction by revealing what we know, how we know it, and where the knowledge gaps exist. Reflective reviews have produced ample appraisals of the theory, research, and methods employed in the conduct…

  18. A measurable Lawson criterion and hydro-equivalent curves for inertial confinement fusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, C. D.; Betti, R.

    2008-01-01

    This article demonstrates how the ignition condition (Lawson criterion) for inertial confinement fusion (ICF) can be cast in a form depending on the only two parameters of the compressed fuel assembly that can be measured with methods already in existence: the hot spot ion temperature and the total areal density.

  19. Flux Redux: The Spinning Coil Comes around Again

    ERIC Educational Resources Information Center

    Lund, Daniel; Dietz, Eric; Zou, Xueli; Ard, Christopher; Lee, Jaydie; Kaneshiro, Chris; Blanton, Robert; Sun, Steven

    2017-01-01

    An essential laboratory exercise for our lower-division electromagnetism course involves the measurement of Earth's local magnetic field from the emf induced in a rotating coil of wire. Although many methods exist for the measurement of Earth's field, this one gives our students some practical experience with Faraday's law. The apparatus we had…
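
    As background (standard Faraday's-law physics, not a detail specific to the apparatus described above), the field follows from the amplitude of the induced emf. For a coil of N turns and area A rotating at angular frequency \omega in a uniform field B:

        \Phi(t) = N B A \cos(\omega t), \qquad
        \varepsilon(t) = -\frac{d\Phi}{dt} = N B A \omega \sin(\omega t)
        \quad\Longrightarrow\quad
        B = \frac{\varepsilon_0}{N A \omega}

    where \varepsilon_0 is the measured emf amplitude, so reading the peak emf from an oscilloscope gives the local field directly.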

  20. Soil Water Content Sensors as a Method of Measuring Ice Depth

    NASA Astrophysics Data System (ADS)

    Whitaker, E.; Reed, D. E.; Desai, A. R.

    2015-12-01

    Lake ice depth provides important information about local and regional climate change, weather patterns, and recreational safety, as well as impacting in situ ecology and carbon cycling. However, it is challenging to measure ice depth continuously from a remote location, as existing methods are too large, expensive, and/or time-intensive. Therefore, we present a novel application that reduces the size and cost issues by using soil water content reflectometer sensors. Analysis of sensors deployed in an environmental chamber using a scale model of a lake demonstrated their value as accurate measures of the change in ice depth over any time period, through measurement of the liquid-to-solid phase change. A robust correlation exists between volumetric water content in time as a function of environmental temperature. This relationship allows us to convert volumetric water content into ice depth. An array of these sensors will be placed in Lake Mendota, Madison, Wisconsin in winter 2015-2016, to create a temporally high-resolution ice depth record, which will be used for ecological or climatological studies while also being transmitted to the public to increase recreational safety.

  1. A modified belief entropy in Dempster-Shafer framework.

    PubMed

    Zhou, Deyun; Tang, Yongchuan; Jiang, Wen

    2017-01-01

    How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, the existing studies mainly focus on the mass function itself, while the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not that efficient. In this paper, a modified belief entropy is proposed by considering the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. What's more, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method.
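
    The abstract does not state the modified formula, but the Deng entropy it builds on is well known and is sketched below; the paper's FOD-scale correction factor is deliberately omitted here, since its exact form should be taken from the paper itself.

        from math import log2

        def deng_entropy(masses):
            """Deng entropy of a mass function; `masses` maps frozenset focal
            elements to mass values. Reduces to Shannon entropy when all
            focal elements are singletons, matching the probability
            consistency noted above."""
            return -sum(m * log2(m / (2 ** len(A) - 1))
                        for A, m in masses.items() if m > 0)

        # Example body of evidence on the FOD {a, b}:
        m = {frozenset({'a'}): 0.6, frozenset({'a', 'b'}): 0.4}
        print(deng_entropy(m))   # ~1.605 bits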

  2. A modified belief entropy in Dempster-Shafer framework

    PubMed Central

    Zhou, Deyun; Jiang, Wen

    2017-01-01

    How to quantify uncertain information in the framework of Dempster-Shafer evidence theory is still an open issue. Quite a few uncertainty measures have been proposed in the Dempster-Shafer framework; however, the existing studies mainly focus on the mass function itself, while the available information represented by the scale of the frame of discernment (FOD) in the body of evidence is ignored. Without taking full advantage of the information in the body of evidence, the existing methods are not that efficient. In this paper, a modified belief entropy is proposed by considering the scale of the FOD and the relative scale of a focal element with respect to the FOD. Inspired by Deng entropy, the new belief entropy is consistent with Shannon entropy in the sense of probability consistency. What's more, with less information loss, the new measure can overcome the shortcomings of some other uncertainty measures. A few numerical examples and a case study are presented to show the efficiency and superiority of the proposed method. PMID:28481914

  3. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE PAGES

    Huang, Lei; Xue, Junpeng; Gao, Bo; ...

    2016-12-21

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison. Especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
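
    A one-dimensional sketch of the idea (the paper treats full 2-D surfaces with a least-squares formulation): fit the measured slopes with a cubic spline and integrate the spline analytically to recover the height, up to a constant.

        import numpy as np
        from scipy.interpolate import CubicSpline

        x = np.linspace(0.0, 1.0, 50)
        slopes = np.cos(2 * np.pi * x)           # synthetic slope data
        # antiderivative() integrates the fitted piecewise polynomial
        # analytically; the constant is fixed so that h(x[0]) = 0.
        height = CubicSpline(x, slopes).antiderivative()(x)
        # exact answer is sin(2*pi*x)/(2*pi); the spline integral tracks it
        # closely except for small endpoint errors.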

  4. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Xue, Junpeng; Gao, Bo

    In this paper, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison. Especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Finally, experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.

  5. Proposed Modifications to Engineering Design Guidelines Related to Resistivity Measurements and Spacecraft Charging

    NASA Technical Reports Server (NTRS)

    Dennison, J. R.; Swaminathan, Prasanna; Jost, Randy; Brunson, Jerilyn; Green, Nelson; Frederickson, A. Robb

    2005-01-01

    A key parameter in modeling differential spacecraft charging is the resistivity of insulating materials. This determines how charge will accumulate and redistribute across the spacecraft, as well as the time scale for charge transport and dissipation. Existing spacecraft charging guidelines recommend the use of tests and imported resistivity data from handbooks that are based principally upon ASTM methods that are more applicable to classical ground conditions and are designed for problems associated with power loss through the dielectric rather than for how long charge can be stored on an insulator. These data have been found to underestimate charging effects by one to four orders of magnitude for spacecraft charging applications. A review is presented of methods to measure the resistivity of highly insulating materials, including the electrometer-resistance method, the electrometer-constant voltage method, the voltage rate-of-change method and the charge storage method. This is based on joint experimental studies conducted at NASA Jet Propulsion Laboratory and Utah State University to investigate the charge storage method and its relation to spacecraft charging. The different methods are found to be appropriate for different resistivity ranges and for different charging circumstances. A simple physics-based model of these methods allows separation of the polarization current and dark current components from long-duration measurements of resistivity over day- to month-long time scales. Model parameters are directly related to the magnitude of charge transfer and storage and the rate of charge transport. The model largely explains the observed differences in resistivity found using the different methods and provides a framework for recommendations for the appropriate test method for spacecraft materials with different resistivities and applications. The proposed changes to the existing engineering guidelines are intended to provide design engineers with more appropriate methods for considering and measuring resistivity in many typical spacecraft charging scenarios.
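
    For charge-decay measurements of the kind discussed above, a standard dielectric-relaxation relation (stated here as general background, not quoted from the guidelines) links the observed decay time constant \tau of stored charge to the resistivity:

        \tau = \rho \, \varepsilon_0 \varepsilon_r
        \quad\Longrightarrow\quad
        \rho = \frac{\tau}{\varepsilon_0 \varepsilon_r}

    For example, a decay constant of \tau \approx 10^6 s in a material with \varepsilon_r = 2 implies \rho \approx 5.6 \times 10^{16} \ \Omega\cdot\mathrm{m}, well into the highly insulating regime discussed above.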

  6. Measurement of CO2 diffusivity for carbon sequestration: a microfluidic approach for reservoir-specific analysis.

    PubMed

    Sell, Andrew; Fadaei, Hossein; Kim, Myeongsub; Sinton, David

    2013-01-02

    Predicting carbon dioxide (CO2) security and capacity in sequestration requires knowledge of CO2 diffusion into reservoir fluids. In this paper we demonstrate a microfluidic based approach to measuring the mutual diffusion coefficient of carbon dioxide in water and brine. The approach enables formation of fresh CO2-liquid interfaces; the resulting diffusion is quantified by imaging fluorescence quenching of a pH-dependent dye, and subsequent analyses. This method was applied to study the effects of site-specific variables--CO2 pressure and salinity levels--on the diffusion coefficient. In contrast to established, macro-scale pressure-volume-temperature cell methods that require large sample volumes and testing periods of hours/days, this approach requires only microliters of sample, provides results within minutes, and isolates diffusive mass transport from convective effects. The measured diffusion coefficient of CO2 in water was constant (1.86 [± 0.26] × 10^-9 m^2/s) over the range of pressures (5-50 bar) tested at 26 °C, in agreement with existing models. The effects of salinity were measured with solutions of 0-5 M NaCl, where the diffusion coefficient varied up to 3 times.  These experimental data support existing theory and demonstrate the applicability of this method for reservoir-specific testing.
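
    A hedged sketch of how such a coefficient can be extracted (an illustration under a textbook assumption of one-dimensional diffusion into a semi-infinite liquid from a fresh interface, not the authors' analysis; names are invented): concentration follows C(x, t) = C0 erfc(x / (2 sqrt(D t))), so D can be fitted to a measured, fluorescence-derived concentration profile.

        import numpy as np
        from scipy.special import erfc
        from scipy.optimize import curve_fit

        def profile(x, D, C0, t=60.0):      # t: seconds after interface formed
            """1-D semi-infinite diffusion profile."""
            return C0 * erfc(x / (2.0 * np.sqrt(D * t)))

        x = np.linspace(0, 500e-6, 40)       # distance from interface, m
        noise = 0.01 * np.random.default_rng(0).standard_normal(x.size)
        c_meas = profile(x, 1.86e-9, 1.0) + noise   # synthetic "measurement"
        (D_fit, C0_fit), _ = curve_fit(profile, x, c_meas, p0=[1e-9, 1.0])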

  7. Molecular opacities for exoplanets.

    PubMed

    Bernath, Peter F

    2014-04-28

    Spectroscopic observations of exoplanets are now possible by transit methods and direct emission. Spectroscopic requirements for exoplanets are reviewed based on existing measurements and model predictions for hot Jupiters and super-Earths. Molecular opacities needed to simulate astronomical observations can be obtained from laboratory measurements, ab initio calculations or a combination of the two approaches. This discussion article focuses mainly on laboratory measurements of hot molecules as needed for exoplanet spectroscopy.

  8. Apparatus for passive removal of subsurface contaminants and mass flow measurement

    DOEpatents

    Jackson, Dennis G [Augusta, GA; Rossabi, Joseph [Aiken, SC; Riha, Brian D [Augusta, GA

    2003-07-15

    A system for improving the Baroball valve and a method for retrofitting an existing Baroball valve. This invention improves upon the Baroball valve by reshaping the interior chamber of the valve to form a flow meter measuring chamber. The Baroball valve sealing mechanism acts as a rotameter bob for determining mass flow rate through the Baroball valve. A method for retrofitting a Baroball valve includes providing static pressure ports and connecting a measuring device, to these ports, for measuring the pressure differential between the Baroball chamber and the well. A standard curve of nominal device measurements allows the mass flow rate to be determined through the retrofitted Baroball valve.

  9. Apparatus for passive removal of subsurface contaminants and volume flow measurement

    DOEpatents

    Jackson, Dennis G.; Rossabi, Joseph; Riha, Brian D.

    2002-01-01

    A system for improving the Baroball valve and a method for retrofitting an existing Baroball valve. This invention improves upon the Baroball valve by reshaping the interior chamber of the valve to form a flow meter measuring chamber. The Baroball valve sealing mechanism acts as a rotameter bob for determining volume flow rate through the Baroball valve. A method for retrofitting a Baroball valve includes providing static pressure ports and connecting a measuring device, to these ports, for measuring the pressure differential between the Baroball chamber and the well. A standard curve of nominal device measurements allows the volume flow rate to be determined through the retrofitted Baroball valve.

  10. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size is of significant value for understanding the behavior of bubbles in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume and surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information on individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape from the recorded high-speed images taken from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading, whereas the conventional two-camera system has an error of around 10% and the one-camera system an error greater than 25%. The visualization of a rising 3D bubble demonstrates the wall's influence on bubble rotation angle and aspect ratio, which also explains the large error in single-camera measurement.
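
    The core consistency test of space carving admits a minimal sketch (camera projection functions and binary silhouettes are assumed given and calibrated; this is not the full pipeline, and all names are invented): a voxel is kept only if it projects inside the bubble silhouette in every view.

        import numpy as np

        def carve(voxels, projections, silhouettes):
            """voxels: (n, 3) candidate points; projections: list of functions
            mapping (n, 3) points to (n, 2) pixel coordinates, one per camera;
            silhouettes: list of boolean images (True inside the bubble)."""
            keep = np.ones(len(voxels), dtype=bool)
            for project, sil in zip(projections, silhouettes):
                uv = np.round(project(voxels)).astype(int)
                inside = ((uv[:, 0] >= 0) & (uv[:, 0] < sil.shape[1]) &
                          (uv[:, 1] >= 0) & (uv[:, 1] < sil.shape[0]))
                keep &= inside
                keep[inside] &= sil[uv[inside, 1], uv[inside, 0]]
            return voxels[keep]

        # Volume estimate: surviving voxel count times the voxel volume.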

  11. Clinical MR-mammography: are computer-assisted methods superior to visual or manual measurements for curve type analysis? A systematic approach.

    PubMed

    Baltzer, Pascal Andreas Thomas; Freiberg, Christian; Beger, Sebastian; Vag, Tibor; Dietzel, Matthias; Herzog, Aimee B; Gajda, Mieczyslaw; Camara, Oumar; Kaiser, Werner A

    2009-09-01

    Enhancement characteristics after administration of a contrast agent are regarded as a major criterion for differential diagnosis in magnetic resonance mammography (MRM). However, no consensus exists about the best measurement method to assess contrast enhancement kinetics. This systematic investigation was performed to compare visual estimation with manual region of interest (ROI) and computer-aided diagnosis (CAD) analysis for time curve measurements in MRM. A total of 329 patients undergoing surgery after MRM (1.5 T) were analyzed prospectively. Dynamic data were measured using visual estimation, including ROI as well as CAD methods, and classified depending on initial signal increase and delayed enhancement. Pathology revealed 469 lesions (279 malignant, 190 benign). Kappa agreement between the methods ranged from 0.78 to 0.81. Diagnostic accuracies of 74.4% (visual), 75.7% (ROI), and 76.6% (CAD) were found without statistical significant differences. According to our results, curve type measurements are useful as a diagnostic criterion in breast lesions irrespective of the method used.

  12. Bad data detection in two stage estimation using phasor measurements

    NASA Astrophysics Data System (ADS)

    Tarali, Aditya

    The ability of the Phasor Measurement Unit (PMU) to directly measure the system state has led to a steady increase in the use of PMUs over the past decade. However, in spite of their high accuracy and ability to measure the states directly, PMUs cannot completely replace conventional measurement units due to high cost. Hence it is necessary for modern estimators to use both conventional and phasor measurements together. This thesis presents an alternative method to incorporate the new PMU measurements into the existing state estimator in a systematic manner such that no major modification to the existing algorithm is necessary. It is also shown that if PMUs are placed appropriately, the phasor measurements can be used to detect and identify the bad data associated with critical measurements using this model, which cannot be detected by the conventional state estimation algorithm. The developed model is tested on the IEEE 14-, IEEE 30- and IEEE 118-bus systems under various conditions.
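
    The thesis's two-stage formulation is not reproduced in this record, but the standard machinery it builds on is the largest normalized residual test of weighted least squares (WLS) state estimation. A minimal sketch, assuming a linearized measurement model z = Hx + e with error covariance R:

    ```python
    import numpy as np

    def largest_normalized_residual(H, z, R):
        """WLS state estimate plus the largest-normalized-residual bad data test.

        H: (m, n) measurement Jacobian, z: (m,) measurement vector,
        R: (m, m) measurement error covariance. Returns the index of the most
        suspicious measurement and its normalized residual.
        """
        W = np.linalg.inv(R)
        G = H.T @ W @ H                               # gain matrix
        x_hat = np.linalg.solve(G, H.T @ W @ z)       # estimated state
        r = z - H @ x_hat                             # measurement residuals
        Omega = R - H @ np.linalg.solve(G, H.T)       # residual covariance
        # For a critical measurement, diag(Omega) is zero and the test is
        # undefined; this is exactly the gap that extra PMU redundancy closes.
        r_norm = np.abs(r) / np.sqrt(np.diag(Omega))
        k = int(np.argmax(r_norm))
        return k, r_norm[k]                           # flag if above ~3.0
    ```

    A normalized residual above a threshold (commonly about 3) flags bad data; adding well-placed PMU channels turns previously critical measurements into redundant ones, so their residuals become testable.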

  13. Identifying Stakeholders and Their Preferences about NFR by Comparing Use Case Diagrams of Several Existing Systems

    NASA Astrophysics Data System (ADS)

    Kaiya, Haruhiko; Osada, Akira; Kaijiri, Kenji

    We present a method to identify stakeholders and their preferences about non-functional requirements (NFR) by using use case diagrams of existing systems. We focus on changes in NFR because such changes help stakeholders to identify their preferences. Comparing different use case diagrams of the same domain helps us to find the changes likely to occur. We utilize the Goal-Question-Metric (GQM) method for identifying variables that characterize NFR, and we can systematically represent changes in NFR using these variables. Use cases that represent system interactions help us to bridge the gap between goals and metrics (variables), and we can easily construct measurable NFR. For validating and evaluating our method, we applied it to the application domain of Mail User Agent (MUA) systems.

  14. A Review On Missing Value Estimation Using Imputation Algorithm

    NASA Astrophysics Data System (ADS)

    Armina, Roslan; Zain, Azlan Mohd; Azizah Ali, Nor; Sallehuddin, Roselina

    2017-09-01

    The presence of missing values in a data set has always been a major problem for precise prediction. Methods for imputing missing values need to minimize the effect of incomplete data sets on the prediction model. Many algorithms have been proposed as countermeasures to the missing value problem. In this review, we provide a comprehensive analysis of existing imputation algorithms, focusing on the techniques used and on whether global or local information in the data set is exploited for missing value estimation. In addition, validation methods for imputation results and ways to measure the performance of imputation algorithms are also described. The objective of this review is to highlight possible improvements to existing methods, and it is hoped that it gives readers a better understanding of trends in imputation methods.
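
    The validation idea the review describes, masking known entries, imputing them, and scoring the error, is easy to sketch. The following is an illustrative example assuming scikit-learn's KNNImputer as a stand-in local-information method; it is not a method from the review itself.

    ```python
    import numpy as np
    from sklearn.impute import KNNImputer

    rng = np.random.default_rng(0)
    X_true = rng.normal(size=(200, 5))           # toy complete data set
    X_miss = X_true.copy()
    mask = rng.random(X_true.shape) < 0.10       # artificially hide 10% of entries
    X_miss[mask] = np.nan

    # Local-information imputation: each missing value is estimated from
    # the k most similar rows.
    X_hat = KNNImputer(n_neighbors=5).fit_transform(X_miss)

    # Validation: score the error only on the entries that were hidden.
    rmse = np.sqrt(np.mean((X_hat[mask] - X_true[mask]) ** 2))
    print(f"imputation RMSE on masked entries: {rmse:.3f}")
    ```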

  15. Estimation of suspended sediment flux in streams using continuous turbidity and flow data coupled with laboratory concentrations

    Treesearch

    Jack Lewis

    2002-01-01

    The widening use of sediment surrogate measurements such as turbidity necessitates consideration of new methods for estimating sediment flux. Generally, existing methods can simply be used in new ways. The effectiveness of a method varies according to the quality of the surrogate data and its relation to suspended sediment concentration (SSC). For this discussion,...

  16. Method of determining forest production from remotely sensed forest parameters

    DOEpatents

    Corey, J.C.; Mackey, H.E. Jr.

    1987-08-31

    A method of determining forest production entirely from remotely sensed data in which remotely sensed multispectral scanner (MSS) data on forest composition is combined with remotely sensed radar imaging data on forest stand biophysical parameters to provide a measure of forest production. A high correlation has been found to exist between the remotely sensed radar imaging data and on-site measurements of biophysical parameters such as stand height, diameter at breast height, total tree height, mean area per tree, and timber stand volume.

  17. Method of Reproduction of the Luminous Flux of the LED Light Sources by a Spherical Photometer

    NASA Astrophysics Data System (ADS)

    Huriev, M.; Neyezhmakov, P.

    2018-02-01

    In connection with the transition to energy-efficient, temporally stable light-emitting diode (LED) lighting, the problem arises of ensuring the traceability of measurements of light-source characteristics. The problem stems from the fact that existing measurement standards of luminous flux are based on spherical photometers optimized for reference incandescent lamps, whose relative spectral characteristic differs from the spectrum of the LEDs. We propose a method for reproduction of the luminous flux which solves this problem.

  18. Measurement of HO2 chemical kinetics with a new detection method

    NASA Technical Reports Server (NTRS)

    Lee, Long C.; Suto, Masako

    1986-01-01

    Reaction rate constants of HO2+O3 were measured at various temperatures using a newly developed HO2 detection method. HO2 was detected by the OH(A-X) emission produced from photodissociative excitation of HO2 at 147 nm. In order to examine the possible interference of other emitting species with the HO2 detection, the photoexcitation processes of all the chemical species existing in the discharge flow tube were also investigated. The results are summarized.

  19. Method for Smoke Spread Testing of Large Premises

    NASA Astrophysics Data System (ADS)

    Walmerdahl, P.; Werling, P.

    2001-11-01

    A method for performing non-destructive smoke spread tests has been developed, tested and applied to several existing buildings. The heat source is generated by burning methanol in water-cooled steel trays of different sizes; several tray sizes are available to cover fire sources up to nearly 1 MW. The smoke is supplied by a suitable number of smoke generators that produce a non-toxic aerosol. The advantage of the method is that it provides a means of performing non-destructive tests in existing buildings and other installations for the purpose of evaluating the functionality and design of active fire protection measures such as smoke extraction systems. In the report, the method is described in detail, and experimental data from the try-out of the method are presented together with a discussion of the applicability and flexibility of the method.

  20. Implications of neutron star properties for the existence of light dark matter

    NASA Astrophysics Data System (ADS)

    Motta, T. F.; Guichon, P. A. M.; Thomas, A. W.

    2018-05-01

    It was recently suggested that the discrepancy between two methods of measuring the lifetime of the neutron may be a result of an unseen decay mode into a dark matter particle which is almost degenerate with the neutron. We explore the consequences of this for the properties of neutron stars, finding that their known properties are in conflict with the existence of such a particle.

  1. Current issues with standards in the measurement and documentation of human skeletal anatomy.

    PubMed

    Magee, Justin; McClelland, Brian; Winder, John

    2012-09-01

    Digital modeling of human anatomy has become increasingly important and relies on well-documented quantitative anatomy literature. This type of documentation is common for the spine and pelvis; however, significant issues exist due to the lack of standardization in measurement and technique. Existing literature on quantitative anatomy for the spine and pelvis of white adults (aged 18-65 years, separated into decadal categories) was reviewed from the disciplines of anatomy, manipulative therapy, anthropometrics, occupational ergonomics, biomechanics and forensic science. The data were unified into a single normative model of the sub-axial spine. Two-dimensional orthographic drawings were produced from the 590 individual measurements identified, which informed the development of a 3D digital model. A similar review of full range of motion data was conducted as a meta-analysis and the results were applied to the existing model, providing an inter-connected, articulated digital spine. During these data analysis processes several inconsistencies were observed accompanied by an evidential lack of standardization with measurement and recording of data. These have been categorized as: anatomical terminology; scaling of measurements; measurement methodology, dimension and anatomical reference positions; global coordinate systems. There is inconsistency in anatomical terminology where independent researchers use the same terms to describe different aspects of anatomy or different terms for the same anatomy. Published standards exist for measurement methods of the human body regarding spatial interaction, anthropometric databases, automotive applications, clothing industries and for computer manikins, but none exists for skeletal anatomy. Presentation of measurements often lacks formal structure in clinical publications, seldom providing geometric reference points, therefore making digital reconstruction difficult. Published quantitative data does not follow existing international published standards relating to engineering drawing and visual communication. Large variations are also evident in standards or guidelines used for global coordinate systems across biomechanics, ergonomics, software systems and 3D software applications. This paper identifies where established good practice exists and suggests additional recommendations, informing an improved communication protocol, to assist reconstruction of skeletal anatomy using 3D digital modeling. © 2012 The Authors. Journal of Anatomy © 2012 Anatomical Society.

  2. Current issues with standards in the measurement and documentation of human skeletal anatomy

    PubMed Central

    Magee, Justin; McClelland, Brian; Winder, John

    2012-01-01

    Digital modeling of human anatomy has become increasingly important and relies on well-documented quantitative anatomy literature. This type of documentation is common for the spine and pelvis; however, significant issues exist due to the lack of standardization in measurement and technique. Existing literature on quantitative anatomy for the spine and pelvis of white adults (aged 18–65 years, separated into decadal categories) was reviewed from the disciplines of anatomy, manipulative therapy, anthropometrics, occupational ergonomics, biomechanics and forensic science. The data were unified into a single normative model of the sub-axial spine. Two-dimensional orthographic drawings were produced from the 590 individual measurements identified, which informed the development of a 3D digital model. A similar review of full range of motion data was conducted as a meta-analysis and the results were applied to the existing model, providing an inter-connected, articulated digital spine. During these data analysis processes several inconsistencies were observed accompanied by an evidential lack of standardization with measurement and recording of data. These have been categorized as: anatomical terminology; scaling of measurements; measurement methodology, dimension and anatomical reference positions; global coordinate systems. There is inconsistency in anatomical terminology where independent researchers use the same terms to describe different aspects of anatomy or different terms for the same anatomy. Published standards exist for measurement methods of the human body regarding spatial interaction, anthropometric databases, automotive applications, clothing industries and for computer manikins, but none exists for skeletal anatomy. Presentation of measurements often lacks formal structure in clinical publications, seldom providing geometric reference points, therefore making digital reconstruction difficult. Published quantitative data does not follow existing international published standards relating to engineering drawing and visual communication. Large variations are also evident in standards or guidelines used for global coordinate systems across biomechanics, ergonomics, software systems and 3D software applications. This paper identifies where established good practice exists and suggests additional recommendations, informing an improved communication protocol, to assist reconstruction of skeletal anatomy using 3D digital modeling. PMID:22747678

  3. Measuring attitudes towards the dying process: A systematic review of tools.

    PubMed

    Groebe, Bernadette; Strupp, Julia; Eisenmann, Yvonne; Schmidt, Holger; Schlomann, Anna; Rietz, Christian; Voltz, Raymond

    2018-04-01

    At the end of life, anxious attitudes concerning the dying process are common in patients in Palliative Care. Measurement tools can identify vulnerabilities, resources and the need for subsequent treatment to relieve suffering and support well-being. To systematically review available tools measuring attitudes towards dying, their operationalization, the method of measurement and the methodological quality, including generalizability to different contexts. Systematic review according to the PRISMA Statement. Methodological quality of tools was assessed by standardized review criteria. MEDLINE, PsycINFO, PsyndexTests and the Health and Psychosocial Instruments were searched from their inception to April 2017. A total of 94 identified studies reported the development and/or validation of 44 tools. Of these, 37 were questionnaires and 7 were alternative measurement methods (e.g. projective measures). In 34 of 37 questionnaires, the emotional evaluation (e.g. anxiety) towards dying is measured. Dying is operationalized in general items (n = 20), in several specific aspects of dying (n = 34) and as the dying of others (n = 14). Methodological quality of tools was reported inconsistently. Nine tools reported good internal consistency. Of 37 tools, 4 were validated in a clinical sample (e.g. terminal cancer; Huntington disease), indicating questionable generalizability to clinical contexts for most tools. Many tools exist to measure attitudes towards the dying process using different endpoints. This overview can serve as a decision framework for which tool to apply in which context. For clinical application, only few tools were available. Further validation of existing tools and potential alternative methods in various populations is needed.

  4. Measurement methods of building structures deflections

    NASA Astrophysics Data System (ADS)

    Wróblewska, Magdalena

    2018-04-01

    Underground mining exploitation leads to ground deformations manifested, in particular, by sloping terrain. Structures situated on the deforming subsoil are subject to uneven subsidence, which in consequence leads to their deflection. Before a building rectification process takes place by, e.g., uneven raising, the structure's deflection direction and value are determined so that the structure can be restored to its vertical position by the undertaken remedial measures. Deflection can be determined by applying classical as well as modern measurement techniques. The article presents examples of measurement methods, considering both the measured elements of building structures and the field measurements involved. Moreover, for a given example of a mining area, the existing deflections of buildings are compared with the sloping of the mining terrain.

  5. A systematic review of the care coordination measurement landscape

    PubMed Central

    2013-01-01

    Background Care coordination has increasingly been recognized as an important aspect of high-quality health care delivery. Robust measures of coordination processes will be essential tools to evaluate, guide and support efforts to understand and improve coordination, yet little agreement exists among stakeholders about how best to measure care coordination. We aimed to review and characterize existing measures of care coordination processes and identify areas of high and low density to guide future measure development. Methods We conducted a systematic review of measures published in MEDLINE through April 2012 and identified from additional key sources and informants. We characterized included measures with respect to the aspects of coordination measured (domain), measurement perspective (patient/family, health care professional, system representative), applicable settings and patient populations (by age and condition), and data used (survey, chart review, administrative claims). Results Among the 96 included measure instruments, most relied on survey methods (88%) and measured aspects of communication (93%), in particular the transfer of information (81%). Few measured changing coordination needs (11%). Nearly half (49%) of the instruments mapped to the patient/family perspective; 29% to the system representative perspective and 27% to the health care professional perspective. Few instruments were applicable to settings other than primary care (58%), inpatient facilities (25%), and outpatient specialty care (22%). Conclusions New measures are needed that evaluate changing coordination needs, coordination as perceived by health care professionals, coordination in the home health setting, and coordination for patients at the end of life. PMID:23537350

  6. Probabilistic Evaluation of Three-Dimensional Reconstructions from X-Ray Images Spanning a Limited Angle

    PubMed Central

    Frost, Anja; Renners, Eike; Hötter, Michael; Ostermann, Jörn

    2013-01-01

    An important part of computed tomography is the calculation of a three-dimensional reconstruction of an object from a series of X-ray images. Unfortunately, some applications do not provide sufficient X-ray images. Then, the reconstructed objects no longer truly represent the original. Inside the volumes, the accuracy seems to vary unpredictably. In this paper, we introduce a novel method to evaluate any reconstruction, voxel by voxel. The evaluation is based on a sophisticated probabilistic handling of the measured X-rays, as well as the inclusion of a priori knowledge about the materials that the object receiving the X-ray examination consists of. For each voxel, the proposed method outputs a numerical value that represents the probability that a predefined material exists at the position of the voxel at the time of the X-ray examination. Such a probabilistic quality measure was lacking so far. In our experiment, falsely reconstructed areas are detected by their low probability. In exactly reconstructed areas, a high probability predominates. Receiver Operating Characteristics not only confirm the reliability of our quality measure but also demonstrate that existing methods are less suitable for evaluating a reconstruction. PMID:23344378

  7. Improved phase-ellipse method for in-situ geophone calibration.

    USGS Publications Warehouse

    Liu, Huaibao P.; Peselnick, L.

    1986-01-01

    For amplitude and phase response calibration of moving-coil electromagnetic geophones, two parameters are needed, namely the geophone natural frequency, fo, and the geophone upper resonance frequency, fu. The phase-ellipse method is commonly used for the in situ determination of these parameters. For a given signal-to-noise ratio, the precision of the measurement of fo and fu depends on the phase sensitivity, ∂Φ/∂f. For some commercial geophones, ∂Φ/∂f at fu can be an order of magnitude less than the sensitivity at fo. Presents an improved phase-ellipse method with increased precision. Compared to measurements made with the existing phase-ellipse methods, the method shows a 6- and 3-fold improvement in the precision, respectively, on measurements of fo and fu on a commercial geophone.-from Authors

  8. New calibration method for I-scan sensors to enable the precise measurement of pressures delivered by 'pressure garments'.

    PubMed

    Macintyre, Lisa

    2011-11-01

    Accurate measurement of the pressure delivered by medical compression products is highly desirable both in monitoring treatment and in developing new pressure inducing garments or products. There are several complications in measuring pressure at the garment/body interface and at present no ideal pressure measurement tool exists for this purpose. This paper summarises a thorough evaluation of the accuracy and reproducibility of measurements taken following both of Tekscan Inc.'s recommended calibration procedures for I-scan sensors; and presents an improved method for calibrating and using I-scan pressure sensors. The proposed calibration method enables accurate (±2.1 mmHg) measurement of pressures delivered by pressure garments to body parts with a circumference ≥30 cm. This method is too cumbersome for routine clinical use but is very useful, accurate and reproducible for product development or clinical evaluation purposes. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
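
    The improved calibration procedure itself is specific to the I-scan hardware and is not detailed in this record, but the general pattern of sensor calibration it follows is easy to illustrate: record raw sensor output under known reference pressures, fit a calibration curve, then invert it for measurement. The values below are purely hypothetical.

    ```python
    import numpy as np

    # Hypothetical calibration data: known applied interface pressures (mmHg)
    # and the corresponding raw digital output of the pressure sensor.
    p_ref = np.array([5.0, 10.0, 15.0, 20.0, 30.0, 40.0])       # mmHg
    raw_ref = np.array([12.0, 31.0, 52.0, 74.0, 121.0, 170.0])  # raw counts

    # Fit a low-order polynomial mapping raw output to pressure.
    calibrate = np.poly1d(np.polyfit(raw_ref, p_ref, deg=2))

    # Convert a new raw reading from a pressure-garment test into mmHg.
    raw_reading = 65.0
    print(f"estimated pressure: {calibrate(raw_reading):.1f} mmHg")
    ```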

  9. Modifications to the NIST reference measurement procedure (RMP) for the determination of serum glucose by isotope dilution gas chromatography/mass spectrometry.

    PubMed

    Prendergast, Jocelyn L; Sniegoski, Lorna T; Welch, Michael J; Phinney, Karen W

    2010-07-01

    The definitive method (DM), now known as the reference measurement procedure (RMP), for the analysis of glucose in serum was originally published in 1982 by the National Institute of Standards and Technology (NIST). Over the years the method has been subject to a number of modifications to adapt to newer technologies and simplify sample preparation. We discuss here an adaptation of the method associated with serum glucose measurements using a modified isotope dilution gas chromatography/mass spectrometry (ID-GC/MS) method. NIST has used this modified method to certify the concentrations of glucose in SRM 965b, Glucose in Frozen Human Serum, and SRM 1950, Metabolites in Human Plasma. Comparison of results from the revised method with certified values for existing Standard Reference Materials (SRMs) demonstrated that these modifications have not affected the quality of the measurements, giving both good precision and accuracy, while reducing the sample preparation time by a day and a half.

  10. An efficient algorithm for measurement of retinal vessel diameter from fundus images based on directional filtering

    NASA Astrophysics Data System (ADS)

    Wang, Xuchu; Niu, Yanmin

    2011-02-01

    Automatic measurement of vessels from fundus images is a crucial step for assessing vessel anomalies in the ophthalmological community, where changes in retinal vessel diameter are believed to be indicative of the risk level of diabetic retinopathy. In this paper, a new retinal vessel diameter measurement method combining vessel orientation estimation and filter response is proposed. Its interesting characteristics include: (1) different from methods that only fit the vessel profiles, the proposed method extracts more stable and accurate vessel diameters by casting this problem as a maximal response problem of a variation of the Gabor filter; (2) the proposed method can directly and efficiently estimate the vessel's orientation, which is usually captured by time-consuming multi-orientation fitting techniques in many existing methods. Experimental results show that the proposed method both retains computational simplicity and achieves stable and accurate estimation results.
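
    The paper's exact filter variant is not reproduced here, but the maximal-response idea can be sketched with the standard Gabor filter from scikit-image: evaluate the filter at several orientations and take, per pixel, the orientation with the largest magnitude response. The frequency value is a tuning assumption, not one from the paper.

    ```python
    import numpy as np
    from skimage.filters import gabor

    def estimate_vessel_orientation(image, frequency=0.2, n_orientations=12):
        """Per-pixel dominant orientation by maximal Gabor filter response."""
        thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
        responses = []
        for theta in thetas:
            real, imag = gabor(image, frequency=frequency, theta=theta)
            responses.append(np.hypot(real, imag))    # magnitude response
        responses = np.stack(responses)               # (n_orientations, H, W)
        best = responses.argmax(axis=0)               # maximal-response orientation
        return thetas[best], responses.max(axis=0)
    ```

    The diameter could then be read from the intensity profile sampled perpendicular to the estimated orientation, avoiding the multi-orientation profile fitting that the paper criticizes.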

  11. Development of a general method for detection and quantification of the P35S promoter based on assessment of existing methods

    PubMed Central

    Wu, Yuhua; Wang, Yulei; Li, Jun; Li, Wei; Zhang, Li; Li, Yunjing; Li, Xiaofei; Li, Jun; Zhu, Li; Wu, Gang

    2014-01-01

    The Cauliflower mosaic virus (CaMV) 35S promoter (P35S) is a commonly used target for detection of genetically modified organisms (GMOs). There are currently 24 reported detection methods targeting different regions of the P35S promoter. Initial assessment revealed that, due to the absence of primer binding sites in the P35S sequence, 19 of the 24 reported methods failed to detect P35S in MON88913 cotton, and the other two methods could only be applied to certain GMOs. The remaining three reported methods were not suitable for measurement of P35S in some testing events, because SNPs in the primer/probe binding sites would result in abnormal amplification plots and poor linear regression parameters. In this study, we discovered a conserved region in the P35S sequence through sequencing of P35S promoters from multiple transgenic events, and developed new qualitative and quantitative detection systems targeting this conserved region. The qualitative PCR could detect the P35S promoter in 23 unique GMO events with high specificity and sensitivity. The quantitative method was suitable for measurement of the P35S promoter, exhibiting good agreement between the amount of template and Ct values for each testing event. This study provides a general P35S screening method, with greater coverage than existing methods. PMID:25483893

  12. Error model of geomagnetic-field measurement and extended Kalman-filter based compensation method

    PubMed Central

    Ge, Zhilei; Liu, Suyun; Li, Guopeng; Huang, Yan; Wang, Yanni

    2017-01-01

    The real-time accurate measurement of the geomagnetic field is the foundation of high-precision geomagnetic navigation. The existing geomagnetic-field measurement models are essentially simplified models that cannot accurately describe the sources of measurement error. On the basis of a systematic analysis of the sources of geomagnetic-field measurement error, this paper builds a complete measurement model into which the previously unconsidered geomagnetic daily variation field is introduced. This paper proposes an extended Kalman-filter based compensation method, which allows a large amount of measurement data to be used in estimating parameters to obtain the optimal solution in the statistical sense. The experimental results showed that the compensated strength of the geomagnetic field remained close to the real value and that the measurement error was basically controlled within 5 nT. In addition, this compensation method has strong applicability due to its easy data collection and its removal of the dependence on a high-precision measurement instrument. PMID:28445508
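
    The paper's full error model is not given in this record; the EKF idea can still be illustrated on the simplest sub-problem, estimating a constant magnetometer bias (hard-iron error) from the constraint that the corrected reading should match the known local field magnitude. All noise settings below are tuning assumptions.

    ```python
    import numpy as np

    def ekf_hard_iron(measurements, B0, q=1e-6, r=1e-2):
        """EKF estimate of a constant 3-axis magnetometer bias b.

        Model: each raw reading m_k should satisfy |m_k - b| = B0, where B0 is
        the known local geomagnetic field magnitude. The state is b itself,
        modelled as a slow random walk; q and r are tuning assumptions.
        """
        b = np.zeros(3)                     # state estimate (bias)
        P = np.eye(3)                       # state covariance
        Q = q * np.eye(3)
        for m in measurements:
            P = P + Q                       # predict step (state is constant)
            d = m - b
            h = np.linalg.norm(d)           # predicted field magnitude
            H = (-d / h).reshape(1, 3)      # Jacobian of |m - b| w.r.t. b
            S = float(H @ P @ H.T) + r      # innovation variance
            K = (P @ H.T) / S               # Kalman gain, shape (3, 1)
            b = b + (K * (B0 - h)).ravel()  # update with scalar innovation
            P = (np.eye(3) - K @ H) @ P
        return b
    ```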

  13. Academic Motivation of the First-Year University Students and the Self-Determination Theory

    ERIC Educational Resources Information Center

    Koseoglu, Yaman

    2013-01-01

    The Self Determination Theory has identified various types of motivation along a continuum from weakest to strongest. Yet, until recently, no reliable method existed to measure accurately the strength of motivation along this continuum. Vallerand et al. (1992) developed the Academic Motivation Scale (AMS) to measure the validity of the Self…

  14. Within-Subject Comparison of Changes in a Pretest-Posttest Design

    ERIC Educational Resources Information Center

    Hennig, Christian; Mullensiefen, Daniel; Bargmann, Jens

    2010-01-01

    The authors propose a method to compare the influence of a treatment on different properties within subjects. The properties are measured by several Likert-type-scaled items. The results show that many existing approaches, such as repeated measurement analysis of variance on sum and mean scores, a linear partial credit model, and a graded response…

  15. The Effect of Observation Length and Presentation Order on the Reliability and Validity of an Observational Measure of Teaching Quality

    ERIC Educational Resources Information Center

    Mashburn, Andrew J.; Meyer, J. Patrick; Allen, Joseph P.; Pianta, Robert C.

    2014-01-01

    Observational methods are increasingly being used in classrooms to evaluate the quality of teaching. Operational procedures for observing teachers are somewhat arbitrary in existing measures and vary across different instruments. To study the effect of different observation procedures on score reliability and validity, we conducted an experimental…

  16. Measuring Adult Literacy in Health Care: Performance of the Newest Vital Sign

    ERIC Educational Resources Information Center

    Osborn, Chandra Y.; Weiss, Barry D.; Davis, Terry C.; Skripkauskas, Silvia; Rodrigue, Christopher; Bass, Pat F., III; Wolf, Michael S.

    2007-01-01

    Objective: To compare performance of the newest vital sign (NVS) with existing literacy measures. Methods: We administered the NVS and REALM to 129 patients, and NVS and S-TOFHLA to 119 patients all in public clinics. Results: The NVS demonstrated high sensitivity for detecting limited literacy and moderate specificity (area under the receiver…

  17. Will Courts Shape Value-Added Methods for Teacher Evaluation? ACT Working Paper Series. WP-2014-2

    ERIC Educational Resources Information Center

    Croft, Michelle; Buddin, Richard

    2014-01-01

    As more states begin to adopt teacher evaluation systems based on value-added measures, legal challenges have been filed both seeking to limit the use of value-added measures ("Cook v. Stewart") and others seeking to require more robust evaluation systems ("Vergara v. California"). This study reviews existing teacher evaluation…

  18. Assessing biodiversity on the farm scale as basis for ecosystem service payments.

    PubMed

    von Haaren, Christina; Kempa, Daniela; Vogel, Katrin; Rüter, Stefan

    2012-12-30

    Ecosystem services payments must be based on a standardised, transparent assessment of the goods and services provided. This is especially relevant in the context of EU agri-environmental programs, but also for organic-food companies that foster environmental services on their contractor farms. Addressing the farm scale is important because land users/owners are major recipients of payments and they could be more involved in data generation and conservation management. No standardised system for measuring on-farm biodiversity yet exists that concentrates on performance indicators and includes farmers in generating information. A method is required that produces ordinal or metric scaled assessment results as well as management measures. Another requirement is ease of application, which includes the ease of gathering input data and understandability. In order to respond to this need, we developed a method which is designed for automated application in an open source farm assessment system named MANUELA. The method produces an ordinal scale assessment of biodiversity that includes biotopes, species, biotope connectivity and the influence of land use. In addition, specific measures for biotope types are proposed. The open source geographical information system OpenJump is used for the implementation of MANUELA. The results of the trial applications and robustness tests show that the assessment can be implemented, for the most part, using existing information as well as data available from farmers or advisors. The results are more sensitive for showing on-farm achievements and changes than existing biotope-type classifications. Such a differentiated classification is needed as a basis for ecosystem service payments and for designing effective measures. The robustness of the results with respect to biotope connectivity is comparable to that of complex models, but it should be further improved. Interviews with the test farmers substantiate that the assessment methods can be implemented on farms and they are understood by farmers. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Study of the plastic zone around the ligament of thin sheet D.E.N.T specimen subjected to tensile

    NASA Astrophysics Data System (ADS)

    Djebali, S.; Larbi, S.; Bilek, A.

    2015-03-01

    One of the assumptions of Cotterell and Reddel's method for determining the essential work of fracture is the existence of a fracture process zone surrounded by an outer plastic zone extending to the whole ligament before crack initiation. To verify this hypothesis, we developed a method based on microhardness. The hardness values measured in the domain surrounding the tensile fracture area of ST-37-2 steel sheet D.E.N.T specimens confirm the existence of the two plastic zones. The extension of the plastic deformations to the whole ligament before crack initiation and the circular shape of the outer plastic zone are revealed by the brittle coating method.

  20. Theory-Guided Selection of Discrimination Measures for Racial/Ethnic Health Disparities Research among Older Adults

    PubMed Central

    Thrasher, Angela D.; Clay, Olivio J.; Ford, Chandra L.; Stewart, Anita L.

    2013-01-01

    Objectives Discrimination may contribute to health disparities among older adults. Existing measures of perceived discrimination have provided important insights but may have limitations when used in studies of older adults. This paper illustrates the process of assessing the appropriateness of existing measures for theory-based research on perceived discrimination and health. Methods First we describe three theoretical frameworks that are relevant to the study of perceived discrimination and health – stress-process models, life course models, and the Public Health Critical Race praxis. We then review four widely-used measures of discrimination, comparing their content and describing how well they address key aspects of each theory, and discussing potential areas of modification. Discussion Using theory to guide measure selection can help improve understanding of how perceived discrimination may contribute to racial/ethnic health disparities among older adults. PMID:22451527

  1. Molecular opacities for exoplanets

    PubMed Central

    Bernath, Peter F.

    2014-01-01

    Spectroscopic observations of exoplanets are now possible by transit methods and direct emission. Spectroscopic requirements for exoplanets are reviewed based on existing measurements and model predictions for hot Jupiters and super-Earths. Molecular opacities needed to simulate astronomical observations can be obtained from laboratory measurements, ab initio calculations or a combination of the two approaches. This discussion article focuses mainly on laboratory measurements of hot molecules as needed for exoplanet spectroscopy. PMID:24664921

  2. Using rate of divergence as an objective measure to differentiate between voice signal types based on the amount of disorder in the signal

    PubMed Central

    Calawerts, William M; Lin, Liyu; Sprott, JC; Jiang, Jack J

    2016-01-01

    Objective/Hypothesis The purpose of this paper is to introduce rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that rate of divergence would provide an objective measure that can quantify all four voice types. Study Design 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate of divergence analysis methods. Methods We developed a new parameter, rate of divergence, which uses a modified version of Wolf's algorithm for calculating Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points for the next three points in the time series, followed in three time-delayed embedding dimensions. This measure was compared to currently existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. Results There was a direct relationship between voice type and rate of divergence. This calculation is especially effective at differentiating between type 3 and type 4 voices (p<0.001), and is equally effective at differentiating type 1, type 2, and type 3 signals as currently existing methods. Conclusion The rate of divergence calculation introduced is an objective measure that can be used to distinguish between all four voice types based on the amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. PMID:26920858
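
    As described, the measure embeds the signal in three time-delayed dimensions and follows how nearest-neighbour pairs separate over the next three samples. The sketch below is one plausible reading of that description, with illustrative parameter choices; it is not the authors' exact algorithm.

    ```python
    import numpy as np

    def rate_of_divergence(x, dim=3, tau=1, horizon=3):
        """Average log separation growth of nearest-neighbour pairs.

        x is a 1D voice signal. The signal is embedded in `dim` time-delayed
        dimensions; each embedded point is paired with its nearest neighbour
        and the pair's separation is followed for `horizon` samples.
        """
        n = len(x) - (dim - 1) * tau
        emb = np.column_stack([x[i * tau: i * tau + n] for i in range(dim)])
        usable = n - horizon
        rates = []
        for i in range(usable):
            d0 = np.linalg.norm(emb[:usable] - emb[i], axis=1)
            d0[i] = np.inf                        # exclude the point itself
            j = int(np.argmin(d0))                # nearest neighbour
            d_h = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
            if d0[j] > 0 and d_h > 0:
                rates.append(np.log(d_h / d0[j]))
        return float(np.mean(rates))              # larger = more disordered
    ```

    Under this reading, a more disordered (higher-type) voice would be expected to yield a larger value, consistent with the direct relationship the paper reports.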

  3. InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology

    DOE PAGES

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; ...

    2016-08-31

    Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult for result interpretation. As a result, we present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface.

  4. InteGO2: a web tool for measuring and visualizing gene semantic similarities using Gene Ontology.

    PubMed

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang; Juan, Liran; Jiang, Qinghua; Wang, Yadong; Chen, Jin

    2016-08-31

    The Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult for result interpretation. We present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface. InteGO2 can be accessed via http://mlg.hit.edu.cn:8089/ .

  5. InteGO2: A web tool for measuring and visualizing gene semantic similarities using Gene Ontology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jiajie; Li, Hongxiang; Liu, Yongzhuang

    Here, the Gene Ontology (GO) has been used in high-throughput omics research as a major bioinformatics resource. The hierarchical structure of GO provides users a convenient platform for biological information abstraction and hypothesis testing. Computational methods have been developed to identify functionally similar genes. However, none of the existing measurements take into account all the rich information in GO. Similarly, using these existing methods, web-based applications have been constructed to compute gene functional similarities, and to provide pure text-based outputs. Without a graphical visualization interface, it is difficult for result interpretation. As a result, we present InteGO2, a web tool that allows researchers to calculate the GO-based gene semantic similarities using seven widely used GO-based similarity measurements. Also, we provide an integrative measurement that synergistically integrates all the individual measurements to improve the overall performance. Using HTML5 and cytoscape.js, we provide a graphical interface in InteGO2 to visualize the resulting gene functional association networks. In conclusion, InteGO2 is an easy-to-use HTML5 based web tool. With it, researchers can measure gene or gene product functional similarity conveniently, and visualize the network of functional interactions in a graphical interface.

  6. Documenting Preservice Teacher Growth through Critical Assessment of Online Lesson Plans

    ERIC Educational Resources Information Center

    Cude, Michelle D.; Haraway, Dana L.

    2017-01-01

    This research explores the question of how students in a social studies methods course improve skills in analyzing and critiquing pre-existing lesson plans. It utilizes a pre-post authentic assessment tool to measure student growth in key skills of lesson plan critique over the course of one semester's methods instruction. The results support the…

  7. A Cost-Effectiveness/Benefit Analysis Model for Postsecondary Vocational Programs. Technical Report.

    ERIC Educational Resources Information Center

    Kim, Jin Eun

    A cost-effectiveness/benefit analysis is defined as a technique for measuring the outputs of existing and new programs in relation to their specified program objectives, against the costs of those programs. In terms of its specific use, the technique is conceptualized as a systems analysis method, an evaluation method, and a planning tool for…

  8. Comparison of Available Soil Nitrogen Assays in Control and Burned Forested Sites

    Treesearch

    Jennifer D. Knoepp; Wayne T. Swank

    1995-01-01

    The existence of several different methods for measuring net N mineralization and nitrification rates and indexing N availability has raised questions about the comparability of these methods. We compared in situ covered cores, in situ buried bags, aerobic laboratory incubations, and tension lysimetry on control and treated plots of a prescribed burn experiment in the...

  9. Developing and refining NIR calibrations for total carbohydrate composition and isoflavones and saponins in ground whole soy meal

    USDA-ARS?s Scientific Manuscript database

    Although many near infrared (NIR) spectrometric calibrations exist for a variety of components in soy, current calibration methods are often limited by either a small sample size on which the calibrations are based or a wide variation in sample preparation and measurement methods, which yields unrel...

  10. SIMPLE SAMPLE CLEAN UP PROCEDURE AND HIGH PERFORMANCE LIQUID CHROMATOGRAPHIC METHOD FOR THE ANALYSIS OF CYANURIC ACID IN HUMAN URINE

    EPA Science Inventory

    Cyanuric acid (CA) is widely used as a chlorine stabilizer in outdoor pools. No simple method exists for CA measurement in the urine of exposed swimmers. The high hydrophilicity of CA makes usage of solid phase sorbents to extract it from urine nearly impossible because of samp...

  11. The Relationship between Task Difficulty and Second Language Fluency in French: A Mixed Methods Approach

    ERIC Educational Resources Information Center

    Préfontaine, Yvonne; Kormos, Judit

    2015-01-01

    While there exists a considerable body of literature on task-based difficulty and second language (L2) fluency in English as a second language (ESL), there has been little investigation with French learners. This mixed methods study examines learner appraisals of task difficulty and their relationship to automated utterance fluency measures in…

  12. Operations research applications in nuclear energy

    NASA Astrophysics Data System (ADS)

    Johnson, Benjamin Lloyd

    This dissertation consists of three papers; the first is published in Annals of Operations Research, the second is nearing submission to INFORMS Journal on Computing, and the third is the predecessor of a paper nearing submission to Progress in Nuclear Energy. We apply operations research techniques to nuclear waste disposal and nuclear safeguards. Although these fields are different, they allow us to showcase some benefits of using operations research techniques to enhance nuclear energy applications. The first paper, "Optimizing High-Level Nuclear Waste Disposal within a Deep Geologic Repository," presents a mixed-integer programming model that determines where to place high-level nuclear waste packages in a deep geologic repository to minimize heat load concentration. We develop a heuristic that increases the size of solvable model instances. The second paper, "Optimally Configuring a Measurement System to Detect Diversions from a Nuclear Fuel Cycle," introduces a simulation-optimization algorithm and an integer-programming model to find the best, or near-best, resource-limited nuclear fuel cycle measurement system with a high degree of confidence. Given location-dependent measurement method precisions, we (i) optimize the configuration of n methods at n locations of a hypothetical nuclear fuel cycle facility, (ii) find the most important location at which to improve method precision, and (iii) determine the effect of measurement frequency on near-optimal configurations and objective values. Our results correspond to existing outcomes but we obtain them at least an order of magnitude faster. The third paper, "Optimizing Nuclear Material Control and Accountability Measurement Systems," extends the integer program from the second paper to locate measurement methods in a larger, hypothetical nuclear fuel cycle scenario given fixed purchase and utilization budgets. This paper also presents two mixed-integer quadratic programming models to increase the precision of existing methods given a fixed improvement budget and to reduce the measurement uncertainty in the system while limiting improvement costs. We quickly obtain similar or better solutions compared to several intuitive analyses that take much longer to perform.

  13. Upgraded divertor Thomson scattering system on DIII-D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glass, F., E-mail: glassf@fusion.gat.com; Carlstrom, T. N.; Du, D.

    2016-11-15

    A design to extend the unique divertor Thomson scattering system on DIII-D to allow measurements of electron temperature and density in high triangularity plasmas is presented. Access to this region is selectable on a shot-by-shot basis by redirecting the laser beam of the existing divertor Thomson system inboard — beneath the lower floor using a moveable, high-damage threshold, in-vacuum mirror — and then redirecting again vertically. The currently measured divertor region remains available with this mirror retracted. Scattered light is collected from viewchords near the divertor floor using in-vacuum, high temperature optical elements and relayed through the port window, before being coupled into optical fiber bundles. At higher elevations from the floor, measurements are made by dynamically re-focusing the existing divertor system collection optics. Nd:YAG laser timing, analysis of the scattered light spectrum via polychromators, data acquisition, and calibration are all handled by existing systems or methods of the current multi-pulse Thomson scattering system. Existing filtered polychromators with 7 spectral channels are employed to provide maximum measurement breadth (Te in the range of 0.5 eV–2 keV, ne in the range of 5 × 10^18–1 × 10^21 m^-3) for both low Te in detachment and high Te measurement up beyond the separatrix.

  14. Measurement of water pressure and deformation with time domain reflectometry cables

    NASA Astrophysics Data System (ADS)

    Dowding, Charles H.; Pierce, Charles E.

    1995-05-01

    Time domain reflectometry (TDR) techniques can be deployed to measure water pressures and relative dam abutment displacement with an array of coaxial cables either drilled and grouted or retrofitted through existing passages. Application of TDR to dam monitoring requires determination of appropriate cable types and methods to install these cables in existing dams or during new construction. This paper briefly discusses currently applied and developing TDR techniques and describes initial design considerations for TDR-based dam instrumentation. Water pressure at the base of or within the dam can be determined by measuring the water level within a hollow or air-filled coaxial cable. The ability to retrofit existing porous stone-tipped piezometers is an attractive attribute of the TDR system. Measurement of relative lateral movement can be accomplished by monitoring local shearing of a solid polyethylene-filled coaxial cable at the interface of the dam base and foundation materials or along adversely oriented joints. Uplift can be recorded by measuring cable extension as the dam displaces upward off its foundation. Since each monitoring technique requires measurements with different types of coaxial cables, a variety may be installed within the array. Multiplexing of these cables will allow monitoring from a single pulser, and measurements can be recorded on site or remotely via a modem at any time.

  15. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  16. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  17. Background Noise Reduction Using Adaptive Noise Cancellation Determined by the Cross-Correlation

    NASA Technical Reports Server (NTRS)

    Spalt, Taylor B.; Brooks, Thomas F.; Fuller, Christopher R.

    2012-01-01

    Background noise due to flow in wind tunnels contaminates desired data by decreasing the signal-to-noise ratio. The use of Adaptive Noise Cancellation to remove background noise at measurement microphones is compromised when the reference sensor measures both background and desired noise. The proposed technique modifies the classical processing configuration based on the cross-correlation between the reference and primary microphones. Background noise attenuation is achieved using a cross-correlation sample width that encompasses only the background noise and a matched delay for the adaptive processing. A present limitation of the method is that a minimum time delay between the background noise and desired signal must exist in order for the correlated parts of the desired signal to be separated from the background noise in the cross-correlation. A simulation yields primary signal recovery which can be predicted from the coherence of the background noise between the channels. Results are compared with two existing methods.
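
    The classical configuration this work modifies is a least-mean-squares (LMS) adaptive noise canceller: the reference channel drives an adaptive filter whose output is subtracted from the (optionally delayed) primary channel. A minimal sketch follows, with the delay left as a plain parameter rather than derived from the cross-correlation as the paper proposes.

    ```python
    import numpy as np

    def lms_cancel(primary, reference, n_taps=32, mu=0.01, delay=0):
        """Classical LMS adaptive noise canceller (illustrative sketch).

        primary:   microphone signal = desired signal + background noise
        reference: sensor dominated by the background noise
        delay:     samples by which the primary is delayed before subtraction
                   (assumed 0 <= delay <= n_taps)
        Returns the error signal, i.e. the noise-reduced output.
        """
        w = np.zeros(n_taps)                      # adaptive filter weights
        out = np.zeros(len(primary))
        for n in range(n_taps, len(primary)):
            x = reference[n - n_taps:n][::-1]     # latest reference samples
            y = w @ x                             # filter's estimate of the noise
            e = primary[n - delay] - y            # subtract to get the residual
            w += 2.0 * mu * e * x                 # LMS weight update
            out[n] = e
        return out
    ```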

  18. Analysis and model on space-time characteristics of wind power output based on the measured wind speed data

    NASA Astrophysics Data System (ADS)

    Shi, Wenhui; Feng, Changyou; Qu, Jixian; Zha, Hao; Ke, Dan

    2018-02-01

    Most of the existing studies on wind power output focus on the fluctuation of wind farms and ignore the spatial self-complementarity of wind power output time series. The existing probability models therefore cannot reflect the features of power systems incorporating wind farms. This paper analyzed the spatial self-complementarity of wind power and proposed a probability model that reflects the temporal characteristics of wind power on seasonal and diurnal timescales, based on sufficient measured data and an improved clustering method. This model can provide an important reference for power system simulation incorporating wind farms.
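
    The record does not specify the improved clustering method, but the general workflow, grouping measured daily output profiles to expose recurring diurnal patterns that a probability model can then be conditioned on, can be sketched with ordinary k-means (all data below is synthetic):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Stand-in data: one year of hourly wind power output, shape (days, hours).
    # In practice these profiles would come from measured wind speed data.
    rng = np.random.default_rng(1)
    daily = rng.random((365, 24))

    # Normalise each day by its peak so clusters capture shape, not level.
    profiles = daily / daily.max(axis=1, keepdims=True)

    km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(profiles)
    for c in range(km.n_clusters):
        share = np.mean(km.labels_ == c)
        peak_hour = km.cluster_centers_[c].argmax()
        print(f"pattern {c}: peak at hour {peak_hour:2d}, {share:.0%} of days")
    ```

    Tabulating how often each pattern occurs per season would give the kind of seasonal/diurnal probability model the abstract describes.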

  19. Rating curve uncertainty: A comparison of estimation methods

    USGS Publications Warehouse

    Mason, Jr., Robert R.; Kiang, Julie E.; Cohn, Timothy A.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan

    2016-01-01

    The USGS is engaged in both internal development and collaborative efforts to evaluate existing methods for characterizing the uncertainty of streamflow measurements (gaugings), stage-discharge relations (ratings), and, ultimately, the streamflow records derived from them. This paper provides a brief overview of two candidate methods that may be used to characterize the uncertainty of ratings, and illustrates the results of their application to the ratings of two USGS streamgages.
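
    Neither candidate method is detailed in this record. As context, a rating is commonly modeled as a power law q = C(h - h0)^b, and the scatter of the gaugings about the fitted curve gives a first, rough uncertainty band. A hedged sketch with invented numbers and an assumed cease-to-flow stage h0:

    ```python
    import numpy as np

    # Invented gaugings: stage h (m) and measured discharge q (m^3/s).
    h = np.array([0.8, 1.0, 1.3, 1.7, 2.2, 2.8, 3.5])
    q = np.array([1.2, 2.5, 5.4, 11.0, 23.0, 44.0, 80.0])
    h0 = 0.5                                   # assumed cease-to-flow stage

    # Fit log q = log C + b * log(h - h0) by ordinary least squares.
    A = np.column_stack([np.ones_like(h), np.log(h - h0)])
    (logC, b), *_ = np.linalg.lstsq(A, np.log(q), rcond=None)

    # Residual scatter in log space gives an approximate multiplicative band.
    s = (np.log(q) - A @ np.array([logC, b])).std(ddof=2)
    print(f"rating: q = {np.exp(logC):.2f} * (h - {h0})^{b:.2f}")
    print(f"approx. 95% band: multiply/divide by {np.exp(1.96 * s):.2f}")
    ```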

  20. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-06-22

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, atomic gyroscopes will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors, and lever-arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can estimate all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles per 5 days. Compared with existing calibration methods, the proposed method, which calibrates more error sources and higher-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.

  1. Facial Masculinity: How the Choice of Measurement Method Enables to Detect Its Influence on Behaviour

    PubMed Central

    Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique

    2014-01-01

    Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (e.g. attractiveness). The methods for measuring facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that the facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity based on Geometric Morphometrics are the best suited to linking masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion alone. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. In conclusion, we suggest that researchers in the field measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revisit the existing literature on this topic using these measurement methods. PMID:25389770

  2. Facial masculinity: how the choice of measurement method enables to detect its influence on behaviour.

    PubMed

    Sanchez-Pages, Santiago; Rodriguez-Ruiz, Claudia; Turiegano, Enrique

    2014-01-01

    Recent research has explored the relationship between facial masculinity, human male behaviour and males' perceived features (e.g. attractiveness). The methods for measuring facial masculinity employed in the literature are quite diverse. In the present paper, we use several methods of measuring facial masculinity to study the effect of this feature on risk attitudes and trustworthiness. We employ two strategic interactions to measure these two traits, a first-price auction and a trust game. We find that the facial width-to-height ratio is the best predictor of trustworthiness, and that measures of masculinity based on Geometric Morphometrics are the best suited to linking masculinity and bidding behaviour. However, we observe that the link between masculinity and bidding in the first-price auction might be driven by competitiveness and not by risk aversion alone. Finally, we test the relationship between facial measures of masculinity and perceived masculinity. In conclusion, we suggest that researchers in the field measure masculinity using one of these methods in order to obtain comparable results. We also encourage researchers to revisit the existing literature on this topic using these measurement methods.

  3. Measurement of discharge using tracers

    USGS Publications Warehouse

    Kilpatrick, Frederick A.; Cobb, Ernest D.

    1984-01-01

    The development of fluorescent dyes and fluorometers that can measure these dyes at very low concentrations has made dye-dilution methods practical for measuring discharge. These methods are particularly useful for determining discharge under certain flow conditions that are unfavorable for current-meter measurements. These include small streams, canals, and pipes where:
    - turbulence is excessive for current-meter measurement but conducive to good mixing;
    - moving rocks and debris are damaging to any instruments placed in the flow;
    - cross-sectional areas or velocities are indeterminate or changing;
    - unsteady flows exist, such as during storm-runoff events on small streams;
    - the flow is physically inaccessible or unsafe.
    From a practical standpoint, such measurements are limited primarily to small streams, due to the excessively long channel mixing lengths required of larger streams. Very good accuracy can be obtained provided:
    - adequate mixing length and time are allowed;
    - careful field and laboratory techniques are employed;
    - dye losses are not significant.
    This manual describes the slug-injection and constant-rate-injection methods of performing tracer-dilution measurements. Emphasis is on the use of fluorescent dyes as tracers and on the equipment, field methods, and laboratory procedures for performing such measurements. The tracer-velocity method is also briefly discussed.

  4. Effect of Blast-Induced Vibration from New Railway Tunnel on Existing Adjacent Railway Tunnel in Xinjiang, China

    NASA Astrophysics Data System (ADS)

    Liang, Qingguo; Li, Jie; Li, Dewu; Ou, Erfeng

    2013-01-01

    The vibrations of existing service tunnels induced by blast-excavation of adjacent tunnels have attracted much attention from both academics and engineers during recent decades in China. The blasting vibration velocity (BVV) is the most widely used controlling index for in situ monitoring and safety assessment of existing lining structures. Although numerous in situ tests and simulations have been carried out to investigate blast-induced vibrations of existing tunnels due to excavation of new tunnels (mostly by the bench excavation method), research on the overall dynamic response of existing service tunnels, in terms of not only BVV but also stress/strain, remains limited for new tunnels excavated by the full-section blasting method. In this paper, the impacts of blast-induced vibrations from a new tunnel on an existing railway tunnel in Xinjiang, China were comprehensively investigated using laboratory tests, in situ monitoring and numerical simulations. The measured data from laboratory tests and in situ monitoring were used to determine the parameters needed for the numerical simulations, and were compared with the calculated results. Based on the results from in situ monitoring and numerical simulations, which were consistent with each other, the original blasting design and corresponding parameters were adjusted to reduce the maximum BVV, which proved to be effective and safe. The effect of both the static stress before blasting and the dynamic stress induced by blasting on the total stresses in the existing tunnel lining is also discussed. The methods and related results presented could be applied in projects with similar ground conditions and distances between old and new tunnels, provided the new tunnel is to be excavated by the full-section blasting method.

  5. A calibration method for fringe reflection technique based on the analytical phase-slope description

    NASA Astrophysics Data System (ADS)

    Wu, Yuxiang; Yue, Huimin; Pan, Zhipeng; Liu, Yong

    2018-05-01

    The fringe reflection technique (FRT) has been one of the most popular methods for measuring the shape of specular surfaces in recent years. Existing FRT system calibration methods usually contain two parts: camera calibration and geometric calibration. In geometric calibration, calibrating the position of the liquid crystal display (LCD) screen is one of the most difficult steps among all the calibration procedures, and its accuracy is affected by factors such as imaging aberration, plane-mirror flatness, and the accuracy of the LCD screen's pixel size. In this paper, based on the derivation of an analytical phase-slope description of the FRT, we present a novel calibration method with no requirement to calibrate the position of the LCD screen. Moreover, the system can be arbitrarily arranged, and the imaging system can be either telecentric or non-telecentric. In our experiment measuring a spherical mirror with a 5000 mm radius, the proposed calibration method achieves a measurement error 2.5 times smaller than that of the geometric calibration method. In a wafer-surface measurement experiment, the result obtained with the proposed calibration method is closer to the interferometer result than that of the geometric calibration method.

  6. Structure and information in spatial segregation

    PubMed Central

    2017-01-01

    Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. PMID:29078323

  7. Structure and information in spatial segregation.

    PubMed

    Chodrow, Philip S

    2017-10-31

    Ethnoracial residential segregation is a complex, multiscalar phenomenon with immense moral and economic costs. Modeling the structure and dynamics of segregation is a pressing problem for sociology and urban planning, but existing methods have limitations. In this paper, we develop a suite of methods, grounded in information theory, for studying the spatial structure of segregation. We first advance existing profile and decomposition methods by posing two related regionalization methods, which allow for profile curves with nonconstant spatial scale and decomposition analysis with nonarbitrary areal units. We then formulate a measure of local spatial scale, which may be used for both detailed, within-city analysis and intercity comparisons. These methods highlight detailed insights in the structure and dynamics of urban segregation that would be otherwise easy to miss or difficult to quantify. They are computationally efficient, applicable to a broad range of study questions, and freely available in open source software. Published under the PNAS license.

  8. Robust double gain unscented Kalman filter for small satellite attitude estimation

    NASA Astrophysics Data System (ADS)

    Cao, Lu; Yang, Weiwei; Li, Hengnian; Zhang, Zhidong; Shi, Jianjun

    2017-08-01

    Limited by the low precision of small-satellite sensors, high-performance estimation theory remains a popular research topic for attitude estimation. The Kalman filter (KF) and its extensions have been widely applied to satellite attitude estimation with considerable success. However, most existing methods use only the current time-step's a priori measurement residuals to complete the measurement update and state estimation, ignoring the extraction and utilization of the previous time-step's a posteriori measurement residuals. In addition, uncertain model errors always exist in the attitude dynamic system, which places higher performance requirements on the classical KF in the attitude estimation problem. Therefore, a novel robust double-gain unscented Kalman filter (RDG-UKF) is presented in this paper to satisfy these requirements for small-satellite attitude estimation with low-precision sensors. It is assumed that the system state-estimation errors are exhibited in the measurement residual; the new method therefore derives a second Kalman gain K_k2 to make full use of the previous time-step's measurement residual and improve the utilization efficiency of the measurement data. Moreover, the sequential orthogonality principle and the unscented transform (UT) strategy are introduced to enhance the robustness and performance of the novel Kalman filter and to reduce the influence of the uncertain model errors. Numerical simulations show that the proposed RDG-UKF is more effective and more robust than the classical unscented Kalman filter (UKF) in dealing with model errors and low-precision sensors for small-satellite attitude estimation.
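
    The unscented transform (UT) mentioned above is a standard, well-documented building block; the following is a generic sketch of it, not the RDG-UKF itself, whose double-gain update is specific to the paper. The parameter names alpha, beta, and kappa follow the common sigma-point scaling convention.

      import numpy as np

      def unscented_transform(mean, cov, f, alpha=1e-3, beta=2.0, kappa=0.0):
          # Propagate a Gaussian (mean, cov) through a nonlinearity f via sigma points.
          n = len(mean)
          lam = alpha**2 * (n + kappa) - n
          L = np.linalg.cholesky((n + lam) * cov)             # matrix square root
          sigma = np.vstack([mean, mean + L.T, mean - L.T])   # 2n+1 sigma points
          wm = np.full(2 * n + 1, 0.5 / (n + lam))            # mean weights
          wc = wm.copy()                                      # covariance weights
          wm[0] = lam / (n + lam)
          wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
          Y = np.array([f(s) for s in sigma])                 # transformed points
          y_mean = wm @ Y
          y_cov = (wc[:, None] * (Y - y_mean)).T @ (Y - y_mean)
          return y_mean, y_cov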

  9. Three dimensional scattering center imaging techniques

    NASA Technical Reports Server (NTRS)

    Younger, P. R.; Burnside, W. D.

    1991-01-01

    Two methods to image scattering centers in 3-D are presented. The first method uses 2-D images generated from Inverse Synthetic Aperture Radar (ISAR) measurements taken by two vertically offset antennas. This technique is shown to provide accurate 3-D imaging capability which can be added to an existing ISAR measurement system, requiring only the addition of a second antenna. The second technique uses target impulse responses generated from wideband radar measurements from three slightly different offset antennas. This technique is shown to identify the dominant scattering centers on a target in nearly real time. The number of measurements required to image a target using this technique is very small relative to traditional imaging techniques.

  10. Parallelism measurement for base plate of standard artifact with multiple tactile approaches

    NASA Astrophysics Data System (ADS)

    Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie

    2018-01-01

    Nowadays, as workpieces become more precise and more specialized, artifacts acquire more sophisticated structures and tighter accuracy requirements, which raises the demands on measurement accuracy and measurement methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) has been widely used in many industries. In calibrating a self-developed CMM with a self-made high-precision standard artifact, the parallelism of the base plate used to fix the standard artifact was found to be an important factor affecting measurement accuracy. To measure the parallelism of the base plate, three tactile parallelism-measurement methods are employed, using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and the measurement results are compared. The experiments show that the final accuracy of all three methods reaches the micron level and meets the measurement requirements. Moreover, the three approaches suit different measurement conditions, which provides a basis for rapid and high-precision measurement under different equipment conditions.

  11. Stern-Gerlach-like approach to electron orbital angular momentum measurement

    DOE PAGES

    Harvey, Tyler R.; Grillo, Vincenzo; McMorran, Benjamin J.

    2017-02-28

    Many methods now exist to prepare free electrons into orbital-angular-momentum states, and the predicted applications of these electron states as probes of materials and scattering processes are numerous. The development of electron orbital-angular-momentum measurement techniques has lagged behind. We show that coupling between electron orbital angular momentum and a spatially varying magnetic field produces an angular-momentum-dependent focusing effect. We propose a design for an orbital-angular-momentum measurement device built on this principle. As the method of measurement is noninterferometric, the device works equally well for mixed, superposed, and pure final orbital-angular-momentum states. The energy and orbital-angular-momentum distributions of inelastically scattered electrons may be simultaneously measurable with this technique.

  12. Stern-Gerlach-like approach to electron orbital angular momentum measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harvey, Tyler R.; Grillo, Vincenzo; McMorran, Benjamin J.

    Many methods now exist to prepare free electrons into orbital-angular-momentum states, and the predicted applications of these electron states as probes of materials and scattering processes are numerous. The development of electron orbital-angular-momentum measurement techniques has lagged behind. We show that coupling between electron orbital angular momentum and a spatially varying magnetic field produces an angular-momentum-dependent focusing effect. We propose a design for an orbital-angular-momentum measurement device built on this principle. As the method of measurement is noninterferometric, the device works equally well for mixed, superposed, and pure final orbital-angular-momentum states. The energy and orbital-angular-momentum distributions of inelastically scattered electrons may be simultaneously measurable with this technique.

  13. A novel client service quality measuring model and an eHealthcare mitigating approach.

    PubMed

    Cheng, L M; Choi, Wai Ping Choi; Wong, Anita Yiu Ming

    2016-07-01

    Facing population ageing in Hong Kong, the demand for long-term elderly health care services is increasing. The challenge is to maintain good service quality under the recent shortage of nursing and care professionals without redesigning the workflow operated in the existing elderly health care industry. The Total QoS measure, based on a finite-capacity queuing model, is a reliable and effective measurement of quality of service. The value is useful for gauging staffing levels and offers a measure of efficiency enhancement when incorporating new technologies such as ICT. The implemented system improved the quality of service by more than 14%, and the released manpower allows clinical care providers to offer further value-added services without actually increasing head count. We have developed a novel quality-of-service measurement for clinical care services based on multiple queues using the finite-capacity queue model M/M/c/K/n, and the measurement is useful for estimating staff shortages in a caring institution. It is essential for future integration with the existing widely used assessment model to develop reliable measurement limits that allow an effective measurement of the public funds used in health care industries. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
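
    The M/M/c/K/n model named above has closed-form steady-state probabilities in its simpler M/M/c/K form (c servers, system capacity K, K >= c); the sketch below is an illustration of that standard queueing result rather than the paper's Total QoS measure, computing the blocking probability and mean-occupancy figures that such a staffing analysis would rest on.

      from math import factorial

      def mmck_probs(lam, mu, c, K):
          # Steady-state probabilities p_0..p_K of an M/M/c/K queue.
          a = lam / mu
          unnorm = [a**n / factorial(n) if n <= c
                    else a**n / (factorial(c) * c**(n - c)) for n in range(K + 1)]
          p0 = 1.0 / sum(unnorm)
          return [p0 * u for u in unnorm]

      def mmck_metrics(lam, mu, c, K):
          p = mmck_probs(lam, mu, c, K)
          L = sum(n * pn for n, pn in enumerate(p))   # mean number in system
          lam_eff = lam * (1 - p[K])                  # arrival rate of admitted clients
          return {"blocking": p[K], "L": L, "W": L / lam_eff}  # W via Little's law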

  14. In vivo THz sensing of the cornea of the eye

    NASA Astrophysics Data System (ADS)

    Ozheredov, Ilya; Prokopchuk, Mikhail; Mischenko, Mikhail; Safonova, Tatiana; Solyankin, Petr; Larichev, Andrey; Angeluts, Andrey; Balakin, Alexei; Shkurinov, Alexander

    2018-05-01

    Measurement of the absolute value of the humidity of the cornea of the human eye and its dynamics is of paramount importance for the preservation of eyesight. In the present paper we demonstrate that terahertz technologies can be applied in practice to quantitative measurement of the physiological dynamics of the tear film and to sensing of corneal tissue hydration. We suggest equipment suitable for clinical application and a method for absolute calibration of the measured values. The proposed method is fundamentally different from existing and currently available methods of ophthalmological diagnosis. This suggests that the developed technique may have high diagnostic significance and can be used in the study and treatment of several diseases of the ocular surface.

  15. 75 FR 80117 - Methods for Measurement of Filterable PM10

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ...This action promulgates amendments to Methods 201A and 202. The final amendments to Method 201A add a particle-sizing device to allow for sampling of particulate matter with mean aerodynamic diameters less than or equal to 2.5 micrometers (PM2.5 or fine particulate matter). The final amendments to Method 202 revise the sample collection and recovery procedures of the method to reduce the formation of reaction artifacts that could lead to inaccurate measurements of condensable particulate matter. Additionally, the final amendments to Method 202 eliminate most of the hardware and analytical options in the existing method, thereby increasing the precision of the method and improving the consistency in the measurements obtained between source tests performed under different regulatory authorities. This action also announces that EPA is taking no action to affect the already established January 1, 2011 sunset date for the New Source Review (NSR) transition period, during which EPA is not requiring that State NSR programs address condensable particulate matter emissions.

  16. Power System Transient Diagnostics Based on Novel Traveling Wave Detection

    NASA Astrophysics Data System (ADS)

    Hamidi, Reza Jalilzadeh

    Modern electrical power systems demand novel diagnostic approaches that enhance system resiliency by improving on state-of-the-art algorithms. The proliferation of high-voltage optical transducers and high time-resolution measurements provides opportunities to develop novel diagnostic methods for very fast transients in power systems. At the same time, emerging complex configurations, such as multi-terminal hybrid transmission systems, limit the applications of the traditional diagnostic methods, especially in fault location and health monitoring. Impedance-based fault-location methods are inefficient for cross-bonded cables, which are widely used to connect offshore wind farms to the main grid. Thus, this dissertation first presents a novel traveling-wave-based fault-location method for hybrid multi-terminal transmission systems. The proposed method utilizes time-synchronized high-sampling-rate voltage measurements. The traveling-wave arrival times (ATs) are detected by observing the squares of the wavelet transform coefficients. Using the ATs, an over-determined set of linear equations is developed for noise reduction; the faulty segment is then determined from the characteristics of the resulting equation set, and the fault location is estimated. The accuracy and capabilities of the proposed fault-location method are evaluated, and also compared to the existing traveling-wave-based method, for a wide range of fault parameters. In order to improve power-system stability, auto-reclosing (AR), single-phase auto-reclosing (SPAR), and adaptive single-phase auto-reclosing (ASPAR) methods have been developed, with the final objective of distinguishing transient from permanent faults so that transient faults can be cleared without de-energizing the healthy phases. However, the features of electrical arcs (transient faults) are severely influenced by a number of random parameters, including the convection of the air and plasma, wind speed, air pressure, and humidity. The dead-time (the de-energization duration of the faulty phase) is therefore unpredictable, and conservatively long dead-times are usually adopted by protection engineers. However, if the exact arc extinction time is determined, power-system stability and quality will be enhanced. Therefore, a new method for detecting arc extinction times, leading to a new ASPAR method utilizing power-line-carrier (PLC) signals, is presented. The efficiency of the proposed ASPAR method is verified through simulations and compared with existing ASPAR methods. High-sampling-rate measurements are prone to be skewed by environmental noise and by analog-to-digital (A/D) converter quantization errors, so noise-contaminated measurements are the major source of uncertainty and error in the outcomes of traveling-wave-based diagnostic applications. The existing AT-detection methods do not provide enough sensitivity and selectivity at the same time. Therefore, a new AT-detection method based on the short-time matrix pencil method (STMPM) is developed to accurately detect the arrival times of traveling waves with low signal-to-noise ratios (SNRs). As STMPM is based on matrix algebra, it is challenging to implement in microprocessor-based fault locators. Hence, a fully recursive and computationally efficient AT-detection method based on an adaptive discrete Kalman filter (ADKF) is introduced, which is suitable for microprocessors and able to accomplish accurate AT-detection for online applications such as ultra-high-speed protection. Both proposed AT-detection methods are evaluated in extensive simulation studies, and their superior outcomes are compared to those of existing methods.
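
    The multi-terminal formulation above builds on the classic two-terminal traveling-wave relation: a wavefront from a fault at distance x from terminal A reaches the two line ends at times differing by (2x - L)/v. A minimal sketch of that building block follows (the wave-speed default is an assumed typical value; the dissertation's over-determined multi-terminal solution is more involved):

      def tw_fault_location(t_a, t_b, line_length_km, wave_speed_km_s=2.9e5):
          # Distance of the fault from terminal A, given first-arrival times at A and B.
          return 0.5 * (line_length_km + wave_speed_km_s * (t_a - t_b))

      # Example: on a 100 km line, a wavefront reaching A 100 microseconds before B
      # places the fault at tw_fault_location(0.0, 100e-6, 100.0) = 35.5 km from A.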

  17. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples

    PubMed Central

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-01-01

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. PMID:26833260
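
    For reference, the F-measure cited above is simply the harmonic mean of precision and recall; a one-line definition follows (the recall value in the comment is hypothetical, since only precision is reported above):

      def f_measure(precision, recall):
          # F1 score: harmonic mean of precision and recall.
          return 2 * precision * recall / (precision + recall)

      # E.g., the reported precision of 0.845 combined with an assumed recall of 0.80
      # would give f_measure(0.845, 0.80) ≈ 0.822.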

  18. Review Article "Valuating the intangible effects of natural hazards - review and analysis of the costing methods"

    NASA Astrophysics Data System (ADS)

    Markantonis, V.; Meyer, V.; Schwarze, R.

    2012-05-01

    The "intangible" or "non-market" effects are those costs of natural hazards which are not, or at least not easily measurable in monetary terms, as for example, impacts on health, cultural heritage or the environment. The intangible effects are often not included in costs assessments of natural hazards leading to an incomplete and biased cost assessment. However, several methods exist which try to estimate these effects in a non-monetary or monetary form. The objective of the present paper is to review and evaluate methods for estimating the intangible effects of natural hazards, specifically related to health and environmental effects. Existing methods are analyzed and compared using various criteria, research gaps are identified, application recommendations are provided, and valuation issues that should be addressed by the scientific community are highlighted.

  19. Robust volcano plot: identification of differential metabolites in the presence of outliers.

    PubMed

    Kumar, Nishith; Hoque, Md Aminul; Sugimoto, Masahiro

    2018-04-11

    The identification of differential metabolites in metabolomics is still a big challenge and plays a prominent role in metabolomics data analyses. Metabolomics datasets often contain outliers because of analytical, experimental, and biological ambiguity, but the currently available differential metabolite identification techniques are sensitive to outliers. We propose a kernel-weight-based outlier-robust volcano plot for identifying differential metabolites from noisy metabolomics datasets. Two numerical experiments are used to evaluate the performance of the proposed technique against nine existing techniques, including the t-test and the Kruskal-Wallis test. Artificially generated data with outliers reveal that the proposed method results in a lower misclassification error rate and a greater area under the receiver operating characteristic curve compared with existing methods. An experimentally measured breast cancer dataset to which outliers were artificially added reveals that our proposed method produces only two non-overlapping differential metabolites, whereas the other nine methods produced between seven and 57 non-overlapping differential metabolites. Our data analyses show that the performance of the proposed differential metabolite identification technique is better than that of existing methods. Thus, the proposed method can contribute to the analysis of metabolomics data with outliers. The R package and user manual of the proposed method are available at https://github.com/nishithkumarpaul/Rvolcano.
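
    For orientation, the ordinary (non-robust) volcano plot that the proposed method hardens against outliers can be sketched as follows; this baseline uses a plain t-test, whereas the paper replaces such outlier-sensitive estimates with kernel-weighted robust ones. All function names and thresholds are illustrative.

      import numpy as np
      from scipy import stats
      import matplotlib.pyplot as plt

      def volcano(case, control, fc_cut=1.0, p_cut=0.05):
          # case, control: arrays of shape (n_samples, n_metabolites), positive values.
          log2fc = np.log2(case.mean(axis=0) / control.mean(axis=0))
          pvals = stats.ttest_ind(case, control, axis=0).pvalue
          hits = (np.abs(log2fc) >= fc_cut) & (pvals <= p_cut)  # differential calls
          plt.scatter(log2fc, -np.log10(pvals), c=np.where(hits, "red", "grey"), s=8)
          plt.axhline(-np.log10(p_cut), ls="--")
          plt.axvline(fc_cut, ls="--")
          plt.axvline(-fc_cut, ls="--")
          plt.xlabel("log2 fold change")
          plt.ylabel("-log10 p-value")
          return hits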

  20. Multiphase fluid characterization system

    DOEpatents

    Sinha, Dipen N.

    2014-09-02

    A measurement system and method for permitting multiple independent measurements of several physical parameters of multiphase fluids flowing through pipes are described. Multiple acoustic transducers are placed in acoustic communication with or attached to the outside surface of a section of existing spool (metal pipe), typically less than 3 feet in length, for noninvasive measurements. Sound speed, sound attenuation, fluid density, fluid flow, container wall resonance characteristics, and Doppler measurements for gas volume fraction may be measured simultaneously by the system. Temperature measurements are made using a temperature sensor for oil-cut correction.

  1. Acoustic vector tomography and its application to magnetoacoustic tomography with magnetic induction (MAT-MI).

    PubMed

    Li, Xu; Xia, Rongmin; He, Bin

    2008-01-01

    A new tomographic algorithm for reconstructing a curl-free vector field, whose divergence serves as the acoustic source, is proposed. It is shown that under certain conditions, the scalar acoustic measurements obtained from a surface enclosing the source area can be vectorized according to the known measurement geometry and then used to reconstruct the vector field. The proposed method is validated by numerical experiments. This method can be easily applied to magnetoacoustic tomography with magnetic induction (MAT-MI). A simulation study applying this method to MAT-MI shows that, compared to existing methods, the proposed method gives an accurate estimation of the induced current distribution and a better reconstruction of the electrical conductivity within an object.

  2. Automated mask and wafer defect classification using a novel method for generalized CD variation measurements

    NASA Astrophysics Data System (ADS)

    Verechagin, V.; Kris, R.; Schwarzband, I.; Milstein, A.; Cohen, B.; Shkalim, A.; Levy, S.; Price, D.; Bal, E.

    2018-03-01

    Over the years, mask and wafer defect dispositioning has become an increasingly challenging and time-consuming task. With design rules getting smaller, OPC getting more complex, and scanner illumination taking on free-form shapes, the probability that a user can perform accurate and repeatable classification of defects detected by mask inspection tools into pass/fail bins is decreasing. The critical challenges of mask defect metrology for small nodes (<30 nm) were reviewed in [1]. While critical dimension (CD) variation measurement is still the method of choice for determining a mask defect's future impact on the wafer, the high complexity of OPCs combined with the high variability in pattern shapes poses a challenge for any automated CD-variation measurement method. In this study, a novel approach for measurement generalization is presented. CD-variation assessment performance is evaluated on multiple complex-shape patterns and benchmarked against an existing qualified measurement methodology.

  3. Assessing the limitations of the existing physician directory for measuring electronic health record (EHR) adoption rates among physicians in Connecticut, USA: cross-sectional study.

    PubMed

    Tikoo, Minakshi

    2012-01-01

    To assess the limitations of the existing physician directory in measuring electronic health record adoption rates among a cohort of Connecticut physicians. A population-based mailing assessed the number of physicians practising in Connecticut. Information was collected on practice site, practices pertaining to the storing of patient information, sources of revenue, and the preferred method for receiving the survey. The main outcome was practice status in Connecticut, measured by yes and no. Demographic information was collected on gender, year of birth, race and ethnicity. The response rate for the postcard mailing was 19% (3105/16 462). Of the 16 462 unduplicated consumers, 233 (1%) were retired and 5828 (35%) did not practise in Connecticut. Of the 3105 valid postcard responses we received, 2159 were for physicians practising in Connecticut. Nine (0.4%) of these responses did not specify a preferred method for receiving the full physician survey; 91 physicians refused to participate in the survey; 2159 surveys were sent out using each physician's requested method for receiving the survey, that is, web-based, regular mail or telephone. As of August 2012, 898 physicians had returned surveys, resulting in a response rate of 42%. The postcard response rate based on the unduplicated lists, adjusted for exclusions such as death, retirement and not practising in Connecticut, is 30%, which is low. We may be missing part of the physician population, which could greatly affect the indicators being used to measure change in electronic health record adoption rates. It is difficult to obtain an accurate count of practising physicians in Connecticut from the existing lists. States that are participating in projects funded under various Office of the National Coordinator for Health Information Technology (ONC) initiatives must focus on getting an accurate count of the physicians practising in their state, since their progress is being measured against this key number.

  4. Analytical N beam position monitor method

    NASA Astrophysics Data System (ADS)

    Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.

    2017-11-01

    Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the ATLAS and CMS experiments. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.

  5. An orientation measurement method based on Hall-effect sensors for permanent magnet spherical actuators with 3D magnet array.

    PubMed

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-10-24

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method effectively avoids the friction torque and additional inertial moment present in conventional approaches. A curved-surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. Comparison with the conventional modeling method shows that it helps to improve model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, so that the rotor orientation can be computed from the measured results and the analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation to the measured magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and that the measurement accuracy is improved by the novel 3D magnet array. The results of this study could be used for real-time motion control of PM spherical actuators.

  6. A hybrid degradation tendency measurement method for mechanical equipment based on moving window and Grey-Markov model

    NASA Astrophysics Data System (ADS)

    Jiang, Wei; Zhou, Jianzhong; Zheng, Yang; Liu, Han

    2017-11-01

    Accurate degradation tendency measurement is vital for the secure operation of mechanical equipment. However, existing techniques and methodologies for degradation measurement still face challenges, such as the lack of an appropriate degradation indicator, insufficient accuracy, and a poor ability to track data fluctuations. To solve these problems, a hybrid degradation tendency measurement method for mechanical equipment based on a moving window and the Grey-Markov model is proposed in this paper. In the proposed method, a 1D normalized degradation index based on multi-feature fusion is designed to assess the extent of degradation. Subsequently, the moving-window algorithm is integrated with the Grey-Markov model so that the model is updated dynamically. Two key parameters, namely the step size and the number of states, contribute to adaptive modeling and multi-step prediction. Finally, three types of combined prediction models are established to measure the degradation trend of equipment. The effectiveness of the proposed method is validated with a case study on the health monitoring of turbine engines. Experimental results show that the proposed method performs better than other conventional methods, in terms of both measurement accuracy and the tracking of data fluctuations.
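
    The grey-model half of the Grey-Markov combination is the well-standardized GM(1,1) model; the sketch below shows that core plus the moving-window refit, under the assumption of a plain GM(1,1) without the paper's Markov residual correction or its multi-feature degradation index. The window size is an illustrative default.

      import numpy as np

      def gm11_forecast(x0, n_ahead=1):
          # Fit GM(1,1) to a 1-D positive series x0 and forecast n_ahead steps.
          x0 = np.asarray(x0, dtype=float)
          x1 = np.cumsum(x0)                       # accumulated generating operation
          z1 = 0.5 * (x1[1:] + x1[:-1])            # mean-generated background values
          B = np.column_stack([-z1, np.ones(len(z1))])
          a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
          k = np.arange(len(x0), len(x0) + n_ahead)
          x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
          x1_prev = (x0[0] - b / a) * np.exp(-a * (k - 1)) + b / a
          return x1_hat - x1_prev                  # inverse AGO -> forecasts of x0

      def moving_window_forecast(series, window=12):
          # Refit on a sliding window so the model tracks recent fluctuations.
          return [gm11_forecast(series[i - window:i])[0]
                  for i in range(window, len(series))]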

  7. Improved Measures of Integrated Information

    PubMed Central

    Tegmark, Max

    2016-01-01

    Although there is growing interest in measuring integrated information in computational and cognitive systems, current methods for doing so in practice are computationally unfeasible. Existing and novel integration measures are investigated and classified by various desirable properties. A simple taxonomy of Φ-measures is presented where they are each characterized by their choice of factorization method (5 options), choice of probability distributions to compare (3 × 4 options) and choice of measure for comparing probability distributions (7 options). When requiring the Φ-measures to satisfy a minimum of attractive properties, these hundreds of options reduce to a mere handful, some of which turn out to be identical. Useful exact and approximate formulas are derived that can be applied to real-world data from laboratory experiments without posing unreasonable computational demands. PMID:27870846

  8. Deferred slanted-edge analysis: a unified approach to spatial frequency response measurement on distorted images and color filter array subsets.

    PubMed

    van den Bergh, F

    2018-03-01

    The slanted-edge method of spatial frequency response (SFR) measurement is usually applied to grayscale images under the assumption that any distortion of the expected straight edge is negligible. By decoupling the edge orientation and position estimation step from the edge spread function construction step, it is shown in this paper that the slanted-edge method can be extended to allow it to be applied to images suffering from significant geometric distortion, such as produced by equiangular fisheye lenses. This same decoupling also allows the slanted-edge method to be applied directly to Bayer-mosaicked images so that the SFR of the color filter array subsets can be measured directly without the unwanted influence of demosaicking artifacts. Numerical simulation results are presented to demonstrate the efficacy of the proposed deferred slanted-edge method in relation to existing methods.
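
    The decoupling described above can be made concrete with a bare-bones slanted-edge pipeline: given an externally estimated edge position and angle (the step the paper defers and generalizes), pixels are projected onto the edge normal, binned into an oversampled edge spread function (ESF), differentiated to a line spread function (LSF), and Fourier-transformed. This is a simplified sketch, assuming the edge passes through the ROI centre and that every projection bin receives samples.

      import numpy as np

      def slanted_edge_sfr(roi, edge_angle_rad, oversample=4):
          rows, cols = np.indices(roi.shape)
          # Signed distance of each pixel from the edge line (one sign convention).
          dist = (cols - roi.shape[1] / 2) * np.cos(edge_angle_rad) \
               - (rows - roi.shape[0] / 2) * np.sin(edge_angle_rad)
          bins = np.round(dist * oversample).astype(int).ravel()
          bins -= bins.min()
          counts = np.bincount(bins)
          esf = np.bincount(bins, weights=roi.ravel()) / np.maximum(counts, 1)
          lsf = np.diff(esf) * np.hanning(len(esf) - 1)   # window tames noise
          mtf = np.abs(np.fft.rfft(lsf))
          mtf /= mtf[0]                                   # normalize to DC
          freqs = np.fft.rfftfreq(len(lsf), d=1.0 / oversample)  # cycles/pixel
          return freqs, mtf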

  9. A new method for noninvasive venous blood oxygen detection.

    PubMed

    Zhang, Xu; Zhang, Meimei; Zheng, Shengkun; Wang, Liqi; Ye, Jilun

    2016-07-19

    Blood oxygen saturation of vein (SvO2) is an important clinical parameter for patient monitoring. However, the existing clinical methods are invasive, expensive, which are also painful for patients. Based on light-absorption, this study describes a new noninvasive SvO2 measurement method by using external stimulation signal to generate cyclical fluctuation signal in the vein, which overcomes the low signal-to-noise ratio problem in the measurement process. In this way, the value of SvO2 can be obtained continuously in real time. The experimental results demonstrate that the method can successfully measure venous oxygen saturation by artificial addition of stimulation. Under hypoxic conditions, the system can reflect the overall decline of venous oxygen saturation better. When the results measured by the new method are compared with those measured by the invasive method, the root mean square error of the difference is 5.31 and the correlation coefficient of the difference is 0.72. The new method can be used to measure SvO2 and evaluate body oxygen consumption, and its accuracy needs improvement. Real-time and continuous monitoring can be achieved by replacing invasive method with noninvasive method, which provides more comprehensive clinical information in a timely manner and better meet the needs of clinical treatment. However, the accuracy of the new noninvasive SvO2 measurement based on light-absorption has to be further improved.

  10. An Evaluation of Fractal Surface Measurement Methods for Characterizing Landscape Complexity from Remote-Sensing Imagery

    NASA Technical Reports Server (NTRS)

    Lam, Nina Siu-Ngan; Qiu, Hong-Lie; Quattrochi, Dale A.; Emerson, Charles W.; Arnold, James E. (Technical Monitor)

    2001-01-01

    The rapid increase in digital data volumes from new and existing sensors necessitates efficient analytical tools for extracting information. We developed an integrated software package called ICAMS (Image Characterization and Modeling System) to provide specialized spatial analytical functions for interpreting remote sensing data. This paper evaluates three fractal dimension measurement methods, isarithm, variogram, and triangular prism, along with the spatial autocorrelation measurement methods Moran's I and Geary's C, all of which have been implemented in ICAMS. A modified triangular prism method was proposed and implemented. Results from analyzing 25 simulated surfaces having known fractal dimensions show that both the isarithm and triangular prism methods can accurately measure a range of fractal surfaces. The triangular prism method is most accurate at estimating the fractal dimension of surfaces with higher spatial complexity, but it is sensitive to contrast stretching. The variogram method is a comparatively poor estimator for all of the surfaces, particularly those with higher fractal dimensions. Like the fractal techniques, the spatial autocorrelation techniques are found to be useful for measuring complex images but not images with low dimensionality. These fractal measurement methods can be applied directly to unclassified images and could serve as a tool for change detection and data mining.
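
    Of the three fractal estimators compared, the variogram method is the easiest to state compactly: for a fractional Brownian surface the variogram grows as gamma(h) ~ h^(2H), and the fractal dimension is D = 3 - H, i.e., D = 3 - slope/2 on a log-log plot. A simplified sketch follows (horizontal lags only; a full implementation would pool directions, and the lag range is an illustrative choice):

      import numpy as np

      def variogram_fractal_dim(surface, max_lag=20):
          lags, gammas = [], []
          for h in range(1, max_lag + 1):
              diffs = surface[:, h:] - surface[:, :-h]   # pairs at horizontal lag h
              lags.append(h)
              gammas.append(0.5 * np.mean(diffs**2))     # semivariance at lag h
          slope = np.polyfit(np.log(lags), np.log(gammas), 1)[0]
          return 3.0 - slope / 2.0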

  11. Structural modal parameter identification using local mean decomposition

    NASA Astrophysics Data System (ADS)

    Keyhani, Ali; Mohammadi, Saeed

    2018-02-01

    Modal parameter identification is the first step in structural health monitoring of existing structures. Many powerful methods have already been proposed for this purpose, each with its own benefits and shortcomings. In this study, a new method based on local mean decomposition is proposed for modal identification of civil structures from free or ambient vibration measurements. The ability of the proposed method was investigated through numerical studies, and the results were compared with those obtained from the Hilbert-Huang transform (HHT). As a major advantage, the proposed method can extract the natural frequencies and damping ratios of all active modes from a single measurement. The accuracy of the identified modes depends on their participation in the measured responses. Nevertheless, the identified natural frequencies have reasonable accuracy for both free and ambient vibration measurements, even in the presence of noise. The instantaneous phase angle and the natural logarithm of instantaneous amplitude curves obtained from the proposed method are more linear than those from the HHT algorithm. Also, the end effect is more limited for the proposed method.

  12. Comparing biomarkers as principal surrogate endpoints.

    PubMed

    Huang, Ying; Gilbert, Peter B

    2011-12-01

    Recently a new definition of surrogate endpoint, the "principal surrogate," was proposed based on causal associations between treatment effects on the biomarker and on the clinical endpoint. Despite its appealing interpretation, limited research has been conducted to evaluate principal surrogates, and existing methods focus on risk models that consider a single biomarker. How to compare principal surrogate value of biomarkers or general risk models that consider multiple biomarkers remains an open research question. We propose to characterize a marker or risk model's principal surrogate value based on the distribution of risk difference between interventions. In addition, we propose a novel summary measure (the standardized total gain) that can be used to compare markers and to assess the incremental value of a new marker. We develop a semiparametric estimated-likelihood method to estimate the joint surrogate value of multiple biomarkers. This method accommodates two-phase sampling of biomarkers and is more widely applicable than existing nonparametric methods by incorporating continuous baseline covariates to predict the biomarker(s), and is more robust than existing parametric methods by leaving the error distribution of markers unspecified. The methodology is illustrated using a simulated example set and a real data set in the context of HIV vaccine trials. © 2011, The International Biometric Society.

  13. 30 CFR 285.653 - What other reports or notices must I submit to MMS under my approved GAP?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ..., DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER... effective, you must include your recommendations for new mitigation measures or monitoring methods. ...

  14. A New Method for Reconstructing Sea-Level and Deep-Sea-Temperature Variability over the Past 5.3 Million Years

    NASA Astrophysics Data System (ADS)

    Rohling, E. J.

    2014-12-01

    Ice volume (and hence sea level) and deep-sea temperature are key measures of global climate change. Sea level has been documented using several independent methods over the past 0.5 million years (Myr). Older periods, however, lack such independent validation; all existing records are related to deep-sea oxygen isotope (d18O) data that are influenced by processes unrelated to sea level. For deep-sea temperature, only one continuous high-resolution (Mg/Ca-based) record exists, with related sea-level estimates, spanning the past 1.5 Myr. We have recently presented a novel sea-level reconstruction, with associated estimates of deep-sea temperature, which independently validates the previous 0-1.5 Myr reconstruction and extends it back to 5.3 Myr ago. A series of caveats applies to this new method, especially in the older part of the period of its application, as is always the case with new methods. Independent validation exercises are needed to elucidate where consistency exists and where solutions drift away from each other. A key observation from our new method is that a large temporal offset existed during the onset of the Plio-Pleistocene ice ages, between a marked cooling step at 2.73 Myr ago and the first major glaciation at 2.15 Myr ago. This observation relies on relative changes within the dataset, which are more robust than absolute values. I will discuss our method, its main caveats and avenues for improvement.

  15. Probe classification of on-off type DNA microarray images with a nonlinear matching measure

    NASA Astrophysics Data System (ADS)

    Ryu, Munho; Kim, Jong Dae; Min, Byoung Goo; Kim, Jongwon; Kim, Y. Y.

    2006-01-01

    We propose a nonlinear matching measure, called the counting measure, as a signal-detection measure; it is defined as the number of "on" pixels in the spot area. It is applied to classify probes for an on-off type DNA microarray, where each probe spot is classified as hybridized or not. The counting measure also incorporates the maximum-response search method, in which the expected signal is obtained by taking the maximum among the measured responses over various positions and sizes of the spot template. The counting measure was compared to existing signal-detection measures such as the normalized covariance and the median for 2390 patient samples tested on the human papillomavirus (HPV) DNA chip. The counting measure performed the best regardless of whether or not the maximum-response search method was used. The experimental results showed that the counting measure combined with the positional search was the most preferable.

  16. Development of a Measure of Asthma-Specific Quality of Life among Adults

    PubMed Central

    Eberhart, Nicole K.; Sherbourne, Cathy D.; Edelen, Maria Orlando; Stucky, Brian D.; Sin, Nancy L.; Lara, Marielena

    2014-01-01

    Purpose A key goal in asthma treatment is improvement in quality of life (QoL), but existing measures often confound QoL with symptoms and functional impairment. The current study addresses these limitations and the need for valid patient-reported outcome measures by using state-of-the-art methods to develop an item bank assessing QoL in adults with asthma. This article describes the process for developing an initial item pool for field testing. Methods Five focus group interviews were conducted with a total of 50 asthmatic adults. We used "pile sorting/binning" and "winnowing" methods to identify key QoL dimensions and develop a pool of items based on statements made in the focus group interviews. We then conducted a literature review and consulted with an expert panel to ensure that no key concepts were omitted. Finally, we conducted individual cognitive interviews to ensure that items were well understood and to inform final item refinement. Results 661 QoL statements were identified from focus group interview transcripts and subsequently used to generate a pool of 112 items in 16 different content areas. Conclusions Items covering a broad range of content were developed that can serve as a valid gauge of individuals' perceptions of the effects of asthma and its treatment on their lives. These items do not directly measure symptoms or functional impairment, yet they include a broader range of content than most existing measures of asthma-specific QoL. PMID:24062237

  17. An integrative framework for sensor-based measurement of teamwork in healthcare.

    PubMed

    Rosen, Michael A; Dietz, Aaron S; Yang, Ting; Priebe, Carey E; Pronovost, Peter J

    2015-01-01

    There is a strong link between teamwork and patient safety. Emerging evidence supports the efficacy of teamwork improvement interventions. However, the availability of reliable, valid, and practical measurement tools and strategies is commonly cited as a barrier to long-term sustainment and spread of these teamwork interventions. This article describes the potential value of sensor-based technology as a methodology to measure and evaluate teamwork in healthcare. The article summarizes the teamwork literature within healthcare, including team improvement interventions and measurement. Current applications of sensor-based measurement of teamwork are reviewed to assess the feasibility of employing this approach in healthcare. The article concludes with a discussion highlighting current application needs and gaps and relevant analytical techniques to overcome the challenges to implementation. Compelling studies exist documenting the feasibility of capturing a broad array of team input, process, and output variables with sensor-based methods. Implications of this research are summarized in a framework for development of multi-method team performance measurement systems. Sensor-based measurement within healthcare can unobtrusively capture information related to social networks, conversational patterns, physical activity, and an array of other meaningful information without having to directly observe or periodically survey clinicians. However, trust and privacy concerns present challenges that need to be overcome through engagement of end users in healthcare. Initial evidence exists to support the feasibility of sensor-based measurement to drive feedback and learning across individual, team, unit, and organizational levels. Future research is needed to refine methods, technologies, theory, and analytical strategies. © The Author 2014. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  18. Limits of optical transmission measurements with application to particle sizing techniques.

    PubMed

    Swanson, N L; Billard, B D; Gennaro, T L

    1999-09-20

    Considerable confusion exists regarding the applicability limits of the Bouguer-Lambert-Beer law of optical transmission. We review the derivation of the law and discuss its application to the optical thickness of the light-scattering medium. We demonstrate the range of applicability by presenting a method for determining particle size by measuring optical transmission at two wavelengths.

  19. The Abbreviation of Personality, or how to Measure 200 Personality Scales with 200 Items

    PubMed Central

    Yarkoni, Tal

    2010-01-01

    Personality researchers have recently advocated the use of very short personality inventories in order to minimize administration time. However, few such inventories are currently available. Here I introduce an automated method that can be used to abbreviate virtually any personality inventory with minimal effort. After validating the method against existing measures in Studies 1 and 2, a new 181-item inventory is generated in Study 3 that accurately recaptures scores on 8 different broadband inventories comprising 203 distinct scales. Collectively, the results validate a powerful new way to improve the efficiency of personality measurement in research settings. PMID:20419061

  20. Learning context-sensitive shape similarity by graph transduction.

    PubMed

    Bai, Xiang; Yang, Xingwei; Latecki, Longin Jan; Liu, Wenyu; Tu, Zhuowen

    2010-05-01

    Shape similarity and shape retrieval are very important topics in computer vision. The recent progress in this domain has been mostly driven by designing smart shape descriptors that provide better similarity measures between pairs of shapes. In this paper, we provide a new perspective on this problem by considering the existing shapes as a group, and study their similarity measures to the query shape in a graph structure. Our method is general and can be built on top of any existing shape similarity measure. For a given similarity measure, a new similarity is learned through graph transduction. The new similarity is learned iteratively so that the neighbors of a given shape influence its final similarity to the query. The basic idea here is related to PageRank ranking, which forms a foundation of Google Web search. The presented experimental results demonstrate that the proposed approach yields significant improvements over the state-of-the-art shape matching algorithms. We obtained a retrieval rate of 91.61 percent on the MPEG-7 data set, which is the highest ever reported in the literature. Moreover, the similarity learned by the proposed method also achieves promising improvements on both shape classification and shape clustering.

  1. Link-Based Similarity Measures Using Reachability Vectors

    PubMed Central

    Yoon, Seok-Ho; Kim, Ji-Soo; Ryu, Minsoo; Choi, Ho-Jin

    2014-01-01

    We present a novel approach for computing link-based similarities among objects accurately by utilizing the link information pertaining to the objects involved. We discuss the problems with previous link-based similarity measures and propose a novel approach for computing link-based similarities that does not suffer from these problems. In the proposed approach, each target object is represented by a vector. Each element of the vector corresponds to one of the objects in the given data, and the value of each element denotes the weight for the corresponding object. For this weight value, we propose to utilize the probability of reaching the corresponding object from the target object, computed using the “Random Walk with Restart” strategy. Then, we define the similarity between two objects as the cosine similarity of the two vectors. In this paper, we provide examples to show that our approach does not suffer from the aforementioned problems. We also evaluate the performance of the proposed methods in comparison with existing link-based measures, qualitatively and quantitatively, with respect to two kinds of data sets, scientific papers and Web documents. Our experimental results indicate that the proposed methods significantly outperform the existing measures. PMID:24701188
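
    The two ingredients named above are simple enough to sketch. The following minimal Python illustration (not the authors' code) builds a reachability vector per object with a Random Walk with Restart and compares two vectors by cosine similarity; the toy graph, restart probability, and convergence tolerance are all assumptions.

    ```python
    import numpy as np

    def rwr_vector(A, source, restart=0.15, tol=1e-10):
        """Reachability vector for `source` via Random Walk with Restart."""
        P = A / A.sum(axis=0, keepdims=True)        # column-stochastic transitions
        e = np.zeros(A.shape[0]); e[source] = 1.0   # restart distribution
        r = e.copy()
        while True:
            r_next = (1 - restart) * P @ r + restart * e
            if np.abs(r_next - r).sum() < tol:
                return r_next
            r = r_next

    def cosine(u, v):
        return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

    # Toy 4-object link graph; the adjacency values are hypothetical.
    A = np.array([[0, 1, 1, 0],
                  [1, 0, 1, 0],
                  [1, 1, 0, 1],
                  [0, 0, 1, 0]], dtype=float)
    print(f"similarity(0, 1) = {cosine(rwr_vector(A, 0), rwr_vector(A, 1)):.3f}")
    ```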

  2. Initial constructs for patient-centered outcome measures to evaluate brain-computer interfaces

    PubMed Central

    Andresen, Elena M.; Fried-Oken, Melanie; Peters, Betts; Patrick, Donald L.

    2016-01-01

    Purpose The authors describe preliminary work toward the creation of patient-centered outcome (PCO) measures to evaluate brain-computer interface (BCI) as an assistive technology for individuals with severe speech and physical impairments (SSPI). Method In Phase 1, 591 items from 15 existing measures were mapped to the International Classification of Functioning, Disability and Health (ICF). In Phase 2, qualitative interviews were conducted with eight people with SSPI and seven caregivers. Resulting text data were coded in an iterative analysis. Results Most items (79%) mapped to the ICF environmental domain; over half (53%) mapped to more than one domain. The ICF framework was well suited for mapping items related to body functions and structures, but less so for items in other areas, including personal factors. Two constructs emerged from qualitative data: Quality of Life (QOL) and Assistive Technology. Component domains and themes were identified for each. Conclusions Preliminary constructs, domains, and themes were generated for future PCO measures relevant to BCI. Existing instruments are sufficient for initial items but do not adequately match the values of people with SSPI and their caregivers. Field methods for interviewing people with SSPI were successful, and support the inclusion of these individuals in PCO research. PMID:25806719

  3. A two-step method for rapid characterization of electroosmotic flows in capillary electrophoresis.

    PubMed

    Zhang, Wenjing; He, Muyi; Yuan, Tao; Xu, Wei

    2017-12-01

    The measurement of electroosmotic flow (EOF) is important in a capillary electrophoresis (CE) experiment in terms of performance optimization and stability improvement. Although several methods exist, there are demanding needs to accurately characterize ultra-low electroosmotic flow rates (EOF rates), such as in the coated capillaries used in protein separations. In this work, a new method, called the two-step method, was developed to accurately and rapidly measure EOF rates in a capillary, especially the ultra-low EOF rates in coated capillaries. In this two-step method, the EOF rates were calculated by measuring the migration time difference of a neutral marker in two consecutive experiments, in which a pressure-driven flow was introduced to accelerate the migration and the DC voltage was reversed to switch the EOF direction. Uncoated capillaries were first characterized by both this two-step method and a conventional method to confirm the validity of the new method. The new method was then applied in the study of coated capillaries. Results show that the new method is not only faster but also more accurate. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
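
    The record gives the idea but not the working formula. A hedged sketch follows, assuming the pressure-driven component is identical in the two runs and therefore cancels when the voltage is reversed (run 1: v1 = v_p + v_eof; run 2: v2 = v_p - v_eof); all numbers are hypothetical.

    ```python
    # Hedged sketch of the two-step idea, assuming the pressure-driven term
    # cancels between the two runs; the values below are hypothetical.
    L_eff = 0.40            # effective capillary length to the detector, m
    t1, t2 = 310.0, 355.0   # neutral-marker migration times in runs 1 and 2, s

    v1, v2 = L_eff / t1, L_eff / t2   # apparent velocities: v_p + v_eof, v_p - v_eof
    v_eof = (v1 - v2) / 2.0
    print(f"EOF velocity: {v_eof * 1e6:.2f} um/s")
    ```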

  4. 3D Measurement of Anatomical Cross-sections of Foot while Walking

    NASA Astrophysics Data System (ADS)

    Kimura, Makoto; Mochimaru, Masaaki; Kanade, Takeo

    Recently, techniques for measuring and modeling the human body have been attracting attention, because human models are useful for ergonomic design in manufacturing. We aim to measure the accurate shape of the human foot, which will be useful for the design of shoes. For this purpose, shape measurement of the foot in motion is clearly important, because the foot is deformed inside the shoe while walking or running. In this paper, we propose a method to measure anatomical cross-sections of the foot while walking. The dynamic shape of anatomical cross-sections has not previously been measured, even though such cross-sections are basic and widely used in the field of biomechanics. Our proposed method is based on multi-view stereo. The target cross-sections are painted in individual colors (red, green, yellow, and blue), and the method exploits the characteristics of the target shape in the captured camera images. Several nonlinear conditions are introduced to find consistent correspondences across all images. Our target accuracy is an error of less than 1 mm, comparable to existing 3D scanners for static foot measurement. In our experiments, the proposed method achieved this accuracy.

  5. Toward an applied technology for quality measurement in health care.

    PubMed

    Berwick, D M

    1988-01-01

    Cost containment, financial incentives to conserve resources, the growth of for-profit hospitals, an aggressive malpractice environment, and demands from purchasers are among the forces today increasing the need for improved methods that measure quality in health care. At the same time, increasingly sophisticated databases and the existence of managed care systems yield new opportunities to observe and correct quality problems. Research on targets of measurement (structure, process, and outcome) and methods of measurement (implicit, explicit, and sentinel methods) has not yet produced managerially useful applied technology for quality measurement in real-world settings. Such an applied technology would have to be cheaper, faster, more flexible, better reported, and more multidimensional than the majority of current research on quality assurance. In developing a new applied technology for the measurement of health care quality, quantitative disciplines have much to offer, such as decision support systems, criteria based on rigorous decision analyses, utility theory, tools for functional status measurement, and advances in operations research.

  6. Absorbed dose measurement in low temperature samples:. comparative methods using simulated material

    NASA Astrophysics Data System (ADS)

    Garcia, Ruth; Harris, Anthony; Winters, Martell; Howard, Betty; Mellor, Paul; Patil, Deepak; Meiner, Jason

    2004-09-01

    There is a growing need to reliably measure absorbed dose in low-temperature samples, especially in the pharmaceutical and tissue-banking industries. All dosimetry systems commonly used in the irradiation industry are temperature sensitive. Irradiation of low-temperature samples, such as those packaged with dry ice, must therefore take these dosimeter temperature effects into consideration. This paper suggests a method to accurately deliver an absorbed radiation dose using dosimetry techniques designed to abrogate the skewing effects of low-temperature environments on existing dosimetry systems.

  7. Method for measuring surface temperature

    DOEpatents

    Baker, Gary A [Los Alamos, NM; Baker, Sheila N [Los Alamos, NM; McCleskey, T Mark [Los Alamos, NM

    2009-07-28

    The present invention relates to a method for measuring a surface temperature using a fluorescent temperature sensor, or optical thermometer. The sensor includes a solution of 1,3-bis(1-pyrenyl)propane within a 1-butyl-1-methylpyrrolidinium bis(trifluoromethylsulfonyl)imide ionic liquid solvent. The 1,3-bis(1-pyrenyl)propane remains unassociated when in the ground state while in solution. When subjected to UV light, an excited state is produced that exists in equilibrium with an excimer. The position of the equilibrium between the two excited states is temperature dependent.

  8. New advances in the partial-reflection-drifts experiment using microprocessors

    NASA Technical Reports Server (NTRS)

    Ruggerio, R. L.; Bowhill, S. A.

    1982-01-01

    Improvements to the partial-reflection drifts experiment have been completed. These improvements include real-time processing and simultaneous measurements of the D region with coherent scatter. Preliminary results indicate a positive correlation between drift velocities calculated by both methods during a two-day interval. The possibility now exists for extended comparative observations between partial reflection and coherent scatter. In addition, preliminary measurements could be performed between partial reflection and meteor radar to complete a comparison of methods used to determine velocities in the D region.

  9. High-speed event detector for embedded nanopore bio-systems.

    PubMed

    Huang, Yiyun; Magierowski, Sebastian; Ghafar-Zadeh, Ebrahim; Wang, Chengjie

    2015-08-01

    Biological measurements of microscopic phenomena often deal with discrete-event signals. The ability to automatically carry out such measurements at high-speed in a miniature embedded system is desirable but compromised by high-frequency noise along with practical constraints on filter quality and sampler resolution. This paper presents a real-time event-detection method in the context of nanopore sensing that helps to mitigate these drawbacks and allows accurate signal processing in an embedded system. Simulations show at least a 10× improvement over existing on-line detection methods.
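
    The record does not detail the detector itself, but the kind of discrete-event extraction it improves on is easy to illustrate. Below is a hedged sketch of a common two-threshold (hysteresis) event detector on a synthetic nanopore-like trace; the baseline, thresholds, and signal are hypothetical, and this is not the paper's algorithm.

    ```python
    import numpy as np

    def detect_events(signal, baseline, enter_thr, exit_thr):
        """Two-threshold detector for downward blockade events.

        An event starts when the trace drops more than `enter_thr` below the
        baseline and ends when it recovers to within `exit_thr` of it.
        """
        events, start = [], None
        for i, x in enumerate(signal):
            if start is None and x < baseline - enter_thr:
                start = i
            elif start is not None and x > baseline - exit_thr:
                events.append((start, i))
                start = None
        return events

    rng = np.random.default_rng(0)
    trace = 100.0 + rng.normal(0.0, 1.0, 2000)   # synthetic open-pore current, pA
    trace[500:550] -= 20.0                       # one synthetic blockade event
    print(detect_events(trace, baseline=100.0, enter_thr=10.0, exit_thr=5.0))
    ```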

  10. Chapter 16: Retrocommissioning Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Tiessen, Alex

    Retrocommissioning (RCx) is a systematic process for optimizing energy performance in existing buildings. It specifically focuses on improving the control of energy-using equipment (e.g., heating, ventilation, and air conditioning [HVAC] equipment and lighting) and typically does not involve equipment replacement. Field results have shown proper RCx can achieve energy savings ranging from 5 percent to 20 percent, with a typical payback of two years or less (Thorne 2003). The method presented in this protocol provides direction regarding: (1) how to account for each measure's specific characteristics and (2) how to choose the most appropriate savings verification approach.

  11. Highly sensitive distributed birefringence measurements based on a two-pulse interrogation of a dynamic Brillouin grating

    NASA Astrophysics Data System (ADS)

    Soto, Marcelo A.; Denisov, Andrey; Angulo-Vinuesa, Xabier; Martin-Lopez, Sonia; Thévenaz, Luc; Gonzalez-Herraez, Miguel

    2017-04-01

    A method for distributed birefringence measurements is proposed based on the interference pattern generated by the interrogation of a dynamic Brillouin grating (DBG) using two short consecutive optical pulses. Compared to existing DBG interrogation techniques, the method here offers an improved sensitivity to birefringence changes thanks to the interferometric effect generated by the reflections of the two pulses. Experimental results demonstrate the possibility to obtain the longitudinal birefringence profile of a 20 m-long Panda fibre with an accuracy of 10^-8 using 16 averages and 30 cm spatial resolution. The method enables sub-metric and highly-accurate distributed temperature and strain sensing.

  12. Construction of mathematical model for measuring material concentration by colorimetric method

    NASA Astrophysics Data System (ADS)

    Liu, Bing; Gao, Lingceng; Yu, Kairong; Tan, Xianghua

    2018-06-01

    This paper uses multiple linear regression to analyze the data of Problem C of the 2017 mathematical modeling contest. First, we established regression models for the concentrations of five substances, but only the regression model for the concentration of urea in milk passed the significance test. The regression model established from the second data set passed the significance test but exhibited serious multicollinearity. We improved the model by principal component analysis. The improved model is used to control the system so that the concentration of a material can be measured by the direct colorimetric method.
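
    Principal component regression is the standard remedy the abstract describes: project the collinear predictors onto a few principal components, then regress on those. A minimal sketch with hypothetical colorimetric data (this is illustrative, not the paper's model):

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(1)
    # Hypothetical colorimetric readings: two nearly identical (collinear)
    # channels plus one independent channel.
    x1 = rng.normal(size=100)
    X = np.column_stack([x1, x1 + 0.01 * rng.normal(size=100),
                         rng.normal(size=100)])
    y = 2.0 * x1 + 0.5 * X[:, 2] + 0.1 * rng.normal(size=100)  # "concentration"

    # Principal component regression: PCA removes the collinearity before OLS.
    pcr = make_pipeline(PCA(n_components=2), LinearRegression()).fit(X, y)
    print(f"R^2 on the collinear data: {pcr.score(X, y):.3f}")
    ```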

  13. The Basic Principles and Methods of the System Approach to Compression of Telemetry Data

    NASA Astrophysics Data System (ADS)

    Levenets, A. V.

    2018-01-01

    The task of compressing measurement data remains urgent for information-measurement systems. This paper offers the basic principles necessary for designing highly effective systems for compressing telemetric information. The basis of the offered principles is the representation of a telemetric frame as a whole information space in which existing correlations can be found. The methods of data transformation and the compression algorithms realizing the offered principles are described. The compression ratio for the offered compression algorithm is about 1.8 times higher than for a classic algorithm. The results of this research into the methods and algorithms show their good prospects.

  14. Graphical methods for the sensitivity analysis in discriminant analysis

    DOE PAGES

    Kim, Youngil; Anderson-Cook, Christine M.; Dae-Heung, Jang

    2015-09-30

    Similar to regression, many measures to detect influential data points in discriminant analysis have been developed. Many follow similar principles as the diagnostic measures used in linear regression in the context of discriminant analysis. Here we focus on the impact on the predicted classification posterior probability when a data point is omitted. The new method is intuitive and easily interpretable compared to existing methods. We also propose a graphical display to show the individual movement of the posterior probability of other data points when a specific data point is omitted. This enables the summaries to capture the overall pattern of the change.

  15. Determination of glycerol in oils and fats using liquid chromatography chloride attachment electrospray ionization mass spectrometry.

    PubMed

    Jin, Chunfen; Viidanoja, Jyrki

    2017-01-15

    An existing liquid chromatography-mass spectrometry method for the analysis of short-chain carboxylic acids was expanded and validated to also cover the measurement of glycerol from oils and fats. The method employs chloride anion attachment and two ions, [glycerol+35Cl]- and [glycerol+37Cl]-, as alternative quantifiers for improved selectivity of the glycerol measurement. The averaged within-run precision, between-run precision, and accuracy ranged between 0.3-7%, 0.4-6%, and 94-99%, respectively, depending on the analyte ion and sample matrix. Selected renewable diesel feedstocks were analyzed with the method. Copyright © 2016 Elsevier B.V. All rights reserved.

  16. Methods of 14CO2, 13CO2 and 12CO2 detection in gaseous media in real time

    NASA Astrophysics Data System (ADS)

    Kireev, S. V.; Kondrashov, A. A.; Shnyrev, S. L.; Simanovsky, I. G.

    2017-10-01

    A comparative analytical review of the existing methods and techniques for measuring 13CO2 and 14CO2 mixed with 12CO2 in gases is provided. It shows that one of the most promising approaches is infrared laser spectroscopy using a frequency-tunable diode laser operating near the wavelengths of 4.3 or 2 µm. Measuring near the wavelength of 4.3 µm provides the most accurate results for 13CO2 and 14CO2, but requires more expensive equipment and is more complex to operate.

  17. Measures of Human Mobility Using Mobile Phone Records Enhanced with GIS Data.

    PubMed

    Williams, Nathalie E; Thomas, Timothy A; Dunbar, Matthew; Eagle, Nathan; Dobra, Adrian

    2015-01-01

    In the past decade, large scale mobile phone data have become available for the study of human movement patterns. These data hold an immense promise for understanding human behavior on a vast scale, and with a precision and accuracy never before possible with censuses, surveys or other existing data collection techniques. There is already a significant body of literature that has made key inroads into understanding human mobility using this exciting new data source, and several different measures of mobility have been used. However, existing mobile phone based mobility measures are inconsistent, inaccurate, and confounded with social characteristics of local context. New measures are best developed now, as they will influence future studies of mobility using mobile phone data. In this article, we do exactly this. We discuss problems with existing mobile phone based measures of mobility and describe new methods for measuring mobility that address these concerns. Our measures of mobility, which incorporate both mobile phone records and detailed GIS data, are designed to address the spatial nature of human mobility, to remain independent of social characteristics of context, and to be comparable across geographic regions and time. We also contribute a discussion of the variety of uses for these new measures in developing a better understanding of how human mobility influences micro-level human behaviors and well-being, and macro-level social organization and change.
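
    As one concrete illustration of a phone-record mobility measure (a standard one from this literature, not necessarily the authors' new measures), the radius of gyration summarizes how far a user's visited locations spread around their center of mass. The tower coordinates below are hypothetical.

    ```python
    import numpy as np

    def radius_of_gyration(lat, lon):
        """Radius of gyration (km) of a user's visited locations.

        Uses an equirectangular approximation, adequate for small regions.
        """
        lat, lon = np.radians(lat), np.radians(lon)
        R = 6371.0                              # mean Earth radius, km
        clat, clon = lat.mean(), lon.mean()     # center of mass of the visits
        dx = R * np.cos(clat) * (lon - clon)
        dy = R * (lat - clat)
        return np.sqrt(np.mean(dx**2 + dy**2))

    lat = np.array([47.61, 47.62, 47.66, 47.60])    # hypothetical cell towers
    lon = np.array([-122.33, -122.35, -122.30, -122.32])
    print(f"radius of gyration: {radius_of_gyration(lat, lon):.2f} km")
    ```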

  18. A Weighted Multipath Measurement Based on Gene Ontology for Estimating Gene Products Similarity

    PubMed Central

    Liu, Lizhen; Dai, Xuemin; Song, Wei; Lu, Jingli

    2014-01-01

    Many different methods have been proposed for calculating the semantic similarity of term pairs based on gene ontology (GO). Most existing methods are based on information content (IC), and the methods based on IC are used more commonly than those based on the structure of GO. However, most IC-based methods not only fail to handle identical annotations but also show a strong bias toward well-annotated proteins. We propose a new method called weighted multipath measurement (WMM) for estimating the semantic similarity of gene products based on the structure of the GO. We not only considered the contribution of every path between two GO terms but also took the depth of the lowest common ancestors into account. We assigned different weights for different kinds of edges in the GO graph. The similarity values calculated by WMM can be reused because they depend only on the characteristics of the GO terms. Experimental results showed that the similarity values obtained by WMM are more accurate. We compared the performance of WMM with that of other methods using GO data and gene annotation datasets for yeast and humans downloaded from the GO database. We found that WMM is more suited for prediction of gene function than most existing IC-based methods and that it can distinguish proteins with identical annotations (two proteins are annotated with the same terms) from each other. PMID:25229994

  19. The Application of Sensors on Guardrails for the Purpose of Real Time Impact Detection

    DTIC Science & Technology

    2012-03-01

    collection methods; however, there are major differences in the measures of performance for policy goals and objectives (U.S. DOT, 2002). The goal here is... seriousness of this issue has motivated the US Department of Transportation and Transportation Research Board to develop and deploy new methods and... methods to integrate new sensing capabilities into existing Intelligent Transportation Systems in a time-efficient and cost-effective manner. In

  20. Predicting chaos in memristive oscillator via harmonic balance method.

    PubMed

    Wang, Xin; Li, Chuandong; Huang, Tingwen; Duan, Shukai

    2012-12-01

    This paper studies the possible chaotic behaviors in a memristive oscillator with cubic nonlinearities via the harmonic balance method, also known as the describing function method. This method was originally proposed to detect chaos in the classical Chua's circuit. We first transform the considered memristive oscillator system into a Lur'e model and present the prediction of the existence of chaotic behaviors. To ensure the prediction result is correct, the distortion index is also measured. Numerical simulations are presented to show the effectiveness of the theoretical results.

  1. Characterization of Adipose Tissue Product Quality Using Measurements of Oxygen Consumption Rate.

    PubMed

    Suszynski, Thomas M; Sieber, David A; Mueller, Kathryn; Van Beek, Allen L; Cunningham, Bruce L; Kenkel, Jeffrey M

    2018-03-14

    Fat grafting is a common procedure in plastic surgery but is associated with unpredictable graft retention. Adipose tissue (AT) "product" quality is affected by the methods used for harvest, processing, and transfer, which vary widely amongst surgeons. Currently, there is no method available to accurately assess the quality of AT. In this study, we present a novel method for the assessment of AT product quality through direct measurements of oxygen consumption rate (OCR). OCR has exhibited potential in predicting outcomes following pancreatic islet transplant. Our study aim was to repurpose existing technology for use with AT preparations and to confirm that these measurements are feasible. OCR was successfully measured for en bloc and post-processed AT using a stirred microchamber system. OCR was then normalized to DNA content (OCR/DNA), which represents the AT product quality. Mean (±SE) OCR/DNA values for fresh en bloc and post-processed AT were 149.8 (±9.1) and 61.1 (±6.1) nmol/min/mg DNA, respectively. These preliminary data suggest that: (1) OCR and OCR/DNA measurements of AT harvested using a conventional protocol are feasible; and (2) standard AT processing results in a decrease in overall AT product quality. OCR measurements of AT using existing technology are feasible and enable accurate, real-time, quantitative assessment of the quality of the AT product prior to transfer. The availability and further validation of this type of assay could enable optimization of fat grafting protocols by providing a tool for the more detailed study of procedural variables that affect AT product quality.

  2. A graph-based semantic similarity measure for the gene ontology.

    PubMed

    Alvarez, Marco A; Yan, Changhui

    2011-12-01

    Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases like Gene Ontology Annotation (GOA) that annotate gene products using the GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA) that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure for semantic similarity independent of external databases of functional-annotation observations.
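
    The two structural ingredients SSA uses (shortest path between terms and depth of their common ancestor) can be sketched on a toy DAG. The combination formula below is illustrative only, since the record does not give SSA's exact weighting, and the graph is hypothetical.

    ```python
    import networkx as nx

    # Toy GO-like DAG; edges point from parent (more general) to child terms.
    G = nx.DiGraph([("root", "A"), ("root", "B"),
                    ("A", "C"), ("A", "D"), ("B", "D")])

    def similarity(G, t1, t2, root="root"):
        """Hedged sketch: a deeper common ancestor means a more specific
        shared meaning; a longer path between the terms means less similarity.
        """
        dist = nx.shortest_path_length(G.to_undirected(), t1, t2)
        lca = nx.lowest_common_ancestor(G, t1, t2)
        depth = nx.shortest_path_length(G, root, lca)
        max_depth = max(nx.shortest_path_length(G, root).values())
        return (depth / max_depth) / (1.0 + dist)

    print(f"sim(C, D) = {similarity(G, 'C', 'D'):.3f}")
    ```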

  3. Efficient Feature Selection and Classification of Protein Sequence Data in Bioinformatics

    PubMed Central

    Faye, Ibrahima; Samir, Brahim Belhaouari; Md Said, Abas

    2014-01-01

    Bioinformatics has been an emerging area of research for the last three decades. Its ultimate aims are to store and manage biological data and to develop and analyze computational tools to enhance their understanding. The size of the data accumulated under various sequencing projects is increasing exponentially, which presents difficulties for experimental methods. To reduce the gap between newly sequenced proteins and proteins with known functions, many computational techniques involving classification and clustering algorithms have been proposed. The classification of protein sequences into existing superfamilies is helpful in predicting the structure and function of the large number of newly discovered proteins. Existing classification results are unsatisfactory due to the huge number of features obtained through various feature encoding methods. In this work, a statistical metric-based feature selection technique is proposed to reduce the size of the extracted feature vector. The proposed method of protein classification shows significant improvement in terms of performance metrics: accuracy, sensitivity, specificity, recall, F-measure, and so forth. PMID:25045727

  4. An optimization approach for observation association with systemic uncertainty applied to electro-optical systems

    NASA Astrophysics Data System (ADS)

    Worthy, Johnny L.; Holzinger, Marcus J.; Scheeres, Daniel J.

    2018-06-01

    The observation to observation measurement association problem for dynamical systems can be addressed by determining if the uncertain admissible regions produced from each observation have one or more points of intersection in state space. An observation association method is developed which uses an optimization based approach to identify local Mahalanobis distance minima in state space between two uncertain admissible regions. A binary hypothesis test with a selected false alarm rate is used to assess the probability that an intersection exists at the point(s) of minimum distance. The systemic uncertainties, such as measurement uncertainties, timing errors, and other parameter errors, define a distribution about a state estimate located at the local Mahalanobis distance minima. If local minima do not exist, then the observations are not associated. The proposed method utilizes an optimization approach defined on a reduced dimension state space to reduce the computational load of the algorithm. The efficacy and efficiency of the proposed method is demonstrated on observation data collected from the Georgia Tech Space Object Research Telescope.

  5. Why Measure Outcomes?

    PubMed

    Kuhn, John E

    2016-01-01

    The concept of measuring the outcomes of treatment in health care was promoted by Ernest Amory Codman in the early 1900s, but, until recently, his ideas were generally ignored. The forces that have advanced outcome measurement to the forefront of health care include the shift in payers for health care from the patient to large insurance companies or government agencies, the movement toward assessing the care of populations not individuals, and the effort to find value (or cost-effective treatments) amid rising healthcare costs. No ideal method exists to measure outcomes, and the information gathered depends on the reason the outcome information is required. Outcome measures used in research are best able to answer research questions. The methods for assessing physician and hospital performance include process measures, patient-experience measures, structure measures, and measures used to assess the outcomes of treatment. The methods used to assess performance should be validated, be reliable, and reflect a patient's perception of the treatment results. The healthcare industry must measure outcomes to identify which treatments are most effective and provide the most benefit to patients.

  6. Flip-angle profile of slice-selective excitation and the measurement of the MR longitudinal relaxation time with steady-state magnetization

    NASA Astrophysics Data System (ADS)

    Hsu, Jung-Jiin

    2015-08-01

    In MRI, the flip angle (FA) of slice-selective excitation is not uniform across the slice-thickness dimension. This work investigates the effect of the non-uniform FA profile on the accuracy of a commonly used T1 measurement method, in which T1, the longitudinal relaxation time, is determined from the steady-state signals of an equally spaced RF pulse train. By using numerical solutions of the Bloch equation, it is shown that, because of the non-uniform FA profile, the outcome of the T1 measurement depends significantly on the T1 of the specimen and on the FA and the inter-pulse spacing τ of the pulse train. A new method to restore the accuracy of the T1 measurement is described. Different from existing approaches, the new method also removes the FA profile effect from the measurement of the FA, which is normally a part of the T1 measurement. In addition, the new method does not involve theoretical modeling, approximation, or modification to the underlying principle of the T1 measurement. An imaging experiment is performed, which shows that the new method can remove the FA-, τ-, and T1-dependence and produce T1 measurements in excellent agreement with those obtained from a gold-standard method (the inversion-recovery method).
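
    The bias mechanism can be reproduced with the standard spoiled steady-state signal equation, S ∝ sin(α)(1 − E1)/(1 − E1·cos(α)) with E1 = exp(−τ/T1): averaging that signal over a tapered flip-angle profile gives a different value than assuming the nominal flip angle everywhere. A sketch with hypothetical sequence parameters and an assumed slice-profile shape:

    ```python
    import numpy as np

    def ss_signal(alpha, T1, tau):
        """Spoiled steady-state signal for flip angle alpha (rad), spacing tau."""
        E1 = np.exp(-tau / T1)
        return np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

    T1, tau = 1000.0, 10.0              # ms; hypothetical tissue and sequence
    alpha_nom = np.radians(30.0)        # nominal flip angle

    # Uniform profile vs. a smooth slice profile tapering at the edges.
    z = np.linspace(-1.0, 1.0, 201)
    profile = np.exp(-(z / 0.5) ** 4)   # assumed slice-excitation shape
    S_uniform = ss_signal(alpha_nom, T1, tau)
    S_profile = np.mean(ss_signal(alpha_nom * profile, T1, tau))
    print(f"uniform-FA signal {S_uniform:.4f} vs slice-averaged {S_profile:.4f}")
    ```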

  7. Cement bond evaluation method in horizontal wells using segmented bond tool

    NASA Astrophysics Data System (ADS)

    Song, Ruolong; He, Li

    2018-06-01

    Most of the existing cement evaluation technologies suffer from tool eccentralization due to gravity in highly deviated and horizontal wells. This paper proposes a correction method to lessen the effects of tool eccentralization on the evaluation of cement bond using a segmented bond tool, which has an omnidirectional sonic transmitter and eight segmented receivers evenly arranged around the tool 2 ft from the transmitter. Using a 3-D finite-difference parallel numerical simulation method, we investigate the logging responses of the centred and eccentred segmented bond tool in a variety of bond conditions. From the numerical results, we find that the tool eccentricity and channel azimuth can be estimated from the measured sector amplitude. The average of the sector amplitude when the tool is eccentred can be corrected to the value when the tool is centred. The corrected amplitude is then used to calculate the channel size. The proposed method is applied to both synthetic and field data. For synthetic data, it turns out that this method can estimate the tool eccentricity with small error, and the bond map is improved after correction. For field data, the tool eccentricity has a good agreement with the measured well deviation angle. Though this method still suffers from low accuracy in calculating the channel azimuth, the credibility of the corrected bond map is improved, especially in horizontal wells. It gives us a choice for evaluating the bond condition in horizontal wells using an existing logging tool. The numerical results in this paper can aid the understanding of segmented-tool measurements in both vertical and horizontal wells.

  8. A method and instruments to identify the torque, the power and the efficiency of an internal combustion engine of a wheeled vehicle

    NASA Astrophysics Data System (ADS)

    Egorov, A. V.; Kozlov, K. E.; Belogusev, V. N.

    2018-01-01

    In this paper, we propose a new method and instruments to identify the torque, power, and efficiency of internal combustion engines in transient conditions. In contrast to the commonly used non-demounting methods based on inertial and strain-gauge dynamometers, this method allows the main performance parameters of internal combustion engines to be controlled in transient conditions without the inaccuracy connected with torque loss during transfer to the driving wheels, on which the torque is measured with existing methods. In addition, the proposed method is easy to implement and does not use strain-measurement instruments, which cannot identify variable values of the measured parameters at a high measurement rate and therefore prevent the actual parameters from being taken into account when engineering wheeled vehicles. The use of this method can thus greatly improve measurement accuracy and reduce the cost and labor of testing internal combustion engines. The results of experiments showed the applicability of the proposed method for identifying the performance parameters of internal combustion engines. The most suitable transmission ratio for use with the proposed method was also determined.

  9. Adaptive strategy for joint measurements

    NASA Astrophysics Data System (ADS)

    Uola, Roope; Luoma, Kimmo; Moroder, Tobias; Heinosaari, Teiko

    2016-08-01

    We develop a technique to find simultaneous measurements for noisy quantum observables in finite-dimensional Hilbert spaces. We use the method to derive lower bounds on the noise needed to make incompatible measurements jointly measurable. Using our strategy together with recent developments in the field of one-sided quantum information processing, we show that the attained lower bounds are tight for various symmetric sets of quantum measurements. We use this characterisation to prove the existence of so-called 4-Specker sets, i.e. sets of four incompatible observables with compatible subsets, in the qubit case.

  10. Investigation of threaded fastener structural integrity

    NASA Technical Reports Server (NTRS)

    1977-01-01

    Technical nondestructive evaluation approaches to the determination of fastener integrity were assessed. Existing instruments and methods used to measure stress or strain were examined, with particular interest in fastener shank stress. Industry procedures being followed were evaluated to establish fastener integrity criteria.

  11. ASSAYING PARTICLE-BOUND POLYCYCLIC AROMATIC HYDROCARBONS (PAH) FROM ARCHIVED PM2.5 FILTERS

    EPA Science Inventory

    Airborne particulate matter contains numerous organic species, including several polycyclic aromatic hydrocarbons (PAHs) that are known or suspected carcinogens. Existing methods for measuring airborne PAHs are complex and costly, primarily because they are designed to collect...

  12. Review of LMIs, Interior Point Methods, Complexity Theory, and Robustness Analysis

    NASA Technical Reports Server (NTRS)

    Mesbahi, M.

    1996-01-01

    From end of intro: ...We would like to show that for certain problems in systems and control theory, there exist algorithms for which the corresponding ξ can be viewed as a certain measure of robustness, e.g., stability margin.

  13. Measurement of discharge using tracers

    USGS Publications Warehouse

    Kilpatrick, F.A.; Cobb, Ernest D.

    1985-01-01

    The development of fluorescent dyes and fluorometers that can measure these dyes at very low concentrations has made dye-dilution methods practical for measuring discharge. These methods are particularly useful for determining discharge under certain flow conditions that are unfavorable for current-meter measurements. These include small streams, canals, and pipes where:
    1. Turbulence is excessive for current-meter measurement but conducive to good mixing.
    2. Moving rocks and debris may damage instruments placed in the flow.
    3. Cross-sectional areas or velocities are indeterminate or changing.
    4. The flow is unsteady, such as the flow that exists with storm-runoff events on small streams and urban storm-sewer systems.
    5. The flow is physically inaccessible or unsafe.
    From a practical standpoint, such methods are limited primarily to small streams, because of the excessively long channel-mixing lengths required for larger streams. Very good accuracy can be obtained provided that:
    1. Adequate mixing length and time are allowed.
    2. Careful field and laboratory techniques are used.
    3. Dye losses are not significant.
    This manual describes the slug-injection and constant-rate-injection methods of performing tracer-dilution measurements. Emphasis is on the use of fluorescent dyes as tracers and the equipment, field methods, and laboratory procedures for performing such measurements. The tracer-velocity method is also briefly discussed.
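
    Both injection modes admit simple discharge formulas: for constant-rate injection, Q = q(C1 − C2)/(C2 − Cb), and for slug injection, Q = M/∫(C − Cb)dt, where Cb is the background concentration. A worked sketch with hypothetical values:

    ```python
    import numpy as np

    # Constant-rate injection: tracer of concentration C1 injected at rate q
    # produces a fully mixed plateau concentration C2 downstream.
    q, C1, C2, Cb = 2.0e-5, 1.0e5, 4.0, 0.0     # m^3/s and ug/L; hypothetical
    Q_const = q * (C1 - C2) / (C2 - Cb)
    print(f"constant-rate estimate: {Q_const:.2f} m^3/s")

    # Slug injection: discharge from the injected mass and the area under the
    # downstream concentration curve.
    t = np.linspace(0.0, 600.0, 601)                # s
    C = 8.0 * np.exp(-(((t - 240.0) / 60.0) ** 2))  # synthetic dye cloud, ug/L
    M = 2.0e5                                       # injected dye mass, ug
    area = np.sum(C - Cb) * (t[1] - t[0])           # ug*s/L
    Q_slug = M / area / 1000.0                      # L/s -> m^3/s
    print(f"slug-injection estimate: {Q_slug:.3f} m^3/s")
    ```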

  14. Assessment methods in human body composition.

    PubMed

    Lee, Seon Yeong; Gallagher, Dympna

    2008-09-01

    The present study reviews the most recently developed and commonly used methods for the determination of human body composition in vivo with relevance for nutritional assessment. Body composition measurement methods are continuously being perfected with the most commonly used methods being bioelectrical impedance analysis, dilution techniques, air displacement plethysmography, dual energy X-ray absorptiometry, and MRI or magnetic resonance spectroscopy. Recent developments include three-dimensional photonic scanning and quantitative magnetic resonance. Collectively, these techniques allow for the measurement of fat, fat-free mass, bone mineral content, total body water, extracellular water, total adipose tissue and its subdepots (visceral, subcutaneous, and intermuscular), skeletal muscle, select organs, and ectopic fat depots. There is an ongoing need to perfect methods that provide information beyond mass and structure (static measures) to kinetic measures that yield information on metabolic and biological functions. On the basis of the wide range of measurable properties, analytical methods and known body composition models, clinicians and scientists can quantify a number of body components and with longitudinal assessment, can track changes in health and disease with implications for understanding efficacy of nutritional and clinical interventions, diagnosis, prevention, and treatment in clinical settings. With the greater need to understand precursors of health risk beginning in childhood, a gap exists in appropriate in-vivo measurement methods beginning at birth.

  15. Assessment methods in human body composition

    PubMed Central

    Lee, Seon Yeong; Gallagher, Dympna

    2009-01-01

    Purpose of review The present study reviews the most recently developed and commonly used methods for the determination of human body composition in vivo with relevance for nutritional assessment. Recent findings Body composition measurement methods are continuously being perfected with the most commonly used methods being bioelectrical impedance analysis, dilution techniques, air displacement plethysmography, dual energy X-ray absorptiometry, and MRI or magnetic resonance spectroscopy. Recent developments include three-dimensional photonic scanning and quantitative magnetic resonance. Collectively, these techniques allow for the measurement of fat, fat-free mass, bone mineral content, total body water, extracellular water, total adipose tissue and its subdepots (visceral, subcutaneous, and intermuscular), skeletal muscle, select organs, and ectopic fat depots. Summary There is an ongoing need to perfect methods that provide information beyond mass and structure (static measures) to kinetic measures that yield information on metabolic and biological functions. On the basis of the wide range of measurable properties, analytical methods and known body composition models, clinicians and scientists can quantify a number of body components and with longitudinal assessment, can track changes in health and disease with implications for understanding efficacy of nutritional and clinical interventions, diagnosis, prevention, and treatment in clinical settings. With the greater need to understand precursors of health risk beginning in childhood, a gap exists in appropriate in-vivo measurement methods beginning at birth. PMID:18685451

  16. Dissecting Reactor Antineutrino Flux Calculations

    NASA Astrophysics Data System (ADS)

    Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.

    2017-09-01

    Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In the present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. We observe that including a shape correction of about +6% MeV^-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.

  17. Dissecting Reactor Antineutrino Flux Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sonzogni, A. A.; McCutchan, E. A.; Hayes, A. C.

    2017-09-15

    Current predictions for the antineutrino yield and spectra from a nuclear reactor rely on the experimental electron spectra from 235U, 239Pu, 241Pu and a numerical method to convert these aggregate electron spectra into their corresponding antineutrino ones. In our present work we investigate quantitatively some of the basic assumptions and approximations used in the conversion method, studying first the compatibility between two recent approaches for calculating electron and antineutrino spectra. We then explore different possibilities for the disagreement between the measured Daya Bay and the Huber-Mueller antineutrino spectra, including the 238U contribution as well as the effective charge and the allowed shape assumption used in the conversion method. Here, we observe that including a shape correction of about +6% MeV^-1 in conversion calculations can better describe the Daya Bay spectrum. Because of a lack of experimental data, this correction cannot be ruled out, concluding that in order to confirm the existence of the reactor neutrino anomaly, or even quantify it, precisely measured electron spectra for about 50 relevant fission products are needed. With the advent of new rare ion facilities, the measurement of shape factors for these nuclides, for many of which precise beta intensity data from TAGS experiments already exist, would be highly desirable.

  18. E-Flux2 and SPOT: Validated Methods for Inferring Intracellular Metabolic Flux Distributions from Transcriptomic Data.

    PubMed

    Kim, Min Kyung; Lane, Anatoliy; Kelley, James J; Lun, Desmond S

    2016-01-01

    Several methods have been developed to predict system-wide and condition-specific intracellular metabolic fluxes by integrating transcriptomic data with genome-scale metabolic models. While powerful in many settings, existing methods have several shortcomings, and it is unclear which method has the best accuracy in general because of limited validation against experimentally measured intracellular fluxes. We present a general optimization strategy for inferring intracellular metabolic flux distributions from transcriptomic data coupled with genome-scale metabolic reconstructions. It consists of two different template models called DC (determined carbon source model) and AC (all possible carbon sources model) and two different new methods called E-Flux2 (E-Flux method combined with minimization of l2 norm) and SPOT (Simplified Pearson cOrrelation with Transcriptomic data), which can be chosen and combined depending on the availability of knowledge on carbon source or objective function. This enables us to simulate a broad range of experimental conditions. We examined E. coli and S. cerevisiae as representative prokaryotic and eukaryotic microorganisms respectively. The predictive accuracy of our algorithm was validated by calculating the uncentered Pearson correlation between predicted fluxes and measured fluxes. To this end, we compiled 20 experimental conditions (11 in E. coli and 9 in S. cerevisiae) of transcriptome measurements coupled with corresponding central carbon metabolism intracellular flux measurements determined by 13C metabolic flux analysis (13C-MFA), which is the largest dataset assembled to date for the purpose of validating inference methods for predicting intracellular fluxes. In both organisms, our method achieves an average correlation coefficient ranging from 0.59 to 0.87, outperforming a representative sample of competing methods. Easy-to-use implementations of E-Flux2 and SPOT are available as part of the open-source package MOST (http://most.ccib.rutgers.edu/). Our method represents a significant advance over existing methods for inferring intracellular metabolic flux from transcriptomic data. It not only achieves higher accuracy, but it also combines into a single method a number of other desirable characteristics including applicability to a wide range of experimental conditions, production of a unique solution, fast running time, and the availability of a user-friendly implementation.
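
    The family of methods E-Flux2 extends shares one core move: cap each reaction's flux bound in proportion to its expression level, then solve the usual flux-balance linear program. A toy sketch of that idea (a one-metabolite network, not the authors' implementation; the expression values and scaling are hypothetical):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network: R1 imports metabolite A; R2 converts A to biomass; R3 drains A.
    S = np.array([[1.0, -1.0, -1.0]])     # steady-state constraint: S @ v = 0
    expr = np.array([1.0, 0.4, 0.9])      # hypothetical relative expression
    ub = 10.0 * expr                      # E-Flux-style bounds: capacity ~ expression

    res = linprog(c=[0, -1, 0],           # maximize the biomass flux v2
                  A_eq=S, b_eq=[0.0],
                  bounds=list(zip([0.0, 0.0, 0.0], ub)))
    print("fluxes:", res.x)               # v2 is capped at 4.0 by its expression bound
    ```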

  19. Kinematic Determination of an Unmodeled Serial Manipulator by Means of an IMU

    NASA Astrophysics Data System (ADS)

    Ciarleglio, Constance A.

    Kinematic determination for an unmodeled manipulator is usually done through a priori knowledge of the manipulator's physical characteristics or external sensor information. The mathematics of the kinematic estimation, often based on the Denavit-Hartenberg convention, is complex and computationally demanding, in addition to being unique to the manipulator for which the method is developed. Analytical methods that can compute kinematics on the fly have the potential to be highly beneficial in dynamic environments where different configurations and variable manipulator types are often required. This thesis derives a new screw-theory-based method of kinematic determination, using a single inertial measurement unit (IMU), for use with any serial revolute manipulator. The method allows the expansion of reconfigurable manipulator design and simplifies the kinematic process for existing manipulators. A simulation is presented in which the theory of the method is verified and characterized with error. The method is then implemented on an existing manipulator as a verification of functionality.
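
    The screw-theory machinery such a method builds on is the product-of-exponentials formula: each revolute joint contributes a factor exp(ξ̂θ) built from its screw axis. A minimal forward-kinematics sketch for a hypothetical planar 2R arm (illustrating the formula, not the thesis method itself):

    ```python
    import numpy as np
    from scipy.linalg import expm

    def twist_matrix(omega, q):
        """4x4 se(3) matrix of a revolute screw axis: rotation omega through q."""
        v = -np.cross(omega, q)                    # linear part of the twist
        W = np.array([[0, -omega[2], omega[1]],
                      [omega[2], 0, -omega[0]],
                      [-omega[1], omega[0], 0]])
        xi = np.zeros((4, 4))
        xi[:3, :3], xi[:3, 3] = W, v
        return xi

    # Toy planar 2R arm with 1 m links; both joints rotate about z.
    M = np.eye(4); M[0, 3] = 2.0                   # home pose of the end effector
    axes = [twist_matrix([0, 0, 1], [0, 0, 0]),
            twist_matrix([0, 0, 1], [1, 0, 0])]

    def fk(thetas):
        """Product-of-exponentials forward kinematics."""
        T = np.eye(4)
        for xi, th in zip(axes, thetas):
            T = T @ expm(xi * th)
        return T @ M

    print(fk([np.pi / 2, -np.pi / 2]).round(3))    # end effector lands at (1, 1, 0)
    ```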

  20. Evaluation of the validity of the Bolton Index using cone-beam computed tomography (CBCT)

    PubMed Central

    Llamas, José M.; Cibrián, Rosa; Gandía, José L.; Paredes, Vanessa

    2012-01-01

    Aims: To evaluate the reliability and reproducibility of calculating the Bolton Index using cone-beam computed tomography (CBCT), and to compare this with measurements obtained using the 2D Digital Method. Material and Methods: Traditional study models were obtained from 50 patients, which were then digitized so that they could be measured using the Digital Method. Likewise, CBCTs of those same patients were undertaken using the Dental Picasso Master 3D® and the images obtained were then analysed using the InVivoDental programme. Results: By determining the regression lines for both measurement methods, as well as the difference between both of their values, the two methods are shown to be comparable, despite the fact that the measurements analysed presented statistically significant differences. Conclusions: The three-dimensional models obtained from the CBCT are as accurate and reproducible as the digital models obtained from the plaster study casts for calculating the Bolton Index. The differences existing between both methods were clinically acceptable. Key words: Tooth-size, digital models, Bolton Index, CBCT. PMID:22549690
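
    For readers unfamiliar with the index: the Bolton overall ratio divides the summed mesiodistal widths of the twelve mandibular teeth by those of the twelve maxillary teeth. A worked sketch with hypothetical width measurements (the commonly cited norm is about 91.3%):

    ```python
    # Hedged worked example of the Bolton overall ratio; the widths below are
    # hypothetical measurements in mm, not data from the study.
    mandibular = [5.2, 5.9, 6.8, 7.0, 7.2, 11.1, 10.8, 7.1, 7.0, 6.9, 5.8, 5.3]
    maxillary  = [8.4, 8.8, 7.9, 7.2, 6.9, 10.3, 10.1, 6.8, 7.1, 7.8, 8.7, 8.5]

    overall_ratio = 100.0 * sum(mandibular) / sum(maxillary)
    print(f"Bolton overall ratio: {overall_ratio:.1f}% (norm is about 91.3%)")
    ```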

  1. Automation is an Effective Way to Improve Quality of Verification (Calibration) of Measuring Instruments

    NASA Astrophysics Data System (ADS)

    Golobokov, M.; Danilevich, S.

    2018-04-01

    In order to assess calibration reliability and to automate such assessment, procedures for data collection and for a simulation study of a thermal imager calibration procedure have been elaborated. The existing calibration techniques do not always provide high reliability. A new method for analyzing existing calibration techniques and developing new, efficient ones has been suggested and tested. A type of software has been studied that generates instrument calibration reports automatically, monitors their proper configuration, processes measurement results, and assesses instrument validity. The use of such software reduces the man-hours spent on finalizing calibration data by a factor of 2 to 5 and eliminates a whole set of typical operator errors.

  2. Identification of Successive ``Unobservable'' Cyber Data Attacks in Power Systems Through Matrix Decomposition

    NASA Astrophysics Data System (ADS)

    Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.

    2016-11-01

    This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
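
    The convex program named above (a nuclear norm for the low-rank part plus summed column norms for the column-sparse attack) can be sketched directly with cvxpy. The synthetic data, trade-off weight, and detection threshold below are hypothetical, and the transformation applied to the sparse part in the paper is omitted for brevity.

    ```python
    import cvxpy as cp
    import numpy as np

    rng = np.random.default_rng(0)
    n_ch, n_t = 20, 40                    # channels x time samples
    M_true = np.outer(rng.normal(size=n_ch), rng.normal(size=n_t))  # rank-1 "PMU" data
    A_true = np.zeros((n_ch, n_t)); A_true[:, 25:30] = 3.0          # attacked columns
    M = M_true + A_true

    L = cp.Variable((n_ch, n_t))          # low-rank component
    S = cp.Variable((n_ch, n_t))          # column-sparse attack component
    lam = 0.5                             # hypothetical trade-off weight
    # Nuclear norm promotes low rank; summed column norms promote column sparsity.
    objective = cp.normNuc(L) + lam * cp.sum(cp.norm(S, 2, axis=0))
    cp.Problem(cp.Minimize(objective), [L + S == M]).solve()
    print("columns flagged as attacked:",
          np.where(np.linalg.norm(S.value, axis=0) > 1.0)[0])
    ```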

  3. Micro gas analyzer measurement of nitric oxide in breath by direct wet scrubbing and fluorescence detection.

    PubMed

    Toda, Kei; Koga, Takahiro; Kosuge, Junichi; Kashiwagi, Mieko; Oguchi, Hiroshi; Arimoto, Takemi

    2009-08-15

    A novel method is proposed to measure NO in breath. Breath NO is a useful diagnostic measure for asthma patients. Due to the low water solubility of NO, existing wet chemical NO measurements are conducted on NO2 after removal of pre-existing NO2 and conversion of NO to NO2. In contrast, this study utilizes direct measurement of NO by wet chemistry. Gaseous NO was collected into an aqueous phase by a honeycomb-patterned microchannel scrubber and reacted with diaminofluorescein-2 (DAF-2). Fluorescence of the product was measured using a miniature detector, comprising a blue light-emitting diode (LED) and a photodiode. The response intensity was found to dramatically increase following addition of NO2 into the absorbing solution or air sample. By optimizing the conditions, the sensitivity obtained was sufficient to measure parts-per-billion by volume levels of NO continuously. The system was applied to real analysis of NO in breath, and the effect of coexisting compounds was investigated. The proposed system could successfully measure breath NO.

  4. Methods and techniques for measuring gas emissions from agricultural and animal feeding operations.

    PubMed

    Hu, Enzhu; Babcock, Esther L; Bialkowski, Stephen E; Jones, Scott B; Tuller, Markus

    2014-01-01

    Emissions of gases from agricultural and animal feeding operations contribute to climate change, produce odors, degrade sensitive ecosystems, and pose a threat to public health. The complexity of processes and environmental variables affecting these emissions complicate accurate and reliable quantification of gas fluxes and production rates. Although a plethora of measurement technologies exist, each method has its limitations that exacerbate accurate quantification of gas fluxes. Despite a growing interest in gas emission measurements, only a few available technologies include real-time, continuous monitoring capabilities. Commonly applied state-of-the-art measurement frameworks and technologies were critically examined and discussed, and recommendations for future research to address real-time monitoring requirements for forthcoming regulation and management needs are provided.

  5. Normalized Rotational Multiple Yield Surface Framework (NRMYSF) stress-strain curve prediction method based on small strain triaxial test data on undisturbed Auckland residual clay soils

    NASA Astrophysics Data System (ADS)

    Noor, M. J. Md; Ibrahim, A.; Rahman, A. S. A.

    2018-04-01

    Small-strain triaxial measurement is considered significantly more accurate than external strain measurement using the conventional method, due to the systematic errors normally associated with the test. Three submersible miniature linear variable differential transducers (LVDTs) were mounted on yokes clamped directly onto the soil sample, spaced equally at 120° from one another. The setup, using a 0.4 N resolution load cell and a 16-bit A/D converter, was capable of consistently resolving displacements of less than 1 µm and measuring axial strains ranging from less than 0.001% to 2.5%. Further analysis of the small-strain local measurement data was performed using the new Normalized Rotational Multiple Yield Surface Framework (NRMYSF) method and compared with the existing Rotational Multiple Yield Surface Framework (RMYSF) prediction method. The prediction of shear strength based on the combined intrinsic curvilinear shear strength envelope using small-strain triaxial test data confirmed the significant improvement and reliability of the measurement and analysis methods. Moreover, the NRMYSF method shows excellent data prediction and a significant improvement toward more reliable prediction of soil strength that can reduce the cost and time of experimental laboratory testing.

  6. Cultural adaptation and translation of measures: an integrated method.

    PubMed

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  7. Drawing on Resilience: Piloting the Utility of the Kinetic Family Drawing to Measure Resilience in Children of HIV-Positive Mothers

    ERIC Educational Resources Information Center

    Ebersöhn, L.; Eloff, I.; Finestone, M.; van Dullemen, I.; Sikkema, K.; Forsyth, B.

    2012-01-01

    In this article we describe how using a visual, child-friendly measure of resilience in a randomised control trial (RCT), the Kgolo Mmogo (KM) project, resulted in representative insights on resilience in a mother-child relationship where the mother is HIV-positive. We used the existing psychological method Kinetic Family Drawing (KFD) to measure…

  8. Internal Stress Monitoring of In-Service Structural Steel Members with Ultrasonic Method

    PubMed Central

    Li, Zuohua; He, Jingbo; Teng, Jun; Wang, Ying

    2016-01-01

    Internal stress in structural steel members is an important parameter for steel structures in their design, construction, and service stages. However, it is hard to measure via traditional approaches. Among the existing non-destructive testing (NDT) methods, the ultrasonic method has received the most research attention. Longitudinal critically refracted (Lcr) waves, which propagate parallel to the surface of the material within an effective depth, have shown great potential as an effective stress measurement approach. This paper presents a systematic non-destructive evaluation method to determine the internal stress in in-service structural steel members using Lcr waves. Based on the theory of acoustoelasticity, a stress evaluation formula is derived. A stress-to-acoustic-time-difference factor is used to describe the relationship between stress and the measurable acoustic results. A testing facility is developed and used to demonstrate the performance of the proposed method. Two steel members are measured using both the proposed method and the traditional strain gauge method for verification. Parametric studies are performed on three steel members and an aluminum plate to investigate the factors that influence the testing results. The results show that the proposed method is effective and accurate for determining stress in in-service structural steel members. PMID:28773347

  9. Internal Stress Monitoring of In-Service Structural Steel Members with Ultrasonic Method.

    PubMed

    Li, Zuohua; He, Jingbo; Teng, Jun; Wang, Ying

    2016-03-23

    Internal stress in structural steel members is an important parameter for steel structures in their design, construction, and service stages. However, it is hard to measure via traditional approaches. Among the existing non-destructive testing (NDT) methods, the ultrasonic method has received the most research attention. Longitudinal critically refracted (Lcr) waves, which propagate parallel to the surface of the material within an effective depth, have shown great potential as an effective stress measurement approach. This paper presents a systematic non-destructive evaluation method to determine the internal stress in in-service structural steel members using Lcr waves. Based on the theory of acoustoelasticity, a stress evaluation formula is derived. A stress-to-acoustic-time-difference factor is used to describe the relationship between stress and the measurable acoustic results. A testing facility is developed and used to demonstrate the performance of the proposed method. Two steel members are measured using both the proposed method and the traditional strain gauge method for verification. Parametric studies are performed on three steel members and an aluminum plate to investigate the factors that influence the testing results. The results show that the proposed method is effective and accurate for determining stress in in-service structural steel members.

  10. An information-theoretical perspective on weighted ensemble forecasts

    NASA Astrophysics Data System (ADS)

    Weijs, Steven V.; van de Giesen, Nick

    2013-08-01

    This paper presents an information-theoretical method for weighting ensemble forecasts with new information. Weighted ensemble forecasts can be used to adjust the distribution that an existing ensemble of time series represents, without modifying the values in the ensemble itself. The weighting can, for example, add new seasonal forecast information to an existing ensemble of historically measured time series that represents climatic uncertainty. A recent article in this journal compared several methods to determine the weights for the ensemble members and introduced the pdf-ratio method. In this article, a new method, the minimum relative entropy update (MRE-update), is presented. Based on the principle of minimum discrimination information, an extension of the principle of maximum entropy (POME), the method ensures that no more information is added to the ensemble than is present in the forecast. This is achieved by minimizing relative entropy, with the forecast information imposed as constraints. From this same perspective, an information-theoretical view on the various weighting methods is presented. The MRE-update is compared with the existing methods and the parallels with the pdf-ratio method are analysed. The paper provides a new, information-theoretical justification for one version of the pdf-ratio method that turns out to be equivalent to the MRE-update. All other methods result in sets of ensemble weights that, seen from the information-theoretical perspective, add either too little or too much (i.e. fictitious) information to the ensemble.
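
    As a rough illustration of the MRE-update idea, the sketch below reweights a uniform ensemble by minimizing relative entropy subject to a forecast-mean constraint. The member values, the forecast mean, and the use of SciPy's SLSQP solver are illustrative assumptions, not the authors' implementation.

    ```python
    # A minimal sketch: minimum-relative-entropy reweighting of an ensemble
    # under a single (hypothetical) forecast-mean constraint.
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    x = rng.normal(10.0, 3.0, size=50)    # historical member values (e.g., seasonal totals)
    u = np.full(x.size, 1.0 / x.size)     # uniform prior weights
    m_forecast = 11.5                     # hypothetical forecast mean, imposed as a constraint

    def rel_entropy(w):
        # Kullback-Leibler divergence of the weights w from the uniform prior u
        return np.sum(w * np.log(w / u))

    cons = (
        {"type": "eq", "fun": lambda w: np.sum(w) - 1.0},           # weights sum to one
        {"type": "eq", "fun": lambda w: np.dot(w, x) - m_forecast}, # match the forecast mean
    )
    res = minimize(rel_entropy, u, bounds=[(1e-12, 1.0)] * x.size,
                   constraints=cons, method="SLSQP")
    w = res.x
    print(f"weighted ensemble mean = {np.dot(w, x):.2f} (target {m_forecast})")
    ```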

  11. Using existing population-based data sets to measure the American Academy of Pediatrics definition of medical home for all children and children with special health care needs.

    PubMed

    Bethell, Christina D; Read, Debra; Brockwood, Krista

    2004-05-01

    National health goals include ensuring that all children have a medical home. Historically, medical home has been determined by the presence of a usual or primary source of care, such as a pediatrician or a family physician. More recent definitions expand on this simplistic notion of medical home. A definition of medical home set forth by the American Academy of Pediatrics (AAP) includes 7 dimensions and 37 discrete concepts for determining the presence of a medical home for a child. Standardized methods to operationalize these definitions for purposes of national, state, health plan, or medical practice level reporting on the presence of medical homes for children are essential to assessing and improving health care system performance in this area. The objective of this study was to identify methods to measure the presence of medical homes for all children and for children with special health care needs (CSHCN) using existing population-based data sets. Methods were developed for using existing population-based data sets to assess the presence of medical homes, as defined by the AAP, for children with and without special health care needs. Data sets evaluated included the National Survey of Children With Special Health Care Needs, the National Medical Expenditures Panel Survey, the Consumer Assessment of Health Plans Study Child Survey (CAHPS), and the Consumer Assessment of Health Plans Study Child Survey--Children With Chronic Conditions (CAHPS-CCC2.0H). Alternative methods for constructing measures using existing data were compared and results used to inform the design of a new method for use in the upcoming National Survey of Children's Health. Data from CAHPS-CCC2.0H are used to illustrate measurement options and variations in the overall presence of medical homes for children across managed health care plans as well as to evaluate in which areas of the AAP definition of medical home improvements may be most needed for all CSHCN. Existing surveys vary in their coverage of concepts included in the AAP definition of medical home and, therefore, in their capacity to evaluate medical home for children with and without special health care needs. Using data from CAHPS-CCC2.0H, the overall proportion of children who were enrolled in managed care health plans and met criteria for having a medical home varied from 43.9% to 74% depending on the specific scoring method selected for these items. Wide variations across health plans were observed and were most prominent in the areas of "accessible care" and "comprehensive care." Performance was uniformly poorest in the area of "coordinated care" and for CSHCN. Although children with a personal doctor or nurse were more likely to meet the AAP criteria for having a medical home, simply having a personal doctor or nurse was not highly predictive of whether a child experienced the other core qualities of a medical home (positive predictive value: .50; negative predictive value: .59). Despite differences across existing surveys and gaps in concepts represented, we believe that the AAP definition of medical home can be well represented by the small subset of concepts represented in the National Survey of Children With Special Health Care Needs and the CAHPS-CCC2.0H. A less comprehensive yet still worthwhile measure is possible using the Medical Expenditures Panel Survey. 
The varying degrees of empirical evidence and consensus for each of the AAP definition domains for medical home suggest the need for constructing measures that also vary in terms of criteria for determining that a child does or does not have a medical home. In addition to a simple "yes or no," or rate-based, measure, a continuous medical "homeness" score that places a child or group of children on a continuum of medical "homeness" is also valuable. Findings indicate that health plans have an important role to play in ensuring medical homes for children in addition to medical practices and those who set policies that guide the design and delivery of health care for children. Overall, using existing population-based data, a measure of medical home that is aligned with the AAP definition is feasible to include in the annual National Healthcare Quality Report, in state reports on the quality of Medicaid, State Children's Health Insurance Program, and Title V programs as well as to evaluate performance on the Healthy People 2010 objectives and the President's New Freedom Initiative.

  12. Updating and improving methodology for prioritizing highway project locations on the strategic intermodal system : [summary].

    DOT National Transportation Integrated Search

    2016-05-01

    Florida International University researchers examined the existing performance measures and the project prioritization method in the CMP and updated them to better reflect the current conditions and strategic goals of FDOT. They also developed visual...

  13. Multilevel Modeling with Correlated Effects

    ERIC Educational Resources Information Center

    Kim, Jee-Seon; Frees, Edward W.

    2007-01-01

    When there exist omitted effects, measurement error, and/or simultaneity in multilevel models, explanatory variables may be correlated with random components, and standard estimation methods do not provide consistent estimates of model parameters. This paper introduces estimators that are consistent under such conditions. By employing generalized…

  14. Accuracy of lagoon gas emissions using an inverse dispersion method

    USDA-ARS?s Scientific Manuscript database

    Measuring gas emissions from treatment lagoons and storage ponds poses challenging conditions for existing micrometeorological techniques because of non-ideal wind conditions. These include those induced by trees and crops surrounding the lagoons, and lagoons with dimensions too small to establish ...

  15. Nuclear States with Abnormally Large Radii (size Isomers)

    NASA Astrophysics Data System (ADS)

    Ogloblin, A. A.; Demyanova, A. S.; Danilov, A. N.; Belyaeva, T. L.; Goncharov, S. A.

    2015-06-01

    Application of methods for measuring the radii of short-lived excited states (the modified diffraction model, MDM; the inelastic nuclear rainbow scattering method, INRS; and the asymptotic normalization coefficients method, ANC) to the analysis of some nuclear reactions provides evidence for the existence in 9Be, 11B, 12C, and 13C of excited states whose radii exceed those of the corresponding ground states by ~30%. Two types of structure were identified for these "size isomers": neutron halos and α-clusters.

  16. Measuring urban water conservation policies: Toward a comprehensive index

    USGS Publications Warehouse

    Hess, David; Wold, Christopher; Worland, Scott C.; Hornberger, George M.

    2017-01-01

    This article (1) discusses existing efforts to measure water conservation policies (WCPs) in the United States (U.S.); (2) suggests general methodological guidelines for creating robust water conservation indices (WCIs); (3) presents a comprehensive template for coding WCPs; (4) introduces a summary index, the Vanderbilt Water Conservation Index (VWCI), which is derived from 79 WCP observations for 197 cities for the year 2015; and (5) compares the VWCI to WCP data extracted from the 2010 American Water Works Association (AWWA) Water and Wastewater Rates survey. Existing approaches to measuring urban WCPs in U.S. cities are limited because they consider only a portion of WCPs or they are restricted geographically. The VWCI consists of a more comprehensive set of 79 observations classified as residential, commercial/industrial, billing structure, drought plan, or general. Our comparison of the VWCI and AWWA survey responses indicates reasonable agreement (ρ = 0.76) between the two WCIs for 98 cities where the data overlap. The correlation suggests the AWWA survey responses can provide fairly robust longitudinal WCP information, but we argue the measurement of WCPs is still in its infancy, and our approach suggests strategies for improving existing methods.
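
    As a minimal sketch of the kind of index comparison reported above, the snippet below computes a Spearman rank correlation between two index scores over the same cities; the arrays are synthetic stand-ins, not VWCI or AWWA data.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    vwci = rng.integers(5, 60, size=98)          # synthetic index score per city
    awwa = vwci + rng.integers(-8, 9, size=98)   # noisy second index over the same cities
    rho, p = spearmanr(vwci, awwa)
    print(f"Spearman rho = {rho:.2f}, p = {p:.1e}")
    ```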

  17. Assessing the Utility of Item Response Theory Models: Differential Item Functioning.

    ERIC Educational Resources Information Center

    Scheuneman, Janice Dowd

    The current status of item response theory (IRT) is discussed. Several IRT methods exist for assessing whether an item is biased. Focus is on methods proposed by L. M. Rudner (1975), F. M. Lord (1977), D. Thissen et al. (1988) and R. L. Linn and D. Harnisch (1981). Rudner suggested a measure of the area lying between the two item characteristic…

  18. Mobile robot self-localization system using single webcam distance measurement technology in indoor environments.

    PubMed

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-27

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method needs neither expensive high-resolution webcams nor complicated pattern recognition methods, just a few simple estimating formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment.

  19. Method and apparatus for measuring reactivity of fissile material

    DOEpatents

    Lee, D.M.; Lindquist, L.O.

    1982-09-07

    Given are a method and apparatus for measuring, nondestructively and noninvasively (i.e., using no internal probing), the burnup, reactivity, or fissile content of any material which emits neutrons and which has fissionable components. The assay is accomplished by altering the return flux of neutrons into the fuel assembly by means of changing the reflecting material. The existing passive neutron emissions in the material being assayed are used as the source of interrogating neutrons. Two measurements of either emitted neutron or emitted gamma-ray count rates are made and are then correlated to the reactivity, burnup, or fissionable content of the material being assayed. Spent fuel which has been freshly discharged from a reactor can be assayed using this method and apparatus. Precisions of 1000 MWd/tU appear to be feasible.

  20. Mobile Robot Self-Localization System Using Single Webcam Distance Measurement Technology in Indoor Environments

    PubMed Central

    Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen

    2014-01-01

    A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy: only one rectangular pattern of known dimensions is needed, e.g., a ground tile. Common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method needs neither expensive high-resolution webcams nor complicated pattern recognition methods, just a few simple estimating formulas. The experimental results show that the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282

  1. Two-point method uncertainty during control and measurement of cylindrical element diameters

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of control of parts' linear sizes by the two-point measurement method; its task is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the deviations of element surface shape and location in a rectangular coordinate system. The studies were carried out for elements of various service uses, taking into account their informativeness, corresponding to the kinematic pair classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1, and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty arises when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types, and it also arises when the element's average size is measured, for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum, and medium linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
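
    A small numeric sketch of the effect described above: for an oval (elliptical) cross-section, two-point widths vary with measurement direction, so individual two-point readings and the average size differ from the maximum and minimum functional diameters. The semi-axes are illustrative assumptions.

    ```python
    import numpy as np

    a, b = 10.05, 9.95                      # semi-axes of the oval profile, mm (assumed)
    theta = np.linspace(0.0, np.pi, 181)    # two-point measurement directions
    width = 2.0 * np.sqrt((a * np.cos(theta))**2 + (b * np.sin(theta))**2)

    print(f"max functional diameter = {width.max():.3f} mm")   # ~ 2a
    print(f"min functional diameter = {width.min():.3f} mm")   # ~ 2b
    print(f"average two-point size  = {width.mean():.3f} mm")  # what averaging reports
    ```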

  2. A brief measure of attitudes toward mixed methods research in psychology.

    PubMed

    Roberts, Lynne D; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and a way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research, along with validation measures, was administered via an online survey to a convenience sample of 274 psychology students, academics, and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors ('Limited Exposure,' '(in)Compatibility,' 'Validity,' and 'Tokenistic Qualitative Component') each have acceptable internal reliability. Known-groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used to assess attitudes toward mixed methods research in psychology, to measure change in attitudes as part of the evaluation of mixed methods education, and in larger research programs.

  3. Harvesting tree biomass at the stand level to assess the accuracy of field and airborne biomass estimation in savannas.

    PubMed

    Colgan, Matthew S; Asner, Gregory P; Swemmer, Tony

    2013-07-01

    Tree biomass is an integrated measure of net growth and is critical for understanding, monitoring, and modeling ecosystem functions. Despite the importance of accurately measuring tree biomass, several fundamental barriers preclude direct measurement at large spatial scales, including the facts that trees must be felled to be weighed and that even modestly sized trees are challenging to maneuver once felled. Allometric methods allow for estimation of tree mass using structural characteristics, such as trunk diameter. Savanna trees present additional challenges, including limited available allometry and a prevalence of multiple stems per individual. Here we collected airborne lidar data over a semiarid savanna adjacent to the Kruger National Park, South Africa, and then harvested and weighed woody plant biomass at the plot scale to provide a standard against which field and airborne estimation methods could be compared. For an existing airborne lidar method, we found that half of the total error was due to averaging canopy height at the plot scale. This error was eliminated by instead measuring maximum height and crown area of individual trees from lidar data using an object-based method to identify individual tree crowns and estimate their biomass. The best object-based model approached the accuracy of field allometry at both the tree and plot levels, and it more than doubled the accuracy compared to existing airborne methods (17% vs. 44% deviation from harvested biomass). Allometric error accounted for less than one-third of the total residual error in airborne biomass estimates at the plot scale when using allometry with low bias. Airborne methods also gave more accurate predictions at the plot level than did field methods based on diameter-only allometry. These results provide a novel comparison of field and airborne biomass estimates using harvested plots and advance the role of lidar remote sensing in savanna ecosystems.

  4. Comparison of usual and alternative methods to measure height in mechanically ventilated patients: potential impact on protective ventilation.

    PubMed

    Bojmehrani, Azadeh; Bergeron-Duchesne, Maude; Bouchard, Carmelle; Simard, Serge; Bouchard, Pierre-Alexandre; Vanderschuren, Abel; L'Her, Erwan; Lellouche, François

    2014-07-01

    Protective ventilation implementation requires the calculation of predicted body weight (PBW), determined by a formula based on gender and height. Consequently, height inaccuracy may be a limiting factor to correctly set tidal volumes. The objective of this study was to evaluate the accuracy of different methods in measuring heights in mechanically ventilated patients. Before cardiac surgery, actual height was measured with a height gauge while subjects were standing upright (reference method); the height was also estimated by alternative methods based on lower leg and forearm measurements. After cardiac surgery, upon ICU admission, a subject's height was visually estimated by a clinician and then measured with a tape measure while the subject was supine and undergoing mechanical ventilation. One hundred subjects (75 men, 25 women) were prospectively included. Mean PBW was 61.0 ± 9.7 kg, and mean actual weight was 30.3% higher. In comparison with the reference method, estimating the height visually and using the tape measure were less accurate than both lower leg and forearm measurements. Errors above 10% in calculating the PBW were present in 25 and 40 subjects when the tape measure or visual estimation of height was used in the formula, respectively. With lower leg and forearm measurements, 15 subjects had errors above 10% (P < .001). Our results demonstrate that significant variability exists between the different methods used to measure height in bedridden patients on mechanical ventilation. Alternative methods based on lower leg and forearm measurements are potentially interesting solutions to facilitate the accurate application of protective ventilation. Copyright © 2014 by Daedalus Enterprises.
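
    To see why height accuracy matters, the sketch below computes predicted body weight with a commonly used ARDSNet-style formula and a 6 mL/kg protective tidal volume target; the example heights, and the use of this particular formula here, are illustrative assumptions rather than the study's protocol.

    ```python
    def pbw_kg(height_cm: float, male: bool) -> float:
        # ARDSNet-style predicted body weight from gender and height
        return (50.0 if male else 45.5) + 0.91 * (height_cm - 152.4)

    for h in (170.0, 176.0):                  # measured vs. visually estimated height (assumed)
        vt = 6.0 * pbw_kg(h, male=True)       # protective target of 6 mL/kg PBW
        print(f"height {h:.0f} cm -> PBW {pbw_kg(h, True):.1f} kg, Vt {vt:.0f} mL")
    ```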

  5. Research on uncertainty evaluation measure and method of voltage sag severity

    NASA Astrophysics Data System (ADS)

    Liu, X. N.; Wei, J.; Ye, S. Y.; Chen, B.; Long, C.

    2018-01-01

    Voltage sag is an inevitable and serious power quality problem in power systems. This paper provides a general summary and review of the concepts, indices, and evaluation methods related to voltage sag severity. Considering the complexity and uncertainty of the influencing factors and damage degrees, and the characteristics and requirements of voltage sag severity on the source, network, and load sides, the measurement concepts and the conditions under which they hold, together with evaluation indices and methods for voltage sag severity, are analyzed. Current evaluation techniques, such as stochastic theory and fuzzy logic, as well as their fusion, are reviewed in detail. An index system for voltage sag severity is provided for comprehensive study. The main aim of this paper is to propose an approach to severity research based on advanced uncertainty theory and uncertainty measures. This study may serve as a valuable guide for researchers interested in the domain of voltage sag severity.

  6. Accurate evaluation of fast threshold voltage shift for SiC MOS devices under various gate bias stress conditions

    NASA Astrophysics Data System (ADS)

    Sometani, Mitsuru; Okamoto, Mitsuo; Hatakeyama, Tetsuo; Iwahashi, Yohei; Hayashi, Mariko; Okamoto, Dai; Yano, Hiroshi; Harada, Shinsuke; Yonezawa, Yoshiyuki; Okumura, Hajime

    2018-04-01

    We investigated methods of measuring the threshold voltage (V th) shift of 4H-silicon carbide (SiC) metal–oxide–semiconductor field-effect transistors (MOSFETs) under positive DC, negative DC, and AC gate bias stresses. A fast measurement method for V th shift under both positive and negative DC stresses revealed the existence of an extremely large V th shift in the short-stress-time region. We then examined the effect of fast V th shifts on drain current (I d) changes within a pulse under AC operation. The fast V th shifts were suppressed by nitridation. However, the I d change within one pulse occurred even in commercially available SiC MOSFETs. The correlation between I d changes within one pulse and V th shifts measured by a conventional method is weak. Thus, a fast and in situ measurement method is indispensable for the accurate evaluation of I d changes under AC operation.

  7. Robust estimation of partially linear models for longitudinal data with dropouts and measurement error.

    PubMed

    Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing

    2016-12-20

    Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.

  8. Validation of an arterial tortuosity measure with application to hypertension collection of clinical hypertensive patients

    PubMed Central

    2011-01-01

    Background: Hypertension may increase tortuosity or twistedness of arteries. We applied a centerline extraction algorithm and tortuosity metric to magnetic resonance angiography (MRA) brain images to quantitatively measure the tortuosity of arterial vessel centerlines. The most commonly used arterial tortuosity measure is the distance factor metric (DFM). This study tested a DFM-based measurement's ability to detect increases in the arterial tortuosity of hypertensives using existing images. Existing images presented challenges such as different resolutions, which may affect the tortuosity measurement; different depths of the area imaged; and different imaging artifacts that require filtering. Methods: The stability and accuracy of alternative centerline algorithms were validated on numerically generated models and test brain MRA data. Existing images were gathered from previous studies and clinical medical systems by manually reading electronic medical records to identify hypertensives and negatives. Images of different resolutions were interpolated to similar resolutions. Arterial tortuosity in MRA images was measured from a DFM curve and tested on numerically generated models as well as MRA images from two hypertensive and three negative control populations. Comparisons were made between different resolutions, different filters, hypertensives versus negatives, and different negative controls. Results: In tests using numerical models of a simple helix, the measured tortuosity increased as expected with more tightly coiled helices. Interpolation reduced resolution-dependent differences in measured tortuosity. The Korean hypertensive population had significantly higher arterial tortuosity than its corresponding negative control population across multiple arteries. In addition, one negative control population of different ethnicity had significantly less arterial tortuosity than the other two. Conclusions: Tortuosity can be compared between images of different resolutions by interpolating from lower to higher resolutions. Use of a universal negative control was not possible in this study. The method described here detected elevated arterial tortuosity in a hypertensive population compared to the negative control population and can be used to study this relation in other populations. PMID:22166145
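
    A minimal sketch of one common form of the distance factor metric, the arc length of the centerline divided by the chord between its endpoints, applied to a hypothetical helical centerline (illustrative, not the paper's pipeline):

    ```python
    import numpy as np

    def distance_factor_metric(points: np.ndarray) -> float:
        # arc length of the centerline divided by the chord between its endpoints
        arc = np.sum(np.linalg.norm(np.diff(points, axis=0), axis=1))
        chord = np.linalg.norm(points[-1] - points[0])
        return arc / chord

    t = np.linspace(0.0, 4.0 * np.pi, 100)               # hypothetical helical centerline
    helix = np.column_stack([np.cos(t), np.sin(t), 0.5 * t])
    print(f"DFM = {distance_factor_metric(helix):.3f}")  # > 1 for a tortuous vessel
    ```

    Consistent with the validation reported above, more tightly coiled helices (smaller pitch) yield larger DFM values.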

  9. Looking for trees in the forest: summary tree from posterior samples

    PubMed Central

    2013-01-01

    Background: Bayesian phylogenetic analysis generates a set of trees which are often condensed into a single tree representing the whole set. Many methods exist for selecting a representative topology for a set of unrooted trees, few exist for assigning branch lengths to a fixed topology, and even fewer for simultaneously setting the topology and branch lengths. However, there is very little research into locating a good representative for a set of rooted time trees like the ones obtained from a BEAST analysis. Results: We empirically compare new and known methods for generating a summary tree. Some new methods are motivated by mathematical constructions such as tree metrics, while the rest employ tree concepts which work well in practice. These use more of the posterior than existing methods, which discard information not directly mapped to the chosen topology. Using results from a large number of simulations we assess the quality of a summary tree, measuring (a) how well it explains the sequence data under the model and (b) how close it is to the “truth”, i.e. to the tree used to generate the sequences. Conclusions: Our simulations indicate that no single method is “best”. Methods producing good divergence time estimates have poor branch lengths and lower model fit, and vice versa. Using the results presented here, a user can choose the appropriate method based on the purpose of the summary tree. PMID:24093883

  10. Looking for trees in the forest: summary tree from posterior samples.

    PubMed

    Heled, Joseph; Bouckaert, Remco R

    2013-10-04

    Bayesian phylogenetic analysis generates a set of trees which are often condensed into a single tree representing the whole set. Many methods exist for selecting a representative topology for a set of unrooted trees, few exist for assigning branch lengths to a fixed topology, and even fewer for simultaneously setting the topology and branch lengths. However, there is very little research into locating a good representative for a set of rooted time trees like the ones obtained from a BEAST analysis. We empirically compare new and known methods for generating a summary tree. Some new methods are motivated by mathematical constructions such as tree metrics, while the rest employ tree concepts which work well in practice. These use more of the posterior than existing methods, which discard information not directly mapped to the chosen topology. Using results from a large number of simulations we assess the quality of a summary tree, measuring (a) how well it explains the sequence data under the model and (b) how close it is to the "truth", i.e. to the tree used to generate the sequences. Our simulations indicate that no single method is "best". Methods producing good divergence time estimates have poor branch lengths and lower model fit, and vice versa. Using the results presented here, a user can choose the appropriate method based on the purpose of the summary tree.

  11. Research on position and orientation measurement method for roadheader based on vision/INS

    NASA Astrophysics Data System (ADS)

    Yang, Jinyong; Zhang, Guanqin; Huang, Zhe; Ye, Yaozhong; Ma, Bowen; Wang, Yizhong

    2018-01-01

    The roadheader, a type of special equipment for large tunnel excavation, is widely used in coal mines. It is one of the main electromechanical systems for mine production and is regarded as the core equipment for underground tunnel driving construction. With the wide application of rapid driving systems, underground tunnel driving methods with a higher level of automation are required. In this respect, real-time position and orientation measurement for the roadheader is one of the most important research topics. To solve the problem of automatic, real-time position and orientation measurement for roadheaders, this paper analyses and compares the features of several existing measuring methods and then proposes a new method based on the combination of monocular vision and a strapdown inertial navigation system (SINS). Realizing five degree-of-freedom (DOF) measurement of the real-time position and orientation of a roadheader, the method was verified on rapid excavation equipment in the Daliuta coal mine. Experimental results show that the accuracy of orientation measurement is better than 0.1°, the standard deviation of static drift is better than 0.25°, and the accuracy of position measurement is better than 1 cm. This proves that the method can be used for real-time position and orientation measurement of roadheaders and has broad prospects in coal mine engineering.

  12. Segmentation quality evaluation using region-based precision and recall measures for remote sensing images

    NASA Astrophysics Data System (ADS)

    Zhang, Xueliang; Feng, Xuezhi; Xiao, Pengfeng; He, Guangjun; Zhu, Liujun

    2015-04-01

    Segmentation of remote sensing images is a critical step in geographic object-based image analysis. Evaluating the performance of segmentation algorithms is essential to identify effective segmentation methods and optimize their parameters. In this study, we propose region-based precision and recall measures and use them to compare two image partitions for the purpose of evaluating segmentation quality. The two measures are calculated based on region overlapping and presented as a point or a curve in a precision-recall space, which can indicate segmentation quality in both geometric and arithmetic respects. Furthermore, the precision and recall measures are combined by using four different methods. We examine and compare the effectiveness of the combined indicators through geometric illustration, in an effort to reveal segmentation quality clearly and capture the trade-off between the two measures. In the experiments, we adopted the multiresolution segmentation (MRS) method for evaluation. The proposed measures are compared with four existing discrepancy measures to further confirm their capabilities. Finally, we suggest using a combination of the region-based precision-recall curve and the F-measure for supervised segmentation evaluation.
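
    The snippet below is a deliberately simplified, pixel-overlap sketch of region-based precision and recall (with their F-measure combination); the paper's region-based measures are richer, so treat this only as an illustration of the overlap idea.

    ```python
    import numpy as np

    def purity(a: np.ndarray, b: np.ndarray) -> float:
        # fraction of pixels in which each region of `a` agrees with the
        # region of `b` it overlaps most
        agree = 0
        for lab in np.unique(a):
            _, counts = np.unique(b[a == lab], return_counts=True)
            agree += counts.max()
        return agree / a.size

    seg = np.array([[1, 1, 2], [1, 2, 2], [3, 3, 3]])   # toy segmentation labels
    ref = np.array([[1, 1, 1], [2, 2, 2], [3, 3, 3]])   # toy reference labels
    precision, recall = purity(seg, ref), purity(ref, seg)
    f_measure = 2 * precision * recall / (precision + recall)
    print(f"P = {precision:.3f}, R = {recall:.3f}, F = {f_measure:.3f}")
    ```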

  13. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment

    PubMed Central

    Seo, Aria; Kim, Yeichang

    2017-01-01

    As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users’ situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS. PMID:28805709

  14. Cyber Physical Systems for User Reliability Measurements in a Sharing Economy Environment.

    PubMed

    Seo, Aria; Jeong, Junho; Kim, Yeichang

    2017-08-13

    As the sharing economic market grows, the number of users is also increasing but many problems arise in terms of reliability between providers and users in the processing of services. The existing methods provide shared economic systems that judge the reliability of the provider from the viewpoint of the user. In this paper, we have developed a system for establishing mutual trust between providers and users in a shared economic environment to solve existing problems. In order to implement a system that can measure and control users' situation in a shared economic environment, we analyzed the necessary factors in a cyber physical system (CPS). In addition, a user measurement system based on a CPS structure in a sharing economic environment is implemented through analysis of the factors to consider when constructing a CPS.

  15. Active learning for noisy oracle via density power divergence.

    PubMed

    Sogawa, Yasuhiro; Ueno, Tsuyoshi; Kawahara, Yoshinobu; Washio, Takashi

    2013-10-01

    The accuracy of active learning is critically influenced by the existence of noisy labels given by a noisy oracle. In this paper, we propose a novel pool-based active learning framework through robust measures based on density power divergence. By minimizing density power divergence, such as β-divergence and γ-divergence, one can estimate the model accurately even under the existence of noisy labels within data. Accordingly, we develop query selecting measures for pool-based active learning using these divergences. In addition, we propose an evaluation scheme for these measures based on asymptotic statistical analyses, which enables us to perform active learning by evaluating an estimation error directly. Experiments with benchmark datasets and real-world image datasets show that our active learning scheme performs better than several baseline methods. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Aquifer Recharge Estimation In Unsaturated Porous Rock Using Darcian And Geophysical Methods.

    NASA Astrophysics Data System (ADS)

    Nimmo, J. R.; De Carlo, L.; Masciale, R.; Turturro, A. C.; Perkins, K. S.; Caputo, M. C.

    2016-12-01

    Within the unsaturated zone a constant downward gravity-driven flux of water commonly exists at depths ranging from a few meters to tens of meters depending on climate, medium, and vegetation. In this case a steady-state application of Darcy's law can provide recharge rate estimates. We have applied an integrated approach that combines field geophysical measurements with laboratory hydraulic property measurements on core samples to produce accurate estimates of steady-state aquifer recharge, or, in cases where episodic recharge also occurs, the steady component of recharge. The method requires (1) measurement of the water content existing in the deep unsaturated zone at the location of a core sample retrieved for lab measurements, and (2) measurement of the core sample's unsaturated hydraulic conductivity over a range of water content that includes the value measured in situ. Both types of measurements must be done with high accuracy. Darcy's law applied with the measured unsaturated hydraulic conductivity and gravitational driving force provides recharge estimates. Aquifer recharge was estimated using Darcian and geophysical methods at a deep porous rock (calcarenite) experimental site in Canosa, southern Italy. Electrical Resistivity Tomography (ERT) and Vertical Electrical Sounding (VES) profiles were collected from the land surface to water table to provide data for Darcian recharge estimation. Volumetric water content was estimated from resistivity profiles using a laboratory-derived calibration function based on Archie's law for rock samples from the experimental site, where electrical conductivity of the rock was related to the porosity and water saturation. Multiple-depth core samples were evaluated using the Quasi-Steady Centrifuge (QSC) method to obtain hydraulic conductivity (K), matric potential (ψ), and water content (θ) estimates within this profile. Laboratory-determined unsaturated hydraulic conductivity ranged from 3.90 x 10-9 to 1.02 x 10-5 m/s over a volumetric water content range from 0.1938 to 0.4311 m3/m3. Using these measured properties, the water content estimated from geophysical measurements has been used to identify the unsaturated hydraulic conductivity indicative of the steady component of the aquifer recharge rate at Canosa.
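
    A minimal sketch of the steady-state Darcian estimate: under a unit downward gravity gradient, the recharge rate equals the unsaturated hydraulic conductivity at the field water content. The endpoint K values echo the range quoted above, but the intermediate K(θ) pairs and the field water content are illustrative assumptions.

    ```python
    import numpy as np

    theta_lab = np.array([0.19, 0.26, 0.33, 0.43])        # volumetric water content, m3/m3
    k_lab = np.array([3.9e-9, 1.1e-7, 6.0e-7, 1.0e-5])    # unsaturated K, m/s (illustrative)

    theta_field = 0.22        # water content inferred from the resistivity profile
    # interpolate in log space because K spans several orders of magnitude
    k_field = 10.0 ** np.interp(theta_field, theta_lab, np.log10(k_lab))
    print(f"steady recharge ~ {k_field:.1e} m/s ~ {k_field * 3.156e7 * 1e3:.0f} mm/yr")
    ```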

  17. An Orientation Measurement Method Based on Hall-effect Sensors for Permanent Magnet Spherical Actuators with 3D Magnet Array

    PubMed Central

    Yan, Liang; Zhu, Bo; Jiao, Zongxia; Chen, Chin-Yin; Chen, I-Ming

    2014-01-01

    An orientation measurement method based on Hall-effect sensors is proposed for permanent magnet (PM) spherical actuators with a three-dimensional (3D) magnet array. As there is no contact between the measurement system and the rotor, this method could effectively avoid the friction torque and additional inertial moment existing in conventional approaches. A curved surface fitting method based on exponential approximation is proposed to formulate the magnetic field distribution in 3D space. The comparison with the conventional modeling method shows that it helps to improve the model accuracy. The Hall-effect sensors are distributed around the rotor with PM poles to detect the flux density at different points, and thus the rotor orientation can be computed from the measured results and analytical models. Experiments have been conducted on the developed research prototype of the spherical actuator to validate the accuracy of the analytical equations relating the rotor orientation and the value of magnetic flux density. The experimental results show that the proposed method can measure the rotor orientation precisely, and the measurement accuracy could be improved by the novel 3D magnet array. The study result could be used for real-time motion control of PM spherical actuators. PMID:25342000

  18. Interpretation of scanning electron microscope measurements of minority carrier diffusion lengths in semiconductors

    NASA Technical Reports Server (NTRS)

    Flat, A.; Milnes, A. G.

    1978-01-01

    In scanning electron microscope (SEM) injection measurements of minority carrier diffusion lengths some uncertainties of interpretation exist when the response current is nonlinear with distance. This is significant in epitaxial layers where the layer thickness is not large in relation to the diffusion length, and where there are large surface recombination velocities on the incident and contact surfaces. An image method of analysis is presented for such specimens. A method of using the results to correct the observed response in a simple convenient way is presented. The technique is illustrated with reference to measurements in epitaxial layers of GaAs. Average beam penetration depth may also be estimated from the curve shape.

  19. Study on the system-level test method of digital metering in smart substation

    NASA Astrophysics Data System (ADS)

    Zhang, Xiang; Yang, Min; Hu, Juan; Li, Fuchao; Luo, Ruixi; Li, Jinsong; Ai, Bing

    2017-03-01

    Existing test methods for the digital metering system in a smart substation are used to test and evaluate the performance of a single device. However, these methods can only guarantee the accuracy and reliability of a single device's measurement results in a single run; they do not reflect the performance of the devices operating together as a complete system. This paper describes the shortcomings of the existing test methods, proposes a system-level test method for digital metering in smart substations, and demonstrates the feasibility of the method through an actual test.

  20. Research progress on expansive soil cracks under changing environment.

    PubMed

    Shi, Bei-xiao; Zheng, Cheng-feng; Wu, Jin-kun

    2014-01-01

    Engineering problems previously set aside are gradually surfacing as human activity reshapes the natural world, and cracking of expansive soil under a changing environment has become a controlling factor in expansive soil slope stability. The problem of expansive soil cracking has gradually become a research hotspot. This paper elaborates on the occurrence and development of cracks starting from the basic properties of expansive soil and points out the role cracks play in controlling the strength of expansive soil. We summarize the existing research methods and results on the characteristics of expansive soil cracks. Improving crack measurement and calculation methods, and researching crack depth measurement, statistical analysis methods, and the relationship between crack depth and surface features, will be the future directions of this work.

  1. Window-based method for approximating the Hausdorff in three-dimensional range imagery

    DOEpatents

    Koch, Mark W [Albuquerque, NM

    2009-06-02

    One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
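
    For orientation, the sketch below computes the exact symmetric Hausdorff distance between two 3D point sets with SciPy; the patented window-based method approximates this metric more cheaply, which the sketch does not attempt. The point sets are synthetic.

    ```python
    import numpy as np
    from scipy.spatial.distance import directed_hausdorff

    rng = np.random.default_rng(1)
    template = rng.random((500, 3))                           # 3D model points
    probe = template + rng.normal(0.0, 0.01, template.shape)  # noisy observed points

    # symmetric Hausdorff distance: the larger of the two directed distances
    d = max(directed_hausdorff(template, probe)[0],
            directed_hausdorff(probe, template)[0])
    print(f"Hausdorff distance = {d:.4f}")
    ```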

  2. Improved non-invasive method for aerosol particle charge measurement employing in-line digital holography

    NASA Astrophysics Data System (ADS)

    Tripathi, Anjan Kumar

    Electrically charged particles are found in a wide range of applications, from electrostatic powder coating, mineral processing, and powder handling to rain-producing cloud formation in atmospheric turbulent flows. In turbulent flows, particle dynamics is influenced by the electric force arising from particle charge generation. Quantifying particle charges in such systems will help in better predicting and controlling particle clustering, relative motion, collision, and growth. However, there is a lack of noninvasive techniques to measure particle charges. Recently, a non-invasive method for particle charge measurement using an in-line Digital Holographic Particle Tracking Velocimetry (DHPTV) technique was developed in our lab, in which the charged particles to be measured were introduced into a uniform electric field, and their movement toward the oppositely charged electrode was deemed proportional to the amount of charge on the particles (Fan Yang, 2014 [1]). However, the inherent speckle noise associated with the reconstructed images was not adequately removed, so the particle tracking data were contaminated. Furthermore, the particle charge calculation based on particle deflection velocity neglected the particle drag force and the rebound effect of highly charged particles from the electrodes. We improved upon the existing particle charge measurement method by (1) post-processing the holograms, (2) taking the drag force into account in the charge calculation, and (3) considering the rebound effect. The improved method was first fine-tuned through a calibration experiment. The complete method was then applied to two further experiments, namely conduction charging and an enclosed fan-driven turbulence chamber, to measure particle charges. In all three experiments, the particle charge was found to follow a non-central t location-scale family of distributions. It was also noted that the charge distribution was insensitive to changes in the voltage applied between the electrodes. The range of applied voltage over which reliable particle charges can be measured was also quantified by taking into account the rebound effect of highly charged particles. Finally, in the enclosed chamber experiment, it was found that using a carbon conductive coating on the inner walls of the chamber minimized charge generation inside the chamber when glass bubble particles were used. The electric charges obtained in the calibration experiment with the improved method were of the same order as reported in existing work (Y.C. Ahn et al. 2004 [2]), indicating that the method is indeed effective.
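
    A minimal sketch of the drag-corrected charge estimate implied above: at terminal deflection velocity, the electric force qE balances Stokes drag, giving q = 3πμdv/E. All numbers are illustrative, not values from the dissertation.

    ```python
    import math

    mu = 1.8e-5   # dynamic viscosity of air, Pa s
    d = 50e-6     # particle diameter, m
    v = 0.02      # measured terminal deflection velocity, m/s
    E = 1.0e5     # applied uniform electric field, V/m

    q = 3.0 * math.pi * mu * d * v / E            # from qE = 3*pi*mu*d*v (Stokes drag)
    print(f"charge ~ {q:.2e} C ~ {q / 1.602e-19:.2e} elementary charges")
    ```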

  3. Multi-attribute integrated measurement of node importance in complex networks.

    PubMed

    Wang, Shibo; Zhao, Jinlou

    2015-11-01

    Measuring node importance in complex networks is central to research on network stability and robustness and helps ensure the security of the whole network. Most researchers have used a single indicator to measure node importance, so the results reflect only certain aspects of the network and lose information. Moreover, because network topologies differ, node importance should be described in combination with the character of the network topology. Most existing evaluation algorithms cannot completely reflect the circumstances of complex networks, so this paper proposes an integrated measurement method that takes into account degree centrality, relative closeness centrality, the clustering coefficient, and topology potential. This method reflects both the internal and external attributes of nodes and eliminates the influence of network structure on node importance. Experiments on the karate and dolphin networks show that the integrated topology-based measure yields a smaller range of results than a single indicator and is more universal. Attack experiments on the North American power grid and the Internet show that the method converges faster than other methods.
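
    In the spirit of the integrated measure (though not the authors' exact formulation, with equal weights assumed and topology potential omitted for brevity), the sketch below combines several normalized indicators per node:

    ```python
    import networkx as nx
    import numpy as np

    G = nx.karate_club_graph()
    indicators = [nx.degree_centrality(G),      # degree centrality
                  nx.closeness_centrality(G),   # closeness centrality
                  nx.clustering(G)]             # local clustering coefficient

    def normalized(d):
        # min-max normalize an indicator dict over a fixed node order
        v = np.array([d[n] for n in G.nodes()])
        span = v.max() - v.min()
        return (v - v.min()) / span if span > 0 else v

    score = sum(normalized(d) for d in indicators) / len(indicators)
    top = sorted(zip(G.nodes(), score), key=lambda t: -t[1])[:5]
    print("five most important nodes:", top)
    ```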

  4. Cantilever spring constant calibration using laser Doppler vibrometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ohler, Benjamin

    2007-06-15

    Uncertainty in cantilever spring constants is a critical issue in atomic force microscopy (AFM) force measurements. Though numerous methods exist for calibrating cantilever spring constants, the accuracy of these methods can be limited by both the physical models themselves as well as uncertainties in their experimental implementation. Here we report the results from two of the most common calibration methods, the thermal tune method and the Sader method. These were implemented on a standard AFM system as well as using laser Doppler vibrometry (LDV). Using LDV eliminates some uncertainties associated with optical lever detection on an AFM. It also offers considerably higher signal-to-noise deflection measurements. We find that AFM and LDV result in similar uncertainty in the calibrated spring constants, about 5%, using either the thermal tune or Sader methods, provided that certain limitations of the methods and instrumentation are observed.
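
    As a reference point for the thermal tune method, the sketch below applies the equipartition relation k = kB·T / ⟨x²⟩ to synthetic deflection data; real implementations fit the thermal noise spectrum and apply mode-shape correction factors, both omitted here.

    ```python
    import numpy as np

    kB, T = 1.380649e-23, 295.0          # Boltzmann constant (J/K), temperature (K)
    rng = np.random.default_rng(2)
    x = rng.normal(0.0, 6e-11, 100_000)  # synthetic thermal deflection record, m

    k = kB * T / np.var(x)               # equipartition: (1/2) k <x^2> = (1/2) kB T
    print(f"spring constant ~ {k:.2f} N/m")
    ```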

  5. A method to evaluate process performance by integrating time and resources

    NASA Astrophysics Data System (ADS)

    Wang, Yu; Wei, Qingjie; Jin, Shuang

    2017-06-01

    The purpose of process mining is to improve an enterprise's existing processes, so measuring process performance is particularly important. However, current research on performance evaluation methods is still insufficient: the main evaluation methods use either time or resources alone, and these basic statistics cannot evaluate process performance well. In this paper, a method for evaluating process performance based on both the time dimension and the resource dimension is proposed. The method can be used to measure the utilization and redundancy of resources in a process. This paper introduces the design principle and formula of the evaluation algorithm, then describes the design and implementation of the evaluation method. Finally, we use the method to analyse an event log from a telephone maintenance process and propose an optimization plan.
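
    A minimal sketch of the time-plus-resource idea on a toy event log: utilization of each resource as busy time over the observed span, with the remainder indicating redundancy. The column names follow common event-log conventions and are assumptions, not the paper's formula.

    ```python
    import pandas as pd

    log = pd.DataFrame({
        "resource": ["r1", "r1", "r2"],
        "start": pd.to_datetime(["2017-06-01 09:00", "2017-06-01 11:00",
                                 "2017-06-01 09:30"]),
        "end":   pd.to_datetime(["2017-06-01 10:30", "2017-06-01 12:00",
                                 "2017-06-01 12:30"]),
    })
    span = (log["end"].max() - log["start"].min()).total_seconds()
    busy = (log["end"] - log["start"]).dt.total_seconds().groupby(log["resource"]).sum()
    print((busy / span).round(2))   # utilization per resource; the rest is redundancy
    ```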

  6. [Review of research design and statistical methods in Chinese Journal of Cardiology].

    PubMed

    Zhang, Li-jun; Yu, Jin-ming

    2009-07-01

    To evaluate the research design and the use of statistical methods in the Chinese Journal of Cardiology, we reviewed the research design and statistical methods in all of the original papers published in the journal from December 2007 to November 2008. The most frequently used research designs were cross-sectional designs (34%), prospective designs (21%), and experimental designs (25%). Of all the articles, 49 (25%) used wrong statistical methods, 29 (15%) lacked some needed statistical analysis, and 23 (12%) contained inconsistencies in the description of methods. There were significant differences between different statistical methods (P < 0.001). The rate of correct use of multifactor analysis was low, and repeated measures data were not analyzed with repeated measures methods. Many problems exist in the Chinese Journal of Cardiology; better research design and correct use of statistical methods are still needed, and stricter review by statisticians and epidemiologists is also required to improve the quality of the literature.

  7. Reserves in load capacity assessment of existing bridges

    NASA Astrophysics Data System (ADS)

    Žitný, Jan; Ryjáček, Pavel

    2017-09-01

    A high percentage of all railway bridges in the Czech Republic is made of structural steel. The majority of these bridges were designed according to historical codes and, given their deterioration, must be assessed to determine whether they satisfy the needs of modern railway traffic. The load capacity assessment of existing bridges according to the Eurocodes is, however, often too conservative; in particular, braking and acceleration forces cause huge problems for structural elements of the bridge superstructure. The aim of this paper is to review the different approaches to determining braking and acceleration forces. Both current and historical theoretical models as well as in-situ measurements are considered. A survey of several local European national standards that take precedence over the Eurocode for the assessment of existing railway bridges shows the great diversity of local approaches and the conservativeness of the Eurocode. This paper should also serve as an overview for designers dealing with load capacity assessment, revealing the reserves available for existing bridges. Based on these different approaches, theoretical models, and data obtained from measurements, a method for determining braking and acceleration forces on the basis of real traffic data should be proposed.

  8. Spectral Discrete Probability Density Function of Measured Wind Turbine Noise in the Far Field

    PubMed Central

    Ashtiani, Payam; Denison, Adelaide

    2015-01-01

    Of interest is the spectral character of wind turbine noise at typical residential set-back distances. In this paper, a spectral statistical analysis has been applied to immission measurements conducted at three locations. This method provides discrete probability density functions for the Turbine ONLY component of the measured noise. This analysis is completed for one-third octave sound levels, at integer wind speeds, and is compared to existing metrics for measuring acoustic comfort as well as previous discussions on low-frequency noise sources. PMID:25905097

  9. Comparison and harmonization of measuring methods for air contaminants in the working environment.

    PubMed

    Leichnitz, K

    1998-09-01

    The objective of this work was to demonstrate that the measurement of air contaminants in the workplace requires a special approach. Decisive in carrying out the measuring task is the quality of the sampling strategy, including selection of the appropriate measuring method; methods developed at a national level may be more suitable for this purpose than methods described in international standards. Measurements of air contaminants in the workplace should always be the basis for the prevention and control of occupational hazards; such measurements, therefore, are also an essential element of risk assessment. Industrial processes and chemical agents are myriad. Each manufacturing stage may involve different conditions (e.g., batch production or continuous process, temperature, pressure) and agents (e.g., a wide variety of chemical substances). In each of these stages, different job functions may be necessary and may be subject to different exposure conditions. Distance from emission sources and physical parameters, such as rates of release, air currents, and meteorological variations, also have a profound influence. The measuring task in the workplace is quite different from many others (e.g., blood or soil sample analysis). Firstly, the selection of sampling time and sampling location are crucial steps in air analysis. Transportation and storage of the samples may, however, also influence measuring results; interlaboratory tests reveal the existing problems. Generally, in analytics, the substance to be determined remains "well covered" in its matrix during sampling, transportation and storage. In air analysis, however, the contaminant is usually "torn" from its surrounding matrix (the air) and "forced" into the sorbent, where it finds a completely new environment; reactions yielding artefacts may take place. Several international organizations have issued guidelines and standards on measuring methods for air contaminants in the working environment, such as the World Health Organization (WHO), the International Union of Pure and Applied Chemistry (IUPAC), and the International Organization for Standardization (ISO). Most of these international documents are substance-related and mainly cover the analytical steps, which constitute only part of the whole measuring process. The approach of the Commission of the European Union is useful in solving the task of air testing in the workplace. This body has issued an EU Directive which includes general requirements for measuring methods and states that persons who carry out measurements must possess the necessary expertise. The Directive, in addition, refers to the European Committee for Standardization (CEN) and thereby to general requirements for measuring procedures. The advantage of the EU/CEN approach is its focus on general requirements, which allows the development of new or improved methods without any restricting effect from existing substance-related standards.

  10. Improved mapping of radio sources from VLBI data by least-square fit

    NASA Technical Reports Server (NTRS)

    Rodemich, E. R.

    1985-01-01

    A method is described for producing improved maps of radio sources from Very Long Baseline Interferometry (VLBI) data. The method is more direct than existing Fourier methods, is often more accurate, and runs at least as fast. The visibility data are modeled here, as in existing methods, as a function of the unknown brightness distribution and the unknown antenna gains and phases. These unknowns are chosen so that the resulting function values are as near as possible to the observed values. Using the deviation between modeled and observed values to measure the closeness of this fit leads to the problem of minimizing a certain function of all the unknown parameters. This minimization problem cannot be solved directly, but it can be attacked by iterative methods, which we show converge automatically to the minimum with no user intervention. The resulting brightness distribution furnishes the best fit to the data among all brightness distributions of the given resolution.

  11. Advancing methods for research on household water insecurity: Studying entitlements and capabilities, socio-cultural dynamics, and political processes, institutions and governance.

    PubMed

    Wutich, Amber; Budds, Jessica; Eichelberger, Laura; Geere, Jo; Harris, Leila; Horney, Jennifer; Jepson, Wendy; Norman, Emma; O'Reilly, Kathleen; Pearson, Amber; Shah, Sameer; Shinn, Jamie; Simpson, Karen; Staddon, Chad; Stoler, Justin; Teodoro, Manuel P; Young, Sera

    2017-11-01

    Household water insecurity has serious implications for the health, livelihoods and wellbeing of people around the world. Existing methods to assess the state of household water insecurity focus largely on water quality, quantity or adequacy, source or reliability, and affordability. These methods have significant advantages in terms of their simplicity and comparability, but are widely recognized to oversimplify and underestimate the global burden of household water insecurity. In contrast, a broader definition of household water insecurity should include entitlements and human capabilities, sociocultural dynamics, and political institutions and processes. This paper proposes a mix of qualitative and quantitative methods that can be widely adopted across cultural, geographic, and demographic contexts to assess hard-to-measure dimensions of household water insecurity. In doing so, it critically evaluates existing methods for assessing household water insecurity and suggests ways in which methodological innovations advance a broader definition of household water insecurity.

  12. Orientation of doubly rotated quartz plates.

    PubMed

    Sherman, J R

    1989-01-01

    A derivation, from classical spherical trigonometry, of equations to compute the orientation of doubly rotated quartz blanks from Bragg X-ray data is discussed. Such equations are usually derived by compact and efficient vector methods, which are reviewed briefly, and are solved by generating a quadratic equation with numerical coefficients. Two methods exist for performing the computation from measurements against two planes: a direct solution of a quadratic equation and a process of convergent iteration; both have a spurious solution. Measurement against three lattice planes yields a set of three linear equations whose solution is unambiguous.

  13. Community Air Sensor Network (CAIRSENSE) Project: Lower Cost, Continuous Ambient Monitoring Methods

    EPA Science Inventory

    Advances in air pollution sensor technology have enabled the development of small and low cost systems to measure outdoor air pollution. The deployment of numerous sensors across a small geographic area would have potential benefits to supplement existing monitoring networks and ...

  14. 30 CFR 285.633 - How do I comply with my COP?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 285.633 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, REGULATION, AND ENFORCEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER... effective, then you must make recommendations for new mitigation measures or monitoring methods. (c) As...

  15. TESTING OF INDOOR RADON REDUCTION TECHNIQUES IN 19 MARYLAND HOUSES

    EPA Science Inventory

    The report gives results of testing of indoor radon reduction techniques in 19 existing houses in Maryland. The focus was on passive measures: various passive soil depressurization methods, where natural wind and temperature effects are utilized to develop suction in the system; ...

  16. SENSOR FOR MONITORING OF PARTICULATE EMISSIONS IN DIESEL EXHAUST GASES - PHASE I

    EPA Science Inventory

    Active Spectrum, Inc., proposes a novel, low-cost soot sensor for on-board measurement of soot emissions in diesel exhaust gases. The proposed technology is differentiated from existing methods by excellent sensitivity, high specificity to carbon particulates, and robustness ...

  17. Training of U.S. Air Traffic Controllers. (IDA Report No. R-206).

    ERIC Educational Resources Information Center

    Henry, James H.; And Others

    The report reviews the evolution of existing national programs for air traffic controller training, estimates the number of persons requiring developmental and supplementary training, examines present controller selection and training programs, investigates performance measurement methods, considers standardization and quality control, discusses…

  18. Utility-preserving anonymization for health data publishing.

    PubMed

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

    Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because such records usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so as to satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have therefore been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies full-domain generalization. We evaluate our method against an existing method on two aspects: information loss, measured through various quality metrics, and the error rate of analysis results. For all the quality metrics considered, our proposed method shows lower information loss than the existing method. In an analysis of real-world EHRs, the results show only a small error between the data anonymized by the proposed method and the original data. In summary, we propose a new utility-preserving anonymization method and an anonymization algorithm using it, and we show through experiments on various datasets that the utility of EHRs anonymized by the proposed method is significantly better than that of EHRs anonymized by previous approaches.

  19. Patient, staff and physician satisfaction: a new model, instrument and their implications.

    PubMed

    York, Anne S; McCarthy, Kim A

    2011-01-01

    Customer satisfaction's importance is well-documented in the marketing literature and is rapidly gaining wide acceptance in the healthcare industry. The purpose of this paper is to introduce a new customer-satisfaction measuring method - Reichheld's ultimate question - and compare it with traditional techniques using data gathered from four healthcare clinics. A new survey method, called the ultimate question, was used to collect patient satisfaction data. It was subsequently compared with the data collected via an existing method. Findings suggest that the ultimate question provides similar ratings to existing models at lower costs. A relatively small sample size may affect the generalizability of the results; it is also possible that potential spill-over effects exist owing to two patient satisfaction surveys administered at the same time. This new ultimate question method greatly improves the process and ease with which hospital or clinic administrators are able to collect patient (as well as staff and physician) satisfaction data in healthcare settings. Also, the feedback gained from this method is actionable and can be used to make strategic improvements that will impact business and ultimately increase profitability. The paper's real value is pinpointing specific quality improvement areas based not just on patient ratings but also physician and staff satisfaction, which often underlie patients' clinical experiences.

  20. Uncertainties have a meaning: Information entropy as a quality measure for 3-D geological models

    NASA Astrophysics Data System (ADS)

    Wellmann, J. Florian; Regenauer-Lieb, Klaus

    2012-03-01

    Analyzing, visualizing and communicating uncertainties are important issues, as geological models can never be fully determined. To date, no general approach exists to quantify uncertainties in geological modeling. We propose here to use information entropy as an objective measure to compare and evaluate model and observational results. Information entropy was introduced in the 1950s and assigns a scalar value to every location in the model, quantifying its predictability. We show that this method not only provides quantitative insight into model uncertainties but, owing to the underlying concept of information entropy, can also be related to questions of data integration (i.e., how model quality is interconnected with the input data used) and model evolution (i.e., whether new data, or a changed geological hypothesis, improves the model). In other words, information entropy is a powerful measure for data assimilation and inversion. As a first test of feasibility, we present the application of the new method to the visualization of uncertainties in geological models, here understood as structural representations of the subsurface. Applying the concept of information entropy to a suite of simulated models, we can clearly identify (a) uncertain regions within the model, even for complex geometries; (b) the overall uncertainty of a geological unit, which is, for example, of great relevance in any type of resource estimation; and (c) a mean entropy for the whole model, important for tracking model changes with one overall measure. These results cannot easily be obtained with existing standard methods. They suggest that information entropy is a powerful method for visualizing uncertainties in geological models and for quantitatively classifying the indefiniteness of single units and the mean entropy of a model. Given the relationship of this measure to missing information, we expect the method to have great potential in many types of geoscientific data assimilation problems, beyond pure visualization.
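
    The following sketch computes cell-wise information entropy, H = -sum_u p_u log p_u, over a suite of simulated model realizations; the unit labels and grid are randomly generated stand-ins, not data from the paper:

      import numpy as np

      rng = np.random.default_rng(1)
      # 50 simulated realizations of a 20 x 30 model grid; each cell holds
      # one of three hypothetical geological unit labels (0, 1, 2).
      models = rng.integers(0, 3, size=(50, 20, 30))

      n_units = 3
      # Probability of each unit at each cell, estimated across the suite.
      probs = np.stack([(models == u).mean(axis=0) for u in range(n_units)])

      # Information entropy per cell, with the convention 0 * log(0) = 0.
      with np.errstate(divide="ignore", invalid="ignore"):
          terms = np.where(probs > 0, probs * np.log(probs), 0.0)
      entropy = -terms.sum(axis=0)

      print("mean model entropy:", entropy.mean())   # one overall measure
      print("most uncertain cell:", np.unravel_index(entropy.argmax(), entropy.shape))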

  1. COSMOS: accurate detection of somatic structural variations through asymmetric comparison between tumor and normal samples.

    PubMed

    Yamagata, Koichi; Yamanishi, Ayako; Kokubu, Chikara; Takeda, Junji; Sese, Jun

    2016-05-05

    An important challenge in cancer genomics is precise detection of structural variations (SVs) by high-throughput short-read sequencing, which is hampered by the high false discovery rates of existing analysis tools. Here, we propose an accurate SV detection method named COSMOS, which compares the statistics of the mapped read pairs in tumor samples with isogenic normal control samples in a distinct asymmetric manner. COSMOS also prioritizes the candidate SVs using strand-specific read-depth information. Performance tests on modeled tumor genomes revealed that COSMOS outperformed existing methods in terms of F-measure. We also applied COSMOS to an experimental mouse cell-based model, in which SVs were induced by genome engineering and gamma-ray irradiation, followed by polymerase chain reaction-based confirmation. The precision of COSMOS was 84.5%, while the next best existing method was 70.4%. Moreover, the sensitivity of COSMOS was the highest, indicating that COSMOS has great potential for cancer genome analysis. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.

  2. Liquefaction assessment based on combined use of CPT and shear wave velocity measurements

    NASA Astrophysics Data System (ADS)

    Bán, Zoltán; Mahler, András; Győri, Erzsébet

    2017-04-01

    Soil liquefaction is one of the most devastating secondary effects of earthquakes and can cause significant damage to built infrastructure. For this reason, liquefaction hazard shall be considered in all regions where moderate-to-high seismic activity coincides with saturated, loose, granular soil deposits. Several approaches exist for taking this hazard into account, of which the in-situ test based empirical methods are the most commonly used in practice. These methods are generally based on the results of CPT, SPT or shear wave velocity (VS) measurements. In more complex or high-risk projects, CPT and VS measurements are often performed at the same location, commonly in the form of seismic CPT; furthermore, a VS profile determined by surface wave methods can supplement a standard CPT measurement. However, the combined use of both in-situ indices in one single empirical method is limited. For this reason, the goal of this research was to develop such an empirical method within the framework of simplified empirical procedures, in which the results of CPT and VS measurements are used in parallel and supplement each other. Combining two in-situ indices, a small-strain property measurement with a large-strain measurement, can reduce the uncertainty of empirical methods. In the first step, by carefully reviewing the existing liquefaction case-history databases, sites were selected where records of both CPT and VS measurements are available. After implementing the necessary corrections for fines content, overburden pressure and magnitude on the 98 gathered case histories, a logistic regression was performed to obtain probability contours of liquefaction occurrence. Logistic regression is often used to explore the relationship between a binary response and a set of explanatory variables. The occurrence or absence of liquefaction can be considered a binary outcome, and the equivalent clean-sand value of the normalized, overburden-corrected cone tip resistance (qc1Ncs), the overburden-corrected shear wave velocity (VS1), and the magnitude- and effective-stress-corrected cyclic stress ratio (CSR at M = 7.5, σv' = 1 atm) were considered as input variables. In this case, the graphical representation of the cyclic resistance ratio curve for a given probability is replaced by a surface that separates the liquefaction and non-liquefaction cases.
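
    Purely to illustrate the regression setup, the sketch below fits a logistic model for liquefaction probability on synthetic, invented values of qc1Ncs, VS1 and CSR; it is not the paper's database, variable transformations, or fitted model:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(7)
      n = 98
      # Synthetic stand-ins for the three explanatory variables.
      qc1Ncs = rng.uniform(40, 220, n)       # corrected cone tip resistance
      vs1 = rng.uniform(120, 300, n)         # corrected shear wave velocity, m/s
      csr = rng.uniform(0.05, 0.5, n)        # corrected cyclic stress ratio

      # Invented ground truth: high demand (CSR) plus low resistance liquefies.
      logit = 8.0 * csr - 0.02 * qc1Ncs - 0.01 * vs1 + 2.0
      liquefied = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

      X = np.column_stack([qc1Ncs, vs1, np.log(csr)])
      model = LogisticRegression().fit(X, liquefied)

      # Probability of liquefaction for one new (hypothetical) site.
      site = np.array([[110.0, 180.0, np.log(0.25)]])
      print("P(liquefaction) =", model.predict_proba(site)[0, 1])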

  3. A calibration-free electrode compensation method

    PubMed Central

    Rossant, Cyrille; Fontaine, Bertrand; Magnusson, Anna K.

    2012-01-01

    In a single-electrode current-clamp recording, the measured potential includes both the response of the membrane and that of the measuring electrode. The electrode response is traditionally removed using bridge balance, where the response of an ideal resistor representing the electrode is subtracted from the measurement. Because the electrode is not an ideal resistor, this procedure produces capacitive transients in response to fast or discontinuous currents. More sophisticated methods exist, but they all require a preliminary calibration phase, to estimate the properties of the electrode. If these properties change after calibration, the measurements are corrupted. We propose a compensation method that does not require preliminary calibration. Measurements are compensated offline by fitting a model of the neuron and electrode to the trace and subtracting the predicted electrode response. The error criterion is designed to avoid the distortion of compensated traces by spikes. The technique allows electrode properties to be tracked over time and can be extended to arbitrary models of electrode and neuron. We demonstrate the method using biophysical models and whole cell recordings in cortical and brain-stem neurons. PMID:22896724

  4. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    NASA Astrophysics Data System (ADS)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time-delay estimation method is developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step-size control function is proposed, and a cubic spline fitting approach is employed to overcome the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis; the results show the performance advantage of the proposed method over existing TOF determination methods. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, making it more suitable for TOF determination. Ultrasonic thickness measurements were then performed on aluminum alloy plates of various thicknesses; they indicate that the proposed method is robust even under low SNR conditions and that ultrasonic thickness measurement accuracy can be significantly improved.
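
    The paper's adaptive algorithm is not reproduced here; as a simpler baseline for the same problem, the sketch below estimates a sub-sample TOF by cross-correlating a noisy echo with the reference pulse and refining the peak with parabolic interpolation (a common alternative to spline fitting); all signal parameters are invented:

      import numpy as np

      fs = 100e6                               # assumed sampling rate, 100 MHz
      t = np.arange(2048) / fs
      pulse = np.exp(-((t - 2e-6) / 2e-7) ** 2) * np.sin(2 * np.pi * 5e6 * t)

      true_delay = 123.4 / fs                  # 123.4 samples, deliberately non-integer
      echo = np.interp(t - true_delay, t, pulse, left=0.0)
      rng = np.random.default_rng(0)
      echo += rng.normal(0, 0.05, echo.size)   # additive noise to lower the SNR

      xc = np.correlate(echo, pulse, mode="full")
      k = int(np.argmax(xc))
      # Parabolic (3-point) interpolation around the correlation peak.
      y0, y1, y2 = xc[k - 1], xc[k], xc[k + 1]
      frac = 0.5 * (y0 - y2) / (y0 - 2 * y1 + y2)
      delay_samples = (k - (pulse.size - 1)) + frac
      print("estimated delay:", delay_samples, "samples (true value 123.4)")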

  5. Laser tracker orientation in confined space using on-board targets

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui

    2016-08-01

    This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.

  6. Hyperspectral laser-induced autofluorescence imaging of dental caries

    NASA Astrophysics Data System (ADS)

    Bürmen, Miran; Fidler, Aleš; Pernuš, Franjo; Likar, Boštjan

    2012-01-01

    Dental caries is a disease characterized by demineralization of enamel crystals leading to the penetration of bacteria into the dentine and pulp. Early detection of enamel demineralization resulting in increased enamel porosity, commonly known as white spots, is a difficult diagnostic task. Laser-induced autofluorescence has been shown to be a useful method for early detection of demineralization. Existing studies involved either single-point spectroscopic measurements or imaging at a single spectral band. In the case of spectroscopic measurements, very little or no spatial information is acquired, and the measured autofluorescence signal depends strongly on the position and orientation of the probe. On the other hand, single-band spectral imaging can be substantially affected by local spectral artefacts. Such effects can significantly interfere with automated methods for the detection of early caries lesions. In contrast, hyperspectral imaging effectively combines the spatial information of imaging methods with the spectral information of spectroscopic methods, providing an excellent basis for the development of robust and reliable algorithms for automated classification and analysis of hard dental tissues. In this paper, we employ 405 nm laser excitation of natural caries lesions. The fluorescence signal is acquired by a state-of-the-art hyperspectral imaging system consisting of a high-resolution acousto-optic tunable filter (AOTF) and a highly sensitive scientific CMOS camera in the spectral range from 550 nm to 800 nm. The results are compared to the contrast obtained by the near-infrared hyperspectral imaging technique employed in existing studies on early detection of dental caries.

  7. Management of fluid mud in estuaries, bays, and lakes. II: Measurement, modeling, and management

    USGS Publications Warehouse

    McAnally, W.H.; Teeter, A.; Schoellhamer, David H.; Friedrichs, C.; Hamilton, D.; Hayter, E.; Shrestha, P.; Rodriguez, H.; Sheremet, A.; Kirby, R.

    2007-01-01

    Techniques for measurement, modeling, and management of fluid mud are available, but research is needed to improve them. Fluid mud can be difficult to detect, measure, or sample, which has led to new instruments and new ways of using existing instruments. Multifrequency acoustic fathometers sense neither density nor viscosity and are therefore unreliable for measuring fluid mud. Nuclear density probes, towed sleds, seismic methods, and drop probes equipped with density meters offer the potential for accurate measurements. Numerical modeling of fluid mud requires solving governing equations for flow velocity, density, pressure, and salinity, plus the water surface and sediment submodels. A number of such models exist in one-, two-, and three-dimensional form, but they rely on empirical relationships that require substantial site-specific validation against observations. Fluid mud management techniques can be classified as those that accomplish source control, formation control, and removal; nautical depth, a fourth category, defines the channel bottom as a specific fluid mud density, or an alternative parameter, judged safe for navigation. Source control includes watershed management measures to keep fine sediment out of waterways and in-water measures such as structures and traps. Formation control methods include streamlined channels and structures, other measures to reduce flocculation, and structures that train currents. Removal methods include the traditional dredging and transport of dredged material plus agitation that contributes to formation control and/or nautical depth. Conditioning of fluid mud by dredging and aerating offers the possibility of improved navigability. Two examples, the Atchafalaya Bar Channel and Savannah Harbor, illustrate the use of measurements and management of fluid mud.

  8. Load Model Verification, Validation and Calibration Framework by Statistical Analysis on Field Data

    NASA Astrophysics Data System (ADS)

    Jiao, Xiangqing; Liao, Yuan; Nguyen, Thai

    2017-11-01

    Accurate load models are critical for power system analysis and operation, and a large amount of research work has been done on load modeling. Most of the existing research focuses on developing load models, while little has been done on formal load model verification and validation (V&V) methodologies or procedures. Most existing load model validation is based on qualitative rather than quantitative analysis, and not all aspects of the model V&V problem have been addressed by existing approaches. To complement the existing methods, this paper proposes a novel load model verification and validation framework that can systematically and more comprehensively examine a load model's effectiveness and accuracy. Statistical analysis, instead of visual checks, quantifies the load model's accuracy and provides a confidence level of the developed load model for model users. The analysis results can also be used to calibrate load models. The proposed framework can be used as guidance for utility engineers and researchers to systematically examine load models. The proposed method is demonstrated through analysis of field measurements collected from a utility system.

  9. Volume determination of irregularly-shaped quasi-spherical nanoparticles.

    PubMed

    Attota, Ravi Kiran; Liu, Eileen Cherry

    2016-11-01

    Nanoparticles (NPs) are widely used in diverse application areas, such as medicine, engineering, and cosmetics. The size (or volume) of NPs is one of the most important parameters for their successful application. It is relatively straightforward to determine the volume of regular NPs such as spheres and cubes from a one-dimensional or two-dimensional measurement. However, due to the three-dimensional nature of NPs, it is challenging to determine the proper physical size of many types of regularly and irregularly-shaped quasi-spherical NPs at high throughput using a single tool. Here, we present a relatively simple method that determines a better volume estimate of NPs by combining measurements of their top-down projection areas and peak heights from two tools. The proposed method is significantly faster and more economical than the electron tomography method. We demonstrate the improved accuracy of the combined method over scanning electron microscopy (SEM) or atomic force microscopy (AFM) alone by using modeling, simulations, and measurements. This study also exposes the existence of inherent measurement biases in both SEM and AFM; SEM usually produces larger measured diameters than AFM. In some cases, however, SEM-measured diameters appear to have less error than AFM-measured diameters, especially for widely used irregularly-shaped quasi-spherical NPs (IS-NPs) such as those of gold and silver. The method provides a much-needed, proper high-throughput volumetric measurement method useful for many applications. Graphical abstract: the combined method for volume determination of irregularly-shaped quasi-spherical nanoparticles.

  10. Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Felix; Quach, Tu-Thach; Wheeler, Jason

    File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
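
    As an illustrative sketch with invented data (not the paper's corpus or feature pipeline), the following learns a sparse dictionary over byte n-gram count vectors and uses the sparse codes as classifier features; scikit-learn's MiniBatchDictionaryLearning stands in for the sparse coding stage:

      import numpy as np
      from sklearn.decomposition import MiniBatchDictionaryLearning
      from sklearn.svm import LinearSVC

      rng = np.random.default_rng(0)
      # Stand-in data: 200 "fragments", each a hashed 4-gram count vector of
      # dimension 256, with two synthetic file-type classes.
      X = rng.poisson(1.0, size=(200, 256)).astype(float)
      y = rng.integers(0, 2, size=200)
      X[y == 1, :64] += 2.0                    # give class 1 a distinct signature

      # Learn a sparse dictionary; transform yields sparse codes as features.
      dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                         transform_algorithm="lasso_lars",
                                         random_state=0)
      codes = dico.fit(X).transform(X)

      clf = LinearSVC().fit(codes, y)
      print("training accuracy:", clf.score(codes, y))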

  12. Consideration of measurement error when using commercial indoor radon determinations for selecting radon action levels

    USGS Publications Warehouse

    Reimer, G.M.; Szarzi, S.L.; Dolan, Michael P.

    1998-01-01

    An examination of year-long, in-home radon measurements in Colorado from commercial companies applying typical methods indicates that considerable variation in precision exists. This variation can have a substantial impact on any mitigation decision, whether voluntary or mandated by law, especially regarding property sale or exchange. Both long-term (nuclear track, greater than 90 days) and short-term (charcoal adsorption, 4-7 days) exposure methods were used. In addition, periods of continuous monitoring with a highly calibrated alpha-scintillometer were conducted for accuracy calibration. The results of duplicate commercial analyses show that typical results are no better than ±25 percent, with occasional outliers (up to 5 percent of all analyses) well beyond that limit. Differential seasonal measurements (winter/summer) by short-term methods provide information equivalent to single long-term measurements. Action levels in the U.S. for possible mitigation decisions should be selected with this measurement variability in mind; specifically, they should reflect a concentration range similar to that adopted by the European Community.

  13. Dynamic Reconstruction Algorithm of Three-Dimensional Temperature Field Measurement by Acoustic Tomography

    PubMed Central

    Li, Yanqiu; Liu, Shi; Inaki, Schlaberg H.

    2017-01-01

    Accuracy and speed of algorithms play an important role in the reconstruction of temperature field measurements by acoustic tomography. Existing algorithms are based on static models which only consider the measurement information. A dynamic model of three-dimensional temperature reconstruction by acoustic tomography is established in this paper. A dynamic algorithm is proposed considering both acoustic measurement information and the dynamic evolution information of the temperature field. An objective function is built which fuses measurement information and the space constraint of the temperature field with its dynamic evolution information. Robust estimation is used to extend the objective function. The method combines a tunneling algorithm and a local minimization technique to solve the objective function. Numerical simulations show that the image quality and noise immunity of the dynamic reconstruction algorithm are better when compared with static algorithms such as least square method, algebraic reconstruction technique and standard Tikhonov regularization algorithms. An effective method is provided for temperature field reconstruction by acoustic tomography. PMID:28895930

  14. A General Symbolic Method with Physical Applications

    NASA Astrophysics Data System (ADS)

    Smith, Gregory M.

    2000-06-01

    A solution to the problem of unifying the General Relativistic and Quantum Theoretical formalisms is given which introduces a new non-axiomatic symbolic method and an algebraic generalization of the Calculus to non-finite symbolisms without reference to the concept of a limit. An essential feature of the non-axiomatic method is the inadequacy of any (finite) statements: Identifying this aspect of the theory with the "existence of an external physical reality" both allows for the consistency of the method with the results of experiments and avoids the so-called "measurement problem" of quantum theory.

  15. Fundamental limits of measurement in telecommunications: Experimental and modeling studies in a test optical network on proposal for the reform of telecommunication quantitations

    NASA Astrophysics Data System (ADS)

    Egan, James; McMillan, Normal; Denieffe, David

    2011-08-01

    Proposals are made for a review of the limits of measurement for telecommunications. The measures are based on adapting work from chemical metrology to the field of telecommunications. Currie has introduced recommendations for defining the limits of measurement in chemical metrology and identified three key fundamental limits: the critical level, the detection limit and the determination limit. Measurements on an optical system are used to illustrate the utility of these measures, and the advantages of these fundamental quantitations over existing methods are discussed.
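
    For orientation, the sketch below computes Currie's three limits from a set of blank (no-signal) instrument readings using the conventional coefficients (1.645, 3.29 and 10, corresponding to 5% error rates and roughly 10% relative precision); the blank readings are invented, and the limits are expressed here as gross readings rather than net signals:

      import numpy as np

      # Hypothetical repeated blank readings of the instrument.
      blanks = np.array([0.52, 0.48, 0.55, 0.47, 0.51, 0.49, 0.53, 0.50])
      mean_b, sigma_b = blanks.mean(), blanks.std(ddof=1)

      critical_level = mean_b + 1.645 * sigma_b       # decide "signal present" (5% false positives)
      detection_limit = mean_b + 3.29 * sigma_b       # detectable with 5% false negatives
      determination_limit = mean_b + 10.0 * sigma_b   # quantifiable at ~10% relative std. dev.

      print(critical_level, detection_limit, determination_limit)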

  16. Optical multichannel room temperature magnetic field imaging system for clinical application

    PubMed Central

    Lembke, G.; Erné, S. N.; Nowak, H.; Menhorn, B.; Pasquarelli, A.

    2014-01-01

    Optically pumped magnetometers (OPM) are a very promising alternative to the superconducting quantum interference devices (SQUIDs) used nowadays for Magnetic Field Imaging (MFI), a new method of diagnosis based on the measurement of the magnetic field of the human heart. We present a first measurement combining a multichannel OPM-sensor with an existing MFI-system resulting in a fully functional room temperature MFI-system. PMID:24688820

  17. Use of X-ray diffraction to quantify amorphous supplementary cementitious materials in anhydrous and hydrated blended cements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Snellings, R., E-mail: ruben.snellings@epfl.ch; Salze, A.; Scrivener, K.L., E-mail: karen.scrivener@epfl.ch

    2014-10-15

    The content of individual amorphous supplementary cementitious materials (SCMs) in anhydrous and hydrated blended cements was quantified by the PONKCS [1] X-ray diffraction (XRD) method. The analytical precision and accuracy of the method were assessed through comparison to a series of mixes of known phase composition and of increasing complexity. A 2σ precision smaller than 2–3 wt.% and an accuracy better than 2 wt.% were achieved for SCMs in mixes with quartz, anhydrous Portland cement, and hydrated Portland cement. The extent of reaction of SCMs in hydrating binders measured by XRD was 1) internally consistent, as confirmed through the standard addition method, and 2) showed a linear correlation to the cumulative heat release as measured independently by isothermal conduction calorimetry. The advantages, limitations and applicability of the method are discussed with reference to existing methods that measure the degree of reaction of SCMs in blended cements.

  18. Newborn Jaundice Technologies: Unbound Bilirubin and Bilirubin Binding Capacity In Neonates

    PubMed Central

    Amin, Sanjiv B.; Lamola, Angelo A.

    2011-01-01

    Neonatal jaundice (hyperbilirubinemia), extremely common in neonates, can be associated with neurotoxicity. A safe level of bilirubin has not been defined in either premature or term infants. Emerging evidence suggests that the level of unbound (or "free") bilirubin has better sensitivity and specificity than total serum bilirubin for bilirubin-induced neurotoxicity. Although recent studies suggest the usefulness of free bilirubin measurements in managing high-risk neonates, including premature infants, no widely available method currently exists to assay the serum free bilirubin concentration. To keep pace with the growing demand, in addition to the reevaluation of old methods, several promising new methods are being developed for sensitive, accurate, and rapid measurement of free bilirubin and bilirubin binding capacity. These innovative methods need to be validated before adoption for clinical use. We provide an overview of some promising methods for free bilirubin and binding capacity measurements with the goal of enhancing research in this area of active interest and apparent need. PMID:21641486

  19. A Numerical Theory for Impedance Eduction in Three-Dimensional Normal Incidence Tubes

    NASA Technical Reports Server (NTRS)

    Watson, Willie R.; Jones, Michael G.

    2016-01-01

    A method for educing the locally-reacting acoustic impedance of a test sample mounted in a 3-D normal incidence impedance tube is presented and validated. The unique feature of the method is that the excitation frequency (or duct geometry) may be such that high-order duct modes exist. The method educes the impedance, iteratively, by minimizing an objective function consisting of the difference between the measured and numerically computed acoustic pressure at preselected measurement points in the duct. The method is validated on planar and high-order mode sources with data synthesized from exact mode theory. These data are then subjected to random jitter to simulate the effects of measurement uncertainties on the educed impedance spectrum. The primary conclusions of the study are: 1) without random jitter, the educed impedance is in excellent agreement with the known impedance of the samples, and 2) random jitter comparable to that found in a typical experiment has minimal impact on the accuracy of the educed impedance.

  20. (1)H nuclear magnetic resonance (NMR) as a tool to measure dehydration in mice.

    PubMed

    Li, Matthew; Vassiliou, Christophoros C; Colucci, Lina A; Cima, Michael J

    2015-08-01

    Dehydration is a prevalent pathology, in which loss of bodily water can produce variable symptoms, ranging from simple thirst to dire scenarios involving loss of consciousness. Clinical methods exist that assess dehydration, from qualitative weight changes to more quantitative osmolality measurements, but these methods are imprecise, invasive, and/or easily confounded, despite being practiced clinically. We investigate a non-invasive, non-imaging (1)H NMR method of assessing dehydration that attempts to address the issues with existing clinical methods. Dehydration was achieved by exposing mice (n = 16) to a thermally elevated environment (37 °C) for up to 7.5 h (0.11-13% weight loss). Whole-body NMR measurements were made using a Bruker LF50 BCA-Analyzer before and after dehydration. NMR values for approximations of the physical lean tissue, adipose, and free water compartments were extracted from relaxation data through a multi-exponential fitting method. Changes in before/after NMR values were compared with clinically practiced metrics of weight loss (percent dehydration) as well as blood and urine osmolality. A linear correlation between tissue relaxometry and both animal percent dehydration and urine osmolality was observed in lean tissue, but not in adipose or free fluids. Calculated R(2) values for percent dehydration were 0.8619 (lean, P < 0.0001), 0.5609 (adipose, P = 0.0008), and 0.0644 (free fluids, P = 0.3445). R(2) values for urine osmolality were 0.7760 (lean, P < 0.0001), 0.5005 (adipose, P = 0.0022), and 0.0568 (free fluids, P = 0.3739). These results suggest that non-imaging (1)H NMR methods are capable of non-invasively assessing dehydration in live animals. Copyright © 2015 John Wiley & Sons, Ltd.
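
    As a generic illustration of multi-exponential relaxation fitting (a synthetic decay with invented constants, not the study's data or compartment model), the sketch below fits a two-component T2 decay with scipy:

      import numpy as np
      from scipy.optimize import curve_fit

      def biexp(t, a1, t2_1, a2, t2_2):
          # Two-compartment transverse relaxation model.
          return a1 * np.exp(-t / t2_1) + a2 * np.exp(-t / t2_2)

      t = np.linspace(0.002, 0.4, 100)             # echo times, s
      rng = np.random.default_rng(3)
      signal = biexp(t, 0.7, 0.04, 0.3, 0.15) + rng.normal(0, 0.005, t.size)

      p0 = (0.5, 0.05, 0.5, 0.2)                   # rough initial guess
      popt, _ = curve_fit(biexp, t, signal, p0=p0)
      print("amplitudes:", popt[0], popt[2], "T2s (s):", popt[1], popt[3])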

  1. Undoing measurement-induced dephasing in circuit QED

    NASA Astrophysics Data System (ADS)

    Frisk Kockum, A.; Tornberg, L.; Johansson, G.

    2012-05-01

    We analyze the backaction of homodyne detection and photodetection on superconducting qubits in circuit quantum electrodynamics. Although both measurement schemes give rise to backaction in the form of stochastic phase rotations, which leads to dephasing, we show that this can be perfectly undone provided that the measurement signal is fully accounted for. This result improves on an earlier one [Phys. Rev. A 82, 012329 (2010)], showing that the method suggested can be made to realize a perfect two-qubit parity measurement. We propose a benchmarking experiment on a single qubit to demonstrate the method using homodyne detection. By analyzing the limited measurement efficiency of the detector and bandwidth of the amplifier, we show that the parameter values necessary to see the effect are within the limits of existing technology.

  2. Effect of monochromatic aberrations on photorefractive patterns

    NASA Astrophysics Data System (ADS)

    Campbell, Melanie C. W.; Bobier, W. R.; Roorda, A.

    1995-08-01

    Photorefractive methods have become popular for measuring the refractive and accommodative states of infants and children owing to their photographic nature and rapid speed of measurement. As with any method that measures the refractive state of the human eye, monochromatic aberrations reduce the accuracy of the measurement. Monochromatic aberrations cannot be as easily predicted or controlled as chromatic aberrations during the measurement, and accordingly they introduce measurement errors. This study defines this error, or uncertainty, by extending the existing paraxial optical analyses of coaxial and eccentric photorefraction. The new optical analysis predicts that, for the amounts of spherical aberration (SA) reported for the human eye, a significant degree of measurement uncertainty is introduced for all photorefractive methods; the dioptric amount of this uncertainty may exceed the maximum amount of SA present in the eye. The calculated effects on photorefractive measurement of a real eye with a mixture of spherical aberration and coma are shown to be significant. The ability, developed here, to predict photorefractive patterns corresponding to different amounts and types of monochromatic aberration may in the future lead to an extension of photorefractive methods to the dual measurement of refractive states and aberrations of individual eyes.

  3. Gravity Compensation Method for Combined Accelerometer and Gyro Sensors Used in Cardiac Motion Measurements.

    PubMed

    Krogh, Magnus Reinsfelt; Nghiem, Giang M; Halvorsen, Per Steinar; Elle, Ole Jakob; Grymyr, Ole-Johannes; Hoff, Lars; Remme, Espen W

    2017-05-01

    A miniaturized accelerometer fixed to the heart can be used for monitoring of cardiac function. However, an accelerometer cannot differentiate between acceleration caused by motion and acceleration due to gravity. The accuracy of motion measurements is therefore dependent on how well the gravity component can be estimated and filtered from the measured signal. In this study we propose a new method for estimating the gravity, based on strapdown inertial navigation, using a combined accelerometer and gyro. The gyro was used to estimate the orientation of the gravity field and thereby remove it. We compared this method with two previously proposed gravity filtering methods in three experimental models using: (1) in silico computer simulated heart motion; (2) robot mimicked heart motion; and (3) in vivo measured motion on the heart in an animal model. The new method correlated excellently with the reference (r^2 > 0.93) and had a deviation from reference peak systolic displacement (6.3 ± 3.9 mm) below 0.2 ± 0.5 mm for the robot experiment model. The new method performed significantly better than the two previously proposed methods (p < 0.001). The results show that the proposed method using gyro can measure cardiac motion with high accuracy and performs better than existing methods for filtering the gravity component from the accelerometer signal.
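
    A minimal sketch of the gravity-removal idea, assuming an ideal gyro, a known initial orientation, and toy data (not the paper's algorithm in detail): integrate the angular rate into an orientation, rotate the gravity vector into the sensor frame, and add it back to the accelerometer's specific-force reading; with real gyro noise the orientation estimate drifts, which is what the paper's comparison quantifies:

      import numpy as np
      from scipy.spatial.transform import Rotation

      dt = 0.005                                   # 200 Hz sampling, assumed
      g_world = np.array([0.0, 0.0, -9.81])        # gravity in the world frame

      # Toy streams: constant rotation rate and a small true linear acceleration.
      omega = np.tile([0.0, 0.5, 0.0], (400, 1))   # rad/s, sensor frame
      a_true = np.tile([0.1, 0.0, 0.0], (400, 1))  # m/s^2, sensor frame

      R = Rotation.identity()                      # known initial orientation
      motion = np.empty_like(a_true)
      for i in range(len(omega)):
          R = R * Rotation.from_rotvec(omega[i] * dt)   # strapdown orientation update
          g_sensor = R.inv().apply(g_world)             # gravity seen by the sensor
          measured = a_true[i] - g_sensor               # accelerometer measures specific force
          motion[i] = measured + g_sensor               # gravity-compensated motion

      print("max residual error:", np.abs(motion - a_true).max())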

  4. Experimental Demonstration of In-Place Calibration for Time Domain Microwave Imaging System

    NASA Astrophysics Data System (ADS)

    Kwon, S.; Son, S.; Lee, K.

    2018-04-01

    In this study, an experimental demonstration of in-place calibration was conducted using the developed time domain measurement system. Experiments were conducted using three calibration methods: the proposed in-place calibration and two existing calibrations, array rotation and differential calibration. The in-place calibration uses dual receivers located at an equal distance from the transmitter; the signals received at the dual receivers contain similar unwanted components, namely the directly received signal and antenna coupling. In contrast to simulations, the antennas are not perfectly matched and unexpected environmental errors may be present, so we used the developed experimental system to demonstrate the proposed method. The possible problems of low signal-to-noise ratio and clock jitter, which may exist in time domain systems, were mitigated by averaging repeatedly measured signals. According to the experimental results, the tumor was successfully detected using all three calibration methods. For a quantitative comparison between the existing rotation calibration and the proposed in-place calibration, the cross correlation was calculated against the reconstructed image of the ideal differential calibration: the mean cross correlation of the in-place calibration was 0.80, while that of the rotation calibration was 0.55. Furthermore, the simulation results were compared with the experimental results to verify the in-place calibration method; a quantitative analysis showed that the experimental results follow a tendency similar to the simulation.

  5. Development of Novel Noninvasive Methods of Stress Assessment in Baleen Whales

    DTIC Science & Technology

    2014-09-30

    large whales. Few methods exist for assessment of physiological stress levels of free-swimming cetaceans (Amaral 2010, ONR 2010, Hunt et al. 2013) ... hormone aldosterone. Our aim in this project is to further develop both techniques - respiratory hormone analysis and fecal hormone analysis - for use ... noninvasive aldosterone assay (for both feces and blow) that can be used as an alternative measure of adrenal gland activation relative to stress

  6. The Capability of Virtual Reality to Meet Military Requirements (la Capacite de la rea1ite virtuelle a repondre aux besoins militaires)

    DTIC Science & Technology

    2000-11-01

    ... Reality technology. Presentations discussed sensory interfaces, measures of effectiveness, the importance of the sensation of presence, and cybersickness. The third day reviewed assessment methods and applications research. Speakers reviewed existing or ...

  7. Estimation of aboveground forest carbon flux in Oregon: adding components of change to stock-difference assessments

    Treesearch

    Andrew N. Gray; Thomas R. Whittier; David L. Azuma

    2014-01-01

    A substantial portion of the carbon (C) emitted by human activity is apparently being stored in forest ecosystems in the Northern Hemisphere, but the magnitude and cause are not precisely understood. Current official estimates of forest C flux are based on a combination of field measurements and other methods. The goal of this study was to improve on existing methods...

  8. The pointillism method for creating stimuli suitable for use in computer-based visual contrast sensitivity testing.

    PubMed

    Turner, Travis H

    2005-03-30

    An increasingly large corpus of clinical and experimental neuropsychological research has demonstrated the utility of measuring visual contrast sensitivity. Unfortunately, existing means of measuring contrast sensitivity can be prohibitively expensive, difficult to standardize, or unreliable. Additionally, most existing tests do not allow full control over important stimulus characteristics such as off-angle rotation, waveform, contrast, and spatial frequency. Ideally, researchers could manipulate these characteristics and display stimuli in a computerized task designed to meet experimental needs. Thus far, the 256-level (8-bit) color limitation of standard cathode ray tube (CRT) monitors has been preclusive. To this end, the pointillism method (PM) was developed. Using MATLAB software, stimuli are created from both mathematical and stochastic components, such that differences in regional luminance values of the gradient field closely approximate the desired contrast. This paper describes the method and examines its performance on sine- and square-wave image sets across a range of contrast values. Results suggest the utility of the method for most experimental applications. Weaknesses of the current version, the need for validation and reliability studies, and considerations regarding applications are discussed. Syntax for the program is provided in an appendix, and a version of the program independent of MATLAB is available from the author.
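
    The paper's MATLAB implementation is not reproduced here; the Python sketch below illustrates one stochastic-dithering interpretation of the idea, in which each pixel is set to black or white with probability equal to the target luminance, so regional mean luminance approximates a low-contrast grating despite only two display levels; grating parameters are invented:

      import numpy as np

      rng = np.random.default_rng(0)
      h, w = 256, 256
      contrast, cycles = 0.05, 8                  # low-contrast sine grating

      x = np.arange(w)
      target = 0.5 + 0.5 * contrast * np.sin(2 * np.pi * cycles * x / w)
      target = np.tile(target, (h, 1))            # desired luminance in [0, 1]

      # Stochastic component: binary pixels whose local average tracks the target.
      stimulus = (rng.random((h, w)) < target).astype(np.uint8) * 255

      # Check: mean luminance per column follows the sine-wave target.
      print(np.allclose(stimulus.mean(axis=0) / 255, target[0], atol=0.1))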

  9. Measuring geographical accessibility to palliative and end of life (PEoLC) related facilities: a comparative study in an area with well-developed specialist palliative care (SPC) provision.

    PubMed

    Pearson, Clare; Verne, Julia; Wells, Claudia; Polato, Giovanna M; Higginson, Irene J; Gao, Wei

    2017-01-26

    Geographical accessibility is important in accessing healthcare services, and measuring it has evolved alongside advances in technology and data analysis. High correlations between different methods have been detected, but no comparisons exist in the context of palliative and end of life care (PEoLC) studies. To assess how geographical accessibility can affect PEoLC, selection of an appropriate method to capture it is crucial. We therefore aimed to compare methods of measuring the geographical accessibility of decedents to PEoLC-related facilities in South London, an area with well-developed SPC provision. Individual-level death registration data for 2012 (n = 18,165) from the Office for National Statistics (ONS) were linked to area-level PEoLC-related facilities from various sources. Simple and more complex measures of geographical accessibility were calculated using the residential postcodes of the decedents and the postcodes of the nearest hospital, care home and hospice. Distance measures (straight-line, travel network) and travel times along the road network were compared using geographic information system (GIS) mapping and correlation analysis (Spearman rho). Borough-level maps demonstrate similarities between the geographical accessibility measures. Strong positive correlations exist between straight-line and travel distances to the nearest hospital (rho = 0.97), care home (rho = 0.94) and hospice (rho = 0.99). Travel times were also highly correlated with distance measures to the nearest hospital (rho = 0.84-0.88), care home (rho = 0.88-0.95) and hospice (rho = 0.93-0.95). All correlations were significant at the p < 0.001 level. Distance-based and travel-time measures of geographical accessibility to PEoLC-related facilities in South London are similar, suggesting that the choice of measure can be based on ease of calculation.

  10. On the uncertainty of interdisciplinarity measurements due to incomplete bibliographic data.

    PubMed

    Calatrava Moreno, María Del Carmen; Auzinger, Thomas; Werthner, Hannes

    The accuracy of interdisciplinarity measurements is directly related to the quality of the underlying bibliographic data. Because correct and complete bibliographic data can rarely be obtained, existing indicators of interdisciplinarity are unable to reflect the inaccuracies introduced by incorrect and incomplete records. This is the case for the Rao-Stirling index, which cannot handle references that are not categorized into disciplinary fields. We introduce a method that addresses this problem. It extends the Rao-Stirling index to acknowledge missing data by calculating its interval of uncertainty using computational optimization. The evaluation of our method indicates that the uncertainty interval is not only useful for estimating the inaccuracy of interdisciplinarity measurements, but it also delivers slightly more accurate aggregated interdisciplinarity measurements than the Rao-Stirling index.
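
    The core quantity here is the Rao-Stirling index, RS = sum over field pairs (i, j) of d_ij * p_i * p_j. The sketch below (my own notation) computes it and brackets it with a crude corner search that reallocates the uncategorized references to one field at a time; the paper instead derives the exact uncertainty interval by computational optimization.

        import itertools
        import numpy as np

        def rao_stirling(p, d):
            """p: field proportions summing to 1; d: pairwise field distances."""
            return sum(d[i][j] * p[i] * p[j]
                       for i, j in itertools.combinations(range(len(p)), 2))

        d = np.array([[0.0, 0.8, 0.3],
                      [0.8, 0.0, 0.6],
                      [0.3, 0.6, 0.0]])   # hypothetical field distance matrix
        counts = np.array([10, 5, 3])     # categorized references per field
        uncat = 4                         # references without a field assignment

        n = counts.sum() + uncat
        corners = []
        for k in range(len(counts)):      # crude bound, not the paper's optimizer
            c = counts.astype(float)
            c[k] += uncat
            corners.append(rao_stirling(c / n, d))
        print(f"RS uncertainty interval ~ [{min(corners):.3f}, {max(corners):.3f}]")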

  11. Investigating the application of Rasch theory in measuring change in middle school student performance in physical science

    NASA Astrophysics Data System (ADS)

    Cunningham, Jessica D.

    Newton's Universe (NU), an innovative teacher training program, strives to obtain measures from rural, middle school science teachers and their students to determine the impact of its distance learning course on understanding of temperature. No consensus exists on the most appropriate and useful method of analysis to measure change in psychological constructs over time. Several item response theory (IRT) models have been deemed useful in measuring change, which makes the choice of an IRT model not obvious. The appropriateness and utility of each model, including a comparison to a traditional analysis of variance approach, was investigated using middle school science student performance on an assessment over an instructional period. Predetermined criteria were outlined to guide model selection based on several factors including research questions, data properties, and meaningful interpretations to determine the most appropriate model for this study. All methods employed in this study reiterated one common interpretation of the data -- specifically, that the students of teachers with any NU course experience had significantly greater gains in performance over the instructional period. However, clear distinctions were made between an analysis of variance and the racked and stacked analysis using the Rasch model. Although limited research exists examining the usefulness of the Rasch model in measuring change in understanding over time, this study applied these methods and detailed plausible implications for data-driven decisions based upon results for NU and others. Being mindful of the advantages and usefulness of each method of analysis may help others make informed decisions about choosing an appropriate model to depict changes to evaluate other programs. Results may encourage other researchers to consider the meaningfulness of using IRT for this purpose. Results have implications for data-driven decisions for future professional development courses, in science education and other disciplines. KEYWORDS: Item Response Theory, Rasch Model, Racking and Stacking, Measuring Change in Student Performance, Newton's Universe teacher training

  12. Accurate mass replacement method for the sediment concentration measurement with a constant volume container

    NASA Astrophysics Data System (ADS)

    Ban, Yunyun; Chen, Tianqin; Yan, Jun; Lei, Tingwu

    2017-04-01

    The measurement of sediment concentration in water is of great importance in soil erosion research and soil and water loss monitoring systems. The traditional weighing method has long been the foundation of all the other measuring methods and of instrument calibration. The development of a new method to replace the traditional oven-drying method is of interest in research and practice for the quick and efficient measurement of sediment concentration, especially in field measurements. A new method is advanced in this study for accurately measuring the sediment concentration based on the accurate measurement of the mass of the sediment-water mixture in a confined constant volume container (CVC). A sediment-laden water sample is put into the CVC to determine its mass before the CVC is filled with water and weighed again for the total mass of the water and sediments in the container. The known volume of the CVC, the mass of the sediment-laden water, and the sediment particle density are used to calculate the mass of water that is replaced by sediments, from which the sediment concentration of the sample is calculated. The influence of water temperature was corrected for by measuring the water temperature before each measurement to determine the water density. The CVC was used to eliminate the surface tension effect so as to obtain the accurate volume of the water and sediment mixture. Experimental results showed that the method was capable of measuring sediment concentrations from 0.5 up to 1200 kg m-3. A good linear relationship existed between the designed and measured sediment concentrations, with all coefficients of determination greater than 0.999 and the average relative error less than 0.2%. All of this seems to indicate that the new method is capable of measuring a full range of sediment concentrations above 0.5 kg m-3 and of replacing the traditional oven-drying method as a standard method for evaluating and calibrating other methods.
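
    The mass-replacement arithmetic described above reduces to one closed-form expression: because sediment of mass m_s displaces water of mass m_s * rho_w / rho_s, the overfilled container is heavier than a water-only filling by m_s * (1 - rho_w / rho_s). A minimal sketch with my own variable names (SI units throughout):

        def sediment_concentration(m_sample, m_total, V_cvc, rho_w=998.2, rho_s=2650.0):
            """
            m_sample: mass of the sediment-laden sample placed in the CVC (kg)
            m_total:  mass of sample plus added water filling the CVC (kg)
            V_cvc:    calibrated container volume (m^3)
            rho_w:    water density at the measured temperature (kg m^-3)
            rho_s:    sediment particle density (kg m^-3)
            """
            # Sediment mass from the excess over a water-only filling:
            m_s = (m_total - rho_w * V_cvc) / (1.0 - rho_w / rho_s)
            # Volume of the original sample: its water part plus its sediment part
            V_sample = (m_sample - m_s) / rho_w + m_s / rho_s
            return m_s / V_sample  # sediment concentration, kg m^-3

        print(sediment_concentration(m_sample=0.55, m_total=1.02, V_cvc=1.0e-3))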

  13. OPTIMIZING POTENTIAL GREEN REPLACEMENT CHEMICALS – BALANCING FUNCTION AND RISK

    EPA Science Inventory

    An important focus of green chemistry is the design of new chemicals that are inherently less toxic than the ones they might replace, but still retain required functional properties. A variety of methods exist to measure or model both functional and toxicity surrogates that could...

  14. Energy Efficiency for Building Construction Technology.

    ERIC Educational Resources Information Center

    Scharmann, Larry, Ed.

    Intended primarily but not solely for use at the postsecondary level, this curriculum guide contains five units of materials on energy efficiency that were designed to be incorporated into an existing program in building construction. The following topics are examined: conservation measures (residential energy use and methods for reducing…

  15. Net global warming potential and greenhouse gas intensity

    USDA-ARS?s Scientific Manuscript database

    Various methods exist to calculate global warming potential (GWP) and greenhouse gas intensity (GHGI) as measures of net greenhouse gas (GHG) emissions from agroecosystems. Little is, however, known about net GWP and GHGI that account for all sources and sinks of GHG emissions. Sources of GHG include...

  16. Compensating for Electrode Polarization in Dielectric Spectroscopy Studies of Colloidal Suspensions: Theoretical Assessment of Existing Methods

    PubMed Central

    Chassagne, Claire; Dubois, Emmanuelle; Jiménez, María L.; van der Ploeg, J. P. M; van Turnhout, Jan

    2016-01-01

    Dielectric spectroscopy can be used to determine the dipole moment of colloidal particles, from which important interfacial electrokinetic properties, for instance their zeta potential, can be deduced. Unfortunately, dielectric spectroscopy measurements are hampered by electrode polarization (EP). In this article, we review several procedures to compensate for this effect. First, EP in electrolyte solutions is described: the complex conductivity is derived as a function of frequency for two cell geometries (planar and cylindrical) with blocking electrodes. The corresponding equivalent circuit for the electrolyte solution is given for each geometry. This equivalent circuit model is extended to suspensions. The complex conductivity of a suspension, in the presence of EP, is then calculated from the impedance. Different methods for compensating for EP are critically assessed with the help of the theoretical findings. Their limits of validity are given in terms of characteristic frequencies. One of these frequencies identifies the frequency range within which data uncorrected for EP may still be used to assess the dipole moment of colloidal particles. In order to extract this dipole moment from the measured data, two methods are reviewed: one is based on the use of existing models for the complex conductivity of suspensions, the other is the logarithmic derivative method. An extension of the logarithmic derivative method to multiple relaxations is proposed. PMID:27486575
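
    For the logarithmic derivative method mentioned in the closing sentences, the low-frequency EP contribution to the measured loss is suppressed by estimating the loss from the real part alone, eps''_der = -(pi/2) d(eps')/d(ln omega). A minimal numerical sketch on synthetic Debye-like data (all values invented):

        import numpy as np

        omega = np.logspace(2, 8, 400)  # angular frequency, rad/s
        eps_inf, d_eps, tau = 5.0, 20.0, 1e-5
        eps_real = eps_inf + d_eps / (1 + (omega * tau) ** 2)  # synthetic eps'

        eps_der = -(np.pi / 2) * np.gradient(eps_real, np.log(omega))

        peak = omega[np.argmax(eps_der)]
        print(f"loss peak near omega = {peak:.3g} rad/s (expected ~ 1/tau = {1/tau:.3g})")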

  17. Invariant Feature Matching for Image Registration Application Based on New Dissimilarity of Spatial Features

    PubMed Central

    Mousavi Kahaki, Seyed Mostafa; Nordin, Md Jan; Ashtari, Amir H.; J. Zahra, Sophia

    2016-01-01

    An invariant feature matching method is proposed as a spatially invariant feature matching approach. Deformation effects, such as affine and homography, change the local information within the image and can result in ambiguous local information pertaining to image points. A new method based on dissimilarity values, which measure the dissimilarity of the features along the path between them using eigenvector properties, is proposed. Evidence shows that existing matching techniques using similarity metrics—such as normalized cross-correlation, squared sum of intensity differences and correlation coefficient—are insufficient for achieving adequate results under different image deformations. Thus, new descriptor similarity metrics based on normalized eigenvector correlation and signal directional differences, which are robust under local variation of the image information, are proposed to establish an efficient feature matching technique. The method proposed in this study measures the dissimilarity in the signal frequency along the path between two features. Moreover, these dissimilarity values are accumulated in a 2D dissimilarity space, allowing accurate corresponding features to be extracted based on the cumulative space using a voting strategy. This method can be used in image registration applications, as it overcomes the limitations of the existing approaches. The output results demonstrate that the proposed technique outperforms the other methods when evaluated using a standard dataset, in terms of precision-recall and corner correspondence. PMID:26985996

  18. Velocity profile, water-surface slope, and bed-material size for selected streams in Colorado

    USGS Publications Warehouse

    Marchand, J.P.; Jarrett, R.D.; Jones, L.L.

    1984-01-01

    Existing methods for determining the mean velocity in a vertical sampling section do not address the conditions present in high-gradient, shallow-depth streams common to mountainous regions such as Colorado. The report presents velocity-profile data that were collected for 11 streamflow-gaging stations in Colorado using both a standard Price type AA current meter and a prototype Price Model PAA current meter. Computational results are compiled that will enable mean velocities calculated from measurements by the two current meters to be compared with each other and with existing methods for determining mean velocity. Water-surface slope, bed-material size, and flow-characteristic data for the 11 sites studied also are presented. (USGS)

  19. Evaluation of the new respiratory gating system

    PubMed Central

    Shi, Chengyu; Tang, Xiaoli; Chan, Maria

    2018-01-01

    Objective The newly released Respiratory Gating for Scanners (RGSC; Varian Medical Systems, Palo Alto, CA, USA) system has limited existing quality assurance (QA) protocols and pertinent publications. Herein, we report our experience with RGSC system acceptance and QA. Methods The RGSC system was tested for integration with peripheral equipment, spatial reproducibility, and dynamic localization accuracy for regular and irregular breathing patterns, respectively. A QUASAR Respiratory Motion Phantom and a mathematical fitting method were used for data acquisition and analysis. Results The results showed that the RGSC system could accurately measure regular motion periods of 3–10 s. For irregular breathing patterns, differences from the existing Real-time Position Management (RPM; Varian Medical Systems, Palo Alto, CA) system were observed. For dynamic localization measurements, the RGSC system showed 76% agreement with the programmed test data within ±5% tolerance in terms of fitting period. As a comparison, the RPM system showed 66% agreement within ±5% tolerance, and the RGSC versus RPM measurements showed 65%. Conclusions New functions and positioning accuracy improve the RGSC system’s ability to achieve higher dynamic treatment precision. A 4D phantom is helpful for the QA tests. Further investigation is required for the whole RGSC system performance QA. PMID:29722356

  20. A brief measure of attitudes toward mixed methods research in psychology

    PubMed Central

    Roberts, Lynne D.; Povee, Kate

    2014-01-01

    The adoption of mixed methods research in psychology has trailed behind other social science disciplines. Teaching psychology students, academics, and practitioners about mixed methodologies may increase the use of mixed methods within the discipline. However, tailoring and evaluating education and training in mixed methodologies requires an understanding of, and way of measuring, attitudes toward mixed methods research in psychology. To date, no such measure exists. In this article we present the development and initial validation of a new measure: Attitudes toward Mixed Methods Research in Psychology. A pool of 42 items developed from previous qualitative research on attitudes toward mixed methods research along with validation measures was administered via an online survey to a convenience sample of 274 psychology students, academics and psychologists. Principal axis factoring with varimax rotation on a subset of the sample produced a four-factor, 12-item solution. Confirmatory factor analysis on a separate subset of the sample indicated that a higher-order four-factor model provided the best fit to the data. The four factors (‘Limited Exposure,’ ‘(in)Compatibility,’ ‘Validity,’ and ‘Tokenistic Qualitative Component’) each have acceptable internal reliability. Known groups validity analyses based on preferred research orientation and self-rated mixed methods research skills, and convergent and divergent validity analyses based on measures of attitudes toward psychology as a science and scientist and practitioner orientation, provide initial validation of the measure. This brief, internally reliable measure can be used in assessing attitudes toward mixed methods research in psychology, measuring change in attitudes as part of the evaluation of mixed methods education, and in larger research programs. PMID:25429281

  1. Backscatter Modeling at 2.1 Micron Wavelength for Space-Based and Airborne Lidars Using Aerosol Physico-Chemical and Lidar Datasets

    NASA Technical Reports Server (NTRS)

    Srivastava, V.; Rothermel, J.; Jarzembski, M. A.; Clarke, A. D.; Cutten, D. R.; Bowdle, D. A.; Spinhirne, J. D.; Menzies, R. T.

    1999-01-01

    Space-based and airborne coherent Doppler lidars designed for measuring global tropospheric wind profiles in cloud-free air rely on backscatter, beta, from aerosols acting as passive wind tracers. The vertical distribution of aerosol beta can vary over as much as 5-6 orders of magnitude. Thus, the design of a wavelength-specific, space-borne or airborne lidar must account for the magnitude of beta in the region or features of interest. The SPAce Readiness Coherent Lidar Experiment, under development by the National Aeronautics and Space Administration (NASA) and scheduled for launch on the Space Shuttle in 2001, will demonstrate wind measurements from space using a solid-state 2 micrometer coherent Doppler lidar. Consequently, there is a critical need to understand the variability of aerosol beta at 2.1 micrometers, to evaluate signal detection under varying aerosol loading conditions. Although few direct measurements of beta at 2.1 micrometers exist, extensive datasets, including climatologies in widely-separated locations, do exist for other wavelengths based on CO2 and Nd:YAG lidars. Datasets also exist for the associated microphysical and chemical properties. An example of a multi-parametric dataset is that of the NASA GLObal Backscatter Experiment (GLOBE) in 1990, in which aerosol chemistry and size distributions were measured concurrently with multi-wavelength lidar backscatter observations. More recently, continuous-wave (CW) lidar backscatter measurements at mid-infrared wavelengths have been made during the Multicenter Airborne Coherent Atmospheric Wind Sensor (MACAWS) experiment in 1995. Using Lorenz-Mie theory, these datasets have been used to develop a method to convert lidar backscatter to the 2.1 micrometer wavelength. This paper presents a comparison of modeled backscatter at wavelengths for which backscatter measurements exist, including converted beta at 2.1 micrometers.

  2. Measuring what matters to rare disease patients - reflections on the work by the IRDiRC taskforce on patient-centered outcome measures.

    PubMed

    Morel, Thomas; Cano, Stefan J

    2017-11-02

    Our ability to evaluate outcomes which genuinely reflect patients' unmet needs, hopes and concerns is of pivotal importance. However, much current clinical research and practice falls short of this objective by selecting outcome measures which do not capture patient value to the fullest. In this Opinion, we discuss Patient-Centered Outcomes Measures (PCOMs), which have the potential to systematically incorporate patient perspectives to measure those outcomes that matter most to patients. We argue for greater multi-stakeholder collaboration to develop PCOMs, with rare disease patients and families at the center. Beyond advancing the science of patient input, PCOMs are powerful tools to translate care or observed treatment benefit into an 'interpretable' measure of patient benefit, and thereby help demonstrate clinical effectiveness. We propose mixed methods psychometric research as the best route to deliver fit-for-purpose PCOMs in rare diseases, as this methodology brings together qualitative and quantitative research methods in tandem with the explicit aim to efficiently utilise data from small samples. And, whether one opts to develop a brand-new PCOM or to select or adapt an existing outcome measure for use in a rare disease, the anchors remain the same: patients, their daily experience of the rare disease, their preferences, core concepts and values. Ultimately, existing value frameworks, registries, and outcomes-based contracts largely fall short of consistently measuring the full range of outcomes that matter to patients. We argue that greater use of PCOMs in rare diseases would enable a fast track to Patient-Centered Care.

  3. New Developments in Observer Performance Methodology in Medical Imaging

    PubMed Central

    Chakraborty, Dev P.

    2011-01-01

    A common task in medical imaging is assessing whether a new imaging system, or a variant of an existing one, is an improvement over an existing imaging technology. Imaging systems are generally quite complex, consisting of several components – e.g., image acquisition hardware, image processing and display hardware and software, and image interpretation by radiologists – each of which can affect performance. While it may appear odd to include the radiologist as a “component” of the imaging chain, since the radiologist’s decision determines subsequent patient care, the effect of the human interpretation has to be included. Physical measurements like modulation transfer function, signal-to-noise ratio, etc., are useful for characterizing the non-human parts of the imaging chain under idealized and often unrealistic conditions, such as uniform background phantoms, target objects with sharp edges, etc. Measuring the effect on performance of the entire imaging chain, including the radiologist, and using real clinical images, requires different methods that fall under the rubric of observer performance methods or “ROC analysis”. The purpose of this paper is to review recent developments in this field, particularly with respect to the free-response method. PMID:21978444

  4. Vortex Analysis of Intra-Aneurismal Flow in Cerebral Aneurysms

    PubMed Central

    Sunderland, Kevin; Haferman, Christopher; Chintalapani, Gouthami

    2016-01-01

    This study aims to develop an alternative vortex analysis method by measuring the structure of intracranial aneurysm (IA) flow vortexes across the cardiac cycle, to quantify the temporal stability of aneurismal flow. Hemodynamics were modeled in “patient-specific” geometries using computational fluid dynamics (CFD) simulations. Modified versions of the known λ2 and Q-criterion methods identified vortex regions; the regions were then segmented out using the classical marching cube algorithm. Temporal stability was measured by the degree of vortex overlap (DVO) at each step of a cardiac cycle against a cycle-averaged vortex and by the change in the number of cores over the cycle. No statistical differences exist in DVO or number of vortex cores between 5 terminal IAs and 5 sidewall IAs. No strong correlation exists between vortex core characteristics and geometric or hemodynamic characteristics of IAs. Statistical independence suggests this proposed method may provide novel IA information. However, the threshold values used to determine the vortex core regions and the resolution of the velocity data influenced analysis outcomes and have to be addressed in future studies. In conclusion, preliminary results show that the proposed methodology may help give novel insight toward aneurismal flow characteristics and help in future risk assessment, given further development. PMID:27891172

  5. Vortex Analysis of Intra-Aneurismal Flow in Cerebral Aneurysms.

    PubMed

    Sunderland, Kevin; Haferman, Christopher; Chintalapani, Gouthami; Jiang, Jingfeng

    2016-01-01

    This study aims to develop an alternative vortex analysis method by measuring the structure of intracranial aneurysm (IA) flow vortexes across the cardiac cycle, to quantify the temporal stability of aneurismal flow. Hemodynamics were modeled in "patient-specific" geometries using computational fluid dynamics (CFD) simulations. Modified versions of the known λ2 and Q-criterion methods identified vortex regions; the regions were then segmented out using the classical marching cube algorithm. Temporal stability was measured by the degree of vortex overlap (DVO) at each step of a cardiac cycle against a cycle-averaged vortex and by the change in the number of cores over the cycle. No statistical differences exist in DVO or number of vortex cores between 5 terminal IAs and 5 sidewall IAs. No strong correlation exists between vortex core characteristics and geometric or hemodynamic characteristics of IAs. Statistical independence suggests this proposed method may provide novel IA information. However, the threshold values used to determine the vortex core regions and the resolution of the velocity data influenced analysis outcomes and have to be addressed in future studies. In conclusion, preliminary results show that the proposed methodology may help give novel insight toward aneurismal flow characteristics and help in future risk assessment, given further development.
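
    The degree of vortex overlap lends itself to a compact computation once vortex regions are available as voxel masks. The sketch below assumes a Jaccard-style definition (intersection over union against the cycle-averaged vortex); the paper's exact normalization may differ, and the masks here are random stand-ins for segmented vortex cores.

        import numpy as np

        def dvo(mask_t, mask_avg):
            """Overlap of a time-step vortex mask with the cycle-averaged mask."""
            inter = np.logical_and(mask_t, mask_avg).sum()
            union = np.logical_or(mask_t, mask_avg).sum()
            return inter / union if union else 0.0

        rng = np.random.default_rng(0)
        avg = rng.random((32, 32, 32)) > 0.7      # stand-in cycle-averaged vortex
        step = np.roll(avg, shift=1, axis=0)      # stand-in instantaneous vortex
        print(f"DVO = {dvo(step, avg):.2f}")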

  6. Non-Linear Structural Dynamics Characterization using a Scanning Laser Vibrometer

    NASA Technical Reports Server (NTRS)

    Pai, P. F.; Lee, S.-Y.

    2003-01-01

    This paper presents the use of a scanning laser vibrometer and a signal decomposition method to characterize the non-linear dynamics of highly flexible structures. A Polytec PI PSV-200 scanning laser vibrometer is used to measure transverse velocities of points on a structure subjected to a harmonic excitation. Velocity profiles at different times are constructed using the measured velocities, and each velocity profile is then decomposed using the first four linear mode shapes and a least-squares curve-fitting method. From the variations of the obtained modal velocities with time we search for possible non-linear phenomena. A cantilevered titanium alloy beam subjected to harmonic base-excitations around the second, third, and fourth natural frequencies is examined in detail. Influences of the fixture mass, gravity, mass centers of mode shapes, and non-linearities are evaluated. Geometrically exact equations governing the planar, harmonic large-amplitude vibrations of beams are solved for operational deflection shapes using the multiple shooting method. Experimental results show the existence of 1:3 and 1:2:3 external and internal resonances, energy transfer from high-frequency modes to the first mode, and amplitude- and phase-modulation among several modes. Moreover, the existence of non-linear normal modes is found to be questionable.

  7. Measuring the degree of integration for an integrated service network

    PubMed Central

    Ye, Chenglin; Browne, Gina; Grdisa, Valerie S; Beyene, Joseph; Thabane, Lehana

    2012-01-01

    Background Integration involves the coordination of services provided by autonomous agencies and improves the organization and delivery of multiple services for target patients. Current measures generally do not distinguish between agencies’ perception and expectation. We propose a method for quantifying the agencies’ service integration. Using data from the Children’s Treatment Network (CTN), we aimed to measure the degree of integration for the CTN agencies in York and Simcoe. Theory and methods We quantified integration as the agreement between perceived and expected levels of involvement and calculated four scores from different perspectives for each agency. We used the average score to measure global network integration and examined the sensitivity of the global score. Results Most agencies’ integration scores were <65%. As measured by the agreement between every other agency’s perception and expectation, the overall integration of CTN in Simcoe and York was 44% (95% CI: 39%–49%) and 52% (95% CI: 48%–56%), respectively. The sensitivity analysis showed that the global scores were robust. Conclusion Our method extends existing measures of integration and shows a good degree of validity. The method can also be applied to monitor improvement and to link integration with other outcomes. PMID:23593050
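
    A minimal sketch of the scoring idea, under the assumption that integration is quantified as percentage agreement between perceived and expected involvement levels reported on a shared ordinal scale (the study computes four such scores per agency and averages them for the global score):

        import numpy as np

        def integration_score(perceived, expected):
            """Percent of partner agencies where perception matches expectation."""
            return 100.0 * np.mean(np.asarray(perceived) == np.asarray(expected))

        # Hypothetical involvement levels (0 = none ... 3 = full collaboration)
        perceived = [2, 1, 3, 0, 2, 2, 1]
        expected = [2, 2, 3, 1, 2, 1, 1]
        print(f"agency integration score = {integration_score(perceived, expected):.0f}%")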

  8. Compression Frequency Choice for Compression Mass Gauge Method and Effect on Measurement Accuracy

    NASA Astrophysics Data System (ADS)

    Fu, Juan; Chen, Xiaoqian; Huang, Yiyong

    2013-12-01

    Gauging the liquid fuel mass in a tank on a spacecraft under microgravity conditions is a difficult job. Without the presence of strong buoyancy, the configuration of the liquid and gas in the tank is uncertain, and more than one bubble may exist in the liquid part. All of this affects the measurement accuracy of liquid mass gauging, especially for the method called the Compression Mass Gauge (CMG). Four resonance sources affect the choice of compression frequency for the CMG method: structural resonance, liquid sloshing, transducer resonance and bubble resonance. Ground experimental apparatus were designed and built to validate the gauging method and the influence of different compression frequencies at different fill levels on the measurement accuracy. Harmonic phenomena should be considered during filter design when processing test data. Results demonstrate that the ground experiment system performs well with high accuracy, and that the measurement accuracy increases as the compression frequency climbs at low fill levels. Low compression frequencies, however, are the better choice for high fill levels. Liquid sloshing degrades the measurement accuracy when the surface is excited into waves by external disturbance at the liquid natural frequency. The measurement accuracy is still acceptable under small-amplitude vibration.
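
    The CMG principle itself is compact: compressing the ullage gas by a known volume change dV and reading the pressure response dP yields the gas volume, and hence the liquid volume and mass. A minimal sketch assuming an isothermal ideal gas (real CMG processing must also handle the resonance and sloshing effects discussed above):

        def liquid_mass_cmg(V_tank, P0, dP, dV, rho_liquid):
            """
            V_tank: total tank volume (m^3); P0: ullage pressure (Pa)
            dP: pressure rise for a compression stroke of dV (m^3)
            rho_liquid: propellant density (kg m^-3)
            """
            V_gas = dV * (P0 + dP) / dP  # from P0 * V_gas = (P0 + dP) * (V_gas - dV)
            return rho_liquid * (V_tank - V_gas)

        # Hypothetical numbers: 1 m^3 tank, 0.1 L stroke, 250 Pa response
        print(f"{liquid_mass_cmg(1.0, 2.0e5, 250.0, 1.0e-4, 800.0):.1f} kg")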

  9. Monitoring of catalyst performance in CO2 lasers using frequency modulation spectroscopy with diode lasers

    NASA Technical Reports Server (NTRS)

    Wang, Liang-Guo; Sachse, Glen

    1990-01-01

    Closed-cycle CO2 laser operation with removal of O2 and regeneration of CO2 can be achieved by catalytic CO-O2 recombination. Both parametric studies of the optimum catalyst formulation and long-term performance tests require on-line monitoring of CO, O2 and CO2 concentrations. Several methods exist for molecular oxygen detection, but they are either intrusive (such as the electrochemical method or mass spectrometry) or very expensive (such as CARS or UV laser absorption). Researchers demonstrated a high-sensitivity spectroscopic measurement of O2 using the two-tone frequency modulation spectroscopy (FMS) technique with a near-infrared GaAlAs diode laser. Besides its low cost, fast response time, nonintrusive measurement and high sensitivity, this technique may also be used to differentiate between isotopes owing to its high spectroscopic resolution. The same frequency modulation spectroscopy technique could also be applied to the on-line monitoring of CO and CO2, using InGaAsP diode lasers operating in the 1.55 micron region, and of H2O in the 1.3 micron region. The existence of single-mode optical fibers in the near-infrared region makes it possible to combine FMS with optical fiber technology. Optical fiber FMS is particularly suitable for making point measurements at one or more locations in the CO2 laser/catalyst system.

  10. An easy tool to assess ventilation in health facilities as part of air-borne transmission prevention: a cross-sectional survey from Uganda.

    PubMed

    Brouwer, Miranda; Katamba, Achilles; Katabira, Elly Tebasoboke; van Leth, Frank

    2017-05-03

    No guidelines exist on assessing ventilation through air changes per hour (ACH) using a vaneometer. The objective of the study was to evaluate the position and frequency for measuring air velocity using a vaneometer to assess ventilation with ACH, and to assess the influence of ambient temperature and weather on ACH. Cross-sectional survey in six urban health facilities in Kampala, Uganda. Measurements consisted of taking air velocity at nine separate moments at five positions in each opening of the TB clinic, laboratory, outpatient consultation room and outpatient waiting room using a vaneometer. In addition, we assessed ventilation with the "20% rule" and compared this estimation with the ventilation in ACH assessed using the vaneometer. A total of 189 measurements showed no influence of measurement position or moment on air velocity. Ambient temperature had no significant influence; sunny weather had a small but significant one. Ventilation was adequate in 17/24 (71%) of all measurements. Using the "20% rule", ventilation was adequate in 50% of the rooms assessed. The two methods agreed in 13/23 (56%) of the rooms assessed. Most rooms had adequate ventilation when assessed using a vaneometer for measuring air velocity. A single vaneometer measurement of air velocity is adequate to assess ventilation in this setting. These findings provide practical input for clear guidelines on assessing ventilation using a vaneometer. Assessing ventilation with a vaneometer differs substantially from applying the "20% rule".
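
    As a sketch of the underlying conversion (assumed formula, not taken from the paper): airflow through each opening is velocity times area, and ACH is total airflow times 3600 over room volume. The 12 ACH adequacy threshold below is a common WHO guidance figure for airborne precautions; the study's exact cut-off may differ, and all numbers are hypothetical.

        def air_changes_per_hour(openings, room_volume_m3):
            """openings: list of (mean air velocity m/s, opening area m^2) pairs."""
            flow = sum(v * a for v, a in openings)  # total airflow, m^3/s
            return flow * 3600.0 / room_volume_m3

        room = 4.0 * 5.0 * 3.0  # hypothetical 60 m^3 consultation room
        ach = air_changes_per_hour([(0.10, 1.5), (0.05, 2.0)], room)
        print(f"ACH = {ach:.1f} -> {'adequate' if ach >= 12 else 'inadequate'}")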

  11. Is the societal approach wide enough to include relatives? Incorporating relatives' costs and effects in a cost-effectiveness analysis.

    PubMed

    Davidson, Thomas; Levin, Lars-Ake

    2010-01-01

    It is important for economic evaluations in healthcare to cover all relevant information. However, many existing evaluations fall short of this goal, as they fail to include all the costs and effects for the relatives of a disabled or sick individual. The objective of this study was to analyse how relatives' costs and effects could be measured, valued and incorporated into a cost-effectiveness analysis. In this article, we discuss the theories underlying cost-effectiveness analyses in the healthcare arena; the general conclusion is that it is hard to find theoretical arguments for excluding relatives' costs and effects if a societal perspective is used. We argue that the cost of informal care should be calculated according to the opportunity cost method. To capture relatives' effects, we construct a new term, the R-QALY weight, which is defined as the effect on relatives' QALY weight of being related to a disabled or sick individual. We examine methods for measuring, valuing and incorporating the R-QALY weights. One suggested method is to estimate R-QALYs and incorporate them together with the patient's QALY in the analysis. However, there is no well established method as yet that can create R-QALY weights. One difficulty with measuring R-QALY weights using existing instruments is that these instruments are rarely focused on relative-related aspects. Even if generic quality-of-life instruments do cover some aspects relevant to relatives and caregivers, they may miss important aspects and potential altruistic preferences. A further development and validation of the existing caregiving instruments used for eliciting utility weights would therefore be beneficial for this area, as would further studies on the use of time trade-off or Standard Gamble methods for valuing R-QALY weights. Another potential method is to use the contingent valuation method to find a monetary value for all the relatives' costs and effects. Because cost-effectiveness analyses are used for decision making, and this is often achieved by comparing different cost-effectiveness ratios, we argue that it is important to find ways of incorporating all relatives' costs and effects into the analysis. This may not be necessary for every analysis of every intervention, but for treatments where relatives' costs and effects are substantial there may be some associated influence on the cost-effectiveness ratio.

  12. Improving Classification of Protein Interaction Articles Using Context Similarity-Based Feature Selection.

    PubMed

    Chen, Yifei; Sun, Yuxing; Han, Bing-Qing

    2015-01-01

    Protein interaction article classification is a text classification task in the biological domain to determine which articles describe protein-protein interactions. Since the feature space in text classification is high-dimensional, feature selection is widely used to reduce the dimensionality of the features and speed up computation without sacrificing classification performance. Many existing feature selection methods are based on the statistical measures of document frequency and term frequency, and one potential drawback of these methods is that they treat features separately. Hence, we first design a similarity measure between the context information to take word co-occurrences and phrase chunks around the features into account. Then we introduce the similarity of context information into the importance measure of the features to substitute for document and term frequency. On this basis we propose new context similarity-based feature selection methods. Their performance is evaluated on two protein interaction article collections and compared against the frequency-based methods. The experimental results reveal that the context similarity-based methods perform better in terms of the F1 measure and the dimension reduction rate. Benefiting from the context information surrounding the features, the proposed methods can select distinctive features effectively for protein interaction article classification.

  13. simDEF: definition-based semantic similarity measure of gene ontology terms for functional similarity analysis of genes.

    PubMed

    Pesaranghader, Ahmad; Matwin, Stan; Sokolova, Marina; Beiko, Robert G

    2016-05-01

    Measures of protein functional similarity are essential tools for function prediction, evaluation of protein-protein interactions (PPIs) and other applications. Several existing methods perform comparisons between proteins based on the semantic similarity of their GO terms; however, these measures are highly sensitive to modifications in the topological structure of GO, tend to be focused on specific analytical tasks and concentrate on the GO terms themselves rather than considering their textual definitions. We introduce simDEF, an efficient method for measuring the semantic similarity of GO terms using their GO definitions, which is based on the Gloss Vector measure commonly used in natural language processing. The simDEF approach builds optimized definition vectors for all relevant GO terms and expresses the similarity of a pair of proteins as the cosine of the angle between their definition vectors. Relative to existing similarity measures, when validated on a yeast reference database, simDEF improves correlation with sequence homology by up to 50%, shows a correlation improvement >4% with gene expression in the biological process hierarchy of GO, and increases PPI predictability by >2.5% in F1 score for the molecular function hierarchy. Datasets, results and source code are available at http://kiwi.cs.dal.ca/Software/simDEF. Contact: ahmad.pgh@dal.ca or beiko@cs.dal.ca. Supplementary data are available at Bioinformatics online. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
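
    The similarity step at the heart of simDEF reduces to a cosine between definition vectors. A minimal sketch with toy bag-of-words counts standing in for the optimized Gloss Vectors the paper builds:

        import numpy as np

        def cosine(u, v):
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

        # Hypothetical definition vectors over a shared vocabulary
        vec_term_a = np.array([3, 0, 1, 2, 0], dtype=float)
        vec_term_b = np.array([2, 1, 0, 2, 1], dtype=float)
        print(f"simDEF-style similarity = {cosine(vec_term_a, vec_term_b):.3f}")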

  14. Design and Testing of a Tool for Evaluating the Quality of Diabetes Consumer-Information Web Sites

    PubMed Central

    Steinwachs, Donald; Rubin, Haya R

    2003-01-01

    Background Most existing tools for measuring the quality of Internet health information focus almost exclusively on structural criteria or other proxies for quality information rather than evaluating actual accuracy and comprehensiveness. Objective This research sought to develop a new performance-measurement tool for evaluating the quality of Internet health information, test the validity and reliability of the tool, and assess the variability in diabetes Web site quality. Methods An objective, systematic tool was developed to evaluate Internet diabetes information based on a quality-of-care measurement framework. The principal investigator developed an abstraction tool and trained an external reviewer on its use. The tool included 7 structural measures and 34 performance measures created by using evidence-based practice guidelines and experts' judgments of accuracy and comprehensiveness. Results Substantial variation existed in all categories, with overall scores following a normal distribution and ranging from 15% to 95% (mean was 50% and median was 51%). Lin's concordance correlation coefficient to assess agreement between raters produced a rho of 0.761 (Pearson's r of 0.769), suggesting moderate to high agreement. The average agreement between raters for the performance measures was 0.80. Conclusions Diabetes Web site quality varies widely. Alpha testing of this new tool suggests that it could become a reliable and valid method for evaluating the quality of Internet health sites. Such an instrument could help lay people distinguish between beneficial and misleading information. PMID:14713658

  15. Computation and measurement of cell decision making errors using single cell data

    PubMed Central

    Habibi, Iman; Cheong, Raymond; Levchenko, Andre; Emamian, Effat S.; Abdi, Ali

    2017-01-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell’s inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves. PMID:28379950

  16. Computation and measurement of cell decision making errors using single cell data.

    PubMed

    Habibi, Iman; Cheong, Raymond; Lipniacki, Tomasz; Levchenko, Andre; Emamian, Effat S; Abdi, Ali

    2017-04-01

    In this study a new computational method is developed to quantify decision making errors in cells, caused by noise and signaling failures. Analysis of tumor necrosis factor (TNF) signaling pathway which regulates the transcription factor Nuclear Factor κB (NF-κB) using this method identifies two types of incorrect cell decisions called false alarm and miss. These two events represent, respectively, declaring a signal which is not present and missing a signal that does exist. Using single cell experimental data and the developed method, we compute false alarm and miss error probabilities in wild-type cells and provide a formulation which shows how these metrics depend on the signal transduction noise level. We also show that in the presence of abnormalities in a cell, decision making processes can be significantly affected, compared to a wild-type cell, and the method is able to model and measure such effects. In the TNF-NF-κB pathway, the method computes and reveals changes in false alarm and miss probabilities in A20-deficient cells, caused by cell's inability to inhibit TNF-induced NF-κB response. In biological terms, a higher false alarm metric in this abnormal TNF signaling system indicates perceiving more cytokine signals which in fact do not exist at the system input, whereas a higher miss metric indicates that it is highly likely to miss signals that actually exist. Overall, this study demonstrates the ability of the developed method for modeling cell decision making errors under normal and abnormal conditions, and in the presence of transduction noise uncertainty. Compared to the previously reported pathway capacity metric, our results suggest that the introduced decision error metrics characterize signaling failures more accurately. This is mainly because while capacity is a useful metric to study information transmission in signaling pathways, it does not capture the overlap between TNF-induced noisy response curves.
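
    The two error metrics map directly onto a thresholded decision rule. A minimal sketch under that assumption (synthetic response values stand in for single-cell NF-κB measurements; the threshold and distributions are invented):

        import numpy as np

        rng = np.random.default_rng(1)
        resp_no_tnf = rng.normal(0.2, 0.1, 5000)  # stand-in responses, no TNF input
        resp_tnf = rng.normal(0.8, 0.3, 5000)     # stand-in responses, TNF present

        threshold = 0.5  # decision boundary: "declare a signal" above this
        p_false_alarm = np.mean(resp_no_tnf > threshold)
        p_miss = np.mean(resp_tnf <= threshold)
        print(f"P(false alarm) = {p_false_alarm:.3f}, P(miss) = {p_miss:.3f}")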

  17. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is increasing interest in three-dimensional synthetic aperture radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires long computing times and large storage capacity. In this paper, we propose a modified method for sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slice reconstruction problems by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses a hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm, and uses the Newton direction instead of the steepest descent direction, which speeds up the convergence of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with the existing 3-D sparse imaging method, performs better in reconstruction quality and reconstruction time.
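
    A minimal SL0-style sketch of the reconstruction step, using a tanh surrogate for the l0 norm as the abstract describes, but with plain gradient steps in place of the authors' Newton direction and projection back onto the measurement constraint:

        import numpy as np

        def sl0_tanh(A, y, sigma_decay=0.7, inner=20, mu=1.0):
            A_pinv = np.linalg.pinv(A)
            s = A_pinv @ y                 # minimum-l2 feasible starting point
            sigma = 2.0 * np.max(np.abs(s))
            while sigma > 1e-4:
                for _ in range(inner):
                    t = s ** 2 / (2 * sigma ** 2)
                    grad = (s / sigma ** 2) / np.cosh(t) ** 2  # d tanh(t)/ds
                    s = s - mu * sigma ** 2 * grad      # descend on l0 surrogate
                    s = s - A_pinv @ (A @ s - y)        # project onto A s = y
                sigma *= sigma_decay                    # anneal sigma downward
            return s

        rng = np.random.default_rng(0)
        A = rng.normal(size=(30, 100))
        x_true = np.zeros(100)
        x_true[[5, 40, 77]] = [1.0, -2.0, 0.5]
        x_hat = sl0_tanh(A, A @ x_true)
        print("max abs error:", np.max(np.abs(x_hat - x_true)))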

  18. Subcopula-based measure of asymmetric association for contingency tables.

    PubMed

    Wei, Zheng; Kim, Daeyoung

    2017-10-30

    For the analysis of a two-way contingency table, a new asymmetric association measure is developed. The proposed method uses the subcopula-based regression between the discrete variables to measure the asymmetric predictive powers of the variables of interest. Unlike the existing measures of asymmetric association, the subcopula-based measure is insensitive to the number of categories in a variable, and thus, the magnitude of the proposed measure can be interpreted as the degree of asymmetric association in the contingency table. The theoretical properties of the proposed subcopula-based asymmetric association measure are investigated. We illustrate the performance and advantages of the proposed measure using simulation studies and real data examples. Copyright © 2017 John Wiley & Sons, Ltd.

  19. Experimental evidence for simultaneous relaxation processes in super spin glass γ-Fe2O3 nanoparticle system

    NASA Astrophysics Data System (ADS)

    Nikolic, V.; Perovic, M.; Kusigerski, V.; Boskovic, M.; Mrakovic, A.; Blanusa, J.; Spasojevic, V.

    2015-03-01

    Spherical γ-Fe2O3 nanoparticles with a narrow size distribution of (5 ± 1) nm were synthesized by the method of thermal decomposition from an iron acetylacetonate precursor. The existence of a super spin-glass state at low temperatures and in low applied magnetic fields was confirmed by DC magnetization measurements on a SQUID magnetometer. A comprehensive investigation of the magnetic relaxation dynamics in the low-temperature region was conducted through measurements of single-stop and multiple-stop ZFC memory effects, ZFC magnetization relaxation, and AC susceptibility. The experimental findings revealed a peculiar change of the magnetic relaxation dynamics at T ≈ 10 K, which arose as a consequence of the simultaneous existence of different relaxation processes in the γ-Fe2O3 nanoparticle system. The complementarity of the applied measurements was utilized to single out distinct relaxation processes as well as to elucidate the complex relaxation mechanisms in the investigated interacting nanoparticle system.

  20. The Academic Ethic and College Grades: Does Hard Work Help Students To "Make the Grade"?

    ERIC Educational Resources Information Center

    Rau, William; Durand, Ann

    2000-01-01

    Demonstrates how "academic ethic" (a student world view that emphasizes diligent, daily, and sober study) can be operationalized and measured. Provides evidence for its existence among students at Illinois State University. Finds a relationship between methodical, disciplined study and academic performance. (Contains references.) (CMK)

  1. Financing Lifelong Learning for All: An International Perspective. Working Paper.

    ERIC Educational Resources Information Center

    Burke, Gerald

    Recent international discussions provide information on various countries' responses to lifelong learning, including the following: (1) existing unmet needs and emerging needs for education and training; (2) funds required compared with what was provided; and (3) methods for acquiring additional funds, among them efficiency measures leading to…

  2. 30 CFR 585.633 - How do I comply with my COP?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 585.633 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and... must make recommendations for new mitigation measures or monitoring methods. (c) As provided at § 585...

  3. 30 CFR 585.633 - How do I comply with my COP?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 585.633 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and... must make recommendations for new mitigation measures or monitoring methods. (c) As provided at § 585...

  4. 30 CFR 585.633 - How do I comply with my COP?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 585.633 Mineral Resources BUREAU OF OCEAN ENERGY MANAGEMENT, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY AND ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and... must make recommendations for new mitigation measures or monitoring methods. (c) As provided at § 585...

  5. 30 CFR 285.633 - How do I comply with my COP?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 285.633 Mineral Resources MINERALS MANAGEMENT SERVICE, DEPARTMENT OF THE INTERIOR OFFSHORE RENEWABLE ENERGY ALTERNATE USES OF EXISTING FACILITIES ON THE OUTER CONTINENTAL SHELF Plans and Information... recommendations for new mitigation measures or monitoring methods. (c) As provided at § 285.105(i), MMS may...

  6. Evidence-Based Clinical Voice Assessment: A Systematic Review

    ERIC Educational Resources Information Center

    Roy, Nelson; Barkmeier-Kraemer, Julie; Eadie, Tanya; Sivasankar, M. Preeti; Mehta, Daryush; Paul, Diane; Hillman, Robert

    2013-01-01

    Purpose: To determine what research evidence exists to support the use of voice measures in the clinical assessment of patients with voice disorders. Method: The American Speech-Language-Hearing Association (ASHA) National Center for Evidence-Based Practice in Communication Disorders staff searched 29 databases for peer-reviewed English-language…

  7. Using Qualitative Methods for Revising Items in the Hispanic Stress Inventory

    ERIC Educational Resources Information Center

    Cervantes, Richard C.; Goldbach, Jeremy T.; Padilla, Amado M.

    2012-01-01

    Despite progress in the development of measures to assess psychosocial stress experiences in the general population, a lack of culturally informed assessment instruments exists to enable clinicians and researchers to detect and accurately diagnose mental health concerns among Hispanics. The Hispanic Stress Inventory (HSI) was developed…

  8. Self-Assessment and Peer-Assessment in an EFL Context

    ERIC Educational Resources Information Center

    Yamini, Morteza; Tahmasebi, Soheila

    2012-01-01

    Salient in an EFL teaching context is students' dissatisfaction with their final scores, especially in oral courses. This study tried to bridge the gap between students' and teachers' rating systems through alternatives to existing measurement methods. Task-based language assessment has stimulated language teachers to question the way through which…

  9. Solar oscillation time delay measurement assisted celestial navigation method

    NASA Astrophysics Data System (ADS)

    Ning, Xiaolin; Gui, Mingzhen; Zhang, Jie; Fang, Jiancheng; Liu, Gang

    2017-05-01

    Solar oscillation, which causes the sunlight intensity and spectral frequency to change, has been studied in great detail, both observationally and theoretically. In this paper, owing to the existence of solar oscillation, the time delay between the sunlight coming directly from the Sun and the sunlight reflected by another celestial body, such as a planetary satellite or an asteroid, can be obtained with two optical power meters. Because the solar oscillation time delay is determined by the relative positions of the spacecraft, the reflective celestial body and the Sun, it can be adopted as a navigation measurement to estimate the spacecraft's position. The navigation accuracy of a single solar oscillation time delay navigation system depends on the time delay measurement accuracy and is influenced by the distance between the spacecraft and the reflective celestial body. In this paper, we combine it with the star angle measurement and propose a solar oscillation time delay measurement assisted celestial navigation method for deep space exploration. Since the measurement model of the time delay is an implicit function, the Implicit Unscented Kalman Filter (IUKF) is applied. Simulations demonstrate the effectiveness and superiority of this method.
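
    The measurement geometry is simple to state: reflected sunlight travels Sun -> reflector -> spacecraft while direct sunlight travels Sun -> spacecraft, so the observable delay is the path difference divided by the speed of light. A minimal sketch with invented positions:

        import numpy as np

        C_KM_S = 299792.458  # speed of light, km/s

        def oscillation_time_delay(r_sun, r_reflector, r_sc):
            """Positions in km in any common frame; returns the delay in seconds."""
            d_reflected = (np.linalg.norm(r_reflector - r_sun)
                           + np.linalg.norm(r_sc - r_reflector))
            d_direct = np.linalg.norm(r_sc - r_sun)
            return (d_reflected - d_direct) / C_KM_S

        r_sun = np.zeros(3)
        r_reflector = np.array([2.279e8, 0.0, 0.0])  # hypothetical reflecting body
        r_sc = np.array([2.270e8, 5.0e6, 0.0])       # hypothetical spacecraft
        print(f"delay = {oscillation_time_delay(r_sun, r_reflector, r_sc):.2f} s")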

  10. A Developed Meta-model for Selection of Cotton Fabrics Using Design of Experiments and TOPSIS Method

    NASA Astrophysics Data System (ADS)

    Chakraborty, Shankar; Chatterjee, Prasenjit

    2017-12-01

    Selection of cotton fabrics for providing optimal clothing comfort is often considered as a multi-criteria decision making problem, consisting of an array of candidate alternatives to be evaluated based on several conflicting properties. In this paper, design of experiments and the technique for order preference by similarity to ideal solution (TOPSIS) are integrated so as to develop regression meta-models for identifying the most suitable cotton fabrics with respect to the computed TOPSIS scores. The applicability of the adopted method is demonstrated using two real examples. The developed models can also identify the statistically significant fabric properties and their interactions affecting the measured TOPSIS scores and final selection decisions. There exists a good degree of congruence between the ranking patterns derived using these meta-models and the existing methods for cotton fabric ranking and subsequent selection.
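
    TOPSIS itself follows a fixed recipe: normalize the decision matrix, weight it, locate the ideal and anti-ideal points, and rank alternatives by relative closeness. A minimal generic sketch (not the paper's meta-model; fabric property values, weights, and criterion directions are invented):

        import numpy as np

        def topsis(matrix, weights, benefit):
            """matrix: alternatives x criteria; benefit[j] True if larger is better."""
            v = matrix / np.linalg.norm(matrix, axis=0) * weights  # weighted normalized
            ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
            anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
            d_pos = np.linalg.norm(v - ideal, axis=1)
            d_neg = np.linalg.norm(v - anti, axis=1)
            return d_neg / (d_pos + d_neg)  # closeness score in [0, 1]

        fabrics = np.array([[150.0, 0.45, 62.0],   # e.g. weight, permeability, softness
                            [120.0, 0.60, 55.0],
                            [180.0, 0.30, 70.0]])
        scores = topsis(fabrics, np.array([0.3, 0.4, 0.3]), np.array([False, True, True]))
        print("TOPSIS scores:", scores.round(3), "-> best fabric:", scores.argmax() + 1)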

  11. Exploring the Implications of N Measurement and Model Choice on Using Data for Policy and Land Management Decisions

    NASA Astrophysics Data System (ADS)

    Bell, M. D.; Walker, J. T.

    2017-12-01

    Atmospheric deposition of nitrogen compounds is determined using a variety of measurement and modeling methods. These values are then used to calculate fluxes to the ecosystem, which can then be linked to ecological responses. However, for these data to be used outside the system in which they were developed, it is necessary to understand how the deposition estimates relate to one another. Therefore, we first identified sources of "bulk" deposition data and compared methods, reliability of data, and consistency of results with one another. Then we looked at the variation within photochemical models that are used by Federal Agencies to evaluate national trends. Finally, we identified some best practices for researchers to consider if their assessment is intended for use at broader scales. Empirical measurements used in this assessment include passive collection of atmospheric molecules, throughfall deposition of precipitation, snowpack measurements, and biomonitors such as lichen. The three most common photochemical models used to estimate deposition within the United States are CMAQ, CAMx, and TDep (which uses empirical data to refine modeled values). These models all use meteorological and emission data to estimate deposition at local, regional, or national scales. We identified the range of uncertainty that exists within the types of deposition measurements and how these vary over space and time. Uncertainty is assessed by comparing deposition estimates from differing collection methods and comparing modeled estimates to empirical deposition data. Each collection method has benefits and drawbacks that need to be taken into account if the results are to be extended outside the research area. Comparing field-measured values to modeled values highlights the importance of each in the greater goal of understanding current conditions and trends in deposition patterns in the US. While models work well on a larger scale, they cannot replicate the local heterogeneity that exists at a site. Often, each researcher has a favorite method of analysis, but if the data cannot be related to other efforts, it becomes harder to apply them to broader policy considerations.

  12. Strain gage measurement errors in the transient heating of structural components

    NASA Technical Reports Server (NTRS)

    Richards, W. Lance

    1993-01-01

    Significant strain-gage errors may exist in measurements acquired in transient thermal environments if conventional correction methods are applied. Conventional correction theory was modified, and a new experimental method was developed to correct indicated strain data for errors created in radiant heating environments ranging from 0.6 C/sec (1 F/sec) to over 56 C/sec (100 F/sec). In some cases the new and conventional methods differed by as much as 30 percent. Experimental and analytical results were compared to demonstrate the new technique. For heating rates greater than 6 C/sec (10 F/sec), the indicated strain data corrected with the developed technique agreed much more closely with analysis than the same data corrected with the conventional technique.

  13. Terrain and refractivity effects on non-optical paths

    NASA Astrophysics Data System (ADS)

    Barrios, Amalia E.

    1994-07-01

    The split-step parabolic equation (SSPE) has been used extensively to model tropospheric propagation over the sea, but recent efforts have extended this method to propagation over arbitrary terrain. At the Naval Command, Control and Ocean Surveillance Center (NCCOSC), Research, Development, Test and Evaluation Division, a split-step Terrain Parabolic Equation Model (TPEM) has been developed that takes into account variable terrain and range-dependent refractivity profiles. While TPEM has previously been shown to compare favorably with measured data and other existing terrain models, two alternative methods for modeling radiowave propagation over terrain, implemented within TPEM, are presented that yield a two- to ten-fold decrease in execution time. These two methods are also shown to agree well with measured data.
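
    To illustrate the split-step idea behind SSPE-type models, here is a minimal sketch of a split-step Fourier march of the narrow-angle parabolic equation in Python. It propagates a Gaussian aperture field through free space (n = 1); all parameters are illustrative, and TPEM's terrain handling and refinements are omitted (a crude terrain treatment would, for example, zero the field below the terrain height at each range step).

      import numpy as np

      # Minimal split-step Fourier solution of the narrow-angle parabolic
      # equation for a field u(x, z); free space here, so only diffraction acts.
      # This is a generic illustration, not the TPEM code itself.

      wavelength = 0.1                      # 3 GHz, metres
      k0 = 2 * np.pi / wavelength
      nz, dz = 2048, 0.5                    # vertical grid
      dx = 50.0                             # range step, metres
      z = np.arange(nz) * dz
      p = 2 * np.pi * np.fft.fftfreq(nz, d=dz)   # vertical wavenumbers, rad/m

      # Gaussian antenna aperture centred at 50 m height.
      u = np.exp(-((z - 50.0) ** 2) / (2 * 5.0 ** 2)).astype(complex)

      n = np.ones(nz)                       # refractive index profile (modify for ducts)
      env_phase = np.exp(1j * k0 * (n**2 - 1) / 2 * dx)   # refraction half of the split
      diff_phase = np.exp(-1j * p**2 * dx / (2 * k0))     # diffraction half (spectral)

      for _ in range(200):                  # march 200 x 50 m = 10 km in range
          u = np.fft.ifft(diff_phase * np.fft.fft(u))     # propagate in spectral domain
          u = env_phase * u                                # apply refractive phase screen

      print(np.abs(u).max())                # peak field magnitude after 10 km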

  14. Measurement of charge transport through organic semiconducting devices

    NASA Astrophysics Data System (ADS)

    Klenkler, Richard A.

    2007-12-01

    In this thesis, two important and unexplored areas of organic semiconductor device physics are investigated. The first area involves determining the effect of energy barriers and intermixing at the interfaces between hole transport layers (HTLs). This effect was discerned by first establishing a method of pressure-laminating successive solution-coated HTLs together. It was found that in the range of 0.8-3.0 MPa, a pressure-laminated interface between two identical HTLs causes no measurable perturbation to charge transport. By this method, two different HTLs can be sandwiched together to create a discrete interface, and by inserting a mixed HTL in the middle, an intermixed interface between the two HTLs can be simulated. With these sandwiched devices, charge transport across discrete versus intermixed interfaces was compared using time-of-flight measurements. For the hole transport materials investigated, no perturbation to the overall charge transport was observed with the discrete interface; in contrast, the rate of charge transport was clearly reduced through the intermixed interface. The second area investigated pertains to the development of a bulk mobility measurement technique with higher resolution than existing methods. The approach involved decoupling the charge carrier transient signal from the device charging circuit, which eliminates the RC time constant constraint that limits the resolution of existing methods. The resulting method, termed the photoinduced electroluminescence (EL) mobility measurement technique, was then used to compare the electron mobility of the metal chelate AlQ3 to that of the novel triazine material, BTB. Results showed that BTB has an order of magnitude higher mobility than AlQ3. Overall, these findings have broad implications for device design. The pressure-lamination method could be used, for example, as a diagnostic tool in the design of multilayer xerographic photoreceptors, such as those that include an abrasion-resistant overcoat. Further, the photoinduced EL technique could be used as a tool to characterize charge flow and balance in organic light-emitting devices, among others.
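
    In the simplest drift picture, a time-of-flight measurement yields the mobility as mu = d^2 / (V * t_T), where d is the transport layer thickness, V the applied bias, and t_T the measured transit time. A one-line sketch with illustrative numbers (not values from the thesis):

      d   = 5e-6    # transport layer thickness, m (illustrative)
      V   = 10.0    # applied bias, V
      t_T = 2.5e-6  # measured transit time, s
      mu = d**2 / (V * t_T)   # drift mobility: 1e-6 m^2/(V s) = 1e-2 cm^2/(V s)
      print(mu)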

  15. Improved non-destructive method for 90Sr activity determination in aqueous solutions using Monte Carlo simulation.

    PubMed

    Samardžić, Selena; Milošević, Miodrag; Todorović, Nataša; Lakatoš, Robert

    2018-04-04

    The development of new methods and the improvement of existing methods for determining the specific activity of 90Sr and other distinct beta emitters have been of considerable interest. The reason for this interest is the notably small number of methods able to meet all of the set criteria, such as reliability of the results, measurement uncertainty and time, and minimum production of radioactive waste, as well as applicability to various samples with reference to their nature, geometry and composition. In this paper, two methods for rapid 90Sr activity determination based on Monte Carlo simulations are used: one for a Si semiconductor detector for beta spectrometric measurements and the other for a Geiger-Muller (GM) ionization probe. To improve the reliability of the measurement results, samples with high- and low-activity strontium solutions were prepared in the form of dry residues. The results of the proposed methodology were verified against a standard method using a liquid scintillation counter, and notably good agreement was achieved. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. The Noninvasive Measurement of X-Ray Tube Potential.

    NASA Astrophysics Data System (ADS)

    Ranallo, Frank Nunzio

    In this thesis I briefly describe the design of clinical x-ray imaging systems and also the various methods of measuring x-ray tube potential, both invasive and noninvasive. I also discuss the meaning and usage of the quantities tube potential (kV) and peak tube potential (kVp) with reference to x-ray systems used in medical imaging. I propose that there exist several quantities which describe different important aspects of the tube potential as a function of time. These quantities are measurable and can be well defined. I have developed a list of definitions of these quantities along with suggested names and symbols. I describe the development and physical principles of a superior noninvasive method of tube potential measurement along with the instrumentation used to implement this method. This thesis research resulted in the development of several commercial kVp test devices (or "kVp Meters") for which the actual measurement procedure is simple, rapid, and reliable compared to other methods, invasive or noninvasive. These kVp test devices provide measurements with a high level of accuracy and reliability over a wide range of test conditions. They provide results which are more reliable and clinically meaningful than many other, more primary and invasive methods. The errors inherent in these new kVp test devices were investigated and methods to minimize them are discussed.

  17. The Generation of Novel MR Imaging Techniques to Visualize Inflammatory/Degenerative Mechanisms and the Correlation of MR Data with 3D Microscopic Changes

    DTIC Science & Technology

    2013-09-01

    existing MR scanning systems providing the ability to visualize structures that are impossible with current methods. Using techniques to concurrently ... and unique system for analysis of affected brain regions and coupled with other imaging techniques and molecular measurements holds significant ...

  18. Development of Novel Noninvasive Methods of Stress Assessment in Baleen Whales

    DTIC Science & Technology

    2015-09-30

    large whales. Few methods exist for assessment of physiological stress levels of free-swimming cetaceans (Amaral 2010, ONR 2010, Hunt et al. 2013) ... adrenal hormone aldosterone. Our aim in this project is to further develop both techniques - respiratory hormone analysis and fecal hormone analysis ... development of a noninvasive aldosterone assay (for both feces and blow) that can be used as an alternative measure of adrenal gland activation relative to ...

  19. Fiber-Optic Strain-Gage Tank Level Measurement System for Cryogenic Propellants

    NASA Technical Reports Server (NTRS)

    Figueroa, Fernando; Mitchell, Mark; Langford, Lester

    2004-01-01

    Measurement of tank level, particularly for cryogenic propellants, has proven to be a difficult problem. Current methods based on differential pressure, capacitance sensors, temperature sensors, etc., do not provide sufficiently accurate or robust measurements, especially at run time. These methods are designed to measure tank level, but when the fluids are in a supercritical state, the liquid-gas interface disappears. Furthermore, there is a need for a non-intrusive measurement system; that is, the sensors should not require tank modifications or disturb the fluids. This paper describes a simple but effective method to determine propellant mass by measuring the very small deformations of the structure supporting the tank. Results of a laboratory study to validate the method, and experimental data from a deployed system, are presented. A comparison with an existing differential pressure sensor shows that the strain gage system provides a much better quality signal across all regimes during an engine test. Experimental results also show that the use of fiber optic strain gages (FOSG) rather than classic foil strain gages extends the operating time (before the system loses calibration) and increases accuracy. Finally, a procedure is defined whereby measurements from the FOSG mounted on the tank supporting structure are compensated using measurements from a FOSG mounted on a reference plate and temperature measurements of the structure. Results describing the performance of a deployed system that measures tank level during propulsion tests are included.

  20. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapping blocks, and each block is weighted by the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities back to the detector. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, because it makes no special assumptions about the size and shape of small targets. Because of the exact decomposition, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter and detect and track size-varying targets in infrared images.
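
    As background for the decomposition step, the following is a generic textbook sketch of robust PCA solved by inexact augmented Lagrange multipliers (IALM), which splits an image into a low-rank background and a sparse target component. The paper's block-wise weighting and tracker feedback are omitted, and the toy data are made up.

      import numpy as np

      def rpca_ialm(D, lam=None, tol=1e-7, max_iter=500):
          """Robust PCA, D = L (low-rank background) + S (sparse targets),
          solved with inexact augmented Lagrange multipliers (IALM)."""
          m, n = D.shape
          lam = lam or 1.0 / np.sqrt(max(m, n))
          norm_D = np.linalg.norm(D, 'fro')
          Y = np.zeros_like(D)                     # Lagrange multiplier
          S = np.zeros_like(D)
          mu, rho = 1.25 / np.linalg.norm(D, 2), 1.5

          shrink = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)

          for _ in range(max_iter):
              # Low-rank update: singular value thresholding.
              U, s, Vt = np.linalg.svd(D - S + Y / mu, full_matrices=False)
              L = U @ np.diag(shrink(s, 1.0 / mu)) @ Vt
              # Sparse update: elementwise soft thresholding.
              S = shrink(D - L + Y / mu, lam / mu)
              # Dual update and continuation.
              residual = D - L - S
              Y += mu * residual
              mu *= rho
              if np.linalg.norm(residual, 'fro') / norm_D < tol:
                  break
          return L, S

      # Toy use: a smooth rank-1 "background" plus two bright point targets.
      bg = np.outer(np.linspace(1, 2, 64), np.linspace(1, 2, 64))
      img = bg.copy()
      img[10, 20] += 5.0
      img[40, 50] += 5.0
      L, S = rpca_ialm(img)
      print(np.unravel_index(np.argmax(S), S.shape))   # -> one of the target pixels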

  1. Covariance analysis for evaluating head trackers

    NASA Astrophysics Data System (ADS)

    Kang, Donghoon

    2017-10-01

    Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most of the existing publicly available face databases are constructed by assuming that a frontal head orientation can be established by compelling the person under examination to look straight ahead at the camera on the first video frame. Since nobody can direct their head toward the camera with perfect accuracy, this assumption may be unrealistic. Rather than reporting raw estimation errors, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of estimators, the Schatten 2-norm of a square root of the error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking them to direct their head in specific directions. Experimental results using real data validate the usefulness of our method.
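
    A minimal sketch of the uncertainty measure described above, under the assumption that error rotations are mapped to rotation vectors by the log map; the tracker outputs and ground truths below are synthetic.

      import numpy as np
      from scipy.spatial.transform import Rotation

      rng = np.random.default_rng(1)
      true = Rotation.random(200, random_state=1)
      noise = Rotation.from_rotvec(np.deg2rad(2.0) * rng.standard_normal((200, 3)))
      est = noise * true                     # simulated tracker estimates

      # Error rotation per frame, mapped to a 3-vector by the log map.
      err_vec = (est * true.inv()).as_rotvec()          # shape (200, 3), radians
      C = np.cov(err_vec.T)                             # 3x3 error covariance

      # Schatten 2-norm (Frobenius norm) of C^(1/2) equals sqrt(trace(C)).
      uncertainty = np.sqrt(np.trace(C))
      print(np.rad2deg(uncertainty), "deg")             # overall error-angle scale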

  2. CS_TOTR: A new vertex centrality method for directed signed networks based on status theory

    NASA Astrophysics Data System (ADS)

    Ma, Yue; Liu, Min; Zhang, Peng; Qi, Xingqin

    Measuring the importance (or centrality) of vertices in a network is a significant topic in complex network analysis, with applications in diverse domains such as disease control, rumor spreading, and viral marketing. Existing studies mainly focus on social networks with only positive (friendship) relations, while signed networks that also contain negative (enmity) relations are seldom studied. Yet signed networks are common in the real world, e.g., networks encoding friendship/enmity, love/hate, or trust/mistrust relationships. In this paper, we propose a new centrality method named CS_TOTR to rank the vertices of directed signed networks. To design this new method, we use the "status theory" of signed networks, and we also adopt the vertex ranking algorithm for tournaments and the topological sorting algorithm for general directed graphs. We apply this new centrality method to the famous Sampson Monastery dataset and obtain a convincing result demonstrating its validity.

  3. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    PubMed Central

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has become a challenging problem that has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to construct a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  4. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    PubMed

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has become a challenging problem that has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to construct a higher-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  5. A conjugate gradient method with descent properties under strong Wolfe line search

    NASA Astrophysics Data System (ADS)

    Zull, N.; ‘Aini, N.; Shoid, S.; Ghani, N. H. A.; Mohamed, N. S.; Rivaie, M.; Mamat, M.

    2017-09-01

    The conjugate gradient (CG) method is one of the optimization methods most often used in practical applications. The continuous and numerous studies conducted on the CG method have led to vast improvements in its convergence properties and efficiency. In this paper, a new CG method possessing the sufficient descent and global convergence properties is proposed. The efficiency of the new CG algorithm relative to existing CG methods is evaluated by testing them all on a set of test functions using MATLAB. Performance is measured in terms of iteration counts and CPU time under the strong Wolfe line search. Overall, the new method performs efficiently and is comparable to other well-known methods.
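
    For concreteness, here is a minimal Fletcher-Reeves conjugate gradient loop under a strong Wolfe line search (via scipy.optimize.line_search). It is a generic illustration only; the paper's CG method uses its own formula for the beta coefficient, which is not reproduced here.

      import numpy as np
      from scipy.optimize import line_search

      def rosenbrock(x):
          return 100.0 * (x[1] - x[0] ** 2) ** 2 + (1 - x[0]) ** 2

      def rosenbrock_grad(x):
          return np.array([-400.0 * x[0] * (x[1] - x[0] ** 2) - 2 * (1 - x[0]),
                           200.0 * (x[1] - x[0] ** 2)])

      x = np.array([-1.2, 1.0])
      g = rosenbrock_grad(x)
      d = -g                                        # initial search direction
      for k in range(500):
          # scipy's line_search enforces the strong Wolfe conditions.
          alpha, *_ = line_search(rosenbrock, rosenbrock_grad, x, d,
                                  gfk=g, c1=1e-4, c2=0.1)
          if alpha is None:                         # line search failed; restart on -g
              d, alpha = -g, 1e-3
          x = x + alpha * d
          g_new = rosenbrock_grad(x)
          if np.linalg.norm(g_new) < 1e-6:
              break
          beta = (g_new @ g_new) / (g @ g)          # Fletcher-Reeves beta
          d = -g_new + beta * d
          g = g_new

      print(k, x)                                   # converges near [1, 1]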

  6. Global antioxidant response of meat.

    PubMed

    Carrillo, Celia; Barrio, Ángela; Del Mar Cavia, María; Alonso-Torre, Sara

    2017-06-01

    The global antioxidant response (GAR) method uses an enzymatic digestion to release antioxidants from foods. Owing to the importance of digestion for protein breakdown and the subsequent release of bioactive compounds, the aim of the present study was to compare the GAR method for meat with the existing methodologies: the extraction-based method and QUENCHER. Seven fresh meats were analyzed using ABTS and FRAP assays. Our results indicated that the GAR of meat was higher than the total antioxidant capacity (TAC) assessed with the traditional extraction-based method. When evaluated with GAR, thermal treatment led to an increase in the TAC of the soluble fraction, contrasting with the decreased TAC after cooking measured using the extraction-based method. The effect of thermal treatment on the TAC assessed by the QUENCHER method seemed to depend on the assay applied, since results from ABTS differed from those from FRAP. Our results allow us to hypothesize that the activation of latent bioactive peptides along the gastrointestinal tract should be taken into consideration when evaluating the TAC of meat. Therefore, we conclude that the GAR method may be more appropriate for assessing the TAC of meat than the existing, most commonly used methods. © 2016 Society of Chemical Industry.

  7. All-Versus-Nothing Proof of Einstein-Podolsky-Rosen Steering

    PubMed Central

    Chen, Jing-Ling; Ye, Xiang-Jun; Wu, Chunfeng; Su, Hong-Yi; Cabello, Adán; Kwek, L. C.; Oh, C. H.

    2013-01-01

    Einstein-Podolsky-Rosen steering is a form of quantum nonlocality intermediate between entanglement and Bell nonlocality. Although Schrödinger already mooted the idea in 1935, steering still defies a complete understanding. In analogy to "all-versus-nothing" proofs of Bell nonlocality, here we present a proof of steering without inequalities, rendering the detection of correlations leading to a violation of steering inequalities unnecessary. We show that, given any two-qubit entangled state, the existence of a projective measurement for Alice such that Bob's normalized conditional states can be regarded as two different pure states provides a criterion for Alice-to-Bob steerability. A steering inequality equivalent to the all-versus-nothing proof is also obtained. Our result clearly demonstrates that there exist many quantum states which do not violate any previously known steering inequality but are indeed steerable. Our method offers advantages over existing methods for experimentally testing steerability, and sheds new light on the asymmetric steering problem. PMID:23828242

  8. Predicted and measured boundary layer refraction for advanced turboprop propeller noise

    NASA Technical Reports Server (NTRS)

    Dittmar, James H.; Krejsa, Eugene A.

    1990-01-01

    Currently, boundary layer refraction limits the measurement of forward-arc propeller noise on an acoustic plate in the NASA Lewis 8- by 6-Foot Supersonic Wind Tunnel. The use of a validated boundary layer refraction model to adjust the data could remove this limitation. An existing boundary layer refraction model is used to predict the refraction for cases where boundary layer refraction was measured. In general, the model exhibits the same qualitative behavior as the measured refraction; however, the prediction method does not show quantitative agreement with the data. It overpredicts the amount of refraction for the far forward angles at axial Mach numbers of 0.85 and 0.80 and underpredicts the refraction at axial Mach numbers of 0.75 and 0.70. A more complete propeller source description is suggested as a way to improve the prediction method.

  9. Measurement of lung volumes from supine portable chest radiographs.

    PubMed

    Ries, A L; Clausen, J L; Friedman, P J

    1979-12-01

    Lung volumes in supine nonambulatory patients are physiological parameters that are often difficult to measure with current techniques (plethysmography, gas dilution). Existing radiographic methods for measuring lung volumes require standard upright chest radiographs. Accordingly, in 31 normal supine adults, we determined helium-dilution functional residual and total lung capacities and measured planimetric lung field areas (LFA) from corresponding portable anteroposterior and lateral radiographs. Low-radiation-dose methods, delivering less than 10% of the dose from the standard portable X-ray technique, were utilized. The correlation between lung volume and radiographic LFA was highly significant (r = 0.96, SEE = 10.6%). Multiple-step regressions using height and chest diameter correction factors reduced variance, but weight and radiographic magnification factors did not. In 17 additional subjects studied for validation, the regression equations accurately predicted radiographic lung volume. Thus, this technique can provide accurate and rapid measurement of lung volume in studies involving supine patients.

  10. Unavoidable electric current caused by inhomogeneities and its influence on measured material parameters of thermoelectric materials

    NASA Astrophysics Data System (ADS)

    Song, K.; Song, H. P.; Gao, C. F.

    2018-03-01

    It is well known that the key factor determining the performance of thermoelectric materials is the figure of merit, which depends on the thermal conductivity (TC), electrical conductivity, and Seebeck coefficient (SC). The electric current must be zero when measuring the TC and SC to avoid the occurrence of measurement errors. In this study, the complex-variable method is used to analyze the thermoelectric field near an elliptic inhomogeneity in an open circuit, and the field distributions are obtained in closed form. Our analysis shows that an electric current inevitably exists in both the matrix and the inhomogeneity even though the circuit is open. This unexpected electric current seriously affects the accuracy with which the TC and SC are measured. These measurement errors, both overall and local, are analyzed in detail. In addition, an error correction method is proposed based on the analytical results.

  11. Measuring Joule heating and strain induced by electrical current with Moire interferometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen Bicheng; Basaran, Cemal

    2011-04-01

    This study proposes a new method to locate hot spots caused by Joule heating and measure their temperature by measuring the free-thermal-expansion in-plane strain. It is demonstrated that hotspots caused by Joule heating in a thin metal film/plate structure can be measured by phase-shifting Moire interferometry with continuous wavelet transform (PSMI/CWT) at the microscopic scale. A demonstration on a copper film is conducted to verify the theory under different current densities. A correlation between the current density and the strain in two orthogonal directions (one in the direction of the current flow) is proposed. The method can also be used to measure Joule heating in microscopic solid structures in electronic packaging devices. It is shown that a linear relationship exists between the square of the current density and the normal strains.

  12. A Rapid Method for Measuring Strontium-90 Activity in Crops in China

    NASA Astrophysics Data System (ADS)

    Pan, Lingjing Pan; Yu, Guobing; Wen, Deyun; Chen, Zhi; Sheng, Liusi; Liu, Chung-King; Xu, X. George

    2017-09-01

    A rapid method for measuring Sr-90 activity in crop ashes is presented. Liquid scintillation counting, combined with ion-exchange columns of 4,4'(5')-di-t-butylcyclohexano-18-crown-6, is used to determine the activity of Sr-90 in crops. The yield of the chemical procedure is quantified using gravimetric analysis. The conventional method, which uses ion-exchange resin with HDEHP, cannot completely remove all of the bismuth when comparatively large amounts of lead and bismuth exist in the samples; this is overcome by the rapid method. The chemical yield of this method is about 60%, and the MDA for Sr-90 is found to be 2.32 Bq/kg. The whole procedure, together with using spectrum analysis to determine the activity, takes only about one day, a large improvement over the conventional method. A modified conventional method is also described here to verify the results of the rapid one. The two methods can meet the different needs of routine monitoring and emergency situations.

  13. When brain neuroscience meets hydrology: timeseries analysis methods for capturing structural and functional aspects of hydrologic connectivity

    NASA Astrophysics Data System (ADS)

    Ali, G.; Rinderer, M.

    2016-12-01

    In hydrology, several definitions of connectivity exist, which hinders intercomparison between studies. Yet consensus exists on the distinction between structural connectivity (i.e., physical adjacency of landscape elements that is thought to influence material transfer) and functional or effective connectivity (i.e., interaction or causality between spatial adjacency characteristics and temporally varying factors, leading to the connected flow of material). While hydrologists have succeeded in deriving measures of structural connectivity (SC), the quantification of functional connectivity (FC) or effective connectivity (EC) remains elusive. Here we borrowed time-series analysis methods from brain neuroscience to quantify EC and FC among groundwater (n = 34) and stream discharge (n = 1) monitoring sites in a 20-ha Swiss catchment where topography is assumed to be a major driver of connectivity. Influence maps created from elevation data were used to assess SC. FC was assessed by cross-correlation and by total and partial mutual information, while EC was quantified via total and partial entropy, Granger causality, and a phase slope index. Results show that, generally, a fair percentage of structural connections were also expressed as functional or effective connections. Some FC and EC measures had clear advantages over others, for instance in distinguishing Darcian fluxes of water from pressure-wave-driven processes. False positives, i.e., the detection of FC or EC despite the absence of SC, were also encountered and used to rule out the applicability of some brain-connectivity measures in a hydrological context. While our goal was not to identify the best measure of FC or EC, our study showed that applying brain neuroscience methods to assess FC and EC in hydrology is possible as long as SC measures are used as constraints on (or prior beliefs about) the establishment of FC and EC.
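
    As one concrete example of the FC measures named above, here is a hedged sketch of mutual information between two monitoring-site time series, estimated with a simple joint histogram; the series are synthetic stand-ins for groundwater-level records, and the authors' actual estimators may differ.

      import numpy as np

      def mutual_information(x, y, bins=16):
          """MI in nats between two 1-D series via a joint histogram estimate."""
          pxy, _, _ = np.histogram2d(x, y, bins=bins)
          pxy = pxy / pxy.sum()
          px = pxy.sum(axis=1, keepdims=True)
          py = pxy.sum(axis=0, keepdims=True)
          nz = pxy > 0                                   # avoid log(0)
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

      rng = np.random.default_rng(42)
      upslope = rng.standard_normal(1000).cumsum()       # synthetic groundwater level
      downslope = 0.7 * upslope + 3.0 * rng.standard_normal(1000)  # coupled site

      print(mutual_information(upslope, downslope))              # > 0: shared information
      print(mutual_information(upslope, rng.standard_normal(1000)))  # near 0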

  14. Using Rate of Divergence as an Objective Measure to Differentiate between Voice Signal Types Based on the Amount of Disorder in the Signal.

    PubMed

    Calawerts, William M; Lin, Liyu; Sprott, J C; Jiang, Jack J

    2017-01-01

    The purpose of this paper is to introduce the rate of divergence as an objective measure to differentiate between the four voice types based on the amount of disorder present in a signal. We hypothesized that the rate of divergence would provide an objective measure that can quantify all four voice types. A total of 150 acoustic voice recordings were randomly selected and analyzed using traditional perturbation, nonlinear, and rate-of-divergence analysis methods. We developed a new parameter, the rate of divergence, which uses a modified version of Wolf's algorithm for calculating the Lyapunov exponents of a system. The outcome of this calculation is not a Lyapunov exponent, but rather a description of the divergence of two nearby data points over the next three points in the time series, followed in three time-delayed embedding dimensions. This measure was compared to existing perturbation and nonlinear dynamic methods of distinguishing between voice signals. There was a direct relationship between voice type and rate of divergence. The calculation is especially effective at differentiating between type 3 and type 4 voices (P < 0.001) and is as effective as existing methods at differentiating type 1, type 2, and type 3 signals. The rate-of-divergence calculation introduced here is an objective measure that can be used to distinguish between all four voice types based on the amount of disorder present, leading to quicker and more accurate voice typing as well as an improved understanding of the nonlinear dynamics involved in phonation. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
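
    The following is a hedged sketch of a divergence-rate style computation: embed the signal with a time delay, find each point's nearest neighbor, and average how fast the pair separates over the next three steps. The parameters (delay, embedding dimension, horizon) are illustrative and not the paper's exact settings.

      import numpy as np

      def divergence_rate(x, dim=3, tau=5, horizon=3):
          m = len(x) - (dim - 1) * tau
          # Time-delay embedding: rows are points in the reconstructed state space.
          emb = np.column_stack([x[i * tau: i * tau + m] for i in range(dim)])
          n = m - horizon                                  # points that have a future
          rates = []
          for i in range(n):
              d0 = np.linalg.norm(emb[:n] - emb[i], axis=1)
              d0[max(0, i - tau): i + tau + 1] = np.inf    # skip temporal neighbours
              j = int(np.argmin(d0))
              if d0[j] == 0 or not np.isfinite(d0[j]):
                  continue
              dh = np.linalg.norm(emb[i + horizon] - emb[j + horizon])
              rates.append(np.log((dh + 1e-12) / d0[j]) / horizon)
          return float(np.mean(rates))

      rng = np.random.default_rng(0)
      t = np.linspace(0, 50, 4000)
      periodic = np.sin(2 * np.pi * t)                       # low-disorder signal
      noisy = np.sin(2 * np.pi * t) + 0.8 * rng.standard_normal(t.size)

      print(divergence_rate(periodic))   # near zero: neighbours stay close
      print(divergence_rate(noisy))      # larger: disorder drives divergence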

  15. Quantitative Tools for Examining the Vocalizations of Juvenile Songbirds

    PubMed Central

    Wellock, Cameron D.; Reeke, George N.

    2012-01-01

    The singing of juvenile songbirds is highly variable and not well stereotyped, a feature that makes it difficult to analyze with existing computational techniques. We present here a method suitable for analyzing such vocalizations, windowed spectral pattern recognition (WSPR). Rather than performing pairwise sample comparisons, WSPR measures the typicality of a sample against a large sample set. We also illustrate how WSPR can be used to perform a variety of tasks, such as sample classification, song ontogeny measurement, and song variability measurement. Finally, we present a novel measure, based on WSPR, for quantifying the apparent complexity of a bird's singing. PMID:22701474

  16. Scenario Analysis of Soil and Water Conservation in Xiejia Watershed Based on Improved CSLE Model

    NASA Astrophysics Data System (ADS)

    Liu, Jieying; Yu, Ming; Wu, Yong; Huang, Yao; Nie, Yawen

    2018-01-01

    Based on existing research results and related data, the scenario analysis method is used to evaluate the effects of different soil and water conservation measures on soil erosion in a small watershed. Analysis of the soil erosion scenarios and model simulations in the study area shows that all simulated scenarios yield soil erosion rates lower than the observed 2013 baseline. Engineering measures for soil and water conservation are more effective at reducing soil erosion than biological measures or conservation tillage measures.

  17. Comparison of Body Composition Measurements Using a New Caliper, Two Established Calipers, Hydrostatic Weighing, and BodPod

    PubMed Central

    TALBERT, ERIN E.; FLYNN, MICHAEL G.; BELL, JEFFREY W.; CARRILLO, ANDRES E.; DILL, MARQUITA D.; CHRISTENSEN, CHRISTIANIA N.; THOMPSON, COLLEEN M.

    2009-01-01

    Purposes (1) To compare the Lafayette Instruments (LI) skinfold caliper to the Lange (L) and Harpenden (H) calipers using a diverse subject population. (2) To determine the validity of the LI caliper in a subset of subjects by comparing body compositions from skinfold thicknesses to those measured by hydrostatic weighing (HW) and air displacement plethysmography (ADP). (3) To compare measurements obtained by experienced (EX) and inexperienced (IX) technicians using all three calipers. Methods Skinfold measurements were performed by both EX and IX technicians using three different calipers on 21 younger (21.2 ± 1.5 yrs) and 20 older (59.2 ± 4 yrs) subjects. Body compositions were calculated using the Jackson-Pollock seven-site and three-site formulas. HW and ADP tests were performed on a subset of subjects (10 younger, 10 older). Results No significant differences existed between LI and L or H when measurements were made by EX. Further, the LI-EX measurements were highly correlated to both H-EX and L-EX. No significant differences existed in the subgroup between LI-EX and HW or ADP. Skinfold determinations made by EX and IX were similar. Conclusions Similar body compositions determined using LI, H, and L suggest that LI determines body composition as effectively as H and L. High correlations between the three calipers support this notion. Similar results between LI and HW/ADP subgroup suggest that the LI caliper may be a valid method of measuring body composition. Overall, performance by IX was similar to EX and suggests similar ease of use for all three calipers. PMID:28572871

  18. Automated inspection of gaps on the free-form shape parts by laser scanning technologies

    NASA Astrophysics Data System (ADS)

    Zhou, Sen; Xu, Jian; Tao, Lei; An, Lu; Yu, Yan

    2018-01-01

    In industrial manufacturing processes, the dimensional inspection of gaps on free-form shape parts is critical and challenging, and is directly associated with subsequent assembly and final product quality. In this paper, a fast measuring method for automated gap inspection based on laser scanning technologies is presented. The proposed measuring method consists of three steps. First, the relative position is determined according to the geometric features of the gap being measured, taking into account the constraints of a laser scanning operation. Second, in order to acquire a complete gap profile, a fast and effective scanning path is designed. Finally, the dimensions of the gaps on the free-form shape parts, including width, depth, and flush, are described in a virtual environment. In the future, a dedicated machine based on the proposed method will be developed for on-line dimensional inspection of gaps on automobile or aerospace production lines.

  19. Accurate reconstruction in digital holographic microscopy using antialiasing shift-invariant contourlet transform

    NASA Astrophysics Data System (ADS)

    Zhang, Xiaolei; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian

    2018-03-01

    The measurement of microstructured components is a challenging task in optical engineering. Digital holographic microscopy has attracted intensive attention due to its remarkable capability of measuring complex surfaces. However, speckles arise in the recorded interferometric holograms, and they will degrade the reconstructed wavefronts. Existing speckle removal methods suffer from the problems of frequency aliasing and phase distortions. A reconstruction method based on the antialiasing shift-invariant contourlet transform (ASCT) is developed. Salient edges and corners have sparse representations in the transform domain of ASCT, and speckles can be recognized and removed effectively. As subsampling in the scale and directional filtering schemes is avoided, the problems of frequency aliasing and phase distortions occurring in the conventional multiscale transforms can be effectively overcome, thereby improving the accuracy of wavefront reconstruction. As a result, the proposed method is promising for the digital holographic measurement of complex structures.

  20. Characterization of a Laser-Generated Perturbation in High-Speed Flow for Receptivity Studies

    NASA Technical Reports Server (NTRS)

    Chou, Amanda; Schneider, Steven P.; Kegerise, Michael A.

    2014-01-01

    A better understanding of receptivity can contribute to the development of an amplitude-based method of transition prediction. This type of prediction model would incorporate more physics than the semi-empirical methods, which are widely used. The experimental study of receptivity requires a characterization of the external disturbances and a study of their effect on the boundary layer instabilities. Characterization measurements for a laser-generated perturbation were made in two different wind tunnels. These measurements were made with hot-wire probes, optical techniques, and pressure transducer probes. Existing methods all have their limitations, so better measurements will require the development of new instrumentation. Nevertheless, the freestream laser-generated perturbation has been shown to be about 6 mm in diameter at a static density of about 0.045 kg/m^3. The amplitude of the perturbation is large, which may make it unsuitable for the study of linear growth.

  1. Investigation of the interpolation method to improve the distributed strain measurement accuracy in optical frequency domain reflectometry systems.

    PubMed

    Cui, Jiwen; Zhao, Shiyuan; Yang, Di; Ding, Zhenyang

    2018-02-20

    We use a spectrum interpolation technique to improve the distributed strain measurement accuracy in a Rayleigh-scatter-based optical frequency domain reflectometry sensing system. We demonstrate that strain accuracy is not limited by the "uncertainty principle" that exists in time-frequency analysis. Different interpolation methods are investigated and used to improve the accuracy of the peak position of the cross-correlation and, therefore, the accuracy of the strain. Interpolation implemented by padding zeros on one side of the windowed data in the spatial domain, before the inverse fast Fourier transform, is found to have the best accuracy. Using this method, the strain accuracy and resolution are both improved without decreasing the spatial resolution. A strain of 3 με within a spatial resolution of 1 cm at a position of 21.4 m is distinguished, and the measurement uncertainty is 3.3 με.
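
    The same interpolation principle can be shown compactly by zero-padding a cross-spectrum before the inverse FFT, so the correlation peak can be located at sub-sample precision. Note that the paper pads the windowed spatial-domain data instead, and the signals below are synthetic rather than Rayleigh spectra.

      import numpy as np

      n, pad = 256, 8                       # pad = interpolation factor
      t = np.arange(n)
      shift = 5.3                           # true (fractional) lag between records
      ref = np.cos(0.31 * t) + np.cos(0.87 * t)
      meas = np.cos(0.31 * (t - shift)) + np.cos(0.87 * (t - shift))

      # Cross-spectrum, zero-padded symmetrically in the frequency domain.
      spec = np.fft.fft(ref) * np.conj(np.fft.fft(meas))
      spec_padded = np.zeros(n * pad, dtype=complex)
      spec_padded[:n // 2] = spec[:n // 2]
      spec_padded[-n // 2:] = spec[-n // 2:]

      xcorr = np.real(np.fft.ifft(spec_padded))
      peak = np.argmax(xcorr)
      lag = (peak if peak < n * pad / 2 else peak - n * pad) / pad
      print(lag)                            # about -5.3: meas is ref delayed by 5.3 samples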

  2. Recent Progress In Optical Oxygen Sensing

    NASA Astrophysics Data System (ADS)

    Wolfbeis, Otto S.; Leiner, Marc J. P.

    1988-06-01

    Following a brief review of the history of optical oxygen sensing (which shows that a variety of ideas exists in the literature awaiting extension to fiber optic sensing schemes), the present state of probing oxygen by optical methods is discussed in terms of new methods and materials for sensor construction. Promising sensing schemes include the simultaneous measurement of parameters such as oxygen and carbon dioxide with one fiber, measurement of fluorescence lifetimes and radiative energy transfer efficiency, and phosphorescence quenching. New longwave-excitable fluorophores have been introduced recently, two-band emitting indicators can help to eliminate drift problems, and new methods have been found by which both indicators and enzymes may be entrapped in silicone rubber, which opens the way for the design of new biosensors. In a final chapter, the applications of fiber optic oxygen sensors for blood gas measurement and as transducers in biosensors are presented.

  3. Radiative properties of flame-generated soot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeylue, U.O.; Faeth, G.M.

    1993-05-01

    Approximate methods for estimating the optical properties of flame-generated soot aggregates were evaluated using existing computer simulations and measurements in the visible and near-infrared portions of the spectrum. The following approximate methods were evaluated for both individual aggregates and polydisperse aggregate populations: the Rayleigh scattering approximation, Mie scattering for an equivalent sphere, and Rayleigh-Debye-Gans (R-D-G) scattering for both given and fractal aggregates. The computer simulations involved both prescribed aggregate geometries and aggregates generated numerically by cluster-cluster aggregation; multiple scattering was either treated exactly using the mean-field approximation or ignored using the R-D-G approximation. The measurements involved the angular scattering properties of soot in the postflame regions of both premixed and nonpremixed flames. The results show that available computer simulations and measurements of soot aggregate optical properties are not adequate to provide a definitive evaluation of the approximate prediction methods. 40 refs., 7 figs., 1 tab.

  4. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and confidence interval coverage close to the nominal 95% level for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.

  5. Assessment of a simple, novel endoluminal method for gastrotomy closure in NOTES.

    PubMed

    Lee, Sang Soo; Oelschlager, Brant K; Wright, Andrew S; Soares, Renato V; Sinan, Huseyin; Montenovo, Martin I; Hwang, Joo Ha

    2011-10-01

    A reliable method for gastrotomy closure in NOTES will be essential for NOTES to become viable clinically. However, methods using existing and widely available endoscopic accessories have been ineffective. The objective of this study was to evaluate the feasibility and safety of a new simple method for gastric closure (retracted clip-assisted loop closure) that uses existing endoscopic accessories with minor modifications. The retracted clip-assisted loop closure technique involves deploying 3-4 Resolution(®) clips (modified by attaching a 90-cm length of suture to the end of each clip) along the margin of the gastrotomy with one jaw on the serosal surface and the other jaw on the mucosal surface. The suture strings are threaded through an endoloop. Traction is then applied to the strings causing the gastric wall to tent. The endoloop is secured below the tip of the clips, completing a full-thickness gastrotomy closure. The main outcome measures were feasibility, efficacy, and safety of the new retracted clip-assisted loop closure technique for NOTES gastrotomy closure. An air-tight seal was achieved in 100% (n = 9) of stomachs. The mean leak pressure was 116.3 (±19.4) mmHg. The retracted clip-assisted loop closure technique can be used to perform NOTES gastrotomy closure by using existing endoscopic accessories with minor modifications.

  6. C(m)-History Method, a Novel Approach to Simultaneously Measure Source and Sink Parameters Important for Estimating Indoor Exposures to Phthalates.

    PubMed

    Cao, Jianping; Weschler, Charles J; Luo, Jiajun; Zhang, Yinping

    2016-01-19

    The concentration of a gas-phase semivolatile organic compound (SVOC) in equilibrium with its mass-fraction in the source material, y0, and the coefficient for partitioning of an SVOC between clothing and air, K, are key parameters for estimating emission and subsequent dermal exposure to SVOCs. Most of the available methods for their determination depend on achieving steady-state in ventilated chambers. This can be time-consuming and of variable accuracy. Additionally, no existing method simultaneously determines y0 and K in a single experiment. In this paper, we present a sealed-chamber method, using early-stage concentration measurements, to simultaneously determine y0 and K. The measurement error for the method is analyzed, and the optimization of experimental parameters is explored. Using this method, y0 for phthalates (DiBP, DnBP, and DEHP) emitted by two types of PVC flooring, coupled with K values for these phthalates partitioning between a cotton T-shirt and air, were measured at 25 and 32 °C (room and skin temperatures, respectively). The measured y0 values agree well with results obtained by alternate methods. The changes of y0 and K with temperature were used to approximate the changes in enthalpy, ΔH, associated with the relevant phase changes. We conclude with suggestions for further related research.

  7. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Lei; Zuo, Chao; Idir, Mourad

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily-shaped aperture is placed in the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which makes the TIE technique with a hard aperture a more flexible phase retrieval tool in practical measurements.

  8. Phase retrieval with the transport-of-intensity equation in an arbitrarily-shaped aperture by iterative discrete cosine transforms

    DOE PAGES

    Huang, Lei; Zuo, Chao; Idir, Mourad; ...

    2015-04-21

    A novel transport-of-intensity equation (TIE) based phase retrieval method is proposed in which an arbitrarily-shaped aperture is placed in the optical wavefield. Within this arbitrarily-shaped aperture, the TIE can be solved under non-uniform illumination and even non-homogeneous boundary conditions by iterative discrete cosine transforms with a phase compensation mechanism. Simulations with arbitrary phase, arbitrary aperture shape, and non-uniform intensity distribution verify the effective compensation and high accuracy of the proposed method. An experiment is also carried out to check the feasibility of the proposed method in real measurements. Compared to existing methods, the proposed method is applicable to any type of phase distribution under non-uniform illumination and non-homogeneous boundary conditions within an arbitrarily-shaped aperture, which makes the TIE technique with a hard aperture a more flexible phase retrieval tool in practical measurements.

  9. Method and apparatus for measuring irradiated fuel profiles

    DOEpatents

    Lee, David M.

    1982-01-01

    A new apparatus is used to substantially instantaneously obtain a profile of an object, for example a spent fuel assembly, which profile (when normalized) has unexpectedly been found to be substantially identical to the normalized profile of the burnup monitor Cs-137 obtained with a germanium detector. That profile can be used without normalization in a new method of identifying and monitoring in order to determine for example whether any of the fuel has been removed. Alternatively, two other new methods involve calibrating that profile so as to obtain a determination of fuel burnup (which is important for complying with safeguards requirements, for utilizing fuel to an optimal extent, and for storing spent fuel in a minimal amount of space). Using either of these two methods of determining burnup, one can reduce the required measurement time significantly (by more than an order of magnitude) over existing methods, yet retain equal or only slightly reduced accuracy.

  10. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.

  11. Calibration-free gaze tracking for automatic measurement of visual acuity in human infants.

    PubMed

    Xiong, Chunshui; Huang, Lei; Liu, Changping

    2014-01-01

    Most existing vision-based methods for gaze tracking need a tedious calibration process, in which subjects are required to fixate on one or several specific points in space. However, such cooperation is difficult to obtain, especially from children and infants. In this paper, a new calibration-free gaze tracking system and method is presented for automatic measurement of visual acuity in human infants. To our knowledge, this is the first time vision-based gaze tracking has been applied to the measurement of visual acuity. First, a polynomial of the pupil center-cornea reflection (PCCR) vector is used as the gaze feature. Then, a Gaussian mixture model (GMM) is employed for gaze behavior classification, trained offline using labeled data from subjects with healthy eyes. Experimental results on several subjects show that the proposed method is accurate, robust, and sufficient for the measurement of visual acuity in human infants.
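
    A hedged sketch of the classification stage: fit one Gaussian mixture per gaze-behavior class on labeled feature vectors, then classify new samples by maximum likelihood. The two-dimensional features and class names below are invented for illustration; the paper's PCCR features and classes may differ.

      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      # Synthetic 2-D gaze features for two behaviors: "fixating" vs "wandering".
      fixating = rng.normal([0.0, 0.0], 0.05, size=(200, 2))
      wandering = rng.normal([0.0, 0.0], 0.5, size=(200, 2))

      models = {
          "fixating": GaussianMixture(n_components=2, random_state=0).fit(fixating),
          "wandering": GaussianMixture(n_components=2, random_state=0).fit(wandering),
      }

      def classify(sample):
          # score_samples returns the per-sample log-likelihood under each model.
          return max(models, key=lambda c: models[c].score_samples(sample[None, :])[0])

      print(classify(np.array([0.02, -0.01])))   # -> "fixating"
      print(classify(np.array([0.8, -0.6])))     # -> "wandering"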

  12. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure.

    PubMed

    Zhou, Yang; Wu, Dewei

    2016-01-01

    Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulation validates that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT).
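
    A minimal sketch of the Gaussian similarity and recruitment rule described above: a new place cell is recruited only when no existing cell responds strongly enough to the current view. The sigma and threshold values are illustrative, not the paper's AFFF/FRT settings.

      import numpy as np

      SIGMA, RECRUIT_THRESHOLD = 1.0, 0.6
      cells = []                             # stored feature vectors, one per VPC

      def firing_rates(view):
          """Gaussian similarity of the current view to every stored cell."""
          return np.array([np.exp(-np.sum((view - c) ** 2) / (2 * SIGMA ** 2))
                           for c in cells])

      def perceive(view):
          rates = firing_rates(view)
          if rates.size == 0 or rates.max() < RECRUIT_THRESHOLD:
              cells.append(view.copy())      # recruit a new place cell
          return rates

      rng = np.random.default_rng(7)
      for _ in range(50):
          perceive(rng.uniform(0, 10, size=2))   # random 2-D "views"
      print(len(cells), "place cells recruited")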

  13. A Model of Generating Visual Place Cells Based on Environment Perception and Similar Measure

    PubMed Central

    2016-01-01

    Generating visual place cells (VPCs) is an important problem in the field of bioinspired navigation. By analyzing the firing characteristics of biological place cells and the existing methods for generating VPCs, a model of generating visual place cells based on environment perception and a similarity measure is abstracted in this paper. The VPC generation process is divided into three phases: environment perception, similarity measurement, and recruitment of a new place cell. Following this process, a specific method for generating VPCs is presented. External reference landmarks are obtained based on local invariant image features, and a similarity measure function is designed based on Euclidean distance and a Gaussian function. Simulation validates that the proposed method is effective. The firing characteristics of the generated VPCs are similar to those of biological place cells, and the VPCs' firing fields can be adjusted flexibly by changing the adjustment factor of the firing field (AFFF) and the firing rate threshold (FRT). PMID:27597859

  14. A study of active learning methods for named entity recognition in clinical text.

    PubMed

    Chen, Yukun; Lasko, Thomas A; Mei, Qiaozhu; Denny, Joshua C; Xu, Hua

    2015-12-01

    Named entity recognition (NER), a sequential labeling task, is one of the fundamental tasks for building clinical natural language processing (NLP) systems. Machine learning (ML) based approaches can achieve good performance, but they often require large amounts of annotated samples, which are expensive to build because annotation requires domain experts. Active learning (AL), a sample selection approach integrated with supervised ML, aims to minimize the annotation cost while maximizing the performance of ML-based models. In this study, our goal was to develop and evaluate both existing and new AL methods for a clinical NER task: identifying concepts of medical problems, treatments, and lab tests from clinical notes. Using the annotated NER corpus from the 2010 i2b2/VA NLP challenge, which contained 349 clinical documents with 20,423 unique sentences, we simulated AL experiments using a number of existing and novel algorithms in three categories: uncertainty-based, diversity-based, and baseline sampling strategies. They were compared with passive learning, which uses random sampling. Learning curves plotting the performance of the NER model against the estimated annotation cost (based on the number of sentences or words in the training set) were generated to evaluate the active and passive learning methods, and the area under the learning curve (ALC) score was computed. Based on the learning curves of F-measure vs. number of sentences, uncertainty sampling algorithms outperformed all other methods in ALC. Most diversity-based methods also performed better than random sampling in ALC. To achieve an F-measure of 0.80, the best uncertainty-sampling-based method could save 66% of sentence annotations compared to random sampling. For the learning curves of F-measure vs. number of words, uncertainty sampling methods again outperformed all other methods in ALC. To achieve an F-measure of 0.80, in comparison to random sampling, the best uncertainty-based method saved 42% of word annotations, whereas the best diversity-based method reduced annotation effort by only 7%. In this simulated setting, AL methods, particularly uncertainty-sampling-based approaches, appeared to save substantial annotation cost for the clinical NER task. The actual benefit of active learning in clinical NER should be further evaluated in a real-time setting. Copyright © 2015 Elsevier Inc. All rights reserved.
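
    A hedged sketch of uncertainty sampling (least confidence), the family of strategies that performed best above, on a synthetic classification stand-in; a real clinical NER setup would score whole sentences with a sequence model rather than feature vectors with logistic regression.

      import numpy as np
      from sklearn.datasets import make_classification
      from sklearn.linear_model import LogisticRegression

      X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
      rng = np.random.default_rng(0)
      labeled = list(rng.choice(len(X), size=20, replace=False))   # seed set
      seed = set(labeled)
      pool = [i for i in range(len(X)) if i not in seed]

      model = LogisticRegression(max_iter=1000)
      for _ in range(10):
          model.fit(X[labeled], y[labeled])
          # Least confidence: 1 - max class probability, highest first.
          proba = model.predict_proba(X[pool])
          uncertainty = 1.0 - proba.max(axis=1)
          chosen = np.argsort(uncertainty)[::-1][:20]          # query 20 samples
          for idx in sorted(chosen, reverse=True):
              labeled.append(pool.pop(int(idx)))               # "annotate" them

      model.fit(X[labeled], y[labeled])
      print(model.score(X, y))   # accuracy after 10 active-learning rounds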

  15. Fiber optic light-scattering measurement system for evaluation of embryo viability: light-scattering characteristics from live mouse embryo

    NASA Astrophysics Data System (ADS)

    Itoh, Harumi; Arai, Tsunenori; Kikuchi, Makoto

    1997-06-01

    We measured the angular distribution of light scattering from live mouse embryos at a wavelength of 632.8 nm to evaluate embryo viability. Our aim is to measure the mitochondrial density in human embryos, which is related to embryo viability. We constructed a light-scattering measurement system to detect the mitochondrial density non-invasively, employing two optical fibers for illumination and sensing so that the angle between the fibers can be varied. There were two dips in the scattering angular distribution from the embryo, at 30 and 85 deg. We calculated the scattering angular pattern by Mie theory and fitted it to the measured scattering to estimate scatterer size and density. The best fit was obtained with a particle size of 0.9 micrometers and a density of 10^10 particles per ml. These values coincide with the approximate values for mitochondria in the embryo. The measured light scattering may therefore originate mainly from mitochondria, despite the presence of various other scattering particles in the embryo. Since our simple scattering measurement may offer the mitochondrial density in the embryo, it might become a practical method for evaluating human embryos in in vitro fertilization-embryo transfer.

  16. Assessment of scoliosis by direct measurement of the curvature of the spine

    NASA Astrophysics Data System (ADS)

    Dougherty, Geoff; Johnson, Michael J.

    2009-02-01

    We present two novel metrics for assessing scoliosis, in which the geometric centers of all the affected vertebrae in an antero-posterior (A-P) radiographic image are used. This is in contradistinction to the existing methods of using selected vertebrae, and determining either their endplates or the intersections of their diagonals, to define a scoliotic angle. Our first metric delivers a scoliotic angle, comparable to the Cobb and Ferguson angles. It measures the sum of the angles between the centers of the affected vertebrae, and avoids the need for an observer to decide on the extent of component curvatures. Our second metric calculates the normalized root-mean-square curvature of the smoothest path comprising piece-wise polynomial splines fitted to the geometric centers of the vertebrae. The smoothest path is useful in modeling the spinal curvature. Our metrics were compared to existing methods using radiographs from a group of twenty subjects with spinal curvatures of varying severity. Their values were strongly correlated with those of the scoliotic angles (r = 0.850-0.886), indicating that they are valid surrogates for measuring the severity of scoliosis. Our direct use of positional data removes the vagaries of determining variably shaped endplates, and circumvents the significant inter- and intra-observer errors of the Cobb and Ferguson methods. Although we applied our metrics to two-dimensional (2-D) data in this paper, they are equally applicable to three-dimensional (3-D) data. We anticipate that they will prove to be the basis for a reliable 3-D measurement and classification system.
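
    A sketch of the two metrics under stated assumptions: plain cubic splines stand in for the paper's smoothest-path splines, the input is an ordered list of 2-D vertebral centers, and the normalization by path length is one plausible choice.

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def sum_of_angles(centers):
        """First metric: sum of the angles (deg) between successive segments
        joining the geometric centers of the affected vertebrae."""
        v = np.diff(np.asarray(centers, float), axis=0)
        cosang = np.einsum('ij,ij->i', v[:-1], v[1:]) / (
            np.linalg.norm(v[:-1], axis=1) * np.linalg.norm(v[1:], axis=1))
        return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))).sum()

    def rms_curvature(centers, n=500):
        """Second metric: RMS curvature of a cubic-spline path through the
        centers, parameterized by cumulative chord length and normalized
        here by total path length (an assumed normalization)."""
        pts = np.asarray(centers, float)
        t = np.concatenate(([0.0],
            np.cumsum(np.linalg.norm(np.diff(pts, axis=0), axis=1))))
        cs = CubicSpline(t, pts)
        s = np.linspace(t[0], t[-1], n)
        d1, d2 = cs(s, 1), cs(s, 2)
        kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) \
                / np.linalg.norm(d1, axis=1)**3      # planar curvature
        return np.sqrt(np.mean(kappa**2)) * t[-1]
    ```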

  17. Ultrasonics Equipped Crimp Tool: A New Technology for Aircraft Wiring Safety

    NASA Technical Reports Server (NTRS)

    Yost, William T.; Perey, Daniel F.; Cramer, Elliott

    2006-01-01

    We report on the development of a new measurement technique to quantitatively assess the condition of wire crimp connections. This ultrasonic (UT) method transmits high frequency sound waves through the joint under inspection. The wire-crimp region filters and scatters the ultrasonic energy as it passes through the crimp and wire. The resulting output (both time and frequency domains) provides a quantitative measure of the joint quality that is independent and unaffected by current. Crimps of poor mechanical and electrical quality will result in low temporal output and will distort the spectrum into unique and predictable patterns, depending on crimp "quality". This inexpensive, real-time measurement system can provide certification of crimps as they are made and recertification of existing wire crimps currently in service. The measurements for re-certification do not require that the wire be disconnected from its circuit. No other technology exists to measure in-situ the condition of wire joints (no electrical currents through the crimp are used in this analytical technique). We discuss the signals obtained from this instrument, and correlate these signals with destructive wire pull tests.

  18. Effect of analytical conditions in wavelength dispersive electron microprobe analysis on the measurement of strontium-to-calcium (Sr/Ca) ratios in otoliths of anadromous salmonids

    USGS Publications Warehouse

    Zimmerman, Christian E.; Nielsen, Roger L.

    2003-01-01

    The use of strontium-to-calcium (Sr/Ca) ratios in otoliths is becoming a standard method to describe life history type and the chronology of migrations between freshwater and seawater habitats in teleosts (e.g. Kalish, 1990; Radtke et al., 1990; Secor, 1992; Rieman et al., 1994; Radtke, 1995; Limburg, 1995; Tzeng et al., 1997; Volk et al., 2000; Zimmerman, 2000; Zimmerman and Reeves, 2000, 2002). This method provides critical information concerning the relationship and ecology of species exhibiting phenotypic variation in migratory behavior (Kalish, 1990; Secor, 1999). Methods and procedures, however, vary among laboratories because a standard method or protocol for measurement of Sr in otoliths does not exist. In this note, we examine the variations in analytical conditions in an effort to increase precision of Sr/Ca measurements. From these findings we argue that precision can be maximized with higher beam current (although there is specimen damage) than previously recommended by Gunn et al. (1992).

  19. Wireless acceleration sensor of moving elements for condition monitoring of mechanisms

    NASA Astrophysics Data System (ADS)

    Sinitsin, Vladimir V.; Shestakov, Aleksandr L.

    2017-09-01

    Comprehensive analysis of the angular and linear accelerations of moving elements (shafts, gears) allows an increase in the quality of the condition monitoring of mechanisms. However, existing tools and methods measure either linear or angular acceleration with postprocessing. This paper suggests a new construction design of an angular acceleration sensor for moving elements. The sensor is mounted on a moving element and, among other things, the data transfer and electric power supply are carried out wirelessly. In addition, the authors introduce a method for processing the received information which makes it possible to divide the measured acceleration into the angular and linear components. The design has been validated by the results of laboratory tests of an experimental model of the sensor. The study has shown that this method provides a definite separation of the measured acceleration into linear and angular components, even in noise. This research contributes an advance in the range of methods and tools for condition monitoring of mechanisms.
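
    The abstract does not give the sensor construction, but one common arrangement that allows such a separation, assumed here purely for illustration, uses two tangentially oriented accelerometers mounted at different radii on the same element; each reads alpha*r_i plus the shared linear component, so the pair can be solved sample by sample:

    ```python
    import numpy as np

    def split_acceleration(a1, a2, r1, r2):
        """Separate angular (rad/s^2) and linear (m/s^2) components from two
        tangential accelerometer channels at radii r1 != r2."""
        a1, a2 = np.asarray(a1, float), np.asarray(a2, float)
        alpha = (a1 - a2) / (r1 - r2)    # angular acceleration
        a_lin = a1 - alpha * r1          # shared linear component
        return alpha, a_lin
    ```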

  20. Multi-Target State Extraction for the SMC-PHD Filter

    PubMed Central

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-01-01

    The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274
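
    A hypothetical sketch of the measurement-directed extraction idea: a measurement counts as effective when it attracts enough weighted particle likelihood, and each effective measurement's state estimate is the likelihood-weighted particle mean (the threshold `gamma` and the array shapes are assumptions, and the clustering branch for undetected targets is omitted):

    ```python
    import numpy as np

    def extract_states(particles, weights, likelihoods, gamma=0.5):
        """particles: (N, d); weights: (N,); likelihoods[j, i]: likelihood of
        particle i under measurement j. Returns one state per detected target."""
        estimates = []
        for lj in likelihoods:                   # loop over measurements
            mass = np.sum(weights * lj)
            if mass > gamma:                     # effective measurement
                w = weights * lj / mass          # particles it directs
                estimates.append(w @ particles)  # weighted mean state
        return np.array(estimates)
    ```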

  1. Design and validation of instruments to measure knowledge.

    PubMed

    Elliott, T E; Regal, R R; Elliott, B A; Renier, C M

    2001-01-01

    Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existing instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.

  2. An investigation of the marine boundary layer during cold air outbreak

    NASA Technical Reports Server (NTRS)

    Stage, S. A.

    1986-01-01

    Methods for use in the remote estimation of ocean surface sensible and latent heat fluxes were developed and evaluated. Three different techniques were developed for determining these fluxes. These methods are: (1) obtaining surface sensible and latent heat fluxes from satellite measurements; (2) obtaining surface sensible and latent heat fluxes from an MABL model; (3) a method using horizontal transfer coefficients. These techniques are not very sensitive to errors in the data and therefore appear to hold promise of producing useful answers. Questions remain about how closely the structure of the real atmosphere agrees with the assumptions made for each of these techniques, and, therefore, about how well these techniques can perform in actual use. The value of these techniques is that they promise to provide methods for the determination of fluxes over regions where very few traditional measurements exist.

  3. Bridge Displacement Monitoring Method Based on Laser Projection-Sensing Technology

    PubMed Central

    Zhao, Xuefeng; Liu, Hao; Yu, Yan; Xu, Xiaodong; Hu, Weitong; Li, Mingchu; Ou, Jingping

    2015-01-01

    Bridge displacement is the most basic evaluation index of the health status of a bridge structure. The existing measurement methods for bridge displacement basically fail to realize long-term, real-time dynamic monitoring of bridge structures because of their low degree of automation and insufficient precision, causing bottlenecks and restrictions. To solve this problem, we proposed a bridge displacement monitoring system based on laser projection-sensing technology. First, the laser spot recognition method was studied. Second, the software for the displacement monitoring system was developed. Finally, a series of experiments using this system were conducted, and the results show that such a system has high measurement accuracy and speed. We aim to develop a low-cost, high-accuracy and long-term monitoring method for bridge displacement based on these preliminary efforts. PMID:25871716
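
    A minimal sketch of one plausible reading of the laser-spot recognition step, intensity-weighted centroiding of the projected spot (the threshold ratio and the pixel-to-millimeter calibration are assumptions):

    ```python
    import numpy as np

    def spot_centroid(img, thresh_ratio=0.5):
        """Locate the laser spot as the intensity-weighted centroid of pixels
        above a fraction of the peak brightness; bridge displacement is then
        the centroid drift between frames, scaled by a calibration factor."""
        img = np.asarray(img, float)
        ys, xs = np.nonzero(img >= thresh_ratio * img.max())
        w = img[ys, xs]
        return np.array([np.sum(xs * w), np.sum(ys * w)]) / np.sum(w)
    ```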

  4. RSS Fingerprint Based Indoor Localization Using Sparse Representation with Spatio-Temporal Constraint

    PubMed Central

    Piao, Xinglin; Zhang, Yong; Li, Tingshu; Hu, Yongli; Liu, Hao; Zhang, Ke; Ge, Yun

    2016-01-01

    The Received Signal Strength (RSS) fingerprint-based indoor localization is an important research topic in wireless network communications. Most current RSS fingerprint-based indoor localization methods do not explore and utilize the spatial or temporal correlation existing in fingerprint data and measurement data, which is helpful for improving localization accuracy. In this paper, we propose an RSS fingerprint-based indoor localization method by integrating the spatio-temporal constraints into the sparse representation model. The proposed model utilizes the inherent spatial correlation of fingerprint data in the fingerprint matching and uses the temporal continuity of the RSS measurement data in the localization phase. Experiments on the simulated data and the localization tests in the real scenes show that the proposed method improves the localization accuracy and stability effectively compared with state-of-the-art indoor localization methods. PMID:27827882
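
    A simplified sketch of the core dictionary-coding step using an off-the-shelf lasso solver; the paper's spatio-temporal constraints are omitted, so this shows only the plain sparse-representation idea (shapes and the regularization weight are assumptions):

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def locate(rss, fingerprints, positions, alpha=0.1):
        """Express the online RSS vector (one entry per AP) as a sparse
        combination of fingerprint columns (one per reference point), then
        average the reference positions weighted by the coefficients."""
        model = Lasso(alpha=alpha, positive=True, max_iter=10000)
        model.fit(fingerprints, rss)        # fingerprints: (n_APs, n_points)
        c = model.coef_
        if c.sum() == 0:
            return positions.mean(axis=0)   # fall back to the map centroid
        return (c / c.sum()) @ positions    # positions: (n_points, 2)
    ```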

  5. An Introduction to Item Response Theory for Patient-Reported Outcome Measurement

    PubMed Central

    Nguyen, Tam H.; Han, Hae-Ra; Kim, Miyong T.

    2015-01-01

    The growing emphasis on patient-centered care has accelerated the demand for high-quality data from patient-reported outcome (PRO) measures. Traditionally, the development and validation of these measures has been guided by classical test theory. However, item response theory (IRT), an alternate measurement framework, offers promise for addressing practical measurement problems found in health-related research that have been difficult to solve through classical methods. This paper introduces foundational concepts in IRT, as well as commonly used models and their assumptions. Existing data on a combined sample (n = 636) of Korean American and Vietnamese American adults who responded to the High Blood Pressure Health Literacy Scale and the Patient Health Questionnaire-9 are used to exemplify typical applications of IRT. These examples illustrate how IRT can be used to improve the development, refinement, and evaluation of PRO measures. Greater use of methods based on this framework can increase the accuracy and efficiency with which PROs are measured. PMID:24403095

  6. An introduction to item response theory for patient-reported outcome measurement.

    PubMed

    Nguyen, Tam H; Han, Hae-Ra; Kim, Miyong T; Chan, Kitty S

    2014-01-01

    The growing emphasis on patient-centered care has accelerated the demand for high-quality data from patient-reported outcome (PRO) measures. Traditionally, the development and validation of these measures has been guided by classical test theory. However, item response theory (IRT), an alternate measurement framework, offers promise for addressing practical measurement problems found in health-related research that have been difficult to solve through classical methods. This paper introduces foundational concepts in IRT, as well as commonly used models and their assumptions. Existing data on a combined sample (n = 636) of Korean American and Vietnamese American adults who responded to the High Blood Pressure Health Literacy Scale and the Patient Health Questionnaire-9 are used to exemplify typical applications of IRT. These examples illustrate how IRT can be used to improve the development, refinement, and evaluation of PRO measures. Greater use of methods based on this framework can increase the accuracy and efficiency with which PROs are measured.
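
    For orientation, one of the commonly used models referred to above, the two-parameter logistic (2PL) item response function, fits in a few lines:

    ```python
    import numpy as np

    def irt_2pl(theta, a, b):
        """Probability of endorsing an item with discrimination a and
        difficulty b at latent trait level theta."""
        return 1.0 / (1.0 + np.exp(-a * (np.asarray(theta, float) - b)))

    # A discriminating item (a = 2) centered at the average trait level (b = 0):
    print(irt_2pl([-2, 0, 2], a=2.0, b=0.0))   # ~[0.018, 0.5, 0.982]
    ```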

  7. Measuring strategic control in implicit learning: how and why?

    PubMed

    Norman, Elisabeth

    2015-01-01

    Several methods have been developed for measuring the extent to which implicitly learned knowledge can be applied in a strategic, flexible manner. Examples include generation exclusion tasks in Serial Reaction Time (SRT) learning (Goschke, 1998; Destrebecqz and Cleeremans, 2001) and 2-grammar classification tasks in Artificial Grammar Learning (AGL; Dienes et al., 1995; Norman et al., 2011). Strategic control has traditionally been used as a criterion for determining whether acquired knowledge is conscious or unconscious, or which properties of knowledge are consciously available. In this paper I first summarize existing methods that have been developed for measuring strategic control in the SRT and AGL tasks. I then address some methodological and theoretical questions. Methodological questions concern choice of task, whether the measurement reflects inhibitory control or task switching, and whether or not strategic control should be measured on a trial-by-trial basis. Theoretical questions concern the rationale for including measurement of strategic control, what form of knowledge is strategically controlled, and how strategic control can be combined with subjective awareness measures.

  8. Measuring strategic control in implicit learning: how and why?

    PubMed Central

    Norman, Elisabeth

    2015-01-01

    Several methods have been developed for measuring the extent to which implicitly learned knowledge can be applied in a strategic, flexible manner. Examples include generation exclusion tasks in Serial Reaction Time (SRT) learning (Goschke, 1998; Destrebecqz and Cleeremans, 2001) and 2-grammar classification tasks in Artificial Grammar Learning (AGL; Dienes et al., 1995; Norman et al., 2011). Strategic control has traditionally been used as a criterion for determining whether acquired knowledge is conscious or unconscious, or which properties of knowledge are consciously available. In this paper I first summarize existing methods that have been developed for measuring strategic control in the SRT and AGL tasks. I then address some methodological and theoretical questions. Methodological questions concern choice of task, whether the measurement reflects inhibitory control or task switching, and whether or not strategic control should be measured on a trial-by-trial basis. Theoretical questions concern the rationale for including measurement of strategic control, what form of knowledge is strategically controlled, and how strategic control can be combined with subjective awareness measures. PMID:26441809

  9. Evaluating care from a care ethical perspective:: A pilot study.

    PubMed

    Kuis, Esther E; Goossensen, Anne

    2017-08-01

    Care ethical theories provide an excellent opening for evaluation of healthcare practices since searching for (moments of) good care from a moral perspective is central to care ethics. However, a fruitful way to translate care ethical insights into measurable criteria, and how to measure these criteria, has yet to be explored; this study describes one of the first attempts. To investigate whether the emotional touchpoint method is suitable for evaluating care from a care ethical perspective. An adapted version of the emotional touchpoint interview method was used. Touchpoints represent the key moments in the experience of receiving care, where the patient recalls being touched emotionally or cognitively. Participants and research context: Interviews were conducted at three different care settings: a hospital, mental healthcare institution and care facility for older people. A total of 31 participants (29 patients and 2 relatives) took part in the study. Ethical considerations: The research was found not to be subject to the (Dutch) Medical Research Involving Human Subjects Act. A three-step care ethical evaluation model was developed and described using two touchpoints as examples. A focus group meeting showed that the method was considered of great value for participating institutions in comparison with existing methods. Reflection and discussion: Considering existing methods to evaluate quality of care, the touchpoint method belongs to the category of instruments which evaluate the patient experience. The touchpoint method distinguishes itself because no pre-defined categories are used but the values of patients are followed, which is an essential issue from a care ethical perspective. The method portrays the insider perspective of patients and thereby contributes to humanizing care. The touchpoint method is a valuable instrument for evaluating care; it generates evaluation data about the core care ethical principle of responsiveness.

  10. Data-Driven Benchmarking of Building Energy Efficiency Utilizing Statistical Frontier Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kavousian, A; Rajagopal, R

    2014-01-01

    Frontier methods quantify the energy efficiency of buildings by forming an efficient frontier (best-practice technology) and by comparing all buildings against that frontier. Because energy consumption fluctuates over time, the efficiency scores are stochastic random variables. Existing applications of frontier methods in energy efficiency either treat efficiency scores as deterministic values or estimate their uncertainty by resampling from one set of measurements. Availability of smart meter data (repeated measurements of energy consumption of buildings) enables using actual data to estimate the uncertainty in efficiency scores. Additionally, existing applications assume a linear form for an efficient frontier; i.e., they assume that the best-practice technology scales up and down proportionally with building characteristics. However, previous research shows that buildings are nonlinear systems. This paper proposes a statistical method called stochastic energy efficiency frontier (SEEF) to estimate a bias-corrected efficiency score and its confidence intervals from measured data. The paper proposes an algorithm to specify the functional form of the frontier, identify the probability distribution of the efficiency score of each building using measured data, and rank buildings based on their energy efficiency. To illustrate the power of SEEF, this paper presents the results from applying SEEF on a smart meter data set of 307 residential buildings in the United States. SEEF efficiency scores are used to rank individual buildings based on energy efficiency, to compare subpopulations of buildings, and to identify irregular behavior of buildings across different time-of-use periods. SEEF is an improvement to the energy-intensity method (comparing kWh/sq.ft.): whereas SEEF identifies efficient buildings across the entire spectrum of building sizes, the energy-intensity method showed bias toward smaller buildings. The results of this research are expected to assist researchers and practitioners compare and rank (i.e., benchmark) buildings more robustly and over a wider range of building types and sizes. Eventually, doing so is expected to result in improved resource allocation in energy-efficiency programs.

  11. Approaching sub-50 nanoradian measurements by reducing the saw-tooth deviation of the autocollimator in the Nano-Optic-Measuring Machine

    NASA Astrophysics Data System (ADS)

    Qian, Shinan; Geckeler, Ralf D.; Just, Andreas; Idir, Mourad; Wu, Xuehui

    2015-06-01

    Since the development of the Nano-Optic-Measuring Machine (NOM), the accuracy of measuring the profile of an optical surface has been enhanced to the 100-nrad rms level or better. However, to improve the accuracy of the NOM system to the sub-50 nrad rms level, the large saw-tooth deviation (269 nrad rms) of an existing electronic autocollimator, the Elcomat 3000/8, must be resolved. We carried out simulations to assess the saw-tooth-like deviation. We developed a method for setting readings to reduce the deviation to sub-50 nrad rms, suitable for testing plane mirrors. With this method, we found that all the tests conducted in a slowly rising section of the saw-tooth show a small deviation of 28.8 to <40 nrad rms. We also developed a dense-measurement method and an integer-period method to lower the saw-tooth deviation during tests of spherical mirrors. Further research is necessary to formulate a precise test for a spherical mirror. We present a series of test results from our experiments that verify the value of the improvements we made.

  12. Detection of Extraterrestrial Life. Method II- Optical Rotatory Dispersion

    NASA Technical Reports Server (NTRS)

    1963-01-01

    The object of this study is to develop polarimetric methods to detect the presence of DNA (deoxyribonucleic acid) or its congeners in soil suspensions, and through these methods determine the existence of life (as known terrestrially) on other planets. The Cotton region associated with optically active organic compounds is being used to detect and characterize the compounds above. An apparatus has been designed and assembled which can measure optical rotations in systems that strongly attenuate incident polarized, monochromatic light. This instrument was used to measure the optical rotatory dispersion spectra of nucleosides, a polynucleotide, and proteins whose optical density at 260 millimicrons approached 1.0. This work is discussed in the final report on Contract NASR-85, Detection of Extraterrestrial Life, Method II: Optical Rotatory Dispersion. Recent work in Melpar laboratories has reaffirmed these rotatory dispersion spectra. Based upon the analysis of the optical components associated with this apparatus, however, these measurements must be considered qualitative rather than quantitative. The reason for this is discussed in greater detail subsequently in this report. In addition, an evaluation of the theoretical and instrumental aspects of making rotatory-dispersion measurements in the Cotton region has resulted in a procedure for measuring optical rotation.

  13. Common method biases in behavioral research: a critical review of the literature and recommended remedies.

    PubMed

    Podsakoff, Philip M; MacKenzie, Scott B; Lee, Jeong-Yeon; Podsakoff, Nathan P

    2003-10-01

    Interest in the problem of method biases has a long history in the behavioral sciences. Despite this, a comprehensive summary of the potential sources of method biases and how to control for them does not exist. Therefore, the purpose of this article is to examine the extent to which method biases influence behavioral research results, identify potential sources of method biases, discuss the cognitive processes through which method biases influence responses to measures, evaluate the many different procedural and statistical techniques that can be used to control method biases, and provide recommendations for how to select appropriate procedural and statistical remedies for different types of research settings.

  14. Toward unbiased determination of the redshift evolution of Lyman-alpha forest clouds

    NASA Technical Reports Server (NTRS)

    Lu, Limin; Zuo, Lin

    1994-01-01

    The possibility of using D_A, the mean depression of a quasar spectrum due to Ly-alpha forest absorption, to study the number density evolution of the Ly-alpha forest clouds is examined in some detail. Current D_A measurements are made against a continuum that is a power-law extrapolation from the continuum longward of Ly-alpha emission. Compared to the line-counting approach, the D_A method has the advantage that the D_A measurements are not affected by line-blending effects. However, we find using low-redshift quasar spectra obtained with the Hubble Space Telescope (HST), where the true continuum in the Ly-alpha forest can be estimated fairly reliably because of the much lower density of the Ly-alpha forest lines, that the extrapolated continuum often deviates systematically from the true continuum in the forest region. Such systematic continuum errors introduce large errors in the D_A measurements. The current D_A measurements may also be significantly biased by the possible presence of the Gunn-Peterson absorption. We propose a modification to the existing D_A method, namely, to measure D_A against a locally established continuum in the Ly-alpha forest. Under conditions that the quasar spectrum has good resolution and S/N to allow for a reliable estimate of the local continuum in the Ly-alpha forest, the modified D_A measurements should be largely free of the systematic uncertainties suffered by the existing D_A measurements. We also introduce a formalism based on the work of Zuo (1993) to simplify the application of the D_A method(s) to real data. We discuss the merits and limitations of the modified D_A method, and conclude that it is a useful alternative. Our findings that the extrapolated continuum from longward of Ly-alpha emission often deviates systematically from the true continuum in the Ly-alpha forest present a major problem in the study of the Gunn-Peterson absorption.
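
    A minimal sketch of the D_A measurement itself: passing a locally fitted forest continuum instead of a red-side power-law extrapolation gives the modified D_A advocated above (the conventional rest-frame window of 1050-1170 Angstroms is assumed):

    ```python
    import numpy as np

    def mean_depression(wave_rest, flux, continuum, window=(1050.0, 1170.0)):
        """D_A: mean fractional flux depression in the Ly-alpha forest,
        measured against the supplied continuum estimate."""
        w = np.asarray(wave_rest, float)
        sel = (w > window[0]) & (w < window[1])
        return 1.0 - np.mean(np.asarray(flux)[sel] / np.asarray(continuum)[sel])
    ```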

  15. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
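
    A sketch of the frame-chaining step with standard rotation utilities; the boresight convention (antenna +x axis) and the ENU axis ordering are assumptions made for illustration:

    ```python
    import numpy as np
    from scipy.spatial.transform import Rotation as R

    def antenna_angles_in_earth_frame(R_cam_ant, R_earth_cam):
        """Compose the antenna-in-camera rotation (from multi-view images)
        with the camera-in-Earth rotation (from the IMU), then read off
        downtilt and azimuth from the boresight direction in ENU axes."""
        bore = (R_earth_cam * R_cam_ant).apply([1.0, 0.0, 0.0])
        azimuth = np.degrees(np.arctan2(bore[0], bore[1])) % 360.0  # from north
        downtilt = np.degrees(np.arcsin(-bore[2]))                  # below horizon
        return downtilt, azimuth

    # Illustrative rotations only:
    print(antenna_angles_in_earth_frame(R.from_euler('z', 30, degrees=True),
                                        R.from_euler('x', -5, degrees=True)))
    ```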

  16. Method and apparatus for converting static in-ground vehicle scales into weigh-in-motion systems

    DOEpatents

    Muhs, Jeffrey D.; Scudiere, Matthew B.; Jordan, John K.

    2002-01-01

    An apparatus and method for converting in-ground static weighing scales for vehicles to weigh-in-motion systems. The apparatus upon conversion includes the existing in-ground static scale, peripheral switches and an electronic module for automatic computation of the weight. By monitoring the velocity, tire position, axle spacing, and real time output from existing static scales as a vehicle drives over the scales, the system determines when an axle of a vehicle is on the scale at a given time, monitors the combined weight output from any given axle combination on the scale(s) at any given time, and from these measurements automatically computes the weight of each individual axle and gross vehicle weight by an integration, integration approximation, and/or signal averaging technique.
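
    A sketch of the axle-decomposition idea under assumed inputs: `presence[k]` marks (from the peripheral switches and axle spacing) when axle k is on the scale, and averaging the combined signal over each distinct axle combination yields a linear system for the individual axle weights:

    ```python
    import numpy as np

    def axle_weights(signal, presence):
        """signal: (n_samples,) scale output; presence: (n_axles, n_samples)
        boolean. Solves A @ w = b in the least-squares sense, where each row
        averages the signal over one distinct on-scale axle combination."""
        presence = np.asarray(presence, bool)
        A, b = [], []
        for combo in np.unique(presence.T, axis=0):
            if not combo.any():
                continue
            sel = np.all(presence.T == combo, axis=1)
            A.append(combo.astype(float))
            b.append(np.asarray(signal)[sel].mean())   # signal averaging
        w, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return w
    ```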

  17. Capillary density: An important parameter in nailfold capillaroscopy.

    PubMed

    Emrani, Zahra; Karbalaie, Abdolamir; Fatemi, Alimohammad; Etehadtavakol, Mahnaz; Erlandsson, Björn-Erik

    2017-01-01

    Nailfold capillaroscopy is one of the various noninvasive bioengineering methods used to investigate skin microcirculation. It is an effective examination for assessing microvascular changes in the peripheral circulation; hence it plays a significant role in the diagnosis of systemic sclerosis, with its classic changes of giant capillaries as well as the decline in capillary density with capillary dropout. The decline in capillary density is one of the microangiopathic features of connective tissue disease; it is detectable with nailfold capillaroscopy and is assessed by quantitative measurement. In this article, we review a common method for calculating capillary density and the relation between the number of capillaries and the existence of digital ulcers, pulmonary arterial hypertension, autoantibodies, and scleroderma patterns, as well as different scoring systems. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Big climate data analysis

    NASA Astrophysics Data System (ADS)

    Mudelsee, Manfred

    2015-04-01

    The Big Data era has begun in the climate sciences as well, not only in economics or molecular biology. We measure climate at increasing spatial resolution by means of satellites and look farther back in time at increasing temporal resolution by means of natural archives and proxy data. We use powerful supercomputers to run climate models. The model output of the calculations made for the IPCC's Fifth Assessment Report amounts to ~650 TB. The 'scientific evolution' of grid computing has started, and the 'scientific revolution' of quantum computing is being prepared. This will increase computing power, and data amount, by several orders of magnitude in the future. However, more data does not automatically mean more knowledge. We need statisticians, who are at the core of transforming data into knowledge. Statisticians notably also explore the limits of our knowledge (uncertainties, that is, confidence intervals and P-values). Mudelsee (2014, Climate Time Series Analysis: Classical Statistical and Bootstrap Methods. Second edition. Springer, Cham, xxxii + 454 pp.) coined the term 'optimal estimation'. Consider the hyperspace of climate estimation. It has many, but not infinite, dimensions. It consists of the three subspaces Monte Carlo design, method and measure. The Monte Carlo design describes the data-generating process. The method subspace describes the estimation and confidence interval construction. The measure subspace describes how to detect the optimal estimation method for the Monte Carlo experiment. The envisaged large increase in computing power may bring the following idea of optimal climate estimation into existence. Given a data sample, some prior information (e.g. measurement standard errors) and a set of questions (parameters to be estimated), the first task is simple: perform an initial estimation on the basis of existing knowledge and experience with such types of estimation problems. The second task requires the computing power: explore the hyperspace to find the suitable method, that is, the mode of estimation and uncertainty-measure determination that optimizes a selected measure for prescribed values close to the initial estimates. Also here, intelligent exploration methods (gradient, Brent, etc.) are useful. The third task is to apply the optimal estimation method to the climate dataset. This conference paper illustrates by means of three examples that optimal estimation has the potential to shape future big climate data analysis. First, we consider various hypothesis tests to study whether climate extremes are increasing in their occurrence. Second, we compare Pearson's and Spearman's correlation measures. Third, we introduce a novel estimator of the tail index, which helps to better quantify climate-change related risks.

  19. A new background subtraction method for energy dispersive X-ray fluorescence spectra using a cubic spline interpolation

    NASA Astrophysics Data System (ADS)

    Yi, Longtao; Liu, Zhiguo; Wang, Kai; Chen, Man; Peng, Shiqi; Zhao, Weigang; He, Jialin; Zhao, Guangcui

    2015-03-01

    A new method is presented to subtract the background from the energy dispersive X-ray fluorescence (EDXRF) spectrum using a cubic spline interpolation. To accurately obtain interpolation nodes, a smooth fitting and a set of discriminant formulations were adopted. From these interpolation nodes, the background is estimated by a calculated cubic spline function. The method has been tested on spectra measured from a coin and an oil painting using a confocal MXRF setup. In addition, the method has been tested on an existing sample spectrum. The result confirms that the method can properly subtract the background.
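
    A minimal sketch of the subtraction step, assuming the interpolation nodes have already been chosen by the smooth fitting and discriminant rules described above:

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline

    def subtract_background(energy, counts, node_idx):
        """Estimate the EDXRF continuum with a cubic spline through the
        background nodes and subtract it, clamping negative residuals."""
        energy, counts = np.asarray(energy, float), np.asarray(counts, float)
        spline = CubicSpline(energy[node_idx], counts[node_idx])
        return np.clip(counts - spline(energy), 0.0, None)
    ```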

  20. Analyzing Association Mapping in Pedigree-Based GWAS Using a Penalized Multitrait Mixed Model

    PubMed Central

    Liu, Jin; Yang, Can; Shi, Xingjie; Li, Cong; Huang, Jian; Zhao, Hongyu; Ma, Shuangge

    2017-01-01

    Genome-wide association studies (GWAS) have led to the identification of many genetic variants associated with complex diseases in the past 10 years. Penalization methods, with significant numerical and statistical advantages, have been extensively adopted in analyzing GWAS. This study has been partly motivated by the analysis of Genetic Analysis Workshop (GAW) 18 data, which have two notable characteristics. First, the subjects are from a small number of pedigrees and hence related. Second, for each subject, multiple correlated traits have been measured. Most of the existing penalization methods assume independence between subjects and traits and can be suboptimal. There are a few methods in the literature based on mixed modeling that can accommodate correlations. However, they cannot fully accommodate the two types of correlations while conducting effective marker selection. In this study, we develop a penalized multitrait mixed modeling approach. It accommodates the two different types of correlations and includes several existing methods as special cases. Effective penalization is adopted for marker selection. Simulation demonstrates its satisfactory performance. The GAW 18 data are analyzed using the proposed method. PMID:27247027

  1. A Penalized Robust Method for Identifying Gene-Environment Interactions

    PubMed Central

    Shi, Xingjie; Liu, Jin; Huang, Jian; Zhou, Yong; Xie, Yang; Ma, Shuangge

    2015-01-01

    In high-throughput studies, an important objective is to identify gene-environment interactions associated with disease outcomes and phenotypes. Many commonly adopted methods assume specific parametric or semiparametric models, which may be subject to model mis-specification. In addition, they usually use significance level as the criterion for selecting important interactions. In this study, we adopt the rank-based estimation, which is much less sensitive to model specification than some of the existing methods and includes several commonly encountered data and models as special cases. Penalization is adopted for the identification of gene-environment interactions. It achieves simultaneous estimation and identification and does not rely on significance level. For computational feasibility, a smoothed rank estimation is further proposed. Simulation shows that under certain scenarios, for example with contaminated or heavy-tailed data, the proposed method can significantly outperform the existing alternatives with more accurate identification. We analyze a lung cancer prognosis study with gene expression measurements under the AFT (accelerated failure time) model. The proposed method identifies interactions different from those using the alternatives. Some of the identified genes have important implications. PMID:24616063

  2. Non-invasive continuous blood pressure measurement based on mean impact value method, BP neural network, and genetic algorithm.

    PubMed

    Tan, Xia; Ji, Zhong; Zhang, Yadan

    2018-04-25

    Non-invasive continuous blood pressure monitoring can provide an important reference and guidance for doctors wishing to analyze the physiological and pathological status of patients and to prevent and diagnose cardiovascular diseases in the clinical setting. Therefore, it is very important to explore a more accurate method of non-invasive continuous blood pressure measurement. To address the shortcomings of existing blood pressure measurement models based on pulse wave transit time or pulse wave parameters, a new method of non-invasive continuous blood pressure measurement - the GA-MIV-BP neural network model - is presented. The mean impact value (MIV) method is used to select the factors that greatly influence blood pressure from the extracted pulse wave transit time and pulse wave parameters. These factors are used as inputs, and the actual blood pressure values as outputs, to train the BP neural network model. The individual parameters are then optimized using a genetic algorithm (GA) to establish the GA-MIV-BP neural network model. Bland-Altman consistency analysis indicated that the measured and predicted blood pressure values were consistent and interchangeable. Therefore, this algorithm is of great significance to promote the clinical application of a non-invasive continuous blood pressure monitoring method.
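
    A sketch of the MIV screening step under common assumptions (each input perturbed by +/-10 percent, impact taken as the mean change in the trained network's output; `model_predict` is a stand-in for the trained BP network):

    ```python
    import numpy as np

    def mean_impact_values(model_predict, X, delta=0.10):
        """Perturb each feature of X up and down by delta and record the mean
        output change; features with small |MIV| are dropped before the final
        GA-optimized network is trained."""
        X = np.asarray(X, float)
        mivs = []
        for j in range(X.shape[1]):
            up, down = X.copy(), X.copy()
            up[:, j] *= 1 + delta
            down[:, j] *= 1 - delta
            mivs.append(np.mean(model_predict(up) - model_predict(down)))
        return np.array(mivs)
    ```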

  3. Compression and Transmission of RF Signals for Telediagnosis

    NASA Astrophysics Data System (ADS)

    Seko, Toshihiro; Doi, Motonori; Oshiro, Osamu; Chihara, Kunihiro

    2000-05-01

    Health care is a critical issue nowadays. Much emphasis is given to quality care for all people. Telediagnosis has attracted public attention. We propose a new method of ultrasound image transmission for telediagnosis. In conventional methods, video image signals are transmitted. In our method, the RF signals, which are acquired by an ultrasound probe, are transmitted. The RF signals can be transformed to color Doppler images or high-resolution images by a receiver. Because a stored form is adopted, the proposed system can be realized with existing technology such as hyper text transfer protocol (HTTP) and file transfer protocol (FTP). In this paper, we describe two lossless compression methods which specialize in the transmission of RF signals. One of the methods uses the characteristics of the RF signal. In the other method, the amount of the data is reduced. Measurements were performed in water targeting an iron block and triangular Styrofoam. Additionally, abdominal fat measurement was performed. Our method achieved a compression rate of 13% with 8 bit data.
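
    The abstract does not spell out the two schemes, but a generic lossless pipeline in the same spirit, delta encoding to exploit sample-to-sample correlation followed by a general-purpose entropy coder, can be sketched as follows (an illustration, not the paper's method):

    ```python
    import numpy as np
    import zlib

    def compress_rf(samples):
        """samples: 16-bit ADC values. Delta-encode (in int32 to avoid
        overflow), then entropy-code the residuals."""
        d = np.diff(np.asarray(samples, np.int32), prepend=0)
        return zlib.compress(d.tobytes())

    def decompress_rf(blob):
        d = np.frombuffer(zlib.decompress(blob), dtype=np.int32)
        return np.cumsum(d).astype(np.int16)   # exact reconstruction
    ```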

  4. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination.

    PubMed

    Ukaegbu, Ikechukwu Kevin; Gamage, Kelum A A

    2018-02-08

    Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method that is based on an approximate three-dimensional linear attenuation model that exploits the benefits of using multiple measurements obtained from the surface of the material in which the contamination is buried using a radiation detector. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations in the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods.
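
    A sketch of the depth-fitting idea under a deliberately simplified model (a buried point source with path length L = sqrt(d^2 + x^2); the paper's approximate three-dimensional linear attenuation model is richer), fitted to count rates taken at several lateral surface offsets:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def fit_depth(x, counts, mu):
        """x: lateral detector offsets on the surface; counts: measured count
        rates; mu: linear attenuation coefficient of the medium (1/m)."""
        x, counts = np.asarray(x, float), np.asarray(counts, float)
        def model(x, A, d):
            L = np.sqrt(d**2 + x**2)
            return A * np.exp(-mu * L) / L**2   # attenuation + inverse square
        (A, d), _ = curve_fit(model, x, counts, p0=(counts.max(), 0.1))
        return abs(d)                           # depth enters only as d^2
    ```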

  5. A Novel Method for Remote Depth Estimation of Buried Radioactive Contamination

    PubMed Central

    2018-01-01

    Existing remote radioactive contamination depth estimation methods for buried radioactive wastes are either limited to less than 2 cm or are based on empirical models that require foreknowledge of the maximum penetrable depth of the contamination. This severely limits their usefulness in some real-life subsurface contamination scenarios. Therefore, this work presents a novel remote depth estimation method that is based on an approximate three-dimensional linear attenuation model that exploits the benefits of using multiple measurements obtained from the surface of the material in which the contamination is buried using a radiation detector. Simulation results showed that the proposed method is able to detect the depth of caesium-137 and cobalt-60 contamination buried up to 40 cm in both sand and concrete. Furthermore, results from experiments show that the method is able to detect the depth of caesium-137 contamination buried up to 12 cm in sand. The lower maximum depth recorded in the experiment is due to limitations in the detector and the low activity of the caesium-137 source used. Nevertheless, both results demonstrate the superior capability of the proposed method compared to existing methods. PMID:29419759

  6. A Study of the Flow Structure of Tip Vortices on a Hydrofoil

    DTIC Science & Technology

    1986-11-28

    Only OCR fragments of this scanned report are recoverable: figure-list entries concerning the vertical location of the tip vortex center as measured from the flow visualization images; text noting that pressure gradients of opposite sign exist on both sides of an airfoil, inducing an inward lateral flow on the suction side; and a reference to Cebeci et al. (1986), who developed a viscous/inviscid interaction method to calculate the flow around airfoils.

  7. Raman Spectroscopy of Isotopic Water Diffusion in Ultraviscous, Glassy, and Gel States in Aerosol by Use of Optical Tweezers.

    PubMed

    Davies, James F; Wilson, Kevin R

    2016-02-16

    The formation of ultraviscous, glassy, and amorphous gel states in aqueous aerosol following the loss of water results in nonequilibrium dynamics due to the extended time scales for diffusive mixing. Existing techniques for measuring water diffusion by isotopic exchange are limited by contact of samples with the substrate, and methods applied to infer diffusion coefficients from mass transport in levitated droplets require analysis by complex coupled differential equations to derive diffusion coefficients. We present a new technique that combines contactless levitation using aerosol optical tweezers with isotopic exchange (D2O/H2O) to measure the water diffusion coefficient over a broad range (Dw ≈ 10^-12 to 10^-17 m^2·s^-1) in viscous organic liquids (citric acid, sucrose, and shikimic acid) and inorganic gels (magnesium sulfate, MgSO4). For the organic liquids in binary and ternary mixtures, Dw depends on relative humidity and follows a simple compositional Vignes relationship. In MgSO4 droplets, water diffusivity decreases sharply with water activity and is consistent with predictions from percolation theory. These measurements show that, by combining micrometer-sized particle levitation (a contactless measurement with rapid mixing times) with an established probe of water diffusion, Dw can be simply and directly quantified for amorphous and glassy states that are inaccessible to existing methods.

  8. Comparison of connectivity analyses for resting state EEG data

    NASA Astrophysics Data System (ADS)

    Olejarczyk, Elzbieta; Marzetti, Laura; Pizzella, Vittorio; Zappasodi, Filippo

    2017-06-01

    Objective. In the present work, a nonlinear measure (transfer entropy, TE) was used in a multivariate approach for the analysis of effective connectivity in high density resting state EEG data in eyes open and eyes closed. Advantages of the multivariate approach in comparison to the bivariate one were tested. Moreover, the multivariate TE was compared to an effective linear measure, i.e. directed transfer function (DTF). Finally, the existence of a relationship between the information transfer and the level of brain synchronization as measured by phase synchronization value (PLV) was investigated. Approach. The comparison between the connectivity measures, i.e. bivariate versus multivariate TE, TE versus DTF, TE versus PLV, was performed by means of statistical analysis of indexes based on graph theory. Main results. The multivariate approach is less sensitive to false indirect connections with respect to the bivariate estimates. The multivariate TE differentiated better between eyes closed and eyes open conditions compared to DTF. Moreover, the multivariate TE evidenced non-linear phenomena in information transfer, which are not evidenced by the use of DTF. We also showed that the target of information flow, in particular the frontal region, is an area of greater brain synchronization. Significance. Comparison of different connectivity analysis methods pointed to the advantages of nonlinear methods, and indicated a relationship existing between the flow of information and the level of synchronization of the brain.
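
    For orientation, a minimal bivariate, binned TE estimator is sketched below; the multivariate estimator used in the paper additionally conditions on the remaining channels, which is what suppresses the indirect links mentioned above (the bin count and one-sample history length are assumptions):

    ```python
    import numpy as np

    def transfer_entropy(x, y, bins=8):
        """TE(X -> Y) in bits with one-sample histories:
        sum over states of p(y1, y0, x0) * log2[p(y1 | y0, x0) / p(y1 | y0)]."""
        x, y = np.asarray(x, float), np.asarray(y, float)
        xd = np.digitize(x, np.histogram_bin_edges(x, bins))
        yd = np.digitize(y, np.histogram_bin_edges(y, bins))
        trip = np.stack([yd[1:], yd[:-1], xd[:-1]], axis=1)
        te = 0.0
        for row, n in zip(*np.unique(trip, axis=0, return_counts=True)):
            y1, y0, x0 = row
            p_joint = n / len(trip)
            p_cond_full = n / np.sum((trip[:, 1] == y0) & (trip[:, 2] == x0))
            p_cond_y = (np.sum((trip[:, 0] == y1) & (trip[:, 1] == y0))
                        / np.sum(trip[:, 1] == y0))
            te += p_joint * np.log2(p_cond_full / p_cond_y)
        return te
    ```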

  9. Raman Spectroscopy of Isotopic Water Diffusion in Ultraviscous, Glassy, and Gel States in Aerosol by Use of Optical Tweezers

    DOE PAGES

    Davies, James F.; Wilson, Kevin R.

    2016-01-11

    The formation of ultraviscous, glassy, and amorphous gel states in aqueous aerosol following the loss of water results in nonequilibrium dynamics due to the extended time scales for diffusive mixing. Existing techniques for measuring water diffusion by isotopic exchange are limited by contact of samples with the substrate, and methods applied to infer diffusion coefficients from mass transport in levitated droplets require analysis by complex coupled differential equations to derive diffusion coefficients. Here, we present a new technique that combines contactless levitation using aerosol optical tweezers with isotopic exchange (D2O/H2O) to measure the water diffusion coefficient over a broad range (Dw ≈ 10^-12 to 10^-17 m^2·s^-1) in viscous organic liquids (citric acid, sucrose, and shikimic acid) and inorganic gels (magnesium sulfate, MgSO4). For the organic liquids in binary and ternary mixtures, Dw depends on relative humidity and follows a simple compositional Vignes relationship. In MgSO4 droplets, water diffusivity decreases sharply with water activity and is consistent with predictions from percolation theory. These measurements show that, by combining micrometer-sized particle levitation (a contactless measurement with rapid mixing times) with an established probe of water diffusion, Dw can be simply and directly quantified for amorphous and glassy states that are inaccessible to existing methods.

  10. Is ``No-Threshold'' a ``Non-Concept''?

    NASA Astrophysics Data System (ADS)

    Schaeffer, David J.

    1981-11-01

    A controversy prominent in scientific literature that has carried over to newspapers, magazines, and popular books is having serious social and political expressions today: “Is there, or is there not, a threshold below which exposure to a carcinogen will not induce cancer?” The distinction between establishing the existence of this threshold (which is a theoretical question) and its value (which is an experimental one) gets lost in the scientific arguments. Establishing the existence of this threshold has now become a philosophical question (and an emotional one). In this paper I qualitatively outline theoretical reasons why a threshold must exist, discuss experiments which measure thresholds on two chemicals, and describe and apply a statistical method for estimating the threshold value from exposure-response data.

  11. Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation

    PubMed Central

    Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal

    2017-01-01

    This work deals with robot-sensor network cooperation where sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques consider beacons as passive devices disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It is capable of exploiting robot-beacon cooperation in order to improve SLAM accuracy while meeting a given resource consumption bound expressed as the maximum number of measurements that are integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, and a distributed information-driven measurement allocation tool that dynamically selects the measurements that are integrated in SLAM, balancing uncertainty improvement and resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, resulting in significant estimation accuracies, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results obtained show its performance and robustness and evidence its advantages over existing methods. PMID:28425946
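
    A sketch of the information-driven allocation idea under standard Gaussian-filter assumptions: greedily integrate the candidate measurement with the largest log-determinant information gain until the per-iteration budget is spent (the (H, R) parameterization is an assumption, not the paper's exact interface):

    ```python
    import numpy as np

    def allocate(candidates, Sigma, budget):
        """candidates: list of (H, R) linearized measurement models; Sigma:
        current state covariance. Consumes the list; returns the chosen
        measurements and the updated covariance."""
        chosen = []
        for _ in range(min(budget, len(candidates))):
            gains = [np.linalg.slogdet(np.eye(H.shape[0])
                     + H @ Sigma @ H.T @ np.linalg.inv(R))[1]
                     for H, R in candidates]
            H, R = candidates.pop(int(np.argmax(gains)))
            K = Sigma @ H.T @ np.linalg.inv(H @ Sigma @ H.T + R)  # gain
            Sigma = (np.eye(Sigma.shape[0]) - K @ H) @ Sigma      # update
            chosen.append((H, R))
        return chosen, Sigma
    ```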

  12. Development of a measure of asthma-specific quality of life among adults.

    PubMed

    Eberhart, Nicole K; Sherbourne, Cathy D; Edelen, Maria Orlando; Stucky, Brian D; Sin, Nancy L; Lara, Marielena

    2014-04-01

    A key goal in asthma treatment is improvement in quality of life (QoL), but existing measures often confound QoL with symptoms and functional impairment. The current study addresses these limitations and the need for valid patient-reported outcome measures by using state-of-the-art methods to develop an item bank assessing QoL in adults with asthma. This article describes the process for developing an initial item pool for field testing. Five focus group interviews were conducted with a total of 50 asthmatic adults. We used "pile sorting/binning" and "winnowing" methods to identify key QoL dimensions and develop a pool of items based on statements made in the focus group interviews. We then conducted a literature review and consulted with an expert panel to ensure that no key concepts were omitted. Finally, we conducted individual cognitive interviews to ensure that items were well understood and to inform final item refinement. Six hundred and sixty-one QoL statements were identified from focus group interview transcripts and subsequently used to generate a pool of 112 items in 16 different content areas. Items covering a broad range of content were developed that can serve as a valid gauge of individuals' perceptions of the effects of asthma and its treatment on their lives. These items do not directly measure symptoms or functional impairment, yet they include a broader range of content than most existing measures of asthma-specific QoL.

  13. Initial constructs for patient-centered outcome measures to evaluate brain-computer interfaces.

    PubMed

    Andresen, Elena M; Fried-Oken, Melanie; Peters, Betts; Patrick, Donald L

    2016-10-01

    The authors describe preliminary work toward the creation of patient-centered outcome (PCO) measures to evaluate brain-computer interface (BCI) as an assistive technology (AT) for individuals with severe speech and physical impairments (SSPI). In Phase 1, 591 items from 15 existing measures were mapped to the International Classification of Functioning, Disability and Health (ICF). In Phase 2, qualitative interviews were conducted with eight people with SSPI and seven caregivers. Resulting text data were coded in an iterative analysis. Most items (79%) were mapped to the ICF environmental domain; over half (53%) were mapped to more than one domain. The ICF framework was well suited for mapping items related to body functions and structures, but less so for items in other areas, including personal factors. Two constructs emerged from qualitative data: quality of life (QOL) and AT. Component domains and themes were identified for each. Preliminary constructs, domains and themes were generated for future PCO measures relevant to BCI. Existing instruments are sufficient for initial items but do not adequately match the values of people with SSPI and their caregivers. Field methods for interviewing people with SSPI were successful, and support the inclusion of these individuals in PCO research. Implications for Rehabilitation Adapted interview methods allow people with severe speech and physical impairments to participate in patient-centered outcomes research. Patient-centered outcome measures are needed to evaluate the clinical implementation of brain-computer interface as an assistive technology.

  14. Robot-Beacon Distributed Range-Only SLAM for Resource-Constrained Operation.

    PubMed

    Torres-González, Arturo; Martínez-de Dios, Jose Ramiro; Ollero, Anibal

    2017-04-20

    This work deals with robot-sensor network cooperation in which sensor nodes (beacons) are used as landmarks for Range-Only (RO) Simultaneous Localization and Mapping (SLAM). Most existing RO-SLAM techniques treat beacons as passive devices, disregarding the sensing, computational and communication capabilities with which they are actually endowed. SLAM is a resource-demanding task. Besides the technological constraints of the robot and beacons, many applications impose further resource consumption limitations. This paper presents a scalable distributed RO-SLAM scheme for resource-constrained operation. It exploits robot-beacon cooperation to improve SLAM accuracy while meeting a given resource consumption bound, expressed as the maximum number of measurements integrated in SLAM per iteration. The proposed scheme combines a Sparse Extended Information Filter (SEIF) SLAM method, in which each beacon gathers and integrates robot-beacon and inter-beacon measurements, with a distributed information-driven measurement allocation tool that dynamically selects the measurements to be integrated in SLAM, balancing uncertainty improvement against resource consumption. The scheme adopts a robot-beacon distributed approach in which each beacon participates in the selection, gathering and integration in SLAM of robot-beacon and inter-beacon measurements, yielding significant gains in estimation accuracy, resource-consumption efficiency and scalability. It has been integrated in an octorotor Unmanned Aerial System (UAS) and evaluated in 3D SLAM outdoor experiments. The experimental results show its performance and robustness and demonstrate its advantages over existing methods.
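
    For readers unfamiliar with information-driven measurement allocation, here is a minimal Python sketch of the selection idea, assuming a linearized range-measurement model in which each candidate is a (Jacobian H, noise covariance R) pair; it illustrates greedy information-gain selection under a per-iteration budget and is not the authors' distributed SEIF implementation.

```python
import numpy as np

def info_gain(Lambda, H, R):
    """Log-determinant increase of the information matrix Lambda if a
    measurement with Jacobian H and noise covariance R were fused."""
    Lambda_new = Lambda + H.T @ np.linalg.inv(R) @ H
    return np.linalg.slogdet(Lambda_new)[1] - np.linalg.slogdet(Lambda)[1]

def allocate(Lambda, candidates, budget):
    """Greedily fuse the `budget` candidates with the highest marginal
    information gain, re-evaluating gains after every pick."""
    selected, remaining = [], list(candidates)   # candidates: (H, R) pairs
    for _ in range(min(budget, len(remaining))):
        gains = [info_gain(Lambda, H, R) for H, R in remaining]
        H, R = remaining.pop(int(np.argmax(gains)))
        Lambda = Lambda + H.T @ np.linalg.inv(R) @ H
        selected.append((H, R))
    return selected, Lambda
```

    In the paper's distributed setting this selection is carried out by the beacons themselves; the centralized loop above only conveys the uncertainty-versus-budget trade-off.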

  15. Uncertainty Evaluation of the New Setup for Measurement of Water-Vapor Permeation Rate by a Dew-Point Sensor

    NASA Astrophysics Data System (ADS)

    Hudoklin, D.; Šetina, J.; Drnovšek, J.

    2012-09-01

    The measurement of the water-vapor permeation rate (WVPR) through materials is very important in many industrial applications, such as the development of new fabrics and construction materials, the semiconductor industry, packaging and vacuum techniques. Demand for this kind of measurement has grown considerably, and many different methods for measuring the WVPR have been developed and standardized in numerous national and international standards. However, comparison of the existing methods shows a low level of mutual agreement. The objective of this paper is to demonstrate the necessary uncertainty evaluation for WVPR measurements, so as to provide a basis for the development of a corresponding reference measurement standard. The paper presents a specially developed measurement setup, which employs a precision dew-point sensor for WVPR measurements on specimens of different shapes. It also presents a physical model that aims to account for both dynamic and quasi-static methods, the common types of WVPR measurement referred to in standards and scientific publications. An uncertainty evaluation carried out according to the ISO/IEC guide to the expression of uncertainty in measurement (GUM) shows the relative expanded (k = 2) uncertainty to be 3.0 % for a WVPR of 6.71 mg·h⁻¹ (corresponding to a permeance of 30.4 mg·m⁻²·day⁻¹·hPa⁻¹).
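
    As a rough illustration of a GUM-style evaluation, the sketch below combines hypothetical relative standard uncertainty contributions in quadrature and expands the result with a coverage factor k = 2; the contribution names and values are invented for illustration and do not reproduce the paper's actual uncertainty budget.

```python
import math

# Hypothetical relative standard uncertainty contributions (fractions);
# illustrative values only, not the paper's budget.
contributions = {
    "dew_point_sensor": 0.010,
    "carrier_gas_flow": 0.008,
    "temperature":      0.005,
    "specimen_area":    0.003,
}

u_rel = math.sqrt(sum(u ** 2 for u in contributions.values()))  # combined standard uncertainty
U_rel = 2 * u_rel                                               # expanded, coverage factor k = 2

wvpr = 6.71  # mg/h, the measured value reported in the paper
print(f"expanded relative uncertainty: {100 * U_rel:.1f} %")
print(f"WVPR = {wvpr} mg/h +/- {wvpr * U_rel:.2f} mg/h (k = 2)")
```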

  16. Measurement error is often neglected in medical literature: a systematic review.

    PubMed

    Brakenhoff, Timo B; Mitroiu, Marian; Keogh, Ruth H; Moons, Karel G M; Groenwold, Rolf H H; van Smeden, Maarten

    2018-06-01

    In medical research, covariates (e.g., exposure and confounder variables) are often measured with error. While it is well accepted that this introduces bias and imprecision into exposure-outcome relations, it is unclear to what extent such issues are considered in current research practice. The objective was to study common practices regarding covariate measurement error via a systematic review of the general medicine and epidemiology literature. Original research published in 2016 in 12 high-impact journals was full-text searched for phrases relating to measurement error. Reporting of measurement error and of methods to investigate or correct for it was quantified and characterized. Two hundred and forty-seven (44%) of the 565 original research publications reported on the presence of measurement error; 83% of these 247 did so with respect to the exposure and/or confounder variables. Only 18 publications (7% of 247) used methods to investigate or correct for measurement error. Consequently, in the majority of publications in high-impact journals it is difficult for readers to judge the robustness of the presented results to measurement error. Our systematic review highlights the need for increased awareness of the possible impact of covariate measurement error. Additionally, guidance on the use of measurement error correction methods is necessary. Copyright © 2018 Elsevier Inc. All rights reserved.
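
    To see concretely why neglected covariate measurement error matters, the toy simulation below (with invented parameters) shows the classic attenuation of a naive regression slope under classical measurement error, together with a simple regression-calibration correction, one of the families of correction methods the review tallies.

```python
import numpy as np

rng = np.random.default_rng(0)
n, beta = 10_000, 0.5                 # true exposure-outcome slope

x = rng.normal(0.0, 1.0, n)           # true exposure
w = x + rng.normal(0.0, 1.0, n)       # exposure observed with classical error
y = beta * x + rng.normal(0.0, 1.0, n)

# The naive slope regresses y on the error-prone w and is attenuated by
# the reliability ratio var(x) / var(w) (= 0.5 in this setup).
cov = np.cov(w, y)
beta_naive = cov[0, 1] / cov[0, 0]

# Regression calibration rescales by the reliability ratio, which in
# practice must be estimated from replicate or validation data.
reliability = np.var(x) / np.var(w)
beta_corrected = beta_naive / reliability

print(f"naive: {beta_naive:.3f}  corrected: {beta_corrected:.3f}  true: {beta}")
```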

  17. Traceability in hardness measurements: from the definition to industry

    NASA Astrophysics Data System (ADS)

    Germak, Alessandro; Herrmann, Konrad; Low, Samuel

    2010-04-01

    The measurement of hardness has been, and continues to be, of significant importance to many of the world's manufacturing industries. Conventional hardness testing is the most commonly used method for acceptance testing and production quality control of metals and metallic products. Instrumented indentation is one of the few techniques available for obtaining various property values for coatings and electronic products at the micrometre and nanometre dimensional scales. For these industries to be successful, it is critical that measurements made by suppliers and customers agree within some practical limits. To help assure this agreement, a traceability chain for hardness measurements, from the hardness definition to industry, has developed and evolved over the past 100 years, but its development has been complicated. A hardness measurement value requires traceability not only of force, length and time measurements but also of the hardness values measured by the hardness machine. These multiple traceability paths are needed because a hardness measurement is affected by other influence parameters that are often difficult to identify, quantify and correct. This paper describes the current state of hardness measurement traceability for the conventional hardness methods (i.e. Rockwell, Brinell, Vickers and Knoop hardness) and for special-application hardness and indentation methods (i.e. elastomer, dynamic, portable and instrumented indentation).

  18. Elementary teachers' ideas about, planning for and implementation of learner-directed and teacher-directed inquiry: A mixed methods study

    NASA Astrophysics Data System (ADS)

    Biggers, Mandy Sue

    Using a framework for variations of classroom inquiry (National Research Council [NRC], 2000, p. 29), this study explored 40 inservice elementary teachers' planning, modification, and enactment of kit-based science curriculum materials. As part of the study, a new observation protocol was adapted from an existing protocol (Practices of Science Observation Protocol [P-SOP]) to measure the amount of teacher direction in science inquiry lessons (Practices of Science Observation Protocol + Directedness [P-SOPd]). An embedded mixed methods design was employed to investigate four questions: (1) How valid and reliable is the P-SOPd? (2) In what ways do inservice elementary teachers adapt existing elementary science curriculum materials across the inquiry continuum? (3) What is the relationship between the overall quality of inquiry and variations of inquiry in elementary teachers' enacted science instruction? (4) How do inservice elementary teachers' ideas about the inquiry continuum influence their adaptation of elementary science curriculum materials? Each teacher chose three lessons from a science unit for video-recorded observation and submitted lesson plans for those lessons. Lesson plans and videos were scored using the P-SOPd, and the scores were compared across the two protocols to determine whether a correlation existed between the level of inquiry (measured by the P-SOP) and the amount of teacher direction (measured by the P-SOPd). Findings indicated no significant differences between planned and enacted lessons in the amount of teacher direction, but a correlation existed between the level of inquiry and the amount of teacher direction. In effect, the elementary teachers taught their science curriculum materials with a high level of fidelity to both the features of inquiry and the amount of teacher direction. A smaller group of three case study teachers was followed for the school year to provide a more in-depth explanation of the quantitative findings. The case studies revealed that the teachers' science instruction was teacher-directed even though their conceptions of inquiry were student-directed. This study contributes to existing research on preservice teachers' learning about the inquiry continuum (Biggers & Forbes, 2012) and inservice teachers' ideas about the five features of inquiry (Biggers & Forbes, in press).

  19. Optimal patch code design via device characterization

    NASA Astrophysics Data System (ADS)

    Wu, Wencheng; Dalal, Edul N.

    2012-01-01

    In many color measurement applications, such as color calibration and profiling, "patch codes" have been used successfully for job identification and automation to reduce operator errors. A patch code is similar to a barcode, but is intended primarily for use in measurement devices that cannot read barcodes due to limited spatial resolution, such as spectrophotometers. There is an inherent tradeoff between decoding robustness and the number of code levels available for encoding. Previous methods have attempted to address this tradeoff, but those solutions have been sub-optimal. In this paper, we propose a method to design optimal patch codes via device characterization. The design balances the number of available code levels against printing and measurement effort and against decoding robustness to noise from the printing and measurement devices. Effort is drastically reduced relative to previous methods because print-and-measure is minimized through modeling and the use of existing printer profiles. Decoding robustness is improved by distributing the code levels in CIE Lab space rather than in CMYK space.
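
    A minimal sketch of the design intuition, assuming candidate patch colors already expressed in CIELAB: greedily choosing levels that maximize the minimum pairwise Lab distance spreads the codes where the devices can best discriminate them. This stand-in ignores the paper's printer and measurement-device characterization and is not the authors' optimization.

```python
import numpy as np

def pick_code_levels(candidate_labs, k):
    """Greedy max-min (farthest-point) selection of k code levels from
    candidate CIELAB colors (rows), a simple proxy for maximizing
    decoding robustness against device noise."""
    labs = np.asarray(candidate_labs, dtype=float)
    chosen = [0]                      # seed; a fuller design would search seeds
    while len(chosen) < min(k, len(labs)):
        # distance from every candidate to its nearest already-chosen level
        d = np.linalg.norm(labs[:, None, :] - labs[None, chosen, :], axis=2).min(axis=1)
        d[chosen] = -np.inf           # never re-pick a chosen level
        chosen.append(int(np.argmax(d)))
    return labs[chosen]
```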

  20. Model selection for marginal regression analysis of longitudinal data with missing observations and covariate measurement error.

    PubMed

    Shen, Chung-Wei; Chen, Yi-Hau

    2015-10-01

    Missing observations and covariate measurement error commonly arise in longitudinal data. However, existing methods for model selection in marginal regression analysis of longitudinal data fail to address the potential bias resulting from these issues. To tackle this problem, we propose a new model selection criterion, the Generalized Longitudinal Information Criterion (GLIC), which is based on an approximately unbiased estimator of the expected quadratic error of a candidate marginal model, accounting for both data missingness and covariate measurement error. Simulation results show that the proposed method performs quite well in the presence of missing data and covariate measurement error, whereas naive procedures that ignore these complexities may perform quite poorly. The proposed method is applied to data from the Taiwan Longitudinal Study on Aging to assess the relationship of depression with health and social status in the elderly, accommodating measurement error in the covariate as well as missing observations. © The Author 2015. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
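
    The sketch below conveys the general shape of criterion-based model selection that the GLIC extends, using an ordinary Mallows-Cp-style penalty on independent data; the GLIC itself corrects the expected quadratic error for missing observations and covariate measurement error, corrections this toy deliberately omits.

```python
import numpy as np

def fit_ls(y, X):
    """Ordinary least-squares fit; a stand-in for the marginal-model fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

def select_model(y, designs):
    """Pick the candidate design matrix minimizing a Mallows-Cp-style
    criterion: in-sample quadratic error plus a complexity penalty scaled
    by the error variance estimated under the richest candidate."""
    X_full = max(designs, key=lambda X: X.shape[1])
    r_full = y - X_full @ fit_ls(y, X_full)
    sigma2 = r_full @ r_full / (len(y) - X_full.shape[1])
    scores = []
    for X in designs:
        r = y - X @ fit_ls(y, X)
        scores.append(r @ r + 2 * X.shape[1] * sigma2)
    return int(np.argmin(scores))
```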
