NASA Technical Reports Server (NTRS)
Zhou, Wei
1993-01-01
In the highly accurate measurement of periodic signals, the greatest common factor frequency and its characteristics serve special functions. A method of time difference measurement, the time difference method by dual 'phase coincidence points' detection, is described. This method utilizes the characteristics of the greatest common factor frequency to measure the time or phase difference between periodic signals. It is suitable for a very wide frequency range. Measurement precision and potential accuracy of several picoseconds were demonstrated with this new method. The instrument based on this method is very simple, and the demands placed on the common oscillator are low. This method and instrument can be applied widely.
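As an illustrative aside on the idea behind this method: for two periodic signals whose frequencies share a greatest common factor f0, phase coincidences recur with period 1/f0, which is what a coincidence-point detector exploits. The sketch below (hypothetical frequencies, integer-hertz arithmetic for simplicity) computes that common factor frequency; it is not the paper's instrument design.

```python
from math import gcd

def common_factor_frequency(f1_hz: int, f2_hz: int) -> int:
    """Greatest common factor frequency of two periodic signals (integer Hz)."""
    return gcd(f1_hz, f2_hz)

# Two nominally 10 MHz sources offset by 1 Hz (hypothetical values):
f1, f2 = 10_000_000, 10_000_001
fc = common_factor_frequency(f1, f2)
print(f"common factor frequency: {fc} Hz")             # 1 Hz
print(f"phase coincidences recur every {1.0 / fc} s")  # 1.0 s
```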
Coordinate alignment of combined measurement systems using a modified common points method
NASA Astrophysics Data System (ADS)
Zhao, G.; Zhang, P.; Xiao, W.
2018-03-01
Coordinate metrology has been extensively researched for its outstanding advantages in measurement range and accuracy. The alignment of different measurement systems is usually achieved by integrating local coordinates via common points before measurement. Alignment errors accumulate and significantly reduce the global accuracy, and thus need to be minimized. In this paper, a modified common points method (MCPM) is proposed to combine the different traceable system errors of the cooperating machines and to optimize the global accuracy by introducing mutual geometric constraints. The geometric constraints, obtained by measuring the common points in the individual local coordinate systems, make it possible to reduce the local measuring uncertainty and thereby enhance the global measuring certainty. A simulation system is developed in Matlab to analyze the features of MCPM using the Monte Carlo method. An exemplary setup combining laser tracker and indoor iGPS systems is constructed to verify the feasibility and efficiency of the proposed method. Experimental results show that MCPM can significantly improve the alignment accuracy.
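For context, the conventional common-points alignment that MCPM modifies reduces, in the rigid case, to a least-squares fit of a rotation and translation between two coordinate frames. A minimal sketch of that baseline step (the standard Kabsch/SVD solution, not the MCPM constraint machinery):

```python
import numpy as np

def align_common_points(p_local: np.ndarray, p_global: np.ndarray):
    """Least-squares rigid transform (R, t) with p_global ~ R @ p_local + t,
    fit from the same common points measured in both frames ((N, 3), N >= 3)."""
    mu_l, mu_g = p_local.mean(axis=0), p_global.mean(axis=0)
    H = (p_local - mu_l).T @ (p_global - mu_g)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, mu_g - R @ mu_l

# Synthetic check: recover a known rotation and translation.
rng = np.random.default_rng(0)
pts = rng.random((5, 3))
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
R_est, t_est = align_common_points(pts, pts @ R_true.T + np.array([1.0, 2.0, 3.0]))
print(np.allclose(R_est, R_true), np.round(t_est, 6))
```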
ERIC Educational Resources Information Center
Johnson, R. Jeremy; Savas, Christopher J.; Kartje, Zachary; Hoops, Geoffrey C.
2014-01-01
Measurement of protein denaturation and protein folding is a common laboratory technique used in undergraduate biochemistry laboratories. Differential scanning fluorimetry (DSF) provides a rapid, sensitive, and general method for measuring protein thermal stability in an undergraduate biochemistry laboratory. In this method, the thermal…
Improved collaborative filtering recommendation algorithm of similarity measure
NASA Astrophysics Data System (ADS)
Zhang, Baofu; Yuan, Baoping
2017-05-01
The collaborative filtering recommendation algorithm is one of the most widely used algorithms in personalized recommender systems. The key is to find the nearest-neighbor set of the active user by using a similarity measure. However, traditional similarity measures focus mainly on the similarity of the users' common rating items but ignore the relationship between the common rating items and all of the items a user rates. Moreover, because the rating matrix is very sparse, the traditional collaborative filtering recommendation algorithm does not perform well. In order to obtain better accuracy, this paper presents an improved similarity measure based on the common preference between users, the difference in rating scales, and the scores of common items; building on this measure, a collaborative filtering recommendation algorithm based on the improved similarity is proposed. Experimental results show that the algorithm can effectively improve the quality of recommendation, thus alleviating the impact of data sparseness.
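The abstract does not give the exact formula, but a common way to fold the co-rated/total-rated relationship into a similarity measure is to weight the Pearson correlation over co-rated items by a Jaccard-style overlap factor. A minimal sketch under that assumption:

```python
import numpy as np

def improved_similarity(ratings: np.ndarray, u: int, v: int) -> float:
    """Pearson similarity over co-rated items, weighted by how large the
    co-rated set is relative to everything either user has rated.
    ratings: (users, items) matrix with 0 meaning 'not rated'."""
    ru, rv = ratings[u], ratings[v]
    common = (ru > 0) & (rv > 0)
    rated_any = (ru > 0) | (rv > 0)
    if common.sum() < 2:
        return 0.0
    a, b = ru[common].astype(float), rv[common].astype(float)
    a_c, b_c = a - a.mean(), b - b.mean()
    denom = np.linalg.norm(a_c) * np.linalg.norm(b_c)
    pearson = float(a_c @ b_c / denom) if denom > 0 else 0.0
    jaccard = common.sum() / rated_any.sum()   # penalizes tiny overlaps
    return pearson * jaccard

R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5]])
print(improved_similarity(R, 0, 1))   # 1.0 Pearson scaled by 2/3 overlap
```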
Timothy G. Wade; James D. Wickham; Maliha S. Nash; Anne C. Neale; Kurt H. Riitters; K. Bruce Jones
2003-01-01
GIS-based measurements that combine native raster and native vector data are commonly used in environmental assessments. Most of these measurements can be calculated using either raster or vector data formats and processing methods. Raster processes are more commonly used because they can be significantly faster computationally...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Michael; Haeri, Hossein; Reynolds, Arlis
This chapter provides a set of model protocols for determining energy and demand savings that result from specific energy efficiency measures implemented through state and utility efficiency programs. The methods described here are approaches that are, or are among, the most commonly used and accepted in the energy efficiency industry for certain measures or programs. As such, they draw from the existing body of research and best practices for energy efficiency program evaluation, measurement, and verification (EM&V). These protocols were developed as part of the Uniform Methods Project (UMP), funded by the U.S. Department of Energy (DOE). The principal objective for the project was to establish easy-to-follow protocols based on commonly accepted methods for a core set of widely deployed energy efficiency measures.
Identification of common coexpression modules based on quantitative network comparison.
Jo, Yousang; Kim, Sanghyeon; Lee, Doheon
2018-06-13
Finding common molecular interactions from different samples is essential to understanding diseases and other biological processes. Coexpression networks and their modules directly reflect sample-specific interactions among genes. Therefore, identification of common coexpression networks or modules may reveal the molecular mechanism of complex disease or the relationship between biological processes. However, no quantitative comparison method existed for coexpression networks, and the previous methods we examined for other network types cannot be applied to coexpression networks. Therefore, we aimed to propose quantitative comparison methods for coexpression networks and to use the new methods to find common biological mechanisms between Huntington's disease and brain aging. We proposed two similarity measures for the quantitative comparison of coexpression networks. We then performed experiments using known coexpression networks, demonstrated the validity of the two measures, and derived threshold values for similar coexpression network pairs from the experiments. Using these similarity measures and thresholds, we quantitatively measured the similarity between disease-specific and aging-related coexpression modules and found similar Huntington's disease-aging coexpression module pairs. These modules are related to brain development, cell death, and immune response, suggesting that up-regulated cell signalling related to cell death and immune/inflammation response may be a common molecular mechanism in the pathophysiology of HD and normal brain aging in the frontal cortex.
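The two similarity measures themselves are not specified in the abstract; as a hedged illustration, a quantitative comparison of two coexpression networks might combine edge-set overlap with agreement of coexpression weights on shared edges:

```python
def network_similarity(edges_a: dict, edges_b: dict) -> tuple:
    """Two simple similarity measures between coexpression networks given as
    {frozenset({gene1, gene2}): weight} dictionaries.

    Returns (edge Jaccard index, mean absolute weight difference on shared edges).
    """
    shared = edges_a.keys() & edges_b.keys()
    union = edges_a.keys() | edges_b.keys()
    jaccard = len(shared) / len(union) if union else 0.0
    if shared:
        mean_dw = sum(abs(edges_a[e] - edges_b[e]) for e in shared) / len(shared)
    else:
        mean_dw = float("nan")
    return jaccard, mean_dw

a = {frozenset({"G1", "G2"}): 0.9, frozenset({"G2", "G3"}): 0.7}
b = {frozenset({"G1", "G2"}): 0.8, frozenset({"G3", "G4"}): 0.6}
print(network_similarity(a, b))   # (0.333..., 0.1)
```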
NASA Astrophysics Data System (ADS)
Jiang, Xiangqian; Wang, Kaiwei; Martin, Haydn
2006-12-01
We introduce a new surface measurement method for potential online application. Compared with our previous research, the new design is a significant improvement. It also features high stability because it uses a near common-path configuration. The method should be of great benefit to advanced manufacturing, especially for quality and process control in ultraprecision manufacturing and on the production line. Proof-of-concept experiments have been successfully conducted by measuring the system repeatability and the displacements of a mirror surface.
Quantitative comparison of in situ soil CO2 flux measurement methods
Jennifer D. Knoepp; James M. Vose
2002-01-01
Development of reliable regional or global carbon budgets requires accurate measurement of soil CO2 flux. We conducted laboratory and field studies to determine the accuracy and comparability of methods commonly used to measure in situ soil CO2 fluxes. Methods compared included CO2...
Karasz, Alison; Patel, Viraj; Kabita, Mahbhooba; Shimu, Parvin
2015-01-01
Background: Though common mental disorder (CMD) is highly prevalent among South Asian immigrant women, they rarely seek mental health treatment. This may be due in part to the lack of conceptual synchrony between medical models of mental disorder and the social models of distress common in South Asian communities. Furthermore, common mental health screening and diagnostic measures may not adequately capture distress in this group. CBPR is ideally suited to help address measurement issues in CMD as well as to develop culturally appropriate treatment models. Objectives: To use participatory methods to identify an appropriate, culturally specific mental health syndrome and develop an instrument to measure this syndrome. Methods: We formed a partnership between researchers, clinicians, and community members. The partnership selected a culturally specific model of emotional distress/illness, “Tension,” as a focus for further study. Partners developed a scale to measure Tension and tested the new scale on 162 Bangladeshi immigrant women living in the Bronx. Results: The 24-item “Tension Scale” had high internal consistency (alpha = 0.83). In bivariate analysis, the scale correlated significantly, in the expected direction, with depression as measured by the PHQ-2, age, education, self-rated health, having seen a physician in the past year, and other variables. Conclusions: Using participatory techniques, we created a new measure designed to assess common mental disorder in an isolated immigrant group. The new measure shows excellent psychometric properties and will be helpful in the implementation of a community-based, culturally synchronous intervention for depression. We describe a useful strategy for the rapid development and field testing of culturally appropriate measures of mental distress and disorder. PMID:24375184
Glanz, Karen; Johnson, Lauren; Yaroch, Amy L; Phillips, Matthew; Ayala, Guadalupe X; Davis, Erica L
2016-04-01
This review describes available measures of retail food store environments, including data collection methods, characteristics of measures, the dimensions most commonly captured across methods, and their strengths and limitations. Articles were included if they were published between 1990 and 2015 in an English-language peer-reviewed journal and presented original research findings on the development and/or use of a measure or method to assess retail food store environments. Four sources were used, including literature databases, backward searching of identified articles, published reviews, and measurement registries. From 3,013 citations identified, 125 observational studies and 5 studies that used sales records were reviewed in-depth. Most studies were cross-sectional and based in the US. The most common tools used were the US Department of Agriculture's Thrifty Food Plan and the Nutrition Environment Measures Survey for Stores. The most common attribute captured was availability of healthful options, followed by price. Measurement quality indicators were minimal and focused mainly on assessments of reliability. Two widely used tools to measure retail food store environments are available and can be refined and adapted. Standardization of measurement across studies and reports of measurement quality (eg, reliability, validity) may better inform practice and policy changes. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
The Other Side of Method Bias: The Perils of Distinct Source Research Designs
ERIC Educational Resources Information Center
Kammeyer-Mueller, John; Steel, Piers D. G.; Rubenstein, Alex
2010-01-01
Common source bias has been the focus of much attention. To minimize the problem, researchers have sometimes been advised to take measurements of predictors from one observer and measurements of outcomes from another observer or to use separate occasions of measurement. We propose that these efforts to eliminate biases due to common source…
DOE Office of Scientific and Technical Information (OSTI.GOV)
The most common method of measuring air leakage is to perform a single (or solo) blower door pressurization and/or depressurization test. In detached housing, the single blower door test measures leakage to the outside. In attached housing, however, this "solo" test method measures both air leakage to the outside and air leakage between adjacent units through common surfaces. Although minimizing leakage to neighboring units is highly recommended to avoid indoor air quality issues between units, reduce pressure differentials between units, and control stack effect, the energy benefits of air sealing can be significantly overpredicted if the solo air leakage number is used in the energy analysis. Guarded blower door testing is more appropriate for isolating and measuring leakage to the outside in attached housing. This method uses multiple blower doors to depressurize adjacent spaces to the same level as the unit being tested. Maintaining a neutral pressure across common walls, ceilings, and floors acts as a "guard" against air leakage between units. The resulting measured air leakage in the test unit is only air leakage to the outside. Although preferred for assessing energy impacts, the challenges of performing guarded testing can be daunting.
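A worked illustration of why the solo number overpredicts savings (all figures hypothetical): the guarded reading isolates leakage to the outside, and the difference from the solo reading is inter-unit leakage.

```python
def leakage_to_outside(solo_cfm50: float, guarded_cfm50: float) -> dict:
    """Split a solo blower-door result for an attached unit into leakage to
    outside (the guarded reading) and leakage to neighboring units.
    Values are airflow at 50 Pa (CFM50)."""
    inter_unit = solo_cfm50 - guarded_cfm50
    return {
        "outside_cfm50": guarded_cfm50,
        "inter_unit_cfm50": inter_unit,
        "solo_overstatement_pct": 100.0 * inter_unit / guarded_cfm50,
    }

# Hypothetical attached unit: solo test reads 1800 CFM50, guarded test 1200.
print(leakage_to_outside(solo_cfm50=1800.0, guarded_cfm50=1200.0))
```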
A new method for registration of heterogeneous sensors in a dimensional measurement system
NASA Astrophysics Data System (ADS)
Zhao, Yan; Wang, Zhong; Fu, Luhua; Qu, Xinghua; Zhang, Heng; Liu, Changjie
2017-10-01
Registration of multiple sensors is a basic step in multi-sensor dimensional or coordinate measuring systems before any measurement. In most cases, a common standard is measured by all sensors, and this works well for the general registration of multiple homogeneous sensors. However, when inhomogeneous sensors detect a common standard, it is usually very difficult to obtain the same information, because of the different working principles of the sensors. In this paper, a new method called multiple-steps registration is proposed to register two sensors: a video camera sensor (VCS) and a tactile probe sensor (TPS). In this method, the two sensors measure two separate standards, a chrome circle on a reticle and a reference sphere, fixed on a steel plate with a constant distance between them. The VCS captures only the circle and the TPS touches only the sphere. Both simulations and real experiments demonstrate that the proposed method is robust and accurate for the registration of multiple inhomogeneous sensors in a dimensional measurement system.
Use of ground penetrating radar for construction quality assurance of concrete pavement.
DOT National Transportation Integrated Search
2009-11-01
Extracting concrete cores is the most common method for measuring the thickness of concrete pavement for construction : quality control. Although this method provides a relatively accurate thickness measurement, it is destructive, labor : intensive, ...
Effective Methods of Teaching Moon Phases
NASA Astrophysics Data System (ADS)
Jones, Heather; Hintz, E. G.; Lawler, M. J.; Jones, M.; Mangrubang, F. R.; Neeley, J. E.
2010-01-01
This research investigates the effectiveness of several commonly used methods for teaching the causes of moon phases to sixth-grade students. The teaching methods being investigated are the use of diagrams, animations, modeling/kinesthetics, and direct observation of moon phases using a planetarium. Data for each method will be gathered by pre- and post-assessments of students' understanding of moon phases taught using one of the methods. The data will then be used to evaluate the effectiveness of each teaching method individually and comparatively, as well as each method's ability to discourage common misconceptions about moon phases. Results from this research will provide foundational data for the development of educational planetarium shows for deaf or other linguistically disadvantaged children.
An architecture for consolidating multidimensional time-series data onto a common coordinate grid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shippert, Tim; Gaustad, Krista
Consolidating measurement data for use by data models or in inter-comparison studies frequently requires transforming the data onto a common grid. Standard methods for interpolating multidimensional data are often not appropriate for data with non-homogenous dimensionality, and are hard to implement in a consistent manner for different datastreams. These challenges are increased when dealing with the automated procedures necessary for use with continuous, operational datastreams. In this paper we introduce a method of applying a series of one-dimensional transformations to merge data onto a common grid, examine the challenges of ensuring consistent application of data consolidation methods, present a framework for addressing those challenges, and describe the implementation of such a framework for the Atmospheric Radiation Measurement (ARM) program.
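A minimal sketch of the serial one-dimensional transformation idea, using plain linear interpolation along each axis in turn (the ARM framework's actual transform library is more general than this):

```python
import numpy as np

def regrid_1d_serial(data, src_axes, dst_axes):
    """Map a multidimensional field onto a common grid by interpolating one
    dimension at a time (linear).

    data: ndarray; src_axes/dst_axes: one 1-D coordinate array per dimension.
    """
    out = np.asarray(data, dtype=float)
    for dim, (src, dst) in enumerate(zip(src_axes, dst_axes)):
        out = np.apply_along_axis(
            lambda col: np.interp(dst, src, col), dim, out)
    return out

# A time x height field moved onto a coarser common grid.
t_src, z_src = np.linspace(0, 10, 11), np.linspace(0, 1, 5)
field = np.add.outer(t_src, z_src)                  # shape (11, 5)
t_dst, z_dst = np.linspace(0, 10, 6), np.linspace(0, 1, 3)
print(regrid_1d_serial(field, [t_src, z_src], [t_dst, z_dst]).shape)  # (6, 3)
```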
Solving the Capacitive Effect in the High-Frequency sweep for Langmuir Probe in SYMPLE
NASA Astrophysics Data System (ADS)
Pramila; Patel, J. J.; Rajpal, R.; Hansalia, C. J.; Anitha, V. P.; Sathyanarayana, K.
2017-04-01
Langmuir probe measurements are routinely carried out to measure various plasma parameters such as the electron density (ne), the electron temperature (Te), the floating potential (Vf), and the plasma potential (Vp). For this, the diagnostic electronics along with the biasing power supplies are installed in standard industrial racks with a 2 kV isolation transformer. The signal conditioning electronics (SCE) are housed in a 4U-chassis-based system with front-end electronics designed around high-common-mode differential amplifiers, which can measure a small differential signal in the presence of the high common-mode dc bias or ac ramp voltage used for biasing the probes. DC biasing of the probe is the most common method for obtaining its I-V characteristic, but biasing the probe with a high-frequency sweep encounters the problem of signal corruption due to the capacitive effect, especially when the sweep period and the discharge time are very fast and die down on the order of microseconds or less. This paper presents and summarises the method of removing such effects encountered while measuring the probe current.
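One common mitigation, sketched below with synthetic data, is to record the sweep with no plasma so the trace contains only the capacitive displacement current C·dV/dt, and subtract it from the plasma shot; the paper's specific SCE implementation is summarized above, not reproduced here.

```python
import numpy as np

def remove_capacitive_pickup(i_plasma, i_vacuum):
    """Subtract the capacitive (displacement) current recorded on a
    no-plasma reference sweep from the plasma-shot probe current.
    Both traces must share the same bias sweep and time base."""
    return np.asarray(i_plasma) - np.asarray(i_vacuum)

# Synthetic example: a fast triangular sweep couples C*dV/dt into the signal.
t = np.linspace(0.0, 20e-6, 2001)                  # 20 us sweep window
v_bias = 50.0 * np.abs((t * 1e5) % 2 - 1)          # triangular bias ramp
c_stray = 100e-12                                  # assumed 100 pF stray capacitance
i_cap = c_stray * np.gradient(v_bias, t)
i_true = 1e-3 * np.tanh((v_bias - 20.0) / 5.0)     # toy I-V characteristic
corrected = remove_capacitive_pickup(i_true + i_cap, i_cap)
print(np.allclose(corrected, i_true))              # True
```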
Schilling, Katherine; Applegate, Rachel
2012-01-01
Objectives and Background: Libraries are increasingly called upon to demonstrate student learning outcomes and the tangible benefits of library educational programs. This study reviewed and compared the efficacy of traditionally used measures for assessing library instruction, examining the benefits and drawbacks of assessment measures and exploring the extent to which knowledge, attitudes, and behaviors actually paralleled demonstrated skill levels. Methods: An overview of recent literature on the evaluation of information literacy education addressed these questions: (1) What evaluation measures are commonly used for evaluating library instruction? (2) What are the pros and cons of popular evaluation measures? (3) What are the relationships between measures of skills versus measures of attitudes and behavior? Research outcomes were used to identify relationships between measures of attitudes, behaviors, and skills, which are typically gathered via attitudinal surveys, written skills tests, or graded exercises. Results and Conclusions: Results provide useful information about the efficacy of instructional evaluation methods, including showing significant disparities between attitudes, skills, and information usage behaviors. This information can be used by librarians to implement the most appropriate evaluation methods for measuring important variables that accurately demonstrate students' attitudes, behaviors, or skills. PMID:23133325
Assessment methods in human body composition.
Lee, Seon Yeong; Gallagher, Dympna
2008-09-01
The present study reviews the most recently developed and commonly used methods for the determination of human body composition in vivo with relevance for nutritional assessment. Body composition measurement methods are continuously being perfected with the most commonly used methods being bioelectrical impedance analysis, dilution techniques, air displacement plethysmography, dual energy X-ray absorptiometry, and MRI or magnetic resonance spectroscopy. Recent developments include three-dimensional photonic scanning and quantitative magnetic resonance. Collectively, these techniques allow for the measurement of fat, fat-free mass, bone mineral content, total body water, extracellular water, total adipose tissue and its subdepots (visceral, subcutaneous, and intermuscular), skeletal muscle, select organs, and ectopic fat depots. There is an ongoing need to perfect methods that provide information beyond mass and structure (static measures) to kinetic measures that yield information on metabolic and biological functions. On the basis of the wide range of measurable properties, analytical methods and known body composition models, clinicians and scientists can quantify a number of body components and with longitudinal assessment, can track changes in health and disease with implications for understanding efficacy of nutritional and clinical interventions, diagnosis, prevention, and treatment in clinical settings. With the greater need to understand precursors of health risk beginning in childhood, a gap exists in appropriate in-vivo measurement methods beginning at birth.
A rapid and cost effective method for soil carbon mineralization under static incubations
USDA-ARS's Scientific Manuscript database
Soil incubations with subsequent measurement of carbon dioxide (CO2) evolved are common soil assays to estimate C mineralization rates and active organic C. Two common methods used to detect CO2 in laboratory incubations are gas chromatography (GC) and alkali absorption followed by titration (NaOH)...
USDA-ARS's Scientific Manuscript database
There has been growing concern about methods used to measure the CO2 photocompensation point, a vital parameter to model leaf photosynthesis. The CO2 photocompensation point is often measured as the common intercept of several CO2 response curves, but this method may over-estimate the CO2 photocompe...
Wirth, Troy A.; Pyke, David A.
2007-01-01
Emergency Stabilization and Rehabilitation (ES&R) and Burned Area Emergency Response (BAER) treatments are short-term, high-intensity treatments designed to mitigate the adverse effects of wildfire on public lands. The federal government expends significant resources implementing ES&R and BAER treatments after wildfires; however, recent reviews have found that existing data from monitoring and research are insufficient to evaluate the effects of these activities. The purpose of this report is to: (1) document what monitoring methods are generally used by personnel in the field; (2) describe approaches and methods for post-fire vegetation and soil monitoring documented in agency manuals; (3) determine the common elements of monitoring programs recommended in these manuals; and (4) describe a common monitoring approach to determine the effectiveness of future ES&R and BAER treatments in non-forested regions. Both qualitative and quantitative methods to measure effectiveness of ES&R treatments are used by federal land management agencies. Quantitative methods are used in the field depending on factors such as funding, personnel, and time constraints. There are seven vegetation monitoring manuals produced by the federal government that address monitoring methods for (primarily) vegetation and soil attributes. These methods vary in their objectivity and repeatability. The most repeatable methods are point-intercept, quadrat-based density measurements, gap intercepts, and direct measurement of soil erosion. Additionally, these manuals recommend approaches for designing monitoring programs for the state of ecosystems or the effect of management actions. The elements of a defensible monitoring program applicable to ES&R and BAER projects that most of these manuals have in common are objectives, stratification, control areas, random sampling, data quality, and statistical analysis. The effectiveness of treatments can be determined more accurately if data are gathered using an approach that incorporates these six monitoring program design elements and objectives, as well as repeatable procedures to measure cover, density, gap intercept, and soil erosion within each ecoregion and plant community. Additionally, using a common monitoring program design with comparable methods, consistently documenting results, and creating and maintaining a central database for query and reporting, will ultimately allow a determination of the effectiveness of post-fire rehabilitation activities region-wide.
A Simple and Accurate Method for Measuring Enzyme Activity.
ERIC Educational Resources Information Center
Yip, Din-Yan
1997-01-01
Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…
Vertebral rotation measurement: a summary and comparison of common radiographic and CT methods
Lam, Gabrielle C; Hill, Doug L; Le, Lawrence H; Raso, Jim V; Lou, Edmond H
2008-01-01
Current research has provided a more comprehensive understanding of Adolescent Idiopathic Scoliosis (AIS) as a three-dimensional spinal deformity, encompassing both lateral and rotational components. Apart from quantifying curve severity using the Cobb angle, vertebral rotation has become increasingly prominent in the study of scoliosis. It demonstrates significance in both preoperative and postoperative assessment, providing better appreciation of the impact of bracing or surgical interventions. In the past, the need for computer resources, digitizers and custom software limited studies of rotation to research performed after a patient left the scoliosis clinic. With advanced technology, however, rotation measurements are now more feasible. While numerous vertebral rotation measurement methods have been developed and tested, thorough comparisons of these are still relatively unexplored. This review discusses the advantages and disadvantages of six common measurement techniques based on technology most pertinent in clinical settings: radiography (Cobb, Nash-Moe, Perdriolle and Stokes' method) and computed tomography (CT) imaging (Aaro-Dahlborn and Ho's method). Better insight into the clinical suitability of rotation measurement methods currently available is presented, along with a discussion of critical concerns that should be addressed in future studies and development of new methods. PMID:18976498
Non-nuclear methods for HMA density measurements : final report, June 2008.
DOT National Transportation Integrated Search
2008-05-01
Non-nuclear methods for the measurement of hot-mix asphalt (HMA) density offer the ability to take numerous density readings in a very short period of time, without the need for intensive licensing, training, and maintenance efforts common to nuclear...
Use of the Digital Surface Roughness Meter in Virginia.
DOT National Transportation Integrated Search
2006-01-01
Pavement surface texture is measured in a variety of ways in Virginia. Two methods commonly used are ASTM E 965, Standard Test Method for Measuring Pavement Macrotexture Depth Using a Volumetric Technique, known as the "sand patch" test, and ASTM E 2...
Aspy, Denholm J; Delfabbro, Paul; Proeve, Michael
2015-05-01
There are two methods commonly used to measure dream recall in the home setting. The retrospective method involves asking participants to estimate their dream recall in response to a single question, and the logbook method involves keeping a daily record of one's dream recall. Until recently, the implicit assumption has been that these measures are largely equivalent. However, this is challenged by the tendency for retrospective measures to yield significantly lower dream recall rates than logbooks. A common explanation for this is that retrospective measures underestimate dream recall. Another is that keeping a logbook enhances it. If retrospective measures underestimate dream recall and logbooks enhance it, then both are unlikely to reflect typical dream recall rates and may be confounded with variables associated with the underestimation and enhancement effects. To date, this issue has received insufficient attention. The present review addresses this gap in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.
Leslie, Daniel C; Melnikoff, Brett A; Marchiarullo, Daniel J; Cash, Devin R; Ferrance, Jerome P; Landers, James P
2010-08-07
Quality control of microdevices adds significant costs, in time and money, to any fabrication process. A simple, rapid quantitative method for the post-fabrication characterization of microchannel architecture using the measurement of flow with volumes relevant to microfluidics is presented. By measuring the mass of a dye solution passed through the device, it circumvents traditional gravimetric and interface-tracking methods that suffer from variable evaporation rates and the increased error associated with smaller volumes. The multiplexed fluidic resistance (MFR) measurement method measures flow via stable visible-wavelength dyes, a standard spectrophotometer and common laboratory glassware. Individual dyes are used as molecular markers of flow for individual channels, and in channel architectures where multiple channels terminate at a common reservoir, spectral deconvolution reveals the individual flow contributions. On-chip, this method was found to maintain accurate flow measurement at lower flow rates than the gravimetric approach. Multiple dyes are shown to allow for independent measurement of multiple flows on the same device simultaneously. We demonstrate that this technique is applicable for measuring the fluidic resistance, which is dependent on channel dimensions, in four fluidically connected channels simultaneously, ultimately determining that one chip was partially collapsed and, therefore, unusable for its intended purpose. This method is thus shown to be widely useful in troubleshooting microfluidic flow characteristics.
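The deconvolution step reduces to Beer-Lambert linear algebra: with the pure-dye spectra as columns of a matrix, a least-squares solve on the mixed spectrum from the common reservoir recovers each dye's concentration, and hence each channel's flow fraction. A sketch with hypothetical absorbance values:

```python
import numpy as np

# Columns: absorbance spectra of the pure marker dyes at the measured
# wavelengths (Beer-Lambert: A = E @ c for unit path length; values assumed).
E = np.array([[0.90, 0.10],    # 450 nm
              [0.40, 0.50],    # 550 nm
              [0.05, 0.80]])   # 650 nm
a_mixed = np.array([0.320, 0.370, 0.415])   # spectrum of the common reservoir

# Least-squares deconvolution recovers each dye's concentration, hence the
# relative flow contribution of each channel.
c, *_ = np.linalg.lstsq(E, a_mixed, rcond=None)
flow_fractions = c / c.sum()
print(np.round(c, 3), np.round(flow_fractions, 3))   # [0.3 0.5] -> [0.375 0.625]
```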
Simultaneous optimization method for absorption spectroscopy postprocessing.
Simms, Jean M; An, Xinliang; Brittelle, Mack S; Ramesh, Varun; Ghandhi, Jaal B; Sanders, Scott T
2015-05-10
A simultaneous optimization method is proposed for absorption spectroscopy postprocessing. This method is particularly useful for thermometry measurements based on congested spectra, as commonly encountered in combustion applications of H2O absorption spectroscopy. A comparison test demonstrated that the simultaneous optimization method had greater accuracy, greater precision, and was more user-independent than the common step-wise postprocessing method previously used by the authors. The simultaneous optimization method was also used to process experimental data from an environmental chamber and a constant volume combustion chamber, producing results with errors on the order of only 1%.
Bieńkowski, Paweł; Cała, Paweł; Zubrzak, Bartłomiej
2015-01-01
This paper presents the characteristics of the mobile phone base station (BS) as an electromagnetic field (EMF) source. The most common system configurations and their construction are described. The parameters of the radiated EMF are discussed in the context of the access methods and other parameters of the radio transmission. Attention is also paid to the antennas used in this technology. The influence of the individual components of a multi-frequency EMF, most commonly found in BS surroundings, on the resultant EMF strength value indicated by popular broadband EMF meters was analyzed. Examples of the metrological characteristics of the most common EMF probes and 2 measurement scenarios for a multisystem base station, with and without microwave relays, are shown. The presented method for measuring a multi-frequency EMF using 2 broadband probes allows for significant minimization of the measurement uncertainty. Equations and formulas that can be used to calculate the actual EMF intensity from multi-frequency sources are given. They have been verified under laboratory conditions on a standard setup as well as under real conditions in a survey of an existing base station with microwave relays. The presented measurement methodology for multi-frequency EMF from BSs with microwave relays was validated both in laboratory and real conditions, and it has been shown to be the optimal approach to the evaluation of EMF exposure in BS surroundings. Alternative approaches with much greater uncertainty (precaution method) or a more complex measuring procedure (sources exclusion method) are also presented. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
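The published equations are not reproduced in the abstract; as an illustration of the two-probe principle, if each broadband probe's relative power response in the two sub-bands is known from calibration, the per-band field strengths follow from a 2x2 linear solve (all numbers hypothetical):

```python
import numpy as np

# Relative power response of each broadband probe in the two sub-bands
# (base-station band vs. microwave-relay band); assumed calibration values.
K = np.array([[1.00, 0.02],    # probe A: nearly blind to the relay band
              [0.95, 0.90]])   # probe B: sees both bands

readings_sq = np.array([12.3**2, 18.1**2])   # indicated field strengths (V/m), squared

e_sq = np.linalg.solve(K, readings_sq)       # per-band squared field strengths
e_band = np.sqrt(e_sq)
total = np.sqrt(e_sq.sum())                  # resultant multi-frequency field
print(np.round(e_band, 2), round(float(total), 2))
```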
Platinum thin film resistors as accurate and stable temperature sensors
NASA Technical Reports Server (NTRS)
Diehl, W.
1984-01-01
The measurement characteristics of thin-Pt-film temperature sensors fabricated using advanced methods are discussed. The limitations of wound-wire Pt temperature sensors and the history of Pt-film development are outlined, and the commonly used film-deposition, structuring, and trimming methods are presented in a table. The development of a family of sputtered film resistors is described in detail and illustrated with photographs of the different types. The most commonly used tolerances are reported as ±(0.3 °C + 0.5 percent of the measured temperature).
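Not from the paper, but for reference: industrial Pt film sensors are normally read out with the IEC 60751 Callendar-Van Dusen polynomial. A minimal conversion from resistance to temperature above 0 °C looks like this:

```python
# IEC 60751 Callendar-Van Dusen relation for a Pt resistor at T >= 0 degC:
#   R(T) = R0 * (1 + A*T + B*T**2)
A_CVD = 3.9083e-3    # 1/degC
B_CVD = -5.775e-7    # 1/degC**2

def pt_temperature(r_ohm: float, r0_ohm: float = 100.0) -> float:
    """Temperature (degC, T >= 0) of a Pt100-style film resistor from its
    resistance, by inverting the quadratic CVD polynomial."""
    ratio = r_ohm / r0_ohm
    disc = A_CVD**2 - 4.0 * B_CVD * (1.0 - ratio)
    return (-A_CVD + disc**0.5) / (2.0 * B_CVD)

print(pt_temperature(138.5055))   # 100.0 degC for a standard Pt100
```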
FIELD MEASUREMENT OF DISSOLVED OXYGEN: A COMPARISON OF METHODS
The ability to confidently measure the concentration of dissolved oxygen (D.O.) in ground water is a key aspect of remedial selection and assessment. Presented here is a comparison of the commonly practiced methods for determining D.O. concentrations in ground water, including c...
Urbán, Róbert; Szigeti, Réka; Kökönyei, Gyöngyi; Demetrovics, Zsolt
2014-06-01
The Rosenberg Self-Esteem Scale (RSES) is a widely used measure for assessing self-esteem, but its factor structure is debated. Our goals were to compare 10 alternative models for the RSES and to quantify and predict the method effects. The sample involves two waves (N = 2,513 9th-grade and 2,370 10th-grade students) from five waves of a school-based longitudinal study. The RSES was administered in each wave. The global self-esteem factor with two latent method factors yielded the best fit to the data. The global factor explained a large amount of the common variance (61% and 46%); however, a relatively large proportion of the common variance was attributed to the negative method factor (34% and 41%), and a small proportion of the common variance was explained by the positive method factor (5% and 13%). We conceptualized the method effect as a response style and found that being a girl and having a higher number of depressive symptoms were associated with both low self-esteem and negative response style, as measured by the negative method factor. Our study supported the one global self-esteem construct and quantified the method effects in adolescents.
ERIC Educational Resources Information Center
Ungar, Michael; Liebenberg, Linda
2011-01-01
An international team of investigators in 11 countries have worked collaboratively to develop a culturally and contextually relevant measure of youth resilience, the Child and Youth Resilience Measure (CYRM-28). The team used a mixed methods design that facilitated understanding of both common and unique aspects of resilience across cultures.…
Currently there are no EPA reference sampling methods that have been promulgated for measuring stack emissions of Hg from coal combustion sources; however, EPA Method 29 is most commonly applied. The draft ASTM Ontario Hydro Method for measuring oxidized, elemental, particulate-b...
An experimental method to simulate incipient decay of wood basidiomycete fungi
Simon Curling; Jerrold E. Winandy; Carol A. Clausen
2000-01-01
At very early stages of wood decay by basidiomycete fungi, strength loss can be measured before any measurable weight loss occurs. Therefore, strength loss is a more efficient measure of incipient decay than weight loss. However, common standard decay tests (e.g., EN 113 or ASTM D2017) use weight loss as the measure of decay. A method was developed that allowed...
Laser tracker orientation in confined space using on-board targets
NASA Astrophysics Data System (ADS)
Gao, Yang; Kyle, Stephen; Lin, Jiarui; Yang, Linghui; Ren, Yu; Zhu, Jigui
2016-08-01
This paper presents a novel orientation method for two laser trackers using on-board targets attached to the tracker head and rotating with it. The technique extends an existing method developed for theodolite intersection systems which are now rarely used. This method requires only a very narrow space along the baseline between the instrument heads, in order to establish the orientation relationship. This has potential application in environments where space is restricted. The orientation parameters can be calculated by means of two-face reciprocal measurements to the on-board targets, and measurements to a common point close to the baseline. An accurate model is then applied which can be solved through nonlinear optimization. Experimental comparison has been made with the conventional orientation method, which is based on measurements to common intersection points located off the baseline. This requires more space and the comparison has demonstrated the feasibility of the more compact technique presented here. Physical setup and testing suggest that the method is practical. Uncertainties estimated by simulation indicate good performance in terms of measurement quality.
The feasibility of harmonizing gluten ELISA measurements.
Rzychon, Malgorzata; Brohée, Marcel; Cordeiro, Fernando; Haraszi, Reka; Ulberth, Franz; O'Connor, Gavin
2017-11-01
Many publications have highlighted that routine ELISA methods do not give rise to equivalent gluten content measurement results. In this study, we assess this variation between results and its likely impact on the enforcement of the EU gluten-free legislation. This study systematically examines the feasibility of harmonizing gluten ELISA assays by the introduction of: a common extraction procedure; a common calibrator, such as a pure gluten extract and an incurred matrix material. The comparability of measurements is limited by a weak correlation between kit results caused by differences in the selectivity of the methods. This lack of correlation produces bias that cannot be corrected by using reference materials alone. The use of a common calibrator reduced the between-assay variability to some extent, but variation due to differences in selectivity of the assays was unaffected. Consensus on robust markers and their conversion to "gluten content" are required. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Moving beyond Traditional Methods of Survey Validation
ERIC Educational Resources Information Center
Maul, Andrew
2017-01-01
In his focus article, "Rethinking Traditional Methods of Survey Validation," published in this issue of "Measurement: Interdisciplinary Research and Perspectives," Andrew Maul wrote that it is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes, such as…
Pollard, Beth; Johnston, Marie; Dixon, Diane
2007-01-01
Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of i) its theoretical framework (was there a definition of what was being assessed, and was it part of a theoretical model?) and ii) its methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format, and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest), and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological development, the clinician report measures appeared less well developed. It would be of value if new measures defined the construct of interest and if that construct were part of a theoretical model. By ensuring measures are both theoretically and empirically valid, improvements in subjective health outcome measures should be possible. PMID:17343739
Comparability and repeatability of three commonly used methods for measuring endurance capacity.
Baxter-Gilbert, James; Mühlenhaupt, Max; Whiting, Martin J
2017-12-01
Measures of endurance (time to exhaustion) have been used to address a wide range of questions in ecomorphological and physiological research, as well as being used as a proxy for survival and fitness. Swimming, stationary (circular) track running, and treadmill running are all commonly used methods for measuring endurance. Despite the use of these methods across a broad range of taxa, how comparable these methods are to one another, and whether they are biologically relevant, is rarely examined. We used Australian water dragons (Intellagama lesueurii), a species that is morphologically adept at climbing, swimming, and running, to compare these three methods of endurance and examined if there is repeatability within and between trial methods. We found that time to exhaustion was not highly repeatable within a method, suggesting that single measures or a mean time to exhaustion across trials are not appropriate. Furthermore, we compared mean maximal endurance times among the three methods, and found that the two running methods (i.e., stationary track and treadmill) were similar, but swimming was distinctly different, resulting in lower mean maximal endurance times. Finally, an individual's endurance rank was not repeatable across methods, suggesting that the three endurance trial methods are not providing similar information about an individual's performance capacity. Overall, these results highlight the need to carefully match a measure of performance capacity with the study species and the research questions being asked so that the methods being used are behaviorally, ecologically, and physiologically relevant. © 2018 Wiley Periodicals, Inc.
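Repeatability of this kind is typically quantified with an intraclass correlation coefficient; a minimal one-way ICC(1,1) computation on made-up time-to-exhaustion data (not the study's dataset or necessarily its exact statistic) is:

```python
import numpy as np

def icc_1_1(x: np.ndarray) -> float:
    """One-way random-effects ICC(1,1): consistency of repeated trials.
    x has shape (subjects, trials)."""
    n, k = x.shape
    grand = x.mean()
    ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

# Times to exhaustion (s) for 5 animals x 3 trials of one method (made-up data).
times = np.array([[310.0, 295.0, 322.0],
                  [180.0, 240.0, 205.0],
                  [420.0, 385.0, 410.0],
                  [250.0, 260.0, 300.0],
                  [330.0, 290.0, 260.0]])
print(round(icc_1_1(times), 2))
```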
Measurement environments and testing
NASA Astrophysics Data System (ADS)
Marvin, A. C.
1991-06-01
The various methods used to assess both the emission (interference generation) performance of electronic equipment and the immunity of electronic equipment to external electromagnetic interference are described. The measurement methods attempt to simulate realistic operating conditions for the equipment being tested, yet at the same time they must be repeatable and practical to operate. This has led to the development of a variety of test methods, each of which has its limitations. The focus is on the most common measurement methods, such as open-field test sites, screened enclosures and transverse electromagnetic (TEM) cells. The physical justification for the methods, their limitations, and measurement precision are described. Ways of relating similar measurements made by different methods are discussed, and some thoughts on future measurement improvements are presented.
Zhang, Shu-Bo; Lai, Jian-Huang
2015-03-01
Quantifying the semantic similarities between pairs of terms in the Gene Ontology (GO) structure can help to explore the functional relationships between biological entities. A common approach to this problem is to measure the information two terms have in common based on the information content of their common ancestors. However, many studies are limited in how they measure the information two GO terms share. This study presents a new measurement, exclusively inherited shared information (EISI), that captures the information shared by two terms based on an intuitive observation about the multiple inheritance relationships among the terms in the GO graph. EISI is derived from the information content of the exclusively inherited common ancestors (EICAs), which are screened from the common ancestors according to the attributes of their direct children. The effectiveness of EISI was evaluated against some state-of-the-art measurements on both artificial and real datasets: it produced results more consistent with experts' scores on the artificial dataset, and it supported the prior knowledge of gene function in pathways on the Saccharomyces Genome Database (SGD). The promising features of EISI are the following: (1) it provides a more effective way to characterize the semantic relationship between two GO terms by taking into account the multiple common ancestors involved, and (2) it can quickly detect all EICAs with time complexity of O(n), which is much more efficient than other methods based on disjunctive common ancestors. It is a promising alternative to multiple-inheritance-based methods for practical applications on large-scale datasets. The algorithm EISI was implemented in Matlab and is freely available from http://treaton.evai.pl/EISI/. Copyright © 2014 Elsevier B.V. All rights reserved.
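EISI's EICA screening step is not detailed in the abstract; the sketch below shows only the Resnik-style baseline it refines, scoring shared information as the maximum information content over all common ancestors in a toy DAG.

```python
def shared_information(term_a, term_b, parents, ic):
    """Information two ontology terms share, scored as the maximum
    information content over their common ancestors (Resnik-style).
    parents: {term: set of direct parents}; ic: {term: -log p(term)}."""
    def ancestors(t):
        seen, stack = {t}, [t]
        while stack:
            for p in parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen
    common = ancestors(term_a) & ancestors(term_b)
    return max((ic[t] for t in common), default=0.0)

# Toy DAG with multiple inheritance: t4 inherits from both t2 and t3.
parents = {"t4": {"t2", "t3"}, "t5": {"t3"}, "t2": {"t1"}, "t3": {"t1"}}
ic = {"t1": 0.0, "t2": 1.2, "t3": 0.9, "t4": 2.5, "t5": 2.0}
print(shared_information("t4", "t5", parents, ic))   # 0.9, via common ancestor t3
```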
Quantum Field Energy Sensor based on the Casimir Effect
NASA Astrophysics Data System (ADS)
Ludwig, Thorsten
The Casimir effect converts vacuum fluctuations into a measurable force. Some new energy technologies aim to convert these vacuum fluctuations into commonly used forms of energy such as electricity or mechanical motion. In order to study these energy technologies it is helpful to have sensors for the energy density of vacuum fluctuations. In today's scientific instrumentation and scanning microscope technologies there are several common methods to measure sub-nanonewton forces. While commercial atomic force microscopes (AFM) mostly work with silicon cantilevers, there are a large number of reports on the use of quartz tuning forks to obtain high-resolution force measurements or to create new force sensors. Both methods have certain advantages and disadvantages over the other. In this report the two methods are described and compared with regard to their usability for Casimir force measurements. Furthermore, a design for a quantum field energy sensor based on the Casimir force measurement is described. In addition, some general considerations on extracting energy from vacuum fluctuations are given.
ERIC Educational Resources Information Center
Bearss, Karen; Taylor, Christopher A.; Aman, Michael G.; Whittemore, Robin; Lecavalier, Luc; Miller, Judith; Pritchett, Jill; Green, Bryson; Scahill, Lawrence
2016-01-01
Anxiety is common in youth with autism spectrum disorder. Despite this common co-occurrence, studies targeting anxiety in this population are hindered by the under-developed state of measures in youth with autism spectrum disorder. Content validity (the extent to which an instrument measures the domain of interest) and an instrument's relevance to…
Basques, Bryce A; Long, William D; Golinvaux, Nicholas S; Bohl, Daniel D; Samuel, Andre M; Lukasiewicz, Adam M; Webb, Matthew L; Grauer, Jonathan N
2017-06-01
Multiple methods are used to measure proximal junctional angle (PJA) and diagnose proximal junctional kyphosis (PJK) after fusion for adolescent idiopathic scoliosis (AIS); however, there is no gold standard. Previous studies using the three most common measurement methods, upper-instrumented vertebra (UIV)+1, UIV+2, and UIV to T2, have minimized the difficulty in obtaining these measurements, and often exclude patients for whom measurements cannot be recorded. The purpose of this study is to assess the technical feasibility of measuring PJA and PJK in a series of AIS patients who have undergone posterior instrumented fusion and to assess the variability in results depending on the measurement technique used. A retrospective cohort study was carried out. There were 460 radiographs from 98 patients with AIS who underwent posterior spinal fusion at a single institution from 2006 through 2012. The outcomes for this study were the ability to obtain a PJA measurement for each method, the ability to diagnose PJK, and the inter- and intra-rater reliability of these measurements. Proximal junctional angle was determined by measuring the sagittal Cobb angle on preoperative and postoperative lateral upright films using the three most common methods (UIV+1, UIV+2, and UIV to T2). The ability to obtain a PJA measurement, the ability to assess PJK, and the total number of patients with a PJK diagnosis were tabulated for each method based on established definitions. Intra- and inter-rater reliability of each measurement method was assessed using intra-class correlation coefficients (ICCs). A total of 460 radiographs from 98 patients were evaluated. The average number of radiographs per patient was 5.3±1.7 (mean±standard deviation), with an average follow-up of 2.1 years (780±562 days). A PJA measurement was only readable on 13%-18% of preoperative films and 31%-49% of postoperative films (range based on measurement technique). Only 12%-31% of films were able to be assessed for PJK based on established definitions. The rate of PJK diagnosis ranged from 1% to 29%. Of these diagnoses, 21%-100% disappeared on at least one subsequent film for the given patient. ICC ranges for intra-rater and inter-rater reliability were 0.730-0.799 and 0.794-0.836, respectively. This study suggests significant limitations of the three most common methods of measuring and diagnosing PJK. The results of studies using these methods can be significantly affected by the exclusion of patients for whom measurements cannot be made and by the choice of measurement technique. Copyright © 2015 Elsevier Inc. All rights reserved.
Multivariate analysis of longitudinal rates of change.
Bryan, Matthew; Heagerty, Patrick J
2016-12-10
Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed in the literature. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, 'accelerated time' methods have been developed which assume that covariates rescale time in longitudinal models for disease progression. In this manuscript, we detail an alternative multivariate model formulation that directly structures longitudinal rates of change and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. Copyright © 2016 John Wiley & Sons, Ltd.
Comparison of infusion pumps calibration methods
NASA Astrophysics Data System (ADS)
Batista, Elsa; Godinho, Isabel; do Céu Ferreira, Maria; Furtado, Andreia; Lucas, Peter; Silva, Claudia
2017-12-01
Nowadays, several types of infusion pump are commonly used for drug delivery, such as syringe pumps and peristaltic pumps. These instruments present different measuring features and capacities according to their use and therapeutic application. In order to ensure the metrological traceability of this flow and volume measuring equipment, it is necessary to use suitable calibration methods and standards. Two different calibration methods can be used to determine the flow error of infusion pumps. One is the gravimetric method, considered a primary method and commonly used by National Metrology Institutes. The other calibration method, a secondary method, relies on an infusion device analyser (IDA) and is typically used by hospital maintenance offices. The suitability of the IDA calibration method was assessed by testing several infusion instruments at different flow rates using the gravimetric method. In addition, a measurement comparison between Portuguese Accredited Laboratories and hospital maintenance offices was performed under the coordination of the Portuguese Institute for Quality, the National Metrology Institute. The results obtained were directly related to the calibration method used and are presented in this paper. This work has been developed in the framework of the EURAMET projects EMRP MeDD and EMPIR 15SIP03.
Flow Control and Measurement in Electric Propulsion Systems: Towards an AIAA Reference Standard
NASA Technical Reports Server (NTRS)
Snyder, John Steven; Baldwin, Jeff; Frieman, Jason D.; Walker, Mitchell L. R.; Hicks, Nathan S.; Polzin, Kurt A.; Singleton, James T.
2013-01-01
Accurate control and measurement of propellant flow to a thruster is one of the most basic and fundamental requirements for operation of electric propulsion systems, whether they be in the laboratory or on flight spacecraft. Hence, it is important for the electric propulsion community to have a common understanding of typical methods for flow control and measurement. This paper addresses the topic of propellant flow primarily for the gaseous propellant systems which have dominated laboratory research and flight application over the last few decades, although other types of systems are also briefly discussed. While most flight systems have employed a type of pressure-fed flow restrictor for flow control, both thermal-based and pressure-based mass flow controllers are routinely used in laboratories. Fundamentals and theory of operation of these types of controllers are presented, along with sources of uncertainty associated with their use. Methods of calibration and recommendations for calibration processes are presented. Finally, details of uncertainty calculations are presented for some common calibration methods and for the linear fits to calibration data that are commonly used.
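As an illustration of the linear fits to calibration data mentioned above, here is a minimal Python sketch using ordinary least squares with parameter uncertainties (the flow values are hypothetical, not from the paper):

```python
import numpy as np

# Hypothetical calibration data for a thermal mass flow controller:
# commanded setpoint (sccm) vs. flow measured by a primary standard.
setpoint = np.array([5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
measured = np.array([5.1, 10.1, 20.3, 40.2, 60.6, 80.5, 100.9])

# Ordinary least-squares linear fit; cov=True returns the parameter
# covariance matrix, from which 1-sigma uncertainties follow.
coeffs, cov = np.polyfit(setpoint, measured, deg=1, cov=True)
slope, intercept = coeffs
slope_err, intercept_err = np.sqrt(np.diag(cov))

# The residual standard deviation characterizes scatter about the fit,
# one common contribution to a calibration uncertainty budget.
residuals = measured - np.polyval(coeffs, setpoint)
resid_sd = residuals.std(ddof=2)

print(f"slope     = {slope:.4f} +/- {slope_err:.4f}")
print(f"intercept = {intercept:.3f} +/- {intercept_err:.3f} sccm")
print(f"residual SD = {resid_sd:.3f} sccm")
```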
Onset patterns in autism: Variation across informants, methods, and timing.
Ozonoff, Sally; Gangi, Devon; Hanzel, Elise P; Hill, Alesha; Hill, Monique M; Miller, Meghan; Schwichtenberg, A J; Steinfeld, Mary Beth; Parikh, Chandni; Iosif, Ana-Maria
2018-05-01
While previous studies suggested that regressive forms of onset were not common in autism spectrum disorder (ASD), more recent investigations suggest that the rates are quite high and may be under-reported using certain methods. The current study undertook a systematic investigation of how rates of regression differed by measurement method. Infants with (n = 147) and without a family history of ASD (n = 83) were seen prospectively for up to 7 visits in the first three years of life. Reports of symptom onset were collected using four measures that systematically varied the informant (examiner vs. parent), the decision type (categorical [regression absent or present] vs. dimensional [frequency of social behaviors]), and the timing of the assessment (retrospective vs. prospective). Latent class growth models were used to classify individual trajectories to see whether regressive onset patterns were infrequent or widespread within the ASD group. A majority of the sample was classified as having a regressive onset using either examiner (88%) or parent (69%) prospective dimensional ratings. Rates of regression were much lower using retrospective or categorical measures (from 29 to 47%). Agreement among different measurement methods was low. Declining trajectories of development, consistent with a regressive onset pattern, are common in children with ASD and may be more the rule than the exception. The accuracy of widely used methods of measuring onset is questionable and the present findings argue against their widespread use. Autism Res 2018, 11: 788-797. © 2018 International Society for Autism Research, Wiley Periodicals, Inc. This study examines different ways of measuring the onset of symptoms in autism spectrum disorder (ASD). The present findings suggest that declining developmental skills, consistent with a regressive onset pattern, are common in children with ASD and may be more the rule than the exception. The results question the accuracy of widely used methods of measuring symptom onset and argue against their widespread use. © 2018 International Society for Autism Research, Wiley Periodicals, Inc.
Schilling, Katherine; Applegate, Rachel
2012-10-01
Libraries are increasingly called upon to demonstrate student learning outcomes and the tangible benefits of library educational programs. This study reviewed and compared the efficacy of traditionally used measures for assessing library instruction, examining the benefits and drawbacks of assessment measures and exploring the extent to which knowledge, attitudes, and behaviors actually paralleled demonstrated skill levels. An overview of recent literature on the evaluation of information literacy education addressed these questions: (1) What evaluation measures are commonly used for evaluating library instruction? (2) What are the pros and cons of popular evaluation measures? (3) What are the relationships between measures of skills versus measures of attitudes and behavior? Research outcomes were used to identify relationships between measures of attitudes, behaviors, and skills, which are typically gathered via attitudinal surveys, written skills tests, or graded exercises. Results provide useful information about the efficacy of instructional evaluation methods, including showing significant disparities between attitudes, skills, and information usage behaviors. This information can be used by librarians to implement the most appropriate evaluation methods for measuring important variables that accurately demonstrate students' attitudes, behaviors, or skills.
Applicability of common measures in multifocus image fusion comparison
NASA Astrophysics Data System (ADS)
Vajgl, Marek
2017-11-01
Image fusion is an image processing area aimed at fusing multiple input images to achieve an output image that is in some sense better than each of the input ones. In the case of "multifocus fusion", the input images capture the same scene but differ in focus distance. The aim is to obtain an image that is sharp in all its areas. There are several different approaches and methods used to solve this problem; however, it is a common question which one is best. This work describes research covering the field of common measures, asking whether some of them can be used as a quality measure for evaluating the fusion result.
Microsiemens or Milligrams: Measures of Ionic Mixtures ...
In December of 2016, EPA released the Draft Field-Based Methods for Developing Aquatic Life Criteria for Specific Conductivity for public comment. Once final, states and authorized tribes may use these methods to derive field-based ecoregional ambient Aquatic Life Ambient Water Quality Criteria (AWQC) for specific conductivity (SC) in flowing waters. The methods provide flexible approaches for developing science-based SC criteria that reflect ecoregional or state-specific factors. The concentration of a dissolved salt mixture can be measured in a number of ways, including measurement of total dissolved solids, freezing point depression, refractive index, density, or the sum of the concentrations of individually measured ions. For the draft method, SC was selected as the measure because SC is a measure of all ions in the mixture; the measurement technology is fast, inexpensive, and accurate; and it measures only dissolved ions. When developing water quality criteria for major ions, some stakeholders may prefer to identify the ionic constituents as a measure of exposure instead of SC. A field-based method was used to derive example chronic and acute water quality criteria for SC and for a common mixture of two anions (bicarbonate plus sulfate, [HCO3−] + [SO42−] in mg/L) that represents common ion mixtures in streams. These two anions are sufficient to model the ion mixture and SC (R2 = 0.94). Using [HCO3−] + [SO42−] does not imply that these two anions are the
Tests of Measurement Invariance without Subgroups: A Generalization of Classical Methods
ERIC Educational Resources Information Center
Merkle, Edgar C.; Zeileis, Achim
2013-01-01
The issue of measurement invariance commonly arises in factor-analytic contexts, with methods for assessment including likelihood ratio tests, Lagrange multiplier tests, and Wald tests. These tests all require advance definition of the number of groups, group membership, and offending model parameters. In this paper, we study tests of measurement…
A Structural and Correlational Analysis of Two Common Measures of Personal Epistemology
ERIC Educational Resources Information Center
Laster, Bonnie Bost
2010-01-01
Scope and Method of Study: The current inquiry is a factor analytic study which utilizes first and second order factor analytic methods to examine the internal structures of two measurements of personal epistemological beliefs: the Schommer Epistemological Questionnaire (SEQ) and Epistemic Belief Inventory (EBI). The study also examines the…
[The measurement of data quality in censuses of population and housing].
1980-01-01
The determination of data quality in population and housing censuses is discussed. Principal types of errors commonly found in census data are reviewed, and the parameters used to evaluate data quality are described. Various methods for measuring data quality are outlined and possible applications of the methods are illustrated using Canadian examples
Alternative Methods in the Evaluation of School District Cash Management Programs.
ERIC Educational Resources Information Center
Dembowski, Frederick L.
1980-01-01
Empirically evaluates three measures of effectiveness of school district cash management: the rate of return method in common use and two new measures--efficiency rating and Net Present Value (NPV). The NPV approach allows examination of efficiency and provides a framework for evaluating other areas of educational policy. (Author/IRT)
Emotion Recognition Ability: A Multimethod-Multitrait Study.
ERIC Educational Resources Information Center
Gaines, Margie; And Others
A common paradigm in measuring the ability to recognize facial expressions of emotion is to present photographs of facial expressions and to ask subjects to identify the emotion. The Affect Blend Test (ABT) uses this method of assessment and is scored for accuracy on specific affects as well as total accuracy. Another method of measuring affect…
Pitfalls in Aggregating Performance Measures in Higher Education
ERIC Educational Resources Information Center
Williams, Ross; de Rassenfosse, Gaétan
2016-01-01
National and international rankings of universities are now an accepted part of the higher education landscape. Rankings aggregate different performance measures into a single scale and therefore depend on the methods and weights used to aggregate. The most common method is to scale each variable relative to the highest performing entity prior to…
Shallow Reflection Method for Water-Filled Void Detection and Characterization
NASA Astrophysics Data System (ADS)
Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Hazreek, Z. A. M.; Mohammad, A. H.; Izzaty, R. A.
2018-04-01
Shallow investigation is crucial for characterizing the subsurface voids commonly encountered in civil engineering, and one commonly used technique is the seismic-reflection method. An assessment of the effectiveness of such an approach is critical to determine whether the quality of the works meets the prescribed requirements. Conventional quality testing suffers from limitations, including limited coverage (in both area and depth) and problems with resolution quality. Traditionally, quality assurance measurements use laboratory and in-situ invasive and destructive tests. However, geophysical approaches, which are typically non-invasive and non-destructive, offer a method by which improvement in detection can be measured cost-effectively. Among these, seismic reflection has proved useful for assessing void characteristics; this paper evaluates the application of the shallow seismic-reflection method to characterizing the properties of a water-filled void at 0.34 m depth, specifically its detection and characterization using 2-dimensional tomography.
Multivariate Analysis of Longitudinal Rates of Change
Bryan, Matthew; Heagerty, Patrick J.
2016-01-01
Longitudinal data allow direct comparison of the change in patient outcomes associated with treatment or exposure. Frequently, several longitudinal measures are collected that either reflect a common underlying health status, or characterize processes that are influenced in a similar way by covariates such as exposure or demographic characteristics. Statistical methods that can combine multivariate response variables into common measures of covariate effects have been proposed by Roy and Lin [1]; Proust-Lima, Letenneur and Jacqmin-Gadda [2]; and Gray and Brookmeyer [3] among others. Current methods for characterizing the relationship between covariates and the rate of change in multivariate outcomes are limited to select models. For example, Gray and Brookmeyer [3] introduce an “accelerated time” method which assumes that covariates rescale time in longitudinal models for disease progression. In this manuscript we detail an alternative multivariate model formulation that directly structures longitudinal rates of change, and that permits a common covariate effect across multiple outcomes. We detail maximum likelihood estimation for a multivariate longitudinal mixed model. We show via asymptotic calculations the potential gain in power that may be achieved with a common analysis of multiple outcomes. We apply the proposed methods to the analysis of a trivariate outcome for infant growth and compare rates of change for HIV infected and uninfected infants. PMID:27417129
Methods for Quantitative Creatinine Determination.
Moore, John F; Sharer, J Daniel
2017-04-06
Reliable measurement of creatinine is necessary to assess kidney function, and also to quantitate drug levels and diagnostic compounds in urine samples. The most commonly used methods are based on the Jaffe principle of alkaline creatinine-picric acid complex color formation. However, other compounds commonly found in serum and urine may interfere with Jaffe creatinine measurements. Therefore, many laboratories have made modifications to the basic method to remove or account for these interfering substances. This appendix will summarize the basic Jaffe method, as well as a modified, automated version. Also described is a high performance liquid chromatography (HPLC) method that separates creatinine from contaminants prior to direct quantification by UV absorption. Lastly, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method is described that uses stable isotope dilution to reliably quantify creatinine in any sample. This last approach has been recommended by experts in the field as a means to standardize all quantitative creatinine methods against an accepted reference. © 2017 by John Wiley & Sons, Inc.
Thirty Years of Nonparametric Item Response Theory.
ERIC Educational Resources Information Center
Molenaar, Ivo W.
2001-01-01
Discusses relationships between a mathematical measurement model and its real-world applications. Makes a distinction between large-scale data matrices commonly found in educational measurement and smaller matrices found in attitude and personality measurement. Also evaluates nonparametric methods for estimating item response functions and…
Robert E. Keane; Stacy A. Drury; Eva C. Karau; Paul F. Hessburg; Keith M. Reynolds
2010-01-01
This paper presents modeling methods for mapping fire hazard and fire risk using a research model called FIREHARM (FIRE Hazard and Risk Model) that computes common measures of fire behavior, fire danger, and fire effects to spatially portray fire hazard over space. FIREHARM can compute a measure of risk associated with the distribution of these measures over time using...
Study on AC loss measurements of HTS power cable for standardizing
NASA Astrophysics Data System (ADS)
Mukoyama, Shinichi; Amemiya, Naoyuki; Watanabe, Kazuo; Iijima, Yasuhiro; Mido, Nobuhiro; Masuda, Takao; Morimura, Toshiya; Oya, Masayoshi; Nakano, Tetsutaro; Yamamoto, Kiyoshi
2017-09-01
High-temperature superconducting power cables (HTS cables) have been developed for more than 20 years. In addition to the cable developments, test methods for HTS cables have been discussed and proposed in many laboratories and companies. Recently, there has been a need to standardize these test methods and make them common worldwide. CIGRE formed a working group (B1-31) to discuss test methods for HTS cables as power cables, and published a recommendation on test methods. Additionally, IEC TC20 submitted a New Work Item Proposal (NP) based on the CIGRE recommendation this year, and IEC TC20 and IEC TC90 started standardization work on the testing of HTS AC cables. However, the individual test methods used to measure the performance of HTS cables have not yet been established as common methods worldwide. AC loss is one of the most important properties for disseminating low-loss, economically efficient HTS cables around the world. We aim to establish a rational and highly accurate method for AC loss measurement. Japan is in a leading position in AC loss studies, because Japanese researchers have studied AC loss both technically and scientifically and have also developed effective technologies for AC loss reduction. The Japanese domestic commission of TC90 formed a working team to discuss AC loss measurement methods, with the ultimate aim of an international standard. This paper reports on the AC loss measurement of two types of HTS conductor: an HTS conductor without an HTS shield and an HTS conductor with an HTS shield. The electrical method is suggested for the AC loss measurement.
Evolving forecasting classifications and applications in health forecasting
Soyiri, Ireneous N; Reidpath, Daniel D
2012-01-01
Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing health forecasts and for measuring their accuracy and validity are commonly not defined, although they are usually adapted forms of statistical procedures. This review identifies previous typologies used in classifying the forecasting methods commonly used in forecasting health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on the discrepancies in the modes of validation. PMID:22615533
Biases of chamber methods for measuring soil CO2 efflux demonstrated with a laboratory apparatus.
S. Mark Nay; Kim G. Mattson; Bernard T. Bormann
1994-01-01
Investigators have historically measured soil CO2 efflux as an indicator of soil microbial and root activity and more recently in calculations of carbon budgets. The most common methods estimate CO2 efflux by placing a chamber over the soil surface and quantifying the amount of CO2 entering the...
Currently there are no EPA reference sampling methods that have been promulgated for measuring Hg from coal combustion sources. EPA Method 29 is most commonly applied. The ASTM Ontario Hydro Draft Method for measuring oxidized, elemental, particulate-bound and total Hg is now und...
NASA Technical Reports Server (NTRS)
Snyder, G. Jeffrey (Inventor)
2015-01-01
A high temperature Seebeck coefficient measurement apparatus and method with various features to minimize typical sources of errors is described. Common sources of temperature and voltage measurement errors which may impact accurate measurement are identified and reduced. Applying the identified principles, a high temperature Seebeck measurement apparatus and method employing a uniaxial, four-point geometry is described to operate from room temperature up to 1300K. These techniques for non-destructive Seebeck coefficient measurements are simple to operate, and are suitable for bulk samples with a broad range of physical types and shapes.
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
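As a worked illustration of the statistic described above, here is a minimal Python sketch computing the within-subject standard deviation and the repeatability from duplicate measurements (the data are hypothetical):

```python
import numpy as np

def within_subject_sd(measurements):
    """Within-subject SD from repeated measurements.

    measurements: shape (n_subjects, n_repeats). The within-subject
    variance is the mean of the per-subject sample variances.
    """
    m = np.asarray(measurements, dtype=float)
    return np.sqrt(np.mean(np.var(m, axis=1, ddof=1)))

# Hypothetical: 4 subjects, duplicate measurements of some quantity.
data = [[12.1, 12.4],
        [15.0, 14.6],
        [ 9.8, 10.1],
        [13.3, 13.0]]
sw = within_subject_sd(data)
print(f"within-subject SD = {sw:.3f}")
print(f"repeatability     = {2.77 * sw:.3f}")  # 2.77 = sqrt(2) * 1.96
```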
Karasz, Alison; Patel, Viraj; Kabita, Mahbhooba; Shimu, Parvin
2013-01-01
Although common mental disorder (CMD) is highly prevalent among South Asian immigrant women, they rarely seek mental health treatment. This may be owing in part to the lack of conceptual synchrony between medical models of mental disorder and the social models of distress common in South Asian communities. Furthermore, common mental health screening and diagnostic measures may not adequately capture distress in this group. Community-based participatory research (CBPR) is ideally suited to help address measurement issues in CMD as well as to develop culturally appropriate treatment models. To use participatory methods to identify an appropriate, culturally specific mental health syndrome and develop an instrument to measure this syndrome. We formed a partnership between researchers, clinicians, and community members. The partnership selected a culturally specific model of emotional distress/illness, "tension," as a focus for further study. Partners developed a scale to measure Tension and tested the new scale on 162 Bangladeshi immigrant women living in the Bronx. The 24-item "Tension Scale" had high internal consistency (α = 0.83). On bivariate analysis, the scale significantly correlated in the expected direction with depression as measured by the Patient Health Questionnaire (PHQ-2), age, education, self-rated health, having seen a physician in the past year, and other variables. Using participatory techniques, we created a new measure designed to assess CMD in an isolated immigrant group. The new measure shows excellent psychometric properties and will be helpful in the implementation of a community-based, culturally synchronous intervention for depression. We describe a useful strategy for the rapid development and field testing of culturally appropriate measures of mental distress and disorder.
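The internal-consistency statistic reported above (Cronbach's α) has a standard closed form; here is a minimal Python sketch on hypothetical item responses (not the Tension Scale data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.

    items: shape (n_respondents, k_items).
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score).
    """
    x = np.asarray(items, dtype=float)
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical responses: 6 respondents x 4 items on a 0-3 scale.
resp = [[0, 1, 1, 0],
        [2, 2, 3, 2],
        [1, 1, 2, 1],
        [3, 2, 3, 3],
        [0, 0, 1, 0],
        [2, 3, 2, 2]]
print(f"alpha = {cronbach_alpha(resp):.2f}")
```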
NASA Astrophysics Data System (ADS)
Marshak, William P.; Darkow, David J.; Wesler, Mary M.; Fix, Edward L.
2000-08-01
Computer-based display designers have more sensory modes and more dimensions within each sensory modality with which to encode information in a user interface than ever before. This elaboration of information presentation has made measuring display/format effectiveness and predicting display/format performance extremely difficult. A multivariate method has been devised which isolates critical information, physically measures its signal strength, and compares it with other elements of the display, which act like background noise. This Common Metric relates signal-to-noise ratios (SNRs) within each stimulus dimension; by then combining SNRs among display modes, dimensions, and cognitive factors, it can predict display format effectiveness. Examples with their Common Metric assessment and validation against performance will be presented, along with the derivation of the metric. Implications of the Common Metric for display design and evaluation will be discussed.
Methods for Human Dehydration Measurement
NASA Astrophysics Data System (ADS)
Trenz, Florian; Weigel, Robert; Hagelauer, Amelie
2018-03-01
The aim of this article is to give a broad overview of current methods for the identification and quantification of the human dehydration level. Starting from the most common clinical setups, including vital parameters and patients' general appearance, more quantifiable results from chemical laboratory and electromagnetic measurement methods are reviewed. Different analysis methods throughout the electromagnetic spectrum, ranging from direct current (DC) conductivity measurements up to neutron activation analysis (NAA), are discussed on the basis of published results. Finally, promising technologies that allow a dehydration assessment system to be integrated in a compact and portable way are highlighted.
Aging Studies in Drosophila melanogaster
Sun, Yaning; Yolitz, Jason; Wang, Cecilia; Spangler, Edward; Zhan, Ming; Zou, Sige
2015-01-01
Drosophila is a genetically tractable system ideal for investigating the mechanisms of aging and developing interventions for promoting healthy aging. Here we describe methods commonly used in Drosophila aging research. These include basic approaches for preparation of diets and measurements of lifespan, food intake and reproductive output. We also describe some commonly used assays to measure changes in physiological and behavioral functions of Drosophila in aging, such as stress resistance and locomotor activity. PMID:23929099
NASA Technical Reports Server (NTRS)
Lewandowski, W.
1994-01-01
The introduction of the GPS common-view method at the beginning of the 1980's led to an immediate and dramatic improvement of international time comparisons. Since then, further progress brought the precision and accuracy of GPS common-view intercontinental time transfer from tens of nanoseconds to a few nanoseconds, even with SA activated. This achievement was made possible by the use of the following: ultra-precise ground antenna coordinates, post-processed precise ephemerides, double-frequency measurements of ionosphere, and appropriate international coordination and standardization. This paper reviews developments and applications of the GPS common-view method during the last decade and comments on possible future improvements whose objective is to attain sub-nanosecond uncertainty.
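As a numerical illustration of the common-view principle described above (the nanosecond values are made up, not measurement data):

```python
# Minimal illustration of the GPS common-view principle. Each station
# measures its local clock against the same satellite at the same
# scheduled epoch; differencing the two measurements cancels the
# satellite clock error entirely (including intentional dither such as
# SA), leaving clock_a - clock_b plus residual propagation and
# receiver errors.

clock_a_minus_gps = 143.2  # ns, measured at station A
clock_b_minus_gps = 118.7  # ns, measured at station B (same satellite, same epoch)

clock_a_minus_b = clock_a_minus_gps - clock_b_minus_gps
print(f"A - B = {clock_a_minus_b:.1f} ns")  # satellite clock term cancels
```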
Objective comparison of particle tracking methods.
Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R; Godinez, William J; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E G; Jaldén, Joakim; Blau, Helen M; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P; Dan, Han-Wei; Tsai, Yuh-Show; Ortiz de Solórzano, Carlos; Olivo-Marin, Jean-Christophe; Meijering, Erik
2014-03-01
Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Because manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized an open competition in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to notable practical conclusions for users and developers.
A new accuracy measure based on bounded relative error for time series forecasting
Twycross, Jamie; Garibaldi, Jonathan M.
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred. PMID:28339480
A new accuracy measure based on bounded relative error for time series forecasting.
Chen, Chao; Twycross, Jamie; Garibaldi, Jonathan M
2017-01-01
Many accuracy measures have been proposed in the past for time series forecasting comparisons. However, many of these measures suffer from one or more issues such as poor resistance to outliers and scale dependence. In this paper, while summarising commonly used accuracy measures, a special review is made on the symmetric mean absolute percentage error. Moreover, a new accuracy measure called the Unscaled Mean Bounded Relative Absolute Error (UMBRAE), which combines the best features of various alternative measures, is proposed to address the common issues of existing measures. A comparative evaluation on the proposed and related measures has been made with both synthetic and real-world data. The results indicate that the proposed measure, with user selectable benchmark, performs as well as or better than other measures on selected criteria. Though it has been commonly accepted that there is no single best accuracy measure, we suggest that UMBRAE could be a good choice to evaluate forecasting methods, especially for cases where measures based on geometric mean of relative errors, such as the geometric mean relative absolute error, are preferred.
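As an illustration, here is a minimal Python sketch of UMBRAE following the definitions given in the paper, with a naive random-walk forecast as the benchmark (the series values are hypothetical):

```python
import numpy as np

def umbrae(actual, forecast, benchmark):
    """Unscaled Mean Bounded Relative Absolute Error (UMBRAE): each error
    is bounded relative to a benchmark error, BRAE_t = |e_t| / (|e_t| + |e*_t|),
    averaged to give MBRAE, then unscaled as UMBRAE = MBRAE / (1 - MBRAE)."""
    e = np.abs(np.asarray(actual) - np.asarray(forecast))
    e_star = np.abs(np.asarray(actual) - np.asarray(benchmark))
    brae = e / (e + e_star)  # bounded in [0, 1]
    # (points where both errors are exactly zero would need special handling)
    mbrae = brae.mean()
    return mbrae / (1.0 - mbrae)

# Hypothetical series; the naive (random-walk) forecast is a common
# benchmark choice: benchmark[t] = actual[t-1].
actual   = np.array([112.0, 118.0, 132.0, 129.0, 121.0])
forecast = np.array([110.0, 120.0, 130.0, 131.0, 119.0])
naive    = np.array([108.0, 112.0, 118.0, 132.0, 129.0])
print(f"UMBRAE = {umbrae(actual, forecast, naive):.3f}")  # < 1 beats benchmark
```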
ERIC Educational Resources Information Center
Rhemtulla, Mijke; Borsboom, Denny; van Bork, Riet
2017-01-01
In his focus article, "Rethinking Traditional Methods of Survey Validation" in this v15 n2 2017 issue of "Journal Measurement: Interdisciplinary Research and Perspectives," Andrew Maul writes that it is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes,…
Liegl, Gregor; Wahl, Inka; Berghöfer, Anne; Nolte, Sandra; Pieh, Christoph; Rose, Matthias; Fischer, Felix
2016-03-01
To investigate the validity of a common depression metric in independent samples. We applied a common metrics approach based on item-response theory for measuring depression to four German-speaking samples that completed the Patient Health Questionnaire (PHQ-9). We compared the PHQ item parameters reported for this common metric to reestimated item parameters derived from fitting a generalized partial credit model solely to the PHQ-9 items. We calibrated the new model on the same scale as the common metric using two approaches (estimation with shifted prior and Stocking-Lord linking). By fitting a mixed-effects model and using Bland-Altman plots, we investigated the agreement between latent depression scores resulting from the different estimation models. We found different item parameters across samples and estimation methods. Although differences in latent depression scores between different estimation methods were statistically significant, these were clinically irrelevant. Our findings provide evidence that it is possible to estimate latent depression scores by using the item parameters from a common metric instead of reestimating and linking a model. The use of common metric parameters is simple, for example, using a Web application (http://www.common-metrics.org) and offers a long-term perspective to improve the comparability of patient-reported outcome measures. Copyright © 2016 Elsevier Inc. All rights reserved.
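The Bland-Altman agreement analysis mentioned above reduces to a mean difference and 95% limits of agreement; here is a minimal Python sketch on hypothetical latent scores (not the study data):

```python
import numpy as np

def bland_altman_limits(scores_a, scores_b):
    """Bland-Altman agreement statistics for paired scores from two
    estimation methods: mean difference (bias) and 95% limits of
    agreement, bias +/- 1.96 * SD of the differences."""
    a, b = np.asarray(scores_a, float), np.asarray(scores_b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

# Hypothetical latent depression scores (theta) for 6 respondents,
# estimated once with fixed common-metric item parameters and once
# with freshly estimated, linked parameters.
theta_common = [-0.8, -0.1, 0.4, 1.2, 0.0, -1.5]
theta_linked = [-0.7, -0.2, 0.5, 1.1, 0.1, -1.4]
bias, lo, hi = bland_altman_limits(theta_common, theta_linked)
print(f"bias = {bias:+.3f}, 95% limits of agreement: [{lo:+.3f}, {hi:+.3f}]")
```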
ERIC Educational Resources Information Center
Hsiao, Yu-Yu; Kwok, Oi-Man; Lai, Mark H. C.
2018-01-01
Path models with observed composites based on multiple items (e.g., mean or sum score of the items) are commonly used to test interaction effects. Under this practice, researchers generally assume that the observed composites are measured without errors. In this study, we reviewed and evaluated two alternative methods within the structural…
Common Mental Disorders among Occupational Groups: Contributions of the Latent Class Model
Martins Carvalho, Fernando; de Araújo, Tânia Maria
2016-01-01
Background. The Self-Reporting Questionnaire (SRQ-20) is widely used for evaluating common mental disorders. However, few studies have evaluated the SRQ-20 measurements performance in occupational groups. This study aimed to describe manifestation patterns of common mental disorders symptoms among workers populations, by using latent class analysis. Methods. Data derived from 9,959 Brazilian workers, obtained from four cross-sectional studies that used similar methodology, among groups of informal workers, teachers, healthcare workers, and urban workers. Common mental disorders were measured by using SRQ-20. Latent class analysis was performed on each database separately. Results. Three classes of symptoms were confirmed in the occupational categories investigated. In all studies, class I met better criteria for suspicion of common mental disorders. Class II discriminated workers with intermediate probability of answers to the items belonging to anxiety, sadness, and energy decrease that configure common mental disorders. Class III was composed of subgroups of workers with low probability to respond positively to questions for screening common mental disorders. Conclusions. Three patterns of symptoms of common mental disorders were identified in the occupational groups investigated, ranging from distinctive features to low probabilities of occurrence. The SRQ-20 measurements showed stability in capturing nonpsychotic symptoms. PMID:27630999
Conklin, Annalijn; Nolte, Ellen; Vrijhoef, Hubertus
2013-01-01
An overview was produced of approaches currently used to evaluate chronic disease management in selected European countries. The study aims to describe the methods and metrics used in Europe as a first step to help advance the methodological basis for their assessment. A common template for collection of evaluation methods and performance measures was sent to key informants in twelve European countries; responses were summarized in tables based on template evaluation categories. Extracted data were descriptively analyzed. Approaches to the evaluation of chronic disease management vary widely in objectives, designs, metrics, observation period, and data collection methods. Half of the reported studies used noncontrolled designs. The majority measure clinical process measures, patient behavior and satisfaction, cost and utilization; several also used a range of structural indicators. Effects are usually observed over 1 or 3 years on patient populations with a single, commonly prevalent, chronic disease. There is wide variation within and between European countries in approaches to evaluating chronic disease management, in their objectives, designs, indicators, target audiences, and actors involved. This study is the first extensive, international overview of the area reported in the literature.
Student Perceptions of Classroom Achievement Goal Structure: Is It Appropriate to Aggregate?
ERIC Educational Resources Information Center
Lam, Arena C.; Ruzek, Erik A.; Schenke, Katerina; Conley, AnneMarie M.; Karabenick, Stuart A.
2015-01-01
Student reports are a common approach to characterizing how students experience their classrooms. We used a recently developed method--multilevel confirmatory factor analysis--to determine whether commonly employed measures of achievement goal structure constructs (mastery and performance) typically verified at the student level can be verified at…
Bennett, Derrick A; Landry, Denise; Little, Julian; Minelli, Cosetta
2017-09-19
Several statistical approaches have been proposed to assess and correct for exposure measurement error. We aimed to provide a critical overview of the most common approaches used in nutritional epidemiology. MEDLINE, EMBASE, BIOSIS and CINAHL were searched for reports published in English up to May 2016 in order to ascertain studies that described methods aimed to quantify and/or correct for measurement error for a continuous exposure in nutritional epidemiology using a calibration study. We identified 126 studies, 43 of which described statistical methods and 83 that applied any of these methods to a real dataset. The statistical approaches in the eligible studies were grouped into: a) approaches to quantify the relationship between different dietary assessment instruments and "true intake", which were mostly based on correlation analysis and the method of triads; b) approaches to adjust point and interval estimates of diet-disease associations for measurement error, mostly based on regression calibration analysis and its extensions. Two approaches (multiple imputation and moment reconstruction) were identified that can deal with differential measurement error. For regression calibration, the most common approach to correct for measurement error used in nutritional epidemiology, it is crucial to ensure that its assumptions and requirements are fully met. Analyses that investigate the impact of departures from the classical measurement error model on regression calibration estimates can be helpful to researchers in interpreting their findings. With regard to the possible use of alternative methods when regression calibration is not appropriate, the choice of method should depend on the measurement error model assumed, the availability of suitable calibration study data and the potential for bias due to violation of the classical measurement error model assumptions. On the basis of this review, we provide some practical advice for the use of methods to assess and adjust for measurement error in nutritional epidemiology.
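As an illustration of regression calibration, the most common correction approach identified above, here is a minimal simulated Python sketch under the classical measurement error model (all values are simulated, not from any study):

```python
import numpy as np

# Minimal regression-calibration sketch: the error-prone intake
# W = X + U is calibrated against a reference measure X available in a
# calibration substudy; the outcome model then uses E[X | W] in place of W.
rng = np.random.default_rng(1)
n, n_cal = 2000, 300
x = rng.normal(50, 10, n)            # true intake (unobserved in main study)
w = x + rng.normal(0, 8, n)          # error-prone instrument (e.g., an FFQ)
y = 0.05 * x + rng.normal(0, 2, n)   # continuous health outcome

# Step 1: in the calibration substudy, regress the reference measure on W.
cal = slice(0, n_cal)
lam, a0 = np.polyfit(w[cal], x[cal], 1)   # E[X|W] ~ a0 + lam * W

# Step 2: replace W by its calibrated value in the outcome model.
x_hat = a0 + lam * w
beta_naive = np.polyfit(w, y, 1)[0]   # attenuated by measurement error
beta_rc = np.polyfit(x_hat, y, 1)[0]  # corrected estimate
print(f"naive beta = {beta_naive:.4f}, "
      f"regression-calibrated beta = {beta_rc:.4f} (true 0.05)")
```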
Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error
Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee
2017-01-01
Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146
Using Caspar Creek flow records to test peak flow estimation methods applicable to crossing design
Peter H. Cafferata; Leslie M. Reid
2017-01-01
Long-term flow records from sub-watersheds in the Caspar Creek Experimental Watersheds were used to test the accuracy of four methods commonly used to estimate peak flows in small forested watersheds: the Rational Method, the updated USGS Magnitude and Frequency Method, flow transference methods, and the NRCS curve number method. Comparison of measured and calculated...
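For reference, the first of the methods listed above, the Rational Method, is a one-line formula; here is a minimal Python sketch with hypothetical watershed values (not Caspar Creek data):

```python
def rational_method_peak_flow(c, i, a):
    """Rational Method peak flow estimate, Q = C * i * A.

    In U.S. customary units: c is the dimensionless runoff coefficient,
    i is rainfall intensity (in/hr) for a storm duration equal to the
    time of concentration, and a is drainage area (acres); Q is then in
    cfs (the unit conversion factor is ~1.008, conventionally taken as 1).
    """
    return c * i * a

# Hypothetical small forested watershed: C = 0.25, 10-year design
# intensity of 1.5 in/hr, 120-acre drainage area above the crossing.
q = rational_method_peak_flow(0.25, 1.5, 120.0)
print(f"design peak flow ~= {q:.0f} cfs")
```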
Incremental and Predictive Utility of Formative Assessment Methods of Reading Comprehension
ERIC Educational Resources Information Center
Marcotte, Amanda M.; Hintze, John M.
2009-01-01
Formative assessment measures are commonly used in schools to assess reading and to design instruction accordingly. The purpose of this research was to investigate the incremental and concurrent validity of formative assessment measures of reading comprehension. It was hypothesized that formative measures of reading comprehension would contribute…
Quantitative Measures of Sustainability in Institutions of Higher Education
ERIC Educational Resources Information Center
Klein-Banai, Cynthia
2010-01-01
The measurement of sustainability for institutions, businesses, regions, and nations is a complex undertaking. There are many disciplinary approaches but sustainability is innately interdisciplinary and the challenge is to apply these approaches in a way that can best measure progress towards sustainability. The most common methods used by…
Xu, Anping; Chen, Weidong; Xia, Yong; Zhou, Yu; Ji, Ling
2018-04-07
HbA1c is a widely used biomarker for diabetes mellitus management. Here, we evaluated the accuracy of six methods for determining HbA1c values in Chinese patients with common α- and β-globin chain variants in China. Blood samples from normal subjects and individuals exhibiting hemoglobin variants were analyzed for HbA1c, using Sebia Capillarys 2 Flex Piercing (C2FP), Bio-Rad Variant II Turbo 2.0, Tosoh HLC-723 G8 (ver. 5.24), Arkray ADAMS A1c HA-8180V fast mode, Cobas c501 and Trinity Ultra2 systems. DNA sequencing revealed five common β-globin chain variants and three common α-globin chain variants. The most common variant was Hb E, followed by Hb New York, Hb J-Bangkok, Hb G-Coushatta, Hb Q-Thailand, Hb G-Honolulu, Hb Ube-2 and Hb G-Taipei. Variant II Turbo 2.0, Ultra2 and Cobas c501 showed good agreement with C2FP for most samples with variants. HLC-723 G8 yielded no HbA1c values for Hb J-Bangkok, Hb Q-Thailand and Hb G-Honolulu. Samples with Hb E, Hb G-Coushatta, Hb G-Taipei and Hb Ube-2 produced significant negative biases for HLC-723 G8. HA-8180V showed statistically significant differences for Hb E, Hb G-Coushatta, Hb G-Taipei, Hb Q-Thailand and Hb G-Honolulu. HA-8180V yielded no HbA1c values for Hb J-Bangkok. All methods showed good agreement for samples with Hb New York. Some common hemoglobin variants can interfere with HbA1c determination by the most popular methods in China.
George, David L; Smith, Michael J; Draugalis, JoLaine R; Tolma, Eleni L; Keast, Shellie L; Wilson, Justin B
2018-03-01
The Center for Medicare and Medicaid Services (CMS) created the Star Rating system based on multiple measures that indicate the overall quality of health plans. Community pharmacists can impact certain Star Ratings measure scores through medication adherence and patient safety interventions. To explore methods, needs, and workflow issues of community pharmacists to improve CMS Star Ratings measures. Think-aloud protocols (TAPs) were conducted with active community retail pharmacists in Oklahoma. Each TAP was audio recorded and transcribed to documents for analysis. Analysts agreed on common themes, illuminated differences in findings, and saturation of the data gathered. Methods, needs, and workflow themes of community pharmacists associated with improving Star Ratings measures were compiled and organized to exhibit a decision-making process. Five TAPs were performed among three independent pharmacy owners, one multi-store owner, and one chain-store administrator. A thematically common 4-step process to monitor and improve CMS Star Ratings scores among participants was identified. To improve Star Ratings measures, pharmacists: 1) used technology to access scores, 2) analyzed data to strategically set goals, 3) assessed individual patient information for comprehensive assessment, and 4) decided on interventions to best impact Star Ratings scores. Participants also shared common needs, workflow issues, and benefits associated with methods used in improving Star Ratings. TAPs were useful in exploring processes of pharmacists who improve CMS Star Ratings scores. Pharmacists demonstrated and verbalized their methods, workflow issues, needs, and benefits related to performing the task. The themes and decision-making process identified to improving CMS Star Ratings scores will assist in the development of training and education programs for pharmacists in the community setting. Published by Elsevier Inc.
Assessment and Evaluation Methods for Access Services
ERIC Educational Resources Information Center
Long, Dallas
2014-01-01
This article serves as a primer for assessment and evaluation design by describing the range of methods commonly employed in library settings. Quantitative methods, such as counting and benchmarking measures, are useful for investigating the internal operations of an access services department in order to identify workflow inefficiencies or…
USDA-ARS?s Scientific Manuscript database
Volatilization represents a significant loss pathway for many pesticides, herbicides and other agrochemicals. One common method for measuring the volatilization of agrochemicals is the flux-gradient method. Using this method, the chemical flux is estimated as the product of the vertical concentratio...
Measuring true localization accuracy in super resolution microscopy with DNA-origami nanostructures
NASA Astrophysics Data System (ADS)
Reuss, Matthias; Fördős, Ferenc; Blom, Hans; Öktem, Ozan; Högberg, Björn; Brismar, Hjalmar
2017-02-01
A common method to assess the performance of (super resolution) microscopes is to use the localization precision of emitters as an estimate for the achieved resolution. Naturally, this is widely used in super resolution methods based on single molecule stochastic switching. This concept suffers from the fact that it is hard to calibrate measures against a real sample (a phantom), because true absolute positions of emitters are almost always unknown. For this reason, resolution estimates are potentially biased in an image since one is blind to true position accuracy, i.e. deviation in position measurement from true positions. We have solved this issue by imaging nanorods fabricated with DNA-origami. The nanorods used are designed to have emitters attached at each end in a well-defined and highly conserved distance. These structures are widely used to gauge localization precision. Here, we additionally determined the true achievable localization accuracy and compared this figure of merit to localization precision values for two common super resolution microscope methods STED and STORM.
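To make the precision/accuracy distinction above concrete, here is a minimal Python sketch computing both statistics from simulated localizations with a deliberate systematic offset (all numbers are illustrative):

```python
import numpy as np

def localization_stats(measured, true):
    """Separate precision from accuracy for localized emitter positions.

    measured: shape (n, 2) localized positions; true: shape (n, 2) known
    ground-truth positions (e.g., from a DNA-origami design).
    Precision = RMS scatter about the mean localization; accuracy = RMS
    deviation from the true positions (includes any systematic bias).
    """
    m, t = np.asarray(measured, float), np.asarray(true, float)
    precision = np.sqrt(np.mean(np.sum((m - m.mean(axis=0)) ** 2, axis=1)))
    accuracy = np.sqrt(np.mean(np.sum((m - t) ** 2, axis=1)))
    return precision, accuracy

# Hypothetical: repeated localizations of one origami-anchored emitter
# with a 5 nm systematic offset in x; precision looks good, accuracy worse.
rng = np.random.default_rng(0)
true_pos = np.zeros((200, 2))
meas = true_pos + rng.normal(0, 3, (200, 2)) + np.array([5.0, 0.0])
prec, acc = localization_stats(meas, true_pos)
print(f"precision = {prec:.1f} nm, accuracy = {acc:.1f} nm")
```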
How to Detect Insight Moments in Problem Solving Experiments.
Laukkonen, Ruben E; Tangen, Jason M
2018-01-01
Arguably, it is not possible to study insight moments during problem solving without being able to accurately detect when they occur (Bowden and Jung-Beeman, 2007). Despite over a century of research on the insight moment, there is surprisingly little consensus on the best way to measure them in real-time experiments. There have also been no attempts to evaluate whether the different ways of measuring insight converge. Indeed, if it turns out that the popular measures of insight diverge, then this may indicate that researchers who have used one method may have been measuring a different phenomenon to those who have used another method. We compare the strengths and weaknesses of the two most commonly cited ways of measuring insight: The feelings-of-warmth measure adapted from Metcalfe and Wiebe (1987), and the self-report measure adapted from Bowden and Jung-Beeman (2007). We find little empirical agreement between the two measures, and conclude that the self-report measure of Aha! is superior both methodologically and theoretically, and provides a better representation of what is commonly regarded as insight. We go on to describe and recommend a novel visceral measure of insight using a dynamometer as described in Creswell et al. (2016).
How to Detect Insight Moments in Problem Solving Experiments
Laukkonen, Ruben E.; Tangen, Jason M.
2018-01-01
Arguably, it is not possible to study insight moments during problem solving without being able to accurately detect when they occur (Bowden and Jung-Beeman, 2007). Despite over a century of research on the insight moment, there is surprisingly little consensus on the best way to measure them in real-time experiments. There have also been no attempts to evaluate whether the different ways of measuring insight converge. Indeed, if it turns out that the popular measures of insight diverge, then this may indicate that researchers who have used one method may have been measuring a different phenomenon to those who have used another method. We compare the strengths and weaknesses of the two most commonly cited ways of measuring insight: The feelings-of-warmth measure adapted from Metcalfe and Wiebe (1987), and the self-report measure adapted from Bowden and Jung-Beeman (2007). We find little empirical agreement between the two measures, and conclude that the self-report measure of Aha! is superior both methodologically and theoretically, and provides a better representation of what is commonly regarded as insight. We go on to describe and recommend a novel visceral measure of insight using a dynamometer as described in Creswell et al. (2016). PMID:29593598
Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method
NASA Astrophysics Data System (ADS)
Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu
2017-10-01
Due to the high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on the profile measurement, thus this paper analyzes the error of high-precision large-range lever-type stylus profilometry. The errors are corrected by the Nelder-Mead Simplex method, and the results are verified by the spherical surface calibration. It can be seen that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometry in large-scale measurement.
Duct Leakage Repeatability Testing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walker, Iain; Sherman, Max
2014-08-01
The purpose of this report is to evaluate the repeatability of the three most significant measurement techniques for duct leakage using data from the literature and recently obtained field data. We will also briefly discuss the first two factors. The main question to be answered by this study is to determine if differences in the repeatability of these test methods are sufficient to indicate that any of these methods is so poor that it should be excluded from consideration as an allowed procedure in codes and standards. The three duct leak measurement methods assessed in this report are the two duct pressurization methods that are commonly used by many practitioners and the DeltaQ technique. These are methods B, C and A, respectively, of the ASTM E1554 standard. Although it would be useful to evaluate other duct leak test methods, this study focused on those test methods that are commonly used and are required in various test standards, such as BPI (2010), RESNET (2014), ASHRAE 62.2 (2013), California Title 24 (CEC 2012), DOE Weatherization and many other energy efficiency programs.
Saturation-inversion-recovery: A method for T1 measurement
NASA Astrophysics Data System (ADS)
Wang, Hongzhi; Zhao, Ming; Ackerman, Jerome L.; Song, Yiqiao
2017-01-01
Spin-lattice relaxation (T1) has always been measured by inversion-recovery (IR), saturation-recovery (SR), or related methods. These existing methods share a common behavior in that the function describing T1 sensitivity is the exponential, e.g., exp(−τ/T1), where τ is the recovery time. In this paper, we describe a saturation-inversion-recovery (SIR) sequence for T1 measurement with considerably sharper T1-dependence than those of the IR and SR sequences, and demonstrate it experimentally. The SIR method could be useful in improving the contrast between regions of differing T1 in T1-weighted MRI.
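For contrast with the SIR sequence, here is a minimal Python sketch fitting T1 from simulated conventional inversion-recovery data, whose sensitivity follows the exponential noted above (simulated values, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(tau, m0, t1):
    """Ideal inversion-recovery magnetization: M(tau) = M0 * (1 - 2*exp(-tau/T1))."""
    return m0 * (1.0 - 2.0 * np.exp(-tau / t1))

# Simulated IR measurement of a T1 = 0.8 s sample with additive noise.
rng = np.random.default_rng(7)
tau = np.linspace(0.05, 4.0, 12)  # recovery times, s
data = ir_signal(tau, 1.0, 0.8) + rng.normal(0, 0.02, tau.size)

# Nonlinear least-squares fit for M0 and T1, with a 1-sigma uncertainty
# on T1 from the parameter covariance matrix.
(m0_fit, t1_fit), cov = curve_fit(ir_signal, tau, data, p0=(1.0, 0.5))
t1_err = np.sqrt(cov[1, 1])
print(f"T1 = {t1_fit:.3f} +/- {t1_err:.3f} s")
```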
Kristensen, Gunn B B; Rustad, Pål; Berg, Jens P; Aakre, Kristin M
2016-09-01
We undertook this study to evaluate method differences for 5 components analyzed by immunoassays, to explore whether the use of method-dependent reference intervals may compensate for method differences, and to investigate commutability of external quality assessment (EQA) materials. Twenty fresh native single serum samples, a fresh native serum pool, Nordic Federation of Clinical Chemistry Reference Serum X (serum X) (serum pool), and 2 EQA materials were sent to 38 laboratories for measurement of cobalamin, folate, ferritin, free T4, and thyroid-stimulating hormone (TSH) by 5 different measurement procedures [Roche Cobas (n = 15), Roche Modular (n = 4), Abbott Architect (n = 8), Beckman Coulter Unicel (n = 2), and Siemens ADVIA Centaur (n = 9)]. The target value for each component was calculated based on the mean of method means or measured by a reference measurement procedure (free T4). Quality specifications were based on biological variation. Local reference intervals were reported from all laboratories. Method differences that exceeded acceptable bias were found for all components except folate. Free T4 differences from the uncommonly used reference measurement procedure were large. Reference intervals differed between measurement procedures but also within 1 measurement procedure. The serum X material was commutable for all components and measurement procedures, whereas the EQA materials were noncommutable in 13 of 50 occasions (5 components, 5 methods, 2 EQA materials). The bias between the measurement procedures was unacceptably large in 4/5 tested components. Traceability to reference materials as claimed by the manufacturers did not lead to acceptable harmonization. Adjustment of reference intervals in accordance with method differences and use of commutable EQA samples are not implemented commonly. © 2016 American Association for Clinical Chemistry.
Objective comparison of particle tracking methods
Chenouard, Nicolas; Smal, Ihor; de Chaumont, Fabrice; Maška, Martin; Sbalzarini, Ivo F.; Gong, Yuanhao; Cardinale, Janick; Carthel, Craig; Coraluppi, Stefano; Winter, Mark; Cohen, Andrew R.; Godinez, William J.; Rohr, Karl; Kalaidzidis, Yannis; Liang, Liang; Duncan, James; Shen, Hongying; Xu, Yingke; Magnusson, Klas E. G.; Jaldén, Joakim; Blau, Helen M.; Paul-Gilloteaux, Perrine; Roudot, Philippe; Kervrann, Charles; Waharte, François; Tinevez, Jean-Yves; Shorte, Spencer L.; Willemse, Joost; Celler, Katherine; van Wezel, Gilles P.; Dan, Han-Wei; Tsai, Yuh-Show; de Solórzano, Carlos Ortiz; Olivo-Marin, Jean-Christophe; Meijering, Erik
2014-01-01
Particle tracking is of key importance for quantitative analysis of intracellular dynamic processes from time-lapse microscopy image data. Since manually detecting and following large numbers of individual particles is not feasible, automated computational methods have been developed for these tasks by many groups. Aiming to perform an objective comparison of methods, we gathered the community and organized, for the first time, an open competition, in which participating teams applied their own methods independently to a commonly defined data set including diverse scenarios. Performance was assessed using commonly defined measures. Although no single method performed best across all scenarios, the results revealed clear differences between the various approaches, leading to important practical conclusions for users and developers. PMID:24441936
ERIC Educational Resources Information Center
Watkins, J.; Espie, C. A.; Curtice, L.; Mantala, K.; Corp, A.; Foley, J.
2006-01-01
Background: Epilepsy is common in people with intellectual disability, yet clinicians and researchers seldom obtain information directly from the client. The development and preliminary validation of a novel measure for use with people with mild to moderate intellectual disabilities is described. Methods: Focus group methods (6 groups; 24…
Intercomparison of six ambient [CH2O] measurement techniques
NASA Astrophysics Data System (ADS)
Gilpin, Tim; Apel, Eric; Fried, Alan; Wert, Bryan; Calvert, Jack; Genfa, Zhang; Dasgupta, Purnendu; Harder, Jerry W.; Heikes, Brian; Hopkins, Brian; Westberg, Hal; Kleindienst, Tad; Lee, Yin-Nan; Zhou, Xianliang; Lonneman, William; Sewell, Scott
1997-09-01
From May 29 to June 3, 1995, a blind intercomparison of six ambient formaldehyde measurement techniques took place at a field site near the National Center for Atmospheric Research in Boulder, Colorado. The continuous measurement methods intercompared were tunable diode laser absorption spectroscopy (TDLAS), coil/2,4-dinitrophenylhydrazine (CDNPH), 1,3-cyclohexanedione-diffusion scrubber (CHDDS), and the coil enzyme method (CENZ). In addition, two different cartridge methods were compared: silica gel-2,4-dinitrophenylhydrazine (DNPH) systems and a C-18-DNPH system. The intercomparison was conducted with spiked zero air (part 1) and ambient air (part 2). The CH2O standards for part 1 were calibrated by several independent methods and delivered to participants via a common glass manifold with potential trace gas interferants common to ambient air (O3, SO2, NO2, isoprene, H2O). The TDLAS system was used to confirm the absolute accuracy of the standards and served as the reference for part 1. The ambient phase lasted 44 hours, with all participants sampling from a common glass tower. Differences between the ambient [CH2O] observed by the TDLAS and the other continuous methods were significant in some cases. For matched ambient measurement times the average ratios (±1σ) [CH2O]measured/[CH2O]TDLAS were: 0.89±0.12 (CDNPH); 1.30±0.02 (CHDDS); 0.63±0.03 (CENZ). The methods showed similar variations but different absolute values, and the divergences appeared to result largely from calibration differences (no gas phase standards were used by groups other than NCAR). When the regressions of the participant [CH2O] values versus the TDLAS values (measured in part 1) were used to normalize all of the results to the common gas phase standards of the NCAR group, the average ratios (±1σ) [CH2O]corrected/[CH2O]TDLAS for the first measurement period were much closer to unity: 1.04±0.14 (CDNPH), 1.00±0.11 (CHDDS), and 0.82±0.08 (CENZ). With the continuous methods used here, no unequivocal interferences were seen when SO2, NO2, O3, and isoprene impurities were added to prepared mixtures or when these were present in ambient air. The measurements with the C-18 DNPH (no O3 scrubber) and silica gel DNPH cartridges (with O3 scrubber) showed a reasonable correlation with the TDLAS measurements, although the results from the silica cartridges were about a factor of two below the standards in the spike experiments and about 35% below in the ambient measurements. Using the NCAR gas-phase spike data to calibrate the response of the silica gel cartridges in the ambient studies, the results are the same within statistical uncertainty. When the same gas phase calibration was used with the C-18 cartridges, the results showed a positive bias of about 35%, presumably reflecting a positive ozone interference in this case (no ozone scrubber was used). The silica DNPH cartridge results from the second participant were highly scattered and showed no significant correlation with the TDLAS measurements.
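The normalization step described above — regressing each participant's part 1 values on the TDLAS reference and using that fit to rescale the ambient data — can be sketched as follows. The arrays and the linear form are illustrative assumptions, not the study's actual processing code.

```python
import numpy as np

# Hypothetical part-1 (spiked zero air) readings: participant vs. TDLAS reference
tdlas_spike = np.array([1.0, 2.0, 4.0, 8.0])   # ppbv, reference values
method_spike = np.array([0.6, 1.3, 2.5, 5.1])  # ppbv, participant values

# Fit the participant response against the reference (slope + intercept)
slope, intercept = np.polyfit(tdlas_spike, method_spike, 1)

# Rescale ambient readings onto the common gas-phase calibration
method_ambient = np.array([2.2, 3.1, 1.8])     # ppbv, hypothetical
corrected = (method_ambient - intercept) / slope
print(corrected)
```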
The human motor neuron pools receive a dominant slow‐varying common synaptic input
Negro, Francesco; Yavuz, Utku Şükrü
2016-01-01
Key points: Motor neurons in a pool receive both common and independent synaptic inputs, although the proportion and role of their common synaptic input is debated. Classic correlation techniques between motor unit spike trains do not measure the absolute proportion of common input and have limitations as a result of the non-linearity of motor neurons. We propose a method that for the first time allows an accurate quantification of the absolute proportion of low frequency common synaptic input (<5 Hz) to motor neurons in humans. We applied the proposed method to three human muscles and determined experimentally that they receive a similar large amount (>60%) of common input, irrespective of their different functional and control properties. These results increase our knowledge about the role of common and independent input to motor neurons in force control.
Abstract: Motor neurons receive both common and independent synaptic inputs. This observation is classically based on the presence of a significant correlation between pairs of motor unit spike trains. The functional significance of different relative proportions of common input across muscles, individuals and conditions is still debated. One of the limitations in our understanding of correlated input to motor neurons is that it has not been possible so far to quantify the absolute proportion of common input with respect to the total synaptic input received by the motor neurons. Indeed, correlation measures of pairs of output spike trains only allow for relative comparisons. In the present study, we report for the first time an approach for measuring the proportion of common input in the low frequency bandwidth (<5 Hz) to a motor neuron pool in humans. This estimate is based on a phenomenological model and the theoretical fitting of the experimental values of coherence between the permutations of groups of motor unit spike trains. We demonstrate the validity of this theoretical estimate with several simulations. Moreover, we applied this method to three human muscles: the abductor digiti minimi, tibialis anterior and vastus medialis. Despite these muscles having different functional roles and control properties, as confirmed by the results of the present study, we estimate that their motor pools receive a similar and large (>60%) proportion of common low frequency oscillations with respect to their total synaptic input. These results suggest that the central nervous system provides a large amount of common input to motor neuron pools, in a similar way to that for muscles with different functional and control properties. PMID:27151459
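The computation underlying this estimate — coherence between composite spike trains in the low-frequency band — can be illustrated with a toy example. The sampling rate, threshold model, and signals below are invented, and the paper's actual method additionally fits a phenomenological model to coherence values from permutations of motor unit groups.

```python
import numpy as np
from scipy.signal import coherence

fs = 1000  # Hz, hypothetical sampling rate
t = np.arange(0, 60, 1 / fs)

# Toy composite spike trains: a shared slow drive plus independent noise,
# thresholded into binary spikes (illustrative, not a motor neuron model)
common = np.sin(2 * np.pi * 2 * t)  # 2 Hz shared input
rng = np.random.default_rng(0)
train_a = (common + rng.normal(0, 1, t.size) > 1.5).astype(float)
train_b = (common + rng.normal(0, 1, t.size) > 1.5).astype(float)

f, coh = coherence(train_a, train_b, fs=fs, nperseg=4096)
low = f < 5  # the <5 Hz band discussed in the paper
print(f"mean coherence below 5 Hz: {coh[low].mean():.2f}")
```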
NASA Astrophysics Data System (ADS)
Lü, Xiaozhou; Xie, Kai; Xue, Dongfeng; Zhang, Feng; Qi, Liang; Tao, Yebo; Li, Teng; Bao, Weimin; Wang, Songlin; Li, Xiaoping; Chen, Renjie
2017-10-01
Micro-capacitance sensors are widely applied in industrial applications for the measurement of mechanical variations. The measurement accuracy of micro-capacitance sensors is highly dependent on the capacitance measurement circuit. To overcome the inability of commonly used methods to directly measure capacitance variation and deal with the conflict between the measurement range and accuracy, this paper presents a capacitance variation measurement method which is able to measure the output capacitance variation (relative value) of the micro-capacitance sensor with a continuously variable measuring range. We present the principles and analyze the non-ideal factors affecting this method. To implement the method, we developed a capacitance variation measurement circuit and carried out experiments to test the circuit. The result shows that the circuit is able to measure a capacitance variation range of 0-700 pF linearly with a maximum relative accuracy of 0.05% and a capacitance range of 0-2 nF (with a baseline capacitance of 1 nF) with a constant resolution of 0.03%. The circuit is proposed as a new method to measure capacitance and is expected to have applications in micro-capacitance sensors for measuring capacitance variation with a continuously variable measuring range.
[Research progresses on ergonomics assessment and measurement methods for push-pull behavior].
Zhao, Yan; Li, Dongxu; Guo, Shengpeng
2011-10-01
Pushing and pulling (P&P) is a common operating mode of operators' physical work, and plays an important role in evaluating human behavioral health and operation performance. At present, there are many research methods for P&P, and this article is a state-of-the-art review of the classification of P&P research methods, the various impact factors in P&P programs, technical details of internal/external P&P force measurement and evaluation, the limitations of current research methods, and future developments in the ergonomics field.
ERIC Educational Resources Information Center
Overmyer, Jerry
2015-01-01
This quantitative research compares five sections of College Algebra using flipped classroom methods with six sections using the traditional lecture/homework structure and its effect on student achievement as measured through a common final exam. Common final exam scores were the dependent variables. Instructors of flipped sections who had…
NASA Astrophysics Data System (ADS)
Radić, Valentina; Menounos, Brian; Shea, Joseph; Fitzpatrick, Noel; Tessema, Mekdes A.; Déry, Stephen J.
2017-12-01
As part of surface energy balance models used to simulate glacier melting, choosing parameterizations to adequately estimate turbulent heat fluxes is extremely challenging. This study aims to evaluate a set of four aerodynamic bulk methods (labeled as C methods), commonly used to estimate turbulent heat fluxes for a sloped glacier surface, and two less commonly used bulk methods developed from katabatic flow models. The C methods differ in their parameterizations of the bulk exchange coefficient that relates the fluxes to the near-surface measurements of mean wind speed, air temperature, and humidity. The methods' performance in simulating 30 min sensible- and latent-heat fluxes is evaluated against the measured fluxes from an open-path eddy-covariance (OPEC) method. The evaluation is performed at a point scale of a mountain glacier, using one-level meteorological and OPEC observations from multi-day periods in the 2010 and 2012 summer seasons. The analysis of the two independent seasons yielded the same key findings, which include the following: first, the bulk method, with or without the commonly used Monin-Obukhov (M-O) stability functions, overestimates the turbulent heat fluxes over the observational period, mainly due to a substantial overestimation of the friction velocity. This overestimation is most pronounced during the katabatic flow conditions, corroborating the previous findings that the M-O theory works poorly in the presence of a low wind speed maximum. Second, the method based on a katabatic flow model (labeled as the KInt method) outperforms any C method in simulating the friction velocity; however, the C methods outperform the KInt method in simulating the sensible-heat fluxes. Third, the best overall performance is given by a hybrid method, which combines the KInt approach with the C method; i.e., it parameterizes eddy viscosity differently than eddy diffusivity. An error analysis reveals that the uncertainties in the measured meteorological variables and the roughness lengths produce errors in the modeled fluxes that are smaller than the differences between the modeled and observed fluxes. This implies that further advances will require improvement to model theory rather than better measurements of input variables. Further data from different glaciers are needed to investigate any universality of these findings.
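A minimal sketch of a neutral-stability bulk ("C method") sensible-heat flux calculation follows. The roughness lengths, constants, and inputs are illustrative assumptions; the study's methods additionally apply stability corrections (e.g., M-O functions) and treat latent-heat fluxes.

```python
import numpy as np

def sensible_heat_flux_neutral(u, t_air, t_surf, z=2.0, z0m=1e-3, z0h=1e-4,
                               rho=1.0, cp=1005.0, k=0.4):
    """Neutral-stability bulk estimate of the sensible-heat flux (W m^-2).

    C_H = k^2 / (ln(z/z0m) * ln(z/z0h)); H = rho * cp * C_H * u * (T_air - T_s).
    Roughness lengths and air density here are illustrative placeholders.
    """
    ch = k**2 / (np.log(z / z0m) * np.log(z / z0h))
    return rho * cp * ch * u * (t_air - t_surf)

# Melting glacier surface at 0 degC, 4 m/s wind, 5 degC air temperature
print(sensible_heat_flux_neutral(u=4.0, t_air=5.0, t_surf=0.0))
```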
Comparison of the dye method with the thermocouple psychrometer for measuring leaf water potentials.
Knipling, E B; Kramer, P J
1967-10-01
The dye method for measuring water potential was examined and compared with the thermocouple psychrometer method in order to evaluate its usefulness for measuring leaf water potentials of forest trees and common laboratory plants. Psychrometer measurements are assumed to represent the true leaf water potentials. Because of the contamination of test solutions by cell sap and leaf surface residues, dye method values of most species varied about 1 to 5 bars from psychrometer values over the leaf water potential range of 0 to -30 bars. The dye method is useful for measuring changes and relative values of leaf water potential. Because of species differences in the relationships of dye method values to true leaf water potentials, dye method values should be interpreted with caution when comparing different species or the same species growing in widely different environments. Despite its limitations, the dye method is useful to many workers because it is simple, requires no elaborate equipment, and can be used in both the laboratory and the field.
Measuring Prices in Health Care Markets Using Commercial Claims Data.
Neprash, Hannah T; Wallace, Jacob; Chernew, Michael E; McWilliams, J Michael
2015-12-01
To compare methods of price measurement in health care markets. Truven Health Analytics MarketScan commercial claims. We constructed medical price indices using three approaches: (1) a "sentinel" service approach based on a single common service in a specific clinical domain, (2) a market basket approach, and (3) a spending decomposition approach. We constructed indices at the Metropolitan Statistical Area level and estimated correlations between and within them. Price indices using a spending decomposition approach were strongly and positively correlated with indices constructed from broad market baskets of common services (r > 0.95). Prices of single common services exhibited weak to moderate correlations with each other and with other measures. Market-level price measures that reflect broad sets of services are likely to rank markets similarly. Price indices relying on individual sentinel services may be more appropriate for examining specialty- or service-specific drivers of prices. © Health Research and Educational Trust.
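A fixed-weight market-basket index of the kind described in approach (2) can be sketched as follows. The services, prices, and weights are hypothetical; the study's actual indices were built from MarketScan claims.

```python
import numpy as np

# Hypothetical mean prices per service (rows: markets, cols: basket services)
prices = np.array([[120.0,  80.0, 450.0],
                   [150.0,  95.0, 500.0],
                   [100.0,  70.0, 430.0]])
weights = np.array([0.5, 0.3, 0.2])       # fixed national utilization weights

basket_cost = prices @ weights            # weighted basket cost per market
index = basket_cost / basket_cost.mean()  # normalize to the all-market mean
print(index)
```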
Zhang, Lida; Sun, Da-Wen; Zhang, Zhihang
2017-03-24
Moisture sorption isotherms are commonly determined by the saturated salt slurry method, which suffers from long measurement times, cumbersome labor, and microbial deterioration of samples. Thus, a novel method, the water activity (a_w) measurement (AWM) method, has been developed to overcome these drawbacks. The fundamentals and applications of this fast method are introduced with respect to its typical operational steps, the variety of equipment set-ups, and the samples to which it has been applied. Its rapidness and reliability are evaluated by comparison with conventional methods. This review also discusses factors impairing measurement precision and accuracy, including inappropriate choice of predrying/wetting techniques and unachieved moisture uniformity in samples due to inadequate equilibration time. This analysis and the corresponding suggestions can facilitate an improved AWM method with better accuracy and shorter measurement time.
Common pitfalls in statistical analysis: Measures of agreement.
Ranganathan, Priya; Pramesh, C S; Aggarwal, Rakesh
2017-01-01
Agreement between measurements refers to the degree of concordance between two (or more) sets of measurements. Statistical methods to test agreement are used to assess inter-rater variability or to decide whether one technique for measuring a variable can substitute another. In this article, we look at statistical measures of agreement for different types of data and discuss the differences between these and those for assessing correlation.
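For continuous data, one of the commonly used agreement methods of the type discussed here is the Bland-Altman limits-of-agreement analysis; a minimal sketch with hypothetical paired data:

```python
import numpy as np

# Paired measurements of the same quantity by two techniques (hypothetical)
a = np.array([10.2, 12.1, 9.8, 14.5, 11.0])
b = np.array([10.6, 11.8, 10.1, 14.9, 11.4])

diff = a - b
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)   # 95% limits of agreement
print(f"bias {bias:.2f}, limits {bias - loa:.2f} to {bias + loa:.2f}")
```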
Reliability of proton NMR spectroscopy for the assessment of frying oil oxidation
USDA-ARS?s Scientific Manuscript database
Although there are many analytical methods developed to assess oxidation of edible oil, it is still common to see a lack of consistency in results from different methods. This inconsistency is expected since there are numerous oxidation products and any analytical method measuring only one kind of o...
Reliability of cervical lordosis measurement techniques on long-cassette radiographs.
Janusz, Piotr; Tyrakowski, Marcin; Yu, Hailong; Siemionow, Kris
2016-11-01
Lateral radiographs are commonly used to assess cervical sagittal alignment. Three assessment methods have been described and are commonly utilized in clinical practice. These methods are described for perfect lateral cervical radiographs; in everyday practice, however, radiograph quality varies. The aim of this study was to compare the reliability and reproducibility of 3 cervical lordosis (CL) measurement methods. Forty-four standing lateral radiographs were randomly chosen from a lateral long-cassette radiograph database. Measurements of CL were performed with the Cobb method C2-C7 (CM), the C2-C7 posterior tangent method (PTM), and the sum of the posterior tangent method for each segment (SPTM). Three independent orthopaedic surgeons measured CL using the three methods on 44 lateral radiographs. One researcher used the three methods to measure CL three times at 4-week intervals. Agreement between the methods as well as their intra- and interobserver reliability were tested and quantified by the intraclass correlation coefficient (ICC) and the median error for a single measurement (SEM). An ICC of 0.75 or more reflected excellent agreement/reliability. The results were compared with a repeated-measures ANOVA test, with p < 0.05 considered significant. All methods revealed excellent intra- and interobserver reliability. Agreement (ICC, SEM) among all three methods was (0.89, 3.44°), between CM and SPTM was (0.82, 4.42°), between CM and PTM was (0.80, 4.80°), and between PTM and SPTM was (0.99, 1.10°). Mean values of CL for CM, PTM, and SPTM were 10.5° ± 13.9°, 17.5° ± 15.6°, and 17.7° ± 15.9° (p < 0.0001), respectively. The significant differences were between CM vs PTM (p < 0.0001) and CM vs SPTM (p < 0.0001), but not between PTM vs SPTM (p > 0.05). All three methods appeared to be highly reliable. Although high agreement between all measurement methods was shown, we do not recommend using the Cobb measurement method interchangeably with PTM or SPTM within a single study, as this could lead to error, whereas such interchange between the tangent methods can be considered.
NASA Astrophysics Data System (ADS)
Durham, J. M.; Poulson, D.; Bacon, J.; Chichester, D. L.; Guardincerri, E.; Morris, C. L.; Plaud-Ramos, K.; Schwendiman, W.; Tolman, J. D.; Winston, P.
2018-04-01
Most of the plutonium in the world resides inside spent nuclear reactor fuel rods. This high-level radioactive waste is commonly held in long-term storage within large, heavily shielded casks. Currently, international nuclear safeguards inspectors have no stand-alone method of verifying the amount of reactor fuel stored within a sealed cask. Here we demonstrate experimentally that measurements of the scattering angles of cosmic-ray muons, which pass through a storage cask, can be used to determine if spent fuel assemblies are missing without opening the cask. This application of technology and methods commonly used in high-energy particle physics provides a potential solution to this long-standing problem in international nuclear safeguards.
Defining adherence to therapeutic exercise for musculoskeletal pain: a systematic review.
Bailey, Daniel L; Holden, Melanie A; Foster, Nadine E; Quicke, Jonathan G; Haywood, Kirstie L; Bishop, Annette
2018-06-06
To establish the meaning of the term 'adherence' (including conceptual and measurement definitions) in the context of therapeutic exercise (TE) for musculoskeletal (MSK) pain. Systematic review using a search strategy including terms for: adherence, TE and MSK pain. Identified studies were independently screened for inclusion by two researchers. Two independent researchers extracted data on: study type; MSK pain population; type of TE used; definitions, parameters, measurement methods and values of adherence. Seven electronic databases were searched from inception to December 2016. Any study type featuring TE for adults with MSK pain and containing a definition of adherence, or a description of how adherence was measured. 459 studies were identified and 86 were included in the review. Most were prospective cohort studies and featured back and/or neck pain. Strengthening and stretching were the most common types of TE. A clearly identifiable definition of adherence was provided in 40% of the studies, with 12% using the same definition. Exercise frequency was the most commonly measured parameter of adherence, with self-report logs the most common measurement method. The most common value range used to determine satisfactory adherence was 80%-99% of the recommended exercise dose. No single definition of adherence to TE was apparent. We found no definition of adherence that specifically related to TE for MSK pain or described the dimensions of TE that should be measured. We recommend conceptualising adherence to TE for MSK pain from the perspective of all relevant stakeholders. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
Predicting recreational water quality advisories: A comparison of statistical methods
Brooks, Wesley R.; Corsi, Steven R.; Fienen, Michael N.; Carvin, Rebecca B.
2016-01-01
Epidemiological studies indicate that fecal indicator bacteria (FIB) in beach water are associated with illnesses among people having contact with the water. In order to mitigate public health impacts, many beaches are posted with an advisory when the concentration of FIB exceeds a beach action value. The most commonly used method of measuring FIB concentration takes 18–24 h before returning a result. In order to avoid the 24 h lag, it has become common to "nowcast" the FIB concentration using statistical regressions on environmental surrogate variables. Most commonly, nowcast models are estimated using ordinary least squares regression, but other regression methods from the statistical and machine learning literature are sometimes used. This study compares 14 regression methods across 7 Wisconsin beaches to identify which consistently produces the most accurate predictions. A random forest model is identified as the most accurate, followed by multiple regression fit using the adaptive LASSO.
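A minimal sketch of the kind of comparison the study performs — ordinary least squares against one of the machine learning alternatives (here a random forest), scored by cross-validation — on synthetic surrogate data; the variables and data-generating process are invented:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 4))   # surrogate variables: turbidity, rainfall, etc.
y = X[:, 0] + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.5, 300)  # log FIB (synthetic)

for name, model in [("OLS", LinearRegression()),
                    ("random forest", RandomForestRegressor(random_state=0))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.2f}")
```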
Estimating heat tolerance of plants by ion leakage: a new method based on gradual heating.
Ilík, Petr; Špundová, Martina; Šicner, Michal; Melkovičová, Helena; Kučerová, Zuzana; Krchňák, Pavel; Fürst, Tomáš; Večeřová, Kristýna; Panzarová, Klára; Benediktyová, Zuzana; Trtílek, Martin
2018-05-01
Heat tolerance of plants related to cell membrane thermostability is commonly estimated via the measurement of ion leakage from plant segments after a defined heat treatment. To compare the heat tolerance of various plants, it is crucial to select suitable heating conditions. This selection is time-consuming, and optimizing the conditions for all investigated plants may even be impossible. Another problem of the method is its tendency to overestimate basal heat tolerance. Here we present an improved ion leakage method, which does not suffer from these drawbacks. It is based on gradual heating of plant segments in a water bath, or of algal suspensions, from room temperature up to 70-75°C. The electrical conductivity of the bath/suspension, which is measured continuously during heating, abruptly increases at a certain temperature T_COND (within 55-70°C). The T_COND value can be taken as a measure of cell membrane thermostability, representing the heat tolerance of plants/organisms. Higher T_COND corresponds to higher heat tolerance (basal or acquired) connected to higher thermostability of the cell membrane, as evidenced by the common ion leakage method. The new method also enables determination of the thermostability of photochemical reactions in photosynthetic samples via the simultaneous measurement of Chl fluorescence. © 2018 The Authors. New Phytologist © 2018 New Phytologist Trust.
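Extracting T_COND from a gradual-heating record amounts to locating the temperature at which conductivity rises most steeply; a minimal sketch on synthetic data (the sigmoid rise and baseline drift are invented):

```python
import numpy as np

# Hypothetical gradual-heating record: temperature (degC) and bath conductivity
temp = np.linspace(25, 75, 500)
cond = 10 + 0.05 * (temp - 25) + 80 / (1 + np.exp(-(temp - 62)))  # sigmoid rise

dcond_dtemp = np.gradient(cond, temp)
t_cond = temp[np.argmax(dcond_dtemp)]   # steepest increase ~ T_COND
print(f"T_COND ~ {t_cond:.1f} degC")
```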
NASA Astrophysics Data System (ADS)
Di, Jianglei; Song, Yu; Xi, Teli; Zhang, Jiwei; Li, Ying; Ma, Chaojie; Wang, Kaiqiang; Zhao, Jianlin
2017-11-01
Biological cells are usually transparent with a small refractive index gradient. Digital holographic interferometry can be used in the measurement of biological cells. We propose a dual-wavelength common-path digital holographic microscopy for the quantitative phase imaging of biological cells. In the proposed configuration, a parallel glass plate is inserted in the light path to create the lateral shearing, and two lasers with different wavelengths are used as the light source to form the dual-wavelength composite digital hologram. The information of biological cells for different wavelengths is separated and extracted in the Fourier domain of the hologram, and then combined to a shorter wavelength in the measurement process. This method could improve the system's temporal stability and reduce speckle noise simultaneously. Mouse osteoblastic cells and peony pollens are measured to show the feasibility of this method.
Technology Solutions Case Study: Predicting Envelope Leakage in Attached Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2013-11-01
The most common method of measuring air leakage is to perform a single (or solo) blower door pressurization and/or depressurization test. In detached housing, the single blower door test measures leakage to the outside. In attached housing, however, this “solo” test method measures both air leakage to the outside and air leakage between adjacent units through common surfaces. In an attempt to create a simplified tool for predicting leakage to the outside, Building America team Consortium for Advanced Residential Buildings (CARB) performed a preliminary statistical analysis on blower door test results from 112 attached dwelling units in four apartment complexes. Although the subject data set is limited in size and variety, the preliminary analyses suggest significant predictors are present and support the development of a predictive model. Further data collection is underway to create a more robust prediction tool for use across different construction types, climate zones, and unit configurations.
Effectiveness of modified 1-hour air-oven moisture methods for determining popcorn moisture
USDA-ARS?s Scientific Manuscript database
Two of the most commonly used approved grain moisture air-oven reference methods are the air oven method ASAE S352.2, which requires long heating time (72-h) for unground samples, and the AACC 44-15.02 air-oven method, which dries a ground sample for 1 hr, but there is specific moisture measurement ...
Measurement equivalence: a glossary for comparative population health research.
Morris, Katherine Ann
2018-03-06
Comparative population health studies are becoming more common and are advancing solutions to crucial public health problems, but decades-old measurement equivalence issues remain without a common vocabulary to identify and address the biases that contribute to non-equivalence. This glossary defines sources of measurement non-equivalence. While drawing examples from both within-country and between-country studies, this glossary also defines methods of harmonisation and elucidates the unique opportunities in addition to the unique challenges of particular harmonisation methods. Its primary objective is to enable population health researchers to more clearly articulate their measurement assumptions and the implications of their findings for policy. It is also intended to provide scholars and policymakers across multiple areas of inquiry with tools to evaluate comparative research and thus contribute to urgent debates on how to ameliorate growing health disparities within and between countries. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
A wet/wet differential pressure sensor for measuring vertical hydraulic gradient.
Fritz, Brad G; Mackley, Rob D
2010-01-01
Vertical hydraulic gradient is commonly measured in rivers, lakes, and streams for studies of groundwater-surface water interaction. While a number of methods with subtle differences have been applied, these methods can generally be separated into two categories: measuring surface water elevation and pressure in the subsurface separately, or making direct measurements of the head difference with a manometer. Making separate head measurements allows for the use of electronic pressure sensors, providing large datasets that are particularly useful when the vertical hydraulic gradient fluctuates over time. On the other hand, using a manometer-based method provides an easier and more rapid measurement with a simpler computation to calculate the vertical hydraulic gradient. In this study, we evaluated a wet/wet differential pressure sensor for use in measuring vertical hydraulic gradient. This approach combines the advantage of high-temporal-frequency measurements obtained with instrumented piezometers with the simplicity and reduced potential for human-induced error of a manometer board method. Our results showed that the wet/wet differential pressure sensor provided results comparable to more traditional methods, making it an acceptable method for future use.
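Converting a differential-pressure reading to a vertical hydraulic gradient is a short calculation; a minimal sketch, with the port geometry and reading as assumptions:

```python
RHO_G = 1000.0 * 9.81  # water density (kg/m^3) times gravity (m/s^2)

def vertical_hydraulic_gradient(dp_pascal: float, dz_m: float) -> float:
    """VHG = head difference / vertical separation between measurement points.

    dp_pascal: differential pressure between surface water and piezometer port.
    dz_m: vertical distance between the two ports (hypothetical geometry).
    """
    head_difference_m = dp_pascal / RHO_G
    return head_difference_m / dz_m

print(vertical_hydraulic_gradient(dp_pascal=49.0, dz_m=0.5))  # ~0.01
```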
Optical versus tactile geometry measurement: alternatives or counterparts
NASA Astrophysics Data System (ADS)
Lehmann, Peter
2003-05-01
This contribution deals with measuring strategies and methods for the determination of several geometrical features, covering the surface micro-topography and the form of mechanical objects. The measuring principles used in optical surface metrology include optical focusing profilers, confocal point-measuring and areal-measuring sensors, as well as interferometric principles such as white light interferometry and speckle techniques. In comparison with stylus instruments, optical techniques provide certain advantages such as fast data acquisition, in-process applicability, and contactless measurement. However, the frequency response characteristics of optical and tactile measurements differ significantly. In addition, optical sensors are commonly more influenced by critical geometrical conditions and the optical properties of an object. For precise form measurement, mechanical instruments have dominated until now. One reason for this may be that commonly the complete 360-degree geometry of the measuring object has to be analyzed. Another point is that optical principles such as form-measuring interferometry fail in cases of complex object geometry or rougher object surfaces. Other methods, e.g., fringe projection or digital holography, do not yet meet the accuracy demands of precision-engineered workpieces. Hence, a combination of mechanical concepts and optical sensors represents an interesting potential for current and future measuring tasks that require high accuracy and maximum flexibility.
Antimicrobial Stewardship Programs: Appropriate Measures and Metrics to Study their Impact.
Morris, Andrew M
Antimicrobial stewardship is a new field that struggles to find the right balance between meaningful and useful metrics to study the impact of antimicrobial stewardship programs (ASPs). ASP metrics primarily measure antimicrobial use, although microbiological resistance and clinical outcomes are also important measures of the impact an ASP has on a hospital and its patient population. Antimicrobial measures looking at consumption are the most commonly used measures, and are focused on defined daily doses, days of therapy, and costs, usually standardized per 1,000 patient-days. Each measure provides slightly different information, with their own upsides and downfalls. Point prevalence measurement of antimicrobial use is an increasingly used approach to understanding consumption that does not entirely rely on sophisticated electronic information systems, and is also replicable. Appropriateness measures hold appeal and promise, but have not been developed to the degree that makes them useful and widely applicable. The primary reason why antimicrobial stewardship is necessary is the growth of antimicrobial resistance. Accordingly, antimicrobial resistance is an important metric of the impact of an ASP. The most common approach to measuring resistance for ASP purposes is to report rates of common or important community- or nosocomial-acquired antimicrobial-resistant organisms, such as methicillin-resistant Staphylococcus aureus and Clostridium difficile. Such an approach is dependent on detection methods, community rates of resistance, and co-interventions, and therefore may not be the most accurate or reflective measure of antimicrobial stewardship interventions. Development of an index to reflect the net burden of resistance holds theoretical promise, but has yet to be realized. Finally, programs must consider patient outcome measures. Mortality is the most objective and reliable method, but has several drawbacks. Disease- or organism-specific mortality, or cure, are increasingly used metrics.
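The most common consumption metric mentioned above, DDDs standardized per 1,000 patient-days, is a short calculation; a minimal sketch (the DDD value and counts below are illustrative, not official WHO assignments):

```python
def ddd_per_1000_patient_days(grams_dispensed: float, who_ddd_g: float,
                              patient_days: float) -> float:
    """Antimicrobial consumption as DDDs per 1,000 patient-days.

    who_ddd_g is the WHO-assigned defined daily dose in grams for the agent;
    the example value below is illustrative, not an official assignment.
    """
    return grams_dispensed / who_ddd_g / patient_days * 1000

print(ddd_per_1000_patient_days(grams_dispensed=850.0, who_ddd_g=3.0,
                                patient_days=12_000))
```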
Non-Invasive Methods for Iron Concentration Assessment
NASA Astrophysics Data System (ADS)
Carneiro, Antonio A. O.; Baffa, Oswaldo; Angulo, Ivan L.; Covas, Dimas T.
2002-08-01
Iron excess is commonly observed in patients with transfusional iron overload. Iron chelation therapy in these patients requires accurate determination of the magnitude of iron excess. The most promising method for noninvasive assessment of iron stores is based on measurements of hepatic magnetic susceptibility.
ANALYTICAL METHOD COMPARISONS BY ESTIMATES OF PRECISION AND LOWER DETECTION LIMIT
The paper describes the use of principal component analysis to estimate the operating precision of several different analytical instruments or methods simultaneously measuring a common sample of a material whose actual value is unknown. This approach is advantageous when none of ...
MEASUREMENT OF VOCS DESORBED FROM BUILDING MATERIALS--A HIGH TEMPERATURE DYNAMIC CHAMBER METHOD
Mass balance is a commonly used approach for characterizing the source and sink behavior of building materials. Because the traditional sink test methods evaluate the adsorption and desorption of volatile organic compounds (VOC) at ambient temperatures, the desorption process is...
Sequi, Marco; Campi, Rita; Clavenna, Antonio; Bonati, Maurizio
2013-03-01
To evaluate the quality of data reporting and statistical methods performed in drug utilization studies in the pediatric population. Drug utilization studies evaluating all drug prescriptions to children and adolescents published between January 1994 and December 2011 were retrieved and analyzed. For each study, information on measures of exposure/consumption, the covariates considered, descriptive and inferential analyses, statistical tests, and methods of data reporting was extracted. An overall quality score was created for each study using a 12-item checklist that took into account the presence of outcome measures, covariates of measures, descriptive measures, statistical tests, and graphical representation. A total of 22 studies were reviewed and analyzed. Of these, 20 studies reported at least one descriptive measure. The mean was the most commonly used measure (18 studies), but only five of these also reported the standard deviation. Statistical analyses were performed in 12 studies, with the chi-square test being the most commonly performed test. Graphs were presented in 14 papers. Sixteen papers reported the number of drug prescriptions and/or packages, and ten reported the prevalence of the drug prescription. The mean quality score was 8 (median 9). Only seven of the 22 studies received a score of ≥10, while four studies received a score of <6. Our findings document that only a few of the studies reviewed applied statistical methods and reported data in a satisfactory manner. We therefore conclude that the methodology of drug utilization studies needs to be improved.
ERIC Educational Resources Information Center
Ades, A. E.; Lu, Guobing; Dias, Sofia; Mayo-Wilson, Evan; Kounali, Daphne
2015-01-01
Objective: Trials often may report several similar outcomes measured on different test instruments. We explored a method for synthesising treatment effect information both within and between trials and for reporting treatment effects on a common scale as an alternative to standardisation Study design: We applied a procedure that simultaneously…
ERIC Educational Resources Information Center
Friend Wise, Alyssa; Padmanabhan, Poornima; Duffy, Thomas M.
2009-01-01
This mixed-methods study probed the effectiveness of three kinds of objects (video, theory, metaphor) as common reference points for conversations between online learners (student teachers). Individuals' degree of detail-focus was examined as a potentially interacting covariate and the outcome measure was learners' level of tacit knowledge related…
Tapering Timbers: Finding the Volume of Conical Frustums
ERIC Educational Resources Information Center
Jones, Dustin L.; Coleman, Max
2012-01-01
Throughout history, humans have developed and refined methods of measuring. For the volumes of some common shapes, they have derived formulas. One such formula is that for the volume of a conical frustum. The conical frustum is not usually on a short list of common geometric shapes, but students encounter it in their everyday experience. In the…
Modeling Outcomes with Floor or Ceiling Effects: An Introduction to the Tobit Model
ERIC Educational Resources Information Center
McBee, Matthew
2010-01-01
In gifted education research, it is common for outcome variables to exhibit strong floor or ceiling effects due to insufficient range of measurement of many instruments when used with gifted populations. Common statistical methods (e.g., analysis of variance, linear regression) produce biased estimates when such effects are present. In practice,…
NASA Astrophysics Data System (ADS)
Du, Yi-Chun; Chen, Yung-Fu; Li, Chien-Ming; Lin, Chia-Hung; Yang, Chia-En; Wu, Jian-Xing; Chen, Tainsong
2013-12-01
The Achilles tendon is one of the most commonly injured tendons in the human body, with a variety of causes such as trauma, overuse, and degeneration. Rupture and tendinosis are relatively common for this strong tendon. Stress-strain properties and shape change are important biomechanical properties of the tendon for assessing surgical repair or healing progress. Currently, rather limited non-invasive methods are available for precisely quantifying the in vivo biomechanical properties of tendons. The aim of this study was to apply quantitative ultrasound (QUS) methods, including ultrasonic attenuation and speed of sound (SOS), to investigate porcine tendons in different stress-strain conditions. In order to find a reliable method to evaluate the change of tendon shape, ultrasound measurement was also utilized for measuring tendon thickness and compared with the change in tendon cross-sectional area under different stresses. A total of 15 porcine tendons of hind trotters were examined. The test results show that the attenuation and broadband ultrasound attenuation decreased, and the SOS increased by a smaller magnitude, as the uniaxial stress-strain loading upon the tendons increased. Furthermore, the tendon thickness measured with the ultrasound method was significantly correlated with tendon cross-sectional area (Pearson coefficient = 0.86). These results also indicate that QUS attenuation and ultrasonic thickness measurement are reliable and potential parameters for assessing the biomechanical properties of tendons. Further investigations are needed to warrant the application of the proposed method in a clinical setting.
Thermophysical Properties Measurements of Zr62Cu20Al10Ni8
NASA Technical Reports Server (NTRS)
Bradshaw, Richard C.; Waren, Mary; Rogers, Jan R.; Rathz, Thomas J.; Gangopadhyay, Anup K.; Kelton, Ken F.; Hyers, Robert W.
2006-01-01
Thermophysical property studies performed at high temperature can prove challenging because of reactivity problems brought on by the elevated temperatures. Contaminants from measuring devices and container walls can cause changes in properties. To prevent this, containerless processing techniques can be employed to isolate a sample during study. A common method used for this is levitation. Typical levitation methods used for containerless processing are aerodynamically, electromagnetically, and electrostatically based. All levitation methods reduce heterogeneous nucleation sites, which in turn provides access to metastable undercooled phases. In particular, electrostatic levitation is appealing because sample motion and stirring are minimized, and by combining it with optically based non-contact measuring techniques, many thermophysical properties can be measured. Applying some of these techniques, surface tension, viscosity, and density have been measured for the glass-forming alloy Zr62Cu20Al10Ni8 and are presented here with a brief overview of the non-contact measuring method used.
Abbott, M.; Einerson, J.; Schuster, P.; Susong, D.; Taylor, Howard E.; ,
2004-01-01
Snow sampling and analysis methods that produce accurate, ultra-low-level measurements of trace element and common ion concentrations in southeastern Idaho snow were developed. Snow samples were collected over two winters to assess trace element and common ion concentrations in air pollutant fallout across southeastern Idaho. The source-area apportionment of fallout concentrations measured at downwind locations was investigated using pattern recognition and multivariate statistical techniques. Results show a high level of contribution from phosphate processing facilities located outside Pocatello in the southern portion of the Eastern Snake River Plain, and no obvious source area profiles other than at Pocatello.
ERIC Educational Resources Information Center
Toplak, Maggie E.; West, Richard F.; Stanovich, Keith E.
2013-01-01
Background: Both performance-based and rating measures are commonly used to index executive function in clinical and neuropsychological assessments. They are intended to index the same broad underlying mental construct of executive function. The association between these two types of measures was investigated in the current article. Method and…
Bioelectrical Impedance and Body Composition Assessment
ERIC Educational Resources Information Center
Martino, Mike
2006-01-01
This article discusses field tests that can be used in physical education programs. The most common field tests are anthropometric measurements, which include body mass index (BMI), girth measurements, and skinfold testing. Another field test that is gaining popularity is bioelectrical impedance analysis (BIA). Each method has particular strengths…
Measuring snow water equivalent from common-offset GPR records through migration velocity analysis
NASA Astrophysics Data System (ADS)
St. Clair, James; Holbrook, W. Steven
2017-12-01
Many mountainous regions depend on seasonal snowfall for their water resources. Current methods of predicting the availability of water resources rely on long-term relationships between stream discharge and snowpack monitoring at isolated locations, which are less reliable during abnormal snow years. Ground-penetrating radar (GPR) has been shown to be an effective tool for measuring snow water equivalent (SWE) because of the close relationship between snow density and radar velocity. However, the standard methods of measuring radar velocity can be time-consuming. Here we apply a migration focusing method originally developed for extracting velocity information from diffracted energy observed in zero-offset seismic sections to the problem of estimating radar velocities in seasonal snow from common-offset GPR data. Diffractions are isolated by plane-wave-destruction (PWD) filtering and the optimal migration velocity is chosen based on the varimax norm of the migrated image. We then use the radar velocity to estimate snow density, depth, and SWE. The GPR-derived SWE estimates are within 6 % of manual SWE measurements when the GPR antenna is coupled to the snow surface and 3-21 % of the manual measurements when the antenna is mounted on the front of a snowmobile ~0.5 m above the snow surface.
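One commonly used chain from radar velocity to SWE runs through an empirical dry-snow permittivity-density relation (e.g., Kovacs et al., 1995). The sketch below is a generic illustration under that assumption, not the authors' exact processing:

```python
C = 0.2998  # speed of light in vacuum, m/ns

def swe_from_gpr(velocity_m_ns: float, twt_ns: float) -> float:
    """SWE (m w.e.) from radar velocity and two-way traveltime in dry snow.

    Uses v = c/sqrt(eps) and the empirical dry-snow relation
    eps = (1 + 0.845*rho)^2 with rho in g/cm^3 (Kovacs et al., 1995);
    the study's own density-velocity relation may differ.
    """
    rho = (C / velocity_m_ns - 1.0) / 0.845   # snow density, g/cm^3
    depth = velocity_m_ns * twt_ns / 2.0      # snow depth, m
    return rho * depth                        # g/cm^3 * m == m water equivalent

print(swe_from_gpr(velocity_m_ns=0.23, twt_ns=15.0))
```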
Assessment of glomerular filtration rate measurement with plasma sampling: a technical review.
Murray, Anthony W; Barnfield, Mark C; Waller, Michael L; Telford, Tania; Peters, A Michael
2013-06-01
This article reviews available radionuclide-based techniques for glomerular filtration rate (GFR) measurement, focusing on clinical indications for GFR measurement, ideal GFR radiopharmaceutical tracer properties, and the 2 most common tracers in clinical use. Methods for full, 1-compartment, and single-sample renal clearance characterization are discussed. GFR normalization and the role of GFR measurement in chemotherapy dosing are also considered.
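A minimal sketch of the slope-intercept (one-compartment) clearance calculation from late plasma samples follows. The sample times and activities are hypothetical, and full protocols apply a correction (e.g., Brochner-Mortensen) that is omitted here:

```python
import numpy as np

# Hypothetical plasma samples after injection of a GFR tracer
t_min = np.array([120, 180, 240])        # sampling times, minutes
conc = np.array([0.052, 0.038, 0.028])   # plasma activity, fraction of dose per L

# One-compartment fit: ln C(t) = ln C0 - lambda * t
slope, ln_c0 = np.polyfit(t_min, np.log(conc), 1)
lam, c0 = -slope, np.exp(ln_c0)

clearance_l_min = lam / c0               # dose-normalized: CL = D*lambda/C0, D = 1
print(f"clearance ~ {clearance_l_min * 1000:.0f} mL/min "
      "(before the correction applied by full methods)")
```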
USDA-ARS?s Scientific Manuscript database
Hand candling is the most common method of assessing interior egg quality. While this method is non-destructive, it is very subjective and takes some skill. The Haugh unit was developed in 1937 by R. Haugh and is revered as the “gold standard” for measuring interior egg quality. This objective me...
Middle East Regional Irrigation Management Information Systems project-Some science products
USDA-ARS?s Scientific Manuscript database
Similarities in the aridity of environments and water scarcity for irrigation allow common approaches to irrigation management problems and research methods in the Southern Great Plains of the United States and the Middle East. Measurement methods involving weighing lysimeters and eddy covariance sy...
Collinear interferometer with variable delay for carrier-envelope offset frequency measurement
NASA Astrophysics Data System (ADS)
Pawłowska, Monika; Ozimek, Filip; Fita, Piotr; Radzewicz, Czesław
2009-08-01
We demonstrate a novel scheme for measuring the carrier-envelope offset frequency in a femtosecond optical frequency comb. Our method is based on a common-path interferometer with a calcite Babinet-Soleil compensator employed to control the delay between the two interfering beams of pulses. The large delay range (up to 8 ps) of our device is sufficient for systems that rely on spectral broadening in microstructured fibers. We show an experimental proof that the stability of a common-path arrangement is superior to that of the standard interferometers.
Ikram, Najmul; Qadir, Muhammad Abdul; Afzal, Muhammad Tanvir
2018-01-01
Sequence similarity is a commonly used measure to compare proteins. With the increasing use of ontologies, semantic (function) similarity is gaining importance. The correlation between these measures has been applied in the evaluation of new semantic similarity methods and in protein function prediction. In this research, we investigate the relationship between the two similarity methods. The results suggest the absence of a strong correlation between sequence and semantic similarities. There is a large number of proteins with low sequence similarity and high semantic similarity. We observe that Pearson's correlation coefficient is not sufficient to explain the nature of this relationship. Interestingly, term semantic similarity values above 0 and below 1 do not seem to play a role in improving the correlation. That is, the correlation coefficient depends only on the number of common GO terms in the proteins under comparison, and the semantic similarity measurement method does not influence it. Semantic similarity and sequence similarity thus behave distinctly. These findings are of significance for future work on protein comparison and will help in better understanding the semantic similarity between proteins.
Bialas, Andrzej
2010-01-01
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high-risk applications. To improve their robustness, sensor systems should be developed in a restricted way that provides them with assurance. One such assurance-creation methodology is the Common Criteria (ISO/IEC 15408), used for IT products and systems. The contribution of the paper is a Common Criteria-compliant and pattern-based method for intelligent sensor security development. The paper concisely presents this method and its evaluation for a sensor detecting methane in a mine, focusing on the definition and solution of the intelligent sensor's security problem. The aim of the validation is to evaluate and improve the introduced method.
Ultrasound measurement of the brachial artery flow-mediated dilation without ECG gating.
Gemignani, Vincenzo; Bianchini, Elisabetta; Faita, Francesco; Giannarelli, Chiara; Plantinga, Yvonne; Ghiadoni, Lorenzo; Demi, Marcello
2008-03-01
The methods commonly used for noninvasive ultrasound assessment of endothelium-dependent flow-mediated dilation (FMD) require an electrocardiogram (ECG) signal to synchronize the measurements with the cardiac cycle. In this article, we present a method for assessing FMD that does not require ECG gating. The approach is based on temporal filtering of the diameter-time curve, which is obtained by means of a B-mode image processing system. The method was tested on 22 healthy volunteers without cardiovascular risk factors. The measurements obtained with the proposed approach were compared with those obtained with ECG gating and with both systolic and end-diastolic measurements. Results showed good agreement between the methods and a higher precision of the new method due to the fact that it is based on a larger number of measurements. Further advantages were also found both in terms of reliability of the measure and simplification of the instrumentation. (E-mail: gemi@ifc.cnr.it).
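The ECG-free idea — suppressing the cardiac-cycle oscillation by temporally filtering the diameter-time curve — can be sketched as follows. The frame rate, cutoff, and synthetic trace are assumptions, not the authors' parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 25.0                                   # B-mode frame rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)

# Toy diameter trace: slow dilation trend + cardiac oscillation + noise
rng = np.random.default_rng(2)
diameter = 4.0 + 0.2 * (t / 60) + 0.08 * np.sin(2 * np.pi * 1.2 * t) \
           + rng.normal(0, 0.02, t.size)

# Low-pass filtering removes the cardiac component, as in the ECG-free approach
b, a = butter(2, 0.3 / (fs / 2))            # 0.3 Hz cutoff (illustrative)
smooth = filtfilt(b, a, diameter)

fmd_percent = (smooth.max() - smooth[0]) / smooth[0] * 100
print(f"FMD ~ {fmd_percent:.1f}%")
```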
On-line noninvasive one-point measurements of pulse wave velocity.
Harada, Akimitsu; Okada, Takashi; Niki, Kiyomi; Chang, Dehua; Sugawara, Motoaki
2002-12-01
Pulse wave velocity (PWV) is a basic parameter in the dynamics of pressure and flow waves traveling in arteries. Conventional on-line methods of measuring PWV have mainly been based on "two-point" measurements, i.e., measurements of the time of travel of the wave over a known distance. This paper describes two methods by which on-line "one-point" measurements can be made, and compares the results obtained by the two methods. The principle of one method is to measure blood pressure and velocity at a point, and use the water-hammer equation for forward traveling waves. The principle of the other method is to derive PWV from the stiffness parameter of the artery. Both methods were realized by using an ultrasonic system which we specially developed for noninvasive measurements of wave intensity. We applied the methods to the common carotid artery in 13 normal humans. The regression line of the PWV (m/s) obtained by the former method on the PWV (m/s) obtained by the latter method was y = 1.03x - 0.899 (R(2) = 0.83). Although regional PWV in the human carotid artery has not been reported so far, the correlation between the PWVs obtained by the present two methods was so high that we are convinced of the validity of these methods.
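Both one-point estimates reduce to short formulas: the water-hammer form PWV = dP+/(ρ dU+) and the stiffness-parameter form PWV = sqrt(β·Pd/(2ρ)), the latter being a commonly quoted relation between β and PWV. A sketch with hypothetical carotid values:

```python
import numpy as np

RHO = 1050.0  # blood density, kg/m^3

# Water-hammer estimate: PWV = dP+/(rho * dU+) during early systole
dp_forward = 1300.0   # Pa, forward pressure rise (hypothetical)
du_forward = 0.25     # m/s, forward velocity rise (hypothetical)
pwv_wh = dp_forward / (RHO * du_forward)

# Stiffness-parameter estimate: beta = ln(Ps/Pd) / ((Ds - Dd)/Dd),
# then PWV = sqrt(beta * Pd / (2 * rho))
ps, pd = 16000.0, 10700.0     # Pa (~120/80 mmHg)
ds, dd = 7.4e-3, 6.8e-3       # m, systolic/diastolic diameters
beta = np.log(ps / pd) / ((ds - dd) / dd)
pwv_beta = np.sqrt(beta * pd / (2 * RHO))

print(f"water-hammer PWV {pwv_wh:.1f} m/s, "
      f"stiffness-parameter PWV {pwv_beta:.1f} m/s")
```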
Harari, Gil
2014-01-01
Statistical significance, expressed as a p-value, and the confidence interval (CI) are common statistical measures and are essential for the statistical analysis of studies in medicine and the life sciences. These measures provide complementary information about the statistical probability and conclusions regarding the clinical significance of study findings. This article is intended to describe the methodologies, compare the methods, assess their suitability for the different needs of study results analysis, and explain the situations in which each method should be used.
Rosenberry, Donald O.; Stannard, David L.; Winter, Thomas C.; Martinez, Margo L.
2004-01-01
Evapotranspiration determined using the energy-budget method at a semi-permanent prairie-pothole wetland in east-central North Dakota, USA was compared with 12 other commonly used methods. The Priestley-Taylor and deBruin-Keijman methods compared best with the energy-budget values; mean differences were less than 0.1 mm d−1, and standard deviations were less than 0.3 mm d−1. Both methods require measurement of air temperature, net radiation, and heat storage in the wetland water. The Penman, Jensen-Haise, and Brutsaert-Stricker methods provided the next-best values for evapotranspiration relative to the energy-budget method. The mass-transfer, deBruin, and Stephens-Stewart methods provided the worst comparisons; the mass-transfer and deBruin comparisons with energy-budget values indicated a large standard deviation, and the deBruin and Stephens-Stewart comparisons indicated a large bias. The Jensen-Haise method proved to be cost effective, providing relatively accurate comparisons with the energy-budget method (mean difference=0.44 mm d−1, standard deviation=0.42 mm d−1) and requiring only measurements of air temperature and solar radiation. The Mather (Thornthwaite) method is the simplest, requiring only measurement of air temperature, and it provided values that compared relatively well with energy-budget values (mean difference=0.47 mm d−1, standard deviation=0.56 mm d−1). Modifications were made to several of the methods to make them more suitable for use in prairie wetlands. The modified Makkink, Jensen-Haise, and Stephens-Stewart methods all provided results that were nearly as close to energy-budget values as were the Priestley-Taylor and deBruin-Keijman methods, and all three of these modified methods only require measurements of air temperature and solar radiation. The modified Hamon method provided values that were within 20 percent of energy-budget values during 95 percent of the comparison periods, and it only requires measurement of air temperature. The mass-transfer coefficient, associated with the commonly used mass-transfer method, varied seasonally, with the largest values occurring during summer.
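A minimal sketch of the Priestley-Taylor form, which as noted above needs only air temperature, net radiation, and water heat storage; the constant approximations are common textbook forms, not the study's exact implementation:

```python
import numpy as np

def priestley_taylor_et(t_air_c: float, rn: float, storage: float,
                        alpha: float = 1.26) -> float:
    """Priestley-Taylor evapotranspiration in mm/day.

    rn and storage (change in heat stored in the wetland water) in W/m^2.
    """
    delta = 4098 * 0.6108 * np.exp(17.27 * t_air_c / (t_air_c + 237.3)) \
            / (t_air_c + 237.3) ** 2      # slope of sat. vapor curve, kPa/degC
    gamma = 0.066                          # psychrometric constant, kPa/degC
    lam = 2.45e6                           # latent heat of vaporization, J/kg
    le = alpha * delta / (delta + gamma) * (rn - storage)   # W/m^2
    return le / lam * 86400                # kg/m^2/day == mm/day

print(priestley_taylor_et(t_air_c=20.0, rn=150.0, storage=20.0))
```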
Artifacts in Digital Coincidence Timing
Moses, W. W.; Peng, Q.
2014-01-01
Digital methods are becoming increasingly popular for measuring time differences, and are the de facto standard in PET cameras. These methods usually include a master system clock and a (digital) arrival time estimate for each detector that is obtained by comparing the detector output signal to some reference portion of this clock (such as the rising edge). Time differences between detector signals are then obtained by subtracting the digitized estimates from a detector pair. A number of different methods can be used to generate the digitized arrival time of the detector output, such as sending a discriminator output into a time to digital converter (TDC) or digitizing the waveform and applying a more sophisticated algorithm to extract a timing estimator. All measurement methods are subject to error, and one generally wants to minimize these errors and so optimize the timing resolution. A common method for optimizing timing methods is to measure the coincidence timing resolution between two timing signals whose time difference should be constant (such as detecting gammas from positron annihilation) and selecting the method that minimizes the width of the distribution (i.e., the timing resolution). Unfortunately, a common form of error (a nonlinear transfer function) leads to artifacts that artificially narrow this resolution, which can lead to erroneous selection of the “optimal” method. The purpose of this note is to demonstrate the origin of this artifact and suggest that caution should be used when optimizing time digitization systems solely on timing resolution minimization. PMID:25321885
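The narrowing artifact is easy to reproduce in a toy simulation: pass both detectors' times through the same nonlinear (locally compressive) transfer function, and the measured time-difference spread shrinks below the true one. Everything below is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000

# True arrival times within one clock period, plus per-detector jitter (ns)
phase = rng.uniform(0.0, 10.0, n)
t1 = phase + rng.normal(0.0, 0.3, n)
t2 = phase + rng.normal(0.0, 0.3, n)

def tdc(t):
    """Toy nonlinear TDC transfer function (compressive over half its range)."""
    return np.where(t < 5.0, t, 5.0 + 0.5 * (t - 5.0))

ideal = np.std(t1 - t2)
measured = np.std(tdc(t1) - tdc(t2))
print(f"true sigma {ideal:.3f} ns, after nonlinear TDC {measured:.3f} ns "
      "(artificially narrower)")
```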
Durham, J. M.; Poulson, D.; Bacon, J.; ...
2018-04-10
Most of the plutonium in the world resides inside spent nuclear reactor fuel rods. This high-level radioactive waste is commonly held in long-term storage within large, heavily shielded casks. Currently, international nuclear safeguards inspectors have no stand-alone method of verifying the amount of reactor fuel stored within a sealed cask. In this paper, we demonstrate experimentally that measurements of the scattering angles of cosmic-ray muons, which pass through a storage cask, can be used to determine if spent fuel assemblies are missing without opening the cask. Finally, this application of technology and methods commonly used in high-energy particle physics provides a potential solution to this long-standing problem in international nuclear safeguards.
Liu, Xikai; Ma, Dong; Chen, Liang; Liu, Xiangdong
2018-02-08
Tuning the stiffness balance is crucial to full-band common-mode rejection for a superconducting gravity gradiometer (SGG). A reliable method to do so has been proposed and experimentally tested. In the tuning scheme, the frequency response functions of the displacement of each test mass under common-mode accelerations were measured, thereby determining a characteristic frequency for each test mass. A reduced difference in characteristic frequencies between the two test masses was used as the criterion for an effective tuning. Since the measurement of the characteristic frequencies does not depend on the scale factors of displacement detection, stiffness tuning can be done independently. We have tested this new method on a single-component SGG and obtained a reduction of two orders of magnitude in stiffness mismatch.
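A minimal sketch of the comparison step: extract a characteristic (resonance) frequency from each test mass's measured frequency response and use their difference as the tuning criterion. The synthetic FRFs and the simple peak-picking below are illustrative assumptions, not the authors' procedure:

```python
import numpy as np

def characteristic_frequency(freqs_hz, frf):
    """Take the frequency of the largest response peak as the characteristic frequency."""
    return freqs_hz[np.argmax(np.abs(frf))]

# Hypothetical measured FRFs of the two test masses under common-mode excitation
freqs = np.linspace(0.5, 50.0, 2000)
h1 = 1.0 / np.abs(20.0**2 - freqs**2 + 1j * 0.5 * freqs)  # resonance near 20 Hz
h2 = 1.0 / np.abs(20.3**2 - freqs**2 + 1j * 0.5 * freqs)  # slightly mismatched mass

f1 = characteristic_frequency(freqs, h1)
f2 = characteristic_frequency(freqs, h2)
print(f"stiffness mismatch criterion |f1 - f2| = {abs(f1 - f2):.2f} Hz")
```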
Broadcasting a Lab Measurement over Existing Conductor Networks
ERIC Educational Resources Information Center
Knipp, Peter A.
2009-01-01
Students learn about physical laws and the scientific method when they analyze experimental data in a laboratory setting. Three common sources exist for the experimental data that they analyze: (1) "hands-on" measurements by the students themselves, (2) electronic transfer (by downloading a spreadsheet, video, or computer-aided data-acquisition…
Steinmetz, Josiane; Schiele, Françoise; Gueguen, René; Férard, Georges; Henny, Joseph
2007-01-01
The improvement in the consistency of gamma-glutamyltransferase (GGT) activity results among different assays after calibration with a common material was estimated. We evaluated whether this harmonization could lead to reference limits common to different routine methods. Seven laboratories measured GGT activity using their own routine analytical system, both according to the manufacturer's recommendation and after calibration with a multi-enzyme calibrator [value assigned by the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) reference procedure]. All samples were re-measured using the IFCC reference procedure. Two groups of subjects were selected in each laboratory: a group of healthy men aged 18-25 years without long-term medication and with alcohol consumption less than 44 g/day, and a group of subjects with elevated GGT activity. The day-to-day coefficients of variation were less than 2.9% in each laboratory. The means obtained in the group of healthy subjects without common calibration (range of the means 16-23 U/L) were significantly different from those obtained by the IFCC procedure in five laboratories. After calibration, the means remained significantly different from the IFCC procedure results in only one laboratory. For three calibrated methods, the slope values of linear regression vs. the IFCC procedure were not different from the value 1. The results obtained with these three methods for healthy subjects (n=117) were pooled and reference limits were calculated. These were 11-49 U/L (2.5th-97.5th percentiles). The calibration also improved the consistency of elevated results when compared with the IFCC procedure. The common calibration improved the level of consistency between different routine methods and permitted the definition of common reference limits, which are quite similar to those proposed by the IFCC. This approach should lead to a real benefit in terms of prevention, screening, diagnosis, therapeutic monitoring, and epidemiological studies.
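The recalibration and reference-limit computation can be sketched as follows; the paired values and the simple slope/intercept alignment against the IFCC reference are illustrative placeholders, not the study's data:

```python
import numpy as np

def calibrate(routine, ifcc):
    """Least-squares slope/intercept mapping routine results onto the IFCC procedure."""
    slope, intercept = np.polyfit(routine, ifcc, 1)
    return lambda x: slope * np.asarray(x) + intercept

# Hypothetical paired GGT activities (U/L): routine method vs. IFCC reference
routine = np.array([14, 18, 22, 30, 45, 60, 90, 120], float)
ifcc    = np.array([16, 21, 25, 34, 50, 66, 98, 131], float)
to_ifcc = calibrate(routine, ifcc)

# Recalibrated healthy-group results, then percentile-based reference limits
healthy = to_ifcc([15, 17, 20, 24, 28, 35, 41, 48])
lo, hi = np.percentile(healthy, [2.5, 97.5])
print(f"reference limits: {lo:.0f}-{hi:.0f} U/L")
```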
Method for adjusting warp measurements to a different board dimension
William T. Simpson; John R. Shelly
2000-01-01
Warp in lumber is a common problem that occurs while lumber is being dried. In research or other testing programs, it is sometimes necessary to compare warp of different species or warp caused by different process variables. If lumber dimensions are not the same, then direct comparisons are not possible, and adjusting warp to a common dimension would be desirable so...
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold; Cieza, Alarcos; Üstün, Tevfik Bedirhan
2016-10-01
Our aim was to specify the requirements of an architecture to serve as the foundation for standardized reporting of health information and to provide an exemplary application of this architecture. The World Health Organization's International Classification of Functioning, Disability and Health (ICF) served as the conceptual framework. Methods to establish content comparability were the ICF Linking Rules. The Rasch measurement model, as a special case of additive conjoint measurement, which satisfies the required criteria for fundamental measurement, allowed for the development of a common metric foundation for measurement unit conversion. Secondary analysis of data from the North Yorkshire Survey was used to illustrate these methods. Patients completed three instruments and the items were linked to the ICF. The Rasch measurement model was applied, first to each scale, and then to items across scales which were linked to a common domain. Based on the linking of items to the ICF, the majority of items were grouped into two domains, Mobility and Self-care. Analysis of the individual scales and of items linked to a common domain across scales satisfied the requirements of the Rasch measurement model. The measurement unit conversion between items from the three instruments linked to the Mobility and Self-care domains, respectively, was demonstrated. The realization of an ICF-based architecture for information on patients' functioning enables harmonization of health information while allowing clinicians and researchers to continue using their existing instruments. This architecture will facilitate access to comprehensive and consistently reported health information to serve as the foundation for informed decision-making.
Knipling, Edward B.; Kramer, Paul J.
1967-01-01
The dye method for measuring water potential was examined and compared with the thermocouple psychrometer method in order to evaluate its usefulness for measuring leaf water potentials of forest trees and common laboratory plants. Psychrometer measurements are assumed to represent the true leaf water potentials. Because of the contamination of test solutions by cell sap and leaf surface residues, dye method values of most species varied about 1 to 5 bars from psychrometer values over the leaf water potential range of 0 to −30 bars. The dye method is useful for measuring changes and relative values in leaf potential. Because of species differences in the relationships of dye method values to true leaf water potentials, dye method values should be interpreted with caution when comparing different species or the same species growing in widely different environments. Despite its limitations the dye method has a usefulness to many workers because it is simple, requires no elaborate equipment, and can be used in both the laboratory and field. PMID:16656657
A multi-phase instrument comparison study was conducted on two different diesel engines on a dynamometer to compare commonly used particulate matter (PM) measurement techniques while sampling the same diesel exhaust aerosol and to evaluate inter- and intra-method variability. In...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kurnik, Charles W; Violette, Daniel M.; Rathbun, Pamela
This chapter focuses on the methods used to estimate net energy savings in evaluation, measurement, and verification (EM and V) studies for energy efficiency (EE) programs. The chapter provides a definition of net savings, which remains an unsettled topic both within the EE evaluation community and across the broader public policy evaluation community, particularly in the context of attribution of savings to a program. The chapter differs from the measure-specific Uniform Methods Project (UMP) chapters in both its approach and work product. Unlike other UMP resources that provide recommended protocols for determining gross energy savings, this chapter describes and compares the current industry practices for determining net energy savings but does not prescribe methods.
ERIC Educational Resources Information Center
Mergler, S.; Lobker, B.; Evenhuis, H. M.; Penning, C.
2010-01-01
Low bone mineral density (BMD) and fractures are common in people with intellectual disabilities (ID). Reduced mobility in case of motor impairment and the use of anti-epileptic drugs contribute to the development of low BMD. Quantitative ultrasound (QUS) measurement of the heel bone is a non-invasive and radiation-free method for measuring bone…
USDA-ARS?s Scientific Manuscript database
Storage bags are common in Africa, Asia and many other less developed countries therefore a grain probing method is well-suited for moisture content (MC) measurement. A low cost meter was developed as part of a USAID project to reduce the post-harvest loss (PHL). The meter measures the MC of maize a...
Proposal of a method for the evaluation of inaccuracy of home sphygmomanometers.
Akpolat, Tekin
2009-10-01
There is no formal protocol for evaluating the individual accuracy of home sphygmomanometers. The aims of this study were to propose a method for achieving accuracy in automated home sphygmomanometers and to test the applicability of the defined method. The purposes of this method were to avoid major inaccuracies and to estimate the optimal circumstance for individual accuracy. The method has three stages and sequential measurement of blood pressure is used. The tested devices were categorized into four groups: accurate, acceptable, inaccurate and very inaccurate (major inaccuracy). The defined method takes approximately 10 min (excluding relaxation time) and was tested on three different occasions. The application of the method has shown that inaccuracy is a common problem among non-tested devices, that validated devices are superior to those that are non-validated or whose validation status is unknown, that major inaccuracy is common, especially in non-tested devices and that validation does not guarantee individual accuracy. A protocol addressing the accuracy of a particular sphygmomanometer in an individual patient is required, and a practical method has been suggested to achieve this. This method can be modified, but the main idea and approach should be preserved unless a better method is proposed. The purchase of validated devices and evaluation of accuracy for the purchased device in an individual patient will improve the monitoring of self-measurement of blood pressure at home. This study addresses device inaccuracy, but errors related to the patient, observer or blood pressure measurement technique should not be underestimated, and strict adherence to the manufacturer's instructions is essential.
On the Quantification of Cellular Velocity Fields.
Vig, Dhruv K; Hamby, Alex E; Wolgemuth, Charles W
2016-04-12
The application of flow visualization in biological systems is becoming increasingly common in studies ranging from intracellular transport to the movements of whole organisms. In cell biology, the standard method for measuring cell-scale flows and/or displacements has been particle image velocimetry (PIV); however, alternative methods exist, such as optical flow constraint. Here we review PIV and optical flow, focusing on the accuracy and efficiency of these methods in the context of cellular biophysics. Although optical flow is not as common, a relatively simple implementation of this method can outperform PIV and is easily augmented to extract additional biophysical/chemical information such as local vorticity or net polymerization rates from speckle microscopy.
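The optical flow constraint mentioned above solves I_x u + I_y v + I_t = 0 over a neighborhood. A minimal least-squares (Lucas-Kanade style) patch implementation is sketched below as a generic illustration, not the augmented method the review discusses:

```python
import numpy as np

def patch_flow(frame0, frame1):
    """Least-squares solution of the optical flow constraint on one image patch."""
    ix = np.gradient(frame0, axis=1).ravel()   # spatial gradient in x
    iy = np.gradient(frame0, axis=0).ravel()   # spatial gradient in y
    it = (frame1 - frame0).ravel()             # temporal derivative
    a = np.stack([ix, iy], axis=1)
    (u, v), *_ = np.linalg.lstsq(a, -it, rcond=None)
    return u, v

# Synthetic test: a smooth intensity pattern shifted by ~1 pixel in x
y, x = np.mgrid[0:32, 0:32].astype(float)
f0 = np.sin(0.3 * x) + np.cos(0.2 * y)
f1 = np.sin(0.3 * (x - 1.0)) + np.cos(0.2 * y)
print(patch_flow(f0, f1))   # expect u close to 1, v close to 0
```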
Comparison of three methods for evaluation of work postures in a truck assembly plant.
Zare, Mohsen; Biau, Sophie; Brunet, Rene; Roquelaure, Yves
2017-11-01
This study compared the results of three risk assessment tools (self-reported questionnaire, observational tool, direct measurement method) for the upper limbs and back in a truck assembly plant at two cycle times (11 and 8 min). The weighted Kappa factor showed fair agreement between the observational and direct measurement methods for the arm (0.39) and back (0.47). The weighted Kappa factor for these methods was poor for the neck (0) and wrist (0), but the observed proportional agreement (Po) was 0.78 for the neck and 0.83 for the wrist. The weighted Kappa factor between questionnaire and direct measurement showed poor or slight agreement (0) for different body segments in both cycle times. The results revealed moderate agreement between the observational tool and the direct measurement method, and poor agreement between the self-reported questionnaire and direct measurement. Practitioner Summary: This study provides risk exposure measurement by different common ergonomic methods in the field. The results help to develop valid measurements and improve exposure evaluation. Hence, ergonomists/practitioners should apply the methods with caution, or at least know what the issues/errors are.
Incorporating Measurement Error from Modeled Air Pollution Exposures into Epidemiological Analyses.
Samoli, Evangelia; Butland, Barbara K
2017-12-01
Outdoor air pollution exposures used in epidemiological studies are commonly predicted from spatiotemporal models incorporating limited measurements, temporal factors, geographic information system variables, and/or satellite data. Measurement error in these exposure estimates leads to imprecise estimation of health effects and their standard errors. We reviewed methods for measurement error correction that have been applied in epidemiological studies that use model-derived air pollution data. We identified seven cohort studies and one panel study that have employed measurement error correction methods. These methods included regression calibration, risk set regression calibration, regression calibration with instrumental variables, the simulation extrapolation approach (SIMEX), and methods under the non-parametric or parametric bootstrap. Corrections resulted in small increases in the absolute magnitude of the health effect estimate and its standard error under most scenarios. Limited application of measurement error correction methods in air pollution studies may be attributed to the absence of exposure validation data and the methodological complexity of the proposed methods. Future epidemiological studies should consider in their design phase the requirements for the measurement error correction method to be applied later, while methodological advances are needed for the multi-pollutant setting.
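Of the corrections listed, regression calibration is the simplest to sketch: in a validation subset where both the model-derived exposure W and a gold-standard exposure X are available, fit E[X|W] and substitute the calibrated exposure into the health model. The data and the linear model form below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_val = 2000, 200

x = rng.normal(10, 3, n)            # true exposure (unobserved in the main study)
w = x + rng.normal(0, 2, n)         # error-prone modeled exposure
y = 0.5 * x + rng.normal(0, 1, n)   # health outcome, true slope 0.5

# Step 1: calibration model E[X|W] fitted in the validation subset
b1, b0 = np.polyfit(w[:n_val], x[:n_val], 1)
x_hat = b0 + b1 * w                 # calibrated exposure for all subjects

# Step 2: health model with naive vs. calibrated exposure
naive = np.polyfit(w, y, 1)[0]      # attenuated by measurement error
corrected = np.polyfit(x_hat, y, 1)[0]
print(f"naive slope {naive:.3f}, regression-calibrated slope {corrected:.3f} (truth 0.5)")
```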
Saub, R; Locker, D; Allison, P
2008-09-01
To compare two methods of developing short forms of the Malaysian Oral Health Impact Profile (OHIP-M) measure. Cross-sectional data obtained using the long form of the OHIP-M were used to produce two types of OHIP-M short forms, derived using two different methods, namely regression and item frequency methods. The short version derived using the regression method is known as Reg-SOHIP(M) and that derived using the frequency method is known as Freq-SOHIP(M). Both short forms contained 14 items. These two forms were then compared in terms of their content, scores, reliability, validity and ability to distinguish between groups. Out of 14 items, only four were in common. The form derived from the frequency method contained more high-prevalence items and higher scores than the form derived from the regression method. Both methods produced a reliable and valid measure. However, the frequency method produced a measure that was slightly better in terms of distinguishing between groups. Regardless of the method used to produce the measures, both forms performed equally well when tested for their cross-sectional psychometric properties.
Rubber Balloons, Buoyancy and the Weight of Air: A Look Inside
ERIC Educational Resources Information Center
Calza, G.; Gratton, L. M.; Lopez-Arias, T.; Oss, S.
2009-01-01
We discuss three methods of measuring the density of air most commonly used in a teaching context. Emphasis is put on the advantages and/or difficulties of each method. In particular, we show that the 'rubber balloon' method can still be performed with meaningful physical insight, but it requires a very careful approach. (Contains 4 figures and 3…
Heart rate measurement based on face video sequence
NASA Astrophysics Data System (ADS)
Xu, Fang; Zhou, Qin-Wu; Wu, Peng; Chen, Xing; Yang, Xiaofeng; Yan, Hong-jian
2015-03-01
This paper proposes a new non-contact heart rate measurement method based on photoplethysmography (PPG) theory. With this method we can measure heart rate remotely with a camera and ambient light. We collected video sequences of subjects and detected remote PPG signals through the video sequences. The remote PPG signals were analyzed with two methods, Blind Source Separation Technology (BSST) and Cross Spectral Power Technology (CSPT). BSST is a commonly used method, while CSPT is used for the first time in the study of remote PPG signals in this paper. Both methods can acquire heart rate, but compared with BSST, CSPT has a clearer physical meaning, and the computational complexity of CSPT is lower than that of BSST. Our work shows that heart rates detected by the CSPT method have good consistency with the heart rates measured by a finger clip oximeter. With good accuracy and low computational complexity, the CSPT method has good prospects for application in the fields of home medical devices and mobile health devices.
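The abstract does not spell out the CSPT algorithm, so the following is a hedged guess at the core step: take the cross-spectral density of two skin-region intensity traces and read the heart rate off the dominant peak in the physiological band. All signal parameters are invented for illustration:

```python
import numpy as np
from scipy.signal import csd

fs = 30.0                              # assumed camera frame rate (Hz)
t = np.arange(0, 30, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t)    # synthetic 1.2 Hz (72 bpm) PPG component

# Two face-region intensity traces sharing the pulse plus independent noise
rng = np.random.default_rng(2)
s1 = pulse + 0.8 * rng.normal(size=t.size)
s2 = pulse + 0.8 * rng.normal(size=t.size)

# Cross-spectral density concentrates the shared (pulse) component
f, pxy = csd(s1, s2, fs=fs, nperseg=256)
band = (f > 0.7) & (f < 4.0)           # 42-240 bpm physiological band
hr = 60.0 * f[band][np.argmax(np.abs(pxy[band]))]
print(f"estimated heart rate: {hr:.0f} bpm")
```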
Campbell, David J T; Tam-Tham, Helen; Dhaliwal, Kirnvir K; Manns, Braden J; Hemmelgarn, Brenda R; Sanmartin, Claudia; King-Shier, Kathryn
2017-01-01
Mixed methods research, the use of both qualitative and quantitative methods within 1 program of study, is becoming increasingly popular to allow investigators to explore patient experiences (qualitative) and also measure outcomes (quantitative). Coronary artery disease and its risk factors are some of the most studied conditions; however, the extent to which mixed methods studies are being conducted in these content areas is unknown. We sought to comprehensively describe the characteristics of published mixed methods studies on coronary artery disease and major risk factors (diabetes mellitus and hypertension). We conducted a scoping review of the literature indexed in PubMed, Medline, EMBASE, and CINAHL. We identified 811 abstracts for screening, of which 254 articles underwent full-text review and 97 reports of 81 studies met criteria for inclusion. The majority of studies in this area were conducted in the past 10 years by nurse researchers from the United States and United Kingdom. Diabetes mellitus was the most common content area for mixed methods investigation (compared with coronary artery disease and hypertension). Most authors described their rationale for using mixed methods as complementarity and did not describe study priority or how they reconciled differences in methodological paradigms. Some mixed methods study designs were more commonly used than others, including concurrent timing and integration at the interpretation stage. Qualitative strands were most commonly descriptive studies using interviews for data collection. Quantitative strands were most commonly cross-sectional observational studies, which relied heavily on self-report data such as surveys and scales. Although mixed methods research is becoming increasingly popular in the area of coronary artery disease and its risk factors, many of the more advanced mixed methods, qualitative, and quantitative techniques have not been commonly used in these areas.
Xu, Xiaobin; Li, Zhenghui; Li, Guo; Zhou, Zhe
2017-04-21
Estimating the state of a dynamic system from noisy sensor measurements is a common problem in sensor methods and applications. Most state estimation methods assume that measurement noise and state perturbations can be modeled as random variables with known statistical properties. In some practical applications, however, engineers can only obtain the range of the noises rather than their precise statistical distributions. Hence, within the framework of Dempster-Shafer (DS) evidence theory, a novel state estimation method is presented that fuses dependent evidence generated from the state equation, the observation equation, and the actual observations of the system states under bounded noises. It can be implemented iteratively to provide state estimates calculated from the fusion results at every time step. Finally, the proposed method is applied to a low-frequency acoustic resonance level gauge and obtains high-accuracy measurement results.
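For readers unfamiliar with DS fusion, the classical Dempster rule for combining two mass functions over a common frame looks like this; note that the paper's combination rule for dependent evidence necessarily differs from this independent-source rule, and the masses below are made up:

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to mass)."""
    combined, conflict = {}, 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y                # mass assigned to disjoint hypotheses
    k = 1.0 - conflict                       # renormalize over non-conflicting mass
    return {h: v / k for h, v in combined.items()}

low, high = frozenset({"low"}), frozenset({"high"})
both = low | high                            # ignorance: mass on the whole frame
m_pred = {low: 0.6, high: 0.1, both: 0.3}    # e.g., evidence from the state equation
m_obs  = {low: 0.7, high: 0.2, both: 0.1}    # e.g., evidence from the observation
print(dempster(m_pred, m_obs))
```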
Exploring a taxonomy for aggression against women: can it aid conceptual clarity?
Cook, Sarah; Parrott, Dominic
2009-01-01
The assessment of aggression against women is demanding primarily because assessment strategies do not share a common language to describe reliably the wide range of forms of aggression women experience. The lack of a common language impairs efforts to describe these experiences, understand causes and consequences of aggression against women, and develop effective intervention and prevention efforts. This review accomplishes two goals. First, it applies a theoretically and empirically based taxonomy to behaviors assessed by existing measurement instruments. Second, it evaluates whether the taxonomy provides a common language for the field. Strengths of the taxonomy include its ability to describe and categorize all forms of aggression found in existing quantitative measures. The taxonomy also classifies numerous examples of aggression discussed in the literature but notably absent from quantitative measures. Although we use existing quantitative measures as a starting place to evaluate the taxonomy, its use is not limited to quantitative methods. Implications for theory, research, and practice are discussed.
Comparing 3D foot scanning with conventional measurement methods.
Lee, Yu-Chi; Lin, Gloria; Wang, Mao-Jiun J
2014-01-01
Foot dimension information on different user groups is important for footwear design and clinical applications. Foot dimension data collected using different measurement methods present accuracy problems. This study compared the precision and accuracy of the 3D foot scanning method with conventional foot dimension measurement methods, including the digital caliper, ink footprint and digital footprint. Six commonly used foot dimensions, i.e. foot length, ball of foot length, outside ball of foot length, foot breadth diagonal, foot breadth horizontal and heel breadth, were measured on 130 males and females using four foot measurement methods. Two-way ANOVA was performed to evaluate the sex and method effects on the measured foot dimensions. In addition, mean absolute difference values and intra-class correlation coefficients (ICCs) were used for precision and accuracy evaluation. The results were also compared with the ISO 20685 criteria. The participant's sex and the measurement method were found (p < 0.05) to exert significant effects on the six measured foot dimensions. The precision of the 3D scanning measurement method, with mean absolute difference values between 0.73 and 1.50 mm, was the best among the four measurement methods. The 3D scanning measurements also showed better accuracy than the other methods (mean absolute difference 0.6 to 4.3 mm), except for measuring outside ball of foot length and foot breadth horizontal. The ICCs for all six foot dimension measurements among the four measurement methods were within the 0.61 to 0.98 range. Overall, the 3D foot scanner is recommended for collecting foot anthropometric data because it has relatively higher precision, accuracy and robustness. This finding suggests that when comparing foot anthropometric data among different references, it is important to consider the differences caused by the different measurement methods.
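The precision statistic used above reduces to a simple computation. A sketch with invented repeated measurements, checking the mean absolute difference against an allowable limit in the spirit of ISO 20685 (the 4 mm limit is a placeholder assumption, not a value quoted from the standard):

```python
import numpy as np

def mean_absolute_difference(method, reference):
    return np.mean(np.abs(np.asarray(method) - np.asarray(reference)))

caliper = [251.2, 250.8, 251.5]   # foot length (mm), repeated digital caliper reads
scanner = [250.9, 251.1, 251.0]   # same foot, 3D scanning method

mad = mean_absolute_difference(scanner, caliper)
limit_mm = 4.0                    # assumed allowable limit, for illustration only
print(f"MAD = {mad:.2f} mm ->", "acceptable" if mad <= limit_mm else "re-examine method")
```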
Clinical review: improving the measurement of serum thyroglobulin with mass spectrometry.
Hoofnagle, Andrew N; Roth, Mara Y
2013-04-01
Serum thyroglobulin (Tg) measurements are central to the management of patients treated for differentiated thyroid carcinoma. For decades, Tg measurements have relied on methods that are subject to interference by commonly found substances in human serum and plasma, such as Tg autoantibodies. As a result, many patients need additional imaging studies to rule out cancer persistence or recurrence that could be avoided with more sensitive and specific testing methods. The aims of this review are to: 1) briefly review the interferences common to Tg immunoassays; 2) introduce readers to liquid chromatography-tandem mass spectrometry as a method for quantifying proteins in human serum/plasma; and 3) discuss the potential benefits and limitations of the method in the quantification of serum Tg. Mass spectrometric methods have traditionally lacked the sensitivity, robustness, and throughput to be useful clinical assays. These methods failed to meet the necessary clinical benchmarks due to the nature of the mass spectrometry workflow and instrumentation. Over the past few years, there have been major advances in reagents, automation, and instrumentation for the quantification of proteins using mass spectrometry. More recently, methods using mass spectrometry to detect and quantify Tg have been developed and are of sufficient quality to be used in the management of patients. Novel serum Tg assays that use mass spectrometry may avoid the issue of autoantibody interference and other problems with currently available immunoassays for Tg. Prospective studies are needed to fully understand the potential benefits of novel Tg assays to patients and care providers.
Recent research in data description of the measurement property resource on common data dictionary
NASA Astrophysics Data System (ADS)
Lu, Tielin; Fan, Zitian; Wang, Chunxi; Liu, Xiaojing; Wang, Shuo; Zhao, Hua
2018-03-01
A method for describing measurement equipment data is proposed based on an analysis of property resources. The application of the common data dictionary (CDD) to devices and equipment is mainly intended for the digital factory, to improve data management not only within a single enterprise but also across different enterprises sharing the same data environment. In this paper, we give a brief overview of the data flow across the whole manufacturing enterprise and of the automatic triggering of data-exchange processes. Furthermore, the data dictionary is applicable to measurement and control equipment and can also be used in other industries in smart manufacturing.
Caplan, Susan
2016-08-01
In order to understand the effects of interventions designed to reduce stigma about mental illness, we need valid measures. However, the validity of commonly used measures is compromised by social desirability bias. The purpose of this pilot study was to test an anonymous method of measuring stigma in the community setting. The method of data collection, Preguntas con Cartas (Questions with Cards), used numbered playing cards to conduct anonymous group polling about stigmatizing beliefs during a mental health literacy intervention. An analysis of the difference between Preguntas con Cartas stigma votes and corresponding face-to-face individual survey results for the same seven stigma questions indicated that there was a statistically significant difference in the distributions between the two methods of data collection (χ(2) = 8.27, p = 0.016). This exploratory study has shown the potential effectiveness of Preguntas con Cartas as a novel method of measuring stigma in the community-based setting.
NASA Technical Reports Server (NTRS)
Hoell, J. M.; Gregory, G. L.; Carroll, M. A.; Mcfarland, M.; Ridley, B. A.; Davis, D. D.; Bradshaw, J.; Rodgers, M. O.; Torres, A. L.; Condon, E. P.
1984-01-01
Results from an intercomparison of methods to measure carbon monoxide (CO), nitric oxide (NO), and the hydroxyl radical (OH) are discussed. The intercomparison was conducted at Wallops Island, Virginia, in July 1983 and included a laser differential absorption method and three grab sample/gas chromatograph methods for CO, a laser-induced fluorescence (LIF) method and two chemiluminescence methods for NO, and two LIF methods and a radiocarbon tracer method for OH. The intercomparison was conducted as a field measurement program involving ambient measurements of CO (150-300 ppbv) and NO (10-180 pptv) from a common manifold with controlled injection of CO in incremental steps from 20 to 500 ppbv and NO in steps from 10 to 220 pptv. Only ambient measurements of OH were made. The agreement between the techniques was on the order of 14 percent for CO and 17 percent for NO. Hardware difficulties during the OH tests resulted in a database with insufficient data and uncertainties too large to permit a meaningful intercomparison.
Bialas, Andrzej
2010-01-01
The paper discusses the security issues of intelligent sensors that are able to measure and process data and communicate with other information technology (IT) devices or systems. Such sensors are often used in high-risk applications. To improve their robustness, the sensor systems should be developed in a restricted way to provide them with assurance. One such assurance-creation methodology is the Common Criteria (ISO/IEC 15408), used for IT products and systems. The contribution of the paper is a Common Criteria compliant and pattern-based method for intelligent sensor security development. The paper concisely presents this method and its evaluation for a sensor detecting methane in a mine, focusing on the security problem definition and solution for the intelligent sensor. The aim of the validation is to evaluate and improve the introduced method. PMID:22399888
NASA Astrophysics Data System (ADS)
McJannet, D. L.; Cook, F. J.; McGloin, R. P.; McGowan, H. A.; Burn, S.
2011-05-01
The use of scintillometers to determine sensible and latent heat flux is becoming increasingly common because of their ability to quantify convective fluxes over distances of hundreds of meters to several kilometers. The majority of investigations using scintillometry have focused on processes above land surfaces, but here we propose a new methodology for obtaining sensible and latent heat fluxes from a scintillometer deployed over open water. This methodology has been tested by comparison with eddy covariance measurements and through comparison with alternative scintillometer calculation approaches that are commonly used in the literature. The methodology is based on linearization of the Bowen ratio, which is a common assumption in models such as Penman's model and its derivatives. Comparison of latent heat flux estimates from the eddy covariance system and the scintillometer showed excellent agreement across a range of weather conditions and flux rates, giving a high level of confidence in scintillometry-derived latent heat fluxes. The proposed approach produced better estimates than other scintillometry calculation methods because of the reliance of alternative methods on measurements of water temperature or water body heat storage, which are both notoriously hard to quantify. The proposed methodology requires less instrumentation than alternative scintillometer calculation approaches, and the spatial scales of required measurements are arguably more compatible. In addition to scintillometer measurements of the structure parameter of the refractive index of air, the only measurements required are atmospheric pressure, air temperature, humidity, and wind speed at one height over the water body.
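The energy-balance idea behind a linearized Bowen ratio can be made concrete with a generic Bowen-ratio partition of available energy. This is only the textbook form, shown to fix ideas, not the authors' scintillometer-specific formulation:

```python
def bowen_partition(rn, g, t_surf, t_air, e_surf, e_air, gamma=0.066):
    """Split available energy Rn - G into sensible (H) and latent (LE) heat
    using the Bowen ratio beta = gamma * dT / de (gamma in kPa/degC)."""
    beta = gamma * (t_surf - t_air) / (e_surf - e_air)
    le = (rn - g) / (1.0 + beta)
    return (rn - g) - le, le          # H and LE, in the units of rn

# Placeholder values: W/m^2 for energy terms, degC for temperatures, kPa for vapor pressures
h, le = bowen_partition(rn=400.0, g=30.0, t_surf=22.0, t_air=19.0,
                        e_surf=2.6, e_air=1.8)
print(f"H = {h:.0f} W/m^2, LE = {le:.0f} W/m^2")
```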
Podsakoff, Nathan P; Whiting, Steven W; Welsh, David T; Mai, Ke Michael
2013-09-01
Despite the increased attention paid to biases attributable to common method variance (CMV) over the past 50 years, researchers have only recently begun to systematically examine the effect of specific sources of CMV in previously published empirical studies. Our study contributes to this research by examining the extent to which common rater, item, and measurement context characteristics bias the relationships between organizational citizenship behaviors and performance evaluations using a mixed-effects analytic technique. Results from 173 correlations reported in 81 empirical studies (N = 31,146) indicate that even after controlling for study-level factors, common rater and anchor point number similarity substantially biased the focal correlations. Indeed, these sources of CMV (a) led to estimates that were between 60% and 96% larger when comparing measures obtained from a common rater, versus different raters; (b) led to 39% larger estimates when a common source rated the scales using the same number, versus a different number, of anchor points; and (c) when taken together with other study-level predictors, accounted for over half of the between-study variance in the focal correlations. We discuss the implications for researchers and practitioners and provide recommendations for future research.
Optical microtopographic inspection of the surface of tooth subjected to stripping reduction
NASA Astrophysics Data System (ADS)
Costa, Manuel F.; Pereira, Pedro B.
2011-05-01
In orthodontics, decreasing tooth size by reducing interproximal enamel surfaces (stripping) is a common procedure that allows dental alignment with minimal changes in the facial profile and no arch expansion. In order to achieve smooth surfaces, clinicians have been testing various methods and have progressively improved this therapeutic technique. To evaluate the surface roughness of teeth subjected to interproximal reduction by the five most commonly used methods, teeth were inspected by scanning electron microscopy and measured microtopographically using the optical active-triangulation-based microtopographer MICROTOP.06.MFC. The metrological procedure is presented, together with comparative results identifying the most suitable interproximal reduction method.
Methods of blood flow measurement in the arterial circulatory system.
Tabrizchi, R; Pugsley, M K
2000-01-01
The most commonly employed techniques for the in vivo measurement of arterial blood flow to individual organs involve the use of flow probes or sensors. Commercially available systems for the measurement of in vivo blood flow can be divided into two categories: ultrasonic and electromagnetic. Two types of ultrasonic probes are used. The first type of flow probe measures blood flow-mediated Doppler shifts (Doppler flowmetry) in a vessel. The second type of flow probe measures the "transit time" required by an emitted ultrasound wave to traverse the vessel; these are transit-time volume flow sensors. Measurement of blood flow in any vessel requires that the flow probe or sensor be highly accurate and exhibit signal linearity over the flow range in the vessel of interest. Moreover, additional desirable features include compact design, size, and weight. An additional important feature for flow probes is that they exhibit good biocompatibility; it is imperative for the sensor to behave in an inert manner towards the biological system. A sensitive and reliable method to assess blood flow in individual organs in the body, other than by the use of probes/sensors, is the reference sample method that utilizes hematogeneously delivered microspheres. This method has been utilized to a large extent to assess regional blood flow in the entire body. Obviously, the purpose of measuring blood flow is to determine the amount of blood delivered to a given region per unit time (milliliters per minute), and it is desirable to achieve this goal by noninvasive methodologies. This, however, is not always possible. This review attempts to offer an overview of some of the techniques available for the assessment of regional blood flow in the arterial circulatory system and discusses the advantages and disadvantages of these common techniques.
Interpreting spectral unmixing coefficients: From spectral weights to mass fractions
NASA Astrophysics Data System (ADS)
Grumpe, Arne; Mengewein, Natascha; Rommel, Daniela; Mall, Urs; Wöhler, Christian
2018-01-01
It is well known that many common planetary minerals exhibit prominent absorption features. Consequently, the analysis of spectral reflectance measurements has become a major tool of remote sensing. Quantifying the mineral abundances, however, is not a trivial task. The interaction between the incident light rays and particulate surfaces, e.g., the lunar regolith, leads to a non-linear relationship between the reflectance spectra of the pure minerals, the so-called 'endmembers', and the surface's reflectance spectrum. It is, however, possible to transform the non-linear reflectance mixture into a linear mixture of single-scattering albedos of the Hapke model. The abundances obtained by inverting the linear single-scattering albedo mixture may be interpreted as volume fractions which are weighted by the endmember's extinction coefficient. Commonly, identical extinction coefficients are assumed throughout all endmembers and the obtained volume fractions are converted to mass fractions using either measured or assumed densities. In theory, the proposed method may cover different grain sizes if each grain size range of a mineral is treated as a distinct endmember. Here, we present a method to transform the mixing coefficients to mass fractions for arbitrary combinations of extinction coefficients and densities. The required parameters are computed from reflectance measurements of well defined endmember mixtures. Consequently, additional measurements, e.g., the endmember density, are no longer required. We evaluate the method based on laboratory measurements and various results presented in the literature, respectively. It is shown that the procedure transforms the mixing coefficients to mass fractions yielding an accuracy comparable to carefully calibrated laboratory measurements without additional knowledge. For our laboratory measurements, the square root of the mean squared error is less than 4.82 wt%. In addition, the method corrects for systematic effects originating from mixtures of endmembers showing a highly varying albedo, e.g., plagioclase and pyroxene.
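The conversion described can be written compactly: if the linear single-scattering-albedo mixing coefficients f_i are volume fractions weighted by extinction coefficients k_i, relative volume fractions go as f_i/k_i, and mass fractions follow from the densities. The endmember values below are placeholders, not the paper's measurements:

```python
import numpy as np

def mass_fractions(f, k, rho):
    """Convert extinction-weighted mixing coefficients to mass fractions."""
    f, k, rho = map(np.asarray, (f, k, rho))
    vol = f / k            # un-normalized volume fractions
    mass = vol * rho       # un-normalized masses
    return mass / mass.sum()

# Placeholder endmembers (illustrative only): plagioclase, pyroxene
f   = [0.55, 0.45]         # linear single-scattering-albedo mixing coefficients
k   = [1.0, 1.6]           # relative extinction coefficients
rho = [2.69, 3.30]         # densities (g/cm^3), typical literature values
print(mass_fractions(f, k, rho))   # weight fractions summing to 1
```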
Hawkins, Liam J; Storey, Kenneth B
2017-01-01
Common Western-blot imaging systems have previously been adapted to measure signals from luminescent microplate assays. This can be a cost-saving measure, as Western-blot imaging systems are common laboratory equipment and could substitute for a dedicated luminometer if one is not otherwise available. One previously unrecognized limitation is that the signals captured by the cameras in these systems are not equal for all wells. Signals are dependent on the angle of incidence to the camera, and thus on the location of the well on the microplate. Here we show that: (1) the position of a well on a microplate significantly affects the signal captured by a common Western-blot imaging system from a luminescent assay; (2) the effect of well position can easily be corrected for; and (3) this method can be applied to commercially available luminescent assays, allowing for high-throughput quantification of a wide range of biological processes and biochemical reactions.
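The correction can be as simple as normalizing by a flat-field read: measure a plate in which every well holds the same luminescent standard, derive a per-well factor, and apply it to later reads. A sketch, with the flat-field data invented for illustration:

```python
import numpy as np

def well_correction_factors(flatfield):
    """Per-well factors from a plate read where all wells hold the same standard."""
    flatfield = np.asarray(flatfield, float)
    return flatfield.mean() / flatfield   # >1 where the camera under-reads a well

# 2x3 toy plate: edge wells appear dimmer due to the camera angle of incidence
flat = [[ 9.0, 10.0,  9.1],
        [ 9.9, 11.0, 10.1]]
factors = well_correction_factors(flat)

assay = np.array([[90., 100., 91.],
                  [99., 110., 101.]])     # a later assay read
print(assay * factors)                    # position-corrected signals
```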
Methodological Choices in Peer Nomination Research
ERIC Educational Resources Information Center
Cillessen, Antonius H. N.; Marks, Peter E. L.
2017-01-01
Although peer nomination measures have been used by researchers for nearly a century, common methodological practices and rules of thumb (e.g., which variables to measure; use of limited vs. unlimited nomination methods) have continued to develop in recent decades. At the same time, other key aspects of the basic nomination procedure (e.g.,…
The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data re...
ERIC Educational Resources Information Center
Shriberg, Michael
2002-01-01
This paper analyzes recent efforts to measure sustainability in higher education across institutions. The benefits of cross-institutional assessments include: identifying and benchmarking leaders and best practices; communicating common goals, experiences, and methods; and providing a directional tool to measure progress toward the concept of a…
Fitting program for linear regressions according to Mahon (1996)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trappitsch, Reto G.
2018-01-09
This program takes the user's input data and fits a linear regression to it using the prescription presented by Mahon (1996). Compared to the commonly used York fit, this method has the correct prescription for measurement error propagation. This software should facilitate the proper fitting of measurements with a simple interface.
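Mahon's full prescription and error propagation are beyond a short sketch, but the structure of such a fit (iteratively reweighted least squares with effective variances combining errors in both coordinates and their correlation rho) can be shown. Treat this simplified iteration as illustrative, not as the program's algorithm:

```python
import numpy as np

def effective_variance_fit(x, y, sx, sy, rho=0.0, iters=50):
    """Iteratively reweighted straight-line fit with errors in both x and y.
    Simplified sketch; Mahon (1996) gives the full treatment and error propagation."""
    b = np.polyfit(x, y, 1)[0]            # start from ordinary least squares
    for _ in range(iters):
        # effective variance of the residual y - b*x, including correlation
        w = 1.0 / (sy**2 + b**2 * sx**2 - 2 * b * rho * sx * sy)
        xb, yb = np.average(x, weights=w), np.average(y, weights=w)
        u, v = x - xb, y - yb
        b = np.sum(w * u * v) / np.sum(w * u * u)
    return b, yb - b * xb                  # slope, intercept

x  = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y  = np.array([2.1, 3.9, 6.2, 7.8, 10.1])
sx = np.full(5, 0.1)
sy = np.full(5, 0.3)
print(effective_variance_fit(x, y, sx, sy))
```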
ERIC Educational Resources Information Center
Höhne, Jan Karem; Schlosser, Stephan; Krebs, Dagmar
2017-01-01
Measuring attitudes and opinions employing agree/disagree (A/D) questions is a common method in social research because it appears to be possible to measure different constructs with identical response scales. However, theoretical considerations suggest that A/D questions require a considerable cognitive processing. Item-specific (IS) questions,…
Quantification of protein carbonylation.
Wehr, Nancy B; Levine, Rodney L
2013-01-01
Protein carbonylation is the most commonly used measure of oxidative modification of proteins. It is most often measured spectrophotometrically or immunochemically by derivatizing proteins with the classical carbonyl reagent 2,4-dinitrophenylhydrazine (DNPH). We present protocols for the derivatization and quantification of protein carbonylation with these two methods, including a newly described dot blot with greatly increased sensitivity.
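The spectrophotometric readout reduces to Beer-Lambert arithmetic. The molar absorptivity of the DNP-hydrazone (about 22,000 M⁻¹ cm⁻¹ near 370 nm) is a commonly cited literature value, so the sketch below should be read with that assumption in mind:

```python
def carbonyl_nmol_per_mg(a370, protein_mg_per_ml, path_cm=1.0, eps=22_000.0):
    """Protein carbonyl content from the absorbance of the DNPH derivative.
    eps: molar absorptivity of the hydrazone (M^-1 cm^-1), literature value."""
    molar = a370 / (eps * path_cm)   # mol/L of carbonyl groups (Beer-Lambert)
    nmol_per_ml = molar * 1e6        # mol/L -> nmol/mL
    return nmol_per_ml / protein_mg_per_ml

print(carbonyl_nmol_per_mg(a370=0.35, protein_mg_per_ml=2.0))  # ~8 nmol/mg
```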
Copula Models for Sociology: Measures of Dependence and Probabilities for Joint Distributions
ERIC Educational Resources Information Center
Vuolo, Mike
2017-01-01
Often in sociology, researchers are confronted with nonnormal variables whose joint distribution they wish to explore. Yet, assumptions of common measures of dependence can fail or estimating such dependence is computationally intensive. This article presents the copula method for modeling the joint distribution of two random variables, including…
How to choose methods for lake greenhouse gas flux measurements?
NASA Astrophysics Data System (ADS)
Bastviken, David
2017-04-01
Lake greenhouse gas (GHG) fluxes are increasingly recognized as important for lake ecosystems as well as for large-scale carbon and GHG budgets. However, many of our flux estimates are uncertain, and it can be debated whether the presently available data are representative of the systems studied. Data are also very limited for some important flux pathways. Hence, many ongoing efforts try to better constrain fluxes and understand flux regulation. A fundamental challenge towards improved knowledge, and when starting new studies, is which methods to choose. A variety of approaches are used to measure aquatic GHG exchange, and data from different methods and methodological approaches have often been treated as equally valid to create large datasets for extrapolations and syntheses. However, data from different approaches may cover different flux pathways or spatio-temporal domains and are thus not always comparable. Method inter-comparisons and critical method evaluations addressing these issues are rare. Emerging efforts to organize systematic multi-lake monitoring networks for GHG fluxes lead to method choices that may set the foundation for decades of data generation and therefore require fundamental evaluation of different approaches. The method choices concern not only the equipment but also, for example, the overall measurement design and field approaches, the relevant spatial and temporal resolution for different flux components, and the accessory variables to measure. In addition, consideration is needed of how to design monitoring approaches that are affordable, suitable for widespread (global) use, and comparable across regions. Inspired by discussions with Prof. Dr. Cristian Blodau during the EGU General Assembly 2016, this presentation aims to (1) illustrate fundamental pros and cons of a number of common methods, (2) show how common methodological approaches originally adapted for other environments can be improved for lake flux measurements, (3) suggest how consideration of the spatio-temporal dimensions of flux variability can lead to more optimized approaches, and (4) highlight possibilities for efficient ways forward, including low-cost technologies that have potential for world-wide use.
Diagnostic methods to assess inspiratory and expiratory muscle strength*
Caruso, Pedro; de Albuquerque, André Luis Pereira; Santana, Pauliane Vieira; Cardenas, Leticia Zumpano; Ferreira, Jeferson George; Prina, Elena; Trevizan, Patrícia Fernandes; Pereira, Mayra Caleffi; Iamonti, Vinicius; Pletsch, Renata; Macchione, Marcelo Ceneviva; Carvalho, Carlos Roberto Ribeiro
2015-01-01
Impairment of (inspiratory and expiratory) respiratory muscles is a common clinical finding, not only in patients with neuromuscular disease but also in patients with primary disease of the lung parenchyma or airways. Although such impairment is common, its recognition is usually delayed because its signs and symptoms are nonspecific and late. This delayed recognition, or even the lack thereof, occurs because the diagnostic tests used in the assessment of respiratory muscle strength are not widely known and available. There are various methods of assessing respiratory muscle strength during the inspiratory and expiratory phases. These methods are divided into two categories: volitional tests (which require patient understanding and cooperation) and non-volitional tests. Volitional tests, such as those that measure maximal inspiratory and expiratory pressures, are the most commonly used because they are readily available. Non-volitional tests depend on magnetic stimulation of the phrenic nerve accompanied by the measurement of inspiratory mouth pressure, inspiratory esophageal pressure, or inspiratory transdiaphragmatic pressure. Another method that has come to be widely used is ultrasound imaging of the diaphragm. We believe that pulmonologists involved in the care of patients with respiratory diseases should be familiar with the tests used to assess respiratory muscle function. Therefore, the aim of the present article is to describe the advantages, disadvantages, procedures, and clinical applicability of the main tests used in the assessment of respiratory muscle strength. PMID:25972965
DOT National Transportation Integrated Search
2017-01-01
This report summarizes the variety of methods used to estimate and evaluate exposure to risk in pedestrian and bicyclist safety analyses. In the literature, the most common definition of risk was a measure of the probability of a crash to occur given...
Testing common stream sampling methods for broad-scale, long-term monitoring
Eric K. Archer; Brett B. Roper; Richard C. Henderson; Nick Bouwes; S. Chad Mellison; Jeffrey L. Kershner
2004-01-01
We evaluated sampling variability of stream habitat sampling methods used by the USDA Forest Service and the USDI Bureau of Land Management monitoring program for the upper Columbia River Basin. Three separate studies were conducted to describe the variability of individual measurement techniques, variability between crews, and temporal variation throughout the summer...
Three-dimensional head anthropometric analysis
NASA Astrophysics Data System (ADS)
Enciso, Reyes; Shaw, Alex M.; Neumann, Ulrich; Mah, James
2003-05-01
Currently, two-dimensional photographs are most commonly used to facilitate visualization, assessment and treatment of facial abnormalities in craniofacial care, but they are subject to errors because of perspective and projection and lack metric, 3-dimensional information. One can find in the literature a variety of methods to generate 3-dimensional facial images, such as laser scans, stereo-photogrammetry, infrared imaging and even CT; however, each of these methods contains inherent limitations, and as such no systems are in common clinical use. In this paper we will focus on the development of indirect 3-dimensional landmark location and measurement of facial soft tissue with light-based techniques. We will statistically evaluate and validate a current three-dimensional image-based face modeling technique using a plaster head model. We will also develop computer graphics tools for indirect anthropometric measurements in a three-dimensional head model (or polygonal mesh), including linear distances currently used in anthropometry. The measurements will be tested against a validated 3-dimensional digitizer (MicroScribe 3DX).
Predicting links based on knowledge dissemination in complex network
NASA Astrophysics Data System (ADS)
Zhou, Wen; Jia, Yifan
2017-04-01
Link prediction is the task of mining the missing links in networks or predicting the next vertex pair to be connected by a link. Many link prediction methods have been inspired by the evolutionary processes of networks. In this paper, a new mechanism for the formation of complex networks, called knowledge dissemination (KD), is proposed under the assumption that knowledge disseminates through the paths of a network. Accordingly, a new link prediction method, knowledge dissemination based link prediction (KDLP), is proposed to test KD. KDLP characterizes vertex similarity based on knowledge quantity (KQ), which measures the importance of a vertex through its H-index. Extensive numerical simulations on six real-world networks demonstrate that KDLP is a strong link prediction method which performs at a higher prediction accuracy than four well-known similarity measures, including common neighbors, the local path index, average commute time and the matrix forest index. Furthermore, based on the common conclusion that an excellent link prediction method reveals a good evolving mechanism, the experimental results suggest that KD is a plausible network evolving mechanism for the formation of complex networks.
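The paper's exact scoring function is not reproduced in this abstract, so the following is a loudly hypothetical sketch of the named ingredients: a vertex "knowledge quantity" computed here as the H-index of the neighbors' degrees, aggregated over common neighbors as a similarity score. Both choices are our guesses, not the published formula:

```python
import networkx as nx

def h_index(values):
    """Largest h such that at least h of the values are >= h."""
    values = sorted(values, reverse=True)
    return sum(1 for i, v in enumerate(values, start=1) if v >= i)

def kq(g, v):
    # Hypothetical KQ: H-index of the degrees of v's neighbors
    return h_index([g.degree(u) for u in g.neighbors(v)])

def kdlp_like_score(g, x, y):
    # Hypothetical aggregation: sum KQ over the common neighbors of x and y
    return sum(kq(g, z) for z in nx.common_neighbors(g, x, y))

g = nx.karate_club_graph()
print(kdlp_like_score(g, 0, 33))
```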
De Micco, Veronica; Ruel, Katia; Joseleau, Jean-Paul; Aronne, Giovanna
2010-08-01
During cell wall formation and degradation, it is possible to detect cellulose microfibrils assembled into thicker and thinner lamellar structures, respectively, following inverse parallel patterns. The aim of this study was to analyse such patterns of microfibril aggregation and cell wall delamination. The thickness of microfibrils and lamellae was measured on digital images of both growing and degrading cell walls viewed by means of transmission electron microscopy. To objectively detect, measure and classify microfibrils and lamellae into thickness classes, a method based on computerized image analysis combined with graphical and statistical methods was developed. The method allowed common classes of microfibrils and lamellae to be identified in cell walls from different origins. During both the formation and degradation of cell walls, preferential formation of structures with specific thicknesses was evident. The results obtained with the developed method allowed objective analysis of patterns of microfibril aggregation and revealed a trend of doubling/halving of lamellar structures during cell wall formation/degradation in materials of different origins that had undergone different treatments.
A Comparative Study of Distribution System Parameter Estimation Methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yannan; Williams, Tess L.; Gourisetti, Sri Nikhil Gup
2016-07-17
In this paper, we compare two parameter estimation methods for distribution systems: residual sensitivity analysis and state-vector augmentation with a Kalman filter. These two methods were originally proposed for transmission systems, and are still the most commonly used methods for parameter estimation. Distribution systems have much lower measurement redundancy than transmission systems. Therefore, estimating parameters is much more difficult. To increase the robustness of parameter estimation, the two methods are applied with combined measurement snapshots (measurement sets taken at different points in time), so that the redundancy for computing the parameter values is increased. The advantages and disadvantages of both methods are discussed. The results of this paper show that state-vector augmentation is a better approach for parameter estimation in distribution systems. Simulation studies are done on a modified version of the IEEE 13-Node Test Feeder with varying levels of measurement noise and non-zero error in the other system model parameters.
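As a generic illustration of the state-vector augmentation idea, the sketch below runs an extended Kalman filter whose state is augmented with an unknown model parameter, here the coefficient of a scalar AR(1) system. This is a toy stand-in, not the paper's distribution-system formulation; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Truth: x_{k+1} = a*x_k + w, measured z_k = x_k + v. Parameter a is unknown.
a_true, q, r = 0.95, 1e-3, 1e-2
x = 1.0
zs = []
for _ in range(500):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    zs.append(x + rng.normal(0, np.sqrt(r)))

# Augmented EKF: state s = [x, a], with 'a' modeled as a slow random walk.
s = np.array([zs[0], 0.8])          # initial guesses
P = np.diag([1.0, 0.1])
Q = np.diag([q, 1e-6])              # tiny process noise keeps 'a' adaptable
H = np.array([[1.0, 0.0]])

for z in zs[1:]:
    # Predict: f(s) = [a*x, a], Jacobian F = [[a, x], [0, 1]]
    F = np.array([[s[1], s[0]], [0.0, 1.0]])
    s = np.array([s[1] * s[0], s[1]])
    P = F @ P @ F.T + Q
    # Update with the scalar measurement
    S = H @ P @ H.T + r
    K = P @ H.T / S
    s = s + (K * (z - s[0])).ravel()
    P = (np.eye(2) - K @ H) @ P

print(f"estimated a = {s[1]:.3f} (true {a_true})")
```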
Liu, Xikai; Ma, Dong; Chen, Liang; Liu, Xiangdong
2018-01-01
Tuning the stiffness balance is crucial to full-band common-mode rejection for a superconducting gravity gradiometer (SGG). A reliable method to do so has been proposed and experimentally tested. In the tuning scheme, the frequency response functions of the displacement of each test mass to common-mode accelerations were measured, and from these a characteristic frequency was determined for each test mass. A reduced difference in characteristic frequencies between the two test masses was used as the criterion for an effective tuning. Since the measurement of the characteristic frequencies does not depend on the scale factors of displacement detection, stiffness tuning can be done independently. We have tested this new method on a single-component SGG and obtained a reduction of two orders of magnitude in stiffness mismatch. PMID:29419796
Extended linear detection range for optical tweezers using image-plane detection scheme
NASA Astrophysics Data System (ADS)
Hajizadeh, Faegheh; Masoumeh Mousavi, S.; Khaksar, Zeinab S.; Reihani, S. Nader S.
2014-10-01
The ability to measure pico- and femto-Newton range forces using optical tweezers (OT) strongly relies on the sensitivity of the detection system. We show that the commonly used back-focal-plane detection method provides a linear response range which is shorter than that of the restoring force of OT for large beads. This limits the measurable force range of OT. We show, both theoretically and experimentally, that utilizing a second laser beam for tracking could solve the problem. We also propose a new detection scheme in which the quadrant photodiode is positioned at the plane optically conjugate to the object plane (image plane). This method solves the problem without the need for a second laser beam for the bead sizes that are commonly used in force spectroscopy applications of OT, such as biopolymer stretching.
Implementation of direct LSC method for diesel samples on the fuel market.
Krištof, Romana; Hirsch, Marko; Kožar Logar, Jasmina
2014-11-01
The European Union develops common EU policy and strategy on biofuels and sustainable bio-economy through several documents. The encouragement of biofuel consumption is therefore an obligation of each EU member state. The situation in the Slovenian fuel market is presented and compared with other EU countries within the framework of the values prescribed by EU directives. Diesel is the most common fuel for transportation needs in Slovenia, so the study was performed on diesel. The sampling net was determined in accordance with the fuel consumption statistics of the country; 75 sampling points were located on different types of roads. The quantity of bio-component in diesel samples was determined by the direct LSC method through measurement of C-14 content. The measured values ranged from 0 up to nearly 6 mass percent of bio-component in fuel. The method proved to be suitable and effective for studies of the real fuel market. Copyright © 2014 Elsevier Ltd. All rights reserved.
Improved phase-ellipse method for in-situ geophone calibration.
Liu, Huaibao P.; Peselnick, L.
1986-01-01
For amplitude and phase response calibration of moving-coil electromagnetic geophones, 2 parameters are needed, namely the geophone natural frequency, fo, and the geophone upper resonance frequency, fu. The phase-ellipse method is commonly used for the in situ determination of these parameters. For a given signal-to-noise ratio, the precision of the measurement of fo and fu depends on the phase sensitivity, dΦ/df. For some commercial geophones the sensitivity dΦ/df at fu can be an order of magnitude less than the sensitivity at fo. Presents an improved phase-ellipse method with increased precision. Compared to measurements made with the existing phase-ellipse methods, the method shows a 6- and 3-fold improvement in precision, respectively, on measurements of fo and fu on a commercial geophone. -from Authors
Image based method for aberration measurement of lithographic tools
NASA Astrophysics Data System (ADS)
Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa
2018-01-01
Information on the lens aberrations of lithographic tools is important as they directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Owing to their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not yield a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools which only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
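The abstract's key point, that two intensity images suffice once the imaging model is linearized, reduces the retrieval to a single linear least-squares solve. Below is a minimal sketch of that final step; the sensitivity matrix A and all data are random placeholders standing in for quantities the paper derives from the partially coherent imaging model.

```python
import numpy as np

# Hypothetical setup: the linearized model gives I_meas - I_nominal ~ A @ c,
# where c holds the unknown Zernike coefficients and A is a sensitivity
# matrix precomputed from the imaging model. A, c_true and the noise are
# placeholders, not the authors' actual quantities.
rng = np.random.default_rng(1)
n_pix, n_zern = 2000, 9          # two stacked images of 1000 pixels; 9 terms
A = rng.normal(size=(n_pix, n_zern))
c_true = rng.normal(scale=0.02, size=n_zern)          # aberrations in waves
b = A @ c_true + rng.normal(scale=1e-3, size=n_pix)   # noisy measurement

# Non-iterative retrieval: one linear least-squares solve.
c_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.round(c_hat - c_true, 4))   # residual coefficient error
```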
[Automated procedure for volumetric measurement of metastases: estimation of tumor burden].
Fabel, M; Bolte, H
2008-09-01
Cancer is a common and increasing disease worldwide. Therapy monitoring in oncologic patient care requires accurate and reliable measurement methods for evaluating tumor burden. RECIST (response evaluation criteria in solid tumors) and WHO criteria are still the current standards for therapy response evaluation, with inherent disadvantages due to the considerable interobserver variation of manual diameter estimations. Volumetric analysis of metastases, e.g. in lung, liver and lymph nodes, promises to be a more accurate, precise and objective method for tumor burden estimation.
2015-08-19
laboratory analysis using EPA TO-15, and collection of gas samples in sorbent tubes for later analysis of aldehydes using NIOSH Method 2016. Total VOCs ... measurement can be a general qualitative indicator of IAQ problems; formaldehyde and other aldehydes are common organic gases emitted from OSB; and ... table in the middle of the hut. 5.1.2.3 Formaldehyde and other aldehydes: Aldehydes were measured using both Dräger-tubes and by NIOSH Method 2016.
Comparing Hall Effect and Field Effect Measurements on the Same Single Nanowire.
Hultin, Olof; Otnes, Gaute; Borgström, Magnus T; Björk, Mikael; Samuelson, Lars; Storm, Kristian
2016-01-13
We compare and discuss the two most commonly used electrical characterization techniques for nanowires (NWs). In a novel single-NW device, we combine Hall effect and back-gated and top-gated field effect measurements and quantify the carrier concentrations in a series of sulfur-doped InP NWs. The carrier concentrations from Hall effect and field effect measurements are found to correlate well when using the analysis methods described in this work. This shows that NWs can be accurately characterized with available electrical methods, an important result toward better understanding of semiconductor NW doping.
Measuring transferring similarity via local information
NASA Astrophysics Data System (ADS)
Yin, Likang; Deng, Yong
2018-05-01
Recommender systems have developed along with web science, and how to measure the similarity between users is crucial for collaborative filtering recommendation. Many efficient models (e.g., the Pearson coefficient) have been proposed to measure direct correlation. However, direct correlation measures are greatly affected by the sparsity of the dataset; in other words, they can report an inauthentic similarity if two users have very few commonly selected objects. Transferring similarity overcomes this drawback by considering their common neighbors (i.e., the intermediates). Yet transferring similarity also has its own drawback, since it can only provide an interval of similarity. To break these limitations, we propose the Belief Transferring Similarity (BTS) model. The contributions of the BTS model are: (1) it addresses the sparsity of the dataset by considering high-order similarity; (2) it transforms the uncertain interval to a certain state based on fuzzy systems theory; and (3) it combines the transferring similarity of different intermediates using an information fusion method. Finally, we compare the BTS model with nine different link prediction methods in nine different networks, and we also illustrate the convergence property and efficiency of the BTS model.
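A minimal sketch of the two similarity notions discussed above follows. The transferring step here uses a simple product/mean chain through intermediates, whereas BTS fuses intermediates with fuzzy membership and evidence combination, which is not reproduced here.

```python
import numpy as np

def pearson_sim(ru, rv):
    """Direct correlation over co-rated items; NaN marks missing ratings."""
    mask = ~np.isnan(ru) & ~np.isnan(rv)
    if mask.sum() < 2:
        return 0.0
    u, v = ru[mask], rv[mask]
    u, v = u - u.mean(), v - v.mean()
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

def transferring_sim(ru, rv, intermediates):
    """Second-order similarity routed through common neighbors, aggregated
    here by a simple product/mean rule (an illustrative simplification)."""
    chained = [pearson_sim(ru, rm) * pearson_sim(rm, rv) for rm in intermediates]
    return float(np.mean(chained)) if chained else 0.0

# Two users with only two co-rated items (sparse, so direct correlation is
# unreliable) plus one intermediate user who co-rates with both.
ru = np.array([5, 4, np.nan, 3, np.nan])
rv = np.array([4, 5, 2, np.nan, np.nan])
rm = np.array([5, 5, 1, 4, 2])
print(pearson_sim(ru, rv), transferring_sim(ru, rv, [rm]))
```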
Predicting Envelope Leakage in Attached Dwellings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faakye, O.; Arena, L.; Griffiths, D.
2013-07-01
The most common method for measuring air leakage is to use a single blower door to pressurize and/or depressurize the test unit. In detached housing, the test unit is the entire home and the single blower door measures air leakage to the outside. In attached housing, this 'single unit', 'total', or 'solo' test method measures both the air leakage between adjacent units through common surfaces as well as air leakage to the outside. Measuring and minimizing this total leakage is recommended to avoid indoor air quality issues between units, reduce energy losses to the outside, reduce pressure differentials between units, and control stack effect. However, two significant limitations of the total leakage measurement in attached housing are: for retrofit work, if total leakage is assumed to be all to the outside, the energy benefits of air sealing can be significantly over-predicted; for new construction, the total leakage values may result in failing to meet an energy-based house tightness program criterion. The scope of this research is to investigate an approach for developing a viable simplified algorithm that can be used by contractors to assess energy efficiency program qualification and/or compliance based upon solo test results.
A Comparative Study of Measuring Devices Used During Space Shuttle Processing for Inside Diameters
NASA Technical Reports Server (NTRS)
Rodriguez, Antonio
2006-01-01
During Space Shuttle processing, discrepancies between vehicle dimensions and per-print dimensions determine whether a part should be refurbished, replaced or accepted "as-is." The engineer's job is to address each discrepancy by choosing the most accurate procedure and tool available, sometimes with tolerances of ten-thousandths of an inch. Four methods of measurement are commonly used at the Kennedy Space Center: 1) caliper, 2) mold impressions, 3) optical comparator, 4) dial bore gage. During a problem report evaluation, uncertainty arose between methods after measuring diameters with variations of up to 0.0004 inch. The results showed that computer-based measuring devices are extremely accurate, but when the human factor is involved in determining points of reference, the results may vary widely compared to more traditional methods.
MTF measurement and analysis of linear array HgCdTe infrared detectors
NASA Astrophysics Data System (ADS)
Zhang, Tong; Lin, Chun; Chen, Honglei; Sun, Changhong; Lin, Jiamu; Wang, Xi
2018-01-01
The slanted-edge technique is the main method for measuring detector MTF; however, it is commonly applied to planar array detectors. In this paper the authors present a modified slanted-edge method to measure the MTF of linear array HgCdTe detectors. Crosstalk is one of the major factors that degrade the MTF value of such an infrared detector. This paper presents an ion implantation guard-ring structure designed to effectively absorb photo-carriers that may laterally diffuse between adjacent pixels, thereby suppressing crosstalk. Measurement and analysis of the MTF of the linear array detectors with and without a guard-ring were carried out. The experimental results indicated that the ion implantation guard-ring structure effectively suppresses crosstalk and increases the MTF value.
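The core of a slanted-edge computation, once the oversampled edge profile has been assembled, is differentiation followed by a Fourier transform. A minimal sketch under that assumption, with edge-angle estimation and projection binning omitted:

```python
import numpy as np

def mtf_from_edge(esf, dx=1.0):
    """Slanted-edge core: edge-spread -> line-spread -> |FFT| -> normalized
    MTF. 'esf' is the oversampled edge profile obtained by projecting pixels
    onto the edge normal; building that profile is omitted here."""
    lsf = np.gradient(esf, dx)
    lsf = lsf * np.hanning(lsf.size)         # taper to limit spectral leakage
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]                            # normalize to DC
    freqs = np.fft.rfftfreq(lsf.size, d=dx)  # cycles per sample
    return freqs, mtf

# Quick self-test: an ideal edge blurred by a synthetic Gaussian PSF.
x = np.linspace(-20, 20, 401)
psf = np.exp(-0.5 * (x / 2.0) ** 2)
esf = np.cumsum(psf)
esf /= esf[-1]
freqs, mtf = mtf_from_edge(esf, dx=x[1] - x[0])
print(freqs[:5], mtf[:5])
```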
Measurement of the residual stress distribution in a thick pre-stretched aluminum plate
NASA Astrophysics Data System (ADS)
Yuan, S. X.; Li, X. Q.; M, S.; Zhang, Y. C.; Gong, Y. D.
2008-12-01
Thick pre-stretched aluminum alloy plates are widely used in aircraft, while machining distortion caused by the release of initial residual stress in thick plates is a common and serious problem. To reduce the distortion, the residual stress distribution in the thick plate must be measured. According to the characteristics of thick pre-stretched aluminum alloy plates, and based on elastic mechanics theory, this article derives a modified layer-removal strain method that accommodates the two different strain situations caused by tensile and compressive stresses. To validate this method, the residual stress distribution through the thickness of a 2D70T351 plate was measured; it is shown that the new method derived in this paper is simple and accurate, and is very useful in engineering.
Kataoka, Yohei; Watanabe, Takahiro; Hayashi, Tomoko; Teshima, Reiko; Matsuda, Rieko
2015-01-01
In this study, we developed methods to quantify lead, total arsenic and cadmium contained in various kinds of soft drinks, and we evaluated their performance. The samples were digested by common methods to prepare solutions for measurement by ICP-OES, ICP-MS and graphite furnace atomic absorption spectrometry (GF-AAS). After digestion, an internal standard was added to the digestion solutions for measurements by ICP-OES and ICP-MS. For measurement by GF-AAS, additional purification of the digestion solution was conducted by back-extraction of the three metals into nitric acid solution after extraction into an organic solvent with ammonium pyrrolidine dithiocarbamate. The performance of the developed methods was evaluated for eight kinds of soft drinks.
Measurement Techniques for Hypervelocity Impact Test Fragments
NASA Technical Reports Server (NTRS)
Hill, Nicole E.
2008-01-01
The ability to classify the size and shape of individual orbital debris fragments provides a better understanding of the orbital debris environment as a whole. The characterization of breakup fragmentation debris has gradually evolved from a simplistic, spherical assumption towards that of describing debris in terms of size, material, and shape parameters. One of the goals of the NASA Orbital Debris Program Office is to develop high-accuracy techniques to measure these parameters and apply them to orbital debris observations. Measurement of the physical characteristics of debris resulting from ground-based, hypervelocity impact testing provides insight into the shapes and sizes of debris produced from potential impacts in orbit. Current techniques for measuring these ground-test fragments require determination of dimensions based upon visual judgment. This leads to reduced accuracy and provides little or no repeatability for the measurements. With the common goal of mitigating these error sources, allaying any misunderstandings, and moving forward in fragment shape determination, the NASA Orbital Debris Program Office recently began using a computerized measurement system. The goal of using these new techniques is to improve knowledge of the relation between commonly used dimensions and overall shape. The immediate objective is to scan a single fragment, measure its size and shape properties, and import the fragment into a program that renders a 3D model that adequately demonstrates how the object could appear in orbit. This information would then be used to aid optical methods in orbital debris shape determination. This paper provides a description of the measurement techniques used in this initiative and shows results of this work. The tradeoffs of the computerized methods are discussed, as well as the means of repeatability in the measurements of these fragments. This paper serves as a general description of methods for the measurement and shape analysis of orbital debris.
Field Techniques for Estimating Water Fluxes Between Surface Water and Ground Water
Rosenberry, Donald O.; LaBaugh, James W.
2008-01-01
This report focuses on measuring the flow of water across the interface between surface water and ground water, rather than the hydrogeological or geochemical processes that occur at or near this interface. Methods that use hydrogeological and geochemical evidence to quantify water fluxes are, however, described herein. This material is presented as a guide for those who need to examine the interaction of surface water and ground water. The intent here is that both the overview of the many available methods and the in-depth presentation of specific methods will enable the reader to choose those study approaches that will best meet the requirements of the environments and processes they are investigating, as well as to recognize the merits of using more than one approach. This report is designed to make the reader aware of the breadth of approaches available for the study of the exchange between surface and ground water. To accomplish this, the report is divided into four chapters. Chapter 1 describes many well-documented approaches for defining the flow between surface and ground waters. Subsequent chapters provide an in-depth presentation of particular methods. Chapter 2 focuses on three of the most commonly used methods to either calculate or directly measure flow of water between surface-water bodies and the ground-water domain: (1) measurement of water levels in well networks in combination with measurement of water level in nearby surface water to determine water-level gradients and flow; (2) use of portable piezometers (wells) or hydraulic potentiomanometers to measure hydraulic gradients; and (3) use of seepage meters to measure flow directly. Chapter 3 focuses on describing the techniques involved in conducting water-tracer tests using fluorescent dyes, a method commonly used in the hydrogeologic investigation and characterization of karst aquifers, and in the study of water fluxes in karst terranes. Chapter 4 focuses on heat as a tracer in hydrological investigations of the near-surface environment.
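Chapter 2's first approach, computing flow from measured water-level gradients, is an application of Darcy's law. A minimal sketch with invented numbers:

```python
# Hypothetical seepage estimate from a well/stream water-level pair.
K = 1e-4                           # hydraulic conductivity, m/s (assumed sandy aquifer)
h_well, h_stream = 12.43, 12.31    # measured heads, m
L = 25.0                           # flow-path length between measurement points, m
A = 100.0                          # cross-sectional area of the streambed section, m^2

gradient = (h_well - h_stream) / L   # dimensionless head gradient
Q = K * gradient * A                 # Darcy's law: Q = K * i * A, in m^3/s
print(f"flow toward stream: {Q * 86400:.2f} m^3/day")
```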
Technical brief: a comparison of two methods of euthanasia on retinal dopamine levels.
Hwang, Christopher K; Iuvone, P Michael
2013-01-01
Mice are commonly used in biomedical research, and euthanasia is an important part of mouse husbandry. Approved, humane methods of euthanasia are designed to minimize the potential for pain or discomfort, but may also influence the measurement of experimental variables. We compared the effects of two approved methods of mouse euthanasia on the levels of retinal dopamine. We examined the level of retinal dopamine, a commonly studied neuromodulator, following euthanasia by carbon dioxide (CO₂)-induced asphyxiation or by cervical dislocation. We found that the level of retinal dopamine in mice euthanized through CO₂ overdose substantially differed from that in mice euthanized through cervical dislocation. The use of CO₂ as a method of euthanasia could result in an experimental artifact that could compromise results when studying labile biologic processes.
Identification of hydraulic conductivity structure in sand and gravel aquifers: Cape Cod data set
Eggleston, J.R.; Rojstaczer, S.A.; Peirce, J.J.
1996-01-01
This study evaluates commonly used geostatistical methods to assess reproduction of hydraulic conductivity (K) structure and sensitivity under limiting amounts of data. Extensive conductivity measurements from the Cape Cod sand and gravel aquifer are used to evaluate two geostatistical estimation methods, conditional mean as an estimate and ordinary kriging, and two stochastic simulation methods, simulated annealing and sequential Gaussian simulation. Our results indicate that for relatively homogeneous sand and gravel aquifers such as the Cape Cod aquifer, neither estimation methods nor stochastic simulation methods give highly accurate point predictions of hydraulic conductivity despite the high density of collected data. Although the stochastic simulation methods yielded higher errors than the estimation methods, the stochastic simulation methods yielded better reproduction of the measured ln(K) distribution and better reproduction of local contrasts in ln(K). The inability of kriging to reproduce high ln(K) values, as reaffirmed by this study, provides strong motivation for choosing stochastic simulation methods to generate conductivity fields when performing fine-scale contaminant transport modeling. Results also indicate that estimation error is relatively insensitive to the number of hydraulic conductivity measurements so long as more than a threshold number of data are used to condition the realizations. This threshold occurs for the Cape Cod site when there are approximately three conductivity measurements per integral volume. The lack of improvement with additional data suggests that although fine-scale hydraulic conductivity structure is evident in the variogram, it is not accurately reproduced by geostatistical estimation methods. If the Cape Cod aquifer spatial conductivity characteristics are indicative of other sand and gravel deposits, then the results on predictive error versus data collection obtained here have significant practical consequences for site characterization. Heavily sampled sand and gravel aquifers, such as Cape Cod and Borden, may have large amounts of redundant data, while in more common real world settings, our results suggest that denser data collection will likely improve understanding of permeability structure.
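For readers unfamiliar with the estimation side of the comparison, here is a minimal one-dimensional ordinary-kriging sketch under an assumed spherical variogram. The data values and variogram parameters are invented; the study's actual analysis fit variograms to a dense set of ln(K) measurements.

```python
import numpy as np

def ordinary_kriging(xs, zs, x0, sill=1.0, rng_a=30.0):
    """Ordinary-kriging estimate at x0 from 1-D data (xs, zs), using an
    assumed spherical variogram (sill, range); nugget omitted. Illustrative
    only, not the study's fitted model."""
    def gamma(h):
        h = np.abs(h)
        g = sill * (1.5 * h / rng_a - 0.5 * (h / rng_a) ** 3)
        return np.where(h < rng_a, g, sill)
    n = len(xs)
    # Kriging system in variogram form with a Lagrange multiplier row/column.
    A = np.ones((n + 1, n + 1))
    A[-1, -1] = 0.0
    A[:n, :n] = gamma(xs[:, None] - xs[None, :])
    b = np.append(gamma(xs - x0), 1.0)
    w = np.linalg.solve(A, b)[:n]     # kriging weights (sum to 1)
    return w @ zs

xs = np.array([0.0, 10.0, 25.0, 40.0])
zs = np.array([-9.2, -8.7, -9.0, -8.5])    # e.g., ln(K) values
print(ordinary_kriging(xs, zs, x0=18.0))
```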
Estimating evapotranspiration in natural and constructed wetlands
Lott, R. Brandon; Hunt, Randall J.
2001-01-01
Difficulties in accurately calculating evapotranspiration (ET) in wetlands can lead to inaccurate water balances—information important for many compensatory mitigation projects. Simple meteorological methods or off-site ET data often are used to estimate ET, but these approaches do not include potentially important site-specific factors such as plant community, root-zone water levels, and soil properties. The objective of this study was to compare a commonly used meteorological estimate of potential evapotranspiration (PET) with direct measurements of ET (lysimeters and water-table fluctuations) and small-scale root-zone geochemistry in a natural and constructed wetland system. Unlike what has been commonly noted, the results of the study demonstrated that the commonly used Penman combination method of estimating PET underestimated the ET that was measured directly in the natural wetland over most of the growing season. This result is likely due to surface heterogeneity and related roughness effects not included in the simple PET estimate. The meteorological method more closely approximated season-long measured ET rates in the constructed wetland but may overestimate the ET rate late in the growing season. ET rates also were temporally variable in wetlands over a range of time scales because they can be influenced by the relation of the water table to the root zone and the timing of plant senescence. Small-scale geochemical sampling of the shallow root zone was able to provide an independent evaluation of ET rates, supporting the identification of higher ET rates in the natural wetlands and differences in temporal ET rates due to the timing of senescence. These discrepancies illustrate potential problems with extrapolating off-site estimates of ET or single measurements of ET from a site over space or time.
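The Penman combination method referenced above weights a radiation term and an aerodynamic term by the slope of the saturation vapor-pressure curve and the psychrometric constant. A sketch of one common textbook form follows; the constants and wind function are assumptions on my part, since the study's exact parameterization is not given in the abstract.

```python
import numpy as np

def penman_pet(rn_mj, t_c, rh, u2):
    """Penman combination estimate of open-water/reference PET (mm/day).
    Inputs: net radiation (MJ m-2 d-1), air temperature (C), relative
    humidity (0-1), wind speed at 2 m (m/s). Constants and the wind function
    follow one common textbook form; the study's version may differ."""
    es = 0.6108 * np.exp(17.27 * t_c / (t_c + 237.3))   # sat. vapor pressure, kPa
    ea = rh * es
    delta = 4098 * es / (t_c + 237.3) ** 2              # slope of es curve, kPa/C
    gamma = 0.066                                       # psychrometric constant, kPa/C
    lam = 2.45                                          # latent heat of vaporization, MJ/kg
    e_aero = 6.43 * (1 + 0.536 * u2) * (es - ea) / lam  # aerodynamic term, mm/day
    return (delta * rn_mj / lam + gamma * e_aero) / (delta + gamma)

print(f"PET ~ {penman_pet(rn_mj=14.0, t_c=22.0, rh=0.6, u2=2.0):.1f} mm/day")
```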
Yan, Kaihong; Dong, Zhaomin; Liu, Yanju; Naidu, Ravi
2016-04-01
Bioaccessibility to assess potential risks resulting from exposure to Pb-contaminated soils is commonly estimated using various in vitro methods. However, existing in vitro methods yield different results depending on the composition of the extractant as well as the contaminated soils. For this reason, the relationships between five commonly used in vitro methods, the Relative Bioavailability Leaching Procedure (RBALP), the unified BioAccessibility Research Group Europe (BARGE) method (UBM), the Solubility Bioaccessibility Research Consortium assay (SBRC), a Physiologically Based Extraction Test (PBET), and the in vitro Digestion Model (RIVM), were quantified statistically using 10 soils from long-term Pb-contaminated mining and smelter sites located in Western Australia and South Australia. Across all 10 soils and in vitro methods, the measured Pb bioaccessibility varied from 1.9 to 106% for the gastric phase, higher than that for the intestinal phase (0.2-78.6%). The variations in Pb bioaccessibility depend on the in vitro model being used, suggesting that the method chosen for bioaccessibility assessment must be validated against in vivo studies prior to use for predicting risk. Regression studies between RBALP and SBRC, and between RBALP and RIVM (0.06) (0.06 g of soil in each tube; S:L ratios for the gastric and intestinal phases are 1:375 and 1:958, respectively) showed that Pb bioaccessibility based on the three methods was comparable. Meanwhile, the slopes between RBALP and UBM, and between RBALP and RIVM (0.6) (0.6 g of soil in each tube; S:L ratios for the gastric and intestinal phases are 1:37.5 and 1:96, respectively) were 1.21 and 1.02, respectively. The findings presented in this study could help standardize in vitro bioaccessibility measurements and provide a scientific basis for further relating Pb bioavailability and soil properties.
Development of a Low-Noise High Common-Mode-Rejection Instrumentation Amplifier. M.S. Thesis
NASA Technical Reports Server (NTRS)
Rush, Kenneth; Blalock, T. V.; Kennedy, E. J.
1975-01-01
Several previously used instrumentation amplifier circuits were examined to find limitations and possibilities for improvement. One general configuration is analyzed in detail, and methods for improvement are enumerated. An improved amplifier circuit is described and analyzed with respect to common mode rejection and noise. Experimental data are presented showing good agreement between calculated and measured common mode rejection ratio and equivalent noise resistance. The amplifier is shown to be capable of common mode rejection in excess of 140 dB for a trimmed circuit at frequencies below 100 Hz and equivalent white noise below 3.0 nV/√Hz above 1000 Hz.
MO-G-BRE-02: A Survey of IMRT QA Practices for More Than 800 Institutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pulliam, K; Kerns, J; Howell, R
Purpose: A wide range of techniques and measurement devices are employed for IMRT QA, causing a large variation in accepted action limits and potential follow-up for failing plans. Such procedures are not well established or accepted in the medical physics community. To achieve the goal of providing insight into current IMRT QA practices, we created an electronic IMRT QA survey. The survey was open to a variety of the most common QA devices and assessed the type of comparison to measurement, action limits, delivery methods, and clinical action for failing QA plans. Methods: We conducted an online survey through the Radiological Physics Center's (RPC) annual survey with the goal of ascertaining elements of routine patient-specific IMRT QA. A total of 874 institutions responded to the survey. The questions ranged from asking for action limits, dosimeter type(s) used, delivery techniques, and actions taken when a plan fails IMRT QA. Results: The most common planar gamma criterion (52%) was 3%/3 mm with a 95%-of-pixels-passing threshold. The most common QA devices were diode arrays (48%). The most common first response to a plan failing QA was to re-measure the point dose at the same point (89%); second was to re-measure at a new point (13%); and third was to analyze the plan in relative instead of absolute mode (10%). (Percentages do not sum to 100% because not all institutions gave a response for each QA follow-up option.) Some institutions, however, claimed that they had never observed a plan failure. Conclusion: The survey provided insights into the way the community currently performs IMRT QA. This information will help in the push to standardize action limits among dosimeters.
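For context on the 3%/3 mm criterion, a simplified one-dimensional global gamma computation is sketched below with synthetic profiles; clinical tools operate on 2-D/3-D dose grids with interpolation, which is omitted here.

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, coords, dd=0.03, dta=3.0):
    """Simplified 1-D global gamma (default 3%/3 mm): for each reference
    point, minimize the combined dose/distance metric over all evaluated
    points; the point passes if the minimum gamma is <= 1."""
    d_max = dose_ref.max()
    pass_flags = []
    for xr, dr in zip(coords, dose_ref):
        g2 = ((coords - xr) / dta) ** 2 + ((dose_eval - dr) / (dd * d_max)) ** 2
        pass_flags.append(np.sqrt(g2.min()) <= 1.0)
    return 100.0 * np.mean(pass_flags)

x = np.linspace(0, 100, 201)                      # positions in mm
ref = np.exp(-0.5 * ((x - 50) / 15) ** 2)         # synthetic dose profiles
ev = np.exp(-0.5 * ((x - 51) / 15) ** 2) * 1.01   # 1 mm shift, 1% scaling
print(f"pass rate: {gamma_pass_rate(ref, ev, x):.1f}%")
```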
Stage measurement at gaging stations
Sauer, Vernon B.; Turnipseed, D. Phil
2010-01-01
Stream and reservoir stage are critical parameters in the computation of stream discharge and reservoir volume, respectively. In addition, a record of stream stage is useful in the design of structures that may be affected by stream elevation, as well as for the planning for various uses of flood plains. This report describes equipment and methodology for the observation, sensing, and recording of stage in streams and reservoirs. Although the U.S. Geological Survey (USGS) still uses the traditional, basic stilling-well float system as a predominant gaging station, modern electronic stage sensors and water-level recorders are now commonly used. Bubble gages coupled with nonsubmersible pressure transducers eliminate the need for stilling wells. Submersible pressure transducers have become common in use for the measurement of stage in both rivers and lakes. Furthermore, noncontact methods, such as radar, acoustic, and laser methods of sensing water levels, are being developed and tested, and in the case of radar, are commonly used for the measurement of stage. This report describes commonly used gaging-station structures, as well as the design and operation of gaging stations. Almost all of the equipment and instruments described in this report will meet the accuracy standard set by the USGS Office of Surface Water (OSW) for the measurement of stage for most applications, which is ±0.01 foot (ft) or 0.2 percent of the effective stage. Several telemetry systems are used to transmit stage data from the gaging station to the office, although satellite telemetry has become the standard. These telemetry systems provide near real-time stage data, as well as other information that alerts the hydrographer to extreme or abnormal events, and instrument malfunctions.
Standardized reporting of functioning information on ICF-based common metrics.
Prodinger, Birgit; Tennant, Alan; Stucki, Gerold
2018-02-01
In clinical practice, research, and national health information systems, a variety of clinical data collection tools are used to collect information on people's functioning. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores between two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected incorporating the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of the standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life) were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric. Being able to report functioning information collected with commonly used clinical data collection tools on ICF-based common metrics enables clinicians and researchers to continue using their tools while still being able to compare and aggregate the information within and across tools.
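The Rasch measurement model at the core of this transformation can be illustrated in its simplest dichotomous form (the "super items" above require a polytomous extension, not shown). The item difficulties below are invented for illustration.

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability that a person with latent
    functioning level theta endorses an item of difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Expected item scores and raw score for one person on a 3-item scale.
# Once items from different tools sit on a common metric, the same theta
# is recovered regardless of which tool the items came from.
items = np.array([-1.2, 0.3, 1.5])   # hypothetical difficulties
theta = 0.5
print(rasch_prob(theta, items), rasch_prob(theta, items).sum())
```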
Reliability of the Colorado Family Support Assessment: A Self-Sufficiency Matrix for Families
ERIC Educational Resources Information Center
Richmond, Melissa K.; Pampel, Fred C.; Zarcula, Flavia; Howey, Virginia; McChesney, Brenda
2017-01-01
Purpose: Family support programs commonly use self-sufficiency matrices (SSMs) to measure family outcomes, however, validation research on SSMs is sparse. This study examined the reliability of the Colorado Family Support Assessment 2.0 (CFSA 2.0) to measure family self-reliance across 14 domains (e.g., employment). Methods: Ten written case…
Diagnostic Group Differences in Parent and Teacher Ratings on the BRIEF and Conners' Scales
ERIC Educational Resources Information Center
Sullivan, Jeremy R.; Riccio, Cynthia A.
2007-01-01
Objective: Behavioral rating scales are common instruments used in evaluations of ADHD and executive function. It is important to explore how different diagnostic groups perform on these measures, as this information can be used to provide criterion-related validity evidence for the measures. Method: Data from 92 children and adolescents were used…
ERIC Educational Resources Information Center
Hoffman, LaVae M.; Loeb, Diane Frome; Brandel, Jayne; Gillam, Ronald B.
2011-01-01
Purpose: This study investigated the psychometric properties of 2 oral language measures that are commonly used for diagnostic purposes with school-age children who have language impairments. Method: Two hundred sixteen children with specific language impairment were assessed with the Test of Language Development--Primary, Third Edition (TOLD-P:3;…
Endocrine-disrupting chemicals (EDCs) are exogenous substances that can impact the reproduction of fish, sometimes by altering circulating concentrations of 17β-estradiol (E2), testosterone (T) and 11-ketotestosterone (11-KT). Common methods to measure steroids in pla...
ERIC Educational Resources Information Center
Barth, Michael M.; Karagiannidis, Iordanis
2016-01-01
Many universities have implemented tuition differentials for certain undergraduate degree programs, citing higher degree costs or higher demand. However, most college accounting systems are unsuited for measuring cost differentials by degree program. This research outlines a method that can convert commonly available financial data to a more…
Marcelo Ardón; Catherine M. Pringle; Susan L. Eggert
2009-01-01
Comparisons of the effects of leaf litter chemistry on leaf breakdown rates in tropical vs temperate streams are hindered by the incompatibility, among studies and across sites, of the analytical methods used to measure leaf chemistry. We used standardized analytical techniques to measure the chemistry and breakdown rate of leaves from common riparian tree species at 2 sites, 1...
Roberts, Sarah P.; Siegel, Michael B.; DeJong, William; Jernigan, David H.
2014-01-01
Background Adolescent alcohol consumption remains common and is associated with many negative health outcomes. Unfortunately, common alcohol surveillance methods often underestimate consumption. Improved alcohol use measures are needed to characterize the landscape of youth drinking. Objectives We aimed to compare a standard quantity-frequency measure of youth alcohol consumption to a novel brand-specific measure. Methods We recruited a sample of 1,031 respondents across the United States to complete an online survey. Analyses included 833 male and female underage drinkers ages 13–20. Respondents reported on how many of the past 30 days they consumed alcohol, and the number of drinks consumed on an average drinking day. Using our brand-specific measure, respondents identified which brands they consumed, how many days they consumed each brand, and how many drinks per brand they usually had. Results Youth reported consuming significantly more alcohol (on average, 11 drinks more per month) when responding to the brand-specific versus the standard measure (p<.001). The two major predictors of the difference between the two measures were being a heavy episodic drinker (p<.001, 95% CI = 4.1 to 12.0) and the total number of brands consumed (p<.001, 95% CI = 2.0 to 2.8). Conclusion This study contributes to the field of alcohol and adolescent research first by investigating a potentially more accurate alcohol surveillance method, and second by promoting the assessment of alcohol use among adolescents vulnerable to risky alcohol use. Finally, our survey addresses the potential impact of alcohol marketing on youth and their subsequent alcohol brand preferences and consumption. PMID:25062357
Inertia Compensation While Scanning Screw Threads on Coordinate Measuring Machines
NASA Astrophysics Data System (ADS)
Kosarevsky, Sergey; Latypov, Viktor
2010-01-01
Usage of scanning coordinate-measuring machines for inspection of screw threads has become common practice nowadays. Compared to touch trigger probing, scanning capabilities speed up the measuring process while still maintaining high accuracy. However, in some cases accuracy depends drastically on the scanning speed. In this paper a compensation method is proposed that reduces the influence of the inertia of the probing system while scanning screw threads on coordinate-measuring machines.
Measurement of Capillary Radius and Contact Angle within Porous Media.
Ravi, Saitej; Dharmarajan, Ramanathan; Moghaddam, Saeed
2015-12-01
The pore radius (i.e., capillary radius) and contact angle determine the capillary pressure generated in a porous medium. The most common method to determine these two parameters is through measurement of the capillary pressure generated by a reference liquid (i.e., a liquid with near-zero contact angle) and a test liquid. The rate of rise technique, commonly used to determine the capillary pressure, results in significant uncertainties. In this study, we utilize a recently developed technique for independently measuring the capillary pressure and permeability to determine the equivalent minimum capillary radii and contact angle of water within micropillar wick structures. In this method, the experimentally measured dryout threshold of a wick structure at different wicking lengths is fit to Darcy's law to extract the maximum capillary pressure generated by the test liquid. The equivalent minimum capillary radii of different wick geometries are determined by measuring the maximum capillary pressures generated using n-hexane as the working fluid. It is found that the equivalent minimum capillary radius is dependent on the diameter of pillars and the spacing between pillars. The equivalent capillary radii of micropillar wicks determined using the new method are found to be up to 7 times greater than the current geometry-based first-order estimates. The contact angle subtended by water at the walls of the micropillars is determined by measuring the capillary pressure generated by water within the arrays and the measured capillary radii for the different geometries. This mean contact angle of water is determined to be 54.7°.
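The last two steps of the abstract, calibrating the equivalent capillary radius with a near-perfectly wetting liquid and then inverting Young-Laplace for water's contact angle, reduce to two lines of arithmetic. The capillary pressures below are invented for illustration; the surface tensions are textbook room-temperature values.

```python
import numpy as np

# Surface tensions at ~25 C (N/m); textbook values, not from the paper.
sigma_hexane, sigma_water = 0.0184, 0.072

# Hypothetical maximum capillary pressures (Pa) extracted from fitting
# dryout thresholds at several wicking lengths to Darcy's law.
pc_hexane, pc_water = 5200.0, 11800.0

# n-hexane wets nearly perfectly (cos(theta) ~ 1), so it calibrates the
# equivalent minimum capillary radius via Young-Laplace: Pc = 2*sigma/r.
r_eff = 2 * sigma_hexane / pc_hexane

# Young-Laplace again, now solved for water's contact angle.
cos_theta = pc_water * r_eff / (2 * sigma_water)
print(f"r_eff = {r_eff * 1e6:.2f} um, "
      f"theta = {np.degrees(np.arccos(cos_theta)):.1f} deg")
```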
USDA-ARS?s Scientific Manuscript database
Reverse Transcription quantitative Polymerase Chain Reaction (qRT-PCR) is a popular method for measuring transcript abundance. The most commonly used method of interpretation is relative quantification and thus necessitates the use of normalization controls (i.e. reference genes) to standardize tran...
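The relative quantification this entry refers to is most often computed with the 2^-ΔΔCt (Livak) formula; whether this particular database entry uses that exact model is not stated, so treat the sketch below, with hypothetical Ct values, as a generic illustration.

```python
def relative_expression(ct_target_test, ct_ref_test, ct_target_ctrl, ct_ref_ctrl):
    """Livak 2^-ddCt relative quantification: target-gene expression in a
    test sample relative to a control sample, normalized to a reference
    gene. Assumes ~100% amplification efficiency for both assays."""
    d_ct_test = ct_target_test - ct_ref_test
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl
    return 2.0 ** -(d_ct_test - d_ct_ctrl)

# Hypothetical Ct values: the target is ~4x up-regulated in the test sample.
print(relative_expression(22.0, 18.0, 24.0, 18.0))
```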
Something Old, Something New: MBA Program Evaluation Using Shift-Share Analysis and Google Trends
ERIC Educational Resources Information Center
Davis, Sarah M.; Rodriguez, A. E.
2014-01-01
Shift-share analysis is a decomposition technique that is commonly used to measure attributes of regional change. In this method, regional change is decomposed into its relevant functional and competitive parts. This paper introduces traditional shift-share method and its extensions with examples of its applicability and usefulness for program…
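The three-way decomposition named above is standard arithmetic once regional and national series are in hand. A sketch with invented figures, treating, say, MBA program types as the "industries":

```python
import numpy as np

def shift_share(e0, e1, nat0, nat1):
    """Classic three-way shift-share decomposition of regional change by
    industry: national share + industry mix + regional shift.
    e0/e1: regional values by industry at start/end; nat0/nat1: national."""
    g_nat = nat1.sum() / nat0.sum() - 1          # overall national growth
    g_ind = nat1 / nat0 - 1                      # industry growth rates
    national_share = e0 * g_nat
    industry_mix = e0 * (g_ind - g_nat)
    regional_shift = e0 * (e1 / e0 - 1 - g_ind)  # local competitiveness
    return national_share, industry_mix, regional_shift

# Hypothetical two-program example; all numbers are invented.
e0, e1 = np.array([100.0, 50.0]), np.array([110.0, 48.0])
nat0, nat1 = np.array([1000.0, 800.0]), np.array([1050.0, 840.0])
ns, im, rs = shift_share(e0, e1, nat0, nat1)
print(ns + im + rs, e1 - e0)   # the three components sum to the actual change
```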
Preboske, Gregory M; Gunter, Jeff L; Ward, Chadwick P; Jack, Clifford R
2006-05-01
Measuring rates of brain atrophy from serial magnetic resonance imaging (MRI) studies is an attractive way to assess disease progression in neurodegenerative disorders, particularly Alzheimer's disease (AD). A widely recognized approach is the boundary shift integral (BSI). The objective of this study was to evaluate how several common scan non-idealities affect the output of the BSI algorithm. We created three types of image non-idealities between the image volumes in a serial pair used to measure between-scan change: inconsistent image contrast between serial scans, head motion, and poor signal-to-noise ratio (SNR). In theory the BSI volume difference measured between each pair of images should be zero, and any deviation from zero should represent corruption of the BSI measurement by some non-ideality intentionally introduced into the second scan in the pair. Two different BSI measures were evaluated, whole brain and ventricle. As the severity of motion, noise, and non-congruent image contrast increased in the second scan, the calculated BSI values deviated progressively more from the expected value of zero. This study illustrates the magnitude of the error in measures of change in brain and ventricle volume across serial MRI scans that can result from commonly encountered deviations from ideal image quality. The magnitudes of some of the measurement errors seen in this study exceed the disease effect in AD shown in various publications, which range from 1% to 2.78% per year for whole brain atrophy and 5.4% to 13.8% per year for ventricle expansion (Table 1). For example, measurement error may exceed 100% if image contrast properties dramatically differ between the two scans in a measurement pair. Methods to maximize consistency of image quality over time are an essential component of any quantitative serial MRI study.
Photographic films as remote sensors for measuring albedos of terrestrial surfaces
NASA Technical Reports Server (NTRS)
Pease, S. R.; Pease, R. W.
1972-01-01
To test the feasibility of remotely measuring the albedos of terrestrial surfaces from photographic images, an inquiry was carried out at ground level using several representative common surface targets. Problems of making such measurements with a spectrally selective sensor, such as photographic film, have been compared to previous work utilizing silicon cells. Two photographic approaches have been developed: a multispectral method which utilizes two or three photographic images made through conventional multispectral filters, and a single-shot method which utilizes the broad spectral sensitivity of black and white infrared film. Sensitometry related to the methods substitutes a Log Albedo scale for the conventional Log Exposure for creating characteristic curves. Certain constraints caused by illumination geometry are discussed.
NASA Astrophysics Data System (ADS)
Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin
2018-05-01
Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.
NASA Technical Reports Server (NTRS)
Oliver, A. Brandon
2017-01-01
Obtaining measurements of flight environments on ablative heat shields is both critical for spacecraft development and extremely challenging due to the harsh heating environment and surface recession. Thermocouples installed several millimeters below the surface are commonly used to measure the heat shield temperature response, but an ill-posed inverse heat conduction problem must be solved to reconstruct the surface heating environment from these measurements. Ablation can contribute substantially to the measurement response making solutions to the inverse problem strongly dependent on the recession model, which is often poorly characterized. To enable efficient surface reconstruction for recession model sensitivity analysis, a method for decoupling the surface recession evaluation from the inverse heat conduction problem is presented. The decoupled method is shown to provide reconstructions of equivalent accuracy to the traditional coupled method but with substantially reduced computational effort. These methods are applied to reconstruct the environments on the Mars Science Laboratory heat shield using diffusion limit and kinetically limited recession models.
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy over a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high-order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You
2013-11-04
A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift was proposed and used to compensate the errors produced by beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm over a range of ±100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" over a range of ±100" for pitch, yaw, and roll measurements, respectively.
The interdependence between screening methods and screening libraries.
Shelat, Anang A; Guy, R Kiplin
2007-06-01
The most common methods for discovering chemical compounds capable of manipulating biological function involve some form of screening. The success of such screens is highly dependent on the chemical materials - commonly referred to as libraries - that are assayed. Classic methods for the design of screening libraries have depended on knowledge of target structure and relevant pharmacophores for target focus, and on simple count-based measures to assess other properties. The recent proliferation of two novel screening paradigms, structure-based screening and high-content screening, prompts a profound rethink about the ideal composition of small-molecule screening libraries. We suggest that currently utilized libraries are not optimal for addressing new targets by high-throughput screening, or complex phenotypes by high-content screening.
Intra-grain Common Pb Correction and Detrital Apatite U-Pb Dating via LA-ICPMS Depth Profiling
NASA Astrophysics Data System (ADS)
Boyd, P. D.; Galster, F.; Stockli, D. F.
2017-12-01
Apatite is a common accessory phase in igneous and sedimentary rocks. While apatite is widely employed as a low-temperature thermochronometric tool, it has been increasingly utilized to constrain moderate-temperature cooling histories by U-Pb dating. Apatite U-Pb is characterized by a thermal sensitivity window of 375-550°C. This unique temperature window recorded by the apatite U-Pb system, and the near-ubiquitous presence of apatite in igneous and clastic sedimentary rocks, makes it a powerful tool able to illuminate mid-crustal tectono-thermal processes. However, as apatite incorporates only modest amounts of U and Th (1-10s of ppm), the significant amount of non-radiogenic "common" Pb incorporated during its formation presents a major hurdle for apatite U-Pb dating. In bedrock samples common Pb in apatite can be corrected for by the measurement of Pb in a cogenetic mineral phase, such as feldspar, that does not incorporate U, or from determination of a common Pb composition from multiple analyses in Tera-Wasserburg space. While these methods for common Pb correction in apatite can work for igneous samples, they cannot be applied to detrital apatite in sedimentary rocks with variable common Pb compositions. The obstacle of common Pb in apatite has hindered the application of detrital apatite U-Pb dating in provenance studies, despite the fact that it would be a powerful tool. This study presents a new method for the in situ correction of common Pb in apatite through the utilization of novel LA-ICP-MS depth profiling, which can recover U-Pb ratios at micron-scale spatial resolution during ablation of a grain. Due to the intra-grain U variability in apatite, a mixing line for a single grain can be generated in Tera-Wasserburg Concordia space. As a case study, apatites from a Variscan Alpine granite were analyzed using both the single- and multi-grain methods, with both giving identical results. As a second case study, the intra-grain method was then applied to detrital apatite from the Swiss Northern Alpine Foreland Basin, where the common Pb compositions and age spectra of detrital apatite grains were elucidated. The novel intra-grain apatite method enables the correction for common Pb in detrital apatite, making it feasible to incorporate detrital apatite U-Pb dating in provenance and source-to-sink studies.
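The intra-grain mixing-line idea can be illustrated with an unweighted straight-line fit in Tera-Wasserburg coordinates. Real data require error-weighted (York) regression and a concordia-intercept age calculation, both omitted here, and the numbers below are invented.

```python
import numpy as np

# Hypothetical depth-profile data for one apatite grain: x = 238U/206Pb,
# y = 207Pb/206Pb. Intra-grain U variability spreads the points along a
# common-Pb / radiogenic-Pb mixing line in Tera-Wasserburg space.
x = np.array([120.0, 260.0, 410.0, 560.0, 700.0])
y = np.array([0.74, 0.62, 0.49, 0.36, 0.24])

# Fit the mixing line (a full treatment would use error-weighted regression).
slope, intercept = np.polyfit(x, y, 1)

# The y-intercept (x -> 0) approximates the grain's common 207Pb/206Pb
# composition; the radiogenic age would come from the lower concordia
# intercept, whose calculation is omitted here.
print(f"common-Pb 207Pb/206Pb ~ {intercept:.3f}")
```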
Real-Time and High-Resolution 3D Face Measurement via a Smart Active Optical Sensor.
You, Yong; Shen, Yang; Zhang, Guocai; Xing, Xiuwen
2017-03-31
The 3D measuring range and accuracy in traditional active optical sensing, such as Fourier transform profilometry, are influenced by the zero frequency of the captured patterns. The phase-shifting technique is commonly applied to remove the zero component. However, this phase-shifting method must capture several fringe patterns with phase difference, thereby influencing the real-time performance. This study introduces a smart active optical sensor, in which a composite pattern is utilized. The composite pattern efficiently combines several phase-shifting fringes and carrier frequencies. The method can remove zero frequency by using only one pattern. Model face reconstruction and human face measurement were employed to study the validity and feasibility of this method. Results show no distinct decrease in the precision of the novel method unlike the traditional phase-shifting method. The texture mapping technique was utilized to reconstruct a nature-appearance 3D digital face.
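The zero-frequency removal that the composite pattern enables can be sketched in one dimension with Fourier-transform-profilometry-style demodulation. The carrier frequency and phase profile below are invented, and the sensor's actual composite-pattern decoding is more involved than this single-carrier example.

```python
import numpy as np

# One row of a synthetic composite fringe image: a background (the zero-
# frequency term) plus a carrier fringe whose phase encodes surface height.
n = 1024
x = np.arange(n)
f0 = 0.06                                          # carrier, cycles/pixel
phase = 1.5 * np.exp(-((x - 512) / 120.0) ** 2)    # smooth height bump
row = 50 + 20 * np.cos(2 * np.pi * f0 * x + phase)

# Single-shot demodulation: keep only a band around +f0 in the spectrum.
# This discards the zero-frequency term; the angle of the resulting analytic
# signal minus the carrier ramp recovers the height-encoding phase.
spec = np.fft.fft(row)
freqs = np.fft.fftfreq(n)
band = (freqs > f0 / 2) & (freqs < 2 * f0)
analytic = np.fft.ifft(np.where(band, spec, 0))
recovered = np.unwrap(np.angle(analytic)) - 2 * np.pi * f0 * x
recovered -= recovered[0]
# Residual vs. the true phase, away from FFT edge effects: small
print(np.max(np.abs((recovered - (phase - phase[0]))[100:-100])))
```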
Speckle reduction in optical coherence tomography by adaptive total variation method
NASA Astrophysics Data System (ADS)
Wu, Tong; Shi, Yaoyao; Liu, Youwen; He, Chongjun
2015-12-01
An adaptive total variation method based on the combination of speckle statistics and total variation restoration is proposed and developed for reducing speckle noise in optical coherence tomography (OCT) images. The statistical distribution of the speckle noise in OCT image is investigated and measured. With the measured parameters such as the mean value and variance of the speckle noise, the OCT image is restored by the adaptive total variation restoration method. The adaptive total variation restoration algorithm was applied to the OCT images of a volunteer's hand skin, which showed effective speckle noise reduction and image quality improvement. For image quality comparison, the commonly used median filtering method was also applied to the same images to reduce the speckle noise. The measured results demonstrate the superior performance of the adaptive total variation restoration method in terms of image signal-to-noise ratio, equivalent number of looks, contrast-to-noise ratio, and mean square error.
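As a minimal illustration of total variation restoration, the sketch below runs a plain gradient descent on a smoothed ROF functional with a fixed fidelity weight. In the paper's adaptive method that weight would instead be set from the measured speckle mean and variance, and the noise here is Gaussian rather than true speckle.

```python
import numpy as np

def tv_denoise(img, lam, n_iter=200, tau=0.125, eps=1e-8):
    """Gradient descent on the smoothed ROF functional
    TV_eps(u) + ||u - img||^2 / (2*lam). A fixed lam is used here; an
    adaptive variant would set it from measured noise statistics."""
    u = img.copy()
    for _ in range(n_iter):
        # Forward differences, normalized gradient field, and its divergence
        ux = np.roll(u, -1, axis=1) - u
        uy = np.roll(u, -1, axis=0) - u
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        div = px - np.roll(px, 1, axis=1) + py - np.roll(py, 1, axis=0)
        u = u + tau * (div - (u - img) / lam)
    return u

rng = np.random.default_rng(2)
clean = np.zeros((64, 64))
clean[16:48, 16:48] = 1.0
noisy = clean + rng.normal(0, 0.2, clean.shape)   # stand-in for speckle
den = tv_denoise(noisy, lam=0.15)
print(np.mean((noisy - clean) ** 2), np.mean((den - clean) ** 2))
```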
Golden angle based scanning for robust corneal topography with OCT
Wagner, Joerg; Goldblum, David; Cattin, Philippe C.
2017-01-01
Corneal topography allows the assessment of the cornea’s refractive power which is crucial for diagnostics and surgical planning. The use of optical coherence tomography (OCT) for corneal topography is still limited. One limitation is the susceptibility to disturbances like blinking of the eye. This can result in partially corrupted scans that cannot be evaluated using common methods. We present a new scanning method for reliable corneal topography from partial scans. Based on the golden angle, the method features a balanced scan point distribution which refines over measurement time and remains balanced when part of the scan is removed. The performance of the method is assessed numerically and by measurements of test surfaces. The results confirm that the method enables numerically well-conditioned and reliable corneal topography from partially corrupted scans and reduces the need for repeated measurements in case of abrupt disturbances. PMID:28270961
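The balanced, time-refining point distribution described above is commonly realized with a Vogel-spiral construction; the sketch below is one such construction and an assumption about the paper's exact pattern, not a reproduction of it.

```python
import numpy as np

def golden_angle_points(n, r_max=1.0):
    """Vogel-spiral scan pattern: point k sits at azimuth k * 137.508 deg
    and radius proportional to sqrt(k/n). Any prefix of the sequence is
    already a balanced sampling of the disc, so a scan interrupted by a
    blink still covers the cornea evenly -- the property exploited above."""
    golden = np.pi * (3 - np.sqrt(5))        # ~2.39996 rad = 137.508 deg
    k = np.arange(n)
    r = r_max * np.sqrt((k + 0.5) / n)
    theta = k * golden
    return r * np.cos(theta), r * np.sin(theta)

x, y = golden_angle_points(512)          # full scan
x_part, y_part = x[:200], y[:200]        # a partial scan remains well spread
```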
[METHODS OF EVALUATION OF MUSCLE MASS: A SYSTEMATIC REVIEW OF RANDOMIZED CONTROLLED TRIALS].
Moreira, Osvaldo Costa; de Oliveira, Cláudia Eliza Patrocínio; Candia-Luján, Ramón; Romero-Pérez, Ena Monserrat; de Paz Fernandez, José Antonio
2015-09-01
In recent years, research on muscle mass has gained popularity because of its relationship to health; precise measurement of muscle mass may therefore have clinical application, since it can influence diagnosis and drug prescription or treatment. The objective was to conduct a systematic review of the methods most used for evaluation of muscle mass in randomized controlled trials, with their advantages and disadvantages. We searched the databases PubMed, Web of Science and Scopus with the words "muscle mass", "measurement", "assessment" and "evaluation", combined as follows: "muscle mass" AND (assessment OR measurement OR evaluation). Twenty-three studies, all in English, were retrieved and analyzed. Of these, 69.56% used only one method for quantification of muscle mass; 69.57% used dual X-ray absorptiometry (DXA); in 45.46% the type of measure used was body lean mass; and 51.61% chose the whole body as the site of measurement. In the randomized controlled trials analyzed, the majority used just one method of assessment, with DXA being the method most used, body lean mass the measurement type most used, and the total body the most common measurement site. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
Linking flowability and granulometry of lactose powders.
Boschini, F; Delaval, V; Traina, K; Vandewalle, N; Lumay, G
2015-10-15
The flowing properties of 10 lactose powders commonly used in the pharmaceutical industry have been analyzed with three recently improved measurement methods. The first method is based on the heap shape measurement. This straightforward method provides two physical parameters (angle of repose αr and static cohesive index σr) allowing a first screening of the powder properties. The second method estimates the rheological properties of a powder by analyzing the powder flow in a rotating drum. This more advanced method gives a large set of physical parameters (flowing angle αf, dynamic cohesive index σf, angle of first avalanche αa and powder aeration %ae) leading to deeper interpretations. The third method is an improvement of the classical bulk and tapped density measurements. In addition to improving the measurement precision, the densification dynamics of the powder bulk submitted to taps is analyzed. The link between the macroscopic physical parameters obtained with these methods and the powder granulometry is analyzed. Moreover, the correlations between the different flowability indexes are discussed. Finally, the link between grain shape and flowability is discussed qualitatively. Copyright © 2015 Elsevier B.V. All rights reserved.
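The classical bulk/tapped density analysis that the third method improves on is commonly summarized by the Hausner ratio and the Carr compressibility index. A minimal sketch of those standard quantities; the paper's tap-by-tap densification dynamics goes beyond this:

```python
def hausner_ratio(bulk_density, tapped_density):
    """Hausner ratio: values above ~1.25 conventionally indicate poor flow."""
    return tapped_density / bulk_density

def carr_index(bulk_density, tapped_density):
    """Carr compressibility index, in percent."""
    return 100.0 * (tapped_density - bulk_density) / tapped_density

# Hypothetical lactose powder: bulk 0.45 g/mL, tapped 0.58 g/mL.
print(hausner_ratio(0.45, 0.58))  # ~1.29 -> cohesive
print(carr_index(0.45, 0.58))     # ~22.4 %
```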
A pseudo-differential Gm-C complex filter with frequency tuning for IEEE 802.15.4 applications
NASA Astrophysics Data System (ADS)
Xin, Cheng; Lungui, Zhong; Haigang, Yang; Fei, Liu; Tongqiang, Gao
2011-07-01
This paper presents a CMOS Gm-C complex filter for a low-IF receiver of the IEEE 802.15.4 standard. A pseudo-differential OTA with reconfigurable common-mode feedback and common-mode feed-forward is proposed, as well as a frequency tuning method based on a relaxation oscillator. A detailed analysis of the OTA's non-idealities and of the frequency tuning method is given. The analysis and measurement results show that the center frequency of the complex filter can be tuned accurately. The chip was fabricated in a standard 0.35 μm CMOS process with a single 3.3 V power supply. The filter consumes 2.1 mA of current, has a measured in-band group delay ripple of less than 0.16 μs and an IRR larger than 28 dB at 2 MHz apart, which meets the requirements of the IEEE 802.15.4 standard.
Analysis of electrical tomography sensitive field based on multi-terminal network and electric field
NASA Astrophysics Data System (ADS)
He, Yongbo; Su, Xingguo; Xu, Meng; Wang, Huaxiang
2010-08-01
Electrical tomography (ET) aims at the study of the conductivity/permittivity distribution of the field of interest non-intrusively via boundary voltage/current measurements. The sensor is usually regarded as an electric field, and the finite element method (FEM) is commonly used to calculate the sensitivity matrix and to optimize the sensor architecture. However, since only lumped circuit parameters can be measured by the data acquisition electronics, it is very meaningful to treat the sensor as a multi-terminal network. Two types of multi-terminal network, with common-node and common-loop topologies, are introduced. Obtaining more independent measurements and making the current distribution more uniform are the two main ways to mitigate the inherent ill-posedness. By exploring the relationships of the network matrices, a general formula is proposed for the first time to calculate the number of independent measurements. Additionally, the sensitivity distribution is analyzed with FEM. As a result, the quasi-opposite mode, an optimal single-source excitation mode that has the advantages of a more uniform sensitivity distribution and more independent measurements, is proposed.
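The paper's network-matrix formula is not reproduced in the abstract, but the counts usually quoted for electrical tomography sensors give a feel for the problem: N(N-1)/2 electrode pairs for two-terminal (capacitance-style) sensing, and N(N-3)/2 for the adjacent-drive four-electrode ERT protocol once reciprocity is removed. A sketch of these standard textbook counts, not the authors' derivation:

```python
def independent_pairs(n_electrodes):
    """Two-terminal (e.g., ECT) sensing: one measurement per electrode pair."""
    return n_electrodes * (n_electrodes - 1) // 2

def independent_adjacent_ert(n_electrodes):
    """Adjacent-drive four-electrode ERT protocol (reciprocal pairs removed)."""
    return n_electrodes * (n_electrodes - 3) // 2

for n in (8, 12, 16):
    print(n, independent_pairs(n), independent_adjacent_ert(n))
# 8 -> 28, 20;  12 -> 66, 54;  16 -> 120, 104
```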
Correction of phase-shifting error in wavelength scanning digital holographic microscopy
NASA Astrophysics Data System (ADS)
Zhang, Xiaolei; Wang, Jie; Zhang, Xiangchao; Xu, Min; Zhang, Hao; Jiang, Xiangqian
2018-05-01
Digital holographic microscopy is a promising method for measuring complex micro-structures with high slopes. A quasi-common path interferometric apparatus is adopted to overcome environmental disturbances, and an acousto-optic tunable filter is used to obtain multi-wavelength holograms. However, the phase shifting error caused by the acousto-optic tunable filter reduces the measurement accuracy and, in turn, the reconstructed topographies are erroneous. In this paper, an accurate reconstruction approach is proposed. It corrects the phase-shifting errors by minimizing the difference between the ideal interferograms and the recorded ones. The restriction on the step number and uniformity of the phase shifting is relaxed in the interferometry, and the measurement accuracy for complex surfaces can also be improved. The universality and superiority of the proposed method are demonstrated by practical experiments and comparison to other measurement methods.
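A minimal sketch of the error-correction idea: with a provisional phase map φ and fringe parameters a, b in hand, each recorded frame I_k = a + b·cos(φ + δ_k) can be assigned a corrected shift δ_k by a one-dimensional search minimizing the residual to the ideal interferogram. The closed-form fringe model, the grid search, and all names below are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def estimate_shift(frame, a, b, phi, n_grid=3600):
    """Find the phase shift minimizing ||frame - (a + b*cos(phi + delta))||^2."""
    deltas = np.linspace(0.0, 2.0 * np.pi, n_grid, endpoint=False)
    residuals = [np.sum((frame - (a + b * np.cos(phi + d))) ** 2) for d in deltas]
    return deltas[int(np.argmin(residuals))]

# Synthetic demo: nominal 90-degree steps with an actuator error on frame 3.
phi = np.linspace(0, 4 * np.pi, 256)          # provisional phase map
a, b = 1.0, 0.5
true_steps = [0.0, np.pi / 2, np.pi + 0.07]   # third step is off by 0.07 rad
frames = [a + b * np.cos(phi + d) for d in true_steps]
print([round(estimate_shift(f, a, b, phi), 3) for f in frames])
```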
Measuring use of electronic health record functionality using system audit information.
Bowes, Watson A
2010-01-01
Meaningful and efficient methods for measuring Electronic Health Record (EHR) adoption and functional usage patterns have recently become important for hospitals, clinics, and health care networks in the United States due to recent government initiatives to increase EHR use. To date, surveys have been the method of choice to measure EHR adoption. This paper describes another method for measuring EHR adoption which capitalizes on audit logs, which are common components of modern EHRs. An Audit Data Mart is described which identified EHR functionality within 836 departments, 22 hospitals and 170 clinics at Intermountain Healthcare, a large integrated delivery system. The Audit Data Mart successfully identified important and differing EHR functional usage patterns. These patterns were useful in strategic planning and tracking EHR implementations, and will likely be utilized to assist in documentation of "Meaningful Use" of EHR functionality.
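A toy sketch of the data-mart idea: aggregating raw audit-log events into per-department counts of distinct users exercising each EHR function. The column names and the pandas approach are assumptions for illustration; the paper's actual schema is not described in the abstract:

```python
import pandas as pd

# Hypothetical audit-log extract: one row per logged EHR action.
audit = pd.DataFrame({
    "department": ["ICU", "ICU", "ED", "ED", "ED"],
    "function":   ["order_entry", "order_entry", "order_entry", "notes", "notes"],
    "user_id":    [101, 102, 201, 201, 202],
})

# Functional-usage pattern: distinct users per department and function.
usage = (audit.groupby(["department", "function"])["user_id"]
              .nunique()
              .rename("active_users")
              .reset_index())
print(usage)
```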
Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review
Chung, Stephanie T.; Chacko, Shaji K.; Sunehag, Agneta L.
2015-01-01
Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotopes methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. PMID:26604176
Gupta, Munish; Kaplan, Heather C
2017-09-01
Quality improvement (QI) is based on measuring performance over time, and variation in data measured over time must be understood to guide change and make optimal improvements. Common cause variation is natural variation owing to factors inherent to any process; special cause variation is unnatural variation owing to external factors. Statistical process control methods, and particularly control charts, are robust tools for understanding data over time and identifying common and special cause variation. This review provides a practical introduction to the use of control charts in health care QI, with a focus on neonatology. Copyright © 2017 Elsevier Inc. All rights reserved.
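A minimal sketch of the individuals (XmR) control chart such reviews build on: the centre line is the mean, and the 3-sigma limits are estimated from the average moving range (1.128 is the standard d2 factor for subgroups of size 2). The sample data are hypothetical:

```python
import numpy as np

def xmr_limits(values):
    """Individuals-chart centre line and 3-sigma control limits."""
    values = np.asarray(values, dtype=float)
    centre = values.mean()
    mr_bar = np.abs(np.diff(values)).mean()   # average moving range
    sigma = mr_bar / 1.128                    # d2 constant for n = 2
    return centre, centre - 3 * sigma, centre + 3 * sigma

rates = [4.1, 3.8, 4.5, 4.0, 3.6, 4.2, 3.9, 4.1]   # e.g., monthly event rates
centre, lcl, ucl = xmr_limits(rates)
print(centre, lcl, ucl)
# Points outside (lcl, ucl) would signal special cause variation;
# points scattering within the limits reflect common cause variation.
```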
NASA Astrophysics Data System (ADS)
Potter, Jennifer L.
2011-12-01
Reduction of noise and vibration has long been sought in major industries: automotive, aerospace, and marine, to name a few. Products must be tested and pass certain levels of federally regulated standards before entering the market. Vibration measurements are commonly acquired using accelerometers; however, limitations of this method create a need for alternative solutions. Two methods for non-contact vibration measurement are compared: laser vibrometry, which directly measures the surface velocity of an aluminum plate, and nearfield acoustic holography (NAH), which measures sound pressure in the nearfield and, using Green's functions, reconstructs the surface velocity at the plate. The surface velocity from each method is then used in modal analysis to determine the comparability of frequency, damping and mode shapes. Frequency and mode shapes are also compared to an FEA model. Laser vibrometry is a proven, direct method for determining surface velocity and subsequently calculating modal analysis results. NAH is an effective method for locating noise sources, especially those that are not well separated spatially. Little work has been done on incorporating NAH into modal analysis.
NASA Astrophysics Data System (ADS)
Hsu, Jung-Jiin
2015-08-01
In MRI, the flip angle (FA) of slice-selective excitation is not uniform across the slice-thickness dimension. This work investigates the effect of the non-uniform FA profile on the accuracy of a commonly used method for T1 measurement, in which the T1 value, i.e., the longitudinal relaxation time, is determined from the steady-state signals of an equally spaced RF pulse train. By using numerical solutions of the Bloch equation, it is shown that, because of the non-uniform FA profile, the outcome of the T1 measurement depends significantly on the T1 of the specimen and on the FA and the inter-pulse spacing τ of the pulse train. A new method to restore the accuracy of the T1 measurement is described. Different from existing approaches, the new method also removes the FA profile effect from the measurement of the FA, which is normally a part of the T1 measurement. In addition, the new method does not involve theoretical modeling, approximation, or modification to the underlying principle of the T1 measurement. An imaging experiment is performed, which shows that the new method can remove the FA-, τ-, and T1-dependence and produce T1 measurements in excellent agreement with those obtained from a gold standard method (the inversion-recovery method).
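The steady-state signal underlying this class of T1 measurement (the standard spoiled pulse-train form S ∝ sin α·(1−E1)/(1−E1·cos α), with E1 = exp(−τ/T1)) can be written down directly; averaging it over a non-uniform flip-angle profile across the slice illustrates the bias the paper addresses. The Gaussian slice profile below is an illustrative assumption, not the paper's measured profile:

```python
import numpy as np

def steady_state_signal(alpha, tau, T1):
    """Spoiled steady-state signal for flip angle alpha (rad), pulse spacing tau."""
    E1 = np.exp(-tau / T1)
    return np.sin(alpha) * (1 - E1) / (1 - E1 * np.cos(alpha))

def slice_averaged_signal(alpha_nominal, tau, T1, n=101):
    """Average over a hypothetical Gaussian flip-angle profile across the slice."""
    z = np.linspace(-2, 2, n)
    profile = alpha_nominal * np.exp(-z**2 / 2.0)   # FA varies across the slice
    return steady_state_signal(profile, tau, T1).mean()

# The ratio of the two values depends on T1 and tau, which is the bias source.
print(steady_state_signal(np.deg2rad(15), 0.01, 1.0))
print(slice_averaged_signal(np.deg2rad(15), 0.01, 1.0))
```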
Flexible, multi-measurement guided wave damage detection under varying temperatures
NASA Astrophysics Data System (ADS)
Douglass, Alexander C. S.; Harley, Joel B.
2018-04-01
Temperature compensation in structural health monitoring helps identify damage in a structure by removing data variations due to environmental conditions, such as temperature. Stretch-based methods are among the most commonly used temperature compensation methods. To account for variations in temperature, stretch-based methods stretch signals in time to optimally match a measurement to a baseline. All of the data are then compared with the single baseline to determine the presence of damage. Yet, for these methods to be effective, the measurement and the baseline must satisfy the inherent assumptions of the temperature compensation method. In many scenarios, these assumptions are wrong, the methods generate error, and damage detection fails. To improve damage detection, a multi-measurement damage detection method is introduced. By using each measurement in the dataset as a baseline, error caused by imperfect temperature compensation is reduced. The multi-measurement method increases the detection effectiveness of our damage metric, or damage indicator, over time and reduces the presence of additional peaks caused by temperature that could be mistaken for damage. By using many baselines, the variance of the damage indicator is reduced and the effects from damage are amplified. Notably, the multi-measurement method improves damage detection over single-measurement methods. This is demonstrated through an increase in the maximum of our damage signature from 0.55 to 0.95 (where large values, up to a maximum of one, represent a statistically significant change in the data due to damage).
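A minimal sketch of the stretch-based step the method builds on: resample a baseline over a grid of stretch factors and keep the one maximizing correlation with the measurement. The grid search and all names are illustrative; the paper's optimization details are not given in the abstract:

```python
import numpy as np

def best_stretch(baseline, measurement, factors=np.linspace(0.98, 1.02, 201)):
    """Return the stretch factor that best aligns baseline to measurement."""
    t = np.arange(len(baseline), dtype=float)
    best, best_corr = 1.0, -np.inf
    for f in factors:
        stretched = np.interp(t, t * f, baseline)   # resample baseline in time
        corr = np.corrcoef(stretched, measurement)[0, 1]
        if corr > best_corr:
            best, best_corr = f, corr
    return best, best_corr

# Demo: a baseline stretched by 0.6% should be recovered at f ~ 1.006.
base = np.sin(np.linspace(0, 20, 1000))
meas = np.interp(np.arange(1000), np.arange(1000) * 1.006, base)
print(best_stretch(base, meas))
# The multi-measurement idea then treats every record as a baseline and
# aggregates the per-baseline damage indicators to suppress residual error.
```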
Balanced detection for self-mixing interferometry to improve signal-to-noise ratio
NASA Astrophysics Data System (ADS)
Zhao, Changming; Norgia, Michele; Li, Kun
2018-01-01
We apply balanced detection to self-mixing interferometry for displacement and vibration measurement, using two photodiodes to implement a differential acquisition. The method is based on the phase opposition of the self-mixing signal measured between the two laser diode facet outputs. The balanced signal is obtained by amplifying the self-mixing signal while canceling the common-mode noise, mainly due to disturbances on the laser supply and the transimpedance amplifier. Experimental results demonstrate that the signal-to-noise ratio improves significantly, with the signal nearly doubled and the noise reduced by more than half. This method allows for more robust, longer-distance measurement systems, especially using fringe counting.
NASA Astrophysics Data System (ADS)
Egorov, A. V.; Kozlov, K. E.; Belogusev, V. N.
2018-01-01
In this paper, we propose a new method and instruments to identify the torque, power, and efficiency of internal combustion engines under transient conditions. In contrast to the commonly used non-demounting methods based on inertia and strain-gauge dynamometers, this method allows the main performance parameters of internal combustion engines to be monitored under transient conditions without the inaccuracy caused by torque losses in the transfer to the driving wheels, where the torque is measured by existing methods. In addition, the proposed method is easy to implement and does not use strain measurement instruments, which cannot capture rapidly varying values of the measured parameters and therefore cannot account for the actual parameters when engineering wheeled vehicles. Thus the use of this method can greatly improve measurement accuracy and reduce cost and labour during testing of internal combustion engines. The results of experiments showed the applicability of the proposed method for identification of internal combustion engine performance parameters. Finally, the most suitable transmission ratio for use with the proposed method was determined.
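An inertia-style, crankshaft-side computation of the kind implied here is short: with the effective moment of inertia J known, torque follows from the measured angular acceleration (τ = J·dω/dt) and power from torque times speed. The numbers and names are illustrative assumptions; the authors' instruments are not described in the abstract:

```python
import numpy as np

def torque_and_power(omega, t, J):
    """Torque tau = J * domega/dt and power P = tau * omega from a speed trace."""
    alpha = np.gradient(omega, t)     # angular acceleration, rad/s^2
    tau = J * alpha                   # N*m
    return tau, tau * omega           # N*m, W

# Hypothetical run-up: engine speed rising from 100 to 300 rad/s over 2 s.
t = np.linspace(0.0, 2.0, 500)
omega = 100.0 + 100.0 * t
tau, power = torque_and_power(omega, t, J=0.25)   # J in kg*m^2, assumed
print(tau[0], power[-1])   # ~25 N*m throughout; ~7.5 kW at the end of the ramp
```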
A new method of measuring gravitational acceleration in an undergraduate laboratory program
NASA Astrophysics Data System (ADS)
Wang, Qiaochu; Wang, Chang; Xiao, Yunhuan; Schulte, Jurgen; Shi, Qingfan
2018-01-01
This paper presents a high-accuracy method to measure gravitational acceleration in an undergraduate laboratory program. The experiment is based on water in a cylindrical vessel rotating about its vertical axis at a constant speed. The water surface forms a paraboloid whose focal length is related to the rotational period and gravitational acceleration. This experimental setup avoids the classical sources of error in determining the local value of gravitational acceleration that are so prevalent in the common simple pendulum and inclined plane experiments. The presented method combines multiple physics concepts such as kinematics, classical mechanics and geometric optics, offering the opportunity for lateral as well as project-based learning.
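The relation behind the experiment is standard: the rotating surface is z = ω²r²/(2g), and comparing with the parabola z = r²/(4f) gives a focal length f = g/(2ω²), hence g = 2ω²f = 8π²f/T². A minimal sketch with hypothetical measured values:

```python
import numpy as np

def g_from_paraboloid(focal_length_m, period_s):
    """g = 8*pi^2*f/T^2, from z = omega^2 r^2/(2g) matched to z = r^2/(4f)."""
    return 8.0 * np.pi ** 2 * focal_length_m / period_s ** 2

# Hypothetical: optically measured focal length 0.124 m at one turn per second.
print(g_from_paraboloid(0.124, 1.0))   # ~9.79 m/s^2
```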
Kwon, Dohyeon; Jeon, Chan-Gi; Shin, Junho; Heo, Myoung-Sun; Park, Sang Eon; Song, Youjian; Kim, Jungwon
2017-01-01
Timing jitter is one of the most important properties of femtosecond mode-locked lasers and optical frequency combs. Accurate measurement of timing jitter power spectral density (PSD) is a critical prerequisite for optimizing overall noise performance and further advancing comb applications both in the time and frequency domains. Commonly used jitter measurement methods require a reference mode-locked laser with timing jitter similar to or lower than that of the laser-under-test, which is a demanding requirement for many laser laboratories, and/or have limited measurement resolution. Here we show a high-resolution and reference-source-free measurement method of timing jitter spectra of optical frequency combs using an optical fibre delay line and optical carrier interference. The demonstrated method works well for both mode-locked oscillators and supercontinua, with 2 × 10⁻⁹ fs²/Hz (equivalent to −174 dBc/Hz at 10-GHz carrier frequency) measurement noise floor. The demonstrated method can serve as a simple and powerful characterization tool for timing jitter PSDs of various comb sources including mode-locked oscillators, supercontinua and recently emerging Kerr-frequency combs; the jitter measurement results enabled by our method will provide new insights for understanding and optimizing timing noise in such comb sources. PMID:28102352
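The quoted equivalence can be checked directly with the standard conversion from single-sideband phase noise L(f) in dBc/Hz at carrier f_c to timing-jitter PSD: S_t(f) = 2·10^(L/10)/(2π·f_c)². A minimal sketch confirming the abstract's numbers:

```python
import numpy as np

def jitter_psd_fs2_per_hz(L_dbc_hz, carrier_hz):
    """Convert SSB phase noise (dBc/Hz) to timing-jitter PSD in fs^2/Hz."""
    S_phi = 2.0 * 10.0 ** (L_dbc_hz / 10.0)          # rad^2/Hz (both sidebands)
    S_t = S_phi / (2.0 * np.pi * carrier_hz) ** 2    # s^2/Hz
    return S_t / 1e-30                               # (1 fs)^2 = 1e-30 s^2

print(jitter_psd_fs2_per_hz(-174.0, 10e9))   # ~2e-9, matching the quoted floor
```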
Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay
2013-01-01
Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
Absorbed dose measurement in low temperature samples: comparative methods using simulated material
NASA Astrophysics Data System (ADS)
Garcia, Ruth; Harris, Anthony; Winters, Martell; Howard, Betty; Mellor, Paul; Patil, Deepak; Meiner, Jason
2004-09-01
There is a growing need to reliably measure absorbed dose in low-temperature samples, especially in the pharmaceutical and tissue banking industries. All dosimetry systems commonly used in the irradiation industry are temperature sensitive. Irradiation of low-temperature samples, such as those packaged with dry ice, must therefore take these dosimeter temperature effects into consideration. This paper suggests a method to accurately deliver an absorbed radiation dose using dosimetry techniques designed to abrogate the skewing effects of low-temperature environments on existing dosimetry systems.
Characterization of Diesel Soot Aggregates by Scattering and Extinction Methods
NASA Astrophysics Data System (ADS)
Kamimoto, Takeyuki
2006-07-01
Characteristics of diesel soot particles sampled from the exhaust of a common-rail turbo-charged diesel engine are quantified by scattering and extinction diagnostics using two newly built laser-based instruments. The radius of gyration representing the aggregate size is measured from the angular distribution of scattering intensity, while the soot mass concentration is measured by a two-wavelength extinction method. An approach to estimate the refractive index of diesel soot by analysis of the extinction and scattering data using an aggregate scattering theory is proposed.
Conceptual measurement framework for help-seeking for mental health problems
Rickwood, Debra; Thomas, Kerry
2012-01-01
Background Despite a high level of research, policy, and practice interest in help-seeking for mental health problems and mental disorders, there is currently no agreed and commonly used definition or conceptual measurement framework for help-seeking. Methods A systematic review of research activity in the field was undertaken to investigate how help-seeking has been conceptualized and measured. Common elements were used to develop a proposed conceptual measurement framework. Results The database search revealed a very high level of research activity and confirmed that there is no commonly applied definition of help-seeking and no psychometrically sound measures that are routinely used. The most common element in the help-seeking research was a focus on formal help-seeking sources, rather than informal sources, although studies did not assess a consistent set of professional sources; rather, each study addressed an idiosyncratic range of sources of professional health and community care. Similarly, the studies considered help-seeking for a range of mental health problems and no consistent terminology was applied. The most common mental health problem investigated was depression, followed by use of generic terms, such as mental health problem, psychological distress, or emotional problem. Major gaps in the consistent measurement of help-seeking were identified. Conclusion It is evident that an agreed definition that supports the comparable measurement of help-seeking is lacking. Therefore, a conceptual measurement framework is proposed to fill this gap. The framework maintains that the essential elements for measurement are: the part of the help-seeking process to be investigated and respective time frame, the source and type of assistance, and the type of mental health concern. It is argued that adopting this framework will facilitate progress in the field by providing much needed conceptual consistency. Results will then be able to be compared across studies and population groups, and this will significantly benefit understanding of policy and practice initiatives aimed at improving access to and engagement with services for people with mental health concerns. PMID:23248576
Methods for the accurate estimation of confidence intervals on protein folding ϕ-values
Ruczinski, Ingo; Sosnick, Tobin R.; Plaxco, Kevin W.
2006-01-01
ϕ-Values provide an important benchmark for the comparison of experimental protein folding studies to computer simulations and theories of the folding process. Despite the growing importance of ϕ measurements, however, formulas to quantify the precision with which ϕ is measured have seen little significant discussion. Moreover, a commonly employed method for the determination of standard errors on ϕ estimates assumes that estimates of the changes in free energy of the transition and folded states are independent. Here we demonstrate that this assumption is usually incorrect and that this typically leads to the underestimation of ϕ precision. We derive an analytical expression for the precision of ϕ estimates (assuming linear chevron behavior) that explicitly takes this dependence into account. We also describe an alternative method that implicitly corrects for the effect. By simulating experimental chevron data, we show that both methods accurately estimate ϕ confidence intervals. We also explore the effects of the commonly employed techniques of calculating ϕ from kinetics estimated at non-zero denaturant concentrations and via the assumption of parallel chevron arms. We find that these approaches can produce significantly different estimates for ϕ (again, even for truly linear chevron behavior), indicating that they are not equivalent, interchangeable measures of transition state structure. Lastly, we describe a Web-based implementation of the above algorithms for general use by the protein folding community. PMID:17008714
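The dependence the authors highlight enters through the covariance term of the delta-method variance for a ratio ϕ = x/y: Var(ϕ) ≈ ϕ²(σx²/x² + σy²/y² − 2σxy/(xy)). With positively correlated ΔΔG estimates, dropping σxy overstates the variance, i.e., understates the precision, as the abstract notes. A generic sketch of that propagation, not the authors' exact chevron-based formula:

```python
def phi_variance(x, y, var_x, var_y, cov_xy):
    """Delta-method variance of phi = x/y with correlated numerator/denominator."""
    phi = x / y
    return phi**2 * (var_x / x**2 + var_y / y**2 - 2.0 * cov_xy / (x * y))

# Hypothetical ddG_TS = 1.2, ddG_eq = 2.0 kcal/mol with positive covariance.
naive = phi_variance(1.2, 2.0, 0.04, 0.06, 0.0)    # independence assumed
full  = phi_variance(1.2, 2.0, 0.04, 0.06, 0.03)   # covariance included
print(naive, full)   # naive variance is larger -> precision underestimated
```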
Empirical methods for assessing meaningful neuropsychological change following epilepsy surgery.
Sawrie, S M; Chelune, G J; Naugle, R I; Lüders, H O
1996-11-01
Traditional methods for assessing the neurocognitive effects of epilepsy surgery are confounded by practice effects, test-retest reliability issues, and regression to the mean. This study employs 2 methods for assessing individual change that allow direct comparison of changes across both individuals and test measures. Fifty-one medically intractable epilepsy patients completed a comprehensive neuropsychological battery twice, approximately 8 months apart, prior to any invasive monitoring or surgical intervention. First, a Reliable Change (RC) index score was computed for each test score to take into account the reliability of that measure, and a cutoff score was empirically derived to establish the limits of statistically reliable change. These indices were subsequently adjusted for expected practice effects. The second approach used a regression technique to establish "change norms" along a common metric that models both expected practice effects and regression to the mean. The RC index scores provide the clinician with a statistical means of determining whether a patient's retest performance is "significantly" changed from baseline. The regression norms for change allow the clinician to evaluate the magnitude of a given patient's change on 1 or more variables along a common metric that takes into account the reliability and stability of each test measure. Case data illustrate how these methods provide an empirically grounded means for evaluating neurocognitive outcomes following medical interventions such as epilepsy surgery.
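The first method can be sketched with the standard Jacobson-Truax form: RC = (X2 − X1 − Δpractice)/S_diff with S_diff = SD_baseline·√(2(1−r_xx)), and |RC| > 1.96 marking change beyond the 95% band of retest error. A minimal sketch under those standard definitions; the paper's empirically derived cutoffs and practice-effect adjustments may differ:

```python
import numpy as np

def reliable_change(x1, x2, sd_baseline, r_xx, practice_effect=0.0):
    """Jacobson-Truax style RC index, adjusted for expected practice effects."""
    s_diff = sd_baseline * np.sqrt(2.0 * (1.0 - r_xx))
    return (x2 - x1 - practice_effect) / s_diff

# Hypothetical memory score: baseline 95, retest 104, test SD 10,
# retest reliability 0.85, expected practice gain of 3 points.
rc = reliable_change(95, 104, sd_baseline=10, r_xx=0.85, practice_effect=3.0)
print(rc, abs(rc) > 1.96)   # ~1.10 -> not a statistically reliable change
```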
Experimental studies of deposition at a debris-flow flume
Major, Jon J.
1995-01-01
Geologists commonly infer the flow conditions and the physical properties of debris flows from the sedimentologic, stratigraphic, and morphologic characteristics of their deposits. However, such inferences commonly lack corroboration by direct observation because the capricious nature of debris flows makes systematic observation and measurement of natural events both difficult and dangerous. Furthermore, in contrast to the numerous experimental studies of water flow and related fluvial deposition, few real-time observations and measurements of sediment deposition by large-scale mass flow of debris under controlled conditions have been made. Recent experiments at the U.S. Geological Survey debris-flow flume in the H. J. Andrews Experimental Forest, Oregon (Iverson and others, 1992) are shedding new insight on sediment deposition by debris flows and on the veracity of methods commonly used to reconstruct flow character from deposit characteristics.
Pharmacists' perspectives on monitoring adherence to treatment in Cystic Fibrosis.
Mooney, Karen; Ryan, Cristín; Downey, Damian G
2016-04-01
Cystic Fibrosis (CF) management requires complex treatment regimens, but adherence to treatment is poor and has negative health implications. There are various methods of measuring adherence, but little is known regarding the extent of adherence measurement in CF centres throughout the UK and Ireland. The aims were to determine the adherence monitoring practices in CF centres throughout the UK and Ireland, and to establish CF pharmacists' views on these practices. A questionnaire was designed, piloted and distributed to pharmacists attending the UK and Ireland Cystic Fibrosis Pharmacists' Group's annual meeting (2014). The main outcome measures were the methods of inhaled/nebulised antibiotic supply and the methods used to measure treatment adherence in CF centres. The questionnaire also ascertained the demographic information of participating pharmacists. Closed question responses were analysed using descriptive statistics; open questions were analysed using content analysis. Twenty-one respondents (84% response) were included in the analysis, mostly from English centres (66.7%). Detailed records of patients receiving their inhaled/nebulised antibiotics were lacking. Adherence was most commonly described as being measured at 'every clinic visit' (28.6%) or 'occasionally' (28.6%). Patient self-reported adherence was the most commonly used method of measuring adherence in practice (90.5%). The availability of electronic adherence monitoring in CF centres did not guarantee its use. Pharmacists attributed equal professional responsibility for adherence monitoring in CF to consultants, nurses and pharmacists. Seventy-six percent of pharmacists felt that the current adherence monitoring practices within their own unit were inadequate and associated with the absence of sufficient specialist CF pharmacist involvement. Many suggested that greater specialist pharmacist involvement could facilitate improved adherence monitoring. Current adherence knowledge is largely based on self-report. Further work is required to establish the most appropriate method of adherence monitoring in CF centres, to improve the recording of adherence and to understand the impact of increased specialist pharmacist involvement on adherence.
Converting among log scaling methods: Scribner, International, and Doyle versus cubic
Henry Spelter
2004-01-01
Sawlogs in the United States, whether scaled on the ground or cruised on the stump, have traditionally been measured in terms of their lumber yield. The three commonly used measurement rules generally underestimate true recoveries. Moreover, they do so inconsistently, complicating the comparisons of volumes obtained by different board foot rules as well as by the cubic...
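For concreteness, the Doyle rule and a cubic (Smalian) volume can be written side by side; Doyle's heavy slab allowance is why it underestimates recovery, especially on small logs. These are the standard published formulas; the conversion ratios developed in the paper itself are not reproduced here:

```python
import math

def doyle_bf(d_small_end_in, length_ft):
    """Doyle rule board feet: ((D - 4)/4)^2 * L, D in inches, L in feet."""
    return ((d_small_end_in - 4.0) / 4.0) ** 2 * length_ft

def smalian_cubic_ft(d_small_in, d_large_in, length_ft):
    """Smalian cubic volume: average of the two end areas times length."""
    a_small = math.pi * (d_small_in / 24.0) ** 2   # end area, ft^2
    a_large = math.pi * (d_large_in / 24.0) ** 2
    return 0.5 * (a_small + a_large) * length_ft

print(doyle_bf(12, 16))              # 64 board feet
print(smalian_cubic_ft(12, 14, 16))  # ~14.8 ft^3 for the same log
```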
A Weighted Multipath Measurement Based on Gene Ontology for Estimating Gene Products Similarity
Liu, Lizhen; Dai, Xuemin; Song, Wei; Lu, Jingli
2014-01-01
Many different methods have been proposed for calculating the semantic similarity of term pairs based on gene ontology (GO). Most existing methods are based on information content (IC), and methods based on IC are used more commonly than those based on the structure of GO. However, most IC-based methods not only fail to handle identical annotations but also show a strong bias toward well-annotated proteins. We propose a new method called weighted multipath measurement (WMM) for estimating the semantic similarity of gene products based on the structure of the GO. We consider the contribution of every path between two GO terms and also take the depth of their lowest common ancestors into account, assigning different weights to different kinds of edges in the GO graph. The similarity values calculated by WMM can be reused because they depend only on the characteristics of the GO terms. Experimental results showed that the similarity values obtained by WMM have higher accuracy. We compared the performance of WMM with that of other methods using GO data and gene annotation datasets for yeast and humans downloaded from the GO database. We found that WMM is better suited for prediction of gene function than most existing IC-based methods and that it can distinguish proteins with identical annotations (two proteins annotated with the same terms) from each other. PMID:25229994
Kullback-Leibler divergence for detection of rare haplotype common disease association.
Lin, Shili
2015-11-01
Rare haplotypes may tag rare causal variants of common diseases; hence, detection of such rare haplotypes may also contribute to our understanding of complex disease etiology. Because rare haplotypes frequently result from common single-nucleotide polymorphisms (SNPs), focusing on rare haplotypes is much more economical compared with using rare single-nucleotide variants (SNVs) from sequencing, as SNPs are available and 'free' from already amassed genome-wide studies. Further, associated haplotypes may shed light on the underlying disease causal mechanism, a feat unmatched by SNV-based collapsing methods. In recent years, data mining approaches have been adapted to detect rare haplotype association. However, as they rely on an assumed underlying disease model and require the specification of a null haplotype, results can be erroneous if such assumptions are violated. In this paper, we present a haplotype association method based on Kullback-Leibler divergence (hapKL) for case-control samples. The idea is to compare haplotype frequencies for the cases versus the controls by computing symmetrical divergence measures. An important property of such measures is that both the frequencies and logarithms of the frequencies contribute in parallel, thus balancing the contributions from rare and common, and accommodating both deleterious and protective, haplotypes. A simulation study under various scenarios shows that hapKL has well-controlled type I error rates and good power compared with existing data mining methods. Application of hapKL to age-related macular degeneration (AMD) shows a strong association of the complement factor H (CFH) gene with AMD, identifying several individual rare haplotypes with strong signals.
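The core quantity can be sketched as the symmetrized (Jeffreys) KL divergence between case and control haplotype frequency vectors, J(p,q) = Σ (p_i − q_i)·ln(p_i/q_i), in which frequencies and log-frequencies contribute in parallel, as the abstract notes. A minimal sketch; the smoothing constant is an assumption to avoid zero frequencies, and hapKL's test machinery is not reproduced:

```python
import numpy as np

def jeffreys_divergence(p, q, eps=1e-8):
    """Symmetrized KL divergence between two frequency vectors."""
    p = np.asarray(p, float) + eps; p /= p.sum()
    q = np.asarray(q, float) + eps; q /= q.sum()
    return float(np.sum((p - q) * np.log(p / q)))

cases    = [0.62, 0.30, 0.05, 0.03]   # haplotype frequencies in cases
controls = [0.70, 0.27, 0.02, 0.01]   # rare haplotypes enriched in cases
print(jeffreys_divergence(cases, controls))
```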
Roberts, Sarah P; Siegel, Michael B; DeJong, William; Jernigan, David H
2014-11-01
Adolescent alcohol consumption remains common and is associated with many negative health outcomes. Unfortunately, common alcohol surveillance methods often underestimate consumption. Improved alcohol use measures are needed to characterize the landscape of youth drinking. We aimed to compare a standard quantity-frequency measure of youth alcohol consumption to a novel brand-specific measure. We recruited a sample of 1031 respondents across the United States to complete an online survey. Analyses included 833 male and female underage drinkers ages 13-20. Respondents reported on how many of the past 30 days they consumed alcohol, and the number of drinks consumed on an average drinking day. Using our brand-specific measure, respondents identified which brands they consumed, how many days they consumed each brand, and how many drinks per brand they usually had. Youth reported consuming significantly more alcohol (on average, 11 drinks more per month) when responding to the brand-specific versus the standard measure (p < 0.001). The two major predictors of the difference between the two measures were being a heavy episodic drinker (p < 0.001, 95% CI = 4.1-12.0) and the total number of brands consumed (p < 0.001, 95% CI = 2.0-2.8). This study contributes to the field of alcohol and adolescent research first by investigating a potentially more accurate alcohol surveillance method, and secondly by promoting the assessment of alcohol use among adolescents vulnerable to risky alcohol use. Finally, our survey addresses the potential impact of alcohol marketing on youth and their subsequent alcohol brand preferences and consumption.
Fruit Quality Evaluation Using Spectroscopy Technology: A Review
Wang, Hailong; Peng, Jiyu; Xie, Chuanqi; Bao, Yidan; He, Yong
2015-01-01
An overview is presented with regard to applications of visible and near infrared (Vis/NIR) spectroscopy, multispectral imaging and hyperspectral imaging techniques for quality attribute measurement and variety discrimination of various fruit species, i.e., apple, orange, kiwifruit, peach, grape, strawberry, jujube, banana, mango and others. Some commonly utilized chemometrics, including pretreatment methods, variable selection methods, discriminant methods and calibration methods, are briefly introduced. The comprehensive review of applications, which concentrates primarily on Vis/NIR spectroscopy, is arranged according to fruit species. Most of the applications are focused on variety discrimination or the measurement of soluble solids content (SSC), acidity and firmness, although some measurements involving dry matter, vitamin C, polyphenols and pigments have also been reported. The feasibility of different spectral modes, i.e., reflectance, interactance and transmittance, is discussed. Optimal variable selection methods and calibration methods for measuring different attributes of different fruit species are addressed. Special attention is paid to sample preparation and the influence of the environment. Areas where further investigation is needed and problems concerning model robustness and model transfer are identified. PMID:26007736
NASA Technical Reports Server (NTRS)
Szatkowski, George N.; Dudley, Kenneth L.; Koppen, Sandra V.; Ely, Jay J.; Nguyen, Truong X.; Ticatch, Larry A.; Mielnik, John J.; Mcneill, Patrick A.
2013-01-01
To support FAA certification airworthiness standards, composite substrates are subjected to lightning direct-effect electrical waveforms to determine performance characteristics of the lightning strike protection (LSP) conductive layers used to protect composite substrates. Test results collected from independent LSP studies are often incomparable due to variability in test procedures and applied practices at different organizations, which impairs performance correlations between different LSP data sets. Under a NASA-supported contract, The Boeing Company developed technical procedures and documentation as guidance to facilitate a universal common practice lightning strike protection test method. The procedures provide conformity in future lightning strike protection evaluations to allow meaningful performance correlations across data sets. This universal common practice guidance provides the manufacturing specifications to fabricate carbon fiber reinforced plastic (CFRP) test panels, including finish, grounding configuration, and acceptable methods for pretest nondestructive inspection (NDI) and posttest destructive inspection. The test operations guidance elaborates on the provisions contained in SAE ARP5416 to address inconsistencies in the generation of damage protection performance data, so as to provide for maximum achievable correlation across capable lab facilities. In addition, the guidance details a direct-effects test bed design to aid in quantification of the multi-physical phenomena surrounding a lightning direct attachment, supporting validation data requirements for the development of predictive computational modeling. The lightning test bed is designed to accommodate a repeatable installation procedure to secure the test panel and eliminate test installation uncertainty. It also facilitates a means to capture the electrical waveform parameters in two dimensions, along with the mechanical displacement and thermal heating parameters which occur during lightning attachment. Following guidance defined in the universal common practice LSP test documents, protected and unprotected CFRP panels were evaluated at 20, 40 and 100 kA. This report presents analyzed data demonstrating the scientific usefulness of the common practice approach. Descriptions of the common practice CFRP test articles, the LSP test bed fixture, and the monitoring techniques used to capture the electrical, mechanical and thermal parameters during lightning attachment are presented here. Two methods of measuring the electrical currents were evaluated: inductive current probes and a newly developed fiber-optic sensor. Two mechanical displacement methods were also examined: optical laser measurement sensors and a digital image correlation camera system. Recommendations are provided to help users implement the common practice test approach and obtain LSP test characterizations comparable across data sets.
Support for Alzheimer's Caregivers: Psychometric Evaluation of Familial and Friend Support Measures
ERIC Educational Resources Information Center
Wilks, Scott E.
2009-01-01
Objective: Information on the shortened, 20-item version of the Perceived Social Support Scale (S-PSSS) is scarce. The purpose of this study is to evaluate the psychometric properties of the S-PSSS Family (SSfa) and Friends (SSfr) subscales. Method: Because of their common coping method of social support, a cross-sectional sample of Alzheimer's…
A simple method for estimating potential relative radiation (PRR) for landscape-vegetation analysis.
Kenneth B. Pierce Jr.; Todd Lookingbill; Dean Urban
2005-01-01
Radiation is one of the primary influences on vegetation composition and spatial pattern. Topographic orientation is often used as a proxy for relative radiation load due to its effects on evaporative demand and local temperature. Common methods for incorporating this information (i.e., site measures of slope and aspect) fail to include daily or annual changes in solar...
Water vapor mass balance method for determining air infiltration rates in houses
David R. DeWalle; Gordon M. Heisler
1980-01-01
A water vapor mass balance technique that includes the use of common humidity-control equipment can be used to determine average air infiltration rates in buildings. Only measurements of the humidity inside and outside the home, the mass of vapor exchanged by a humidifier/dehumidifier, and the volume of interior air space are needed. This method gives results that...
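The balance itself is short: at steady state, the vapor mass a humidifier (or dehumidifier, with signs flipped) exchanges equals what infiltration carries across the envelope, so the air-change rate is ACH = E/(ρ·V·(w_in − w_out)). A minimal sketch; the variable names and example values are assumptions, and the paper's field protocol is not reproduced in the abstract:

```python
def air_changes_per_hour(vapor_kg_per_h, volume_m3, w_in, w_out, rho_air=1.2):
    """Infiltration rate from a steady-state water-vapor mass balance.

    vapor_kg_per_h : vapor added by the humidifier at steady state
    w_in, w_out    : indoor/outdoor humidity ratios, kg water per kg dry air
    rho_air        : air density, kg/m^3
    """
    return vapor_kg_per_h / (rho_air * volume_m3 * (w_in - w_out))

# Hypothetical: 0.9 kg/h of vapor holds indoor air at w = 0.008 vs 0.004 outside.
print(air_changes_per_hour(0.9, 350.0, 0.008, 0.004))   # ~0.54 ACH
```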
ERIC Educational Resources Information Center
Newton, Warren P.; Lefebvre, Ann; Donahue, Katrina E.; Bacon, Thomas; Dobson, Allen
2010-01-01
Introduction: Little is known regarding how to accomplish large-scale health care improvement. Our goal is to improve the quality of chronic disease care in all primary care practices throughout North Carolina. Methods: Methods for improvement include (1) common quality measures and shared data system; (2) rapid cycle improvement principles; (3)…
Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.
Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A
2008-09-01
The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).
Varughese, J K; Wentzel-Larsen, T; Vassbotn, F; Moen, G; Lund-Johansen, M
2010-04-01
In this volumetric study of the vestibular schwannoma (VS), we evaluated the accuracy and reliability of several approximation methods that are in use, and determined the minimum volume difference that needs to be measured for it to be attributable to an actual difference rather than a retest error. We also found empirical proportionality coefficients for the different methods. DESIGN/SETTING AND PARTICIPANTS: Methodological study investigating three different VS measurement methods compared to a reference method based on serial slice volume estimates. The volume estimates were based on: (i) one single diameter, (ii) three orthogonal diameters or (iii) the maximal slice area. Altogether 252 T1-weighted MRI images with gadolinium contrast, from 139 VS patients, were examined. The retest errors, in terms of relative percentages, were determined by undertaking repeated measurements on 63 scans for each method. Intraclass correlation coefficients were used to assess the agreement between each of the approximation methods and the reference method. The tendency of the approximation methods to systematically overestimate or underestimate different-sized tumours was also assessed with the help of Bland-Altman plots. The most commonly used approximation method, the maximum diameter, was the least reliable measurement method and has inherent weaknesses that need to be considered. These include greater retest errors than area-based measurements (25% and 15%, respectively) and the fact that it was the only approximation method that could not easily be converted into volumetric units. Area-based measurements can furthermore reliably detect smaller volume differences than diameter-based measurements. All our findings suggest that the maximum diameter should not be used as an approximation method. We propose the use of measurement modalities that take into account growth in multiple dimensions instead.
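The three approximations map onto simple geometric estimators: a sphere from one diameter, an ellipsoid from three orthogonal diameters, and an area-based estimate scaling as A^(3/2). The first two are exact geometric formulas; the area coefficient below is the generic sphere-consistent choice, not the paper's empirically fitted proportionality coefficient:

```python
import math

def vol_single_diameter(d):
    """Sphere: V = pi * d^3 / 6."""
    return math.pi * d ** 3 / 6.0

def vol_three_diameters(d1, d2, d3):
    """Ellipsoid: V = pi * d1 * d2 * d3 / 6."""
    return math.pi * d1 * d2 * d3 / 6.0

def vol_max_slice_area(area):
    """Sphere-consistent area estimator: V = 4/(3*sqrt(pi)) * A^(3/2)."""
    return 4.0 / (3.0 * math.sqrt(math.pi)) * area ** 1.5

# Hypothetical 18 x 14 x 12 mm tumour: the single 18 mm axis overestimates.
print(vol_single_diameter(18))            # ~3054 mm^3
print(vol_three_diameters(18, 14, 12))    # ~1583 mm^3
print(vol_max_slice_area(math.pi * 81))   # ~3054 mm^3, consistent for a sphere
```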
Feature selection methods for big data bioinformatics: A survey from the search perspective.
Wang, Lipo; Wang, Yaoli; Chang, Qing
2016-12-01
This paper surveys the main principles of feature selection and their recent applications in big data bioinformatics. Instead of the commonly used categorization into filter, wrapper, and embedded approaches to feature selection, we formulate feature selection as a combinatorial optimization or search problem and categorize feature selection methods into exhaustive search, heuristic search, and hybrid methods, where heuristic search methods may further be categorized into those with or without data-distilled feature ranking measures. Copyright © 2016 Elsevier Inc. All rights reserved.
Ali, Shehzad; Ronaldson, Sarah
2012-09-01
The predominant method of economic evaluation is cost-utility analysis, which uses cardinal preference elicitation methods, including the standard gamble and time trade-off. However, such an approach is not suitable for understanding trade-offs between process attributes, non-health outcomes and health outcomes in order to evaluate current practices, develop new programmes and predict demand for services and products. Ordinal preference elicitation methods, including discrete choice experiments and ranking methods, are therefore commonly used in health economics and health services research. Cardinal methods have been criticized on the grounds of cognitive complexity, difficulty of administration, contamination by risk and preference attitudes, and potential violation of underlying assumptions. Ordinal methods have gained popularity because of reduced cognitive burden, a lower degree of abstract reasoning, reduced measurement error, ease of administration and the ability to use both health and non-health outcomes. The underlying assumptions of ordinal methods may be violated when respondents use cognitive shortcuts, cannot comprehend the ordinal task or interpret attributes and levels, use 'irrational' choice behaviour, or refuse to trade off certain attributes. CURRENT USE AND GROWING AREAS: Ordinal methods are commonly used to evaluate preferences for attributes of health services, products, practices, interventions, policies and, more recently, to estimate utility weights. AREAS FOR ON-GOING RESEARCH: There is growing research on developing optimal designs, evaluating the rationalization process, using qualitative tools for developing ordinal methods, evaluating consistency with utility theory, appropriate statistical methods for analysis, generalizability of results, and comparing ordinal methods against each other and with cardinal measures.
Reflecting on non-reflective action: An exploratory think-aloud study of self-report habit measures
Gardner, Benjamin; Tang, Vinca
2014-01-01
Objectives Within health psychology, habit – the tendency to enact action automatically as a learned response to contextual cues – is most commonly quantified using the ‘Self-Report Habit Index’, which assesses behavioural automaticity, or measures combining self-reported behaviour frequency and context stability. Yet, the use of self-report to capture habit has proven controversial. This study used ‘think-aloud’ methods to investigate problems experienced when completing these two measures. Design Cross-sectional survey with think-aloud study. Methods Twenty student participants narrated their thoughts while completing habit measures applied to four health-related behaviours (active commuting, unhealthy snacking, and one context-free and one context-specific variant of alcohol consumption). Data were coded using thematic analysis procedures. Results Problems were found in 10% of responses. Notable findings included participants lacking confidence in reporting automaticity, struggling to recall behaviour or cues, differing in interpretations of ‘commuting’, and misinterpreting items. Conclusions While most responses were unproblematic, and further work is needed to investigate habit self-reports among larger and more diverse samples, findings nonetheless question the sensitivity of the measures, and the conceptualization of habit underpinning common applications of them. We offer suggestions to minimize these problems. PMID:23869847
Rapid measurement of plasma free fatty acid concentration and isotopic enrichment using LC/MS
Persson, Xuan-Mai T.; Błachnio-Zabielska, Agnieszka Urszula; Jensen, Michael D.
2010-01-01
Measurements of plasma free fatty acid (FFA) concentration and isotopic enrichment are commonly used to evaluate FFA metabolism. Until now, gas chromatography-combustion-isotope ratio mass spectrometry (GC/C/IRMS) was the best method to measure isotopic enrichment in the methyl derivatives of 13C-labeled fatty acids. Although IRMS is excellent for analyzing enrichment, it requires time-consuming derivatization steps and is not optimal for measuring FFA concentrations. We developed a new, rapid, and reliable method for simultaneous quantification of 13C-labeled fatty acids in plasma using high-performance liquid chromatography-mass spectrometry (HPLC/MS). This method involves a very quick Dole extraction procedure and direct injection of the samples on the HPLC system. After chromatographic separation, the samples are directed to the mass spectrometer for electrospray ionization (ESI) and analysis in the negative mode using single ion monitoring. By employing equipment with two columns connected in parallel to a mass spectrometer, we can double the throughput to the mass spectrometer, reducing the analysis time per sample to 5 min. Palmitate flux measured using this approach agreed well with the GC/C/IRMS method. This HPLC/MS method provides accurate and precise measures of FFA concentration and enrichment. PMID:20526002
NASA Astrophysics Data System (ADS)
Kieseler, Jan
2017-11-01
A method is discussed that allows combining sets of differential or inclusive measurements. It is assumed that at least one measurement was obtained by simultaneously fitting a set of nuisance parameters representing sources of systematic uncertainty. As a result of beneficial constraints from the data, all such fitted parameters are correlated with each other. The best approach for a combination of these measurements would be the maximization of a combined likelihood, for which the full fit model of each measurement and the original data are required. However, this information is publicly available only in rare cases. In the absence of this information, most commonly used combination methods are not able to account for these correlations between uncertainties, which can lead to severe biases as shown in this article. The method discussed here provides a solution for this problem. It relies only on the public result and its covariance or Hessian, and is validated against the combined-likelihood approach. A dedicated software package implementing this method is also presented. It provides a text-based user interface alongside a C++ interface. The latter also interfaces to ROOT classes for the simple combination of binned measurements such as differential cross sections.
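The baseline such methods improve on, combining several measurements of one quantity with a full covariance matrix, is the standard best linear unbiased estimate (BLUE) with weights w = C⁻¹1/(1ᵀC⁻¹1); the bias targeted by the article arises when correlations induced by fitted nuisance parameters cannot be encoded in C. A minimal sketch of the standard combination with hypothetical numbers:

```python
import numpy as np

def blue_combination(values, covariance):
    """Best linear unbiased estimate of a common parameter from n measurements."""
    values = np.asarray(values, float)
    C_inv = np.linalg.inv(np.asarray(covariance, float))
    ones = np.ones_like(values)
    weights = C_inv @ ones / (ones @ C_inv @ ones)
    combined = weights @ values
    variance = 1.0 / (ones @ C_inv @ ones)
    return combined, np.sqrt(variance)

# Two measurements of the same quantity with correlated systematics.
vals = [172.5, 173.1]
cov = [[0.49, 0.20],
       [0.20, 0.64]]
print(blue_combination(vals, cov))
```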
2013-01-01
Background Plasma glucose levels are important measures in medical care and research, and are often obtained from oral glucose tolerance tests (OGTT) with repeated measurements over 2–3 hours. It is common practice to use simple summary measures of OGTT curves. However, different OGTT curves can yield similar summary measures, and information of physiological or clinical interest may be lost. Our main aim was to extract information inherent in the shape of OGTT glucose curves, compare it with the information from simple summary measures, and explore the clinical usefulness of such information. Methods OGTTs with five glucose measurements over two hours were recorded for 974 healthy pregnant women in their first trimester. For each woman, the five measurements were transformed into smooth OGTT glucose curves by functional data analysis (FDA), a collection of statistical methods developed specifically to analyse curve data. The essential modes of temporal variation between OGTT glucose curves were extracted by functional principal component analysis. The resultant functional principal component (FPC) scores were compared with commonly used simple summary measures: fasting and two-hour (2-h) values, area under the curve (AUC) and simple shape index (2-h minus 90-min values, or 90-min minus 60-min values). Clinical usefulness of FDA was explored by regression analyses of glucose tolerance later in pregnancy. Results Over 99% of the variation between individually fitted curves was expressed in the first three FPCs, interpreted physiologically as “general level” (FPC1), “time to peak” (FPC2) and “oscillations” (FPC3). FPC1 scores correlated strongly with AUC (r=0.999), but less with the other simple summary measures (−0.42≤r≤0.79). FPC2 scores gave shape information not captured by simple summary measures (−0.12≤r≤0.40). FPC2 scores, but not FPC1 nor the simple summary measures, discriminated between women who did and did not develop gestational diabetes later in pregnancy. Conclusions FDA of OGTT glucose curves in early pregnancy extracted shape information that was not identified by commonly used simple summary measures. This information discriminated between women with and without gestational diabetes later in pregnancy. PMID:23327294
NASA Astrophysics Data System (ADS)
Qian, Xi-Yuan; Liu, Ya-Min; Jiang, Zhi-Qiang; Podobnik, Boris; Zhou, Wei-Xing; Stanley, H. Eugene
2015-06-01
When common factors strongly influence two power-law cross-correlated time series recorded in complex natural or social systems, using detrended cross-correlation analysis (DCCA) without considering these common factors will bias the results. We use detrended partial cross-correlation analysis (DPXA) to uncover the intrinsic power-law cross correlations between two simultaneously recorded time series in the presence of nonstationarity after removing the effects of other time series acting as common forces. The DPXA method is a generalization of the detrended cross-correlation analysis that takes into account partial correlation analysis. We demonstrate the method by using bivariate fractional Brownian motions contaminated with a fractional Brownian motion. We find that the DPXA is able to recover the analytical cross Hurst indices, and thus the multiscale DPXA coefficients are a viable alternative to the conventional cross-correlation coefficient. We demonstrate the advantage of the DPXA coefficients over the DCCA coefficients by analyzing contaminated bivariate fractional Brownian motions. We calculate the DPXA coefficients and use them to extract the intrinsic cross correlation between crude oil and gold futures by taking into consideration the impact of the U.S. dollar index. We develop the multifractal DPXA (MF-DPXA) method in order to generalize the DPXA method and investigate multifractal time series. We analyze multifractal binomial measures masked with strong white noises and find that the MF-DPXA method quantifies the hidden multifractal nature while the multifractal DCCA method fails.
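For orientation, the following is a minimal sketch of the detrended cross-correlation coefficient that DPXA generalizes; the non-overlapping windows and linear detrending are simplifying assumptions. The partial (DPXA) variant would additionally regress out the common external series before forming the coefficient.

```python
import numpy as np

def dcca_coefficient(x, y, n):
    """DCCA cross-correlation coefficient at window size n
    (a simplified, non-overlapping-window sketch)."""
    x_prof = np.cumsum(x - np.mean(x))   # integrated profiles
    y_prof = np.cumsum(y - np.mean(y))
    fxx = fyy = fxy = 0.0
    for i in range(len(x) // n):
        seg = slice(i * n, (i + 1) * n)
        t = np.arange(n)
        # Remove a linear trend from each profile segment
        rx = x_prof[seg] - np.polyval(np.polyfit(t, x_prof[seg], 1), t)
        ry = y_prof[seg] - np.polyval(np.polyfit(t, y_prof[seg], 1), t)
        fxx += np.mean(rx * rx)
        fyy += np.mean(ry * ry)
        fxy += np.mean(rx * ry)
    return fxy / np.sqrt(fxx * fyy)

rng = np.random.default_rng(1)
z = rng.standard_normal(4096)          # common driving factor
x = z + rng.standard_normal(4096)
y = z + rng.standard_normal(4096)
print(dcca_coefficient(x, y, 64))      # inflated by the common factor z
```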
McConnell, Mark D; Monroe, Adrian P; Burger, Loren Wes; Martin, James A
2017-02-01
Advances in understanding avian nesting ecology are hindered by a prevalent lack of agreement between nest-site characteristics and fitness metrics such as nest success. We posit this is a result of inconsistent and improper timing of nest-site vegetation measurements. Therefore, we evaluated how the timing of nest vegetation measurement influences the estimated effects of vegetation structure on nest survival. We simulated phenological changes in nest-site vegetation growth over a typical nesting season and modeled how the timing of measuring that vegetation, relative to nest fate, creates bias in conclusions regarding its influence on nest survival. We modeled the bias associated with four methods of measuring nest-site vegetation: Method 1, measuring at nest initiation; Method 2, measuring at nest termination regardless of fate; Method 3, measuring at nest termination for successful nests and at estimated completion for unsuccessful nests; and Method 4, measuring at nest termination regardless of fate while also accounting for initiation date. We quantified and compared bias for each method for varying simulated effects, ranked models for each method using AIC, and calculated the proportion of simulations in which each model (measurement method) was selected as the best model. Our results indicate that the risk of drawing an erroneous or spurious conclusion was present in all methods but greatest with Method 2, which is the most common method reported in the literature. Methods 1 and 3 were similarly less biased. Method 4 provided no additional value, as bias was similar to Method 2 for all scenarios. While Method 1 is seldom practical in the field, Method 3 is logistically practical and minimizes inherent bias. Implementation of Method 3 will facilitate estimating the effect of nest-site vegetation on survival in the least biased way and allow reliable conclusions to be drawn.
NASA Astrophysics Data System (ADS)
Yu, Zhicheng; Peng, Kai; Liu, Xiaokang; Pu, Hongji; Chen, Ziran
2018-05-01
High-precision displacement sensors, which can measure large displacements with nanometer resolution, are key components in many ultra-precision fabrication machines. In this paper, a new capacitive nanometer displacement sensor with a differential sensing structure is proposed for long-range linear displacement measurements, based on an approach termed time grating. Analytical models established using electric field coupling theory and an area integral method indicate that common-mode interference will result in a first-harmonic error in the measurement results. To reduce the common-mode interference, the proposed sensor design employs a differential sensing structure, which adopts a second group of induction electrodes spatially separated from the first group by a half-pitch length. Experimental results based on a prototype sensor demonstrate that the measurement accuracy and stability of the sensor are substantially improved by adopting the differential sensing structure. The prototype sensor achieves a measurement accuracy of ±200 nm over its full 200 mm measurement range.
Internal consistency of the self-reporting questionnaire-20 in occupational groups
Santos, Kionna Oliveira Bernardes; Carvalho, Fernando Martins; de Araújo, Tânia Maria
2016-01-01
ABSTRACT OBJECTIVE To assess the internal consistency of the measurements of the Self-Reporting Questionnaire (SRQ-20) in different occupational groups. METHODS A validation study was conducted with data from four surveys of groups of workers, using similar methods. A total of 9,959 workers were studied. In all surveys, common mental disorders were assessed via the SRQ-20. The internal consistency analysis considered the items belonging to the dimensions extracted by tetrachoric factor analysis for each study. The item homogeneity assessment compared estimates of Cronbach’s alpha (KR-20), the alpha applied to a tetrachoric correlation matrix, and stratified Cronbach’s alpha. RESULTS The SRQ-20 dimensions showed adequate values, considering the reference parameters. The internal consistency of the instrument items, assessed by stratified Cronbach’s alpha, was high (> 0.80) in all four studies. CONCLUSIONS The SRQ-20 showed good internal consistency in the professional categories evaluated. However, there is still a need for studies using alternative methods and additional information able to refine the accuracy of latent variable measurement instruments, as in the case of common mental disorders. PMID:27007682
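For readers unfamiliar with the homogeneity statistic used here, a minimal sketch of Cronbach's alpha on dichotomous items (where it coincides with KR-20) follows; the simulated responses are illustrative, and the tetrachoric and stratified variants from the abstract are not shown.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) response matrix;
    for 0/1 items this coincides with KR-20."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(2)
ability = rng.standard_normal((500, 1))
responses = (ability + rng.standard_normal((500, 20)) > 0).astype(int)
print(cronbach_alpha(responses))
```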
NASA Astrophysics Data System (ADS)
Schneider, M.; Hase, F.; Blumenstock, T.
2006-10-01
We propose an innovative approach for analysing ground-based FTIR spectra which allows us to detect variabilities of lower and middle/upper tropospheric HDO/H2O ratios. We show that the proposed method is superior to common approaches. We estimate that lower tropospheric HDO/H2O ratios can be detected with a noise to signal ratio of 15% and middle/upper tropospheric ratios with a noise to signal ratio of 50%. The method requires the inversion to be performed on a logarithmic scale and to introduce an inter-species constraint. While common methods calculate the isotope ratio posterior to an independent, optimal estimation of the HDO and H2O profile, the proposed approach is an optimal estimator for the ratio itself. We apply the innovative approach to spectra measured continuously during 15 months and present, for the first time, an annual cycle of tropospheric HDO/H2O ratio profiles as detected by ground-based measurements. Outliers in the detected middle/upper tropospheric ratios are interpreted by backward trajectories.
New methods for the geometrical analysis of tubular organs.
Grélard, Florent; Baldacci, Fabien; Vialard, Anne; Domenger, Jean-Philippe
2017-12-01
This paper presents new methods to study the shape of tubular organs. Determining precise cross-sections is of major importance for performing geometrical measurements, such as diameter, wall-thickness estimation or area measurement. Our first contribution is a robust method to estimate orthogonal planes based on the Voronoi Covariance Measure. Our method does not rely on a prior curve-skeleton computation, which means our orthogonal plane estimator can be used either on the skeleton or on the volume. Another important step towards tubular organ characterization is achieved through curve-skeletonization, as skeletons allow comparison of two tubular organs and enable virtual endoscopy. Our second contribution is dedicated to correcting common defects of the skeleton by new pruning and recentering methods. Finally, we propose a new method for curve-skeleton extraction. Various results are shown on different types of segmented tubular organs, such as neurons, airway trees and blood vessels. Copyright © 2017 Elsevier B.V. All rights reserved.
Development of a test method for carbonyl compounds from stationary source emissions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhihua Fan; Peterson, M.R.; Jayanty, R.K.M.
1997-12-31
Carbonyl compounds have received increasing attention because of their important role in ground-level ozone formation. The common method used for the measurement of aldehydes and ketones is 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by high performance liquid chromatography and ultraviolet (HPLC-UV) analysis. One of the problems associated with this method is the low recovery for certain compounds such as acrolein. This paper presents a study in the development of a test method for the collection and measurement of carbonyl compounds from stationary source emissions. This method involves collection of carbonyl compounds in impingers, conversion of carbonyl compounds to a stable derivative with O-2,3,4,5,6-pentafluorobenzyl hydroxylamine hydrochloride (PFBHA), and separation and measurement by electron capture gas chromatography (GC-ECD). Eight compounds were selected for the evaluation of this method: formaldehyde, acetaldehyde, acrolein, acetone, butanal, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), and hexanal.
Armour, John A. L.; Palla, Raquel; Zeeuwen, Patrick L. J. M.; den Heijer, Martin; Schalkwijk, Joost; Hollox, Edward J.
2007-01-01
Recent work has demonstrated an unexpected prevalence of copy number variation in the human genome, and has highlighted the part this variation may play in predisposition to common phenotypes. Some important genes vary in number over a high range (e.g. DEFB4, which commonly varies between two and seven copies), and have posed formidable technical challenges for accurate copy number typing, so that there are no simple, cheap, high-throughput approaches suitable for large-scale screening. We have developed a simple comparative PCR method based on dispersed repeat sequences, using a single pair of precisely designed primers to amplify products simultaneously from both test and reference loci, which are subsequently distinguished and quantified via internal sequence differences. We have validated the method for the measurement of copy number at DEFB4 by comparison of results from >800 DNA samples with copy number measurements by MAPH/REDVR, MLPA and array-CGH. The new Paralogue Ratio Test (PRT) method can require as little as 10 ng genomic DNA, appears to be comparable in accuracy to the other methods, and for the first time provides a rapid, simple and inexpensive method for copy number analysis, suitable for application to typing thousands of samples in large case-control association studies. PMID:17175532
[Comparative measurement of urine specific gravity: reagent strips, refractometry and hydrometry].
Costa, Christian Elías; Bettendorff, Carolina; Bupo, Sol; Ayuso, Sandra; Vallejo, Graciela
2010-06-01
Urine specific gravity is commonly used in clinical practice to measure the renal concentration/dilution ability. Measurement can be performed by three methods: hydrometry, refractometry and reagent strips. To assess the accuracy of the different methods, we analyzed 156 consecutive urine samples from pediatric patients during April and May 2007. Urine specific gravity was measured by hydrometry (UD), refractometry (RE) and reagent strips (TR) simultaneously. Urine osmolarity, measured by freezing point depression, was considered the gold standard. Correlation between the different methods was calculated by simple linear regression. A positive and acceptable correlation with osmolarity was found for both RE and UD (r= 0.81 and r= 0.86, respectively). The reagent strips presented low correlation (r= 0.46). We also found good correlation between measurements obtained by UD and RE (r= 0.89). Measurements obtained by TR, however, correlated poorly with UD (r= 0.46). Higher values of specific gravity were observed when measured with RE with respect to UD. Reagent strips are not reliable for measuring urine specific gravity and should not be used as a routine test. Hydrometry and refractometry, however, are acceptable alternatives for measuring urine specific gravity, as long as the same method is used for follow-up.
NASA Astrophysics Data System (ADS)
Burgess, S. D.; Bowring, S. A.; Heaman, L. M.
2012-12-01
Accurate and precise U-Pb geochronology of accessory phases other than zircon is required for dating some LIP basalts or for determining the temporal patterns of kimberlite pipes, for example. Advances in precision and accuracy lead directly to an increase in the complexity of questions that can be posed. U-Pb geochronology of perovskite (CaTiO3) has been applied to silica-undersaturated basalts, carbonatites, alkaline igneous rocks, and kimberlites. Most published IDTIMS perovskite dates have 2-sigma precisions at the ~0.2% level for weighted mean 206Pb/238U dates, much poorer than is possible with IDTIMS analyses of zircon, which limits the applicability of perovskite in high-precision applications. Precision on perovskite dates is lower than for zircon because of common Pb, which in some cases can be up to 50% of the total Pb and must be corrected for and accurately partitioned between blank and initial components. Relatively small changes in the composition of common Pb can result in inaccurate but precise dates. In many cases minerals with significant common Pb are corrected using the Stacey and Kramers (1975) two-stage Pb evolution model. This can be done without serious consequence to the final date for minerals with high U/Pb ratios. In the more common case where U/Pb ratios are relatively low and the proportion of common Pb is large, applying a model-derived Pb isotopic composition rather than measuring it directly can introduce percent-level inaccuracy to dates calculated with precisely known U/Pb ratios. Direct measurement of the common Pb composition can be done on a U-poor mineral that co-crystallized with perovskite; feldspar and clinopyroxene are commonly used. Clinopyroxene can contain significant in-grown radiogenic Pb, and our experiments indicate that it is not eliminated by aggressive step-wise leaching. The U/Pb ratio in clinopyroxene is generally low (20 < mu < 50) but significant. Other workers (e.g. Kamo et al., 2003; Corfu and Dahlgren, 2008) have used two methods to determine the amount of ingrown Pb. First, by measuring the U/Pb ratio in clinopyroxene and assuming a crystallization age, the amount of ingrown Pb can be calculated. Second, by assuming that perovskite and clinopyroxene (± other phases) are isochronous, the initial Pb isotopic composition can be calculated from the y-intercept on 206Pb/238U, 207Pb/235U, and 3-D isochron diagrams. To further develop a perovskite mineral standard for use in high-precision dating applications, we have focused on single grains/fragments of perovskite and multi-grain clinopyroxene fractions from a melteigite sample (IR90.3) within the Ice River complex, a zoned alkaline-ultramafic intrusion in southeastern British Columbia. Perovskite from this sample has variable measured 206Pb/204Pb (22-263), making this an ideal sample on which to test the sensitivity of the date to changes in common Pb composition for grains with variable amounts of radiogenic Pb. Using co-existing clinopyroxene for the initial common Pb composition, by both direct measurement and the isochron method, allows us to calculate an accurate weighted-mean 206Pb/238U date on perovskite at the < 0.1% level, which overlaps within uncertainty for the two different methods. We recommend the Ice River 90.3 perovskite as a suitable EARTHTIME standard for interlaboratory and intertechnique comparison.
An Ultrasonographic Periodontal Probe
NASA Astrophysics Data System (ADS)
Bertoncini, C. A.; Hinders, M. K.
2010-02-01
Periodontal disease, commonly known as gum disease, affects millions of people. The current method of detecting periodontal pocket depth is painful, invasive, and inaccurate. As an alternative to manual probing, an ultrasonographic periodontal probe is being developed to use ultrasound echo waveforms to measure periodontal pocket depth, which is the main measure of periodontal disease. Wavelet transforms and pattern classification techniques are implemented in artificial intelligence routines that can automatically detect pocket depth. The main pattern classification technique used here, called a binary classification algorithm, compares test objects with only two possible pocket depth measurements at a time and relies on dimensionality reduction for the final determination. This method correctly identifies up to 90% of the ultrasonographic probe measurements within the manual probe's tolerance.
Methods and techniques for measuring gas emissions from agricultural and animal feeding operations.
Hu, Enzhu; Babcock, Esther L; Bialkowski, Stephen E; Jones, Scott B; Tuller, Markus
2014-01-01
Emissions of gases from agricultural and animal feeding operations contribute to climate change, produce odors, degrade sensitive ecosystems, and pose a threat to public health. The complexity of processes and environmental variables affecting these emissions complicates accurate and reliable quantification of gas fluxes and production rates. Although a plethora of measurement technologies exists, each method has limitations that hinder accurate quantification of gas fluxes. Despite a growing interest in gas emission measurements, only a few available technologies include real-time, continuous monitoring capabilities. Commonly applied state-of-the-art measurement frameworks and technologies are critically examined and discussed, and recommendations for future research to address real-time monitoring requirements for forthcoming regulation and management needs are provided.
Comparison of performance of some common Hartmann-Shack centroid estimation methods
NASA Astrophysics Data System (ADS)
Thatiparthi, C.; Ommani, A.; Burman, R.; Thapa, D.; Hutchings, N.; Lakshminarayanan, V.
2016-03-01
The accuracy of the estimation of optical aberrations by measuring the distorted wave front using a Hartmann-Shack wave front sensor (HSWS) is mainly dependent upon the measurement accuracy of the centroid of the focal spot. The most commonly used methods for centroid estimation, such as the brightest-spot centroid, first-moment centroid, weighted center of gravity and intensity-weighted center of gravity, are generally applied to the entire individual sub-apertures of the lenslet array. However, these processes of centroid estimation are sensitive to the influence of reflections, scattered light, and noise, especially when the signal spot area is small compared to the whole sub-aperture area. In this paper, we compare the performance of the commonly used centroiding methods for the estimation of optical aberrations, with and without some pre-processing steps (thresholding, Gaussian smoothing and adaptive windowing). As an example, we use the aberrations of a human eye model. This is done using raw data collected from a custom-made ophthalmic aberrometer and a model eye to emulate myopic and hypermetropic defocus values up to 2 Diopters. We show that the use of any simple centroiding algorithm is sufficient for ophthalmic applications, estimating aberrations within the typical clinically acceptable margin of a quarter Diopter, when certain pre-processing steps to reduce the impact of external factors are used.
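A minimal sketch of one of the compared estimators, the intensity-weighted center of gravity with simple thresholding as a pre-processing step, is given below; the synthetic spot and the parameter values are illustrative assumptions.

```python
import numpy as np

def centroid_wcog(spot, threshold_frac=0.2, power=2):
    """Intensity-weighted center of gravity of one sub-aperture image.
    Thresholding suppresses background; power > 1 emphasizes bright pixels."""
    img = spot - spot.min()
    img[img < threshold_frac * img.max()] = 0.0   # simple pre-processing step
    w = img ** power
    ys, xs = np.indices(w.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Synthetic focal spot with noise, offset from the sub-aperture center
yy, xx = np.mgrid[0:32, 0:32]
spot = np.exp(-((xx - 18.3)**2 + (yy - 14.7)**2) / 8.0)
spot += 0.05 * np.random.default_rng(3).random((32, 32))
print(centroid_wcog(spot))   # close to (18.3, 14.7)
```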
Upper Limb Outcome Measures Used in Stroke Rehabilitation Studies: A Systematic Literature Review
Santisteban, Leire; Térémetz, Maxime; Bleton, Jean-Pierre; Baron, Jean-Claude; Maier, Marc A.; Lindberg, Påvel G.
2016-01-01
Background Establishing which upper limb outcome measures are most commonly used in stroke studies may help in improving consensus among scientists and clinicians. Objective In this study we aimed to identify the most commonly used upper limb outcome measures in intervention studies after stroke and to describe the domains covered according to the ICF, how measures are combined, and how their use varies geographically and over time. Methods The PubMed, CINAHL, and PEDro databases were searched for upper limb intervention studies in stroke according to PRISMA guidelines, and 477 studies were included. Results Across the studies, 48 different outcome measures were found. Only 15 of these outcome measures were used in more than 5% of the studies. The Fugl-Meyer Test (FMT) was the most commonly used measure (in 36% of studies). Commonly used measures covered the ICF domains of body function and activity to varying extents. Most studies (72%) combined multiple outcome measures: the FMT was often combined with the Motor Activity Log (MAL), the Wolf Motor Function Test and the Action Research Arm Test, but infrequently combined with the Motor Assessment Scale or the Nine Hole Peg Test. Key components of manual dexterity such as selective finger movements were rarely measured. Frequency of use increased over a twelve-year period for the FMT and for assessments of kinematics, whereas other measures, such as the MAL and the Jebsen Taylor Hand Test, showed decreased use over time. Use varied largely between countries, showing low international consensus. Conclusions The results showed a large diversity of outcome measures used across studies. However, a growing number of studies used the FMT, a neurological test with good psychometric properties. For thorough assessment the FMT needs to be combined with functional measures. These findings illustrate the need for strategies to build international consensus on appropriate outcome measures for upper limb function after stroke. PMID:27152853
NASA Astrophysics Data System (ADS)
Forsberg, Daniel; Lundström, Claes; Andersson, Mats; Vavruch, Ludvig; Tropp, Hans; Knutsson, Hans
2013-03-01
Reliable measurements of spinal deformities in idiopathic scoliosis are vital, since they are used for assessing the degree of scoliosis, deciding upon treatment and monitoring the progression of the disease. However, commonly used two-dimensional methods (e.g. the Cobb angle) do not fully capture the three-dimensional deformity at hand in scoliosis, of which axial vertebral rotation (AVR) is considered to be of great importance. There are manual methods for measuring the AVR, but they are often time-consuming and associated with a high intra- and inter-observer variability. In this paper, we present a fully automatic method for estimating the AVR in images from computed tomography. The proposed method is evaluated on four scoliotic patients with 17 vertebrae each and compared with manual measurements performed by three observers using the standard method by Aaro-Dahlborn. The comparison shows that the difference in measured AVR between automatic and manual measurements is on the same level as the inter-observer difference. This is further supported by a high intraclass correlation coefficient (0.971-0.979), obtained when comparing the automatic measurements with the manual measurements of each observer. Hence, the provided results and the computational performance, requiring only approximately 10 to 15 s for processing an entire volume, demonstrate the potential clinical value of the proposed method.
Wilkins, Emma L; Morris, Michelle A; Radley, Duncan; Griffiths, Claire
2017-03-01
Geographic Information Systems (GIS) are widely used to measure retail food environments. However, the methods used are heterogeneous, limiting collation and interpretation of evidence. This problem is amplified by unclear and incomplete reporting of methods. This discussion (i) identifies common dimensions of methodological diversity across GIS-based food environment research (data sources, data extraction methods, food outlet construct definitions, geocoding methods, and access metrics), (ii) reviews the impact of different methodological choices, and (iii) highlights areas where reporting is insufficient. On the basis of this discussion, the Geo-FERN reporting checklist is proposed to support methodological reporting and interpretation. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Why is it important to improve dietary assessment methods?
Food frequency questionnaires, which measure a person's usual intake over a defined period of time, and 24-hour recalls, in which a person records everything eaten or drunk during the previous 24 hours, are commonly used to collect dietary information.
ERIC Educational Resources Information Center
Pritchard, Matt
2017-01-01
Magicians and scientists have a curious relationship, with both conflicting views and common ground. Magicians use natural means to construct supernatural illusions. They exploit surprise and misdirected focus in their tricks. Scientists like to deconstruct and explain marvels. They methodically measure, evaluate and repeat observations. However,…
Measurement of the residual stress in hot rolled strip using strain gauge method
NASA Astrophysics Data System (ADS)
Kumar, Lokendra; Majumdar, Shrabani; Sahu, Raj Kumar
2017-07-01
Measurement of the surface residual stress in a flat hot rolled steel strip using the strain gauge method is considered in this paper. Residual stresses arise in flat strips when shear cutting or laser cutting is applied. Bending, twisting, central buckling and edge waviness are common defects that occur during the cutting and uncoiling processes. These defects arise due to non-uniform elastic-plastic deformation and phase transformations occurring during the cooling and coiling-uncoiling processes. Residual stress analysis is important because early detection makes it possible to prevent failure. The goal of this paper is to measure the surface residual stress in flat hot rolled strip using the strain gauge method. The residual stress was measured at the head and tail ends of the hot rolled strip, which are considered critical parts of the strip.
Li, I-Hsum; Chen, Ming-Chang; Wang, Wei-Yen; Su, Shun-Feng; Lai, To-Wen
2014-01-01
A single-webcam distance measurement technique for indoor robot localization is proposed in this paper. The proposed localization technique uses webcams that are available in an existing surveillance environment. The developed image-based distance measurement system (IBDMS) and parallel lines distance measurement system (PLDMS) have two merits. Firstly, only one webcam is required for estimating the distance. Secondly, the set-up of IBDMS and PLDMS is easy, requiring only one rectangular pattern of known dimensions, e.g., a ground tile. Some common and simple image processing techniques, e.g., background subtraction, are used to capture the robot in real time. Thus, for the purposes of indoor robot localization, the proposed method does not need expensive high-resolution webcams or complicated pattern recognition methods, but just a few simple estimation formulas. From the experimental results, the proposed robot localization method is reliable and effective in an indoor environment. PMID:24473282
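The geometric core of such single-camera ranging is the pinhole relation between an object's known physical size and its size in pixels; the sketch below illustrates that relation with invented values and is not the paper's exact IBDMS/PLDMS formulation.

```python
# A minimal pinhole-camera distance estimate in the spirit of IBDMS:
# one webcam plus one reference object of known size (e.g. a floor tile).
# The focal length in pixels and the measured widths are illustrative values.

def distance_from_known_width(focal_px, real_width_m, width_px):
    """Distance to an object of known physical width from its image width."""
    return focal_px * real_width_m / width_px

focal_px = 800.0      # calibrated once, e.g. from the tile at a known distance
tile_width_m = 0.30   # known dimension of the ground tile
target_px = 64.0      # width of the detected target in the image

print(distance_from_known_width(focal_px, tile_width_m, target_px))  # ~3.75 m
```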
High-Temperature Thermal Conductivity Measurement Apparatus Based on Guarded Hot Plate Method
NASA Astrophysics Data System (ADS)
Turzo-Andras, E.; Magyarlaki, T.
2017-10-01
An alternative calibration procedure has been applied using an apparatus built in-house, created to optimize thermal conductivity measurements. Compared with the usual procedures for measuring thermal conductivity by the guarded hot plate (GHP) method, the new approach consists of a modified design of the apparatus, a modified position of the temperature sensors and a new concept in the calculation method, applying the temperature at the inlet section of the specimen instead of the temperature difference across the specimen. This alternative technique is suitable for eliminating the effect of the thermal contact resistance arising between a rigid specimen and the heated plate, as well as for accurate determination of the specimen temperature and of the heat loss at the lateral edge of the specimen. This paper presents an overview of the specific characteristics of the newly developed "high-temperature thermal conductivity measurement apparatus" based on the GHP method, as well as how the major difficulties are handled in the case of this apparatus, as compared to the common GHP method that conforms to current international standards.
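At its core, the GHP evaluation is one-dimensional Fourier conduction through the specimen; a minimal sketch with illustrative values follows. The paper's modification, using the inlet-section temperature instead of the temperature difference, is not reproduced here.

```python
# Guarded hot plate in essence: one-dimensional Fourier conduction through
# the specimen. Values below are illustrative, not from the paper.

q = 12.5        # heater power through the metering area (W)
a = 0.09        # metering area (m^2), e.g. 0.3 m x 0.3 m
d = 0.05        # specimen thickness (m)
t_hot = 320.0   # hot-face temperature (K)
t_cold = 300.0  # cold-face temperature (K)

lam = q * d / (a * (t_hot - t_cold))   # thermal conductivity (W/(m K))
print(f"lambda = {lam:.3f} W/(m K)")
```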
Qin, Guoyou; Zhang, Jiajia; Zhu, Zhongyi; Fung, Wing
2016-12-20
Outliers, measurement error, and missing data are commonly seen in longitudinal data because of its data collection process. However, no method can address all three of these issues simultaneously. This paper focuses on the robust estimation of partially linear models for longitudinal data with dropouts and measurement error. A new robust estimating equation, simultaneously tackling outliers, measurement error, and missingness, is proposed. The asymptotic properties of the proposed estimator are established under some regularity conditions. The proposed method is easy to implement in practice by utilizing the existing standard generalized estimating equations algorithms. The comprehensive simulation studies show the strength of the proposed method in dealing with longitudinal data with all three features. Finally, the proposed method is applied to data from the Lifestyle Education for Activity and Nutrition study and confirms the effectiveness of the intervention in producing weight loss at month 9. Copyright © 2016 John Wiley & Sons, Ltd.
An Adaptive Kalman Filter using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate these parameters. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
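A toy version of the residual tuning idea for a one-dimensional filter follows: a correctly tuned filter has white innovations whose variance equals the predicted innovation covariance, so a running average of squared residuals yields a correction for the measurement noise. The scalar model and window length are illustrative assumptions, not the flight implementation.

```python
import numpy as np

rng = np.random.default_rng(4)
true_r = 4.0
z = 1.0 + rng.normal(0.0, np.sqrt(true_r), 500)  # measurements of a constant

x, p = 0.0, 10.0      # state estimate and covariance
q = 1e-4              # process noise (assumed known here)
r = 1.0               # deliberately wrong initial measurement noise
innov2 = []

for zk in z:
    p_prior = p + q                  # time update (identity dynamics)
    s = p_prior + r                  # predicted innovation covariance
    k = p_prior / s                  # Kalman gain
    nu = zk - x                      # measurement residual (innovation)
    x += k * nu
    p = (1 - k) * p_prior
    innov2.append(nu**2)
    # Residual tuning: for a consistent filter E[nu^2] = P_prior + R,
    # so a running average of nu^2 gives a correction for R.
    if len(innov2) >= 50:
        r = max(np.mean(innov2[-50:]) - p_prior, 1e-6)

print(f"estimated R = {r:.2f} (true {true_r})")
```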
Hortness, J.E.
2004-01-01
The U.S. Geological Survey (USGS) measures discharge in streams using several methods. However, measurement of peak discharges is often impossible or impractical due to difficult access, the inherent danger of making measurements during flood events, and the timing often associated with flood events. Thus, many peak discharge values are often calculated after the fact by use of indirect methods. The most common indirect method for estimating peak discharges in streams is the slope-area method. This, like other indirect methods, requires measuring the flood profile through detailed surveys. Processing the survey data for efficient entry into computer streamflow models can be time-consuming; SAM 2.1 is a program designed to expedite that process. The SAM 2.1 computer program is designed to be run in the field on a portable computer. The program processes digital surveying data obtained from an electronic surveying instrument during slope-area measurements. After all measurements have been completed, the program generates files to be input into the SAC (Slope-Area Computation program; Fulford, 1994) or HEC-RAS (Hydrologic Engineering Center-River Analysis System; Brunner, 2001) computer streamflow models so that an estimate of the peak discharge can be calculated.
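The hydraulic core of a slope-area estimate is Manning's equation applied to surveyed geometry and the water-surface slope; the single-section sketch below uses invented values, whereas SAC and HEC-RAS work with multiple cross sections and energy-loss corrections.

```python
# Single-section slope-area estimate via Manning's equation (SI units).
# Real slope-area computations (SAC, HEC-RAS) use several cross sections
# and energy-loss corrections; the numbers here are illustrative.

n = 0.035      # Manning roughness coefficient
area = 42.0    # flow area from the surveyed section (m^2)
wp = 30.0      # wetted perimeter (m)
slope = 0.002  # water-surface slope from high-water marks

r_h = area / wp                                 # hydraulic radius (m)
q = (1.0 / n) * area * r_h**(2.0/3.0) * slope**0.5
print(f"peak discharge ~ {q:.0f} m^3/s")
```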
Measuring zoo animal welfare: theory and practice.
Hill, Sonya P; Broom, Donald M
2009-11-01
The assessment of animal welfare relates to investigations of how animals try to cope with their environment, and how easy or how difficult it is for them to do so. The use of rigorous scientific methods to assess this has grown over the past few decades, and so our understanding of the needs of animals has improved during this time. Much of the work in the field of animal welfare has been conducted on farm animals, but it is important to consider how the methods and approaches used in assessing farm animal welfare have been, and can be, adapted and applied to the measurement of welfare in animals in other domains, such as in zoos. This is beneficial to our understanding of both the theoretical knowledge, and the practicability of methods. In this article, some of the commonly-used methods for measuring animal welfare will be discussed, as well as some practical considerations in assessing the welfare of zoo animals.
Cui, Xinyi; Mayer, Philipp; Gan, Jay
2013-01-01
Many important environmental contaminants are hydrophobic organic contaminants (HOCs), which include PCBs, PAHs, PBDEs, DDT and other chlorinated insecticides, among others. Owing to their strong hydrophobicity, HOCs ultimately end up in soil or sediment, where their ecotoxicological effects are closely regulated by sorption and thus bioavailability. The last two decades have seen a dramatic increase in research efforts in developing and applying partitioning-based methods and biomimetic extractions for measuring HOC bioavailability. However, the many variations of both analytical methods and associated measurement endpoints are often a source of confusion for users. In this review, we distinguish the most commonly used analytical approaches based on their measurement objectives, and illustrate their practical operational steps, strengths and limitations using simple flowcharts. This review may serve as guidance for new users on the selection and use of established methods, and a reference for experienced investigators to identify potential topics for further research. PMID:23064200
Li, Hui; Kayhanian, Masoud; Harvey, John T
2013-03-30
Fully permeable pavement is gradually gaining support as an alternative best management practice (BMP) for stormwater runoff management. As the use of these pavements increases, a definitive test method is needed to measure hydraulic performance and to evaluate clogging, both for performance studies and for assessment of permeability for construction quality assurance and maintenance needs assessment. Two of the most commonly used permeability measurement tests for porous asphalt and pervious concrete are the National Center for Asphalt Technology (NCAT) permeameter and ASTM C1701, respectively. This study was undertaken to compare measured values for both methods in the field on a variety of permeable pavements used in current practice. The field measurements were performed using six experimental section designs with different permeable pavement surface types including pervious concrete, porous asphalt and permeable interlocking concrete pavers. Multiple measurements were performed at five locations on each pavement test section. The results showed that: (i) silicone gel is a superior sealing material to prevent water leakage compared with conventional plumbing putty; (ii) both methods (NCAT and ASTM) can effectively be used to measure the permeability of all pavement types and the surface material type will not impact the measurement precision; (iii) the permeability values measured with the ASTM method were 50-90% (75% on average) lower than those measured with the NCAT method; (iv) the larger permeameter cylinder diameter used in the ASTM method improved the reliability and reduced the variability of the measured permeability. Copyright © 2013 Elsevier Ltd. All rights reserved.
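Both test types ultimately reduce to infiltrated volume per unit area per unit time; the sketch below derives a single-ring surface infiltration rate from first principles with illustrative values, rather than quoting the ASTM C1701 constant.

```python
import math

# Infiltration rate from a single-ring surface test in the spirit of
# ASTM C1701: a known mass of water infiltrates through a ring of
# diameter D in time t. Derived as volume / (area * time); all values
# are illustrative.

mass_kg = 18.0      # mass of infiltrated water
d_m = 0.30          # inner ring diameter (m)
t_s = 120.0         # time to infiltrate (s)
rho = 1000.0        # water density (kg/m^3)

area = math.pi * d_m**2 / 4.0
rate_m_per_s = (mass_kg / rho) / (area * t_s)
print(f"infiltration rate = {rate_m_per_s * 3.6e6:.0f} mm/h")
```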
NASA Astrophysics Data System (ADS)
Krupka, Jerzy; Aleshkevych, Pavlo; Salski, Bartlomiej; Kopyt, Pawel
2018-02-01
The mode of uniform precession, or Kittel mode, in a magnetized ferromagnetic sphere, has recently been proven to be the magnetic plasmon resonance. In this paper we show how to apply the electrodynamic model of the magnetic plasmon resonance for accurate measurements of the ferromagnetic resonance linewidth ΔH. Two measurement methods are presented. The first one employs Q-factor measurements of the magnetic plasmon resonance coupled to the resonance of an empty metallic cavity. Such coupled modes are known as magnon-polariton modes, i.e. hybridized modes between the collective spin excitation and the cavity excitation. The second one employs direct Q-factor measurements of the magnetic plasmon resonance in a filter setup with two orthogonal semi-loops used for coupling. Q-factor measurements are performed employing a vector network analyser. The methods presented in this paper allow one to extend the measurement range of the ferromagnetic resonance linewidth ΔH well beyond the limits of the commonly used measurement standards in terms of the size of the samples and the lowest measurable linewidths. Samples that can be measured with the newly proposed methods may have larger size as compared to the size of samples that were used in the standard methods restricted by the limits of perturbation theory.
An automated real-time free phenytoin assay to replace the obsolete Abbott TDx method.
Williams, Christopher; Jones, Richard; Akl, Pascale; Blick, Kenneth
2014-01-01
Phenytoin is a commonly used anticonvulsant that is highly protein bound with a narrow therapeutic range. The unbound fraction, free phenytoin (FP), is responsible for pharmacologic effects; therefore, it is essential to measure both FP and total serum phenytoin levels. Historically, the Abbott TDx method has been widely used for the measurement of FP and was the method used in our laboratory. However, the FP TDx assay was recently discontinued by the manufacturer, so we had to develop an alternative methodology. We evaluated the Beckman-Coulter DxC800-based FP method for linearity, analytical sensitivity, and precision. The analytical measurement range of the method was 0.41 to 5.30 microg/mL. Within-run and between-run precision studies yielded CVs of 3.8% and 5.5%, respectively. The method compared favorably with the TDx method, yielding the following regression equation: DxC800 = 0.9 × TDx + 0.10; r² = 0.97 (n = 97). The new FP assay appears to be an acceptable alternative to the TDx method.
USDA-ARS?s Scientific Manuscript database
The sulfur hexafluoride tracer technique (SF6) is a commonly used method for measuring CH4 enteric emissions in ruminants. Studies using SF6 have shown large variation in CH4 emissions data, inconsistencies in CH4 emissions across studies, and potential methodological errors. Therefore, th...
ERIC Educational Resources Information Center
Seamster, Christina Lambert
2016-01-01
According to Molnar (2014), full time virtual school education lacks a measurement tool that accurately measures effective virtual teacher practice. Using both qualitative and quantitative methods, the current study sought to understand the common practices among full time K-8 virtual school teachers, the extent to which teachers believed such…
ERIC Educational Resources Information Center
Alvard, Michael; McGaffey, Ethan; Carlson, David
2015-01-01
We used global positioning system (GPS) technology and tracking analysis to measure fishing effort by marine, small-scale, fish aggregating device (FAD) fishers of the Commonwealth of Dominica. FADs are human-made structures designed to float on the surface of the water and attract fish. They are also prone to common pool resource problems. To…
Measuring the Orbital Period of the Moon Using a Digital Camera
ERIC Educational Resources Information Center
Hughes, Stephen W.
2006-01-01
A method of measuring the orbital velocity of the Moon around the Earth using a digital camera is described. Separate images of the Moon and stars taken 24 hours apart were loaded into Microsoft PowerPoint and the centre of the Moon marked on each image. Four stars common to both images were connected together to form a "home-made" constellation.…
Tracking Subpixel Targets with Critically Sampled Optical Sensors
2012-09-01
5 [32]. The Viterbi algorithm is a dynamic programming method for calculating the MAP in O(tn²) time. The most common use of this algorithm is in the... method to detect subpixel point targets using the sensor's PSF as an identifying characteristic. Using matched filtering theory, a measure is defined to... ocean surface beneath the cloud will have a different distribution. While the basic methods will adapt to changes in cloud cover over time, it is also
Estimation of Alpine Skier Posture Using Machine Learning Techniques
Nemec, Bojan; Petrič, Tadej; Babič, Jan; Supej, Matej
2014-01-01
High-precision Global Navigation Satellite System (GNSS) measurements are becoming more and more popular in alpine skiing due to the relatively undemanding setup and excellent performance. However, GNSS provides only single-point measurements that are defined with the antenna placed typically behind the skier's neck. A key issue is how to estimate other more relevant parameters of the skier's body, like the center of mass (COM) and ski trajectories. Previously, these parameters were estimated by modeling the skier's body with an inverted-pendulum model that oversimplified the skier's body. In this study, we propose two machine learning methods that overcome this shortcoming and estimate COM and ski trajectories based on a more faithful approximation of the skier's body with nine degrees of freedom. The first method utilizes the well-established approach of artificial neural networks, while the second method is based on a state-of-the-art statistical generalization method. Both methods were evaluated using reference measurements obtained on a typical giant slalom course and compared with the inverted-pendulum method. Our results outperform those of commonly used inverted-pendulum methods and demonstrate the applicability of machine learning techniques in biomechanical measurements of alpine skiing. PMID:25313492
Guo, Ying; Little, Roderick J; McConnell, Daniel S
2012-01-01
Covariate measurement error is common in epidemiologic studies. Current methods for correcting measurement error with information from external calibration samples are insufficient to provide valid adjusted inferences. We consider the problem of estimating the regression of an outcome Y on covariates X and Z, where Y and Z are observed, X is unobserved, but a variable W that measures X with error is observed. Information about measurement error is provided in an external calibration sample where data on X and W (but not Y and Z) are recorded. We describe a method that uses summary statistics from the calibration sample to create multiple imputations of the missing values of X in the regression sample, so that the regression coefficients of Y on X and Z and associated standard errors can be estimated using simple multiple imputation combining rules, yielding valid statistical inferences under the assumption of a multivariate normal distribution. The proposed method is shown by simulation to provide better inferences than existing methods, namely the naive method, classical calibration, and regression calibration, particularly for correction for bias and achieving nominal confidence levels. We also illustrate our method with an example using linear regression to examine the relation between serum reproductive hormone concentrations and bone mineral density loss in midlife women in the Michigan Bone Health and Metabolism Study. Existing methods fail to adjust appropriately for bias due to measurement error in the regression setting, particularly when measurement error is substantial. The proposed method corrects this deficiency.
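For context on the benchmarks mentioned above, the sketch below contrasts the naive estimator with regression calibration under an assumed linear normal model with an external calibration sample; the proposed multiple-imputation method goes further by also propagating imputation uncertainty through the combining rules, which this simplified sketch does not show.

```python
import numpy as np

rng = np.random.default_rng(5)

# External calibration sample: true X with its error-prone measurement W
n_cal = 300
x_cal = rng.standard_normal(n_cal)
w_cal = x_cal + rng.normal(0.0, 0.5, n_cal)
b1, b0 = np.polyfit(w_cal, x_cal, 1)        # E[X | W] from calibration data

# Main sample: Y and W observed, X unobserved (Z omitted for brevity)
n = 2000
x = rng.standard_normal(n)
w = x + rng.normal(0.0, 0.5, n)
y = 1.0 + 2.0 * x + rng.standard_normal(n)

naive = np.polyfit(w, y, 1)[0]              # attenuated by measurement error
rc = np.polyfit(b0 + b1 * w, y, 1)[0]       # regression calibration
print(f"naive beta = {naive:.2f}, calibrated beta = {rc:.2f} (true 2.0)")
```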
Study of the Conservation of Mechanical Energy in the Motion of a Pendulum Using a Smartphone
ERIC Educational Resources Information Center
Pierratos, Theodoros; Polatoglou, Hariton M.
2018-01-01
A common method that scientists use to validate a theory is to utilize known principles and laws to produce results on specific settings, which can be assessed using the appropriate experimental methods and apparatuses. Smartphones have various sensors built-in and could be used for measuring and logging data in physics experiments. In this work,…
ERIC Educational Resources Information Center
Johnson, Eric R.
1988-01-01
Describes a laboratory experiment that measures the amount of ascorbic acid destroyed by food preparation methods (boiling and steaming). Points out that aqueous extracts of cooked green pepper samples can be analyzed for ascorbic acid by a relatively simple redox titration. Lists experimental procedure for four methods of preparation. (MVL)
Interferometric weak measurement of photon polarization
NASA Astrophysics Data System (ADS)
Iinuma, Masataka; Suzuki, Yutaro; Taguchi, Gen; Kadoya, Yutaka; Hofmann, Holger F.
2011-10-01
We realize a minimum back-action quantum non-demolition measurement of variable strength on photon polarization in the diagonal (PM) basis by two-mode path interference. This method uses the phase difference between the positive (P) and negative (M) superpositions in the interference between the horizontally (H) and vertically (V) polarized paths in the input beam. Although the interference cannot occur when the H and V polarizations are distinguishable, a well-controlled amount of interference is induced by erasing the H and V information using a coherent rotation of polarization toward a common diagonal polarization. This method is particularly suitable for the realization of weak measurements, where control of the back-action is essential.
Measuring top-quark polarization in top-pair + missing-energy events.
Berger, Edmond L; Cao, Qing-Hong; Yu, Jiang-Hao; Zhang, Hao
2012-10-12
The polarization of a top quark can be sensitive to new physics beyond the standard model. Since the charged lepton from top-quark decay is maximally correlated with the top-quark spin, it is common to measure the polarization from the distribution in the angle between the charged lepton and the top-quark directions. We propose a novel method based on the charged lepton energy fraction and illustrate the method with a detailed simulation of top-quark pairs produced in supersymmetric top squark pair production. We show that the lepton energy ratio distribution that we define is very sensitive to the top-quark polarization but insensitive to the precise measurement of the top-quark energy.
Oxygen transfer rate estimation in oxidation ditches from clean water measurements.
Abusam, A; Keesman, K J; Meinema, K; Van Straten, G
2001-06-01
Standard methods for the determination of oxygen transfer rate are based on assumptions that are not valid for oxidation ditches. This paper presents a realistic and simple new method to be used in the estimation of oxygen transfer rate in oxidation ditches from clean water measurements. The new method uses a loop-of-CSTRs model, which can be easily incorporated within control algorithms, for modelling oxidation ditches. Further, this method assumes zero oxygen transfer rates (KLa) in the unaerated CSTRs. Application of a formal estimation procedure to real data revealed that the aeration constant (k = KLaVA, where VA is the volume of the aerated CSTR) can be determined significantly more accurately than KLa and VA. Therefore, the new method estimates k instead of KLa. From application to real data, this method proved to be more accurate than the commonly used Dutch standard method (STORA, 1980).
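For comparison, the classical clean-water determination fits the reaeration curve C(t) = Cs - (Cs - C0)·exp(-KLa·t); a log-deficit sketch with synthetic data follows. The loop-of-CSTRs model of the paper, which localizes the transfer in the aerated tank, is not reproduced here.

```python
import numpy as np

# Clean-water reaeration: C(t) = Cs - (Cs - C0) * exp(-KLa * t).
# KLa follows from the slope of the log deficit; the data are synthetic.

cs, c0, kla_true = 9.1, 1.0, 12.0            # mg/L, mg/L, 1/h
t = np.linspace(0.0, 0.25, 25)               # h
rng = np.random.default_rng(6)
c = cs - (cs - c0) * np.exp(-kla_true * t) + rng.normal(0, 0.05, t.size)

slope, _ = np.polyfit(t, np.log(cs - c), 1)  # ln(Cs - C) is linear in t
print(f"estimated KLa = {-slope:.1f} 1/h")
```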
Cantilever spring constant calibration using laser Doppler vibrometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ohler, Benjamin
2007-06-15
Uncertainty in cantilever spring constants is a critical issue in atomic force microscopy (AFM) force measurements. Though numerous methods exist for calibrating cantilever spring constants, the accuracy of these methods can be limited by both the physical models themselves as well as uncertainties in their experimental implementation. Here we report the results from two of the most common calibration methods, the thermal tune method and the Sader method. These were implemented on a standard AFM system as well as using laser Doppler vibrometry (LDV). Using LDV eliminates some uncertainties associated with optical lever detection on an AFM. It also offers considerably higher signal-to-noise deflection measurements. We find that AFM and LDV result in similar uncertainty in the calibrated spring constants, about 5%, using either the thermal tune or Sader methods, provided that certain limitations of the methods and instrumentation are observed.
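In its simplest form, the thermal tune method applies equipartition to the cantilever's thermal deflection; the sketch below uses synthetic deflection data and omits the mode-shape correction and spectral fitting used in practice.

```python
import numpy as np

# Thermal tune in its simplest form: equipartition gives
# 0.5 * k * <x^2> = 0.5 * kB * T, so k = kB * T / <x^2>.
# Real calibrations apply a mode-shape correction factor and integrate
# the first resonance peak of the deflection spectrum; omitted here.

kb = 1.380649e-23          # J/K
temp = 295.0               # K

rng = np.random.default_rng(7)
k_true = 0.05              # N/m
x = rng.normal(0.0, np.sqrt(kb * temp / k_true), 200_000)  # deflection (m)

k_est = kb * temp / np.mean(x**2)
print(f"spring constant ~ {k_est:.4f} N/m")
```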
Pruning artificial neural networks using neural complexity measures.
Jorgensen, Thomas D; Haynes, Barry P; Norlund, Charlotte C F
2008-10-01
This paper describes a new method for pruning artificial neural networks, using a measure of the neural complexity of the neural network. This measure is used to determine the connections that should be pruned. The measure computes the information-theoretic complexity of a neural network, which is similar to, yet different from, previous research on pruning. The method proposed here shows how overly large and complex networks can be reduced in size, whilst retaining learnt behaviour and fitness. The technique proposed here helps to discover a network topology that matches the complexity of the problem it is meant to solve. This novel pruning technique is tested in a robot control domain, simulating a racecar. It is shown that the proposed pruning method is a significant improvement over the most commonly used pruning method, Magnitude Based Pruning. Furthermore, some of the pruned networks prove to be faster learners than the benchmark network that they originate from. This means that this pruning method can also help to unleash hidden potential in a network, because the learning time decreases substantially for a pruned network, due to the reduced dimensionality of the network.
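As a reference point, the benchmark mentioned above, Magnitude Based Pruning, can be sketched in a few lines; the pruning fraction and the random weight matrix are illustrative.

```python
import numpy as np

def magnitude_prune(weights, fraction):
    """Magnitude Based Pruning: zero out the smallest-|w| fraction of
    weights. This is the common benchmark the complexity-based method
    is compared against."""
    w = weights.copy()
    cutoff = np.quantile(np.abs(w), fraction)
    w[np.abs(w) < cutoff] = 0.0
    return w

rng = np.random.default_rng(8)
w = rng.standard_normal((16, 8))          # one weight matrix of a small net
pruned = magnitude_prune(w, 0.5)
print(f"{np.mean(pruned == 0):.0%} of connections removed")
```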
Hocalar, A; Türker, M; Karakuzu, C; Yüzgeç, U
2011-04-01
In this study, five previously developed state estimation methods are examined and compared for the estimation of biomass concentration in a production-scale fed-batch bioprocess. These methods are: i. estimation based on a kinetic model of overflow metabolism; ii. estimation based on a metabolic black-box model; iii. estimation based on an observer; iv. estimation based on an artificial neural network; v. estimation based on differential evolution. Biomass concentrations are estimated from available measurements and compared with experimental data obtained from large-scale fermentations. The advantages and disadvantages of the presented techniques are discussed with regard to accuracy, reproducibility, the number of primary measurements required and adaptation to different working conditions. Among the various techniques, the metabolic black-box method seems to have advantages, although the number of measurements required is greater than for the other methods. However, the required extra measurements are based on commonly employed instruments in an industrial environment. This method is used for developing a model-based control of fed-batch yeast fermentations. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
Assessing dynamics, spatial scale, and uncertainty in task-related brain network analyses
Stephen, Emily P.; Lepage, Kyle Q.; Eden, Uri T.; Brunner, Peter; Schalk, Gerwin; Brumberg, Jonathan S.; Guenther, Frank H.; Kramer, Mark A.
2014-01-01
The brain is a complex network of interconnected elements, whose interactions evolve dynamically in time to cooperatively perform specific functions. A common technique to probe these interactions involves multi-sensor recordings of brain activity during a repeated task. Many techniques exist to characterize the resulting task-related activity, including establishing functional networks, which represent the statistical associations between brain areas. Although functional network inference is commonly employed to analyze neural time series data, techniques to assess the uncertainty—both in the functional network edges and the corresponding aggregate measures of network topology—are lacking. To address this, we describe a statistically principled approach for computing uncertainty in functional networks and aggregate network measures in task-related data. The approach is based on a resampling procedure that utilizes the trial structure common in experimental recordings. We show in simulations that this approach successfully identifies functional networks and associated measures of confidence emergent during a task in a variety of scenarios, including dynamically evolving networks. In addition, we describe a principled technique for establishing functional networks based on predetermined regions of interest using canonical correlation. Doing so provides additional robustness to the functional network inference. Finally, we illustrate the use of these methods on example invasive brain voltage recordings collected during an overt speech task. The general strategy described here—appropriate for static and dynamic network inference and different statistical measures of coupling—permits the evaluation of confidence in network measures in a variety of settings common to neuroscience. PMID:24678295
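One simple instance of the trial-resampling idea is a bootstrap over trials for a single correlation edge, as sketched below with synthetic data; the paper's procedure covers whole networks, aggregate topology measures, and canonical-correlation-based edges.

```python
import numpy as np

rng = np.random.default_rng(9)
n_trials, n_samples, n_sensors = 60, 200, 5
# Synthetic task data: sensors 0 and 1 share a task-related signal
common = rng.standard_normal((n_trials, n_samples))
data = rng.standard_normal((n_trials, n_samples, n_sensors))
data[:, :, 0] += common
data[:, :, 1] += common

def edge_strength(trials):
    """Functional network edge: trial-averaged correlation of sensors 0, 1."""
    r = [np.corrcoef(tr[:, 0], tr[:, 1])[0, 1] for tr in trials]
    return np.mean(r)

# Resample trials with replacement for a confidence interval on the edge
boot = [edge_strength(data[rng.integers(0, n_trials, n_trials)])
        for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"edge = {edge_strength(data):.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```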
Khangura, Jaspreet; Culleton, Bruce F; Manns, Braden J; Zhang, Jianguo; Barnieh, Lianne; Walsh, Michael; Klarenbach, Scott W; Tonelli, Marcello; Sarna, Magdalena; Hemmelgarn, Brenda R
2010-06-24
Left ventricular (LV) hypertrophy is common among patients on hemodialysis. While a relationship between blood pressure (BP) and LV hypertrophy has been established, it is unclear which BP measurement method is the strongest correlate of LV hypertrophy. We sought to determine agreement between various blood pressure measurement methods, as well as identify which method was the strongest correlate of LV hypertrophy among patients on hemodialysis. This was a post-hoc analysis of data from a randomized controlled trial. We evaluated the agreement between seven BP measurement methods: standardized measurement at baseline; single pre- and post-dialysis, as well as mean intra-dialytic measurement at baseline; and cumulative pre-, intra- and post-dialysis readings (an average of 12 monthly readings based on a single day per month). Agreement was assessed using Lin's concordance correlation coefficient (CCC) and the Bland Altman method. Association between BP measurement method and LV hypertrophy on baseline cardiac MRI was determined using receiver operating characteristic curves and area under the curve (AUC). Agreement between BP measurement methods in the 39 patients on hemodialysis varied considerably, from a CCC of 0.35 to 0.94, with overlapping 95% confidence intervals. Pre-dialysis measurements were the weakest predictors of LV hypertrophy while standardized, post- and inter-dialytic measurements had similar and strong (AUC 0.79 to 0.80) predictive power for LV hypertrophy. A single standardized BP has strong predictive power for LV hypertrophy and performs just as well as more resource intensive cumulative measurements, whereas pre-dialysis blood pressure measurements have the weakest predictive power for LV hypertrophy. Current guidelines, which recommend using pre-dialysis measurements, should be revisited to confirm these results.
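For readers unfamiliar with Lin's concordance correlation coefficient used above, here is a minimal self-contained computation; the blood pressure values are invented purely for illustration.

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's CCC: like Pearson's r, but also penalizes systematic
    differences in mean and scale between two measurement methods."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxy = np.cov(x, y, bias=True)[0, 1]
    return 2 * sxy / (x.var() + y.var() + (x.mean() - y.mean()) ** 2)

pre_dialysis = [150, 142, 160, 138, 155]   # hypothetical systolic BP, mmHg
standardized = [136, 130, 149, 128, 141]
print(round(lins_ccc(pre_dialysis, standardized), 2))
```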
Takács, Péter
2016-01-01
We compared the repeatability, reproducibility (intra- and inter-measurer similarity), separative power and subjectivity (measurer effect on results) of four morphometric methods frequently used in ichthyological research: the “traditional” caliper-based (TRA) and truss-network (TRU) distance methods and two geometric methods that compare landmark coordinates on the body (GMB) and scales (GMS). In each case, measurements were performed three times by three measurers on the same specimens of three common cyprinid species (roach Rutilus rutilus (Linnaeus, 1758), bleak Alburnus alburnus (Linnaeus, 1758) and Prussian carp Carassius gibelio (Bloch, 1782)) collected from three closely-situated sites in the Lake Balaton catchment (Hungary) in 2014. TRA measurements were made on conserved specimens using a digital caliper, while TRU, GMB and GMS measurements were undertaken on digital images of the bodies and scales. In most cases, intra-measurer repeatability was similar. While all four methods were able to differentiate the source populations, significant differences were observed in their repeatability, reproducibility and subjectivity. GMB displayed the highest overall repeatability and reproducibility and was least burdened by measurer effect. While GMS showed repeatability similar to GMB when fish scales had a characteristic shape, it showed significantly lower reproducibility (compared with its repeatability) for each species than the other methods. TRU showed repeatability similar to that of GMS. TRA was the least applicable method, as measurements were obtained from the fish itself, resulting in poor repeatability and reproducibility. Although all four methods showed some degree of subjectivity, TRA was the only method where population-level separation was entirely overwritten by measurer effect. Based on these results, we recommend a) avoiding the aggregation of different measurers’ datasets when using the TRA and GMS methods; and b) using image-based methods for morphometric surveys. Automation of the morphometric workflow would also reduce any measurer effect and eliminate measurement and data-input errors. PMID:27327896
Liquefaction assessment based on combined use of CPT and shear wave velocity measurements
NASA Astrophysics Data System (ADS)
Bán, Zoltán; Mahler, András; Győri, Erzsébet
2017-04-01
Soil liquefaction is one of the most devastating secondary effects of earthquakes and can cause significant damage to built infrastructure. For this reason, liquefaction hazard must be considered in all regions where moderate-to-high seismic activity coincides with saturated, loose, granular soil deposits. Several approaches exist to account for this hazard, of which the in-situ-test-based empirical methods are the most commonly used in practice. These methods are generally based on the results of CPT, SPT or shear wave velocity (VS) measurements. In more complex or high-risk projects, CPT and VS measurements are often performed at the same location, commonly in the form of seismic CPT. Furthermore, a VS profile determined by surface wave methods can also supplement a standard CPT measurement. However, the combined use of both in-situ indices in one single empirical method is limited. For this reason, the goal of this research was to develop such an empirical method within the framework of simplified empirical procedures, in which the results of CPT and VS measurements are used in parallel and can supplement each other. The combination of the two in-situ indices, a small-strain property measurement with a large-strain measurement, can reduce the uncertainty of empirical methods. In the first step, the existing liquefaction case-history databases were carefully reviewed to select sites where records of both CPT and VS measurements are available. After implementing the necessary corrections on the gathered 98 case histories with respect to fines content, overburden pressure and magnitude, a logistic regression was performed to obtain the probability contours of liquefaction occurrence. Logistic regression is often used to explore the relationship between a binary response and a set of explanatory variables. The occurrence or absence of liquefaction can be considered a binary outcome, and the equivalent clean-sand value of the normalized, overburden-corrected cone tip resistance (qc1Ncs), the overburden-corrected shear wave velocity (VS1), and the magnitude- and effective-stress-corrected cyclic stress ratio (CSR at M = 7.5 and σ'v = 1 atm) were considered as input variables. In this case, the graphical representation of the cyclic resistance ratio curve for a given probability is replaced by a surface that separates the liquefaction and non-liquefaction cases.
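A minimal sketch of the logistic-regression step described above, with a made-up case-history table; the variable names mirror the abstract (qc1Ncs, VS1, CSR) but the numbers are illustrative only.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: qc1Ncs, VS1 (m/s), CSR; y = 1 if liquefaction was observed.
X = np.array([[ 80, 160, 0.25],
              [170, 230, 0.12],
              [ 95, 175, 0.30],
              [210, 260, 0.18],
              [120, 190, 0.28],
              [190, 240, 0.10]], dtype=float)
y = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Iso-probability surfaces of this model in (qc1Ncs, VS1, CSR) space play
# the role of the classical cyclic-resistance-ratio curve.
print(model.predict_proba([[140, 200, 0.22]])[0, 1])
```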
A thioacidolysis method tailored for higher‐throughput quantitative analysis of lignin monomers
Harman-Ware, Anne E.; Foster, Cliff; Happs, Renee M.; Doeppke, Crissa; Meunier, Kristoffer; Gehan, Jackson; Yue, Fengxia; Lu, Fachuang; Davis, Mark F.
2016-01-01
Thioacidolysis is a method used to measure the relative content of lignin monomers bound by β‐O‐4 linkages. Current thioacidolysis methods are low‐throughput as they require tedious steps for reaction product concentration prior to analysis using standard GC methods. A quantitative thioacidolysis method that is accessible with general laboratory equipment and uses a non‐chlorinated organic solvent and is tailored for higher‐throughput analysis is reported. The method utilizes lignin arylglycerol monomer standards for calibration, requires 1–2 mg of biomass per assay and has been quantified using fast‐GC techniques including a Low Thermal Mass Modular Accelerated Column Heater (LTM MACH). Cumbersome steps, including standard purification, sample concentrating and drying have been eliminated to help aid in consecutive day‐to‐day analyses needed to sustain a high sample throughput for large screening experiments without the loss of quantitation accuracy. The method reported in this manuscript has been quantitatively validated against a commonly used thioacidolysis method and across two different research sites with three common biomass varieties to represent hardwoods, softwoods, and grasses. PMID:27534715
Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey
2015-01-01
Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, age, and between mother and offspring are examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of those results. © 2015 Wiley Periodicals, Inc.
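The paper's exact definition of the "standard error in percent" is not reproduced here; as a hedged sketch, one plausible reading compares it with the familiar coefficient of variation on replicate T/S ratios (the values below are invented).

```python
import numpy as np

def cv_percent(reps):
    """Coefficient of variation of replicates, in percent of the mean."""
    reps = np.asarray(reps, float)
    return 100 * reps.std(ddof=1) / reps.mean()

def se_percent(reps):
    """Standard error of the replicate mean, in percent of the mean
    (an assumed reading of the paper's 'standard error in percent')."""
    reps = np.asarray(reps, float)
    return 100 * reps.std(ddof=1) / (np.sqrt(len(reps)) * reps.mean())

triplicate_ts = [1.02, 0.95, 1.08]   # hypothetical qPCR T/S ratios
print(cv_percent(triplicate_ts), se_percent(triplicate_ts))
```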
Lubner, Sean D; Choi, Jeunghwan; Wehmeyer, Geoff; Waag, Bastian; Mishra, Vivek; Natesan, Harishankar; Bischof, John C; Dames, Chris
2015-01-01
Accurate knowledge of the thermal conductivity (k) of biological tissues is important for cryopreservation, thermal ablation, and cryosurgery. Here, we adapt the 3ω method, widely used for rigid, inorganic solids, as a reusable sensor to measure k of soft biological samples two orders of magnitude thinner than conventional tissue characterization methods allow. Analytical and numerical studies quantify the error of the commonly used "boundary mismatch approximation" of the bi-directional 3ω geometry, confirm that the generalized slope method is exact in the low-frequency limit, and bound its error for finite frequencies. The bi-directional 3ω measurement device is validated using control experiments to within ±2% (liquid water, standard deviation) and ±5% (ice). Measurements of mouse liver cover temperatures ranging from -69 °C to +33 °C. The liver results are independent of sample thicknesses from 3 mm down to 100 μm and agree with available literature for non-mouse liver to within the measurement scatter.
How Prevalent Is Object-Based Attention?
Pilz, Karin S.; Roggeveen, Alexa B.; Creighton, Sarah E.; Bennett, Patrick J.; Sekuler, Allison B.
2012-01-01
Previous research suggests that visual attention can be allocated to locations in space (space-based attention) and to objects (object-based attention). The cueing effects associated with space-based attention tend to be large and are found consistently across experiments. Object-based attention effects, however, are small and found less consistently across experiments. In three experiments we address the possibility that variability in object-based attention effects across studies reflects low incidence of such effects at the level of individual subjects. Experiment 1 measured space-based and object-based cueing effects for horizontal and vertical rectangles in 60 subjects comparing commonly used target detection and discrimination tasks. In Experiment 2 we ran another 120 subjects in a target discrimination task in which rectangle orientation varied between subjects. Using parametric statistical methods, we found object-based effects only for horizontal rectangles. Bootstrapping methods were used to measure effects in individual subjects. Significant space-based cueing effects were found in nearly all subjects in both experiments, across tasks and rectangle orientations. However, only a small number of subjects exhibited significant object-based cueing effects. Experiment 3 measured only object-based attention effects using another common paradigm and again, using bootstrapping, we found only a small number of subjects that exhibited significant object-based cueing effects. Our results show that object-based effects are more prevalent for horizontal rectangles, which is in accordance with the theory that attention may be allocated more easily along the horizontal meridian. The fact that so few individuals exhibit a significant object-based cueing effect presumably is why previous studies of this effect might have yielded inconsistent results. The results from the current study highlight the importance of considering individual subject data in addition to commonly used statistical methods. PMID:22348018
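A minimal sketch of per-subject bootstrapping along the lines described above; the reaction times and trial counts are invented, and the cueing effect is taken as the mean invalid-minus-valid RT difference.

```python
import numpy as np

def cueing_effect_ci(rt_invalid, rt_valid, n_boot=10000, seed=0):
    """Percentile bootstrap CI for one subject's cueing effect; the
    subject shows a significant effect if the CI excludes zero."""
    rng = np.random.default_rng(seed)
    rt_invalid = np.asarray(rt_invalid, float)
    rt_valid = np.asarray(rt_valid, float)
    diffs = [rng.choice(rt_invalid, rt_invalid.size).mean()
             - rng.choice(rt_valid, rt_valid.size).mean()
             for _ in range(n_boot)]
    return np.percentile(diffs, [2.5, 97.5])

rt_inv = 420 + 25 * np.random.default_rng(1).standard_normal(80)  # ms, toy data
rt_val = 405 + 25 * np.random.default_rng(2).standard_normal(80)
print(cueing_effect_ci(rt_inv, rt_val))
```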
Khalil, Sami F.; Mohktar, Mas S.; Ibrahim, Fatimah
2014-01-01
Bioimpedance analysis is a noninvasive, low-cost and commonly used approach for body composition measurement and assessment of clinical condition. There are a variety of methods for interpreting measured bioimpedance data and a wide range of uses of bioimpedance in body composition estimation and evaluation of clinical status. This paper reviews the main concepts of bioimpedance measurement techniques, including the frequency-based and allocation-based methods, bioimpedance vector analysis and real-time bioimpedance analysis systems. Commonly used prediction equations for body composition assessment and the influence of anthropometric measurements, gender, ethnic group, posture, measurement protocol and electrode artifacts on the estimated values are also discussed. In addition, this paper contributes to the deliberation on bioimpedance-based assessment of abnormal loss of lean body mass and unbalanced shifts in body fluids, and summarizes its diagnostic use in conditions such as cardiac, pulmonary, renal, neurological and infectious diseases. PMID:24949644
Practice patterns when treating patients with low back pain: a survey of physical therapists.
Davies, Claire; Nitz, Arthur J; Mattacola, Carl G; Kitzman, Patrick; Howell, Dana; Viele, Kert; Baxter, David; Brockopp, Dorothy
2014-08-01
Low back pain (LBP) is a common musculoskeletal problem, affecting 75-85% of adults in their lifetime. Direct costs of LBP in the USA were estimated at over 85 billion dollars in 2005, a significant economic burden for the healthcare system. LBP classification systems and outcome measures are available to guide physical therapy assessment and intervention. However, little is known about which of these, if any, physical therapists use in clinical practice. The purpose of this study was to identify the use of, and barriers to, LBP classification systems and outcome measures among physical therapists in one state. A mixed methods study using a cross-sectional cohort design with descriptive qualitative methods was performed. A survey collected both quantitative and qualitative data on the classification systems and outcome measures used by physical therapists working with patients with LBP. Physical therapists reported predominantly using classification systems designed to direct treatment. The McKenzie method was the most frequent approach to classifying LBP. Barriers to the use of classification systems and outcome measures were lack of knowledge, the perception that the systems were too limiting, and lack of time. Classification systems are being used for decision-making in physical therapy practice for patients with LBP. Lack of knowledge and training seems to be the main barrier to the use of classification systems in practice. The Oswestry Disability Index and Numerical Pain Scale were the most commonly used outcome measures. The main barrier to their use was lack of time. Continuing education and reading the literature were identified as important tools for teaching evidence-based practice to physical therapists in practice.
Analytical model and error analysis of arbitrary phasing technique for bunch length measurement
NASA Astrophysics Data System (ADS)
Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji
2018-05-01
An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σφ/2π → 0) and a small relative energy spread (σγ/γr → 0), the energy spread (Y = σγ²) at the exit of the traveling-wave linac has a parabolic relationship with the cosine of the injection phase (X = cos φr at z = 0), i.e., Y = AX² + BX + C. Analogous to quadrupole strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be chosen at random, which differs significantly from the commonly used zero-phasing method. Further, the systematic errors of the reported method, such as the influence of the space charge effect, are analyzed. This technique will be especially useful at low energies, when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
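To make the scanning idea concrete, here is a hedged numerical sketch: scan arbitrary injection phases, square the measured energy spread, and fit the parabola Y = AX² + BX + C. The scan values are invented, and the mapping from the fitted coefficients to σφ is machine-specific (see the paper).

```python
import numpy as np

phi_deg = np.array([-30, -15, 0, 15, 30, 45])        # arbitrary injection phases
sigma_g = np.array([2.1, 1.4, 0.9, 1.2, 1.9, 2.8])   # measured energy spreads (toy)

X = np.cos(np.radians(phi_deg))
Y = sigma_g ** 2

A, B, C = np.polyfit(X, Y, 2)   # least-squares parabola, as in the analytical model
print(A, B, C)                  # the bunch length follows from these coefficients
```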
Informal employment in high-income countries for a health inequalities research: A scoping review.
Julià, Mireia; Tarafa, Gemma; O'Campo, Patricia; Muntaner, Carles; Jódar, Pere; Benach, Joan
2015-01-01
Informal employment (IE) is one of the least studied employment conditions in public health research, mainly due to the difficulty of conceptualizing and measuring it; as a result, there is neither a unique concept nor a common method of measurement. The aim of this review is to identify literature on IE in order to improve its definition and methods of measurement, with special attention to high-income countries, so that its possible impact on health inequalities within and between countries can be studied. A scoping review of definitions and methods of measurement of IE was conducted by searching relevant databases and grey literature and analyzing selected articles. We found a wide spectrum of terms describing IE, as well as of definitions and methods of measurement. We provide a definition of IE to be used in health inequalities research in high-income countries. Direct methods such as surveys can capture more information about workers and firms in order to estimate IE. These results can be used in further investigations of the impact of IE on health inequalities. Public health research must improve the monitoring and analysis of IE in order to understand the impact of this employment condition on health inequalities.
NASA Astrophysics Data System (ADS)
Zhang, N.; Quiring, S. M.; Ochsner, T. E.
2017-12-01
Soil moisture monitoring networks commonly adopt different sensor technologies, which results in different measurement units and depths and impedes large-scale soil moisture applications that seek to integrate data from multiple networks. Therefore, a comprehensive comparison of different sensors is required to identify the best approach for integrating and homogenizing measurements from different sensors. This study compares three commonly used sensors, the Stevens Water Hydra Probe, the Campbell Scientific CS616 TDR, and the CS 229-L heat dissipation sensor, based on data from May 2010 to December 2012 from the Marena, Oklahoma, In Situ Sensor Testbed (MOISST). All sensors are installed at common depths of 5, 10, 20, 50, and 100 cm. The results reveal that the differences between the three sensors tend to increase with depth. The CDF plots showed that the CS 229 is the most sensitive to moisture variation in dry conditions and the most easily saturated in wet conditions, followed by the Hydra Probe and the CS616. Our results show that calculating percentiles is a good normalization method for standardizing measurements from different sensors. Our preliminary results demonstrate that CDF matching can be used to convert measurements from one sensor to another.
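A minimal sketch of the CDF-matching conversion suggested above, using synthetic records; the sensor names are only labels and the toy "systematic difference" is invented.

```python
import numpy as np

def cdf_match(source_record, target_record, values):
    """Convert readings from the source sensor to the target sensor's
    scale: find each value's percentile in the source record, then take
    the target record's quantile at that percentile."""
    src = np.sort(np.asarray(source_record, float))
    pct = np.searchsorted(src, values) / src.size      # empirical CDF
    return np.quantile(target_record, np.clip(pct, 0.0, 1.0))

rng = np.random.default_rng(1)
hydra = rng.beta(2, 5, 1000) * 0.5    # toy volumetric water content record
cs616 = hydra * 1.1 + 0.02            # toy systematic offset between sensors
print(cdf_match(hydra, cs616, [0.10, 0.25]))
```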
Multi-capillary based optical sensors for highly sensitive protein detection
NASA Astrophysics Data System (ADS)
Okuyama, Yasuhira; Katagiri, Takashi; Matsuura, Yuji
2017-04-01
A fluorescence measurement method based on glass multi-capillaries for detecting trace amounts of proteins is proposed. It promises enhanced sensitivity owing to the expanded adsorption area and the longitudinal excitation. The sensitivity of this method was investigated using biotin-streptavidin binding. Experiments showed that the sensitivity was improved by a factor of 70 relative to common glass wells. We also confirmed that our measurement system could detect 1 pg/mL of streptavidin. These results suggest that the multi-capillary has potential as a highly sensitive biosensor.
PAT-tools for process control in pharmaceutical film coating applications.
Knop, Klaus; Kleinebudde, Peter
2013-12-05
Recent developments in analytical techniques to monitor the coating process of pharmaceutical solid dosage forms such as pellets and tablets are described. The progress from off- or at-line measurements to on- or in-line applications is shown for the spectroscopic methods near infrared (NIR) and Raman spectroscopy as well as for terahertz pulsed imaging (TPI) and image analysis. The common goal of all these methods is to control, or at least to monitor, the coating process and/or to estimate the coating end point through timely measurements. Copyright © 2013 Elsevier B.V. All rights reserved.
Applications of luminescent systems to infectious disease methodology
NASA Technical Reports Server (NTRS)
Picciolo, G. L.; Chappelle, E. W.; Deming, J. W.; Mcgarry, M. A.; Nibley, D. A.; Okrend, H.; Thomas, R. R.
1976-01-01
The characterization of a clinical sample by a simple, fast, accurate, automatable analytical measurement is important in the management of infectious disease. Luminescence assays offer methods rich with options for these measurements. The instrumentation is common to each assay, and the investment is reasonable. Three general procedures, developed to varying degrees of completeness, measure bacterial levels via their ATP, FMN and iron porphyrins. Bacteriuria detection and antibiograms can be determined within half a day. The characterization of the sample for its soluble ATP, FMN or porphyrins was also performed.
Newborn Jaundice Technologies: Unbound Bilirubin and Bilirubin Binding Capacity In Neonates
Amin, Sanjiv B.; Lamola, Angelo A.
2011-01-01
Neonatal jaundice (hyperbilirubinemia), extremely common in neonates, can be associated with neurotoxicity. A safe level of bilirubin has not been defined in either premature or term infants. Emerging evidence suggests that the level of unbound (or "free") bilirubin has better sensitivity and specificity than total serum bilirubin for bilirubin-induced neurotoxicity. Although recent studies suggest the usefulness of free bilirubin measurements in managing high-risk neonates, including premature infants, there currently exists no widely available method to assay the serum free bilirubin concentration. To keep pace with the growing demand, in addition to the reevaluation of old methods, several promising new methods are being developed for sensitive, accurate, and rapid measurement of free bilirubin and bilirubin binding capacity. These innovative methods need to be validated before adoption for clinical use. We provide an overview of some promising methods for free bilirubin and binding capacity measurements, with the goal of enhancing research in this area of active interest and apparent need. PMID:21641486
Multiplex cDNA quantification method that facilitates the standardization of gene expression data
Gotoh, Osamu; Murakami, Yasufumi; Suyama, Akira
2011-01-01
Microarray-based gene expression measurement is one of the major methods for transcriptome analysis. However, current microarray data are substantially affected by microarray platforms and RNA references because the microarray method provides only the relative amounts of gene expression levels. Therefore, valid comparisons of microarray data require standardized platforms, internal and/or external controls and complicated normalizations. These requirements impose limitations on the extensive comparison of gene expression data. Here, we report an effective approach to removing these unfavorable limitations by measuring the absolute amounts of gene expression levels on common DNA microarrays. We have developed a multiplex cDNA quantification method called GEP-DEAN (Gene expression profiling by DCN-encoding-based analysis). The method was validated using chemically synthesized DNA strands of known quantities and cDNA samples prepared from mouse liver, demonstrating that the absolute amounts of cDNA strands were successfully measured with a sensitivity of 18 zmol in a highly multiplexed manner in 7 h. PMID:21415008
Cell-free measurements of brightness of fluorescently labeled antibodies
Zhou, Haiying; Tourkakis, George; Shi, Dennis; Kim, David M.; Zhang, Hairong; Du, Tommy; Eades, William C.; Berezin, Mikhail Y.
2017-01-01
Validation of imaging contrast agents, such as fluorescently labeled imaging antibodies, has been recognized as a critical challenge in clinical and preclinical studies. As the number of applications for imaging antibodies grows, these materials are increasingly being subjected to careful scrutiny. Antibody fluorescent brightness is one of the key parameters of critical importance. Direct measurements of brightness with common spectroscopy methods are challenging, because the fluorescent properties of imaging antibodies are highly sensitive to the method of conjugation, degree of labeling, and contamination with free dyes. Traditional methods rely on cell-based assays that lack reproducibility and accuracy. In this manuscript, we present a novel and general approach for measuring brightness using antibody-avid polystyrene beads and flow cytometry. Compared to a cell-based method, the described technique is rapid, quantitative, and highly reproducible. The proposed method requires less than ten micrograms of sample and is applicable to optimizing synthetic conjugation procedures, testing commercial imaging antibodies, and performing high-throughput validation of conjugation procedures. PMID:28150730
Occupational exposure assessment for crystalline silica dust: approach in Poland and worldwide.
Maciejewska, Aleksandra
2008-01-01
Crystalline silica is a health hazard commonly encountered in the work environment. Occupational exposure to crystalline silica dust concerns workers employed in industries such as the mineral, fuel-energy, metal, chemical and construction industries. It is estimated that over 2 million workers in the European Union are exposed to crystalline silica. In Poland, over 50 thousand people work under conditions of silica dust exposure exceeding the occupational exposure limit. The assessment of occupational exposure to crystalline silica is a multi-phase process, primarily dependent on workplace measurements, quantitative analyses of samples, and comparison of results with the respective standards. The present article summarizes the approaches to, and methods used for, assessment of exposure to crystalline silica as adopted in different countries in the EU and worldwide. It also compares the occupational limit values in force in almost 40 countries. Further, it points out the consequences of the fact that IARC has classified the two most common forms of crystalline silica, quartz and cristobalite, as human carcinogens. The article includes an inter-country review of the methods used for air sample collection, dust concentration measurements, and determination of crystalline silica. The selection was based on the GESTIS database, which lists the methods approved by the European Union for measurements and tests regarding hazardous agents. Special attention has been paid to the methods for determining crystalline silica. The author analyzes the influence of analytical techniques, sample preparation and reference materials on determination results. The operating parameters of the methods, including limit of detection, limit of quantification, and precision, are also compared.
Language Individuation and Marker Words: Shakespeare and His Maxwell's Demon.
Marsden, John; Budden, David; Craig, Hugh; Moscato, Pablo
2013-01-01
Within the structural and grammatical bounds of a common language, all authors develop their own distinctive writing styles. Whether the relative occurrence of common words can be measured to produce accurate models of authorship is of particular interest. This work introduces a new score that helps to highlight such variations in word occurrence, and is applied to produce models of authorship of a large group of plays from the Shakespearean era. A text corpus containing 55,055 unique words was generated from 168 plays from the Shakespearean era (16th and 17th centuries) of undisputed authorship. A new score, CM1, is introduced to measure variation patterns based on the frequency of occurrence of each word for the authors John Fletcher, Ben Jonson, Thomas Middleton and William Shakespeare, compared to the rest of the authors in the study (which provides a reference of relative word usage at that time). A total of 50 WEKA methods were applied for Fletcher, Jonson and Middleton, to identify those which were able to produce models yielding over 90% classification accuracy. This ensemble of WEKA methods was then applied to model Shakespearean authorship across all 168 plays, yielding a Matthews' correlation coefficient (MCC) performance of over 90%. Furthermore, the best model yielded an MCC of 99%. Our results suggest that different authors, while adhering to the structural and grammatical bounds of a common language, develop measurably distinct styles by the tendency to over-utilise or avoid particular common words and phrasings. Considering language and the potential of words as an abstract chaotic system with a high entropy, similarities can be drawn to the Maxwell's Demon thought experiment; authors subconsciously favour or filter certain words, modifying the probability profile in ways that could reflect their individuality and style.
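For reference, the Matthews correlation coefficient reported above can be computed from a binary confusion matrix as follows; the counts are hypothetical (they merely sum to the study's 168 plays).

```python
import numpy as np

def mcc(tp, fp, fn, tn):
    """Matthews correlation coefficient: +1 is perfect attribution,
    0 is chance level, -1 is total disagreement."""
    den = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / den if den else 0.0

print(round(mcc(tp=27, fp=1, fn=1, tn=139), 2))  # toy Shakespeare-vs-rest counts
```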
A statistical approach to root system classification
Bodner, Gernot; Leitner, Daniel; Nakhforoosh, Alireza; Sobotik, Monika; Moder, Karl; Kaul, Hans-Peter
2013-01-01
Plant root systems have a key role in ecology and agronomy. In spite of the fast increase in root studies, there is still no classification that allows distinguishing among distinctive characteristics within the diversity of rooting strategies. Our hypothesis is that a multivariate approach for “plant functional type” identification in ecology can be applied to the classification of root systems. The classification method presented is based on a data-defined statistical procedure without a priori decisions on the classifiers. The study demonstrates that principal-component-based rooting types provide efficient and meaningful multi-trait classifiers. The classification method is exemplified with simulated root architectures and morphological field data. Simulated root architectures showed that morphological attributes with spatial distribution parameters capture the most distinctive features within root system diversity. While developmental type (tap vs. shoot-borne systems) is a strong but coarse classifier, topological traits provide the most detailed differentiation among distinctive groups. The adequacy of commonly available morphological traits for classification is supported by the field data. Rooting types emerging from the measured data were mainly distinguished into diameter/weight-dominated and density-dominated types. Similarity of root systems within distinctive groups was the joint result of phylogenetic relation and environmental as well as human selection pressure. We concluded that the data-defined classification is appropriate for integrating knowledge obtained with different root measurement methods and at various scales. Currently, root morphology is the most promising basis for classification due to widely used common measurement protocols. To capture details of root diversity, efforts in architectural measurement techniques are essential. PMID:23914200
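A hedged sketch of the principal-component-based classification described above: standardize morphological traits, project onto the leading components, and let clusters define the rooting types. The trait values are simulated stand-ins, not the study's data.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
traits = rng.normal(size=(60, 4))   # rows: root systems; cols: e.g. diameter,
                                    # weight, length density, branching measure

scores = PCA(n_components=2).fit_transform(StandardScaler().fit_transform(traits))
rooting_type = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
print(np.bincount(rooting_type))    # sizes of the emergent rooting types
```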
Heterogeneity measures in hydrological frequency analysis: review and new developments
NASA Astrophysics Data System (ADS)
Requena, Ana I.; Chebana, Fateh; Ouarda, Taha B. M. J.
2017-03-01
Some regional procedures to estimate hydrological quantiles at ungauged sites, such as the index-flood method, require the delineation of homogeneous regions as a basic step for their application. The homogeneity of these delineated regions is usually tested in a way that provides only a yes/no decision. However, complementary measures that can quantify the degree of heterogeneity of a region are needed to compare regions, evaluate the impact of particular sites, and rank the performance of different delineation methods. Well-known existing heterogeneity measures are not well suited for ranking regions, as they entail drawbacks such as assuming a given probability distribution, producing negative values and being affected by region size. Therefore, a framework for defining and assessing the desirable properties of a heterogeneity measure in the regional hydrological context is needed. In the present study, such a framework is proposed through a four-step procedure based on Monte Carlo simulations. Several heterogeneity measures, some commonly known and others derived from recent approaches or adapted from other fields, are presented and developed for assessment. The assumption-free Gini index applied to the at-site L-variation coefficient (L-CV) values over a region led to the best results. The percentage of sites for which the regional L-CV falls outside the confidence interval of the at-site L-CV is also found to be relevant, as it leads to more stable results regardless of the regional L-CV value. An illustrative application is also presented for didactic purposes, through which the subjectivity of commonly used criteria for assessing the performance of different delineation methods is underlined.
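As an illustration of the preferred measure, a minimal Gini index over at-site L-CV values (the numbers are invented); unlike measures tied to a fitted distribution, it is non-negative and bounded, so regions can be ranked.

```python
import numpy as np

def gini(x):
    """Gini index of positive values: 0 for a perfectly homogeneous
    region, approaching 1 with increasing heterogeneity."""
    x = np.sort(np.asarray(x, float))
    n = x.size
    return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

lcv_sites = [0.21, 0.24, 0.22, 0.35, 0.23]   # hypothetical at-site L-CVs
print(round(gini(lcv_sites), 3))
```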
Measurements and analysis in imaging for biomedical applications
NASA Astrophysics Data System (ADS)
Hoeller, Timothy L.
2009-02-01
A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation is necessary for health care groups to improve prognostics using measurement analysis. Proprietary measurement-analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in histomic images arise from numerous complex factors, the variation in imaging-analysis results can be correlated with the time course, extent, and progression of illness. Statistical methods are preferred. The proprietary measurement-analysis software can be applied to any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners use quantified measurement data for patient diagnosis, the techniques reported here can be used to track and isolate causes better. Comparisons, norms, and trends are available from the processing of measurement data, which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of preventative and corrective care, in ophthalmology and dermatology respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Such systems support improvements towards Lean and Six Sigma practices affecting all branches of biology and medicine. As an example of the use of statistics, the major types of variation in a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities, whereas common causes are known to be associated with gender, race, size, and genetic make-up. Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results to baseline data using F-statistics. Self-pairings over time are also useful. Special and common causes are identified apart from aging in applying the statistical methods. In the future, implementation of imaging measurement methods by research staff, doctors, and concerned patient partners will result in improved health diagnosis, reporting, and cause determination. The long-term prospect for quantified measurements is better quality in imaging analysis, with applications of higher utility for health care providers.
NASA Technical Reports Server (NTRS)
Colombo, Oscar L. (Editor)
1992-01-01
This symposium on space and airborne techniques for measuring gravity fields, and related theory, contains papers on gravity modeling of Mars and Venus at NASA/GSFC, an integrated laser Doppler method for measuring planetary gravity fields, observed temporal variations in the earth's gravity field from 16-year Starlette orbit analysis, high-resolution gravity models combining terrestrial and satellite data, the effect of water vapor corrections for satellite altimeter measurements of the geoid, and laboratory demonstrations of superconducting gravity and inertial sensors for space and airborne gravity measurements. Other papers are on airborne gravity measurements over the Kelvin Seamount; the accuracy of GPS-derived acceleration from moving platform tests; airborne gravimetry, altimetry, and GPS navigation errors; controlling common mode stabilization errors in airborne gravity gradiometry, GPS/INS gravity measurements in space and on a balloon, and Walsh-Fourier series expansion of the earth's gravitational potential.
Vachon, Celine M.; Scott, Christopher G.; Fasching, Peter A.; Hall, Per; Tamimi, Rulla M.; Li, Jingmei; Stone, Jennifer; Apicella, Carmel; Odefrey, Fabrice; Gierach, Gretchen L.; Jud, Sebastian M.; Heusinger, Katharina; Beckmann, Matthias W.; Pollan, Marina; Fernández-Navarro, Pablo; González-Neira, Anna; Benítez, Javier; van Gils, Carla H.; Lokate, Mariëtte; Onland-Moret, N. Charlotte; Peeters, Petra H.M.; Brown, Judith; Leyland, Jean; Varghese, Jajini S.; Easton, Douglas F.; Thompson, Deborah J.; Luben, Robert N.; Warren, Ruth ML; Wareham, Nicholas J.; Loos, Ruth JF; Khaw, Kay-Tee; Ursin, Giske; Lee, Eunjung; Gayther, Simon A.; Ramus, Susan J.; Eeles, Rosalind A.; Leach, Martin O.; Kwan-Lim, Gek; Couch, Fergus J.; Giles, Graham G.; Baglietto, Laura; Krishnan, Kavitha; Southey, Melissa C.; Le Marchand, Loic; Kolonel, Laurence N.; Woolcott, Christy; Maskarinec, Gertraud; Haiman, Christopher A; Walker, Kate; Johnson, Nichola; McCormack, Valerie A.; Biong, Margarethe; Alnæs, Grethe I.G.; Gram, Inger Torhild; Kristensen, Vessela N.; Børresen-Dale, Anne-Lise; Lindström, Sara; Hankinson, Susan E.; Hunter, David J.; Andrulis, Irene L.; Knight, Julia A.; Boyd, Norman F.; Figueroa, Jonine D.; Lissowska, Jolanta; Wesolowska, Ewa; Peplonska, Beata; Bukowska, Agnieszka; Reszka, Edyta; Liu, JianJun; Eriksson, Louise; Czene, Kamila; Audley, Tina; Wu, Anna H.; Pankratz, V. Shane; Hopper, John L.; dos-Santos-Silva, Isabel
2013-01-01
Background Mammographic density adjusted for age and body mass index (BMI) is a heritable marker of breast cancer susceptibility. Little is known about the biological mechanisms underlying the association between mammographic density and breast cancer risk. We examined whether common low-penetrance breast cancer susceptibility variants contribute to inter-individual differences in mammographic density measures. Methods We established an international consortium (DENSNP) of 19 studies from 10 countries, comprising 16,895 Caucasian women, to conduct a pooled cross-sectional analysis of common breast cancer susceptibility variants in 14 independent loci and mammographic density measures. Dense and non-dense areas, and percent density, were measured using interactive-thresholding techniques. Mixed linear models were used to assess the association between genetic variants and the square roots of mammographic density measures adjusted for study, age, case status, body mass index (BMI) and menopausal status. Results Consistent with their breast cancer associations, the C-allele of rs3817198 in LSP1 was positively associated with both adjusted dense area (p=0.00005) and adjusted percent density (p=0.001) whereas the A-allele of rs10483813 in RAD51L1 was inversely associated with adjusted percent density (p=0.003), but not with adjusted dense area (p=0.07). Conclusion We identified two common breast cancer susceptibility variants associated with mammographic measures of radio-dense tissue in the breast gland. Impact We examined the association of 14 established breast cancer susceptibility loci with mammographic density phenotypes within a large genetic consortium and identified two breast cancer susceptibility variants, LSP1-rs3817198 and RAD51L1-rs10483813, associated with mammographic measures and in the same direction as the breast cancer association. PMID:22454379
Residual stress measurement in a metal microdevice by micro Raman spectroscopy
NASA Astrophysics Data System (ADS)
Song, Chang; Du, Liqun; Qi, Leijie; Li, Yu; Li, Xiaojun; Li, Yuanqi
2017-10-01
Large residual stresses induced during the electroforming process cannot be ignored when fabricating reliable metal microdevices. Accurate measurement is the basis for studying residual stress. Owing to the micron-scale topological feature sizes of metal microdevices, the residual stress in them can hardly be measured by common methods. In this manuscript, a methodology is proposed to measure the residual stress in a metal microdevice using micro Raman spectroscopy (MRS). To estimate the residual stress in the metal material, micron-sized β-SiC particles were mixed into the electroforming solution for codeposition. First, an expression relating the Raman shifts to the induced biaxial stress in β-SiC was derived based on the theory of phonon deformation potentials and Hooke's law. Corresponding micro electroforming experiments were performed, and the residual stress in the Ni-SiC composite layer was measured by both x-ray diffraction (XRD) and MRS. Then, the validity of the MRS measurements was verified by comparison with the residual stress measured by the XRD method. The reliability of the MRS method was further validated by a statistical Student's t-test. The MRS measurements were found to have no systematic error in comparison with the XRD measurements, which confirms that the residual stresses measured by the MRS method are reliable. In addition, the MRS method, with which the residual stress in a micro inertial switch was measured, has been confirmed to be a convincing experimental tool for estimating the residual stress in metal microdevices with micron-order topological feature sizes.
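A hedged sketch of the validation step: a paired Student's t-test between MRS and XRD stresses measured at the same locations (the values are invented); a large p-value is consistent with no systematic error between the methods.

```python
from scipy import stats

mrs = [312, 298, 335, 320, 305]   # hypothetical residual stresses, MPa
xrd = [318, 291, 329, 326, 300]
t, p = stats.ttest_rel(mrs, xrd)  # paired test for a systematic offset
print(t, p)
```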
New Method Developed to Measure Contact Angles of a Sessile Drop
NASA Technical Reports Server (NTRS)
Chao, David F.; Zhang, Nengli
2002-01-01
The spreading of an evaporating liquid on a solid surface occurs in many practical processes and is of importance in a number of practical situations such as painting, textile dyeing, coating, gluing, and thermal engineering. Typical processes involving heat transfer where the contact angle plays an important role are film cooling, boiling, and heat transfer through heat pipes. The biological phenomenon of cell spreading is also analogous to drop spreading (ref. 1). In the study of spreading, the dynamic contact angle describes the interfacial properties on solid substrates and, therefore, has been studied by physicists and fluid mechanics investigators. The dynamic contact angle of a spreading nonvolatile liquid drop provides a simple tool in the study of the free-boundary problem, but the study of the spreading of a volatile liquid drop is of more practical interest because the evaporation of common liquids is inevitable in practical processes. The most common method to measure the contact angle, the contact radius, and the height of a sessile drop on a solid surface is to view the drop from its edge through an optical microscope. However, this method gives only local information in the view direction. Zhang and Yang (ref. 2) developed a laser shadowgraphy method to investigate the evaporation of a sessile drop on a glass plate. As described here, Zhang and Chao (refs. 3 and 4) improved the method and suggested a new optical arrangement to measure the dynamic contact angle and the instant evaporation rate of a sessile drop with much higher accuracy (error less than 1 percent). With this method, any fluid motion in the evaporating drop can be visualized through shadowgraphy without using a tracer, which often affects the field under investigation.
DOT National Transportation Integrated Search
2014-01-01
The Maine Department of Transportation (MaineDOT) has noted poor correlation between predicted pile resistances : calculated using commonly accepted design methods and measured pile resistance from dynamic pile load tests (also : referred to as high ...
This paper describes the application and method performance parameters of a Luminex xMAP™ bead-based, multiplex immunoassay for measuring specific antibody responses in saliva samples (n=5438) to antigens of six common waterborne pathogens (Campylobacter jejuni, Helicobacter pylo...
DOT National Transportation Integrated Search
1993-12-01
The Alternating Current Potential Drop (ACPD) method is investigated as a means of making measurements in laboratory experiments on the initiation and growth of multiple site damage (MSD) cracks in a common aluminum alloy used for aircraft constructi...
Plug-and-play web-based visualization of mobile air monitoring data
The collection of air measurements in real-time on moving platforms, such as wearable, bicycle-mounted, or vehicle-mounted air sensors, is becoming an increasingly common method to investigate local air quality. However, visualizing and analyzing geospatial air monitoring data r...
Permeable pavement surfaces are infiltration based stormwater control measures (SCM) commonly applied in parking lots to decrease impervious area and reduce runoff volume. Many are not optimally designed however, as little attention is given to draining a large enough contributin...
Implementation of the UV-VIS method to measure organic content in clay soils: technical report.
DOT National Transportation Integrated Search
2011-05-01
The Texas Department of Transportation has been having problems with organic matter in soils that they : stabilize for use as subgrade layers in road construction. The organic matter reduces the effectiveness of : common soil additives (lime/cement) ...
Study of the conservation of mechanical energy in the motion of a pendulum using a smartphone
NASA Astrophysics Data System (ADS)
Pierratos, Theodoros; Polatoglou, Hariton M.
2018-01-01
A common method that scientists use to validate a theory is to apply known principles and laws to produce results in specific settings, which can then be assessed using appropriate experimental methods and apparatuses. Smartphones have various built-in sensors and can be used for measuring and logging data in physics experiments. In this work, we propose that students use smartphones to study the conservation of mechanical energy in a simple pendulum. Common smartphones do not have a velocity sensor, which would otherwise make verifying the conservation of mechanical energy a simple task. To overcome this, one can use the accelerometer to measure the centripetal acceleration on the mass and, from that, deduce the maximum velocity. In this study, we show that this can be achieved with reasonable uncertainty using a mobile device. Thus, we developed an experiment that corroborates the conservation of mechanical energy and can be performed in the classroom.
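The deduction step can be made concrete: at the lowest point of the swing the accelerometer (after subtracting gravity) reads the centripetal acceleration a_c = v²/L, so v_max = sqrt(a_c·L), which can be checked against the energy-conservation prediction v = sqrt(2gh). A minimal sketch, with an assumed pendulum length and release angle:

```python
import math

g = 9.81                      # m/s^2
L = 1.0                       # pendulum length, m (assumed)
theta0 = math.radians(15.0)   # release angle (assumed)

# Height drop from the release point to the lowest point of the swing
h = L * (1.0 - math.cos(theta0))
v_energy = math.sqrt(2.0 * g * h)      # prediction from m*g*h = m*v^2/2

# Pretend the accelerometer reported this centripetal acceleration at the
# lowest point (here synthesized from the same energy argument)
a_c = v_energy**2 / L
v_from_sensor = math.sqrt(a_c * L)     # what students would compute

print(f"v_max: energy {v_energy:.3f} m/s, sensor {v_from_sensor:.3f} m/s")
```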
Reliability of the Inverse Water Volumetry Method to Measure the Volume of the Upper Limb.
Beek, Martinus A; te Slaa, Alexander; van der Laan, Lijckle; Mulder, Paul G H; Rutten, Harm J T; Voogd, Adri C; Luiten, Ernest J T; Gobardhan, Paul D
2015-06-01
Lymphedema of the upper extremity is a common side effect of lymph node dissection or irradiation of the axilla. Several techniques are applied to examine the presence and severity of lymphedema. Measurement of the circumference of the upper extremity is most frequently performed. An alternative is the water-displacement method. The aim of this study was to determine the reliability and reproducibility of the "Inverse Water Volumetry apparatus" (IWV-apparatus), which is based on the water-displacement method, for the measurement of arm volumes. Measurements were performed by three breast cancer nurse practitioners on ten healthy volunteers in three weekly sessions. The intra-class correlation coefficient, defined as the ratio of the subject variance component to the total variance, equaled 0.99. The reliability index was calculated as 0.14 kg, indicating that only changes in a patient's arm volume measurement of more than 0.14 kg would represent a true change in arm volume; this is about 6% of the mean arm volume of 2.3 kg. The IWV-apparatus proved to be a reliable and reproducible method to measure arm volume.
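The intra-class correlation defined above (subject variance over total variance) can be estimated from a subjects-by-sessions table with a one-way random-effects ANOVA. A sketch on synthetic data whose layout mirrors the design described (10 volunteers, 3 sessions); the variance components are assumptions, not the study's measurements:

```python
import numpy as np

def icc_oneway(data: np.ndarray) -> float:
    """One-way random-effects ICC: between-subject variance over total
    variance. Rows are subjects, columns are repeated sessions."""
    n, k = data.shape
    grand = data.mean()
    ms_between = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_within = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    var_subject = (ms_between - ms_within) / k
    return var_subject / (var_subject + ms_within)

# 10 volunteers, 3 weekly sessions (synthetic arm volumes in kg)
rng = np.random.default_rng(0)
true_volumes = rng.normal(2.3, 0.25, size=(10, 1))
data = true_volumes + rng.normal(0.0, 0.03, size=(10, 3))
print(f"ICC = {icc_oneway(data):.2f}")
```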
Measuring hemoglobin amount and oxygen saturation of skin with advancing age
NASA Astrophysics Data System (ADS)
Watanabe, Shumpei; Yamamoto, Satoshi; Yamauchi, Midori; Tsumura, Norimichi; Ogawa-Ochiai, Keiko; Akiba, Tetsuo
2012-03-01
We measured the oxygen saturation of skin at various ages using our previously proposed method, which can rapidly simulate skin spectral reflectance with high accuracy. Oxygen saturation is commonly measured by a pulse oximeter to evaluate oxygen delivery, for monitoring the function of the heart and lungs at a specific time. Oxygen saturation of skin, on the other hand, is expected to assess peripheral conditions. Our previously proposed method, the optical path-length matrix method (OPLM), is based on the Monte Carlo model for multi-layered media (MCML) but can simulate skin spectral reflectance 27,000 times faster than MCML. In this study, we implemented an iterative simulation of OPLM with a nonlinear optimization technique so that the method can also be used for estimating hemoglobin concentration and oxygen saturation from the measured skin spectral reflectance. In the experiments, the skin reflectance spectra of 72 outpatients aged between 20 and 86 years were measured by a spectrophotometer. Three points were measured for each subject: the forearm, the thenar eminence, and the intermediate phalanx. The results showed that the oxygen saturation of skin remained constant at each point as age varied.
Self Calibrated Wireless Distributed Environmental Sensory Networks
Fishbain, Barak; Moreno-Centeno, Erick
2016-01-01
Recent advances in sensory and communication technologies have made Wireless Distributed Environmental Sensory Networks (WDESN) technically and economically feasible. WDESNs present an unprecedented tool for studying many environmental processes in a new way. However, the calibration process is a major obstacle to WDESNs becoming common practice. Here, we present a new, robust and efficient method for aggregating measurements acquired by an uncalibrated WDESN and producing accurate estimates of the observed environmental variable's true levels, rendering the network self-calibrated. The suggested method is novel both in group decision-making and in environmental sensing, offering a valuable tool for aggregating distributed environmental monitoring data. Applying the method to an extensive real-life air-pollution dataset yielded markedly more accurate results than both common practice and the state of the art. PMID:27098279
Holmes, W J M; Timmons, M J; Kauser, S
2015-10-01
Techniques used to estimate implant size for primary breast augmentation have evolved since the 1970s. Currently no consensus exists on the optimal method of selecting implant size for primary breast augmentation. In 2013 we asked United Kingdom consultant plastic surgeons who were full members of BAPRAS or BAAPS what their technique was for implant size selection for primary aesthetic breast augmentation. We also asked what range of implant sizes they commonly used. The answers to the first question were grouped into four categories: experience, measurements, pre-operative external sizers and intra-operative sizers. The response rate was 46% (164/358). Overall, 95% (153/159) of respondents performed some form of pre-operative assessment; the others relied on "experience" only. The most common technique for pre-operative assessment was external sizers (74%). Measurements were used by 57% of respondents, and 3% used intra-operative sizers only. A combination of measurements and sizers was used by 34% of respondents. The most common measurements were breast base (68%), breast tissue compliance (19%), breast height (15%), and chest diameter (9%). The median implant size commonly used in primary breast augmentation was 300 cc. Pre-operative external sizers are the most common technique used by UK consultant plastic surgeons to select implant size for primary breast augmentation. We discuss these findings in relation to the evolution of pre-operative planning techniques for breast augmentation. Copyright © 2015 British Association of Plastic, Reconstructive and Aesthetic Surgeons. Published by Elsevier Ltd. All rights reserved.
Testing Common Envelopes on Double White Dwarf Binaries
NASA Astrophysics Data System (ADS)
Nandez, Jose L. A.; Ivanova, Natalia; Lombardi, James C., Jr.
2015-06-01
The formation of a double white dwarf binary likely involves a common envelope (CE) event between a red giant and a white dwarf (WD) during the most recent episode of Roche lobe overflow mass transfer. We study the role of recombination energy with hydrodynamic simulations of such stellar interactions. We find that recombination energy helps to expel the common envelope entirely, while if recombination energy is not taken into account, a significant fraction of the common envelope remains bound. We apply our numerical methods to constrain the progenitor system for WD 1101+364, a double WD binary that has a well-measured mass ratio of q = 0.87 ± 0.03 and an orbital period of 0.145 days. Our best-fit progenitor for the pre-common-envelope donor is a 1.5 M⊙ red giant.
ERIC Educational Resources Information Center
Penfield, Randall D.; Giacobbi, Peter R., Jr.; Myers, Nicholas D.
2007-01-01
One aspect of construct validity is the extent to which the measurement properties of a rating scale are invariant across the groups being compared. An increasingly used method for assessing between-group differences in the measurement properties of items of a scale is the framework of differential item functioning (DIF). In this paper we…
Stuart, Elizabeth A.; Lee, Brian K.; Leacy, Finbarr P.
2013-01-01
Objective: Examining covariate balance is the prescribed method for determining when propensity score methods are successful at reducing bias. This study assessed the performance of various balance measures, including a proposed balance measure based on the prognostic score (also known as the disease-risk score), to determine which balance measures best correlate with bias in the treatment effect estimate. Study Design and Setting: The correlations of multiple common balance measures with bias in the treatment effect estimate produced by weighting by the odds, subclassification on the propensity score, and full matching on the propensity score were calculated. Simulated data were used, based on realistic data settings. Settings included both continuous and binary covariates and continuous covariates only. Results: The standardized mean difference in prognostic scores, the mean standardized mean difference, and the mean t-statistic all had high correlations with bias in the effect estimate. Overall, prognostic scores displayed the highest correlations of all the balance measures considered. Prognostic score measure performance was generally not affected by model misspecification and performed well under a variety of scenarios. Conclusion: Researchers should consider using prognostic score–based balance measures for assessing the performance of propensity score methods for reducing bias in non-experimental studies. PMID:23849158
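Of the balance measures compared, the standardized mean difference is the simplest to reproduce: the (optionally weighted) difference in covariate means between treatment groups divided by a pooled standard deviation. A generic sketch with synthetic data, not the paper's simulation settings; applying the same statistic to prognostic scores rather than raw covariates gives the prognostic-score balance measure the authors recommend.

```python
import numpy as np

def standardized_mean_difference(x_treat, x_ctrl, w_treat=None, w_ctrl=None):
    """Standardized mean difference of one covariate between groups,
    optionally with weights (e.g. from weighting by the odds)."""
    mean_t = np.average(x_treat, weights=w_treat)
    mean_c = np.average(x_ctrl, weights=w_ctrl)
    # Pool the unweighted group variances, a common convention in balance checks
    pooled_sd = np.sqrt((np.var(x_treat, ddof=1) + np.var(x_ctrl, ddof=1)) / 2.0)
    return (mean_t - mean_c) / pooled_sd

rng = np.random.default_rng(1)
x_t = rng.normal(0.3, 1.0, 500)    # covariate in the treated group
x_c = rng.normal(0.0, 1.0, 600)    # covariate in the control group
print(f"SMD before adjustment: {standardized_mean_difference(x_t, x_c):.2f}")
```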
Methods of measurement signal acquisition from the rotational flow meter for frequency analysis
NASA Astrophysics Data System (ADS)
Świsulski, Dariusz; Hanus, Robert; Zych, Marcin; Petryka, Leszek
One of the simplest and most commonly used instruments for measuring the flow of homogeneous substances is the rotational flow meter. The main part of such a device is a rotor (vane or screw) rotating at a speed that is a function of the fluid or gas flow rate. A pulse signal with a frequency proportional to the speed of the rotor is obtained at the sensor output. Under dynamic conditions, the variable interval between pulses prevents direct analysis of the measurement signal. Therefore, the authors developed a method in which measured values are determined from the last inter-pulse interval preceding the moment designated by a timing generator. For larger changes of the measured value at a predetermined time, the value can be determined by extrapolation from the two adjacent inter-pulse intervals, assuming a linear change in the flow. The proposed methods yield constant spacing between measurements, enabling analyses that require it, such as studying the dynamics of changes in the test flow with a Fourier transform. To present the advantages of these methods, simulations of flow measurement were carried out with a DRH-1140 rotor flow meter from the company Kobold.
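A sketch of the first resampling rule: at every tick of a uniform timing grid, take the reciprocal of the last complete inter-pulse interval as the instantaneous frequency (proportional to flow), producing evenly spaced samples suitable for an FFT. The pulse train, grid rate, and modulation below are invented for illustration:

```python
import numpy as np

def resample_pulse_frequency(pulse_times, sample_times):
    """Instantaneous pulse frequency (proportional to flow) on a uniform
    time grid, taken from the last complete inter-pulse interval that
    precedes each sampling instant -- the first method described above."""
    freqs = []
    for t in sample_times:
        idx = np.searchsorted(pulse_times, t) - 1   # last pulse at or before t
        if idx < 1:
            freqs.append(np.nan)                    # no complete interval yet
        else:
            freqs.append(1.0 / (pulse_times[idx] - pulse_times[idx - 1]))
    return np.array(freqs)

# Synthetic pulse train from a flow whose rate oscillates around 100 Hz
t, pulses = 0.0, []
while t < 1.0:
    f = 100.0 + 10.0 * np.sin(2.0 * np.pi * 2.0 * t)   # 2 Hz flow modulation
    t += 1.0 / f
    pulses.append(t)

grid = np.arange(0.05, 1.0, 0.01)                      # uniform 100 Hz grid
f_est = resample_pulse_frequency(np.array(pulses), grid)

# Evenly spaced samples can now feed standard spectral analysis
spectrum = np.abs(np.fft.rfft(f_est - f_est.mean()))
print("dominant spectral bin:", spectrum[1:].argmax() + 1)
```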
Measuring Thermodynamic Properties of Metals and Alloys With Knudsen Effusion Mass Spectrometry
NASA Technical Reports Server (NTRS)
Copland, Evan H.; Jacobson, Nathan S.
2010-01-01
This report reviews Knudsen effusion mass spectrometry (KEMS) as it relates to thermodynamic measurements of metals and alloys. First, general aspects are reviewed, with emphasis on the Knudsen-cell vapor source and molecular beam formation, and mass spectrometry issues germane to this type of instrument are discussed briefly. The relationship between the vapor pressure inside the effusion cell and the measured ion intensity is the key to KEMS and is derived in detail. Then common methods used to determine thermodynamic quantities with KEMS are discussed. Enthalpies of vaporization, the fundamental measurement, are determined from the variation of relative partial pressure with temperature using the second-law method or by calculating a free energy of formation and subtracting the entropy contribution using the third-law method. For single-cell KEMS instruments, measurements can be used to determine the partial Gibbs free energy if the sensitivity factor remains constant over multiple experiments. The ion-current ratio method and dimer-monomer method are also viable in some systems. For a multiple-cell KEMS instrument, activities are obtained by direct comparison with a suitable component reference state or a secondary standard. Internal checks for correct instrument operation and general procedural guidelines also are discussed. Finally, general comments are made about future directions in measuring alloy thermodynamics with KEMS.
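The second-law method mentioned above can be stated compactly: since the partial pressure p is proportional to I·T, a plot of ln(I·T) against 1/T has slope -ΔH_vap/R. A synthetic sketch with assumed numbers, just to show the slope recovery; these are not KEMS data:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

# Synthetic ion intensities: since p is proportional to I*T,
# ln(I*T) = -dH_vap/(R*T) + const
T = np.linspace(1400.0, 1600.0, 9)        # temperatures, K (assumed)
dH_true = 350e3                           # enthalpy of vaporization, J/mol (assumed)
IT = np.exp(-dH_true / (R * T) + 20.0)    # I*T values a run might yield

# Second-law method: slope of ln(I*T) vs 1/T equals -dH_vap/R
slope, _intercept = np.polyfit(1.0 / T, np.log(IT), 1)
print(f"recovered dH_vap = {-slope * R / 1e3:.1f} kJ/mol")
```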
Yin, Xiaoming; Li, Xiang; Zhao, Liping; Fang, Zhongping
2009-11-10
A Shack-Hartmann wavefront sensor (SHWS) splits the incident wavefront into many subsections and transforms distorted-wavefront detection into centroid measurement. The accuracy of the centroid measurement determines the accuracy of the SHWS. Many methods have been presented to improve the accuracy of wavefront centroid measurement. However, most of these methods are discussed from the point of view of optics, based on the assumption that the spot intensity of the SHWS has a Gaussian distribution, which is not applicable to the digital SHWS. In this paper, we present a centroid measurement algorithm based on adaptive thresholding and dynamic windowing, using image processing techniques, for practical application of the digital SHWS in surface profile measurement. The method can detect the centroid of each focal spot precisely and robustly by eliminating the influence of various noise sources, such as diffraction of the digital SHWS, unevenness and instability of the light source, and deviation between the centroid of the focal spot and the center of the detection area. The experimental results demonstrate that the algorithm has better precision, repeatability, and stability than other commonly used centroid methods, such as the statistical averaging, thresholding, and windowing algorithms.
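One plausible reading of "adaptive thresholding and dynamic windowing" is sketched below: crop a window around each lenslet's expected spot, estimate the local background from the window border, subtract an adaptive threshold, and take an intensity-weighted centroid. The window size, threshold rule, and synthetic spot are assumptions for illustration, not the authors' exact algorithm.

```python
import numpy as np

def spot_centroid(img, seed, win=15, k=3.0):
    """Centroid of one focal spot: crop a dynamic window around the seed,
    estimate background from the window border, subtract an adaptive
    threshold (mean + k*std of the border), then take the intensity-
    weighted centroid of what remains."""
    r0, c0 = seed
    half = win // 2
    sub = img[r0 - half:r0 + half + 1, c0 - half:c0 + half + 1].astype(float)
    border = np.concatenate([sub[0], sub[-1], sub[1:-1, 0], sub[1:-1, -1]])
    w = np.clip(sub - (border.mean() + k * border.std()), 0.0, None)
    rows, cols = np.indices(sub.shape)
    return (r0 - half + (rows * w).sum() / w.sum(),
            c0 - half + (cols * w).sum() / w.sum())

# Synthetic spot centred near (32.4, 30.8) on a noisy 64x64 frame
rr, cc = np.indices((64, 64))
img = 100.0 * np.exp(-((rr - 32.4)**2 + (cc - 30.8)**2) / 8.0)
img += np.random.default_rng(2).normal(5.0, 1.0, img.shape)
print(spot_centroid(img, (32, 31)))
```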
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fulvio, D., E-mail: daniele.fulvio@uni-jena.de, E-mail: dfu@oact.inaf.it; Brieva, A. C.; Jäger, C.
2014-07-07
Vacuum-ultraviolet (VUV) radiation is responsible for the photo-processing of simple and complex molecules in several terrestrial and extraterrestrial environments. In the laboratory such radiation is commonly simulated by inexpensive and easy-to-use microwave-powered hydrogen discharge lamps. However, VUV flux measurements are not trivial, and the methods/devices typically used for this purpose, mainly actinometry and calibrated VUV silicon photodiodes, are either not very accurate or expensive, and lack general suitability across experimental setups. Here, we present a straightforward method for measuring the VUV photon flux based on the photoelectric effect, using a gold photodetector. This method is easily applicable to most experimental setups, bypasses the major problems of the other methods, and provides reliable flux measurements. As a case study, the method is applied to a microwave-powered hydrogen discharge lamp. In addition, comparison of these flux measurements with those obtained in O2 actinometry experiments allows us to estimate the quantum yield (QY) values QY(122 nm) = 0.44 ± 0.16 and QY(160 nm) = 0.87 ± 0.30 for solid-phase O2 actinometry.
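The photoelectric principle reduces to a one-line conversion: every emitted photoelectron corresponds to 1/QY incident photons, so the photon flux is the measured photocurrent divided by the elementary charge times the cathode quantum yield at the lamp wavelength. The current and yield below are placeholder numbers, not values from the paper:

```python
E_CHARGE = 1.602e-19   # elementary charge, C

def photon_flux(photocurrent_A, quantum_yield):
    """Photon flux (photons/s) on the photocathode: each emitted electron
    corresponds to 1/QY incident photons."""
    return photocurrent_A / (E_CHARGE * quantum_yield)

# Placeholder values: 10 nA photocurrent, 1% gold quantum yield
print(f"flux ~ {photon_flux(10e-9, 0.01):.2e} photons/s")
```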
Im, Jeong-Soo; Choi, Soon Ho; Hong, Duho; Seo, Hwa Jeong; Park, Subin; Hong, Jin Pyo
2011-01-01
This study was conducted to examine differences in proximal risk factors and suicide methods by sex and age in the national suicide mortality data of Korea. Data were collected from the National Police Agency and the National Statistical Office of Korea on suicide completers from 2004 to 2006. The 31,711 suicide case records were used to analyze suicide rates, methods, and proximal risk factors by sex and age. The suicide rate increased with age, especially in men. The most common proximal risk factor for suicide was medical illness in both sexes. The most common proximal risk factor for subjects younger than 30 years was conflict in relationships with family members, partner, or friends. Medical illness increased in prevalence as a risk factor with age. Hanging/suffocation was the most common suicide method in both sexes. The use of drug/pesticide poisoning for suicide increased with age. A fall from height or hanging/suffocation was more common in the younger age groups. Because proximal risk factors and suicide methods varied with sex and age, different suicide prevention measures are required that take both of these parameters into consideration. Copyright © 2011 Elsevier Inc. All rights reserved.
Hughes, Sarah A; Huang, Rongfu; Mahaffey, Ashley; Chelme-Ayala, Pamela; Klamerth, Nikolaus; Meshref, Mohamed N A; Ibrahim, Mohamed D; Brown, Christine; Peru, Kerry M; Headley, John V; Gamal El-Din, Mohamed
2017-11-01
There are several established methods for the determination of naphthenic acids (NAs) in waters associated with oil sands mining operations. Due to their highly complex nature, the measured concentration and composition of NAs vary depending on the method used. This study compared different common sample preparation techniques, analytical instrument methods, and analytical standards for measuring NAs in groundwater and process water samples collected from an active oil sands operation. In general, the high- and ultrahigh-resolution methods, namely ultra-performance liquid chromatography time-of-flight mass spectrometry (UPLC-TOF-MS) and Orbitrap mass spectrometry (Orbitrap-MS), were within an order of magnitude of the Fourier transform infrared spectroscopy (FTIR) methods. The gas chromatography mass spectrometry (GC-MS) methods consistently gave the highest NA concentrations and greatest standard error. Total NA concentration was not statistically different between sample preparation by solid phase extraction and by liquid-liquid extraction. Calibration standards influenced quantitation results. This work provides a comprehensive understanding of the inherent differences among the various techniques available to measure NAs and hence the potential differences in measured amounts of NAs in samples. Results from this study will contribute to analytical method standardization for NA analysis in oil sands related water samples. Copyright © 2017 Elsevier Ltd. All rights reserved.
Automated measurement of office, home and ambulatory blood pressure in atrial fibrillation.
Kollias, Anastasios; Stergiou, George S
2014-01-01
1. Hypertension and atrial fibrillation (AF) often coexist and are strong risk factors for stroke. Current guidelines for blood pressure (BP) measurement in AF recommend repeated measurements using the auscultatory method, whereas the accuracy of the automated devices is regarded as questionable. This review presents the current evidence on the feasibility and accuracy of automated BP measurement in the presence of AF and the potential for automated detection of undiagnosed AF during such measurements. 2. Studies evaluating the use of automated BP monitors in AF are limited and have significant heterogeneity in methodology and protocols. Overall, the oscillometric method is feasible for static (office or home) and ambulatory use and appears to be more accurate for systolic than diastolic BP measurement. 3. Given that systolic hypertension is particularly common and important in the elderly, the automated BP measurement method may be acceptable for self-home and ambulatory monitoring, but not for professional office or clinic measurement. 4. An embedded algorithm for the detection of asymptomatic AF during routine automated BP measurement with high diagnostic accuracy has been developed and appears to be a useful screening tool for elderly hypertensives. © 2013 Wiley Publishing Asia Pty Ltd.
NASA Technical Reports Server (NTRS)
Uber, James G.
1988-01-01
Software itself is not hazardous, but since software and hardware share common interfaces there is an opportunity for software to create hazards. Further, these software systems are complex, and proven methods for the design, analysis, and measurement of software safety are not yet available. Some past software failures, future NASA software trends, software engineering methods, and tools and techniques for various software safety analyses are reviewed. Recommendations to NASA are made based on this review.
Martin F. Jurgensen; Deborah S. Page-Dumroese; Robert E. Brown; Joanne M. Tirocke; Chris A. Miller; James B. Pickens; Min Wang
2017-01-01
Soils with high rock content are common in many US forests, and contain large amounts of stored C. Accurate measurements of soil bulk density and rock content are critical for calculating and assessing changes in both C and nutrient pool size, but bulk density sampling methods have limitations and sources of variability. Therefore, we evaluated the use of small-...
A review of setup error in supine breast radiotherapy using cone-beam computed tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Batumalai, Vikneswary, E-mail: Vikneswary.batumalai@sswahs.nsw.gov.au; Liverpool and Macarthur Cancer Therapy Centres, New South Wales; Ingham Institute of Applied Medical Research, Sydney, New South Wales
2016-10-01
Setup error in breast radiotherapy (RT) measured with 3-dimensional cone-beam computed tomography (CBCT) is becoming more common. The purpose of this study is to review the literature on the magnitude of setup error in breast RT measured with CBCT. The different methods of image registration between CBCT and the planning computed tomography (CT) scan were also explored. A literature search, not limited by date, was conducted using Medline and Google Scholar with the following key words: breast cancer, RT, setup error, and CBCT. This review includes studies that reported on systematic and random errors, and the methods used when registering CBCT scans with the planning CT scan. A total of 11 relevant studies were identified for inclusion in this review. The average magnitude of error is generally less than 5 mm across the studies reviewed. The common registration methods used when registering CBCT scans with the planning CT scan are based on bony anatomy, soft tissue, and surgical clips. No clear relationship between the setup errors detected and the method of registration was observed in this review. Further studies are needed to assess the benefit of CBCT over electronic portal imaging, as CBCT remains unproven to be of wide benefit in breast RT.
Measuring nanoscale viscoelastic parameters of cells directly from AFM force-displacement curves.
Efremov, Yuri M; Wang, Wen-Horng; Hardy, Shana D; Geahlen, Robert L; Raman, Arvind
2017-05-08
Force-displacement (F-Z) curves are the most commonly used Atomic Force Microscopy (AFM) mode for measuring the local, nanoscale elastic properties of soft materials like living cells. Yet a theoretical framework has been lacking that allows post-processing of F-Z data to extract viscoelastic constitutive parameters. Here, we propose a new method to extract nanoscale viscoelastic properties of soft samples like living cells and hydrogels directly from conventional AFM F-Z experiments, thereby creating a common platform for the analysis of cell elastic and viscoelastic properties with arbitrary linear constitutive relations. The method, based on the elastic-viscoelastic correspondence principle, was validated using finite element (FE) simulations and by comparison with existing AFM techniques on living cells and hydrogels. The method also allows discrimination of which viscoelastic relaxation model, for example, standard linear solid (SLS) or power-law rheology (PLR), best fits the experimental data. The method was used to extract the viscoelastic properties of benign and cancerous cell lines (NIH 3T3 fibroblasts, NMuMG epithelial cells, and MDA-MB-231 and MCF-7 breast cancer cells). Finally, we studied changes in viscoelastic properties related to tumorigenesis, including TGF-β induced epithelial-to-mesenchymal transition in NMuMG cells and Syk expression induced phenotype changes in MDA-MB-231 cells.
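For orientation, the purely elastic baseline that such viscoelastic analyses generalize is the Hertz contact fit: for a spherical tip, F = (4/3)·E/(1-ν²)·√R·δ^(3/2). The sketch below fits that elastic model to a synthetic F-Z curve; the tip radius, Poisson ratio, and noise level are assumptions, and the paper's correspondence-principle method goes well beyond this single-parameter fit.

```python
import numpy as np
from scipy.optimize import curve_fit

R_TIP = 5e-6   # spherical tip radius, m (assumed)
NU = 0.5       # Poisson ratio, incompressible material (common for cells)

def hertz_force(delta, E):
    """Hertz contact force (N) vs indentation (m) for a spherical tip:
    F = (4/3) * E/(1 - nu^2) * sqrt(R) * delta^(3/2)."""
    return (4.0 / 3.0) * (E / (1.0 - NU**2)) * np.sqrt(R_TIP) * delta**1.5

rng = np.random.default_rng(6)
delta = np.linspace(0.0, 1e-6, 200)                      # indentation ramp
force = hertz_force(delta, 1000.0) * (1.0 + 0.05 * rng.normal(size=delta.size))

E_fit, _cov = curve_fit(hertz_force, delta, force, p0=[500.0])
print(f"fitted Young's modulus: {E_fit[0]:.0f} Pa")
```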
Interferometer for Measuring Displacement to Within 20 pm
NASA Technical Reports Server (NTRS)
Zhao, Feng
2003-01-01
An optical heterodyne interferometer that can be used to measure linear displacements with an error <=20 pm has been developed. The remarkable accuracy of this interferometer is achieved through a design that includes (1) a wavefront split that reduces self-interference (relative to the amplitude splits used in other interferometers) and (2) a common-optical-path configuration that affords common-mode cancellation of the interference effects of thermal-expansion changes in optical-path lengths. The most popular method of displacement-measuring interferometry involves two beams, the polarizations of which are meant to be kept orthogonal upstream of the final interference location, where the difference between the phases of the two beams is measured. Polarization leakages (deviations from the desired perfect orthogonality) contaminate the phase measurement with periodic nonlinear errors. In commercial interferometers, these phase-measurement errors result in displacement errors in the approximate range of 1 to 10 nm. Moreover, because prior interferometers lack compensation for thermal-expansion changes in optical-path lengths, they are subject to additional displacement errors characterized by a temperature sensitivity of about 100 nm/K. Because the present interferometer does not utilize polarization in the separation and combination of the two interfering beams, and because of the common-mode cancellation of thermal-expansion effects, the periodic nonlinear errors and the sensitivity to temperature changes are much smaller than in other interferometers.
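In any heterodyne displacement interferometer the measured phase maps to displacement through the wavelength and the number of optical passes. The generic conversion is sketched below with placeholder values; the specific layout and wavelength of the instrument above are not stated here, so both are assumptions:

```python
import math

def displacement_from_phase(phase_rad, wavelength_m, passes=2):
    """Displacement from measured heterodyne phase: each wavelength of
    motion produces passes * 2*pi of phase, so
    d = phase/(2*pi) * wavelength/passes."""
    return phase_rad / (2.0 * math.pi) * wavelength_m / passes

# 1 mrad of resolved phase at 1550 nm, double pass -> ~0.1 nm scale
d = displacement_from_phase(1e-3, 1550e-9)
print(f"d = {d * 1e12:.1f} pm")
```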
An advanced analysis method of initial orbit determination with too short arc data
NASA Astrophysics Data System (ADS)
Li, Binzhe; Fang, Li
2018-02-01
This paper studies initial orbit determination (IOD) based on space-based angle measurements. Commonly, these space-based observations have short durations. As a result, classical initial orbit determination algorithms, such as Laplace methods and Gauss methods, give poor results. In this paper, an advanced analysis method of initial orbit determination is developed for space-based observations. The admissible region and triangulation are introduced in the method. A genetic algorithm is also used to add constraints on the parameters. Simulation results show that the algorithm can successfully complete the initial orbit determination.
Digital micromirror device-based common-path quantitative phase imaging.
Zheng, Cheng; Zhou, Renjie; Kuang, Cuifang; Zhao, Guangyuan; Yaqoob, Zahid; So, Peter T C
2017-04-01
We propose a novel common-path quantitative phase imaging (QPI) method based on a digital micromirror device (DMD). The DMD is placed in a plane conjugate to the objective back-aperture plane for the purpose of generating two plane waves that illuminate the sample. A pinhole is used in the detection arm to filter one of the beams after the sample to create a reference beam. Additionally, a transmission-type liquid crystal device, placed at the objective back-aperture plane, eliminates the specular reflection noise arising from all the "off"-state DMD micromirrors, which is common in all DMD-based illuminations. We have demonstrated high-sensitivity QPI, with measured spatial and temporal noise of 4.92 nm and 2.16 nm, respectively. Experiments with calibrated polystyrene beads demonstrate the desired phase measurement accuracy. In addition, we have measured the dynamic height maps of red blood cell membrane fluctuations, showing the efficacy of the proposed system for live cell imaging. Most importantly, the DMD grants the system convenience in varying the interference fringe period on the camera to easily satisfy the pixel sampling conditions. This feature also alleviates the pinhole alignment complexity. We envision that the proposed DMD-based common-path QPI system will allow for system miniaturization and automation for broader adoption.
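As a point of reference for the noise figures quoted: temporal noise in QPI is conventionally taken as the per-pixel standard deviation over repeated height maps of a blank field, averaged over the field of view. A generic sketch of that definition on synthetic frames; the 2.16 nm level is simply reused here as the simulated noise, and this is not the paper's data path:

```python
import numpy as np

# 100 repeated height maps of a blank field, 64x64 px, with 2.16 nm of
# per-pixel Gaussian noise standing in for the instrument's fluctuations
rng = np.random.default_rng(5)
frames = rng.normal(0.0, 2.16, size=(100, 64, 64))

# Temporal noise: std over frames at each pixel, averaged over the field
temporal_noise = frames.std(axis=0, ddof=1).mean()
print(f"temporal noise ~ {temporal_noise:.2f} nm")
```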
Overmars, Koen P; Helming, John; van Zeijts, Henk; Jansson, Torbjörn; Terluin, Ida
2013-09-15
In this paper we describe a methodology to model the impacts of policy measures within the Common Agricultural Policy (CAP) on farm production, income and prices, and on farmland biodiversity. Two stylised scenarios are used to illustrate how the method works. The effects of CAP measures, such as subsidies and regulations, are calculated and translated into changes in land use and land-use intensity. These factors are then used to model biodiversity with a species-based indicator on a 1 km scale in the EU27. The Common Agricultural Policy Regionalised Impact Modelling System (CAPRI) is used to conduct the economic analysis and Dyna-CLUE (Conversion of Land Use and its Effects) is used to model land use changes. An indicator that expresses the relative species richness was used as the indicator for biodiversity in agricultural areas. The methodology is illustrated with a baseline scenario and two scenarios that include a specific policy. The strength of the methodology is that impacts of economic policy instruments can be linked to changes in agricultural production, prices and incomes, on the one hand, and to biodiversity effects, on the other - with land use and land-use intensity as the connecting drivers. The method provides an overall assessment, but for detailed impact assessment at landscape, farm or field level, additional analysis would be required. Copyright © 2013 Elsevier Ltd. All rights reserved.
Measuring Diffusion of Liquids by Common-Path Interferometry
NASA Technical Reports Server (NTRS)
Rashidnia, Nasser
2003-01-01
A method of observing the interdiffusion of a pair of miscible liquids is based on the use of a common-path interferometer (CPI) to measure the spatially varying gradient of the index of refraction in the interfacial region in which the interdiffusion takes place. Assuming that the indices of refraction of the two liquids are different and that the gradient of the index of refraction is proportional to the gradient in the relative concentrations of either liquid, the diffusivity of the pair of liquids can be calculated from the temporal evolution of the spatial variation of the index of refraction. This method yields robust measurements and does not require precise knowledge of the indices of refraction of the pure liquids. Moreover, the CPI instrumentation is compact and is optomechanically robust by virtue of its common-path design. The two liquids are placed in a transparent rectangular parallelepiped test cell. Initially, the interface between the liquids is a horizontal plane, above which lies pure liquid 2 (the less dense liquid) and below which lies pure liquid 1 (the denser liquid). The subsequent interdiffusion of the liquids gives rise to a gradient of concentration and a corresponding gradient of the index of refraction in a mixing layer. For the purpose of observing the interdiffusion, the test cell is placed in the test section of the CPI, in which a collimated, polarized beam of light from a low-power laser is projected horizontally through a region that contains the mixing layer.
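The underlying mathematics is the classical one-dimensional interdiffusion solution: the concentration follows an error-function profile, so the index gradient is Gaussian with variance 2Dt, and D follows from how that width grows in time. A minimal numerical sketch of that relation, with an assumed diffusivity and time rather than CPI data:

```python
import numpy as np

D_true = 1e-9       # diffusivity, m^2/s (assumed)
t_obs = 600.0       # elapsed time, s (assumed)

# 1-D interdiffusion: c(x,t) = 0.5 * (1 + erf(x / (2*sqrt(D*t)))), so the
# index-of-refraction gradient dc/dx is Gaussian with variance 2*D*t
x = np.linspace(-5e-3, 5e-3, 4001)
grad = np.exp(-x**2 / (4.0 * D_true * t_obs))   # gradient profile shape

# Recover D from the second moment of the gradient profile
variance = np.sum(x**2 * grad) / np.sum(grad)   # should equal 2*D*t
print(f"estimated D = {variance / (2.0 * t_obs):.2e} m^2/s")
```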
Hamel, Jean-Francois; Saulnier, Patrick; Pe, Madeline; Zikos, Efstathios; Musoro, Jammbe; Coens, Corneel; Bottomley, Andrew
2017-09-01
Over the last decades, Health-related Quality of Life (HRQoL) end-points have become an important outcome of randomised controlled trials (RCTs). HRQoL methodology in RCTs has improved following international consensus recommendations. However, no international recommendations exist concerning the statistical analysis of such data. The aim of our study was to identify and characterise the quality of the statistical methods commonly used for analysing HRQoL data in cancer RCTs. Building on our recently published systematic review, we analysed a total of 33 published RCTs, covering the HRQoL analysis methods reported in RCTs since 1991. We focussed on the ability of the methods to deal with the three major problems commonly encountered when analysing HRQoL data: its multidimensional and longitudinal structure and the commonly high rate of missing data. All studies reported HRQoL being assessed repeatedly over time, for periods ranging from 2 to 36 months. Missing data were common, with compliance rates ranging from 45% to 90%. From the 33 studies considered, 12 different statistical methods were identified. Twenty-nine studies analysed each of the questionnaire sub-dimensions without type I error adjustment. Thirteen studies repeated the HRQoL analysis at each assessment time, again without type I error adjustment. Only 8 studies used methods suitable for repeated measurements. Our findings show a lack of consistency in statistical methods for analysing HRQoL data. Problems related to multiple comparisons were rarely considered, leading to a high risk of false positive results. It is therefore critical that international recommendations for improving such statistical practices are developed. Copyright © 2017. Published by Elsevier Ltd.
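The type I error adjustment these trials omitted is straightforward to apply; for instance, the Holm step-down procedure controls the family-wise error rate across sub-dimension tests with no distributional assumptions. A self-contained sketch (the p-values are invented):

```python
def holm_adjust(pvals):
    """Holm step-down adjusted p-values, controlling the family-wise
    type I error rate across multiple sub-dimension comparisons."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running_max = [0.0] * m, 0.0
    for rank, i in enumerate(order):
        running_max = max(running_max, (m - rank) * pvals[i])
        adjusted[i] = min(1.0, running_max)
    return adjusted

# Hypothetical raw p-values from five HRQoL sub-dimension tests
print(holm_adjust([0.003, 0.04, 0.02, 0.30, 0.011]))
```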
Task exposures in an office environment: a comparison of methods.
Van Eerd, Dwayne; Hogg-Johnson, Sheilah; Mazumder, Anjali; Cole, Donald; Wells, Richard; Moore, Anne
2009-10-01
Task-related factors such as frequency and duration are associated with musculoskeletal disorders in office settings. The primary objective was to compare various task recording methods as measures of exposure in an office workplace. A total of 41 workers in different jobs were recruited from a large urban newspaper (71% female; mean age 41 years, SD 9.6). Questionnaires, task diaries, direct observation, and video methods were used to record tasks. A common set of task codes was used across methods. Different estimates of task duration, number of tasks, and task transitions arose from the different methods. Self-report methods did not consistently result in longer task duration estimates. Methodological issues could explain some of the differences in estimates observed between methods. It was concluded that different task recording methods result in different estimates of exposure, likely because they capture different exposure constructs. This work addresses issues of exposure measurement in office environments. It is of relevance to ergonomists and researchers interested in how best to assess the risk of injury among office workers. The paper discusses the trade-offs among precision, accuracy, and burden in the collection of computer task-based exposure measures and the different underlying constructs captured by each method.
Hassan, Wafaa El-Sayed
2008-08-01
Three rapid, simple, reproducible and sensitive extractive colorimetric methods (A–C) for assaying dothiepin hydrochloride (I) and risperidone (II) in bulk samples and in dosage forms were investigated. Methods A and B are based on the formation of ion-pair complexes with methyl orange (A) and orange G (B), whereas method C depends on ternary complex formation between cobalt thiocyanate and the studied drug I or II. The optimum reaction conditions were investigated, and the calibration curves resulting from the absorbance-concentration measurements of the extracted complexes were linear over the concentration ranges 0.1–12 µg ml(-1) for method A, 0.5–11 µg ml(-1) for method B, and 3.2–80 µg ml(-1) for method C, with relative standard deviations (RSD) of 1.17 and 1.28 for drugs I and II, respectively. The molar absorptivity, Sandell sensitivity, Ringbom optimum concentration ranges, and detection and quantification limits for all complexes were calculated and evaluated at maximum wavelengths of 423, 498, and 625 nm for methods A, B, and C, respectively. The interference from excipients commonly present in dosage forms and from common degradation products was studied. The proposed methods are highly specific for the determination of drugs I and II in their dosage forms, applying the standard additions technique without any interference from common excipients. The proposed methods were compared statistically to the reference methods and found to be simple, accurate (t-test) and reproducible (F-value).
Gajnik, Davorin; Peternel, Renata
2009-12-01
The increase in ragweed-mediated health problems has led to the development of defense strategies in the countries with the most serious ragweed pollution, namely Hungary, Italy and France. The aim of this paper is to define the frequency of allergic disturbances caused by ragweed pollen in the period between 2002 and 2004, and to devise an action plan for its eradication in the area of Zagreb and Zagreb County. Analysis of the common methods of ragweed eradication, including biological eradication, indicates that the best efficiency would be achieved through a method that combines several common methods, i.e., a mixed method. In its order on measures for the obligatory eradication of ragweed, the Ministry stated that it would ensure funds so that the Plant Protection Department in Agriculture and Forestry of the Republic of Croatia and the Department of Herbology of the Faculty of Agriculture of Zagreb could observe the beginning and continuity of ragweed blossom, evaluate the degree of weed overgrowth, establish how widespread it is in the Republic of Croatia, inform the public via the media on control measures, and perform other duties; however, no financial help is provided to those who are selected to eradicate ragweed.
CameraHRV: robust measurement of heart rate variability using a camera
NASA Astrophysics Data System (ADS)
Pai, Amruta; Veeraraghavan, Ashok; Sabharwal, Ashutosh
2018-02-01
The inter-beat interval (the period of the cardiac cycle) changes slightly with every heartbeat; this variation is measured as Heart Rate Variability (HRV). HRV is presumed to arise from interactions between the parasympathetic and sympathetic nervous systems. Therefore, it is sometimes used as an indicator of an individual's stress level. HRV also reveals some clinical information about cardiac health. Currently, HRV is accurately measured using contact devices such as a pulse oximeter. However, recent research in the field of non-contact imaging photoplethysmography (iPPG) has made it possible to measure vital signs from just a video recording of any exposed skin (such as a person's face). The current signal processing methods for extracting HRV using peak detection perform well for contact-based systems but poorly for iPPG signals. The main reason for this poor performance is that current methods are sensitive to the large noise sources often present in iPPG data. Further, current methods are not robust to the motion artifacts that are common in iPPG systems. We developed a new algorithm, CameraHRV, for robustly extracting HRV even at the low SNR common in iPPG recordings. CameraHRV combines spatial combining and frequency demodulation to obtain HRV from the instantaneous frequency of the iPPG signal. CameraHRV outperforms other current methods of HRV estimation. Ground truth data were obtained from an FDA-approved pulse oximeter for validation purposes. CameraHRV on iPPG data showed an error of 6 milliseconds for low-motion and varying-skin-tone scenarios, an improvement of 14%. In high-motion scenarios such as reading, watching, and talking, the error was 10 milliseconds.
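Frequency demodulation of a pulse signal can be illustrated with the analytic signal: band-pass the waveform around the cardiac band, take the Hilbert transform, and differentiate the unwrapped phase to get the instantaneous beat frequency, whose reciprocal is the inter-beat-interval series. A generic sketch on a synthetic signal; the frame rate, band edges, and noise are assumptions, and this is not the CameraHRV pipeline itself:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 30.0                                  # camera frame rate, Hz (assumed)
t = np.arange(0.0, 60.0, 1.0 / fs)
rng = np.random.default_rng(3)

# Synthetic pulse wave: beat rate ~1.2 Hz (72 bpm) with slow variability
beat_rate = 1.2 + 0.05 * np.sin(2.0 * np.pi * 0.1 * t)
ppg = np.sin(2.0 * np.pi * np.cumsum(beat_rate) / fs) + 0.2 * rng.normal(size=t.size)

# Band-pass the cardiac band, then demodulate: the derivative of the
# analytic-signal phase is the instantaneous beat frequency
b, a = butter(3, [0.8 / (fs / 2.0), 2.5 / (fs / 2.0)], btype="band")
clean = filtfilt(b, a, ppg)
phase = np.unwrap(np.angle(hilbert(clean)))
inst_freq = np.diff(phase) * fs / (2.0 * np.pi)   # Hz, per sample
ibi_ms = 1000.0 / inst_freq                       # inter-beat intervals

print(f"mean IBI {ibi_ms.mean():.0f} ms, spread {ibi_ms.std():.0f} ms")
```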
Content and Methods used to Train Tobacco Cessation Treatment Providers: An International Survey.
Kruse, Gina R; Rigotti, Nancy A; Raw, Martin; McNeill, Ann; Murray, Rachael; Piné-Abata, Hembadoon; Bitton, Asaf; McEwen, Andy
2017-12-01
There are limited existing data describing the training methods used to educate tobacco cessation treatment providers around the world. To measure the prevalence of tobacco cessation treatment content, skills training and teaching methods reported by tobacco treatment training programs across the world. Web-based survey in May-September 2013 among tobacco cessation training experts across six geographic regions and four World Bank income levels. Response rate was 73% (84 of 115 countries contacted). Of 104 individual programs from 84 countries, most reported teaching brief advice (78%) and one-to-one counseling (74%); telephone counseling was uncommon (33%). Overall, teaching of knowledge topics was more commonly reported than skills training. Programs in lower income countries less often reported teaching about medications, behavioral treatments and biomarkers and less often reported skills-based training about interviewing clients, medication management, biomarker measurement, assessing client outcomes, and assisting clients with co-morbidities. Programs reported a median 15 hours of training. Face-to-face training was common (85%); online programs were rare (19%). Almost half (47%) included no learner assessment. Only 35% offered continuing education. Nearly all programs reported teaching evidence-based treatment modalities in a face-to-face format. Few programs delivered training online or offered continuing education. Skills-based training was less common among low- and middle-income countries (LMICs). There is a large unmet need for tobacco treatment training protocols which emphasize practical skills, and which are more rapidly scalable than face-to-face training in LMICs.
Effect of clothing weight on body weight
USDA-ARS?s Scientific Manuscript database
Background: In clinical settings, it is common to measure weight of clothed patients and estimate a correction for the weight of clothing, but we can find no papers in the medical literature regarding the variability in clothing weight with weather, season, and gender. Methods: Fifty adults (35 wom...
A Psychometric Evaluation of the Digital Logic Concept Inventory
ERIC Educational Resources Information Center
Herman, Geoffrey L.; Zilles, Craig; Loui, Michael C.
2014-01-01
Concept inventories hold tremendous promise for promoting the rigorous evaluation of teaching methods that might remedy common student misconceptions and promote deep learning. The measurements from concept inventories can be trusted only if the concept inventories are evaluated both by expert feedback and statistical scrutiny (psychometric…
Bone histomorphometry using free and commonly available software
Egan, Kevin P.; Brennan, Tracy A.; Pignolo, Robert J.
2012-01-01
Aims: Histomorphometric analysis is a widely used technique to assess changes in tissue structure and function. Commercially available programs that measure histomorphometric parameters can be cost-prohibitive. In this study, we compared an inexpensive method of histomorphometry to a current proprietary software program. Methods and results: ImageJ and Adobe Photoshop® were used to measure static and kinetic bone histomorphometric parameters. Photomicrographs of Goldner's trichrome-stained femurs were used to generate black-and-white image masks, representing bone and non-bone tissue, respectively, in Adobe Photoshop®. The masks were used to quantify histomorphometric parameters (bone volume, tissue volume, osteoid volume, mineralizing surface, and interlabel width) in ImageJ. The values obtained using ImageJ and the proprietary software were compared, and the differences were found to be statistically non-significant. Conclusions: The wide-ranging use of histomorphometric analysis for assessing the basic morphology of tissue components makes it important to have affordable and accurate measurement options available for a diverse range of applications. Here we have developed and validated an approach to histomorphometry using commonly and freely available software that is comparable to a much more costly, commercially available software program. PMID:22882309
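The mask-based parameters reduce to pixel counting once a binary bone mask exists; for example, the 2-D analogue of bone volume over tissue volume is simply the fraction of bone pixels. A tiny sketch on a synthetic mask, not the study's images:

```python
import numpy as np

def bone_volume_fraction(mask: np.ndarray) -> float:
    """BV/TV from a binary mask where True pixels are bone and the whole
    image represents the tissue area."""
    return mask.sum() / mask.size

# Synthetic 100x100 mask with a 30x40 'bone' region
mask = np.zeros((100, 100), dtype=bool)
mask[10:40, 20:60] = True
print(f"BV/TV = {bone_volume_fraction(mask):.2%}")   # 12.00%
```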
Pavlovic, Chris; Futamatsu, Hideki; Angiolillo, Dominick J; Guzman, Luis A; Wilke, Norbert; Siragusa, Daniel; Wludyka, Peter; Percy, Robert; Northrup, Martin; Bass, Theodore A; Costa, Marco A
2007-04-01
The purpose of this study was to evaluate the accuracy of semiautomated analysis of contrast-enhanced magnetic resonance angiography (MRA) in patients who have undergone standard angiographic evaluation for peripheral vascular disease (PVD). Magnetic resonance angiography is an important tool for evaluating PVD. Although this technique is both safe and noninvasive, the accuracy and reproducibility of quantitative measurements of disease severity using MRA in the clinical setting have not been fully investigated. Forty-three lesions in 13 patients who underwent both MRA and digital subtraction angiography (DSA) of the iliac and common femoral arteries within 6 months were analyzed using quantitative magnetic resonance angiography (QMRA) and quantitative vascular analysis (QVA). Analysis was repeated by a second operator, and by the same operator approximately 1 month later. QMRA underestimated percent diameter stenosis (%DS) compared with measurements made with QVA by 2.47%. Limits of agreement between the two methods were +/- 9.14%. Interobserver variability in measurements of %DS was +/- 12.58% for QMRA and +/- 10.04% for QVA. Intraobserver variability of %DS was +/- 4.6% for QMRA and +/- 8.46% for QVA. QMRA displays a high level of agreement with QVA when used to determine stenosis severity in the iliac and common femoral arteries. Similar levels of interobserver and intraobserver variability are present with each method. Overall, QMRA represents a useful method to quantify the severity of PVD.
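Bias and limits of agreement of the kind reported here are conventionally computed with a Bland-Altman analysis: the mean of the paired differences, plus or minus 1.96 times their standard deviation. A sketch on synthetic paired measurements tuned to echo the quoted bias; the data are invented:

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman bias and 95% limits of agreement between two methods."""
    diff = np.asarray(a) - np.asarray(b)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

rng = np.random.default_rng(4)
qva = rng.uniform(20.0, 80.0, 43)                 # %DS by the reference method
qmra = qva - 2.47 + rng.normal(0.0, 4.66, 43)     # ~2.47% mean underestimate

bias, lower, upper = limits_of_agreement(qmra, qva)
print(f"bias {bias:.2f}%, limits of agreement [{lower:.2f}%, {upper:.2f}%]")
```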
NASA Astrophysics Data System (ADS)
Gerde, Janice R.; Christens-Barry, William A.
2011-08-01
In a project to meet requirements for CBP Laboratory analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS), a hybrid metrology system comprising both optical and touch-probe devices has been assembled. A unique requirement must be met: to identify the interface between the "external surface area upper" (ESAU) and the sole, which is typically obscured in samples of concern, without physically destroying the sample. The sample outer surface is determined as discrete point cloud coordinates obtained from laser scanner optical measurements. Measurements from the optically inaccessible insole region are obtained using a coordinate measuring machine (CMM). That surface similarly is defined by point cloud data. Mathematically, the individual CMM and scanner data sets are transformed into a single, common reference frame. Custom software then fits a polynomial surface to the insole data and extends it to intersect the mesh fitted to the outer-surface point cloud. This line of intersection defines the required ESAU boundary, thus permitting further fractional area calculations to determine the percentage of materials present. With a draft method in place and first-level method validation underway, we examine the transformation of the two dissimilar data sets into the single, common reference frame. We also consider the six previously identified potential error factors in relation to the method. This paper reports our ongoing work and discusses our findings to date.
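Bringing the two device frames into one common frame is, at its core, a rigid transformation p' = R·p + t applied to each point cloud, where R and t are usually estimated by registering fiducials visible to both instruments. A minimal sketch applying an assumed, known transform; this is not the project's registration procedure:

```python
import numpy as np

def to_common_frame(points, R, t):
    """Map an Nx3 point cloud from a device frame (scanner or CMM) into
    the common reference frame via a rigid transform p' = R p + t."""
    return np.asarray(points) @ np.asarray(R).T + np.asarray(t)

# Assumed registration result: 90 deg rotation about z, 10 mm shift in x
theta = np.pi / 2.0
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([10.0, 0.0, 0.0])

cmm_points = np.array([[1.0, 0.0, 0.0],
                       [0.0, 2.0, 0.5]])   # mm, hypothetical insole points
print(to_common_frame(cmm_points, R, t))
```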
A Causal Model for Joint Evaluation of Placebo and Treatment-Specific Effects in Clinical Trials
Zhang, Zhiwei; Kotz, Richard M.; Wang, Chenguang; Ruan, Shiling; Ho, Martin
2014-01-01
Summary: Evaluation of medical treatments is frequently complicated by the presence of substantial placebo effects, especially on relatively subjective endpoints, and the standard solution to this problem is a randomized, double-blinded, placebo-controlled clinical trial. However, effective blinding does not guarantee that all patients have the same belief or mentality about which treatment they have received (or treatmentality, for brevity), making it difficult to interpret the usual intent-to-treat effect as a causal effect. We discuss the causal relationships among treatment, treatmentality and the clinical outcome of interest, and propose a causal model for joint evaluation of placebo and treatment-specific effects. The model highlights the importance of measuring and incorporating patient treatmentality and suggests that each treatment group should be considered a separate observational study with a patient's treatmentality playing the role of an uncontrolled exposure. This perspective allows us to adapt existing methods for dealing with confounding to joint estimation of placebo and treatment-specific effects using measured treatmentality data, commonly known as blinding assessment data. We first apply this approach to the most common type of blinding assessment data, which is categorical, and illustrate the methods using an example from asthma. We then propose that blinding assessment data can be collected as a continuous variable, specifically when a patient's treatmentality is measured as a subjective probability, and describe analytic methods for that case. PMID:23432119
Measures of Cultural Competence in Nurses: An Integrative Review
2013-01-01
Background. There is limited literature available identifying and describing the instruments that measure cultural competence in nursing students and nursing professionals. Design. An integrative review was undertaken to identify the characteristics common to these instruments, examine their psychometric properties, and identify the concepts these instruments are designed to measure. Method. There were eleven instruments identified that measure cultural competence in nursing. Of these eleven instruments, four had been thoroughly tested in either initial development or in subsequent testing, with developers providing extensive details of the testing. Results. The current literature identifies that the instruments to assess cultural competence in nurses and nursing students are self-administered and based on individuals' perceptions. The instruments are commonly utilized to test the effectiveness of educational programs designed to increase cultural competence. Conclusions. The reviewed instruments measure nurses' self-perceptions or self-reported level of cultural competence but offer no objective measure of culturally competent care from a patient's perspective which can be problematic. Comparison of instruments reveals that they are based on a variety of conceptual frameworks and that multiple factors should be considered when deciding which instrument to use. PMID:23818818
Stimulating innovations in the measurement of parenting constructs.
Mâsse, Louise C; Watts, Allison W
2013-08-01
Parents can play a crucial role in the development of children's behaviors associated with dietary habits, physical activity, and sedentary lifestyles. Many parenting practices and/or styles measures have been developed; however, there is little agreement as to how the influence of parenting should be measured. More importantly, our ability to relate parenting practices and/or styles to children's behaviors depends on its accurate assessment. While there is a need to standardize our assessment to further advance knowledge in this area, this article will discuss areas that may stimulate advances in the measurement of parenting constructs. Because self-report measures are important for the assessment of parenting, this article discusses whether solutions to improve self-report measures may lie in: (1) Improving the questions asked; (2) improving the methods used to correct for social desirability or measurement errors; (3) changing our measurement paradigm to assess implicit parenting behaviors; (4) changing how self-report is collected by taking advantage of ecological momentary assessment methods; (5) using better psychometric methods to validate parenting measures or alternatively using advances in psychometric methods, such as item banking and computerized adaptive testing, to solve common administration issues (i.e., response burden and comparability of results across studies); and (6) employing novel technologies to collect data such as portable technologies, gaming, and virtual reality simulation. This article will briefly discuss the potential of technologies to measure parenting constructs.
Saffar, Saber; Abdullah, Amir
2014-03-01
The vibration amplitude of a transducer's elements is an influential parameter in the performance of high-power airborne ultrasonic transducers, and must be controlled to achieve optimum vibration without material yielding. The vibration amplitude of the elements of the fabricated high-power airborne transducer was determined by measuring the temperature of the transducer's elements. The results showed that simple thermocouples can be used both to measure the vibration amplitude of a transducer's elements and as an indicator of power transmission to the air. To verify our approach, the power transmission to the air was also investigated experimentally by another common method. The experimental results displayed good agreement with the presented approach. Copyright © 2013 Elsevier B.V. All rights reserved.
Automated system for periodontal disease diagnosis
NASA Astrophysics Data System (ADS)
Albalat, Salvador E.; Alcaniz-Raya, Mariano L.; Juan, M. Carmen; Grau Colomer, Vincente; Monserrat, Carlos
1997-04-01
The evolution of periodontal disease is among the most important data for clinicians in achieving correct planning and treatment. Clinical measurement of periodontal sulcus depth is the most important datum for establishing the exact state of periodontal disease. These measurements must be made periodically to study the evolution of bone resorption around teeth. The time course of resorption indicates the aggressiveness of periodontitis. Manual probes with direct reading are commonly used. Mechanical probes give an automatic signal, but this method uses complicated and heavy probes whose use is limited to university researchers. Probe position must be reproducible between sessions to obtain a correct diagnosis. Digital image analysis of periodontal probing provides a practical, accurate and easy tool. Gingival and plaque indices can also be measured digitally with this method.
Bethell, Christina D; Carle, Adam; Hudziak, James; Gombojav, Narangerel; Powers, Kathleen; Wade, Roy; Braveman, Paula
Advances in human development sciences point to tremendous possibilities to promote healthy child development and well-being across life by proactively supporting safe, stable and nurturing family relationships (SSNRs), teaching resilience, and intervening early to promote healing the trauma and stress associated with disruptions in SSNRs. Assessing potential disruptions in SSNRs, such as adverse childhood experiences (ACEs), can contribute to assessing risk for trauma and chronic and toxic stress. Asking about ACEs can help with efforts to prevent and attenuate negative impacts on child development and both child and family well-being. Many methods to assess ACEs exist but have not been compared. The National Survey of Children's Health (NSCH) now measures ACEs for children, but requires further assessment and validation. We identified and compared methods to assess ACEs among children and families, evaluated the acceptability and validity of the new NSCH-ACEs measure, and identified implications for assessing ACEs in research and practice. Of 14 ACEs assessment methods identified, 5 have been used in clinical settings (vs public health assessment or research) and all but 1 require self or parent report (3 allow child report). Across methods, 6 to 20 constructs are assessed, 4 of which are common to all: parental incarceration, domestic violence, household mental illness/suicide, household alcohol or substance abuse. Common additional content includes assessing exposure to neighborhood violence, bullying, discrimination, or parental death. All methods use a numeric, cumulative risk scoring methodology. The NSCH-ACEs measure was acceptable to respondents as evidenced by few missing values and no reduction in response rate attributable to asking about children's ACEs. The 9 ACEs assessed in the NSCH co-occur, with most children with 1 ACE having additional ACEs. This measure showed efficiency and confirmatory factor analysis as well as latent class analysis supported a cumulative risk scoring method. Formative as well as reflective measurement models further support cumulative risk scoring and provide evidence of predictive validity of the NSCH-ACEs. Common effects of ACEs across household income groups confirm information distinct from economic status is provided and suggest use of population-wide versus high-risk approaches to assessing ACEs. Although important variations exist, available ACEs measurement methods are similar and show consistent associations with poorer health outcomes in absence of protective factors and resilience. All methods reviewed appear to coincide with broader goals to facilitate health education, promote health and, where needed, to mitigate the trauma, chronic stress, and behavioral and emotional sequelae that can arise with exposure to ACEs. Assessing ACEs appears acceptable to individuals and families when conducted in population-based and clinical research contexts. Although research to date and neurobiological findings compel early identification and health education about ACEs in clinical settings, further research to guide use in pediatric practice is required, especially as it relates to distinguishing ACEs assessment from identifying current family psychosocial risks and child abuse. The reflective as well as formative psychometric analyses conducted in this study confirm use of cumulative risk scoring for the NSCH-ACEs measure. 
Advances in the human development sciences point to tremendous possibilities to promote healthy child development and well-being across life by proactively supporting safe, stable and nurturing family relationships (SSNRs), teaching resilience, and intervening early to promote healing of the trauma and stress associated with disruptions in SSNRs. Assessing potential disruptions in SSNRs, such as adverse childhood experiences (ACEs), can contribute to assessing risk for trauma and chronic and toxic stress. Asking about ACEs can help with efforts to prevent and attenuate negative impacts on child development and both child and family well-being. Many methods to assess ACEs exist but have not been compared. The National Survey of Children's Health (NSCH) now measures ACEs for children, but requires further assessment and validation. We identified and compared methods to assess ACEs among children and families, evaluated the acceptability and validity of the new NSCH-ACEs measure, and identified implications for assessing ACEs in research and practice. Of 14 ACEs assessment methods identified, 5 have been used in clinical settings (vs public health assessment or research) and all but 1 require self or parent report (3 allow child report). Across methods, 6 to 20 constructs are assessed, 4 of which are common to all: parental incarceration, domestic violence, household mental illness/suicide, and household alcohol or substance abuse. Common additional content includes assessing exposure to neighborhood violence, bullying, discrimination, or parental death. All methods use a numeric, cumulative risk scoring methodology. The NSCH-ACEs measure was acceptable to respondents, as evidenced by few missing values and no reduction in response rate attributable to asking about children's ACEs. The 9 ACEs assessed in the NSCH co-occur, with most children with 1 ACE having additional ACEs. The measure was efficient, and confirmatory factor analysis as well as latent class analysis supported a cumulative risk scoring method. Formative as well as reflective measurement models further support cumulative risk scoring and provide evidence of the predictive validity of the NSCH-ACEs. Common effects of ACEs across household income groups confirm that the measure provides information distinct from economic status and suggest the use of population-wide versus high-risk approaches to assessing ACEs. Although important variations exist, available ACEs measurement methods are similar and show consistent associations with poorer health outcomes in the absence of protective factors and resilience. All methods reviewed appear to coincide with broader goals to facilitate health education, promote health and, where needed, mitigate the trauma, chronic stress, and behavioral and emotional sequelae that can arise with exposure to ACEs. Assessing ACEs appears acceptable to individuals and families when conducted in population-based and clinical research contexts. Although research to date and neurobiological findings compel early identification and health education about ACEs in clinical settings, further research to guide use in pediatric practice is required, especially as it relates to distinguishing ACEs assessment from identifying current family psychosocial risks and child abuse. The reflective as well as formative psychometric analyses conducted in this study confirm the use of cumulative risk scoring for the NSCH-ACEs measure. Even if children have not been exposed to ACEs, assessing ACEs has value as an educational tool for engaging and educating families and children about the importance of SSNRs and how to recognize and manage stress and learn resilience. Copyright © 2017 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
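A minimal sketch of the numeric, cumulative risk scoring shared by these methods; the item names are illustrative, drawn loosely from the constructs listed above, and are not any instrument's actual item wording:

    # Cumulative-risk ACE scoring sketch (hypothetical item names):
    # each construct is coded 1 if ever endorsed, else 0, and the ACE
    # score is the simple integer count, as described in the abstract.
    ACE_ITEMS = [
        "parental_incarceration", "domestic_violence",
        "household_mental_illness", "household_substance_abuse",
        "neighborhood_violence", "bullying", "discrimination", "parental_death",
    ]

    def ace_score(responses: dict) -> int:
        """Cumulative risk score: count of endorsed ACE constructs."""
        return sum(int(bool(responses.get(item, 0))) for item in ACE_ITEMS)

    print(ace_score({"bullying": 1, "domestic_violence": 1}))  # -> 2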
Walicka-Cupryś, Katarzyna; Drzał-Grabiec, Justyna; Mrozkowiak, Mirosław
2013-10-31
BACKGROUND. The photogrammetric method and inclinometer-based measurements are commonly employed to assess the anteroposterior curvatures of the spine. These methods are used both in clinical trials and for screening purposes. The aim of the study was to compare the parameters used to characterise the anteroposterior spinal curvatures as measured by photogrammetry and inclinometry. MATERIAL AND METHODS. The study enrolled 341 subjects: 169 girls and 172 boys, aged 4 to 9 years, from kindergartens and primary schools in Rzeszów. The anteroposterior spinal curvatures were examined by photogrammetry and with a mechanical inclinometer. RESULTS. There were significant differences in the α angle between the inclinometric and photogrammetric assessments in Student's t test (p=0.017) and the Fisher-Snedecor test (p=0.0001), with similar differences in the β angle (Student's t: p=0.0001; Fisher-Snedecor: p=0.007). For the γ angle, significant differences were revealed with Student's t test (p=0.0001), but not with the Fisher-Snedecor test (p=0.22). CONCLUSIONS. 1. Measurements of the inclination of particular segments of the spine obtained with the photogrammetric method and the inclinometric method in the same study group revealed statistically significant differences. 2. The results of measurements obtained by photogrammetry and inclinometry are not comparable. 3. Further research on agreement between measurements of the anteroposterior spinal curvatures obtained using the available measurement equipment is recommended.
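For readers who want to reproduce this style of two-method comparison, a minimal sketch with scipy; the paired angle readings are invented, not the study's data:

    # Paired Student's t test on per-subject angles from two methods, plus a
    # Fisher-Snedecor F test comparing their variances (two-sided p-value).
    import numpy as np
    from scipy import stats

    photo = np.array([32.1, 29.5, 35.2, 31.0, 28.7])    # photogrammetric angle (deg)
    inclino = np.array([30.4, 28.9, 33.8, 29.6, 27.5])  # inclinometric angle (deg)

    t_stat, t_p = stats.ttest_rel(photo, inclino)       # paired Student's t

    f_stat = photo.var(ddof=1) / inclino.var(ddof=1)    # Fisher-Snedecor F
    df = len(photo) - 1
    f_p = 2 * min(stats.f.cdf(f_stat, df, df), stats.f.sf(f_stat, df, df))

    print(f"t={t_stat:.3f} (p={t_p:.4f}), F={f_stat:.3f} (p={f_p:.4f})")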
Shape measurement biases from underfitting and ellipticity gradients
Bernstein, Gary M.
2010-08-21
Precision weak gravitational lensing experiments require measurements of galaxy shapes accurate to <1 part in 1000. We investigate measurement biases, noted by Voigt and Bridle (2009) and Melchior et al. (2009), that are common to shape measurement methodologies that rely upon fitting elliptical-isophote galaxy models to observed data. The first bias arises when the true galaxy shapes do not match the models being fit. We show that this "underfitting bias" is due, at root, to these methods' attempts to use information at high spatial frequencies that has been destroyed by the convolution with the point-spread function (PSF) and/or by sampling. We propose a new shape-measurement technique that is explicitly confined to observable regions of k-space. A second bias arises for galaxies whose ellipticity varies with radius. For most shape-measurement methods, such galaxies are subject to "ellipticity gradient bias". We show how to reduce such biases by factors of 20–100 within the new shape-measurement method. The resulting shear estimator has multiplicative errors <1 part in 10^3 for high-S/N images, even for highly asymmetric galaxies. Without any training or recalibration, the new method obtains Q = 3000 in the GREAT08 Challenge of blind shear reconstruction on low-noise galaxies, several times better than any previous method.
Sanford, Dominic E; Woolsey, Cheryl A; Hall, Bruce L; Linehan, David C; Hawkins, William G; Fields, Ryan C; Strasberg, Steven M
2014-09-01
NSQIP and the Accordion Severity Grading System have recently been used to develop quantitative methods for measuring the burden of postoperative complications. However, other audit methods, such as chart reviews and prospective institutional databases, are commonly used to gather postoperative complications. The purpose of this study was to evaluate discordance between different audit methods in pancreatoduodenectomy, a common major surgical procedure. The chief aim was to determine how these different methods could affect quantitative evaluations of postoperative complications. Three common audit methods were compared with NSQIP in 84 patients who underwent pancreatoduodenectomy: a prospective database, a chart review based on discharge summaries only, and a detailed retrospective chart review. The methods were evaluated for discordance with NSQIP and among themselves. Severity grading was performed using the Modified Accordion System. Fifty-three complications were listed by NSQIP, and 31 complications were identified that were not listed by NSQIP. There was poor agreement for NSQIP-type complications between NSQIP and the other audit methods for mild and moderate complications (kappa 0.381 to 0.744), but excellent agreement for severe complications (kappa 0.953 to 1.00). Discordance was usually due to variations in the definition of complications in non-NSQIP methods. There was good agreement among non-NSQIP methods for non-NSQIP complications for moderate and severe complications, but not for mild complications. There are important differences in perceived surgical outcomes based on the method of complication retrieval. The non-NSQIP methods used in this study could not be substituted for NSQIP in a quantitative analysis unless that analysis was limited to severe complications. Copyright © 2014 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
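A minimal sketch of the kappa agreement statistic reported above, using invented binary complication records rather than the study's data:

    # Cohen's kappa between two audit methods; 1 = complication recorded
    # for a patient, 0 = not recorded. Agreement here is 8/10, kappa ~0.6.
    from sklearn.metrics import cohen_kappa_score

    nsqip        = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
    chart_review = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

    kappa = cohen_kappa_score(nsqip, chart_review)
    print(f"kappa = {kappa:.3f}")  # moderate agreement for this toy data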
Setting the equation: establishing value in spine care.
Resnick, Daniel K; Tosteson, Anna N A; Groman, Rachel F; Ghogawala, Zoher
2014-10-15
Topic review. Describe value measurement in spine care and discuss the motivation for, methods for, and limitations of such measurement. Spinal disorders are common and are an important cause of pain and disability. Numerous complementary and competing treatment strategies are used to treat spinal disorders, and the costs of these treatments are substantial and continue to rise despite clear evidence of improved health status as a result of these expenditures. The authors present the economic and legislative imperatives forcing the assessment of value in spine care. The definition of value in health care and methods to measure value specifically in spine care are presented. Limitations to the utility of value judgments and caveats to their use are presented. Examples of value calculations in spine care are presented and critiqued. Methods to improve and broaden the measurement of value across spine care are suggested, and the role of prospective registries in measuring value is discussed. Value can be measured in spine care through the use of appropriate economic measures and patient-reported outcomes measures. Value must be interpreted in light of the perspective of the assessor, the duration of the assessment period, the degree of appropriate risk stratification, and the relative value of treatment alternatives.
Measuring the Characteristic Topography of Brain Stiffness with Magnetic Resonance Elastography
Murphy, Matthew C.; Huston, John; Jack, Clifford R.; Glaser, Kevin J.; Senjem, Matthew L.; Chen, Jun; Manduca, Armando; Felmlee, Joel P.; Ehman, Richard L.
2013-01-01
Purpose To develop a reliable magnetic resonance elastography (MRE)-based method for measuring regional brain stiffness. Methods First, simulation studies were used to demonstrate how stiffness measurements can be biased by changes in brain morphometry, such as those due to atrophy. Adaptive postprocessing methods were created that significantly reduce the spatial extent of edge artifacts and eliminate atrophy-related bias. Second, a pipeline for regional brain stiffness measurement was developed and evaluated for test-retest reliability in 10 healthy control subjects. Results The technique demonstrates high test-retest repeatability, with a typical coefficient of variation of less than 1% for global brain stiffness and less than 2% for the lobes of the brain and the cerebellum. Furthermore, this study reveals that the brain possesses a characteristic topography of mechanical properties, and also that lobar stiffness measurements tend to correlate with one another within an individual. Conclusion The methods presented in this work are resistant to noise- and edge-related biases that are common in the field of brain MRE, demonstrate high test-retest reliability, and provide independent regional stiffness measurements. This pipeline will allow future investigations to measure changes to the brain’s mechanical properties and how they relate to the characteristic topographies that are typical of many neurologic diseases. PMID:24312570
Strong, David R.; Schonbrun, Yael Chatav; Schaffran, Christine; Griesler, Pamela C.; Kandel, Denise
2012-01-01
Background An ongoing debate regarding the nature of Nicotine Dependence (ND) is whether the same instrument can be applied to measure ND among adults and adolescents. Using a hierarchical item response model (IRM), we examined evidence for a common continuum underlying ND symptoms among adults and adolescents. Method The analyses are based on two waves of interviews with subsamples of parents and adolescents from a multi-ethnic longitudinal cohort of 1,039 6th–10th graders from the Chicago Public Schools (CPS). Adults and adolescents who reported smoking cigarettes in the 30 days prior to waves 3 and 5 completed three common instruments measuring ND symptoms and one item measuring loss of autonomy. Results A stable continuum of ND, first identified among adolescents, was replicated among adults. However, some symptoms, such as tolerance and withdrawal, differed markedly across adults and adolescents. The majority of mFTQ items were observed within the highest levels of ND, the NDSS items within the lowest levels, and the DSM-IV items were arrayed in the middle and upper third of the continuum of dependence severity. Loss of autonomy was positioned at the lower end of the continuum. We propose a ten-symptom measure of ND for adolescents and adults. Conclusions Despite marked differences in the relative severity of specific ND symptoms in each group, common instrumentation of ND can apply to adults and adolescents. The results increase confidence in the ability to describe phenotypic heterogeneity in ND across important developmental periods. PMID:21855236
An Accurate and Generic Testing Approach to Vehicle Stability Parameters Based on GPS and INS.
Miao, Zhibin; Zhang, Hongtian; Zhang, Jinzhu
2015-12-04
With the development of the vehicle industry, controlling stability has become more and more important, and techniques for evaluating vehicle stability are in high demand. As a common approach, GPS and INS sensors are applied to measure vehicle stability parameters by fusing the data from the two sensor systems. A Kalman filter is usually used to fuse data from multiple sensors, although prior model parameters must be known. In this paper, a robust, intelligent and precise method for the measurement of vehicle stability is proposed. First, a fuzzy interpolation method is proposed, along with a four-wheel vehicle dynamic model. Second, a two-stage Kalman filter, which fuses the data from GPS and INS, is established. Next, this approach is applied to a case-study vehicle to measure yaw rate and sideslip angle. Finally, a simulation and a real experiment are performed to verify the advantages of this approach. The experimental results showed the merits of this method for measuring vehicle stability, and the approach can meet the design requirements of a vehicle stability controller.
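The following toy sketch illustrates the general GPS/INS fusion idea with a one-dimensional Kalman filter; it is not the paper's two-stage filter, and the noise settings and signals are assumed values:

    # One-dimensional Kalman fusion: the INS-like rate drives the prediction
    # step and the GPS-like position measurement supplies the correction.
    import numpy as np

    def kf_fuse(gps_pos, ins_rate, dt=0.1, Q=1e-3, R=0.5):
        x, P = gps_pos[0], 1.0                   # state (position) and variance
        est = []
        for z, u in zip(gps_pos, ins_rate):
            x, P = x + u * dt, P + Q             # predict with INS rate
            K = P / (P + R)                      # Kalman gain
            x, P = x + K * (z - x), (1 - K) * P  # correct with GPS position
            est.append(x)
        return np.array(est)

    t = np.arange(0, 5, 0.1)
    true_pos = 0.2 * t**2
    est = kf_fuse(true_pos + np.random.normal(0, 0.7, t.size), 0.4 * t)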
An Adaptive Kalman Filter Using a Simple Residual Tuning Method
NASA Technical Reports Server (NTRS)
Harman, Richard R.
1999-01-01
One difficulty in using Kalman filters in real-world situations is the selection of the correct process noise, measurement noise, and initial state estimate and covariance. These parameters are commonly referred to as tuning parameters. Multiple methods have been developed to estimate them. Most of those methods, such as maximum likelihood, subspace, and observer/Kalman filter identification, require extensive offline processing and are not suitable for real-time processing. One technique that is suitable for real-time processing is the residual tuning method. Any mismodeling of the filter tuning parameters will result in a non-white sequence for the filter measurement residuals. The residual tuning technique uses this information to estimate corrections to those tuning parameters. The actual implementation results in a set of sequential equations that run in parallel with the Kalman filter. A. H. Jazwinski developed a specialized version of this technique for estimation of process noise. Equations for the estimation of the measurement noise have also been developed. These algorithms are used to estimate the process noise and measurement noise for the Wide Field Infrared Explorer star tracker and gyro.
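A hedged sketch of the residual-tuning idea for a scalar filter; it follows the generic innovation-consistency argument rather than Jazwinski's exact recursions, and Q, R0 and the data are assumed:

    # The sample covariance of recent innovations is compared with its
    # predicted value (P + R); the measurement-noise estimate R is nudged
    # toward consistency, running in parallel with the filter itself.
    import numpy as np

    def adaptive_kf(z, Q=1e-4, R0=1.0, window=20):
        x, P, R = z[0], 1.0, R0
        innovations, xs = [], []
        for zk in z[1:]:
            P += Q                                  # predict (constant state)
            v = zk - x                              # innovation (residual)
            innovations.append(v)
            if len(innovations) >= window:
                C = np.var(innovations[-window:])   # sample innovation covariance
                R = max(C - P, 1e-6)                # consistency: C ~ P + R
            K = P / (P + R)
            x, P = x + K * v, (1 - K) * P
            xs.append(x)
        return np.array(xs), R

    z = 3.0 + np.random.normal(0, 0.5, 300)  # noisy measurements of a constant
    est, R_hat = adaptive_kf(z)              # R_hat should approach ~0.25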
Nutritional assessment in intravenous drug users with HIV/AIDS.
Smit, E; Tang, A
2000-10-01
Studying metabolic, endocrine, and gastrointestinal (MEG) disorders in drug abuse and HIV infection is important. Equally important, however, are the tools we use to assess these disorders. Assessment of nutritional status may include any combination of biochemical and body composition measurements, dietary intake assessment, and metabolic studies. Each method has its strengths and weaknesses and there is no perfect tool. When assessing nutritional status in injection drug users (IDU) and in HIV-infected people, the decision on which method or methods to use becomes even more complex. A review of studies reported during the XII World Conference on AIDS reveals that of 64 abstracts on the topic of nutrition in HIV-infected adults, only 11 assessed diet, 41 assessed anthropometry, and 24 assessed some form of biochemical measure. The most commonly reported methods for dietary intake included 24-hour recalls, food records, and food frequencies. The most common methods used for measuring body composition included height, weight, bioimpedance, and dual-energy x-ray absorptiometry (DEXA). Biochemical measurements included various blood nutrients, lipids, and albumin. Methods varied greatly between studies, and caution should be taken when trying to compare results across studies, especially among those using different methods. Currently, few studies deal with the development of methods that can be used for research in HIV-infected and IDU populations. We need to work toward better tools in dietary intake assessment, body composition, and biochemical measurements, especially methods that will allow us to track changes in nutritional status over time.
Blood Glucose Measurement in the Intensive Care Unit: What Is the Best Method?
Le, Huong T.; Harris, Neil S.; Estilong, Abby J.; Olson, Arvid; Rice, Mark J.
2013-01-01
Abnormal glucose measurements are common among intensive care unit (ICU) patients for numerous reasons and hypoglycemia is especially dangerous because these patients are often sedated and unable to relate the associated symptoms. Additionally, wide swings in blood glucose have been closely tied to increased mortality. Therefore, accurate and timely glucose measurement in this population is critical. Clinicians have several choices available to assess blood glucose values in the ICU, including central laboratory devices, blood gas analyzers, and point-of-care meters. In this review, the method of glucose measurement will be reviewed for each device, and the important characteristics, including accuracy, cost, speed of result, and sample volume, will be reviewed, specifically as these are used in the ICU environment. Following evaluation of the individual measurement devices and after considering the many features of each, recommendations are made for optimal ICU glucose determination. PMID:23567008
Intra-Ocular Pressure Measurement in a Patient with a Thin, Thick or Abnormal Cornea.
Clement, Colin I; Parker, Douglas G A; Goldberg, Ivan
2016-01-01
Accurate measurement of intra-ocular pressure is a fundamental component of the ocular examination. The most common method of measuring IOP is Goldmann applanation tonometry (GAT), the accuracy of which is influenced by the thickness and biomechanical properties of the cornea. Algorithms devised to correct for corneal thickness to estimate IOP oversimplify the effects of corneal biomechanics. The viscous and elastic properties of the cornea influence IOP measurements in unpredictable ways, a finding borne out in studies of patients with inherently abnormal and surgically altered corneal biomechanics. Dynamic contour tonometry, rebound tonometry and the ocular response analyzer provide useful alternatives to GAT in patients with abnormal corneas, such as those who have undergone laser vision correction or keratoplasty. This article reviews the various methods of intra-ocular pressure measurement available to the clinician and the ways in which their utility is influenced by variations in corneal thickness and biomechanics.
Blekhman, Ran; Tang, Karen; Archie, Elizabeth A; Barreiro, Luis B; Johnson, Zachary P; Wilson, Mark E; Kohn, Jordan; Yuan, Michael L; Gesquiere, Laurence; Grieneisen, Laura E; Tung, Jenny
2016-08-16
Field studies of wild vertebrates are frequently associated with extensive collections of banked fecal samples: unique resources for understanding ecological, behavioral, and phylogenetic effects on the gut microbiome. However, we do not understand whether sample storage methods confound the ability to investigate interindividual variation in gut microbiome profiles. Here, we extend previous work on storage methods for gut microbiome samples by comparing immediate freezing, the gold standard of preservation, to three methods commonly used in vertebrate field studies: lyophilization, storage in ethanol, and storage in RNAlater. We found that the signature of individual identity consistently outweighed storage effects: alpha diversity and beta diversity measures were significantly correlated across methods, and while samples often clustered by donor, they never clustered by storage method. Provided that all analyzed samples are stored the same way, banked fecal samples therefore appear highly suitable for investigating variation in gut microbiota. Our results open the door to a much-expanded perspective on variation in the gut microbiome across species and ecological contexts.
Methods Used to Evaluate Pain Behaviors in Rodents
Deuis, Jennifer R.; Dvorakova, Lucie S.; Vetter, Irina
2017-01-01
Rodents are commonly used to study the pathophysiological mechanisms of pain as studies in humans may be difficult to perform and ethically limited. As pain cannot be directly measured in rodents, many methods that quantify “pain-like” behaviors or nociception have been developed. These behavioral methods can be divided into stimulus-evoked or non-stimulus evoked (spontaneous) nociception, based on whether or not application of an external stimulus is used to elicit a withdrawal response. Stimulus-evoked methods, which include manual and electronic von Frey, Randall-Selitto and the Hargreaves test, were the first to be developed and continue to be in widespread use. However, concerns over the clinical translatability of stimulus-evoked nociception have in recent years led to the development and increasing implementation of non-stimulus evoked methods, such as grimace scales, burrowing, weight bearing and gait analysis. This review article provides an overview, as well as discussion of the advantages and disadvantages, of the most commonly used behavioral methods of stimulus-evoked and non-stimulus-evoked nociception used in rodents. PMID:28932184
Methanogenic activity tests by Infrared Tunable Diode Laser Absorption Spectroscopy.
Martinez-Cruz, Karla; Sepulveda-Jauregui, Armando; Escobar-Orozco, Nayeli; Thalasso, Frederic
2012-10-01
Methanogenic activity (MA) tests are commonly carried out to estimate the capability of anaerobic biomass to treat effluents, to evaluate anaerobic activity in bioreactors or natural ecosystems, or to quantify inhibitory effects on methanogenic activity. These activity tests are usually based on the measurement of the volume of biogas produced, by volumetric, pressure-increase or gas chromatography (GC) methods. In this study, we present an alternative method for non-invasive measurement of the methane produced during activity tests in closed vials, based on Infrared Tunable Diode Laser Absorption Spectroscopy (MA-TDLAS). This new method was tested during model acetoclastic and hydrogenotrophic methanogenic activity tests and was compared to a more traditional method based on gas chromatography. From the results obtained, the CH4 detection limit of the method was estimated at 60 ppm and the minimum measurable methane production rate at 1.09 x 10^-3 mg l^-1 h^-1, which is below the CH4 production rates usually reported in both anaerobic reactors and natural ecosystems. In addition to sensitivity, the method has several potential advantages over more traditional methods, including a short measurement time that allows a large number of MA test vials to be measured, non-invasive measurement that avoids leakage and external interference, and a cost similar to that of GC-based methods. It is concluded that MA-TDLAS is a promising method that could be of interest not only in the field of anaerobic digestion but also in the field of environmental ecology, where CH4 production rates are usually very low. Copyright © 2012 Elsevier B.V. All rights reserved.
Grindle, Susan; Garganta, Cheryl; Sheehan, Susan; Gile, Joe; Lapierre, Andree; Whitmore, Harry; Paigen, Beverly; DiPetrillo, Keith
2006-12-01
Chronic kidney disease is a substantial medical and economic burden. Animal models, including mice, are a crucial component of kidney disease research; however, recent studies have shown that autoanalyzer methods cannot accurately quantify plasma creatinine levels, an established marker of kidney disease, in mice. Therefore, we validated autoanalyzer methods for measuring blood urea nitrogen (BUN) and urinary albumin concentrations, 2 common markers of kidney disease, in samples from mice. We used high-performance liquid chromatography to validate BUN concentrations measured using an autoanalyzer, and we utilized mouse albumin standards to determine the accuracy of the autoanalyzer over a wide range of albumin concentrations. We observed a significant, linear correlation between BUN concentrations measured by autoanalyzer and high-performance liquid chromatography. We also found a linear relationship between known and measured albumin concentrations, although the autoanalyzer method underestimated the known amount of albumin by 3.5- to 4-fold. We confirmed that plasma and urine constituents do not interfere with the autoanalyzer methods for measuring BUN and urinary albumin concentrations. In addition, we verified BUN and albuminuria as useful markers to detect kidney disease in aged mice and mice with 5/6-nephrectomy. We conclude that autoanalyzer methods are suitable for high-throughput analysis of BUN and albumin concentrations in mice. The autoanalyzer accurately quantifies BUN concentrations in mouse plasma samples and is useful for measuring urinary albumin concentrations when used with mouse albumin standards.
Linearization of the bradford protein assay.
Ernst, Orna; Zor, Tsaffrir
2010-04-12
Determination of microgram quantities of protein in the Bradford Coomassie brilliant blue assay is accomplished by measurement of absorbance at 590 nm. This most common assay enables rapid and simple protein quantification in cell lysates, cellular fractions, or recombinant protein samples, for the purpose of normalization of biochemical measurements. However, an intrinsic nonlinearity compromises the sensitivity and accuracy of this method. It is shown that under standard assay conditions, the ratio of the absorbance measurements at 590 nm and 450 nm is strictly linear with protein concentration. This simple procedure increases the accuracy and improves the sensitivity of the assay about 10-fold, permitting quantification down to 50 ng of bovine serum albumin. Furthermore, the interference commonly introduced by detergents that are used to create the cell lysates is greatly reduced by the new protocol. A linear equation developed on the basis of mass action and Beer's law perfectly fits the experimental data.
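A minimal sketch of how the ratiometric reading could be used in practice, with invented absorbance values for the BSA standards (not the paper's data):

    # Fit the A590/A450 ratio linearly against known BSA standards, then
    # read an unknown sample from the calibration.
    import numpy as np

    std_ug = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # BSA standards (ug)
    a590   = np.array([0.20, 0.30, 0.39, 0.53, 0.73])  # absorbance at 590 nm
    a450   = np.array([0.80, 0.75, 0.70, 0.62, 0.50])  # absorbance at 450 nm

    ratio = a590 / a450
    slope, intercept = np.polyfit(std_ug, ratio, 1)    # linear calibration

    def protein_ug(a590_s, a450_s):
        return (a590_s / a450_s - intercept) / slope

    print(f"{protein_ug(0.45, 0.68):.2f} ug")          # unknown sample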
Weathington, Bart L; Jones, Allan P
2006-11-01
Researchers have commonly assumed that benefits that employees view as more valuable have a greater influence on their attitudes and behaviors. Researchers have used 2 common methods to measure benefit value: attaching a monetary value to benefits and using self-reports of benefit importance. The present authors propose that the 2 approaches are conceptually distinct and have different implications. They use a social exchange perspective to justify this distinction and integrate both approaches and benefit satisfaction into a more comprehensive model of benefit perception. Results suggest that both measures have practical applications depending on the nature of the exchange relationship between the organization and employees. However, this relationship depends on the specific benefit and on employee satisfaction with that benefit. Some benefits lend themselves to a monetary estimate, whereas others lend themselves more to a nonmonetary valuation.
Study on dynamic deformation synchronized measurement technology of double-layer liquid surfaces
NASA Astrophysics Data System (ADS)
Tang, Huiying; Dong, Huimin; Liu, Zhanwei
2017-11-01
Accurate measurement of the dynamic deformation of double-layer liquid surfaces plays an important role in many fields, such as fluid mechanics, biomechanics, the petrochemical industry and aerospace engineering. With traditional methods it is difficult to measure the dynamic deformation of double-layer liquid surfaces synchronously. In this paper, a novel and effective method for full-field static and dynamic deformation measurement of double-layer liquid surfaces has been developed: analysis of the wavefront distortion of double-wavelength transmitted light with the geometric phase analysis (GPA) method. The double-wavelength lattice patterns used here are produced by two techniques, one using a double-wavelength laser and the other a liquid crystal display (LCD). The techniques exploit characteristics of the liquid such as high transparency, low reflectivity and fluidity. The two color lattice patterns produced by the laser and the LCD were transmitted at a certain angle through the tested double-layer liquid surfaces simultaneously. On the basis of the difference in refractive index for the two transmitted lights, the double-layer liquid surfaces were decoupled with the GPA method. Combined with the derived relationship between the phase variation of the transmission-lattice patterns and the out-of-plane heights of the two surfaces, as well as the height curves of the liquid level, the double-layer liquid surfaces can be reconstructed successfully. Compared with traditional measurement methods, the developed method not only has the common advantages of optical measurement methods, such as high precision, full-field coverage and non-contact operation, but is also simple, low cost and easy to set up.
dftools: Distribution function fitting
NASA Astrophysics Data System (ADS)
Obreschkow, Danail
2018-05-01
dftools, written in R, finds the most likely P parameters of a D-dimensional distribution function (DF) generating N objects, where each object is specified by D observables with measurement uncertainties. For instance, if the objects are galaxies, it can fit a mass function (D=1), a mass-size distribution (D=2) or the mass-spin-morphology distribution (D=3). Unlike most common fitting approaches, this method accurately accounts for measurement uncertainties and complex selection functions.
A. P. Sullivan; A. S. Holden; L. A. Patterson; G. R. McMeeking; S. M. Kreidenweis; W. C. Malm; W. M. Hao; C. E. Wold; J. L. Collett
2008-01-01
Biomass burning is an important source of particulate organic carbon (OC) in the atmosphere. Quantifying this contribution in time and space requires a means of routinely apportioning contributions of smoke from biomass burning to OC. Smoke marker (for example, levoglucosan) measurements provide the most common approach for making this determination. A lack of source...
Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.
2016-01-01
Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.
Reconstructing networks from dynamics with correlated noise
NASA Astrophysics Data System (ADS)
Tam, H. C.; Ching, Emily S. C.; Lai, Pik-Yin
2018-07-01
Reconstructing the structure of complex networks from measurements of the nodes is a challenge in many branches of science. External influences are always present and act as a noise to the networks of interest. In this paper, we present a method for reconstructing networks from measured dynamics of the nodes subjected to correlated noise that cannot be approximated by a white noise. This method can reconstruct the links of both bidirectional and directed networks, the correlation time and strength of the noise, and also the relative coupling strength of the links when the coupling functions have certain properties. Our method is built upon theoretical relations between network structure and measurable quantities from the dynamics that we have derived for systems that have fixed point dynamics in the noise-free limit. Using these theoretical results, we can further explain the shortcomings of two common practices of inferring links for bidirectional networks using the Pearson correlation coefficient and the partial correlation coefficient.
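A short sketch contrasting the two inference baselines named above, Pearson versus partial correlation, on synthetic chain-coupled data; the coupling strengths and series are invented:

    # Partial correlation from the precision (inverse covariance) matrix
    # suppresses the indirect x-z link that plain Pearson correlation reports.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 2000
    x = rng.normal(size=n)
    y = 0.8 * x + rng.normal(size=n)   # x -> y
    z = 0.8 * y + rng.normal(size=n)   # y -> z (x and z linked only via y)
    X = np.vstack([x, y, z])

    corr = np.corrcoef(X)              # Pearson: x-z appears connected
    prec = np.linalg.inv(np.cov(X))
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)     # partial correlation
    np.fill_diagonal(pcorr, 1.0)

    print(f"Pearson x-z: {corr[0, 2]:.2f}, partial x-z: {pcorr[0, 2]:.2f}")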
Sansom, P; Copley, V R; Naik, F C; Leach, S; Hall, I M
2013-01-01
Statistical methods used in spatio-temporal surveillance of disease are able to identify abnormal clusters of cases but typically do not provide a measure of the degree of association between one case and another. Such a measure would facilitate the assignment of cases to common groups and be useful in outbreak investigations of diseases that potentially share the same source. This paper presents a model-based approach which, on the basis of available location data, provides a measure of the strength of association between cases in space and time and which is used to designate and visualise the most likely groupings of cases. The method was developed as a prospective surveillance tool to signal potential outbreaks, but it may also be used to explore groupings of cases in outbreak investigations. We demonstrate the method by using a historical case series of Legionnaires’ disease amongst residents of England and Wales. PMID:23483594
Handwriting individualization using distance and rarity
NASA Astrophysics Data System (ADS)
Tang, Yi; Srihari, Sargur; Srinivasan, Harish
2012-01-01
Forensic individualization is the task of associating observed evidence with a specific source. The likelihood ratio (LR) is a quantitative measure that expresses the degree of uncertainty in individualization, where the numerator represents the likelihood that the evidence corresponds to the known and the denominator the likelihood that it does not correspond to the known. Since the number of parameters needed to compute the LR is exponential with the number of feature measurements, a commonly used simplification is the use of likelihoods based on distance (or similarity) given the two alternative hypotheses. This paper proposes an intermediate method which decomposes the LR as the product of two factors, one based on distance and the other on rarity. It was evaluated using a data set of handwriting samples, by determining whether two writing samples were written by the same/different writer(s). The accuracy of the distance and rarity method, as measured by error rates, is significantly better than that of the distance method.
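A hedged sketch of the distance-times-rarity decomposition; the distance distributions and the feature-frequency table are invented placeholders, not the paper's model:

    # LR ~ distance factor (similarity under same-source vs different-source
    # distance distributions) x rarity factor (how uncommon the shared
    # characteristics are in a reference population).
    from scipy import stats

    same_d = stats.norm(loc=0.2, scale=0.1)   # distances when same writer
    diff_d = stats.norm(loc=0.6, scale=0.2)   # distances when different writers
    feature_freq = {"rare_loop": 0.02, "common_slant": 0.40}  # hypothetical

    def likelihood_ratio(distance, shared_features):
        distance_factor = same_d.pdf(distance) / diff_d.pdf(distance)
        rarity_factor = 1.0
        for f in shared_features:
            rarity_factor /= feature_freq[f]  # rarer features -> larger LR
        return distance_factor * rarity_factor

    print(f"LR = {likelihood_ratio(0.25, ['rare_loop']):.1f}")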
NASA Astrophysics Data System (ADS)
Burman, Erik; Hansbo, Peter; Larson, Mats G.
2018-03-01
Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and for the reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and the error in the measurements.
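For orientation, a minimal statement of the Tikhonov-regularized least squares functional in this setting; the notation is generic, not taken from the note:

    \min_{q}\; J_\alpha(q) \;=\; \tfrac{1}{2}\,\bigl\| u(q) - u_\delta \bigr\|_{L^2(\omega)}^2 \;+\; \tfrac{\alpha}{2}\,\| q \|_Q^2

Here u(q) solves the elliptic state (Poisson) problem for the control or source term q, u_delta denotes the measured data on the observation set omega, and alpha > 0 weights the added least squares term; the stabilized methods discussed above replace this alpha-term with discretization-aware stabilization so that accuracy is not limited by a fixed regularization weight.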
NASA Astrophysics Data System (ADS)
Faugeras, Blaise; Blum, Jacques; Heumann, Holger; Boulbe, Cédric
2017-08-01
The modeling of polarimetry Faraday rotation measurements commonly used in tokamak plasma equilibrium reconstruction codes is an approximation to the Stokes model. This approximation is not valid for the foreseen ITER scenarios, where high-current and high-electron-density plasma regimes are expected. In this work a method enabling the consistent resolution of the inverse equilibrium reconstruction problem in the framework of non-linear free-boundary equilibrium coupled to the Stokes model equation for polarimetry is provided. Using optimal control theory we derive the optimality system for this inverse problem. A sequential quadratic programming (SQP) method is proposed for its numerical resolution. Numerical experiments with noisy synthetic measurements in the ITER tokamak configuration for two test cases, the second of which is an H-mode plasma, show that the method is efficient and that the accuracy of the identification of the unknown profile functions is improved compared to the use of classical Faraday measurements.
Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review.
Chung, Stephanie T; Chacko, Shaji K; Sunehag, Agneta L; Haymond, Morey W
2015-12-01
Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotope methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. © 2015 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.
Measurement of aspheric mirror segments using Fizeau interferometry with CGH correction
NASA Astrophysics Data System (ADS)
Burge, James H.; Zhao, Chunyu; Dubin, Matt
2010-07-01
Large aspheric primary mirrors are proposed that use hundreds of segments, all of which must be aligned and phased to approximate the desired continuous mirror. We present a method of measuring these concave segments with a Fizeau interferometer in which a spherical convex reference surface is held a few millimeters from the aspheric segment. The aspheric shape is accommodated by a small computer generated hologram (CGH). Different segments are measured by replacing the CGH. As a Fizeau test, nearly all of the optical elements and air spaces are common to both the measurement and reference wavefront, so the sensitivities are not tight. Also, since the reference surface of the test plate is common to all tests, this system achieves excellent control of the radius of curvature variation from one part to another. This paper describes the test system design and analysis for such a test, and presents data from a similar 1.4-m test performed at the University of Arizona.
Adventure Education and Resilience Enhancement
ERIC Educational Resources Information Center
Beightol, Jesse; Jevertson, Jenn; Carter, Susan; Gray, Sky; Gass, Michael
2012-01-01
This study assessed the effect of an experiential, adventure-based program on levels of resilience in fifth-grade Latino students. A mixed methods, quasi-experimental design was used to measure the impact of the Santa Fe Mountain Center's Anti-Bullying Initiative on internal assets commonly associated with resilient individuals. Results indicated…
Perspectives: Using Critical Incidents to Understand ESL Student Satisfaction
ERIC Educational Resources Information Center
Walker, John
2015-01-01
In a marketized environment, ESL providers, in common with other postcompulsory educational institutions, canvass student satisfaction with their services. While the predominant method is likely to be based on tick-box questionnaires using Likert scales that measure degrees of satisfaction, qualitative methodology is an option when rich data is…
Associations among Text Messaging, Academic Performance, and Sexual Behaviors of Adolescents
ERIC Educational Resources Information Center
Perry, Raymond C. W.; Braun, Rebecca A.; Cantu, Michelle; Dudovitz, Rebecca N.; Sheoran, Bhupendra; Chung, Paul J.
2014-01-01
Background: Text messaging is an increasingly common mode of communication, especially among adolescents, and frequency of texting may be a measure of one's sociability. This study examined how text messaging ("texting") frequency and academic performance are associated with adolescent sexual behaviors. Methods: A cross-sectional survey…
Adverse Childhood Experiences and Childhood Autobiographical Memory Disturbance
ERIC Educational Resources Information Center
Brown, David W.; Anda, Robert F.; Edwards, Valerie J.; Felitti, Vincent J.; Dube, Shanta R.; Giles, Wayne H.
2007-01-01
Objective: To examine relationships between childhood autobiographical memory disturbance (CAMD) and adverse childhood experiences (ACEs) which are defined as common forms of child maltreatment and related traumatic stressors. Methods: We use the ACE score (an integer count of eight different categories of ACEs) as a measure of cumulative exposure…
Tips, Tropes, and Trivia: Ideas for Teaching Educational Research.
ERIC Educational Resources Information Center
Stallings, William M.; And Others
The collective experience of more than 50 years has led to the development of approaches that have enhanced student comprehension in the teaching of educational research methods, statistics, and measurement. Tips for teachers include using illustrative problems with one-digit numbers, using common situations and everyday objects to illustrate…
Measurement of semiochemical release rates with a dedicated environmental control system
Heping Zhu; Harold W. Thistle; Christopher M. Ranger; Hongping Zhou; Brian L. Strom
2015-01-01
Insect semiochemical dispensers are commonly deployed under variable environmental conditions over a specified period. Predictions of their longevity are hampered by a lack of methods to accurately monitor and predict how primary variables affect semiochemical release rate. A system was constructed to precisely determine semiochemical release rates under...
Measurement of cell viability in in vitro cultures.
Castro-Concha, Lizbeth A; Escobedo, Rosa María; Miranda-Ham, María de Lourdes
2006-01-01
An overview of the methods for assessing cell viability in in vitro cultures is presented. The protocols of four of the most commonly used assays are described in detail, so the readers may be able to determine which assay is suitable for their own projects using plant cell cultures.
The Language, Working Memory, and Other Cognitive Demands of Verbal Tasks
ERIC Educational Resources Information Center
Archibald, Lisa M. D.
2013-01-01
Purpose: To gain a better understanding of the cognitive processes supporting verbal abilities, the underlying structure and interrelationships between common verbal measures were investigated. Methods: An epidemiological sample (n = 374) of school-aged children completed standardized tests of language, intelligence, and short-term and working…
Evaluating the Accuracy of Common Runoff Estimation Methods for New Impervious Hot-Mix Asphalt
Accurately predicting runoff volume from impervious surfaces for water quality design events (e.g., 25.4 mm) is important for sizing green infrastructure stormwater control measures to meet water quality and infiltration design targets. The objective of this research was to quan...
Empirical Performance of Covariates in Education Observational Studies
ERIC Educational Resources Information Center
Wong, Vivian C.; Valentine, Jeffrey C.; Miller-Bains, Kate
2017-01-01
This article summarizes results from 12 empirical evaluations of observational methods in education contexts. We look at the performance of three common covariate-types in observational studies where the outcome is a standardized reading or math test. They are: pretest measures, local geographic matching, and rich covariate sets with a strong…
Total organic halide (TOX) analyzers are commonly used to measure the amount of dissolved halogenated organic byproducts in disinfected waters. Because of the lack of information on the identity of disinfection byproducts, rigorous testing of the dissolved organic halide (DOX) pr...
Can Value-Added Measures of Teacher Performance Be Trusted?
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Wooldridge, Jeffrey M.
2015-01-01
We investigate whether commonly used value-added estimation strategies produce accurate estimates of teacher effects under a variety of scenarios. We estimate teacher effects in simulated student achievement data sets that mimic plausible types of student grouping and teacher assignment scenarios. We find that no one method accurately captures…
Differences That Make a Difference: A Study in Collaborative Learning
ERIC Educational Resources Information Center
Touchman, Stephanie
2012-01-01
Collaborative learning is a common teaching strategy in classrooms across age groups and content areas. It is important to measure and understand the cognitive process involved during collaboration to improve teaching methods involving interactive activities. This research attempted to answer the question: why do students learn more in…
Three common finishing treatments of stainless steel that are used for equipment during poultry processing were tested for resistance to bacterial contamination. Methods were developed to measure attached bacteria and to identify factors that make surface finishes susceptible or ...
Rethinking Traditional Methods of Survey Validation
ERIC Educational Resources Information Center
Maul, Andrew
2017-01-01
It is commonly believed that self-report, survey-based instruments can be used to measure a wide range of psychological attributes, such as self-control, growth mindsets, and grit. Increasingly, such instruments are being used not only for basic research but also for supporting decisions regarding educational policy and accountability. The…
Air Quality Criteria for Sulfur Oxides.
ERIC Educational Resources Information Center
National Air Pollution Control Administration (DHEW), Washington, DC.
Included is a literature review which comprehensively discusses knowledge of the sulfur oxides commonly found in the atmosphere. The subject content is represented by the 10 chapter titles: Physical and Chemical Properties and the Atmospheric Reactions of the Oxides of Sulfur; Sources and Methods of Measurements of Sulfur Oxides in the Atmosphere;…
This project involves development, validation testing and application of a fast, efficient method of quantitatively measuring occurrence and concentration of common human viral pathogens, enterovirus and hepatitis A virus, in ground water samples using real-time reverse transcrip...
DESIGN AND ANALYSIS OF AN EXPERIMENT FOR ASSESSING CYANIDE IN GOLD MINING WASTES
Gold mining wastes treated by heap leaching cyanidization typically contain several metallo-cyanide species. Accurate measurement of total cyanide by the most common methods in such a case may be hampered by the inadequate recoveries that occur for certain cyanide compounds (e.g....
The Use of Conversational Repairs by African American Preschoolers
ERIC Educational Resources Information Center
Stockman, Ida J.; Karasinski, Laura; Guillory, Barbara
2008-01-01
Purpose: This study aimed to describe the types and frequency of conversational repairs used by African American (AA) children in relationship to their geographic locations and levels of performance on commonly used speech-language measures. Method: The strategies used to initiate repairs and respond to repair requests were identified in…
Complexity-Entropy Causality Plane as a Complexity Measure for Two-Dimensional Patterns
Ribeiro, Haroldo V.; Zunino, Luciano; Lenzi, Ervin K.; Santoro, Perseu A.; Mendes, Renio S.
2012-01-01
Complexity measures are essential to understand complex systems and there are numerous definitions to analyze one-dimensional data. However, extensions of these approaches to two or higher-dimensional data, such as images, are much less common. Here, we reduce this gap by applying the ideas of the permutation entropy combined with a relative entropic index. We build up a numerical procedure that can be easily implemented to evaluate the complexity of two or higher-dimensional patterns. We work out this method in different scenarios where numerical experiments and empirical data were taken into account. Specifically, we have applied the method to fractal landscapes generated numerically where we compare our measures with the Hurst exponent; liquid crystal textures where nematic-isotropic-nematic phase transitions were properly identified; 12 characteristic textures of liquid crystals where the different values show that the method can distinguish different phases; and Ising surfaces where our method identified the critical temperature and also proved to be stable. PMID:22916097
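A minimal sketch of the entropy half of the procedure, two-dimensional permutation entropy over 2x2 sliding windows; the paper's complexity-entropy plane additionally combines this with a disequilibrium term, which is omitted here:

    # Map each 2x2 window to the ordinal pattern of its values and compute
    # the normalized Shannon entropy of the pattern histogram (4! = 24 patterns).
    import numpy as np
    from collections import Counter
    from math import log

    def permutation_entropy_2d(img):
        patterns = Counter()
        for i in range(img.shape[0] - 1):
            for j in range(img.shape[1] - 1):
                window = img[i:i+2, j:j+2].ravel()
                patterns[tuple(np.argsort(window))] += 1
        total = sum(patterns.values())
        probs = [c / total for c in patterns.values()]
        return -sum(p * log(p) for p in probs) / log(24)

    rng = np.random.default_rng(1)
    print(permutation_entropy_2d(rng.random((64, 64))))  # ~1 for white noise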
The study of frequency-scan photothermal reflectance technique for thermal diffusivity measurement
Hua, Zilong; Ban, Heng; Hurley, David H.
2015-05-05
A frequency-scan photothermal reflectance technique to measure the thermal diffusivity of bulk samples is studied in this manuscript. Similar to general photothermal reflectance methods, an intensity-modulated heating laser and a constant-intensity probe laser are used to determine the surface temperature response under sinusoidal heating. The approach involves fixing the distance between the heating and probe laser spots, recording the phase lag of the reflected probe laser intensity with respect to the heating laser frequency modulation, and extracting the thermal diffusivity using the phase lag versus (frequency)^(1/2) relation. The experimental validation is performed on three samples (SiO2, CaF2 and Ge), which span a wide range of thermal diffusivities. The measured thermal diffusivity values agree closely with literature values. Lastly, compared to the commonly used spatial scan method, the experimental setup and operation of the frequency scan method are simplified, and the uncertainty level is equal to or smaller than that of the spatial scan method.
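A hedged sketch of how diffusivity could be extracted from the phase-lag versus square-root-of-frequency relation; the spot separation, frequencies and phase values are illustrative, not the paper's data:

    # For a fixed pump-probe offset x, one-dimensional thermal-wave theory
    # gives phase ~ x*sqrt(pi*f/alpha) + const, so a linear fit of phase
    # against sqrt(f) yields alpha from the slope: alpha = pi*x^2/slope^2.
    import numpy as np

    x = 5e-6                                  # pump-probe spot separation (m)
    alpha_true = 9e-7                         # SiO2-like diffusivity (m^2/s)
    f = np.linspace(1e3, 1e5, 20)             # modulation frequencies (Hz)
    phase = x * np.sqrt(np.pi * f / alpha_true) + 0.1  # synthetic phase (rad)

    slope, _ = np.polyfit(np.sqrt(f), phase, 1)
    alpha_fit = np.pi * x**2 / slope**2
    print(f"alpha = {alpha_fit:.2e} m^2/s")   # recovers ~9e-7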
NASA Astrophysics Data System (ADS)
Tajaddodianfar, Farid; Moheimani, S. O. Reza; Owen, James; Randall, John N.
2018-01-01
A common cause of tip-sample crashes in a Scanning Tunneling Microscope (STM) operating in constant current mode is the poor performance of its feedback control system. We show that there is a direct link between the Local Barrier Height (LBH) and robustness of the feedback control loop. A method known as the "gap modulation method" was proposed in the early STM studies for estimating the LBH. We show that the obtained measurements are affected by controller parameters and propose an alternative method which we prove to produce LBH measurements independent of the controller dynamics. We use the obtained LBH estimation to continuously update the gains of a STM proportional-integral (PI) controller and show that while tuning the PI gains, the closed-loop system tolerates larger variations of LBH without experiencing instability. We report experimental results, conducted on two STM scanners, to establish the efficiency of the proposed PI tuning approach. Improved feedback stability is believed to help in avoiding the tip/sample crash in STMs.
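As a toy illustration of the gain-scheduling idea, the square-root scaling and all constants below are assumptions for illustration, not the authors' control law:

    # In constant-current STM the loop gain scales with sqrt(LBH) (through
    # d(ln I)/dz ~ -k*sqrt(phi)), so the PI gains can be renormalized by the
    # running LBH estimate to keep the loop bandwidth roughly constant.
    import math

    K_P0, K_I0, PHI_REF = 0.05, 10.0, 4.0  # nominal gains at reference LBH (eV)

    def scheduled_gains(phi_est):
        scale = math.sqrt(PHI_REF / max(phi_est, 0.1))
        return K_P0 * scale, K_I0 * scale

    print(scheduled_gains(2.0))  # lower barrier -> higher gains tolerated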
A Method to Determine Lysine Acetylation Stoichiometries
Nakayasu, Ernesto S.; Wu, Si; Sydor, Michael A.; ...
2014-01-01
Lysine acetylation is a common protein posttranslational modification that regulates a variety of biological processes. A major bottleneck to fully understanding the functional aspects of lysine acetylation is the difficulty in measuring the proportion of lysine residues that are acetylated. Here we describe a mass spectrometry method using a combination of isotope labeling and detection of a diagnostic fragment ion to determine the stoichiometry of protein lysine acetylation. Using this technique, we determined the modification occupancy for ~750 acetylated peptides from mammalian cell lysates. Furthermore, the acetylation on the N-terminal tail of histone H4 was cross-validated by treating cells with sodium butyrate, a potent deacetylase inhibitor, and comparing changes in stoichiometry levels measured by our method with immunoblotting measurements. Of note, we observe that acetylation stoichiometry is high in nuclear proteins but very low in mitochondrial and cytosolic proteins. In summary, our method opens new opportunities to study in detail the relationship of lysine acetylation levels of proteins with their biological functions.
Cui, Xinyi; Mayer, Philipp; Gan, Jay
2013-01-01
Many important environmental contaminants are hydrophobic organic contaminants (HOCs), which include PCBs, PAHs, PBDEs, DDT and other chlorinated insecticides, among others. Owing to their strong hydrophobicity, HOCs ultimately accumulate in soil or sediment, where their ecotoxicological effects are closely regulated by sorption and thus bioavailability. The last two decades have seen a dramatic increase in research efforts in developing and applying partitioning-based methods and biomimetic extractions for measuring HOC bioavailability. However, the many variations of both analytical methods and associated measurement endpoints are often a source of confusion for users. In this review, we distinguish the most commonly used analytical approaches based on their measurement objectives, and illustrate their practical operational steps, strengths and limitations using simple flowcharts. This review may serve as guidance for new users on the selection and use of established methods, and as a reference for experienced investigators to identify potential topics for further research. Copyright © 2012 Elsevier Ltd. All rights reserved.
Ohuchi, Hiroko
2007-11-01
A novel method that can greatly improve the dosimetric sensitivity limit of a radiochromic film (RCF) has been developed, using a pair of color components (e.g., the red and green outputs from an RGB color scanner). RCFs are known to have microscopic and macroscopic nonuniformities, which come from thickness variations in the film's active radiochromic layer and coating. These variations in response lower the optical signal-to-noise ratio, resulting in lower film sensitivity. To mitigate the effects of nonuniform RCF response, an optical common-mode rejection (CMR) scheme was developed. The CMR compensates for nonuniform response by forming a ratio of two signals in which the factors common to both numerator and denominator cancel out. The scheme was applied by taking the ratio of two components, the red and green outputs from a scanner. These two components occupy neighboring wavebands about 100 nm apart and, having passed together along common attenuation paths, share a common fate except for wavelength-dependent events. Two types of dose-response curves as a function of delivered dose, ranging from 3.7 mGy to 8.1 Gy for 100 kV x-ray beams, were obtained with the optical CMR scheme and with the conventional analysis method using the red component, respectively. In the range of 3.7 mGy to 81 mGy, the optical densities obtained with the optical CMR showed good consistency among eight measured samples and an improved consistency with a linear fit, within 1 standard deviation of each measured optical density, whereas those from the conventional analysis exhibited large discrepancies among the eight samples and were not consistent with a linear fit.
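The ratio operation is straightforward to express in code. The sketch below forms the red/green ratio, normalized to an unexposed reference scan, so that wavelength-independent factors (layer thickness, coating nonuniformity) cancel; the normalization and sign convention are illustrative assumptions, not details from the paper.

```python
import numpy as np

def cmr_optical_density(red, green, red_ref, green_ref):
    """Common-mode-rejection optical density from scanner RGB data.

    red/green: channel values (arrays) of the irradiated film;
    red_ref/green_ref: the same film before exposure.  Factors common
    to both channels (thickness, coating) cancel in the ratio, leaving
    the dose-dependent, wavelength-dependent response.
    """
    ratio = (red / green) / (red_ref / green_ref)
    return -np.log10(ratio)
```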
Kainth, Daraspreet; Salazar, Pascal; Safinia, Cyrus; Chow, Ricky; Bachour, Ornina; Andalib, Sasan; McKinney, Alexander M; Divani, Afshin A
2017-01-01
Rabbit models of intracranial aneurysms are frequently used in pre-clinical settings. This study aimed to demonstrate an alternative, extravascular method for creating elastase-induced aneurysms, and to show how ligation of the right common carotid artery (RCCA) can impact flow redistribution into the left CCA (LCCA). Elastase-induced aneurysms in 18 New Zealand rabbits (4.14 ± 0.314 kg) were created by applying 3-5 U of concentrated elastase solution to the exterior of the RCCA and LCCA roots. After induction, the aneurysm was either left connected to the rest of the corresponding CCA, severed from the rest of the CCA to create a free-standing aneurysm, or anchored to nearby tissue to influence the angle and orientation of the aneurysm with respect to the parent vessel. Ultrasound studies were performed before and after creation of the aneurysms to collect blood flow measurements inside the aneurysm pouch and the surrounding arteries. Prior to sacrificing the animals, computed tomography angiography studies were performed. Harvested aneurysmal tissues were used for histological analysis. Elastase-induced aneurysms were successfully created by the extravascular approach. Histological studies showed that the biological response was similar to that of human cerebral aneurysms and of previously published elastase-induced rabbit aneurysm models. Ultrasound measurements indicated that after the RCCA was ligated, blood flow significantly increased in the LCCA at one-month follow-up. An alternative method for creating elastase-induced aneurysms has been demonstrated. The novel aspects of our method allow for ligation of one or both common carotid arteries to create a single or bilateral aneurysm, with the ability to control the orientation of the induced aneurysm.
OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis
Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David. J
2015-01-01
Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert opinion, consensus-driven recommendation is to provide detail on how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography, and sequence/protocol recommendations and hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344
Regression-assisted deconvolution.
McIntyre, Julie; Stefanski, Leonard A
2011-06-30
We present a semi-parametric deconvolution estimator for the density function of a random variable X that is measured with error, a common challenge in many epidemiological studies. Traditional deconvolution estimators rely only on assumptions about the distribution of X and the error in its measurement, and ignore information available in auxiliary variables. Our method assumes the availability of a covariate vector statistically related to X by a mean-variance function regression model, where regression errors are normally distributed and independent of the measurement errors. Simulations suggest that the estimator achieves a much lower integrated squared error than the observed-data kernel density estimator when models are correctly specified and the assumption of normal regression errors is met. We illustrate the method using anthropometric measurements of newborns to estimate the density function of newborn length. Copyright © 2011 John Wiley & Sons, Ltd.
Falaggis, Konstantinos; Towers, David P; Towers, Catherine E
2012-09-20
Multiwavelength interferometry (MWI) is a well-established technique in the field of optical metrology. Previously, we reported a theoretical analysis of the method of excess fractions that describes the mutual dependence of the unambiguous measurement range, reliability, and the measurement wavelengths. In this paper, wavelength selection strategies are introduced that build on the theoretical description and maximize the reliability of the calculated fringe order for a given measurement range, number of wavelengths, and level of phase noise. Practical implementation issues for an MWI interferometer are analyzed theoretically. It is shown that dispersion compensation is best implemented by use of reference measurements around absolute zero in the interferometer. Furthermore, the effects of wavelength uncertainty allow the ultimate performance of an MWI interferometer to be estimated.
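For readers unfamiliar with excess fractions, the core fringe-order search can be written as a brute-force sketch: given fractional fringe orders measured at several wavelengths, find the length whose implied orders at the other wavelengths are closest to integers. This is a generic textbook formulation, not the paper's optimized strategy.

```python
import numpy as np

def excess_fractions(frac_orders, wavelengths, l_max):
    """Brute-force excess-fractions solver (illustrative sketch).

    frac_orders: measured fractional fringe orders eps_i in [0, 1)
    wavelengths: measurement wavelengths lambda_i (same length unit)
    l_max: upper bound of the unambiguous measurement range
    Returns the candidate length L = (m + eps_0)*lambda_0 whose implied
    fringe orders at the other wavelengths deviate least from integers.
    """
    best_l, best_err = None, np.inf
    for m in range(int(l_max / wavelengths[0]) + 1):
        l_cand = (m + frac_orders[0]) * wavelengths[0]
        resid = l_cand / wavelengths[1:] - frac_orders[1:]
        err = np.sum((resid - np.round(resid)) ** 2)
        if err < best_err:
            best_l, best_err = l_cand, err
    return best_l
```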
Analysis of problems and failures in the measurement of soil-gas radon concentration.
Neznal, Martin; Neznal, Matěj
2014-07-01
Long-term experience in the field of soil-gas radon concentration measurements makes it possible to describe and explain the most frequent causes of failures that can appear in practice when various types of measurement methods and soil-gas sampling techniques are used. The concept of minimal sampling depth, which depends on the volume of the soil-gas sample and on the soil properties, is presented in detail. Considering the minimal sampling depth when planning measurements helps avoid the most common mistakes. Ways to identify influencing parameters, to avoid dilution of soil-gas samples by atmospheric air, and to recognise inappropriate sampling methods are discussed. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
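A back-of-the-envelope version of the sampling-depth idea: treat the extracted sample as draining a sphere of soil gas around the probe tip, so the drained radius grows with sample volume and shrinks with porosity. The spherical model and the safety factor below are illustrative assumptions for the sketch, not values from the paper.

```python
import math

def minimal_sampling_depth(sample_volume_l, porosity, safety_factor=2.0):
    """Rough estimate of the minimal soil-gas sampling depth (metres).

    Models the extracted sample as emptying a sphere of pore gas:
    V = (4/3)*pi*r**3 * porosity.  The probe should sit at least
    safety_factor * r below the surface so the sample is not diluted
    by atmospheric air drawn down from above.
    """
    v_m3 = sample_volume_l / 1000.0
    r = (3.0 * v_m3 / (4.0 * math.pi * porosity)) ** (1.0 / 3.0)
    return safety_factor * r

print(minimal_sampling_depth(0.15, 0.3))  # ~0.1 m for a 150 ml sample
```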
Bone optical spectroscopy for the measurement of hemoglobin content
NASA Astrophysics Data System (ADS)
Hollmann, Joseph L.; Arambel, Paula; Piet, Judith; Shefelbine, Sandra; Markovic, Stacey; Niedre, Mark; DiMarzio, Charles A.
2014-05-01
Osteoporosis is a common side effect of spinal cord injuries. Blood perfusion in the bone provides an indication of bone health and may help to evaluate therapies addressing bone loss. Current methods for measuring blood perfusion of bone use dyes and ionizing radiation, and yield qualitative results. We present a device capable of measuring blood oxygenation in the tibia. The device illuminates the skin directly over the tibia with a white light source and measures the diffusely reflected light in the near infrared spectrum. Multiple source-detector distances are utilized so that the blood perfusion in skin and bone may be differentiated.
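Recovering oxy- and deoxy-hemoglobin from diffuse reflectance spectra typically reduces to a least-squares fit of the modified Beer-Lambert law. The sketch below shows that fit; the extinction coefficients and effective pathlength are placeholder values for illustration, not calibrated numbers from this device.

```python
import numpy as np

# Illustrative extinction coefficients (1/(mM*cm)) for [HbO2, Hb] at a
# few NIR wavelengths; real values would come from published tables.
EXT = np.array([[0.4, 1.1],    # 730 nm
                [0.7, 1.0],    # 780 nm
                [1.0, 0.7]])   # 830 nm

def hemoglobin_fit(delta_od, pathlength_cm):
    """Least-squares fit of hemoglobin concentration changes from
    attenuation changes (modified Beer-Lambert law):
        delta_OD(lambda) = L * EXT(lambda) @ [dC_HbO2, dC_Hb]
    """
    conc, *_ = np.linalg.lstsq(EXT * pathlength_cm,
                               np.asarray(delta_od), rcond=None)
    return conc  # [dC_HbO2, dC_Hb] in mM

d_hbo2, d_hb = hemoglobin_fit([0.05, 0.06, 0.07], pathlength_cm=6.0)
print(d_hbo2, d_hb)
```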
A Triaxial Applicator for the Measurement of the Electromagnetic Properties of Materials
2018-01-01
The design, analysis, and fabrication of a prototype triaxial applicator is described. The applicator provides both reflected and transmitted signals that can be used to characterize the electromagnetic properties of materials in situ. A method for calibrating the probe is outlined and validated using simulated data. Fabrication of the probe is discussed, and measured data for typical absorbing materials and for the probe situated in air are presented. The simulations and measurements suggest that the probe should be useful for measuring the properties of common radar absorbing materials under usual in situ conditions. PMID:29382122
Consistent and efficient processing of ADCP streamflow measurements
Mueller, David S.; Constantinescu, George; Garcia, Marcelo H.; Hanes, Dan
2016-01-01
The use of Acoustic Doppler Current Profilers (ADCPs) from a moving boat is a commonly used method for measuring streamflow. Currently, the algorithms used to compute the average depth, compute edge discharge, identify invalid data, and estimate velocity and discharge for invalid data vary among manufacturers. These differences could result in different discharges being computed from identical data. A consistent computational algorithm, automated filtering, and quality assessment of ADCP streamflow measurements that are independent of the ADCP manufacturer are being developed in a software program that can process ADCP moving-boat discharge measurements regardless of the ADCP used to collect the data.
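As an example of one manufacturer-dependent step, the discharge in the unmeasured edges of a transect is commonly estimated with a ratio-interpolation formula; the sketch below uses the widely cited shape coefficients (0.3535 for a triangular bank, about 0.91 for a rectangular one), though the program described here may implement the computation differently.

```python
def edge_discharge(mean_velocity, edge_distance, edge_depth,
                   shape_coef=0.3535):
    """Estimated discharge in the unmeasured edge of an ADCP transect.

    Uses the common form Q_edge = C * V * L * D, where C is a shape
    coefficient (0.3535 triangular, ~0.91 rectangular), V the mean
    velocity in the nearest valid ensembles, L the distance to the
    bank, and D the depth at the edge.
    """
    return shape_coef * mean_velocity * edge_distance * edge_depth

print(edge_discharge(0.4, 5.0, 1.2))  # m^3/s for a triangular edge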
Study to design and develop remote manipulator system
NASA Technical Reports Server (NTRS)
Hill, J. W.; Sword, A. J.
1973-01-01
Human performance measurement techniques for remote manipulation tasks and remote sensing techniques for manipulators are described. For common manipulation tasks, performance is monitored by means of an on-line computer capable of measuring the joint angles of both master and slave arms as a function of time. The computer programs allow measurements of the operator's strategy and of physical quantities such as task time and power consumed. The results are printed out after a test run to compare different experimental conditions. For tracking tasks, we describe a method of displaying errors in three dimensions and measuring the end-effector position in three dimensions.
NASA Astrophysics Data System (ADS)
Gu, Tingwei; Kong, Deren; Shang, Fei; Chen, Jing
2017-12-01
We present an optimization algorithm to obtain low-uncertainty dynamic pressure measurements from a force-transducer-based device. In this paper, the advantages and disadvantages of the methods commonly used to measure propellant powder gas pressure, the applicable scope of dynamic pressure calibration devices, and the shortcomings of the traditional comparison calibration method based on the drop-weight device are first analysed in detail. Then, a dynamic calibration method for measuring pressure using a force sensor based on a drop-weight device is introduced. This method can effectively save time when many pressure sensors are calibrated simultaneously and extend the life of expensive reference sensors. However, the force sensor is installed between the drop-weight and the hammerhead by transition pieces through bolt fastening, which causes adverse effects such as additional pretightening and inertia forces. To mitigate these effects, the influence mechanisms of the pretightening force, the inertia force, and other factors on the force measurement are theoretically analysed. A measurement correction method is then proposed based on an artificial neural network optimized by a genetic algorithm. The training and testing data sets are obtained from calibration tests, and the selection criteria for the key parameters of the correction model are discussed. The evaluation results for the test data show that the correction model can effectively improve the force measurement accuracy of the force sensor. Compared with the traditional high-accuracy comparison calibration method, the percentage difference of the impact-force-based measurement is less than 0.6% and the relative uncertainty of the corrected force value is 1.95%, which meets the requirements of engineering applications.
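A genetic-algorithm-trained correction network can be sketched compactly. The version below evolves the weights of a one-hidden-layer network by truncation selection with Gaussian mutation; the architecture, operators, and hyper-parameters are minimal placeholders, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

def net(params, x, n_hidden=8):
    """Tiny one-hidden-layer network mapping raw sensor inputs to a
    corrected force value."""
    n_in = x.shape[1]
    w1 = params[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = params[n_in * n_hidden:n_in * n_hidden + n_hidden]
    w2, b2 = params[-n_hidden - 1:-1], params[-1]
    return np.tanh(x @ w1 + b1) @ w2 + b2

def ga_train(x, y, n_hidden=8, pop=60, gens=200, sigma=0.1):
    """Genetic-algorithm weight search: keep the best quarter of the
    population, refill with mutated copies (a minimal GA sketch)."""
    n_params = x.shape[1] * n_hidden + 2 * n_hidden + 1
    popn = rng.normal(0, 1, (pop, n_params))
    for _ in range(gens):
        fit = np.array([np.mean((net(p, x, n_hidden) - y) ** 2)
                        for p in popn])
        elite = popn[np.argsort(fit)[:pop // 4]]
        kids = elite[rng.integers(0, len(elite), pop - len(elite))]
        popn = np.vstack([elite, kids + rng.normal(0, sigma, kids.shape)])
    fit = np.array([np.mean((net(p, x, n_hidden) - y) ** 2) for p in popn])
    return popn[np.argmin(fit)]

# Usage: params = ga_train(x_cal, f_ref); f_corr = net(params, x_new)
```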
Novel method of measuring the mental workload of anaesthetists during clinical practice.
Byrne, A J; Oliver, M; Bodger, O; Barnett, W A; Williams, D; Jones, H; Murphy, A
2010-12-01
Cognitive overload has been recognized as a significant cause of error in industries such as aviation, and measuring mental workload has become a key method of improving safety. The aim of this study was to pilot a new method of measuring mental workload in the operating theatre, using a previously published methodology. The mental workload of anaesthetists was assessed by measuring their response times to a wireless vibrotactile device, together with the NASA TLX subjective workload score, during routine surgical procedures. Primary task workload was inferred from the phase of anaesthesia. Significantly increased response times were associated with the induction phase of anaesthesia (compared with maintenance/emergence), with non-consultant grade, and with more complex cases. Increased response times were also associated with self-reported mental load, physical load, and frustration. These findings are consistent with periods of increased mental workload and with the findings of other studies using similar techniques. They confirm the importance of mental workload to the performance of anaesthetists and suggest that increased mental workload is likely to be a common problem in clinical practice. Although further studies are required, the method described may be useful for measuring the mental workload of anaesthetists.
Rapid measurement of protein osmotic second virial coefficients by self-interaction chromatography.
Tessier, Peter M; Lenhoff, Abraham M; Sandler, Stanley I
2002-01-01
Weak protein interactions are often characterized in terms of the osmotic second virial coefficient (B22), which has been shown to correlate with protein phase behavior, such as crystallization. Traditional methods for measuring B22, such as static light scattering, are too expensive in terms of both time and protein to allow extensive exploration of the effects of solution conditions on B22. In this work we have measured protein interactions using self-interaction chromatography, in which protein is immobilized on chromatographic particles and the retention of the same protein is measured in isocratic elution. The relative retention of the protein reflects the average protein interactions, which we have related to the second virial coefficient via statistical mechanics. We obtain quantitative agreement between virial coefficients measured by self-interaction chromatography and traditional characterization methods for both lysozyme and chymotrypsinogen over a wide range of pH and ionic strengths, yet self-interaction chromatography requires at least an order of magnitude less time and protein than other methods. The method thus holds significant promise for the characterization of protein interactions requiring only commonly available laboratory equipment, little specialized expertise, and relatively small investments of both time and protein. PMID:11867474
White, Sarah A; van den Broek, Nynke R
2004-05-30
Before introducing a new measurement tool it is necessary to evaluate its performance. Several statistical methods have been developed, or used, to evaluate the reliability and validity of a new assessment method in such circumstances. In this paper we review some commonly used methods. Data from a study that was conducted to evaluate the usefulness of a specific measurement tool (the WHO Colour Scale) are then used to illustrate the application of these methods. The WHO Colour Scale was developed under the auspices of the WHO to provide a simple, portable, and reliable method of detecting anaemia. The Colour Scale is a discrete interval scale, whereas the actual haemoglobin values it is used to estimate are on a continuous interval scale and can be measured accurately using electrical laboratory equipment. The methods we consider are: linear regression; correlation coefficients; paired t-tests; plotting differences against mean values and deriving limits of agreement; kappa and weighted kappa statistics; sensitivity and specificity; the intraclass correlation coefficient; and the repeatability coefficient. We note that although the definitions and properties of each of these methods are well established, inappropriate methods continue to be used in the medical literature for assessing reliability and validity, as evidenced in the context of the evaluation of the WHO Colour Scale. Copyright 2004 John Wiley & Sons, Ltd.
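Two of the methods listed are simple enough to sketch directly: Bland-Altman limits of agreement for continuous measurements, and weighted kappa for agreement on an ordinal scale. The functions below follow the standard textbook definitions; variable names (e.g., the two measurement arrays) are illustrative.

```python
import numpy as np

def limits_of_agreement(a, b):
    """Bland-Altman analysis: bias and 95% limits of agreement between
    two paired measurement methods (e.g., Colour Scale estimate vs.
    laboratory haemoglobin)."""
    d = np.asarray(a, float) - np.asarray(b, float)
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def weighted_kappa(ratings_a, ratings_b, n_cat, weight="linear"):
    """Weighted kappa for two raters on an ordinal scale with
    categories coded 0 .. n_cat-1."""
    obs = np.zeros((n_cat, n_cat))
    for i, j in zip(ratings_a, ratings_b):
        obs[i, j] += 1
    obs /= obs.sum()
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    idx = np.arange(n_cat)
    w = np.abs(idx[:, None] - idx[None, :]).astype(float)
    if weight == "quadratic":
        w = w ** 2
    w /= w.max()
    # kappa_w = 1 - (weighted observed disagreement)/(weighted expected)
    return 1.0 - (w * obs).sum() / (w * exp).sum()
```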
A systematic review of health care efficiency measures.
Hussey, Peter S; de Vries, Han; Romley, John; Wang, Margaret C; Chen, Susan S; Shekelle, Paul G; McGlynn, Elizabeth A
2009-06-01
To review and characterize existing health care efficiency measures in order to facilitate a common understanding about the adequacy of these methods. Review of the MedLine and EconLit databases for articles published from 1990 to 2008, as well as search of the "gray" literature for additional measures developed by private organizations. We performed a systematic review for existing efficiency measures. We classified the efficiency measures by perspective, outputs, inputs, methods used, and reporting of scientific soundness. We identified 265 measures in the peer-reviewed literature and eight measures in the gray literature, with little overlap between the two sets of measures. Almost all of the measures did not explicitly consider the quality of care. Thus, if quality varies substantially across groups, which is likely in some cases, the measures reflect only the costs of care, not efficiency. Evidence on the measures' scientific soundness was mostly lacking: evidence on reliability or validity was reported for six measures (2.3 percent) and sensitivity analyses were reported for 67 measures (25.3 percent). Efficiency measures have been subjected to few rigorous evaluations of reliability and validity, and methods of accounting for quality of care in efficiency measurement are not well developed at this time. Use of these measures without greater understanding of these issues is likely to engender resistance from providers and could lead to unintended consequences.
NASA Astrophysics Data System (ADS)
Eason, Thomas J.; Bond, Leonard J.; Lozev, Mark G.
2016-02-01
The accuracy, precision, and reliability of ultrasonic thickness structural health monitoring systems are discussed, including the influence of systematic and environmental factors. To quantify some of these factors, a compression-wave ultrasonic thickness structural health monitoring experiment is conducted on a flat calibration block at ambient temperature with forty-four thin-film sol-gel transducers and various time-of-flight thickness calculation methods. As an initial calibration, the voltage response signals from each sensor are used to determine the common material velocity as well as the signal offset unique to each calculation method. Next, the measurement precision of the thickness error of each method is determined with a proposed weighted censored relative maximum likelihood analysis technique incorporating the propagation of asymmetric measurement uncertainty. The results are presented as upper and lower confidence limits analogous to the a90/95 terminology used in industry-recognized Probability-of-Detection assessments. Future work is proposed to apply the statistical analysis technique to quantify the measurement precision of various thickness calculation methods under different environmental conditions, such as high temperature, rough back-wall surface, and system degradation, with an intended application of monitoring naphthenic acid corrosion in oil refineries.
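The calibration step has a simple closed form: in pulse-echo mode the round trip satisfies 2d = v*(t - t0), so a straight-line fit of 2d against measured time of flight on a block of known thicknesses yields the common velocity (slope) and each method's signal offset (from the intercept). A minimal sketch, with illustrative names:

```python
import numpy as np

def thickness_from_tof(tof_s, velocity_m_s, offset_s=0.0):
    """Pulse-echo thickness: d = v * (t - t0) / 2, where t0 is the
    method-specific signal offset found during calibration."""
    return velocity_m_s * (tof_s - offset_s) / 2.0

def calibrate(tof_s, known_thickness_m):
    """Fit 2*d = v*t - v*t0 over calibration-block steps; returns the
    common material velocity v and the time offset t0."""
    v, intercept = np.polyfit(tof_s, 2.0 * np.asarray(known_thickness_m), 1)
    return v, -intercept / v
```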
Ghodrati, Sajjad; Kandi, Saeideh Gorji; Mohseni, Mohsen
2018-06-01
In recent years, various surface roughness measurement methods have been proposed as alternatives to the commonly used stylus profilometry, which is a slow, destructive, and expensive but precise method. In this study, a novel method, called "image profilometry," is introduced for nondestructive, fast, and low-cost surface roughness measurement of randomly rough metallic samples, based on image processing and machine vision. The impact of influential parameters, such as image resolution and the filtering approach used to eliminate long-wavelength surface undulations, on the accuracy of the image profilometry results has been comprehensively investigated. Ten surface roughness parameters were measured for the samples using both stylus and image profilometry. Based on the results, the best image resolution was 800 dpi, and the most practical filtering method was Gaussian convolution + cutoff. Under these conditions, the best and worst correlation coefficients (R²) between the stylus and image profilometry results were 0.9892 and 0.9313, respectively. Our results indicate that image profilometry predicts the stylus profilometry results with high accuracy. Consequently, it could be a viable alternative to stylus profilometry, particularly in online applications.
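The "Gaussian convolution + cutoff" step and the roughness parameters themselves are standard; the sketch below separates waviness from roughness with an ISO-style Gaussian filter and computes Ra and Rq from a 1-D profile. The sigma-cutoff relation follows the usual ISO 16610-21 weighting function; the paper's exact implementation may differ.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def roughness_params(profile, cutoff_samples):
    """Ra and Rq of a surface profile after Gaussian filtering.

    The Gaussian-smoothed profile approximates the long-wavelength
    waviness; subtracting it leaves the roughness component.
    cutoff_samples is the cutoff wavelength expressed in samples.
    """
    # ISO-style Gaussian filter: sigma = cutoff * sqrt(ln(2)/(2*pi^2))
    sigma = cutoff_samples * np.sqrt(np.log(2) / (2 * np.pi ** 2))
    r = profile - gaussian_filter1d(profile, sigma)
    return np.mean(np.abs(r)), np.sqrt(np.mean(r ** 2))  # Ra, Rq
```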
Probing plasmodesmata function with biochemical inhibitors.
White, Rosemary G
2015-01-01
To investigate plasmodesmata (PD) function, a useful technique is to monitor the effect on cell-to-cell transport of applying an inhibitor of a physiological process, protein, or other cell component of interest. Changes in PD transport can then be monitored in one of several ways, most commonly by measuring the cell-to-cell movement of fluorescent tracer dyes or of free fluorescent proteins. Effects on PD structure can be detected in thin sections of embedded tissue observed using an electron microscope, most commonly a Transmission Electron Microscope (TEM). This chapter outlines commonly used inhibitors, methods for treating different tissues, how to detect altered cell-to-cell transport and PD structure, and important caveats.
A network approach for identifying and delimiting biogeographical regions.
Vilhena, Daril A; Antonelli, Alexandre
2015-04-24
Biogeographical regions (geographically distinct assemblages of species and communities) constitute a cornerstone for ecology, biogeography, evolution and conservation biology. Species turnover measures are often used to quantify spatial biodiversity patterns, but algorithms based on similarity can be sensitive to common sampling biases in species distribution data. Here we apply a community detection approach from network theory that incorporates complex, higher-order presence-absence patterns. We demonstrate the performance of the method by applying it to all amphibian species in the world (c. 6,100 species), all vascular plant species of the USA (c. 17,600) and a hypothetical data set containing a zone of biotic transition. In comparison with current methods, our approach tackles the challenges posed by transition zones and succeeds in retrieving a larger number of commonly recognized biogeographical regions. This method can be applied to generate objective, data-derived identification and delimitation of the world's biogeographical regions.
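The pipeline can be sketched as: build a bipartite species-site occurrence network, cluster it, and read bioregions off the site memberships. The paper's community detection is based on the Infomap map equation; the sketch below substitutes networkx's greedy modularity routine as a readily available stand-in, and the example data are invented.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def bioregions(occurrences):
    """Delimit biogeographical regions from species occurrence data.

    occurrences: iterable of (species, grid_cell) presence pairs.
    Builds a bipartite species-cell network and clusters it; the grid
    cells within each community form one candidate bioregion.
    """
    g = nx.Graph(occurrences)
    return [sorted(c) for c in greedy_modularity_communities(g)]

regions = bioregions([("frog_a", "cell1"), ("frog_a", "cell2"),
                      ("frog_b", "cell2"), ("toad_c", "cell9")])
print(regions)
```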
Peelle's pertinent puzzle using the Monte Carlo technique
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kawano, Toshihiko; Talou, Patrick; Burr, Thomas
2009-01-01
We try to understand the long-standing problem of Peelle's Pertinent Puzzle (PPP) using the Monte Carlo technique. We allow the probability density functions to take arbitrary forms in order to assess the impact of the assumed distribution, and we obtain the least-squares solution directly from numerical simulations. We found that the standard least-squares method gives the correct answer if a weighting function is properly provided. Results from the numerical simulations show that the correct answer to PPP is 1.1 ± 0.25 if the common error is multiplicative. The thought-provoking answer of 0.88 is also correct, if the common error is additive and the error is proportional to the measured values. The least-squares method correctly gives us the most probable case, where the additive component has a negative value. Finally, the standard method fails for PPP due to a distorted (non-Gaussian) joint distribution.
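For reference, the conventional generalized-least-squares evaluation that produces the famous 0.88 can be reproduced in a few lines. The inputs (measurements 1.5 and 1.0 with 10% independent and 20% common errors, covariance built from the measured values) are the standard PPP example; this reproduces the textbook treatment, not this paper's Monte Carlo setup.

```python
import numpy as np

m = np.array([1.5, 1.0])  # the two discrepant measurements

def gls_average(cov, m):
    """Generalized least-squares average with a full covariance matrix:
    mu = (1' C^-1 m) / (1' C^-1 1)."""
    w = np.linalg.inv(cov)
    one = np.ones(len(m))
    return (one @ w @ m) / (one @ w @ one)

# 10% independent + 20% common error, both evaluated at the measured
# values -- the covariance construction that yields the puzzling 0.88.
s_ind, s_com = 0.10 * m, 0.20 * m
cov = np.diag(s_ind ** 2) + np.outer(s_com, s_com)
print(gls_average(cov, m))  # ~0.88
```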