Sample records for method involves measuring

  1. Survey Measurement of Father Involvement in Childrearing: A Reliability and Validity Study.

    ERIC Educational Resources Information Center

    Riley, Dave

    The purpose of this paper is to describe a specific method of measuring fathers' childrearing involvement. The conceptual scheme underlying the method addresses involvement in routine child care, play with the child, and school-related interactions. Measures involved the father's share of childrearing (as compared with the mother's) and the…

  2. Non-destructive ultrasonic measurements of case depth [in steel]

    NASA Technical Reports Server (NTRS)

    Flambard, C.; Lambert, A.

    1978-01-01

    Two ultrasonic methods for nondestructive measurements of the depth of a case-hardened layer in steel are described. One method involves analysis of ultrasonic waves diffused back from the bulk of the workpiece. The other method involves finding the speed of propagation of ultrasonic waves launched on the surface of the work. Procedures followed in the two methods for measuring case depth are described.

  3. Rendering the "Not-So-Simple" Pendulum Experimentally Accessible.

    ERIC Educational Resources Information Center

    Jackson, David P.

    1996-01-01

    Presents three methods for obtaining experimental data related to acceleration of a simple pendulum. Two of the methods involve angular position measurements and the subsequent calculation of the acceleration while the third method involves a direct measurement of the acceleration. Compares these results with theoretical calculations and…

  4. Low-Resolution Raman-Spectroscopy Combustion Thermometry

    NASA Technical Reports Server (NTRS)

    Nguyen, Quang-Viet; Kojima, Jun

    2008-01-01

    A method of optical thermometry, now undergoing development, involves low-resolution measurement of the spectrum of spontaneous Raman scattering (SRS) from N2 and O2 molecules. The method is especially suitable for measuring temperatures in high pressure combustion environments that contain N2, O2, or N2/O2 mixtures (including air). Methods based on SRS (in which scattered light is shifted in wavelength by amounts that depend on vibrational and rotational energy levels of laser-illuminated molecules) have been popular means of probing flames because they are almost the only methods that provide spatially and temporally resolved concentrations and temperatures of multiple molecular species in turbulent combustion. The present SRS-based method differs from prior SRS-based methods that have various drawbacks, a description of which would exceed the scope of this article. Two main differences between this and prior SRS-based methods are that it involves analysis in the frequency (equivalently, wavelength) domain, in contradistinction to analysis in the intensity domain in prior methods; and it involves low-resolution measurement of what amounts to predominantly the rotational Raman spectra of N2 and O2, in contradistinction to higher-resolution measurement of the vibrational Raman spectrum of N2 only in prior methods.
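
    An aside for orientation (not drawn from this record's full text): spectral-fitting Raman thermometry of this kind ultimately rests on the Boltzmann distribution of rotational level populations, so the shape of the low-resolution rotational envelope encodes temperature. A generic form of the population weighting, with line-strength factors omitted, is sketched below; the exact fitting model used in the NASA method is not specified in the abstract above.

    ```latex
    % Generic rotational-Raman population weighting (illustration only).
    % For a linear molecule (N2 or O2) with rotational constant B and nuclear-spin
    % statistical weight g_J, the intensity of the line from rotational level J scales as
    I(J) \;\propto\; g_J\,(2J+1)\,
          \exp\!\left[-\frac{h c\, B\, J(J+1)}{k_B T}\right],
    % so fitting the measured low-resolution envelope of these lines yields the temperature T.
    ```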

  5. Simple, Low-Cost Data Collection Methods for Agricultural Field Studies.

    ERIC Educational Resources Information Center

    Koenig, Richard T.; Winger, Marlon; Kitchen, Boyd

    2000-01-01

    Summarizes relatively simple and inexpensive methods for collecting data from agricultural field studies. Describes methods involving on-farm testing, crop yield measurement, quality evaluations, weed control effectiveness, plant nutrient status, and other measures. Contains 29 references illustrating how these methods were used to conduct…

  6. On-Line Measurement of Heat of Combustion of Gaseous Hydrocarbon Fuel Mixtures

    NASA Technical Reports Server (NTRS)

    Sprinkle, Danny R.; Chaturvedi, Sushil K.; Kheireddine, Ali

    1996-01-01

    A method for the on-line measurement of the heat of combustion of gaseous hydrocarbon fuel mixtures has been developed and tested. The method involves combustion of a test gas with a measured quantity of air to achieve a preset concentration of oxygen in the combustion products. This method involves using a controller which maintains the fuel (gas) volumetric flow rate at a level consistent with the desired oxygen concentration in the combustion products. The heat of combustion is determined from a known correlation with the fuel flow rate. An on-line computer accesses the fuel flow data and displays the heat of combustion measurement at desired time intervals. This technique appears to be especially applicable for measuring heats of combustion of hydrocarbon mixtures of unknown composition such as natural gas.
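
    The control-and-correlation idea in this record can be pictured with a short sketch. Everything below is illustrative: the function names, the proportional gain, and the assumed inverse-plus-offset shape of the calibration curve are hypothetical, not taken from the NASA system.

    ```python
    def heat_of_combustion(q_fuel, cal_a, cal_b):
        """Assumed calibration: at the oxygen setpoint, a fuel with a higher heating value
        needs less volumetric flow, so an inverse-plus-offset curve is used for illustration."""
        return cal_a / q_fuel + cal_b


    def adjust_fuel_flow(q_fuel, o2_measured, o2_setpoint, gain=0.05):
        """One proportional control step: excess O2 in the products means too little fuel."""
        return q_fuel + gain * (o2_measured - o2_setpoint)
    ```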

  7. Measuring Severity of Involvement in Speech Delay: Segmental and Whole-Word Measures

    ERIC Educational Resources Information Center

    Flipsen, Peter, Jr.; Hammer, Jill B.; Yost, Kathryn M.

    2005-01-01

    Purpose: This study examined whether any of a series of segmental and whole-word measures of articulatory competence captured more of the variance in impressionistic ratings of severity of involvement in speech delay. It also examined whether knowing the age of the child affected severity ratings. Method: Ten very experienced speech-language…

  8. Assessment on the methods of measuring the tyre-road contact patch stresses

    NASA Astrophysics Data System (ADS)

    Anghelache, G.; Moisescu, A.-R.; Buretea, D.

    2017-08-01

    The paper reviews established and modern methods for investigating tri-axial stress distributions in the tyre-road contact patch. The authors used three methods of measuring stress distributions: the strain gauge method, a force sensing technique, and acceleration measurements. Four prototypes of instrumented-pin transducers implementing these measuring methods were developed. Data acquisition of the contact patch stress distributions was performed with each instrumented-pin transducer. The results are analysed and compared, underlining the advantages and drawbacks of each method. The experimental results indicate that all three methods are valuable.

  9. A New Method to Cross Calibrate and Validate TOMS, SBUV/2, and SCIAMACHY Measurements

    NASA Technical Reports Server (NTRS)

    Ahmad, Ziauddin; Hilsenrath, Ernest; Einaudi, Franco (Technical Monitor)

    2001-01-01

    A unique method to validate back scattered ultraviolet (buv) type satellite data that complements the measurements from existing ground networks is proposed. The method involves comparing the zenith sky radiance measurements from the ground to the nadir radiance measurements taken from space. Since the measurements are compared directly, the proposed method is superior to any other method that involves comparing derived products (for example, ozone), because comparison of derived products involves inversion algorithms which are susceptible to several types of errors. Forward radiative transfer (RT) calculations show that for an aerosol free atmosphere, the ground-based zenith sky radiance measurement and the satellite nadir radiance measurements can be predicted with an accuracy of better than 1 percent. The RT computations also show that for certain values of the solar zenith angles, the radiance comparisons could be better than half a percent. This accuracy is practically independent of ozone amount and aerosols in the atmosphere. Experiences with the Shuttle Solar Backscatter Ultraviolet (SSBUV) program show that the accuracy of the ground-based zenith sky radiance measuring instrument can be maintained at a level of a few tenths of a percent. This implies that the zenith sky radiance measurements can be used to validate Total Ozone Mapping Spectrometer (TOMS), Solar Backscatter Ultraviolet (SBUV/2), and The SCanning Imaging Absorption SpectroMeter for Atmospheric CHartographY (SCIAMACHY) radiance data. Also, this method will help improve the long term precision of the measurements for better trend detection and the accuracy of other BUV products such as tropospheric ozone and aerosols. Finally, in the long term, this method is a good candidate to inter-calibrate and validate long term observations of upcoming operational instruments such as Global Ozone Monitoring Experiment (GOME-2), Ozone Mapping Instrument (OMI), Ozone Dynamics Ultraviolet Spectrometer (ODUS), and Ozone Mapping and Profiler Suite (OMPS).

  10. An integrative research review of instruments measuring religious involvement: implications for nursing research with African Americans.

    PubMed

    Mokel, Melissa Jennifer; Shellman, Juliette M

    2013-01-01

    Many instruments that measure religious involvement often (a) contain unclear, poorly developed constructs; (b) lack methodological rigor in scale development; and (c) contain language and content culturally incongruent with the religious experiences of diverse ethnic groups. The primary aims of this review were to (a) synthesize the research on instruments designed to measure religious involvement, (b) evaluate the methodological quality of instruments that measure religious involvement, and (c) examine these instruments for conceptual congruency with African American religious involvement. An updated integrative research review method guided the process (Whittemore & Knafl, 2005). 152 articles were reviewed and 23 articles retrieved. Only 3 retained instruments were developed under methodologically rigorous conditions. All 3 instruments were congruent with a conceptual model of African American religious involvement. The Fetzer Multidimensional Measure of Religious Involvement and Spirituality (FMMRS; Idler et al., 2003) was found to have favorable characteristics. Further examination and psychometric testing are warranted to determine its acceptability, readability, and cultural sensitivity in an African American population.

  11. Archimedes Revisited: A Faster, Better, Cheaper Method of Accurately Measuring the Volume of Small Objects

    ERIC Educational Resources Information Center

    Hughes, Stephen W.

    2005-01-01

    A little-known method of measuring the volume of small objects based on Archimedes' principle is described, which involves suspending an object in a water-filled container placed on electronic scales. The suspension technique is a variation on the hydrostatic weighing technique used for measuring volume. The suspension method was compared with two…
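
    The suspension technique follows directly from Archimedes' principle: with the water-filled container on the balance and the object hung fully submerged (not touching the container), the reading rises by the mass of the displaced water. A minimal sketch of that calculation, with an assumed water density, is given below; it is my own illustration rather than the article's exact procedure.

    ```python
    def volume_by_suspension(reading_before_kg, reading_submerged_kg,
                             water_density_kg_m3=998.2):
        """Object volume (m^3) from the rise in the balance reading when the object is
        suspended fully under water; 998.2 kg/m^3 is water near 20 degC, so the density
        at the actual water temperature should be substituted."""
        displaced_mass_kg = reading_submerged_kg - reading_before_kg
        return displaced_mass_kg / water_density_kg_m3


    # Example: a 12.5 g rise in the reading corresponds to roughly 12.5 cm^3.
    print(volume_by_suspension(0.2500, 0.2625))   # ~1.25e-05 m^3
    ```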

  12. The importance of male partner involvement for women's acceptability and adherence to female-initiated HIV prevention methods in Zimbabwe.

    PubMed

    Montgomery, Elizabeth T; van der Straten, Ariane; Chidanyika, Agnes; Chipato, Tsungai; Jaffar, Shabbar; Padian, Nancy

    2011-07-01

    Enlisting male partner involvement is perceived as an important component of women's successful uptake of female-initiated HIV prevention methods. We conducted a longitudinal study among a cohort of 955 Zimbabwean women participating in a clinical trial of the effectiveness of a female-initiated HIV prevention method (the diaphragm and lubricant gel) to: (a) describe the extent to which women involved their male partners in the decision to use the study products, and (b) measure the effect perceived male partner support had on their acceptability and consistent use of these methods. Reported levels of male partner involvement in discussions and decisions regarding: joining the study, study activities, the outcome of HIV/STI test results, and product use were very high. In multivariate analyses, regular disclosure of study product use and partner approval for the diaphragm and gel were significantly associated with women's acceptability and consistent use of the products; an essential component for determining efficacy of investigational prevention methods. These results support the need for more sophisticated measurement of how couples interact to make decisions that impact study participation and investigational product use as well as more rigorous adaptations and evaluations of existing strategies to involve male partners in female-initiated HIV prevention trials.

  13. [Neuropsychological methods of examination of age-specific performance parameters (author's transl)].

    PubMed

    Quatember, R; Maly, J

    1980-11-15

    200 test persons were subjected to a double-blind experiment involving medication with K. H. 3 (Schwarzhaupt). The measurement procedures involved nine apparatus-based methods at the psychophysiological measurement level and led to the following outcomes: 1) an increase of the psychomotor tempo of the dominant hand after 5 months of application of K. H. 3 (motor performance series); 2) a reduction of reaction errors determined by a vigilance measurement instrument after 5 months of treatment with K. H. 3 (evidenced by an increase in monotony resistance and continuous attention); 3) an improvement of multiple-choice reactions (simultaneous reaction capacity) to optic, acoustic and orientation-linked stimuli (fewer false and delayed reactions); 4) an increase of visual attentiveness and visual short-term memory after 5 months of medication with K. H. 3 (measured by the Cognitron concentration measurement device). No statistically significant differences in the investigated performance parameters were found between the K. H. 3 and placebo groups after 3 months of application of K. H. 3. The results of the present study, involving measurements at the psychophysiological measurement level, are compared with data from a previous study.

  14. Simplified power control method for cellular mobile communication

    NASA Astrophysics Data System (ADS)

    Leung, Y. W.

    1994-04-01

    The centralized power control (CPC) method measures the gain of the communication links between every mobile and every base station in the cochannel cells and determines optimal transmitter power to maximize the minimum carrier-to-interference ratio. The authors propose a simplified power control method which has nearly the same performance as the CPC method but which involves much smaller measurement overhead.
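
    For readers unfamiliar with the quantity being optimised, the sketch below spells out the carrier-to-interference ratio whose minimum centralized power control maximizes. The gain matrix, powers, and noise term are generic illustrations; the simplified method proposed in this record is not reproduced here.

    ```python
    import numpy as np

    def cir_per_link(gains, powers, noise=0.0):
        """CIR_i = G_ii * p_i / (sum_{j != i} G_ij * p_j + noise), where gains[i, j] is the
        link gain from transmitter j to receiver i and powers[j] is the transmit power.
        Centralized power control searches for the power vector maximizing min_i CIR_i."""
        received = gains * powers                    # element-wise: G_ij * p_j
        carrier = np.diag(received)
        interference = received.sum(axis=1) - carrier + noise
        return carrier / interference


    G = np.array([[1.0, 0.1],
                  [0.2, 1.0]])
    print(cir_per_link(G, np.array([0.5, 1.0])))     # [5., 10.] -> minimum CIR is 5
    ```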

  15. Is dream recall underestimated by retrospective measures and enhanced by keeping a logbook? A review.

    PubMed

    Aspy, Denholm J; Delfabbro, Paul; Proeve, Michael

    2015-05-01

    There are two methods commonly used to measure dream recall in the home setting. The retrospective method involves asking participants to estimate their dream recall in response to a single question, and the logbook method involves keeping a daily record of one's dream recall. Until recently, the implicit assumption has been that these measures are largely equivalent. However, this is challenged by the tendency for retrospective measures to yield significantly lower dream recall rates than logbooks. A common explanation for this is that retrospective measures underestimate dream recall. Another is that keeping a logbook enhances it. If retrospective measures underestimate dream recall and logbooks enhance it, then both are unlikely to reflect typical dream recall rates and may be confounded with variables associated with the underestimation and enhancement effects. To date, this issue has received insufficient attention. The present review addresses this gap in the literature. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Finding the Density of Objects without Measuring Mass and Volume

    ERIC Educational Resources Information Center

    Mumba, Frackson; Tsige, Mesfin

    2007-01-01

    A simple method based on the moment of forces and Archimedes' principle is described for finding density without measuring the mass and volume of an object. The method involves balancing two unknown objects of masses M[subscript 1] and M[subscript 2] on each side of a pivot on a metre rule and measuring their corresponding moment arms. The object…
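
    The abstract does not reproduce the working equations, but one plausible reading of the balance-and-submerge idea (my illustration, which may differ from the authors' exact protocol) shows why neither mass nor volume needs to be measured:

    ```latex
    % Two objects of unknown masses M_1, M_2 balanced on a metre rule in air, at moment
    % arms d_1 and d_2 about the pivot:
    M_1 g\, d_1 = M_2 g\, d_2 .
    % Re-balancing with object 1 fully submerged in water (density \rho_w) at a new arm d_1':
    (M_1 - \rho_w V_1)\, g\, d_1' = M_2 g\, d_2 = M_1 g\, d_1
    \quad\Longrightarrow\quad
    \rho_1 = \frac{M_1}{V_1} = \rho_w\, \frac{d_1'}{d_1' - d_1} .
    % The masses cancel, so the density follows from the measured moment arms alone.
    ```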

  17. Measuring the degree of integration for an integrated service network

    PubMed Central

    Ye, Chenglin; Browne, Gina; Grdisa, Valerie S; Beyene, Joseph; Thabane, Lehana

    2012-01-01

    Background: Integration involves the coordination of services provided by autonomous agencies and improves the organization and delivery of multiple services for target patients. Current measures generally do not distinguish between agencies’ perception and expectation. We propose a method for quantifying the agencies’ service integration. Using the data from the Children’s Treatment Network (CTN), we aimed to measure the degree of integration for the CTN agencies in York and Simcoe. Theory and methods: We quantified the integration by the agreement between perceived and expected levels of involvement and calculated four scores from different perspectives for each agency. We used the average score to measure the global network integration and examined the sensitivity of the global score. Results: Most agencies’ integration scores were <65%. As measured by the agreement between every other agency’s perception and expectation, the overall integration of CTN in Simcoe and York was 44% (95% CI: 39%–49%) and 52% (95% CI: 48%–56%), respectively. The sensitivity analysis showed that the global scores were robust. Conclusion: Our method extends existing measures of integration and possesses a good extent of validity. We can also apply the method in monitoring improvement and linking integration with other outcomes. PMID:23593050
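
    The abstract describes the score as the agreement between perceived and expected levels of involvement but does not give the formula; the toy rule below is only meant to make the idea concrete and is not the scoring used by Ye et al.

    ```python
    def agreement_score(perceived, expected):
        """Toy integration score: percentage of partner agencies whose perceived level of
        involvement matches the expected level (e.g. ratings on a 0-4 scale).  Illustrative
        only; the published method defines agreement differently in detail."""
        matches = sum(1 for p, e in zip(perceived, expected) if p == e)
        return 100.0 * matches / len(perceived)


    print(agreement_score([3, 2, 4, 1], [3, 3, 4, 0]))   # 50.0
    ```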

  18. Method and system for determining formation porosity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pittman, R.W.; Hermes, C.E.

    1977-12-27

    The invention discloses a method and/or system for measuring formation porosity from drilling response. It involves measuring a number of drilling parameters and includes determination of tooth dullness as well as determining a reference torque empirically. One of the drilling parameters is the torque applied to the drill string.

  19. Practical data collection : establishing methods and procedures for measuring water clarity and turbidity of storm water run-off from active major highway construction sites.

    DOT National Transportation Integrated Search

    2014-09-12

    In anticipation of regulation involving a numeric turbidity limit at highway construction sites, research was done into the most appropriate, affordable methods for surface water monitoring. Measuring sediment concentration in streams may be conduc...

  20. Assessment of knowledge and skills in information literacy instruction for rehabilitation sciences students: a scoping review

    PubMed Central

    Boruff, Jill T.; Harrison, Pamela

    2018-01-01

    Objective: This scoping review investigates how knowledge and skills are assessed in the information literacy (IL) instruction for students in physical therapy, occupational therapy, or speech-language pathology, regardless of whether the instruction was given by a librarian. The objectives were to discover what assessment measures were used, determine whether these assessment methods were tested for reliability and validity, and provide librarians with guidance on assessment methods to use in their instruction in evidence-based practice contexts. Methods: A scoping review methodology was used. A systematic search strategy was run in Ovid MEDLINE and adapted for CINAHL; EMBASE; Education Resources Information Center (ERIC) (EBSCO); Library and Information Science Abstracts (LISA); Library, Information Science & Technology Abstracts (LISTA); and Proquest Theses and Dissertations from 1990 to January 16, 2017. Forty articles were included for data extraction. Results: Three major themes emerged: types of measures used, type and context of librarian involvement, and skills and outcomes described. Thirty-four measures of attitude and thirty-seven measures of performance were identified. Course products were the most commonly used type of performance measure. Librarians were involved in almost half the studies, most frequently as instructor, but also as author or assessor. Information literacy skills such as question formulation and database searching were described in studies that did not involve a librarian. Conclusion: Librarians involved in instructional assessment can use rubrics such as the Valid Assessment of Learning in Undergraduate Education (VALUE) when grading assignments to improve the measurement of knowledge and skills in course-integrated IL instruction. The Adapted Fresno Test could be modified to better suit the real-life application of IL knowledge and skills. PMID:29339931

  1. An improved method for measuring the magnetic inhomogeneity shift in hydrogen masers

    NASA Technical Reports Server (NTRS)

    Reinhardt, V. S.; Peters, H. E.

    1975-01-01

    The reported method makes it possible to conduct all maser frequency measurements under conditions of low magnetic field intensity for which the hydrogen maser is most stable. Aspects concerning the origin of the magnetic inhomogeneity shift are examined and the available approaches for measuring this shift are considered, taking into account certain drawbacks of currently used methods. An approach free of these drawbacks can be based on the measurement of changes in a parameter representing the difference between the number of atoms in the involved states.

  2. Measurement of rolling friction by a damped oscillator

    NASA Technical Reports Server (NTRS)

    Dayan, M.; Buckley, D. H.

    1983-01-01

    An experimental method for measuring rolling friction is proposed. The method is mechanically simple. It is based on an oscillator in a uniform magnetic field and does not involve any mechanical forces except for the measured friction. The measured pickup voltage is Fourier analyzed and yields the friction spectral response. The proposed experiment is not tailored for a particular case. Instead, various modes of operation, suitable to different experimental conditions, are discussed.

  3. Method and means for dynamic measurement of rates of adsorption from solutions

    DOEpatents

    Slomka, Bogdan J.; Buttermore, William H.

    1992-05-05

    A method and apparatus for dynamic measurement of rates of adsorption from solutions. The method has the advantage of avoiding the use of solvent normally used to establish a baseline. The method involves pre-evacuating the adsorbent contained in an adsorbent cell and thereafter rapidly contacting the adsorbent with analytical solution, all without prior exposure of adsorbent to pure solvent. The result is a sharp characteristic adsorption line.

  4. Interferometric Methods of Measuring Refractive Indices and Double-Refraction of Fibres.

    ERIC Educational Resources Information Center

    Hamza, A. A.; El-Kader, H. I. Abd

    1986-01-01

    Presents two methods used to measure the refractive indices and double-refraction of fibers. Experiments are described, with one involving the use of a Pluta microscope in the double-beam interference technique and the other employing the multiple-beam technique. Immersion liquids are discussed that can be used in the experiments. (TW)

  5. The Effect of Classical Music on Painting Quality and Classroom Behaviour for Students with Severe Intellectual Disabilities in Special Schools

    ERIC Educational Resources Information Center

    Waugh, Russell F.; Riddoch, Jane V.

    2007-01-01

    There are few studies measuring the effects on painting quality of playing background classical music at special schools. Primary students with severe intellectual disabilities (N=24) were taught abstract painting in a two-part method. The first part involved a Pictorial Only method and the second, immediately following it, involved a Pictorial…

  6. A critical commentary on management science in relation to reforms after institutional National Health Service failures.

    PubMed

    Regan, Paul; Ball, Elaine

    2017-03-01

    A discussion paper on the United Kingdom (UK) National Health Service (NHS) market reforms. The NHS market reforms' reliance on management science methods introduced a fundamental shift in measuring care for commissioning. A number of key reports are discussed in relation to NHS market reforms and management science. NHS market reforms were influenced through a close alliance between policy makers, the department of health, free market think tanks and management consultancies. The timing of reforms coincided with reports on NHS failings and the evolution of measurement methods to focus on finance. The balance in favour of measurement practices is of concern. Management science methods are criticised in the Francis Report yet promoted as the solution to some of the key findings; why this is so may be explained by the close alliance. The paper argues for a return to principles of management involving consensus, trust and involvement to promote quality care, with management science methods used to this end. © 2016 John Wiley & Sons Ltd.

  7. Radiative properties of flame-generated soot

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koeylue, U.O.; Faeth, G.M.

    1993-05-01

    Approximate methods for estimating the optical properties of flame-generated soot aggregates were evaluated using existing computer simulations and measurements in the visible and near-infrared portions of the spectrum. The following approximate methods were evaluated for both individual aggregates and polydisperse aggregate populations: the Rayleigh scattering approximation, Mie scattering for an equivalent sphere, and Rayleigh-Debye-Gans (R-D-G) scattering for both given and fractal aggregates. Results of computer simulations involved both prescribed aggregate geometry and numerically generated aggregates by cluster-cluster aggregation; multiple scattering was considered exactly using the mean-field approximation, and ignored using the R-D-G approximation. Measurements involved the angular scattering properties of soot in the postflame regions of both premixed and nonpremixed flames. The results show that available computer simulations and measurements of soot aggregate optical properties are not adequate to provide a definitive evaluation of the approximate prediction methods. 40 refs., 7 figs., 1 tab.

  8. Chapter 8: Whole-Building Retrofit with Consumption Data Analysis Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Agnew, Ken; Goldberg, Mimi

    Whole-building retrofits involve the installation of multiple measures. Whole-building retrofit programs take many forms. With a focus on overall building performance, these programs usually begin with an energy audit to identify cost-effective energy efficiency measures for the home. Measures are then installed, either at no cost to the homeowner or partially paid for by rebates and/or financing. The methods described here may also be applied to evaluation of single-measure retrofit programs. Related methods exist for replace-on-failure programs and for new construction, but are not the subject of this chapter.

  9. Use of Recommended Search Strategies in Systematic Reviews and the Impact of Librarian Involvement: A Cross-Sectional Survey of Recent Authors

    PubMed Central

    Koffel, Jonathan B.

    2015-01-01

    Background: Previous research looking at published systematic reviews has shown that their search strategies are often suboptimal and that librarian involvement, though recommended, is low. Confidence in the results, however, is limited due to poor reporting of search strategies in the published articles. Objectives: To more accurately measure the use of recommended search methods in systematic reviews, the levels of librarian involvement, and whether librarian involvement predicts the use of recommended methods. Methods: A survey was sent to all authors of English-language systematic reviews indexed in the Database of Abstracts of Reviews of Effects (DARE) from January 2012 through January 2014. The survey asked about their use of search methods recommended by the Institute of Medicine, Cochrane Collaboration, and the Agency for Healthcare Research and Quality and if and how a librarian was involved in the systematic review. Rates of use of recommended methods and librarian involvement were summarized. The impact of librarian involvement on use of recommended methods was examined using a multivariate logistic regression. Results: 1560 authors completed the survey. Use of recommended search methods ranged widely, from 98% for use of keywords to 9% for registration in PROSPERO, and was generally higher than in previous studies. 51% of studies involved a librarian, but only 64% acknowledged their assistance. Librarian involvement was significantly associated with the use of 65% of recommended search methods after controlling for other potential predictors. Odds ratios ranged from 1.36 (95% CI 1.06 to 1.75) for including multiple languages to 3.07 (95% CI 2.06 to 4.58) for using controlled vocabulary. Conclusions: Use of recommended search strategies is higher than previously reported, but many methods are still under-utilized. Librarian involvement predicts the use of most methods, but their involvement is under-reported within the published article. PMID:25938454
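
    As a small illustration of the analysis named in this record (a multivariate logistic regression whose exponentiated coefficients are reported as odds ratios), the sketch below runs the same kind of model on synthetic data; none of the numbers correspond to the survey itself.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    librarian = rng.integers(0, 2, n)      # 1 = a librarian was involved in the review
    other = rng.integers(0, 2, n)          # some other binary predictor
    log_odds = -0.5 + 1.1 * librarian + 0.3 * other
    used_method = rng.binomial(1, 1.0 / (1.0 + np.exp(-log_odds)))  # recommended method used?

    X = sm.add_constant(np.column_stack([librarian, other]))
    fit = sm.Logit(used_method, X).fit(disp=0)
    print(np.exp(fit.params))        # exponentiated coefficients = odds ratios
    print(np.exp(fit.conf_int()))    # 95% confidence intervals on the odds ratios
    ```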

  10. A novel automatic quantification method for high-content screening analysis of DNA double strand-break response.

    PubMed

    Feng, Jingwen; Lin, Jie; Zhang, Pengquan; Yang, Songnan; Sa, Yu; Feng, Yuanming

    2017-08-29

    High-content screening is commonly used in studies of the DNA damage response. The double-strand break (DSB) is one of the most harmful types of DNA damage lesions. The conventional method used to quantify DSBs is γH2AX foci counting, which requires manual adjustment and preset parameters and is usually regarded as imprecise, time-consuming, poorly reproducible, and inaccurate. Therefore, a robust automatic alternative method is highly desired. In this manuscript, we present a new method for quantifying DSBs which involves automatic image cropping, automatic foci-segmentation and fluorescent intensity measurement. Furthermore, an additional function was added for standardizing the measurement of DSB response inhibition based on co-localization analysis. We tested the method with a well-known inhibitor of DSB response. The new method requires only one preset parameter, which effectively minimizes operator-dependent variations. Compared with conventional methods, the new method detected a higher percentage difference of foci formation between different cells, which can improve measurement accuracy. The effects of the inhibitor on DSB response were successfully quantified with the new method (p = 0.000). The advantages of this method in terms of reliability, automation and simplicity show its potential in quantitative fluorescence imaging studies and high-content screening for compounds and factors involved in DSB response.
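
    A generic flavour of the foci-segmentation-plus-intensity step can be sketched with scikit-image. This is not the authors' pipeline (which also automates image cropping and a co-localization-based inhibition read-out); the Otsu threshold here stands in for the single preset parameter mentioned above.

    ```python
    import numpy as np
    from skimage import filters, measure

    def quantify_foci(nucleus_img: np.ndarray):
        """Segment bright gammaH2AX foci within one cropped nucleus image and return the
        focus count and the integrated focus fluorescence intensity."""
        threshold = filters.threshold_otsu(nucleus_img)   # single automatic parameter
        labels = measure.label(nucleus_img > threshold)
        regions = measure.regionprops(labels, intensity_image=nucleus_img)
        count = len(regions)
        integrated_intensity = sum(r.mean_intensity * r.area for r in regions)
        return count, integrated_intensity
    ```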

  11. Method for measuring the alternating current half-wave voltage of a Mach-Zehnder modulator based on opto-electronic oscillation.

    PubMed

    Hong, Jun; Chen, Dongchu; Peng, Zhiqiang; Li, Zulin; Liu, Haibo; Guo, Jian

    2018-05-01

    A new method for measuring the alternating current (AC) half-wave voltage of a Mach-Zehnder modulator is proposed and verified by experiment in this paper. Based on opto-electronic self-oscillation technology, the physical relationship between the saturation output power of the oscillating signal and the AC half-wave voltage is revealed, and the value of the AC half-wave voltage is obtained by measuring the saturation output power of the oscillating signal. The experimental results show that the data measured with the new method agree with those obtained by a traditional method, and that the new method requires neither an external microwave signal source nor calibration at different measurement frequencies. The measuring process is thus simplified while the accuracy of measurement is preserved, and the method has good practical value.

  12. Measuring Gravitation Using Polarization Spectroscopy

    NASA Technical Reports Server (NTRS)

    Matsko, Andrey; Yu, Nan; Maleki, Lute

    2004-01-01

    A proposed method of measuring gravitational acceleration would involve the application of polarization spectroscopy to an ultracold, vertically moving cloud of atoms (an atomic fountain). A related proposed method involving measurements of absorption of light pulses like those used in conventional atomic interferometry would yield an estimate of the number of atoms participating in the interferometric interaction. The basis of the first-mentioned proposed method is that the rotation of polarization of light is affected by the acceleration of atoms along the path of propagation of the light. The rotation of polarization is associated with a phase shift: when an atom moving in a laboratory reference frame interacts with an electromagnetic wave, the energy levels of the atom are Doppler-shifted, relative to where they would be if the atom were stationary. The Doppler shift gives rise to changes in the detuning of the light from the corresponding atomic transitions. This detuning, in turn, causes the electromagnetic wave to undergo a phase shift that can be measured by conventional means. One would infer the gravitational acceleration and/or the gradient of the gravitational acceleration from the phase measurements.
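
    As a rough scaling (my back-of-the-envelope illustration, not the proposal's full treatment), the Doppler detuning of a freely falling atom grows linearly in time, so the accumulated optical phase grows quadratically with the interrogation time and linearly with g:

    ```latex
    % Light of wavenumber k = 2\pi/\lambda interacting with an atom that falls for a time t:
    \Delta\omega(t) = k\, v(t) = k\, g\, t,
    \qquad
    \phi(T) = \int_0^T \Delta\omega(t)\, dt = \tfrac{1}{2}\, k\, g\, T^2 ,
    % so measuring the accumulated phase (here read out via polarization rotation) over a
    % known interrogation time T gives an estimate of g.
    ```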

  13. Method and means for dynamic measurement of rates of adsorption from solutions

    DOEpatents

    Slomka, B.J.; Buttermore, W.H.

    1992-05-05

    A method and apparatus are described for the dynamic measurement of rates of adsorption from solutions. The method has the advantage of avoiding the use of solvent normally used to establish a baseline. The method involves pre-evacuating the adsorbent contained in an adsorbent cell and thereafter rapidly contacting the adsorbent with analytical solution, all without prior exposure of adsorbent to pure solvent. The result is a sharp characteristic adsorption line. 5 figs.

  14. A practical method for measuring the ion exchange capacity decrease of hydroxide exchange membranes during intrinsic degradation

    NASA Astrophysics Data System (ADS)

    Kreuer, Klaus-Dieter; Jannasch, Patric

    2018-01-01

    In this work we present a practical thermogravimetric method for quantifying the IEC (ion exchange capacity) decrease of hydroxide exchange membranes (HEMs) during intrinsic degradation mainly occurring through nucleophilic attack of the anion exchanging group by hydroxide ions. The method involves measuring weight changes under controlled temperature and relative humidity. These conditions are close to those in a fuel cell, i.e. the measured degradation rate includes all effects originating from the polymeric structure, the consumption of hydroxide ions and the release of water. In particular, this approach involves no added solvents or base, thereby avoiding inaccuracies that may arise in other methods due to the presence of solvents (other than water) or co-ions (such as Na+ or K+). We demonstrate the method by characterizing the decomposition of membranes consisting of poly(2,6-dimethyl-1,4-phenylene oxide) functionalized with trimethyl-pentyl-ammonium side chains. The decomposition rate is found to depend on temperature, relative humidity RH (controlling the hydration number λ) and the total water content (controlled by the actual IEC and RH).

  15. Measurement of Assertive Behavior: Construct and Predictive Validity of Self-Report, Role-Playing, and In-Vivo Measures.

    ERIC Educational Resources Information Center

    Burkhart, Barry R.

    1979-01-01

    Seventy-five subjects, who spanned the range of assertiveness, completed two self-report measures of assertiveness, eight role-playing situations involving positive and negative assertiveness, and a telephone in-vivo task. Correlations between the three measurement methods were examined. (Author/SJL)

  16. The Q-Sort method: use in landscape assessment research and landscape planning

    Treesearch

    David G. Pitt; Ervin H. Zube

    1979-01-01

    The assessment of visual quality inherently involves the measurement of perceptual response to landscape. The Q-Sort Method is a psychometric technique which produces reliable and valid interval measurements of people's perceptions of landscape visual quality as depicted in photographs. It is readily understood by participants across a wide range of age groups and...

  17. [Facilitating Processes of Disintegration instead of Occupational Reintegration: A Qualitative Study on Employer-Involvement in Rehabilitation].

    PubMed

    Schwarz, Betje; Specht, Timo; Bethge, Matthias

    2017-12-01

    Purpose: To explore the patient's perspective on the involvement of employers in rehabilitation. Methods: 8 participants of a work-related medical rehabilitation were interviewed by telephone 4 weeks after discharge. Qualitative content analysis was used to analyze the generated data. Results: Besides poor employer involvement, the interviews revealed that the process of returning to work was characterized and hampered by unused measures for supporting vocational reintegration during rehabilitation, interface problems in the health care and social security system, and a strategy of waiting by all involved actors. Conclusion: Besides improved employer involvement, systematic interface management and full usage of existing measures are needed to support vocational reintegration. © Georg Thieme Verlag KG Stuttgart · New York.

  18. Invited Article: Concepts and tools for the evaluation of measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Possolo, Antonio; Iyer, Hari K.

    2017-01-01

    Measurements involve comparisons of measured values with reference values traceable to measurement standards and are made to support decision-making. While the conventional definition of measurement focuses on quantitative properties (including ordinal properties), we adopt a broader view and entertain the possibility of regarding qualitative properties also as legitimate targets for measurement. A measurement result comprises the following: (i) a value that has been assigned to a property based on information derived from an experiment or computation, possibly also including information derived from other sources, and (ii) a characterization of the margin of doubt that remains about the true value of the property after taking that information into account. Measurement uncertainty is this margin of doubt, and it can be characterized by a probability distribution on the set of possible values of the property of interest. Mathematical or statistical models enable the quantification of measurement uncertainty and underlie the varied collection of methods available for uncertainty evaluation. Some of these methods have been in use for over a century (for example, as introduced by Gauss for the combination of mutually inconsistent observations or for the propagation of "errors"), while others are of fairly recent vintage (for example, Monte Carlo methods including those that involve Markov Chain Monte Carlo sampling). This contribution reviews the concepts, models, methods, and computations that are commonly used for the evaluation of measurement uncertainty, and illustrates their application in realistic examples drawn from multiple areas of science and technology, aiming to serve as a general, widely accessible reference.
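
    Of the methods surveyed in this record, the Monte Carlo approach is the easiest to show in a few lines: draw the inputs from their assigned distributions, push every draw through the measurement model, and summarize the output distribution. The model below (a resistance computed from a voltage and a current) and its numbers are purely illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    V = rng.normal(5.000, 0.002, n)      # volts, standard uncertainty 2 mV
    I = rng.normal(0.1000, 0.0005, n)    # amperes, standard uncertainty 0.5 mA

    R = V / I                            # propagate each draw through the model R = V / I

    lo, hi = np.percentile(R, [2.5, 97.5])
    print(f"R = {R.mean():.3f} ohm, u(R) = {R.std(ddof=1):.3f} ohm, "
          f"95% coverage interval [{lo:.3f}, {hi:.3f}] ohm")
    ```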

  19. Use of recommended search strategies in systematic reviews and the impact of librarian involvement: a cross-sectional survey of recent authors.

    PubMed

    Koffel, Jonathan B

    2015-01-01

    Previous research looking at published systematic reviews has shown that their search strategies are often suboptimal and that librarian involvement, though recommended, is low. Confidence in the results, however, is limited due to poor reporting of search strategies in the published articles. The objective was to more accurately measure the use of recommended search methods in systematic reviews, the levels of librarian involvement, and whether librarian involvement predicts the use of recommended methods. A survey was sent to all authors of English-language systematic reviews indexed in the Database of Abstracts of Reviews of Effects (DARE) from January 2012 through January 2014. The survey asked about their use of search methods recommended by the Institute of Medicine, Cochrane Collaboration, and the Agency for Healthcare Research and Quality and if and how a librarian was involved in the systematic review. Rates of use of recommended methods and librarian involvement were summarized. The impact of librarian involvement on use of recommended methods was examined using a multivariate logistic regression. 1560 authors completed the survey. Use of recommended search methods ranged widely, from 98% for use of keywords to 9% for registration in PROSPERO, and was generally higher than in previous studies. 51% of studies involved a librarian, but only 64% acknowledged their assistance. Librarian involvement was significantly associated with the use of 65% of recommended search methods after controlling for other potential predictors. Odds ratios ranged from 1.36 (95% CI 1.06 to 1.75) for including multiple languages to 3.07 (95% CI 2.06 to 4.58) for using controlled vocabulary. Use of recommended search strategies is higher than previously reported, but many methods are still under-utilized. Librarian involvement predicts the use of most methods, but their involvement is under-reported within the published article.

  20. The measurement of heats of solution of high melting metallic systems in an electromagnetic levitation field. Ph.D. Thesis - Tech. Univ. Berlin - 1979

    NASA Technical Reports Server (NTRS)

    Frohberg, M. G.; Betz, G.

    1982-01-01

    A method was tested for measuring the enthalpies of mixing of liquid metallic alloying systems, involving the combination of two samples in the electromagnetic field of an induction coil. The heat of solution is calculated from the pyrometrically measured temperature effect, the heat capacity of the alloy, and the heat content of the added sample. The usefulness of the method was tested experimentally with iron-copper and niobium-silicon systems. This method should be especially applicable to high-melting alloys, for which conventional measurements have failed.

  1. Microrheology with optical tweezers: measuring the relative viscosity of solutions 'at a glance'.

    PubMed

    Tassieri, Manlio; Del Giudice, Francesco; Robertson, Emma J; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M

    2015-03-06

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples.
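
    For a bead held in a fixed optical trap in a Newtonian fluid, the normalised position autocorrelation decays as exp(-t·kappa/(6·pi·eta·a)), so with the same bead and trap stiffness the ratio of fitted decay times gives the relative viscosity directly. The sketch below is a minimal numerical reading of that standard result, not the paper's graphical procedure; the inputs are assumed to be NumPy arrays of lag times and NPAF values.

    ```python
    import numpy as np

    def decay_time(lag_s, npaf):
        """Decay time from a log-linear fit to the initial part of the normalised
        position autocorrelation function (NPAF ~ exp(-t / tau))."""
        keep = npaf > 0.1                  # use only the well-resolved part of the decay
        slope = np.polyfit(lag_s[keep], np.log(npaf[keep]), 1)[0]
        return -1.0 / slope

    def relative_viscosity(lag_s, npaf_sample, npaf_solvent):
        """eta_sample / eta_solvent for the same bead and trap stiffness."""
        return decay_time(lag_s, npaf_sample) / decay_time(lag_s, npaf_solvent)
    ```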

  2. Microrheology with Optical Tweezers: Measuring the relative viscosity of solutions ‘at a glance'

    PubMed Central

    Tassieri, Manlio; Giudice, Francesco Del; Robertson, Emma J.; Jain, Neena; Fries, Bettina; Wilson, Rab; Glidle, Andrew; Greco, Francesco; Netti, Paolo Antonio; Maffettone, Pier Luca; Bicanic, Tihana; Cooper, Jonathan M.

    2015-01-01

    We present a straightforward method for measuring the relative viscosity of fluids via a simple graphical analysis of the normalised position autocorrelation function of an optically trapped bead, without the need of embarking on laborious calculations. The advantages of the proposed microrheology method are evident when it is adopted for measurements of materials whose availability is limited, such as those involved in biological studies. The method has been validated by direct comparison with conventional bulk rheology methods, and has been applied both to characterise synthetic linear polyelectrolytes solutions and to study biomedical samples. PMID:25743468

  3. Hadronic vector boson decay and the art of calorimeter calibration

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobban, Olga Barbara

    2002-12-01

    Presented here are several studies involving the energy measurement of particles using calorimeters. The first study involves the effects of radiation damage on the response of a prototype calorimeter for the Compact Muon Solenoid experiment. We found that the effects of radiation damage on the calorimeter's response are dose dependent and that most of the damage will occur in the first year of running at the Large Hadron Collider. Another study involved the assessment of the Energy Flow Method, an algorithm in which the information from the calorimeter system is combined with that from the tracking system in an attempt to improve the energy resolution for jet measurements. Using the Energy Flow Method, an improvement of $\sim 30\%$ is found, but this improvement decreases at high energies, when the hadronic calorimeter resolution dominates the quality of the jet energy measurements. Finally, we developed a new method to calibrate a longitudinally segmented calorimeter. This method eliminates problems with the traditional method used for the calorimeters at the Collider Detector at Fermilab. We applied this new method in the search for hadronic decays of the $W$ and $Z$ bosons in a sample of dijet data taken during Tevatron Run IC. A signal of 9873±3950(sys)±1130 events was found when the new calibration method was used. This corresponds to a cross section $\sigma(p\bar{p} \to W,Z) \cdot B(W,Z \to \mathrm{jets}) = 35.6 \pm 14.2\,(\mathrm{sys}) \pm 4.1\,(\mathrm{stat})$ nb.

  4. High Power Amplifier Harmonic Output Level Measurement

    NASA Technical Reports Server (NTRS)

    Perez, R. M.; Hoppe, D. J.; Khan, A. R.

    1995-01-01

    A method is presented for the measurement of the harmonic output power of high power klystron amplifiers, involving coherent hemispherical radiation pattern measurements of the radiated klystron output. Results are discussed for the operation in saturated and unsaturated conditions, and with a waveguide harmonic filter included.

  5. Measurement of Outgassing Rates of Steels.

    PubMed

    Park, Chongdo; Kim, Se-Hyun; Ki, Sanghoon; Ha, Taekyun; Cho, Boklae

    2016-12-13

    Steels are commonly used materials in the fabrication of vacuum systems because of their good mechanical, corrosion, and vacuum properties. A variety of steels meet the criterion of low outgassing required for high or ultrahigh vacuum applications. However, a given material can present different outgassing rates depending on its manufacturing process or the various pretreatment processes involved during the fabrication. Thus, the measurement of outgassing rates is highly desirable for a specific vacuum application. For this reason, the rate-of-pressure rise (RoR) method is often used to measure the outgassing of hydrogen after bakeout. In this article, a detailed description of the design and execution of the experimental protocol involved in the RoR method is provided. The RoR method uses a spinning rotor gauge to minimize errors that stem from outgassing or the pumping action of a vacuum gauge. The outgassing rates of two ordinary steels (stainless steel and mild steel) were measured. The measurements were made before and after the heat pretreatment of the steels. The heat pretreatment of steels was performed to reduce the outgassing. Extremely low rates of outgassing (on the order of 10⁻¹¹ Pa m³ s⁻¹ m⁻²) can be routinely measured using relatively small samples.
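
    The quantity reported in this record is the standard rate-of-rise estimate: isolate the chamber, record the pressure accumulation, and scale the slope by the chamber volume over the sample area. A minimal sketch with synthetic numbers (not the article's data, which also corrects for gauge effects using the spinning rotor gauge):

    ```python
    import numpy as np

    def outgassing_rate(time_s, pressure_pa, chamber_volume_m3, sample_area_m2):
        """Specific outgassing rate q = V * (dp/dt) / A in Pa m^3 s^-1 m^-2,
        with dp/dt taken from a linear fit to the pressure-rise record."""
        dp_dt = np.polyfit(time_s, pressure_pa, 1)[0]
        return chamber_volume_m3 * dp_dt / sample_area_m2


    t = np.linspace(0.0, 3600.0, 7)              # one hour of accumulation
    p = 1e-7 + 2e-11 * t                         # Pa, synthetic linear rise
    print(outgassing_rate(t, p, chamber_volume_m3=0.02, sample_area_m2=0.5))   # ~8e-13
    ```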

  6. The Challenge and Opportunity of Parental Involvement in Juvenile Justice Services.

    PubMed

    Burke, Jeffrey D; Mulvey, Edward P; Schubert, Carol A; Garbin, Sara R

    2014-04-01

    The active involvement of parents - whether as recipients, extenders, or managers of services - during their youth's experience with the juvenile justice system is widely assumed to be crucial. Parents and family advocacy groups note persisting concerns with the degree to which successful parental involvement is achieved. Justice system providers are highly motivated and actively working to make improvements. These coalescing interests provide a strong motivation for innovation and improvement regarding family involvement, but the likely success of these efforts is severely limited by the absence of any detailed definition of parental involvement or validated measure of this construct. Determining whether and how parental involvement works in juvenile justice services depends on the development of clear models and sound measurement. Efforts in other child serving systems offer guidance to achieve this goal. A multidimensional working model developed with parents involved in child protective services is presented as a template for developing a model for parental involvement in juvenile justice. Features of the model requiring changes to make it more adaptable to juvenile justice are identified. A systematic research agenda for developing methods and measures to meet the present demands for enhanced parental involvement in juvenile justice services is presented.

  7. The Challenge and Opportunity of Parental Involvement in Juvenile Justice Services

    PubMed Central

    Burke, Jeffrey D.; Mulvey, Edward P.; Schubert, Carol A.; Garbin, Sara R.

    2014-01-01

    The active involvement of parents – whether as recipients, extenders, or managers of services - during their youth’s experience with the juvenile justice system is widely assumed to be crucial. Parents and family advocacy groups note persisting concerns with the degree to which successful parental involvement is achieved. Justice system providers are highly motivated and actively working to make improvements. These coalescing interests provide a strong motivation for innovation and improvement regarding family involvement, but the likely success of these efforts is severely limited by the absence of any detailed definition of parental involvement or validated measure of this construct. Determining whether and how parental involvement works in juvenile justice services depends on the development of clear models and sound measurement. Efforts in other child serving systems offer guidance to achieve this goal. A multidimensional working model developed with parents involved in child protective services is presented as a template for developing a model for parental involvement in juvenile justice. Features of the model requiring changes to make it more adaptable to juvenile justice are identified. A systematic research agenda for developing methods and measures to meet the present demands for enhanced parental involvement in juvenile justice services is presented. PMID:24748704

  8. Surface photovoltage measurements and finite element modeling of SAW devices.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Donnelly, Christine

    2012-03-01

    Over the course of a Summer 2011 internship with the MEMS department of Sandia National Laboratories, work was completed on two major projects. The first and main project of the summer involved taking surface photovoltage measurements for silicon samples, and using these measurements to determine surface recombination velocities and minority carrier diffusion lengths of the materials. The SPV method was used to fill gaps in the knowledge of material parameters that had not been determined successfully by other characterization methods. The second project involved creating a 2D finite element model of a surface acoustic wave device. A basic form of the model with the expected impedance response curve was completed, and the model is ready to be further developed for analysis of MEMS photonic resonator devices.

  9. Pharmacists' perspectives on monitoring adherence to treatment in Cystic Fibrosis.

    PubMed

    Mooney, Karen; Ryan, Cristín; Downey, Damian G

    2016-04-01

    Cystic Fibrosis (CF) management requires complex treatment regimens but adherence to treatment is poor and has negative health implications. There are various methods of measuring adherence, but little is known regarding the extent of adherence measurement in CF centres throughout the UK and Ireland. To determine the adherence monitoring practices in CF centres throughout the UK and Ireland, and to establish CF pharmacists' views on these practices. UK and Ireland Cystic Fibrosis Pharmacists' Group's annual meeting (2014). A questionnaire was designed, piloted and distributed to pharmacists attending the UK and Ireland Cystic Fibrosis Pharmacists' Group's annual meeting (2014). The main outcome measures were the methods of inhaled/nebulised antibiotic supply and the methods used to measure treatment adherence in CF centres. The questionnaire also ascertained the demographic information of participating pharmacists. Closed question responses were analysed using descriptive statistics. Open questions were analysed using content analysis. Twenty-one respondents (84 % response) were included in the analysis and were mostly from English centres (66.7 %). Detailed records of patients receiving their inhaled/nebulised antibiotics were lacking. Adherence was most commonly described to be measured at 'every clinic visit' (28.6 %) and 'occasionally' (28.6 %). Patient self-reported adherence was the most commonly used method of measuring adherence in practice (90.5 %). The availability of electronic adherence monitoring in CF centres did not guarantee its use. Pharmacists attributed an equal professional responsibility for adherence monitoring in CF to Consultants, Nurses and Pharmacists. Seventy-six percent of pharmacists felt that the current adherence monitoring practices within their own unit were inadequate and associated with the absence of sufficient specialist CF pharmacist involvement. Many suggested that greater specialist pharmacist involvement could facilitate improved adherence monitoring. Current adherence knowledge is largely based on self-report. Further work is required to establish the most appropriate method of adherence monitoring in CF centres, to improve the recording of adherence and to understand the impact of increased specialist pharmacist involvement on that adherence.

  10. Measuring the Recovery Orientation of ACT

    PubMed Central

    Salyers, Michelle P.; Stull, Laura G.; Rollins, Angela L.; McGrew, John H.; Hicks, Lia J.; Thomas, Dave; Strieter, Doug

    2014-01-01

    Background: Approaches to measuring recovery orientation are needed, particularly for programs that may struggle with implementing recovery-oriented treatment. Objective: A mixed methods comparative study was conducted to explore effective approaches to measuring recovery orientation of Assertive Community Treatment (ACT) teams. Design: Two ACT teams exhibiting high and low recovery orientation were compared using surveys, treatment plan ratings, diaries of treatment visits, and team-leader-reported treatment control mechanisms. Results: The recovery-oriented team differed on one survey measure (higher expectations for consumer recovery), treatment planning (greater consumer involvement and goal-directed content), and use of control mechanisms (less use of representative payee, agency-held lease, daily medication delivery, and family involvement). Staff and consumer diaries showed the most consistent differences (e.g., conveying hope and choice) and were the least susceptible to observer bias, but had the lowest response rates. Conclusions: Several practices differentiate recovery orientation on ACT teams, and a mixed-methods assessment approach is feasible. PMID:23690285

  11. Mounting Thin Samples For Electrical Measurements

    NASA Technical Reports Server (NTRS)

    Matus, L. G.; Summers, R. L.

    1988-01-01

    New method for mounting thin sample for electrical measurements involves use of vacuum chuck to hold a ceramic mounting plate, which holds sample. Contacts on mounting plate establish electrical connection to sample. Used to make electrical measurements over temperature range from 77 to 1,000 K and does not introduce distortions into magnetic field during Hall measurements.

  12. Off-line wafer level reliability control: unique measurement method to monitor the lifetime indicator of gate oxide validated within bipolar/CMOS/DMOS technology

    NASA Astrophysics Data System (ADS)

    Gagnard, Xavier; Bonnaud, Olivier

    2000-08-01

    We have recently published a paper on a new rapid method for determining the lifetime of the gate oxide used in a bipolar/CMOS/DMOS (BCD) technology. Because this previous method was based on a current measurement with the gate voltage as a parameter, requiring several stress voltages, it was applied only by lot sampling. We therefore sought an indicator that allows the gate oxide lifetime to be monitored during the wafer-level parametric test and that requires only one measurement of the device on each wafer test cell. Using the Weibull law and the Crook model, combined with our recent model, we have developed a new test method that needs only one electrical measurement of a MOS capacitor to monitor the quality of the gate oxide. Also based on a current measurement, this parameter is the lifetime indicator of the gate oxide. From the analysis of several wafers, we showed that it is possible to detect a low-performance wafer, corresponding to infant failure on the Weibull plot. In order to insert this new method into the BCD parametric program, a parametric flowchart was established. This type of measurement is an important challenge, because the measurements currently made (breakdown charge, Qbd, and breakdown electric field, Ebd, at the parametric level, and Ebd and interface state density, Dit, during processing) cannot guarantee the gate oxide lifetime throughout the fabrication process. This indicator measurement is the only one that predicts the lifetime decrease.
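
    The record above does not give the indicator formula, but the Weibull step it mentions can be illustrated generically. The sketch below, with hypothetical breakdown-charge data, fits a two-parameter Weibull distribution on linearised probability-plot axes; it is not the authors' single-measurement lifetime indicator, only the kind of analysis a Weibull plot of Qbd data involves.

        import numpy as np

        def weibull_fit(qbd):
            """Fit a two-parameter Weibull distribution to breakdown-charge data
            using median-rank plotting positions and a least-squares line on the
            linearised axes: ln(-ln(1-F)) = beta*ln(q) - beta*ln(eta)."""
            q = np.sort(np.asarray(qbd, dtype=float))
            n = q.size
            # Median-rank estimate of the cumulative failure fraction F_i
            f = (np.arange(1, n + 1) - 0.3) / (n + 0.4)
            x = np.log(q)
            y = np.log(-np.log(1.0 - f))
            beta, intercept = np.polyfit(x, y, 1)   # slope = shape parameter beta
            eta = np.exp(-intercept / beta)         # scale parameter (63.2% point)
            return beta, eta

        # Hypothetical breakdown-charge sample (C/cm^2) for one wafer's capacitors
        qbd = [4.1, 5.3, 5.8, 6.0, 6.4, 6.9, 7.2, 7.8, 0.4, 8.3]
        beta, eta = weibull_fit(qbd)
        print(f"Weibull shape beta = {beta:.2f}, scale eta = {eta:.2f}")
        # A shape parameter well below 1, or a visible low-Qbd tail, would point
        # toward the infant-failure population the wafer-level indicator targets.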

  13. Analytical solutions for determining residual stresses in two-dimensional domains using the contour method

    PubMed Central

    Kartal, Mehmet E.

    2013-01-01

    The contour method is one of the most prevalent destructive techniques for residual stress measurement. Up to now, the method has involved the use of the finite-element (FE) method to determine the residual stresses from the experimental measurements. This paper presents analytical solutions, obtained for a semi-infinite strip and a finite rectangle, which can be used to calculate the residual stresses directly from the measured data; thereby, eliminating the need for an FE approach. The technique is then used to determine the residual stresses in a variable-polarity plasma-arc welded plate and the results show good agreement with independent neutron diffraction measurements. PMID:24204187

  14. Use of an Anatomical Scalar to Control for Sex-Based Size Differences in Measures of Hyoid Excursion during Swallowing

    ERIC Educational Resources Information Center

    Molfenter, Sonja M.; Steele, Catriona M.

    2014-01-01

    Purpose: Traditional methods for measuring hyoid excursion from dynamic videofluoroscopy recordings involve calculating changes in position in absolute units (mm). This method shows a high degree of variability across studies, but there is agreement that greater hyoid excursion occurs in men than in women. Given that men are typically taller than women, the…

  15. A simple technique for measurement of pressure in the tympanitic rumen of cattle.

    PubMed

    Turner, C B; Whyte, T D

    1978-05-13

    The construction and method of use of a simple device for the non-invasive measurement of intra-rumenal pressure is outlined. Results obtained from calves suffering from increased intra-rumenal pressure (bloat) are shown. The method is capable of quantifying pressures involved in bloat and could be used to augment the visual assessment of bloat scoring.

  16. Measuring Pinhole Leaks - A Novel Method

    NASA Technical Reports Server (NTRS)

    Dunn, Carol Anne

    2009-01-01

    Each of the two shuttle launch pads has one of these large liquid hydrogen tanks, and the Shuttle program is currently using both pads. Recently, however, there have been increasing concerns over possible air leaks from the outside into the evacuated region. A method to detect such leaks involves measuring the change in the boil-off rate of the liquid hydrogen in the tank.

  17. Applications of an automated stem measurer for precision forestry

    Treesearch

    N. Clark

    2001-01-01

    Accurate stem measurements are required for the determination of many silvicultural prescriptions, i.e., what are we going to do with a stand of trees. This would only be amplified in a precision forestry context. Many methods have been proposed for optimal ways to evaluate stems for a variety of characteristics. These methods usually involve the acquisition of total...

  18. Validation of an Instrument to Measure High School Students' Attitudes toward Fitness Testing

    ERIC Educational Resources Information Center

    Mercier, Kevin; Silverman, Stephen

    2014-01-01

    Purpose: The purpose of this investigation was to develop an instrument that has scores that are valid and reliable for measuring students' attitudes toward fitness testing. Method: The method involved the following steps: (a) an elicitation study, (b) item development, (c) a pilot study, and (d) a validation study. The pilot study included 427…

  19. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement

    PubMed Central

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-01-01

    Objective The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Methods Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Results Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. Conclusions AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis. PMID:22654681
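
    As an aside on the geometry involved, the Cobb angle reported here is simply the angle between two digitized endplate lines. The sketch below computes that angle from hypothetical image coordinates; it is a generic illustration of the calculation a CAD package performs, not the authors' AutoCAD or flexible-ruler workflow.

        import numpy as np

        def cobb_angle(p1, p2, q1, q2):
            """Angle (degrees) between two endplate lines, each given by two
            digitized points (x, y) on a lateral radiograph."""
            u = np.asarray(p2, float) - np.asarray(p1, float)
            v = np.asarray(q2, float) - np.asarray(q1, float)
            cosang = abs(np.dot(u, v)) / (np.linalg.norm(u) * np.linalg.norm(v))
            return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

        # Hypothetical endplate points (image coordinates, mm)
        upper = [(10.0, 200.0), (45.0, 192.0)]   # e.g. superior endplate of L1
        lower = [(12.0, 80.0), (48.0, 105.0)]    # e.g. the lower reference endplate
        print(f"Lordosis (Cobb) angle: {cobb_angle(*upper, *lower):.1f} deg")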

  20. Measurement of plasma hydrogen sulfide in vivo and in vitro

    PubMed Central

    Shen, Xinggui; Pattillo, Christopher B.; Pardue, Sibile; Bir, Shyamal C.; Wang, Rui; Kevil, Christopher G.

    2015-01-01

    The gasotransmitter hydrogen sulfide is known to regulate multiple cellular functions during normal and pathophysiological states. However, a paucity of concise information exists regarding quantitative amounts of hydrogen sulfide involved in physiological and pathological responses. This is primarily due to disagreement among various methods employed to measure free hydrogen sulfide. In this article, we describe a very sensitive method of measuring the presence of H2S in plasma down to nanomolar levels, using monobromobimane (MBB). The current standard assay using methylene blue provides erroneous results that do not actually measure H2S. The method presented herein involves derivatization of sulfide with excess MBB in 100 mM Tris–HCl buffer (pH 9.5, 0.1 mM DTPA) for 30 min in 1% oxygen at room temperature. The fluorescent product sulfide-dibimane (SDB) is analyzed by RP-HPLC using an eclipse XDB-C18 (4.6×250 mm) column with gradient elution by 0.1% (v/v) trifluoroacetic acid in acetonitrile. The limit of detection for sulfide-dibimane is 2 nM and the SDB product is very stable over time, allowing batch storage and analysis. In summary, our MBB method is suitable for sensitive quantitative measurement of free hydrogen sulfide in multiple biological samples such as plasma, tissue and cell culture lysates, or media. PMID:21276849
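
    A routine step downstream of such an HPLC assay is converting sulfide-dibimane peak areas to concentrations against a standard curve. The sketch below shows that generic calibration step with hypothetical standards and peak areas; it is not part of the published protocol itself.

        import numpy as np

        # Hypothetical sulfide-dibimane (SDB) standards: concentration (nM) vs HPLC peak area
        std_conc = np.array([0.0, 5.0, 10.0, 50.0, 100.0, 500.0])
        std_area = np.array([0.02, 0.31, 0.60, 2.95, 5.90, 29.4])

        # Ordinary least-squares line through the standards: area = slope*conc + intercept
        slope, intercept = np.polyfit(std_conc, std_area, 1)

        def sdb_to_sulfide_nM(peak_area):
            """Convert an SDB peak area from a sample back to free-sulfide
            concentration (nM) using the standard curve; any dilution factor
            would be applied separately."""
            return (peak_area - intercept) / slope

        print(f"Sample at area 1.7 -> {sdb_to_sulfide_nM(1.7):.1f} nM sulfide")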

  1. Towards capturing meaningful outcomes for people with dementia in psychosocial intervention research: A pan-European consultation.

    PubMed

    Øksnebjerg, Laila; Diaz-Ponce, Ana; Gove, Dianne; Moniz-Cook, Esme; Mountain, Gail; Chattat, Rabih; Woods, Bob

    2018-06-19

    People with dementia are often marginalized and excluded from influence, including in relation to dementia research. There is, however, a growing requirement for inclusion through Patient and Public Involvement (PPI), but there is still limited knowledge of how researchers can fully benefit from the involvement of people with dementia in the development and testing of psychosocial interventions. This paper describes the results of a pan-European consultation with people with dementia, synthesizing their views on outcomes of psychosocial interventions. The aim was to involve people with dementia in establishing what constitute meaningful outcomes when participating in psychosocial interventions. Consultations took place at four divergent sites across Europe, involving twenty-five people with dementia from nine European countries. The methods used for the consultation were developed through an iterative process involving people with dementia, and the data were analysed using a thematic analysis approach. The results suggested that people with dementia wish to participate in interventions that enhance their well-being, confidence, health, social participation and human rights. This highlights a need for improvements in psychosocial research to capture these outcomes. Involving people with dementia in discussions of psychosocial interventions has enhanced our understanding of meaningful outcome measures in research and of methods of data collection. This study suggests that new outcome measures in psychosocial research are needed, where concepts of positive psychology and social health can guide innovation and outcome measurement. © 2018 The Authors. Health Expectations published by John Wiley & Sons Ltd.

  2. System for routine surface anthropometry using reprojection registration

    NASA Astrophysics Data System (ADS)

    Sadleir, R. J.; Owens, R. A.; Hartmann, P. E.

    2003-11-01

    Range data measurement can be usefully applied to non-invasive monitoring of anthropometric changes due to disease, healing or during normal physiological processes. We have developed a computer vision system that allows routine capture of biological surface shapes and accurate measurement of anthropometric changes, using a structured light stripe triangulation system. In many applications involving relocation of soft tissue for image-guided surgery or anthropometry it is neither accurate nor practical to apply fiducial markers directly to the body. This system features a novel method of achieving subject re-registration that involves application of fiducials by a standard data projector. Calibration of this reprojector is achieved using a variation of structured lighting techniques. The method allows accurate and comparable repositioning of elastic surfaces. Tests of repositioning using the reprojector found a significant improvement in subject registration compared to an earlier method which used video overlay comparison only. It has a current application to the measurement of breast volume changes in lactating mothers, but may be extended to any application where repeatable positioning and measurement is required.

  3. Increasing patient involvement in choosing treatment for early breast cancer.

    PubMed

    Street, R L; Voigt, B; Geyer, C; Manning, T; Swanson, G P

    1995-12-01

    This investigation examined factors affecting patient involvement in consultations to decide local treatment for early breast cancer and the effectiveness of two methods of preconsultation education aimed at increasing patient participation in these discussions. Sixty patients with Stage I or II breast cancer (1) were pretested on their knowledge about breast cancer treatment and optimism for the future, (2) were randomly assigned to one of two methods for preconsultation education: interactive multimedia program or brochure, (3) completed knowledge and optimism measures, (4) consulted with a medical oncologist, radiation oncologist, and general surgeon, and (5) completed self-report measures assessing their involvement in the consultations and control over decision-making. The consultations were audiorecorded and analyzed to identify behavioral indicators of patient involvement (question-asking, opinion-giving, and expressing concern) and physician utterances encouraging patient participation. College-educated patients younger than 65 years of age were more active participants in these consultations than were older, less educated patients. In addition, patients showed more involvement when they interacted with physicians who encouraged and facilitated patient participation. The method of education did not affect patient involvement although patients tended to learn more about breast cancer treatment after using the multimedia program than after reading the brochure. Although patients vary in their expressiveness, physicians may be able to increase patient participation in deciding treatment by using patient-centered behavior. Also, preconsultation education appears to be an effective clinical strategy for helping patients gain an accurate understanding of their treatment options before meeting with physicians.

  4. Random Item Generation Is Affected by Age

    ERIC Educational Resources Information Center

    Multani, Namita; Rudzicz, Frank; Wong, Wing Yiu Stephanie; Namasivayam, Aravind Kumar; van Lieshout, Pascal

    2016-01-01

    Purpose: Random item generation (RIG) involves central executive functioning. Measuring aspects of random sequences can therefore provide a simple method to complement other tools for cognitive assessment. We examine the extent to which RIG relates to specific measures of cognitive function, and whether those measures can be estimated using RIG…

  5. 48 CFR 9903.302-1 - Cost accounting practice.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ..., or measurement of cost. (a) Measurement of cost, as used in this part, encompasses accounting methods... practice. Examples of cost accounting practices which involve measurement of costs are— (1) The use of...

  6. Measurement of Outcomes in Vision-Related Rehabilitation.

    ERIC Educational Resources Information Center

    Head, Daniel

    1998-01-01

    Comments on an earlier article by Lorraine Lidoff on health insurance coverage of vision-related rehabilitation services. Urges a standard model of services involving selection of measurable outcomes that reflect treatment processes, selection of the most appropriate time to measure outcomes, and selection of the best method for collecting outcome…

  7. Differences That Make a Difference: A Study in Collaborative Learning

    ERIC Educational Resources Information Center

    Touchman, Stephanie

    2012-01-01

    Collaborative learning is a common teaching strategy in classrooms across age groups and content areas. It is important to measure and understand the cognitive process involved during collaboration to improve teaching methods involving interactive activities. This research attempted to answer the question: why do students learn more in…

  8. Smartphone versus knee ligament arthrometer when size does not matter.

    PubMed

    Ferretti, Andrea; Valeo, Luigi; Mazza, Daniele; Muliere, Luca; Iorio, Paolo; Giovannetti, Giovanni; Conteduca, Fabio; Iorio, Raffaele

    2014-10-01

    The use of available mechanical methods to measure anterior tibial translation (ATT) in anterior cruciate ligament (ACL)-deficient knees is limited by size and costs. This study evaluated the performance of a portable device based on a downloadable electronic smartphone application to measure ATT in ACL-deficient knees. A specific smartphone application (SmartJoint) was developed for this purpose. Two independent observers nonsequentially measured the amount of ATT during execution of a maximum manual Lachman test in 35 patients with an ACL-deficient knee, using the KT 1000 and SmartJoint on both involved and uninvolved knees. As each examiner performed the test three times on each knee, a total of 840 measurements were collected. Statistical analysis compared intertest, interobserver and intra-observer reliability using the intraclass correlation coefficient (ICC); an ICC > 0.75 indicates excellent reproducibility among measurements. Mean ATT on uninvolved knees was 6.1 mm [standard deviation (SD) = 2] with the KT 1000 and 6.4 mm (SD = 2) with SmartJoint. Mean side-to-side difference was 8.1 mm (SD = 4) with the KT 1000 and 8.3 mm (SD = 3) with SmartJoint. Intertest reliability between the two methods yielded an ICC of 0.797 [95 % confidence interval (CI) 0.717-0.857] for the uninvolved knee and of 0.987 (CI 0.981-0.991) for the involved knee. Interobserver ICCs were 0.957 (CI 0.927-0.976) for the uninvolved knee and 0.992 (CI 0.986-0.996) for the involved knee with SmartJoint, and 0.973 (CI 0.954-0.985) for the uninvolved knee and 0.989 (CI 0.981-0.994) for the involved knee with the KT 1000. The performance of SmartJoint is comparable to, and highly correlated with, measurements obtained from the KT 1000. SmartJoint may provide a truly portable, noninvasive, accurate, reliable, inexpensive and widely accessible method to characterize ATT in ACL-deficient knees.
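
    The agreement statistics reported here are intraclass correlations. As a generic illustration (not the study's own analysis), the sketch below computes a Shrout-Fleiss ICC(2,1), i.e. two-way random effects, absolute agreement, single measurement, from a hypothetical subjects-by-devices matrix.

        import numpy as np

        def icc_2_1(data):
            """Shrout-Fleiss ICC(2,1): two-way random effects, absolute agreement,
            single measurement. `data` is an (n subjects x k raters/devices) array."""
            x = np.asarray(data, dtype=float)
            n, k = x.shape
            grand = x.mean()
            row_means = x.mean(axis=1)
            col_means = x.mean(axis=0)
            msr = k * np.sum((row_means - grand) ** 2) / (n - 1)   # subjects
            msc = n * np.sum((col_means - grand) ** 2) / (k - 1)   # raters/devices
            sse = np.sum((x - row_means[:, None] - col_means[None, :] + grand) ** 2)
            mse = sse / ((n - 1) * (k - 1))                        # residual
            return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

        # Hypothetical ATT measurements (mm): rows = knees, columns = two devices
        att = np.array([[6.0, 6.3], [8.1, 8.4], [5.2, 5.0], [7.4, 7.9], [9.0, 9.2]])
        print(f"ICC(2,1) = {icc_2_1(att):.3f}")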

  9. Measurements of the Absorption by Auditorium Seating—A Model Study

    NASA Astrophysics Data System (ADS)

    BARRON, M.; COLEMAN, S.

    2001-01-01

    One of several problems with seat absorption is that only small numbers of seats can be tested in standard reverberation chambers. One method proposed for reverberation chamber measurements involves extrapolation when the absorption coefficient results are applied to actual auditoria. Model seat measurements in an effectively large model reverberation chamber have allowed the validity of this extrapolation to be checked. The alternative barrier method for reverberation chamber measurements was also tested and the two methods were compared. The effect on the absorption of row-row spacing as well as absorption by small numbers of seating rows was also investigated with model seats.

  10. Extinction-ratio-independent electrical method for measuring chirp parameters of Mach-Zehnder modulators using frequency-shifted heterodyne.

    PubMed

    Zhang, Shangjian; Wang, Heng; Zou, Xinhai; Zhang, Yali; Lu, Rongguo; Liu, Yong

    2015-06-15

    An extinction-ratio-independent electrical method is proposed for measuring the chirp parameters of Mach-Zehnder electro-optic intensity modulators, based on frequency-shifted optical heterodyne. The method uses electrical spectrum analysis of the heterodyne products between the intensity-modulated optical signal and the frequency-shifted optical carrier, and achieves measurement of the intrinsic chirp parameters in the microwave region with high frequency resolution and wide frequency range for a Mach-Zehnder modulator with a finite extinction ratio. Moreover, the proposed method avoids calibrating the responsivity fluctuation of the photodiode despite the photodetection involved. Chirp parameters as a function of modulation frequency are measured experimentally and compared with those obtained using the conventional optical spectrum analysis method. Our method enables an extinction-ratio-independent and calibration-free electrical measurement of Mach-Zehnder intensity modulators by using the high-resolution frequency-shifted heterodyne technique.

  11. A Transfer Voltage Simulation Method for Generator Step Up Transformers

    NASA Astrophysics Data System (ADS)

    Funabashi, Toshihisa; Sugimoto, Toshirou; Ueda, Toshiaki; Ametani, Akihiro

    Measurements on 13 sets of generator step-up (GSU) transformers have shown that the transfer voltage of a GSU transformer involves one dominant oscillation frequency. This frequency can be estimated from the inductance and capacitance values of the GSU transformer's low-voltage side. The observation has led to a new method for simulating a GSU transformer transfer voltage. The method is based on the EMTP TRANSFORMER model, but stray capacitances are added. The leakage inductance and the magnetizing resistance are modified using approximate curves for their frequency characteristics determined from the measured results. The new method is validated by comparison with the measured results.
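
    The dominant frequency described above is essentially an LC resonance of the low-voltage side, so a first estimate can be written as f = 1/(2*pi*sqrt(L*C)). The sketch below applies that textbook formula with hypothetical inductance and capacitance values; it estimates the oscillation frequency only and is not the authors' EMTP-based simulation method.

        import math

        def dominant_frequency_hz(inductance_h, capacitance_f):
            """Estimate the dominant transfer-voltage oscillation frequency from the
            low-voltage-side inductance (H) and capacitance (F) as an LC resonance."""
            return 1.0 / (2.0 * math.pi * math.sqrt(inductance_h * capacitance_f))

        # Hypothetical low-voltage-side values for a GSU transformer
        L, C = 50e-6, 20e-9          # 50 uH, 20 nF
        print(f"Estimated dominant frequency: {dominant_frequency_hz(L, C) / 1e3:.1f} kHz")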

  12. Reliability and validity of the AutoCAD software method in lumbar lordosis measurement.

    PubMed

    Letafatkar, Amir; Amirsasan, Ramin; Abdolvahabi, Zahra; Hadadnezhad, Malihe

    2011-12-01

    The aim of this study was to determine the reliability and validity of the AutoCAD software method in lumbar lordosis measurement. Fifty healthy volunteers with a mean age of 23 ± 1.80 years were enrolled. A lumbar lateral radiograph was taken on all participants, and the lordosis was measured according to the Cobb method. Afterward, the lumbar lordosis degree was measured via AutoCAD software and flexible ruler methods. The current study is accomplished in 2 parts: intratester and intertester evaluations of reliability as well as the validity of the flexible ruler and software methods. Based on the intraclass correlation coefficient, AutoCAD's reliability and validity in measuring lumbar lordosis were 0.984 and 0.962, respectively. AutoCAD showed to be a reliable and valid method to measure lordosis. It is suggested that this method may replace those that are costly and involve health risks, such as radiography, in evaluating lumbar lordosis.

  13. A Hydrogen Exchange Method Using Tritium and Sephadex: Its Application to Ribonuclease*

    PubMed Central

    Englander, S. Walter

    2012-01-01

    A new method for measuring the hydrogen exchange of macromolecules in solution is described. The method uses tritium to trace the movement of hydrogen, and utilizes Sephadex columns to effect, in about 2 minutes, a separation between tritiated macromolecule and tritiated solvent great enough to allow the measurement of bound tritium. High sensitivity and freedom from artifact is demonstrated and the possible value of the technique for investigation of other kinds of colloid-small molecule interaction is indicated. Competition experiments involving tritium, hydrogen, and deuterium indicate the absence of any equilibrium isotope effect in the ribonuclease-hydrogen isotope system, though a secondary kinetic isotope effect is apparent when ribonuclease is largely deuterated. Ribonuclease shows four clearly distinguishable kinetic classes of exchangeable hydrogens. Evidence is marshaled to suggest the independently measurable classes II, III, and IV (in order of decreasing rate of exchange) to represent “random-chain” peptides, peptides involved in α-helix, and otherwise shielded side-chain and peptide hydrogens, respectively. PMID:14075117

  14. Multiple Reaction Monitoring Mode Based Liquid Chromatography-Mass Spectrometry Method for Simultaneous Quantification of Brassinolide and Other Plant Hormones Involved in Abiotic Stresses.

    PubMed

    Kasote, Deepak M; Ghosh, Ritesh; Chung, Jun Young; Kim, Jonggeun; Bae, Inhwan; Bae, Hanhong

    2016-01-01

    Plant hormones are the key regulators of the adaptive stress response. Abiotic stresses such as drought and salt are known to affect the growth and productivity of plants. It is well known that the levels of plant hormones such as zeatin (ZA), abscisic acid (ABA), salicylic acid (SA), jasmonic acid (JA), and brassinolide (BR) fluctuate upon abiotic stress exposure. At present, no single liquid chromatography-mass spectrometry (LC-MS) method is suitable for simultaneous analysis of BR and the other plant hormones involved in abiotic stresses. In the present study, we developed a simple, sensitive, and rapid method for simultaneous analysis of five major plant hormones, ZA, ABA, JA, SA, and BR, which are directly or indirectly involved in drought and salt stresses. The optimized extraction procedure was simple and easy to use for simultaneous measurement of these plant hormones in Arabidopsis thaliana. The developed method is highly reproducible and can be adapted for simultaneous measurement of changes in plant hormones (ZA, ABA, JA, SA, and BR) in response to abiotic stresses in plants such as A. thaliana and tomato.

  15. An iterative method for analysis of hadron ratios and spectra in relativistic heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Choi, Suk; Lee, Kang Seog

    2016-04-01

    A new iteration method is proposed for analyzing both the multiplicities and the transverse momentum spectra measured within a small rapidity interval with a low-momentum cut-off, without assuming that the rapidity distribution is invariant under Lorentz boosts. The method is applied to the hadron data measured by the ALICE collaboration for Pb+Pb collisions at √(s_NN) = 2.76 TeV. In order to correctly consider the resonance contribution only within the small measured rapidity interval, we consider only ratios involving hadrons whose transverse momentum spectrum is available. In spite of the small number of ratios considered, the quality of the fits to both the ratios and the transverse momentum spectra is excellent. Also, the calculated ratios involving strange baryons, obtained with the fitted parameters, agree with the data surprisingly well.

  16. Developments in Sampling and Analysis Instrumentation for Stationary Sources

    ERIC Educational Resources Information Center

    Nader, John S.

    1973-01-01

    Instrumentation for the measurement of pollutant emissions is considered including sample-site selection, sample transport, sample treatment, sample analysis, and data reduction, display, and interpretation. Measurement approaches discussed involve sample extraction from within the stack and electro-optical methods. (BL)

  17. A Localized Ensemble Kalman Smoother

    NASA Technical Reports Server (NTRS)

    Butala, Mark D.

    2012-01-01

    Numerous geophysical inverse problems prove difficult because the available measurements are indirectly related to the underlying unknown dynamic state and the physics governing the system may involve imperfect models or unobserved parameters. Data assimilation addresses these difficulties by combining the measurements and physical knowledge. The main challenge in such problems usually involves their high dimensionality and the standard statistical methods prove computationally intractable. This paper develops and addresses the theoretical convergence of a new high-dimensional Monte-Carlo approach called the localized ensemble Kalman smoother.
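
    The abstract does not spell out the algorithm, but the flavour of ensemble-based assimilation can be conveyed by a standard stochastic ensemble Kalman analysis step. The sketch below assumes a linear observation operator H and Gaussian observation error R, and omits the localization and smoothing that distinguish the paper's method; it is a generic illustration, not the authors' localized ensemble Kalman smoother.

        import numpy as np

        def enkf_analysis(X, y, H, R, rng):
            """One stochastic ensemble Kalman analysis step.
            X : (n_state, n_ens) forecast ensemble
            y : (n_obs,) observation vector
            H : (n_obs, n_state) linear observation operator
            R : (n_obs, n_obs) observation-error covariance
            Returns the analysis ensemble, shape (n_state, n_ens)."""
            n_ens = X.shape[1]
            A = X - X.mean(axis=1, keepdims=True)          # state anomalies
            HA = H @ A                                     # observation-space anomalies
            P_ht = A @ HA.T / (n_ens - 1)                  # approx. P H^T
            S = HA @ HA.T / (n_ens - 1) + R                # approx. H P H^T + R
            K = P_ht @ np.linalg.inv(S)                    # Kalman gain
            # Perturbed observations keep the analysis ensemble spread consistent
            Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, n_ens).T
            return X + K @ (Y - H @ X)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(4, 50))                       # toy 4-variable, 50-member ensemble
        H = np.array([[1.0, 0.0, 0.0, 0.0]])               # observe the first variable only
        R = np.array([[0.1]])
        Xa = enkf_analysis(X, np.array([0.5]), H, R, rng)
        print(Xa.mean(axis=1))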

  18. The Impact of Adopting Physical Fitness Standards on Army Personnel Assignment: A Preliminary Study

    DTIC Science & Technology

    1981-01-01

    changes will also have the side benefit of reducing the number of job-related injuries. The changes will also provide wider and more efficient...be clear that the popular conception of a strength test, weightlifting, involves both force and work, and therefore is not a pure measure of strength...measure. It also involves risk of injury from overexertion. A second method is the isometric or static strength test in which the individual is required to

  19. Multivariate information-theoretic measures reveal directed information structure and task relevant changes in fMRI connectivity.

    PubMed

    Lizier, Joseph T; Heinzle, Jakob; Horstmann, Annette; Haynes, John-Dylan; Prokopenko, Mikhail

    2011-02-01

    The human brain undertakes highly sophisticated information processing facilitated by the interaction between its sub-regions. We present a novel method for interregional connectivity analysis, using multivariate extensions to the mutual information and transfer entropy. The method allows us to identify the underlying directed information structure between brain regions, and how that structure changes according to behavioral conditions. This method is distinguished in using asymmetric, multivariate, information-theoretical analysis, which captures not only directional and non-linear relationships, but also collective interactions. Importantly, the method is able to estimate multivariate information measures with only relatively little data. We demonstrate the method to analyze functional magnetic resonance imaging time series to establish the directed information structure between brain regions involved in a visuo-motor tracking task. Importantly, this results in a tiered structure, with known movement planning regions driving visual and motor control regions. Also, we examine the changes in this structure as the difficulty of the tracking task is increased. We find that task difficulty modulates the coupling strength between regions of a cortical network involved in movement planning and between motor cortex and the cerebellum which is involved in the fine-tuning of motor control. It is likely these methods will find utility in identifying interregional structure (and experimentally induced changes in this structure) in other cognitive tasks and data modalities.
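
    The multivariate estimators used in the study are beyond a short example, but the basic quantity, transfer entropy, can be illustrated with a simple plug-in estimate on discretized series. The sketch below uses history length 1, equal-width binning and synthetic data; it is a bivariate illustration only, not the authors' multivariate estimator.

        import numpy as np

        def transfer_entropy(source, target, bins=4):
            """Plug-in estimate of transfer entropy TE(source -> target), in bits,
            with history length 1 and equal-width discretization."""
            s = np.digitize(source, np.histogram_bin_edges(source, bins)[1:-1])
            t = np.digitize(target, np.histogram_bin_edges(target, bins)[1:-1])
            # Triplets (target_{n+1}, target_n, source_n)
            trip = np.stack([t[1:], t[:-1], s[:-1]], axis=1)
            vals, counts = np.unique(trip, axis=0, return_counts=True)
            p_xyz = counts / counts.sum()
            te = 0.0
            for (t1, t0, s0), p in zip(vals, p_xyz):
                p_t0s0 = p_xyz[(vals[:, 1] == t0) & (vals[:, 2] == s0)].sum()
                p_t1t0 = p_xyz[(vals[:, 0] == t1) & (vals[:, 1] == t0)].sum()
                p_t0 = p_xyz[vals[:, 1] == t0].sum()
                te += p * np.log2((p / p_t0s0) / (p_t1t0 / p_t0))
            return te

        rng = np.random.default_rng(1)
        x = rng.normal(size=2000)
        y = np.roll(x, 1) + 0.5 * rng.normal(size=2000)   # y driven by x with a one-step lag
        print(f"TE(x -> y) = {transfer_entropy(x, y):.3f} bits")
        print(f"TE(y -> x) = {transfer_entropy(y, x):.3f} bits")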

  20. In-Vivo Techniques for Measuring Electrical Properties of Tissues.

    DTIC Science & Technology

    1980-09-01

    Keywords: probe; electromagnetic energy; dielectric properties; monopole antenna; in-situ tissues; antemortem/postmortem studies; renal blood flow. ... mice or rats, which were positioned beneath a fixed measurement probe. Several alternative methods involving the use of semi-rigid or flexible coaxial

  1. Applied statistics in agricultural, biological, and environmental sciences.

    USDA-ARS?s Scientific Manuscript database

    Agronomic research often involves measurement and collection of multiple response variables in an effort to understand the more complex nature of the system being studied. Multivariate statistical methods encompass the simultaneous analysis of all random variables measured on each experimental or s...

  2. Review of chemical separation techniques applicable to alpha spectrometric measurements

    NASA Astrophysics Data System (ADS)

    de Regge, P.; Boden, R.

    1984-06-01

    Prior to alpha-spectrometric measurements, several chemical manipulations are usually required to obtain alpha-emitting sources with the desired radiochemical and chemical purity. These include sampling, dissolution or leaching of the elements of interest, conditioning of the solution, chemical separation, and preparation of the alpha-emitting source. The choice of a particular method depends on different criteria but always involves aspects of the selectivity or the quantitative nature of the separations. The availability of suitable tracers or spikes and of modern high-resolution instruments has resulted in the widespread application of isotopic dilution techniques to the problems associated with quantitative chemical separations. This has encouraged the development of highly selective methods and reagents, which has led to important simplifications in the separation schemes. The chemical separation methods commonly used in connection with alpha-spectrometric measurements involve precipitation with selected scavenger elements, solvent extraction, ion exchange and electrodeposition techniques, or any combination of these. Depending on the purpose of the final measurement and the type of sample available, the chemical separation methods have to be adapted to the particular needs of environmental monitoring, nuclear chemistry and metrology, safeguards and safety, waste management, and requirements in the nuclear fuel cycle. Against the background of separation methods available in the literature, the present paper highlights current developments and trends in the chemical techniques applicable to alpha spectrometry.

  3. Comparison of two surface temperature measurement methods using thermocouples and an infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures on an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method used to study the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, taking into account the experimental error and the accuracy of each method. The comparative analysis showed that, although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.

  4. Assessment of knowledge and skills in information literacy instruction for rehabilitation sciences students: a scoping review.

    PubMed

    Boruff, Jill T; Harrison, Pamela

    2018-01-01

    This scoping review investigates how knowledge and skills are assessed in the information literacy (IL) instruction for students in physical therapy, occupational therapy, or speech-language pathology, regardless of whether the instruction was given by a librarian. The objectives were to discover what assessment measures were used, determine whether these assessment methods were tested for reliability and validity, and provide librarians with guidance on assessment methods to use in their instruction in evidence-based practice contexts. A scoping review methodology was used. A systematic search strategy was run in Ovid MEDLINE and adapted for CINAHL; EMBASE; Education Resources Information Center (ERIC) (EBSCO); Library and Information Science Abstracts (LISA); Library, Information Science & Technology Abstracts (LISTA); and Proquest Theses and Dissertations from 1990 to January 16, 2017. Forty articles were included for data extraction. Three major themes emerged: types of measures used, type and context of librarian involvement, and skills and outcomes described. Thirty-four measures of attitude and thirty-seven measures of performance were identified. Course products were the most commonly used type of performance measure. Librarians were involved in almost half the studies, most frequently as instructor, but also as author or assessor. Information literacy skills such as question formulation and database searching were described in studies that did not involve a librarian. Librarians involved in instructional assessment can use rubrics such as the Valid Assessment of Learning in Undergraduate Education (VALUE) when grading assignments to improve the measurement of knowledge and skills in course-integrated IL instruction. The Adapted Fresno Test could be modified to better suit the real-life application of IL knowledge and skills.

  5. Should "standard gamble" and "'time trade off" utility measurement be used more in mental health research?

    PubMed

    Flood, Chris

    2010-06-01

    This review and discussion paper demonstrates that utility and preference measurement in mental health research is increasing. However there is still a general reluctance around using the methods due to methodological challenges and concerns around the capacity of users to understand utility methods during the research process. This paper sets out to describe and review some of the previously documented difficulties of using utility measurements in mental health services research and to highlight where they have been used successfully as measures. Additionally the paper aims to discuss a means of improving the methods used to capture service user utility and preference measurement and why decision making would be better informed as a result. International literature on utility measurement is reviewed, specifically examining the use of standard gamble and time trade off methods in mental health. Utility measurement in mental health is increasing though as the review demonstrates, concerns still exist over its application. A number of methods can be used to improve the approach overall and these are discussed as well as specific areas worthy of utility measurement including 'disutility' of admission, medication and medication side effects. Overall this paper argues that it is necessary to persist with efforts to conduct utility measurement calculation albeit with a critical eye on the methods in an attempt to ensure improvements are continually made. Utility and preference scores may be limited in that they only provide a rough score but they are defended as a means of providing some form of strength of preference for health states. The review is limited to English only texts. The debate on whether to use standard gamble and time trade off has implications for health services resource allocations, decision making, health economics research, policy making and health services research generally involving psychiatric service users. The paper argues that the absence of utility measurement in mental health runs the risk of mental health being disadvantaged in decisions around resource allocation. Institutions involved in decision making like the United Kingdom's National Institute for Health and Clinical Excellence, would be better served in their decision making and calculation of Quality Adjusted Life Years if more utility measurement in psychiatric research was carried out. Other arguments for using utility measurement include the desirability of using utility measurement to elicit a patient dimension of risk. Future utility research should aim for better involvement of service users in the design stage, the changing of time frames offered to users in health state scenarios used, a greater need for comparative work of utilities scoring across illness and between standard gamble and time trade off and more staff training in the use of utility methodology with mental health service users.

  6. A regularization corrected score method for nonlinear regression models with covariate error.

    PubMed

    Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna

    2013-03-01

    Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.

  7. Inventive Thinking in Biology.

    ERIC Educational Resources Information Center

    McCormack, Alan J., Ed.

    1982-01-01

    To encourage students to become involved in the inventive and imaginative dimensions of biology, students are asked to invent: a useful product, way to use old newspapers, insect repellent, organism attracter, organelle separater, way to measure rate of hyphal growth, and method to measure strength of spider web. (DC)

  8. Automatic identification of abstract online groups

    DOEpatents

    Engel, David W; Gregory, Michelle L; Bell, Eric B; Cowell, Andrew J; Piatt, Andrew W

    2014-04-15

    Online abstract groups, in which members aren't explicitly connected, can be automatically identified by computer-implemented methods. The methods involve harvesting records from social media and extracting content-based and structure-based features from each record. Each record includes a social-media posting and is associated with one or more entities. Each feature is stored on a data storage device and includes a computer-readable representation of an attribute of one or more records. The methods further involve grouping records into record groups according to the features of each record. Further still the methods involve calculating an n-dimensional surface representing each record group and defining an outlier as a record having feature-based distances measured from every n-dimensional surface that exceed a threshold value. Each of the n-dimensional surfaces is described by a footprint that characterizes the respective record group as an online abstract group.
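
    The patent describes the pipeline abstractly (features, record groups, per-group n-dimensional surfaces, a distance threshold). The sketch below illustrates that pipeline with group centroids standing in for the surfaces and with made-up feature vectors; it is an interpretation for illustration, not the patented method itself.

        import numpy as np

        def find_outliers(features, labels, threshold):
            """Flag records whose distance from the centroid of *every* group exceeds
            `threshold` (a centroid-based stand-in for the per-group surfaces).
            features : (n_records, n_features) array
            labels   : (n_records,) group assignment for each record"""
            centroids = np.array([features[labels == g].mean(axis=0)
                                  for g in np.unique(labels)])
            # Distance from every record to every group centroid
            dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
            return np.where((dists > threshold).all(axis=1))[0]

        rng = np.random.default_rng(2)
        grp_a = rng.normal(loc=0.0, scale=0.4, size=(20, 3))
        grp_b = rng.normal(loc=5.0, scale=0.4, size=(20, 3))
        lone = np.array([[2.5, 2.5, 2.5]])                  # far from both groups
        feats = np.vstack([grp_a, grp_b, lone])
        labels = np.array([0] * 20 + [1] * 20 + [0])        # stray record nominally in group 0
        print("Outlier indices:", find_outliers(feats, labels, threshold=2.0))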

  9. Optical Sensor/Actuator Locations for Active Structural Acoustic Control

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Palumbo, Daniel L.; Kincaid, Rex K.

    1998-01-01

    Researchers at NASA Langley Research Center have extensive experience using active structural acoustic control (ASAC) for aircraft interior noise reduction. One aspect of ASAC involves the selection of optimum locations for microphone sensors and force actuators. This paper explains the importance of sensor/actuator selection, reviews optimization techniques, and summarizes experimental and numerical results. Three combinatorial optimization problems are described. Two involve the determination of the number and position of piezoelectric actuators, and the other involves the determination of the number and location of the sensors. For each case, a solution method is suggested, and typical results are examined. The first case, a simplified problem with simulated data, is used to illustrate the method. The second and third cases are more representative of the potential of the method and use measured data. The three case studies and laboratory test results establish the usefulness of the numerical methods.

  10. Measurement of motor disability in MPTP-treated macaques using a telemetry system for estimating circadian motor activity.

    PubMed

    Barcia, C; De Pablos, V; Bautista-Hernández, V; Sanchez-Bahillo, A; Fernández-Barreiro, A; Poza, M; Herrero, M T

    2004-03-15

    The parkinsonian symptoms of primates after MPTP exposure can be measured by several visual methods (classical motor scores). However, these methods have a subjective bias, especially regarding the evaluation of motor activity. Computerized monitoring systems represent an unbiased method for measuring the motor disability of monkeys after MPTP administration. In this work, the motor activity of monkeys before and after MPTP administration was measured with a telemetry system and compared with the activity of an intact control group. A pronounced decrease in motor activity was observed after MPTP administration. These results suggest that the monitoring method is well suited for characterizing motor incapacity, and any improvement following treatment, when testing therapies for Parkinson's disease in primate MPTP models.

  11. Noncontact Measurement Of Critical Current In Superconductor

    NASA Technical Reports Server (NTRS)

    Israelsson, Ulf E.; Strayer, Donald M.

    1992-01-01

    Critical current measured indirectly via flux-compression technique. Magnetic flux compressed into gap between superconductive hollow cylinder and superconductive rod when rod inserted in hole in cylinder. Hall-effect probe measures flux density before and after compression. Method does not involve any electrical contact with superconductor. Therefore, does not cause resistive heating and consequent premature loss of superconductivity.

  12. Effects of stakeholder involvement in river management

    NASA Astrophysics Data System (ADS)

    Buchecker, M.; Menzel, S.

    2012-04-01

    In recent decades, involving local stakeholders or the local public in river management has become a standard procedure in many parts of Europe. For many decision makers, the purpose of involving other interest groups is limited to achieving sufficient local acceptance of the project, and accordingly they adopt minimal forms of involvement. Theoretical literature and first empirical studies, however, suggest that stakeholder involvement, if done well, can have much more far-reaching benefits for sustainable river management, such as better consensus, social learning and social capital building. But there is so far little reliable evidence on whether, and under which conditions, such benefits in fact result from stakeholder involvement processes. The reason for this is that such involvement processes represent very complex social interventions, and all "affordable" effect-measurement methods have their weaknesses. In our project we wanted to find out which were the really robust social effects of stakeholder involvement in river management. We therefore evaluated a number of real Swiss case studies of participatory river management using three different approaches to effect measurement: a quasi-experimental approach using repeated standardized measurement of stakeholders' attitudes, a qualitative long-term ex-post approach based on interviews with stakeholders of five participatory river projects, and a comparative analysis approach based on residents' assessments of the effects of participatory river planning, gathered in a Swiss national survey. The analysis of all three evaluation studies confirmed that stakeholder involvement in river management projects has substantive social effects. The comparison of the results of the three measurement approaches revealed that social learning and acceptance building were the most robust effects of stakeholder involvement, as they were confirmed by all three measurement approaches. Social capital building, however, was not found to be a relevant effect in the long-term qualitative ex-post measurement of stakeholder processes in river management. The data suggested that social capital was "only" maintained or reproduced by the involvement process. The results will be discussed, and implications for practice as well as for future research will be drawn.

  13. Measurement of Density, Sound Velocity, Surface Tension, and Viscosity of Freely Suspended Supercooled Liquids

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.

    1995-01-01

    Non-contact methods have been implemented in conjunction with levitation techniques to carry out the measurement of the macroscopic properties of liquids significantly cooled below their nominal melting point. Free suspension of the sample and remote methods allow a deep excursion into the metastable liquid state and the determination of its thermophysical properties. We used this approach to investigate common substances such as water, o-terphenyl and succinonitrile, as well as higher-temperature melts such as molten indium, aluminum and other metals. Although these techniques have thus far involved ultrasonic, electromagnetic, and more recently electrostatic levitation, we restrict our attention to ultrasonic methods in this paper. The magnitude of the maximum thermal supercooling achieved has ranged between 10 and 15% of the absolute temperature of the melting point for the materials mentioned above. The physical-property measurement methods have mostly been novel approaches, and the typical accuracy achieved has not yet matched that of the standard equivalent techniques involving contained samples and invasive probing. They are currently being refined, however, as the levitation techniques become more widespread, and as we gain a better understanding of the physics of levitated liquid samples.

  14. Measurement of density, sound velocity, surface tension, and viscosity of freely suspended supercooled liquids

    NASA Astrophysics Data System (ADS)

    Trinh, E. H.; Ohsaka, K.

    1995-03-01

    Noncontact methods have been implemented in conjunction with levitation techniques to carry out the measurement of the macroscopic properties of liquids significantly cooled below their nominal melting point. Free suspension of the sample and remote methods allow a deep excursion into the metastable liquid state and the determination of its thermophysical properties. We used this approach to investigate common substances such as water, o-terphenyl and succinonitrile, as well as higher-temperature melts such as molten indium, aluminum, and other metals. Although these techniques have thus far involved ultrasonic, electromagnetic, and more recently electrostatic levitation, we restrict our attention to ultrasonic methods in this paper. The magnitude of the maximum thermal supercooling achieved has ranged between 10% and 15% of the absolute temperature of the melting point for the materials mentioned above. The methods for measuring the physical properties have been mostly novel approaches, and the typical accuracy achieved has not yet matched that of the standard equivalent techniques involving contained samples and invasive probing. They are currently being refined, however, as the levitation techniques become more widespread and as we gain a better understanding of the physics of levitated liquid samples.

  15. Robust Strategy for Rocket Engine Health Monitoring

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    2001-01-01

    Monitoring the health of rocket engine systems is essentially a two-phase process. The acquisition phase involves sensing physical conditions at selected locations, converting physical inputs to electrical signals, conditioning the signals as appropriate to establish scale or filter interference, and recording results in a form that is easy to interpret. The inference phase involves analysis of results from the acquisition phase, comparison of analysis results to established health measures, and assessment of health indications. A variety of analytical tools may be employed in the inference phase of health monitoring. These tools can be separated into three broad categories: statistical, rule based, and model based. Statistical methods can provide excellent comparative measures of engine operating health. They require well-characterized data from an ensemble of "typical" engines, or "golden" data from a specific test assumed to define the operating norm in order to establish reliable comparative measures. Statistical methods are generally suitable for real-time health monitoring because they do not deal with the physical complexities of engine operation. The utility of statistical methods in rocket engine health monitoring is hindered by practical limits on the quantity and quality of available data. This is due to the difficulty and high cost of data acquisition, the limited number of available test engines, and the problem of simulating flight conditions in ground test facilities. In addition, statistical methods incur a penalty for disregarding flow complexity and are therefore limited in their ability to define performance shift causality. Rule based methods infer the health state of the engine system based on comparison of individual measurements or combinations of measurements with defined health norms or rules. This does not mean that rule based methods are necessarily simple. Although binary yes-no health assessment can sometimes be established by relatively simple rules, the causality assignment needed for refined health monitoring often requires an exceptionally complex rule base involving complicated logical maps. Structuring the rule system to be clear and unambiguous can be difficult, and the expert input required to maintain a large logic network and associated rule base can be prohibitive.
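
    As a concrete, if toy, illustration of the rule-based category described above, the sketch below checks one snapshot of sensor readings against redline-style bands. The parameter names, limits and readings are invented; a real engine rule base would be far more layered, as the text notes.

        # Minimal illustration of a rule-based health check: each rule compares a
        # sensed value against an allowed band. Names and limits are hypothetical.
        RULES = {
            "turbopump_speed_rpm": (0.0, 36000.0),
            "chamber_pressure_psi": (1800.0, 3200.0),
            "turbine_inlet_temp_K": (0.0, 1050.0),
        }

        def assess(measurements):
            """Return a list of rule violations for one snapshot of sensor readings."""
            violations = []
            for name, (low, high) in RULES.items():
                value = measurements.get(name)
                if value is None:
                    violations.append(f"{name}: no reading")
                elif not (low <= value <= high):
                    violations.append(f"{name}: {value} outside [{low}, {high}]")
            return violations

        snapshot = {"turbopump_speed_rpm": 35200.0,
                    "chamber_pressure_psi": 3350.0,     # exceeds the allowed band
                    "turbine_inlet_temp_K": 990.0}
        print(assess(snapshot) or "all rules satisfied")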

  16. Measurement equivalence and differential item functioning in family psychology.

    PubMed

    Bingenheimer, Jeffrey B; Raudenbush, Stephen W; Leventhal, Tama; Brooks-Gunn, Jeanne

    2005-09-01

    Several hypotheses in family psychology involve comparisons of sociocultural groups. Yet the potential for cross-cultural inequivalence in widely used psychological measurement instruments threatens the validity of inferences about group differences. Methods for dealing with these issues have been developed via the framework of item response theory. These methods deal with an important type of measurement inequivalence, called differential item functioning (DIF). The authors introduce DIF analytic methods, linking them to a well-established framework for conceptualizing cross-cultural measurement equivalence in psychology (C.H. Hui and H.C. Triandis, 1985). They illustrate the use of DIF methods using data from the Project on Human Development in Chicago Neighborhoods (PHDCN). Focusing on the Caregiver Warmth and Environmental Organization scales from the PHDCN's adaptation of the Home Observation for Measurement of the Environment Inventory, the authors obtain results that exemplify the range of outcomes that may result when these methods are applied to psychological measurement instruments. (c) 2005 APA, all rights reserved

  17. Inversion of solar extinction data from the Apollo-Soyuz Test Project Stratospheric Aerosol Measurement (ASTP/SAM) experiment

    NASA Technical Reports Server (NTRS)

    Pepin, T. J.

    1977-01-01

    The inversion methods that have been used to determine the vertical profile of the extinction coefficient due to stratospheric aerosols from data measured during the ASTP/SAM solar occultation experiment are reported. The inversion methods include the onion-skin-peel technique and methods of solving the Fredholm equation for the problem subject to smoothing constraints; the latter approach involves a double inversion scheme. Comparisons are made between the inverted results from the SAM experiment and near-simultaneous measurements made by lidar and a balloon-borne dustsonde. The results are used to demonstrate the assumptions required to perform the inversions for aerosols.
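
    The onion-skin-peel idea can be illustrated with standard occultation geometry: slant optical depths measured at successive tangent heights form a triangular system in the per-layer extinction coefficients. The sketch below assumes spherical shells bounded by the tangent radii and uses hypothetical numbers; it is generic geometry, not the SAM processing code.

        import numpy as np

        def onion_peel(tangent_radii, slant_optical_depth):
            """Recover per-layer extinction coefficients from slant optical depths.
            Layers are spherical shells bounded by the ascending tangent radii (km),
            with the i-th ray grazing the bottom of layer i. The path-length matrix
            is triangular, so the highest ray constrains only the top layer and the
            profile is recovered layer by layer from the top down."""
            r = np.asarray(tangent_radii, dtype=float)       # ascending
            n = r.size
            top = r[-1] + (r[-1] - r[-2])                    # assumed top of atmosphere
            bounds = np.append(r, top)
            L = np.zeros((n, n))
            for i in range(n):                               # ray with tangent radius r[i]
                for j in range(i, n):                        # shells it crosses
                    L[i, j] = 2.0 * (np.sqrt(bounds[j + 1]**2 - r[i]**2)
                                     - np.sqrt(bounds[j]**2 - r[i]**2))
            return np.linalg.solve(L, np.asarray(slant_optical_depth, dtype=float))

        # Hypothetical tangent radii (km) and measured slant optical depths
        radii = np.array([6371.0 + h for h in (15.0, 17.0, 19.0, 21.0, 23.0)])
        tau = np.array([0.40, 0.22, 0.11, 0.05, 0.02])
        print("Extinction per layer (1/km):", np.round(onion_peel(radii, tau), 5))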

  18. [What should we know about cardiac amyloidosis? From clinical signs to treatment].

    PubMed

    Földeák, Dóra; Nemes, Attila; Kalapos, Anita; Domsik, Péter; Kormányos, Árpád; Krenács, László; Bagdi, Enikő; Borbényi, Zita

    2017-11-01

    Systemic amyloidosis is a rare disease in which heart involvement is rather frequent and markedly determines survival. New diagnostic procedures help to establish the diagnosis of the disease and its organ involvement, and to start adequate treatment as soon as possible. Cardiac involvement is most often seen in the monoclonal immunoglobulin free light chain (AL amyloidosis) and transthyretin types. In the case of AL amyloidosis, heart involvement can lead to serious consequences. Biomarker assessments of cardiac function are important for determining disease severity at the outset and for measuring response to treatment. In amyloidosis, the incidence of heart involvement increases with age. The prevalence is not known exactly, but there are probably more cases than are recognised. The authors present the clinical signs and diagnostic methods, emphasizing the importance of the cardiac examination methods. Orv Hetil. 2017; 158(46): 1811-1818.

  19. Measurement of Air Pollutants in the Troposphere

    ERIC Educational Resources Information Center

    Clemitshaw, Kevin C.

    2011-01-01

    This article describes the principles, applications and performances of methods to measure gas-phase air pollutants that either utilise passive or active sampling with subsequent laboratory analysis or involve automated "in situ" sampling and analysis. It focuses on air pollutants that have adverse impacts on human health (nitrogen…

  20. Systems Engineering Management Procedures

    DTIC Science & Technology

    1966-03-10

    Trade Study - Comparison of Methods for Measuring Quantities of Loaded ... method of system operation and the ancillary equipment required, such as instrumentation, depot tooling, installation and checkout ... Maintenance ground equipment. IM - Item manager. MIP - Materiel improvement project. Indenture - A method of showing relationships ...

  1. Apparatus and method for detecting electromagnetic radiation using electron photoemission in a micromechanical sensor

    DOEpatents

    Datskos, Panagiotis G.; Rajic, Slobodan; Datskou, Irene C.; Egert, Charles M.

    2002-01-01

    A micromechanical sensor and method for detecting electromagnetic radiation involve producing photoelectrons from a metal surface in contact with a semiconductor. The photoelectrons are extracted into the semiconductor, which causes photo-induced bending. The resulting bending is measured, and a signal corresponding to the measured bending is generated and processed. A plurality of individual micromechanical sensors can be arranged in a two-dimensional matrix for imaging applications.

  2. Determination of Energy of a Clinical Electron Beam as Part of a Routine Quality Assurance and Audit System

    NASA Astrophysics Data System (ADS)

    Hernández-Bello, Jimmy; D'Souza, Derek; Rossenberg, Ivan

    2002-08-01

    A method to determine the electron beam energy and an electron audit based on the current IPEM electron Code of Practice have been devised. During the commissioning of the new Varian 2100CD linear accelerator at The Middlesex Hospital, two methods were devised for the determination of electron energy. The first is a two-depth method, whereby the ratio of ionisation (expressed as a percentage) measured by an ion chamber at two depths in solid water is compared against the baseline ionisation-depth value for that energy. The second method involves the irradiation of an X-ray film in solid water to obtain a depth-dose curve and hence determine the half-value depth and practical range of the electrons. The results showed that the two-depth method has better accuracy, repeatability, reliability and consistency than the X-ray film method. For the electron audit, absolute electron outputs are obtained from ionisation measurements in solid water, where energy-range parameters such as the practical range and the depth at which ionisation falls to 50% of its maximum on the depth-ionisation curve are determined.
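
    The following is a minimal sketch of how a two-depth ionisation ratio might be turned into an energy estimate. The calibration table, readings and interpolation are hypothetical placeholders for illustration, not values from the audit described above.

    ```python
    import numpy as np

    # Hypothetical commissioning baselines: ionisation ratio (as a percentage)
    # recorded for each nominal electron energy; values are illustrative only.
    baseline_energy_mev = np.array([6.0, 9.0, 12.0, 16.0, 20.0])
    baseline_ratio_pct = np.array([38.0, 55.0, 67.0, 78.0, 85.0])

    def energy_from_two_depth(m_d1, m_d2):
        """Estimate beam energy from ion-chamber readings at two depths in solid water."""
        ratio_pct = 100.0 * m_d1 / m_d2
        # Interpolate the measured ratio against the (assumed monotonic) baseline curve.
        return np.interp(ratio_pct, baseline_ratio_pct, baseline_energy_mev)

    print(energy_from_two_depth(1.10, 1.72))  # -> energy estimate in MeV
    ```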

  3. Middle East Regional Irrigation Management Information Systems project-Some science products

    USDA-ARS?s Scientific Manuscript database

    Similarities in the aridity of environments and water scarcity for irrigation allow common approaches to irrigation management problems and research methods in the Southern Great Plains of the United States and the Middle East. Measurement methods involving weighing lysimeters and eddy covariance sy...

  4. Upscaling with data assimilation in soil hydrology

    USDA-ARS?s Scientific Manuscript database

    Most of measurements in soil hydrology are point-based, and methods are needed to use the point-based data for estimating soil water contents at larger societally-important scales, such as field, hillslope or watershed. One group of appropriate methods involves data assimilation which is a methodolo...

  5. QUANTITATIVE MEASUREMENT OF HELICOBACTER PYLORI BY THE TAQMAN FLUOROGENIC PROBE SYSTEM

    EPA Science Inventory

    Culturing of H. pylori from environmental sources continues to be an obstacle in detecting and enumerating this organism. Successful methods of isolation and growth from water samples have not yet been developed. In this study a method involving real-time PCR product detection wit...

  6. CATCHING THE WIND: A LOW COST METHOD FOR WIND POWER SITE ASSESSMENT

    EPA Science Inventory

    Our Phase I successes involve the installation of a wind monitoring station in Humboldt County, the evaluation of four different measure-correlate-predict methods for wind site assessment, and the creation of SWEET, an open source software package implementing the prediction ...

  7. Microbial detection method based on sensing molecular hydrogen

    NASA Technical Reports Server (NTRS)

    Wilkins, J. R.; Stoner, G. E.; Boykin, E. H.

    1974-01-01

    An approach involving the measurement of hydrogen evolution by test organisms was used to detect and enumerate various members of the Enterobacteriaceae group. The experimental setup for measuring hydrogen evolution consisted of a test tube containing two electrodes plus broth and organisms. The test tube was kept in a water bath at a temperature of 35 C. It is pointed out that the hydrogen-sensing method, coupled with the pressure transducer technique reported by Wilkins (1974) could be used in various experiments in which gas production by microorganisms is being measured.

  8. Characteristics of high-tension magnetos

    NASA Technical Reports Server (NTRS)

    Silsbee, F B

    1920-01-01

    This report gives the results of an investigation made into the fundamental physical characteristics of high-tension ignition magnetos, and also describes the methods used for measuring the quantities involved.

  9. Measurement of H2S in vivo and in vitro by the monobromobimane method

    PubMed Central

    Shen, Xinggui; Kolluru, Gopi K.; Yuan, Shuai; Kevil, Christopher

    2015-01-01

    The gasotransmitter hydrogen sulfide (H2S) is known as an important regulator in several physiological and pathological responses. Among the challenges facing the field is the accurate and reliable measurement of hydrogen sulfide bioavailability. We have reported an approach to discretely measure sulfide and sulfide pools using the monobromobimane (MBB) method coupled with RP-HPLC. The method involves the derivatization of sulfide with excess MBB under precise reaction conditions at room temperature to form sulfide-dibimane (SDB). The resultant fluorescent SDB is analyzed by RP-HPLC using fluorescence detection, with a limit of detection for SDB of 2 nM. Care must be taken to avoid conditions that may confound H2S measurement with this method. Overall, RP-HPLC with fluorescence detection of SDB is a useful and powerful tool to measure biological sulfide levels. PMID:25725514

  10. Measurement of H2S in vivo and in vitro by the monobromobimane method.

    PubMed

    Shen, Xinggui; Kolluru, Gopi K; Yuan, Shuai; Kevil, Christopher G

    2015-01-01

    The gasotransmitter hydrogen sulfide (H2S) is known as an important regulator in several physiological and pathological responses. Among the challenges facing the field is the accurate and reliable measurement of hydrogen sulfide bioavailability. We have reported an approach to discretely measure sulfide and sulfide pools using the monobromobimane (MBB) method coupled with reversed phase high-performance liquid chromatography (RP-HPLC). The method involves the derivatization of sulfide with excess MBB under precise reaction conditions at room temperature to form sulfide dibimane (SDB). The resultant fluorescent SDB is analyzed by RP-HPLC using fluorescence detection with the limit of detection for SDB (2 nM). Care must be taken to avoid conditions that may confound H2S measurement with this method. Overall, RP-HPLC with fluorescence detection of SDB is a useful and powerful tool to measure biological sulfide levels. © 2015 Elsevier Inc. All rights reserved.

  11. Hot Tearing in Aluminium — Copper Alloys

    NASA Astrophysics Data System (ADS)

    Viano, David; StJohn, David; Grandfield, John; Cáceres, Carlos

    For many aluminium alloys, hot tearing susceptibility follows a lambda-curve relationship when hot tearing severity is plotted as a function of solute content. In the past, there has been some difficulty quantifying hot tearing. Traditional methods rely upon measuring electrical resistivity or the number and/or length of cracks in tests such as the ring test. In this experimental program, a hot tear test rig was used to investigate a series of binary Al-Cu alloys. This device measures the load imposed on the mushy zone during solidification. Hot tearing susceptibility was quantified in two ways. The first method involved measuring the load at the solidus temperature (548°C). The second method was to radiograph the hot spot and measure the image density of the cracks. Both methods had advantages and disadvantages. It was found that the results from the hot tear rig correlate with other published data using different experimental methods.

  12. Vehicle crashworthiness ratings in Australia.

    PubMed

    Cameron, M; Mach, T; Neiger, D; Graham, A; Ramsay, R; Pappas, M; Haley, J

    1994-08-01

    The paper reviews the published vehicle safety ratings based on mass crash data from the United States, Sweden, and Great Britain. It then describes the development of vehicle crashworthiness ratings based on injury compensation claims and police accident reports from Victoria and New South Wales, the two most populous states in Australia. Crashworthiness was measured by a combination of injury severity (of injured drivers) and injury risk (of drivers involved in crashes). Injury severity was based on 22,600 drivers injured in crashes in the two states. Injury risk was based on 70,900 drivers in New South Wales involved in crashes after which a vehicle was towed away. Injury risk measured in this way was compared with the "relative injury risk" of particular model cars involved in two car crashes in Victoria (where essentially only casualty crashes are reported), which was based on the method developed by Folksam Insurance in Sweden from Evans' double-pair comparison method. The results include crashworthiness ratings for the makes and models crashing in Australia in sufficient numbers to measure their crash performance adequately. The ratings were normalised for the driver sex and speed limit at the crash location, the two factors found to be strongly related to injury risk and/or severity and to vary substantially across makes and models of Australian crash-involved cars. This allows differences in crashworthiness of individual models to be seen, uncontaminated by major crash exposure differences.

  13. ULTRAVIOLET DISINFECTION OF A SECONDARY EFFLUENT: MEASUREMENT OF DOSE AND EFFECTS OF FILTRATION

    EPA Science Inventory

    Ultraviolet (UV) disinfection of wastewater secondary effluent was investigated in a two-phase study to develop methods for measuring UV dose and to determine the effects of filtration on UV disinfection. The first phase of this study involved a pilot plant study comparing filtra...

  14. Studying Photosynthesis by Measuring Fluorescence

    ERIC Educational Resources Information Center

    Sanchez, Jose Francisco; Quiles, Maria Jose

    2006-01-01

    This paper describes an easy experiment to study the absorption and action spectrum of photosynthesis, as well as the inhibition by heat, high light intensity and the presence of the herbicide 3-(3,4-dichlorophenyl)-1,1-dimethylurea (DCMU) on the photosynthetic process. The method involves measuring the chlorophyll fluorescence emitted by intact…

  15. Studying learning in the healthcare setting: the potential of quantitative diary methods.

    PubMed

    Ciere, Yvette; Jaarsma, Debbie; Visser, Annemieke; Sanderman, Robbert; Snippe, Evelien; Fleer, Joke

    2015-08-01

    Quantitative diary methods are longitudinal approaches that involve the repeated measurement of aspects of people's experience of daily life. In this article, we outline the main characteristics and applications of quantitative diary methods and discuss how their use may further research in the field of medical education. Quantitative diary methods offer several methodological advantages, such as measuring aspects of learning with great detail, accuracy and authenticity. Moreover, they enable researchers to study how and under which conditions learning in the health care setting occurs and in which way learning can be promoted. Hence, quantitative diary methods may contribute to theory development and the optimization of teaching methods in medical education.

  16. Food parenting measurement issues: working group consensus report.

    PubMed

    Hughes, Sheryl O; Frankel, Leslie A; Beltran, Alicia; Hodges, Eric; Hoerr, Sharon; Lumeng, Julie; Tovar, Alison; Kremers, Stef

    2013-08-01

    Childhood obesity is a growing problem. As more researchers become involved in the study of parenting influences on childhood obesity, there appears to be a lack of agreement regarding the most important parenting constructs of interest, definitions of those constructs, and measurement of those constructs in a consistent manner across studies. This article aims to summarize findings from a working group that convened specifically to discuss measurement issues related to parental influences on childhood obesity. Six subgroups were formed to address key measurement issues. The conceptualization subgroup proposed to define and distinguish constructs of general parenting styles, feeding styles, and food parenting practices with the goal of understanding interrelating levels of parental influence on child eating behaviors. The observational subgroup identified the need to map constructs for use in coding direct observations and create observational measures that can capture the bidirectional effects of parent-child interactions. The self-regulation subgroup proposed an operational definition of child self-regulation of energy intake and suggested future measures of self-regulation across different stages of development. The translational/community involvement subgroup proposed the involvement of community in the development of surveys so that measures adequately reflect cultural understanding and practices of the community. The qualitative methods subgroup proposed qualitative methods as a way to better understand the breadth of food parenting practices and motivations for the use of such practices. The longitudinal subgroup stressed the importance of food parenting measures sensitive to change for use in longitudinal studies. In the creation of new measures, it is important to consider cultural sensitivity and context-specific food parenting domains. Moderating variables such as child temperament and child food preferences should be considered in models.

  17. Food Parenting Measurement Issues: Working Group Consensus Report

    PubMed Central

    Frankel, Leslie A.; Beltran, Alicia; Hodges, Eric; Hoerr, Sharon; Lumeng, Julie; Tovar, Alison; Kremers, Stef

    2013-01-01

    Abstract Childhood obesity is a growing problem. As more researchers become involved in the study of parenting influences on childhood obesity, there appears to be a lack of agreement regarding the most important parenting constructs of interest, definitions of those constructs, and measurement of those constructs in a consistent manner across studies. This article aims to summarize findings from a working group that convened specifically to discuss measurement issues related to parental influences on childhood obesity. Six subgroups were formed to address key measurement issues. The conceptualization subgroup proposed to define and distinguish constructs of general parenting styles, feeding styles, and food parenting practices with the goal of understanding interrelating levels of parental influence on child eating behaviors. The observational subgroup identified the need to map constructs for use in coding direct observations and create observational measures that can capture the bidirectional effects of parent–child interactions. The self-regulation subgroup proposed an operational definition of child self-regulation of energy intake and suggested future measures of self-regulation across different stages of development. The translational/community involvement subgroup proposed the involvement of community in the development of surveys so that measures adequately reflect cultural understanding and practices of the community. The qualitative methods subgroup proposed qualitative methods as a way to better understand the breadth of food parenting practices and motivations for the use of such practices. The longitudinal subgroup stressed the importance of food parenting measures sensitive to change for use in longitudinal studies. In the creation of new measures, it is important to consider cultural sensitivity and context-specific food parenting domains. Moderating variables such as child temperament and child food preferences should be considered in models. PMID:23944928

  18. Determination of Tissue Thermal Conductivity by Measuring and Modeling Temperature Rise Induced in Tissue by Pulsed Focused Ultrasound

    PubMed Central

    Kujawska, Tamara; Secomski, Wojciech; Kruglenko, Eleonora; Krawczyk, Kazimierz; Nowicki, Andrzej

    2014-01-01

    Tissue thermal conductivity (Ks) is an important parameter, knowledge of which is essential whenever thermal fields induced in selected organs are to be predicted. The main objective of this study was to develop an alternative ultrasonic method for determining the Ks of tissues in vitro that is also suitable for living tissues. First, the method involves measuring the temperature-time rises T(t) induced in a tested tissue sample by pulsed focused ultrasound with measured acoustic properties, using thermocouples located on the acoustic beam axis. Measurements were performed for 20-cycle tone bursts with a 2 MHz frequency, a 0.2 duty cycle and 3 different initial pressures corresponding to average acoustic powers of 0.7 W, 1.4 W and 2.1 W, generated from a circular focused transducer with a diameter of 15 mm and an f-number of 1.7 in a two-layer system of media: water/beef liver. The measurement results allowed the position of maximum heating inside the beef liver to be determined. It was found that this position is at the same axial distance from the source as the maximum peak-to-peak pressure calculated for each nonlinear beam produced in the two-layer system of media. The method then involves modeling T(t) at the point of maximum heating and fitting it to the experimental data by adjusting Ks. The averaged value of Ks determined by the proposed method was found to be 0.5±0.02 W/(m·°C), in good agreement with values determined by other methods. The proposed method is suitable for determining the Ks of some animal tissues in vivo (for example, a rat liver). PMID:24743838
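
    As a rough, hedged illustration of the fitting step only (the study's actual acoustic heating model is not reproduced here), the sketch below adjusts Ks by least squares using a simplified continuous point-source conduction solution. The source power, source-to-thermocouple distance, density and specific heat are assumed placeholder values, and the "measured" data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.special import erfc

    # Assumed placeholder constants for an illustrative fit (not values from the study).
    q = 0.02       # absorbed power treated as a point heat source, W
    r = 1.0e-3     # distance from the source to the thermocouple, m
    rho = 1050.0   # tissue density, kg/m^3
    cp = 3600.0    # specific heat, J/(kg K)

    def temperature_rise(t, ks):
        """Continuous point source in an infinite medium: dT = q/(4*pi*ks*r) * erfc(r/(2*sqrt(alpha*t)))."""
        alpha = ks / (rho * cp)  # thermal diffusivity, m^2/s
        return q / (4.0 * np.pi * ks * r) * erfc(r / (2.0 * np.sqrt(alpha * t)))

    # Synthetic stand-in for thermocouple readings at the point of maximum heating.
    rng = np.random.default_rng(0)
    t_data = np.linspace(1.0, 60.0, 60)                   # s
    dT_data = temperature_rise(t_data, 0.5) + rng.normal(0.0, 0.05, t_data.size)

    ks_fit, _ = curve_fit(temperature_rise, t_data, dT_data, p0=[0.4])
    print("fitted Ks:", ks_fit[0], "W/(m K)")
    ```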

  19. LC Circuits for Diagnosing Embedded Piezoelectric Devices

    NASA Technical Reports Server (NTRS)

    Chattin, Richard L.; Fox, Robert Lee; Moses, Robert W.; Shams, Qamar A.

    2005-01-01

    A recently invented method of nonintrusively detecting faults in piezoelectric devices involves measurement of the resonance frequencies of inductor capacitor (LC) resonant circuits. The method is intended especially to enable diagnosis of piezoelectric sensors, actuators, and sensor/actuators that are embedded in structures and/or are components of multilayer composite material structures.

  20. Creating IRT-Based Parallel Test Forms Using the Genetic Algorithm Method

    ERIC Educational Resources Information Center

    Sun, Koun-Tem; Chen, Yu-Jen; Tsai, Shu-Yen; Cheng, Chien-Fen

    2008-01-01

    In educational measurement, the construction of parallel test forms is often a combinatorial optimization problem that involves the time-consuming selection of items to construct tests having approximately the same test information functions (TIFs) and constraints. This article proposes a novel method, genetic algorithm (GA), to construct parallel…

  1. An EEG-based functional connectivity measure for automatic detection of alcohol use disorder.

    PubMed

    Mumtaz, Wajid; Saad, Mohamad Naufal B Mohamad; Kamel, Nidal; Ali, Syed Saad Azhar; Malik, Aamir Saeed

    2018-01-01

    Abnormal alcohol consumption can cause toxicity and alter the structure and function of the human brain, a condition termed alcohol use disorder (AUD). Unfortunately, the conventional screening methods for AUD patients are subjective and manual. Hence, objective methods are needed to perform automatic screening of AUD patients. Electroencephalographic (EEG) data have been utilized to study the differences in brain signals between alcoholics and healthy controls, and these could be further developed into an automatic screening tool for alcoholics. In this work, resting-state EEG-derived features were utilized as input data to the proposed feature selection and classification method. The aim was to perform automatic classification of AUD patients and healthy controls. Validation of the proposed method involved real EEG data acquired from 30 AUD patients and 30 age-matched healthy controls. Resting-state EEG-derived features such as synchronization likelihood (SL) were computed over 19 scalp locations, resulting in 513 features. The features were then rank-ordered with a rank-based feature selection method to identify the most discriminant features according to a criterion, the receiver operating characteristic (ROC). Consequently, a reduced set of the most discriminant features was identified and utilized further during classification of AUD patients and healthy controls. Three different classification models were used: Support Vector Machine (SVM), Naïve Bayesian (NB), and Logistic Regression (LR). The study achieved SVM classification accuracy = 98%, sensitivity = 99.9%, specificity = 95%, and f-measure = 0.97; LR classification accuracy = 91.7%, sensitivity = 86.66%, specificity = 96.6%, and f-measure = 0.90; NB classification accuracy = 93.6%, sensitivity = 100%, specificity = 87.9%, and f-measure = 0.95. The SL features could be utilized as objective markers to screen AUD patients and healthy controls. Copyright © 2017 Elsevier B.V. All rights reserved.
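
    A minimal sketch of the rank-and-classify pipeline described above, assuming a feature matrix X (subjects × SL features) and binary labels y already exist; the scikit-learn calls, the choice of top-k features, and the cross-validation scheme are illustrative placeholders, not the authors' exact settings.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(60, 513))   # placeholder: 60 subjects x 513 SL features
    y = np.repeat([0, 1], 30)        # placeholder labels: 30 controls, 30 AUD patients

    # Rank every feature by its individual ROC area under the curve.
    auc = np.array([roc_auc_score(y, X[:, j]) for j in range(X.shape[1])])
    top = np.argsort(np.abs(auc - 0.5))[::-1][:20]   # keep the 20 most discriminant features

    # Note: in practice the ranking step should be nested inside the cross-validation
    # to avoid information leakage; it is done once here only to keep the sketch short.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X[:, top], y, cv=5)
    print("cross-validated accuracy:", scores.mean())
    ```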

  2. Short hold times in dynamic vapor sorption measurements mischaracterize the equilibrium moisture content of wood

    Treesearch

    Samuel V. Glass; Charles R. Boardman; Samuel L. Zelinka

    2017-01-01

    Recently, the dynamic vapor sorption (DVS) technique has been used to measure sorption isotherms and develop moisture-mechanics models for wood and cellulosic materials. This method typically involves measuring the time-dependent mass response of a sample following step changes in relative humidity (RH), fitting a kinetic model to the data, and extrapolating the...
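
    As a hedged illustration of the "fit a kinetic model and extrapolate" step (the specific kinetic model used in the study is not given in the snippet above), a single-exponential approach to equilibrium could be fitted as follows; the time series, m_eq, m0 and tau are placeholder quantities.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sorption_kinetics(t, m_eq, m0, tau):
        """Single-exponential mass response after a step change in relative humidity."""
        return m_eq - (m_eq - m0) * np.exp(-t / tau)

    # t in minutes, m in mg: placeholder readings from a truncated DVS hold period.
    t = np.array([0, 5, 10, 20, 40, 60, 90, 120], dtype=float)
    m = np.array([10.00, 10.21, 10.35, 10.52, 10.68, 10.75, 10.80, 10.82])

    (m_eq, m0, tau), _ = curve_fit(sorption_kinetics, t, m, p0=[m[-1], m[0], 30.0])
    print("extrapolated equilibrium mass:", m_eq)   # avoids the bias of a too-short hold time
    ```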

  3. Atomic and molecular gas phase spectrometry

    NASA Astrophysics Data System (ADS)

    Winefordner, J. D.

    1985-10-01

    The major goals of this research have been to develop diagnostical spectroscopic methods for measuring spatial/temporal temperatures and species of combustion flames and plasmas and to develop sensitive, selective, precise, reliable, rapid spectrometric methods of trace analysis of elements present in jet engine lubricating oils, metallurgical samples, and engine exhausts. The diagnostical approaches have been based upon the measurement of metal probes introduced into the flame or plasmas and the measurement of OH in flames. The measurement approaches have involved the use of laser-excited fluorescence, saturated absorption, polarization, and linear absorption. The spatial resolution in most studies is less than 1 cu mm and the temporal resolution is less than 10 ns with the use of pulsed lasers. Single pulse temperature and species measurements have also been carried out. Other diagnostical studies have involved the measurement of collisional redistribution of radiatively excited levels of Na and Tl in acetylene/02/Ar flames and the measurement of lifetimes and quantum efficiencies of atoms and ions in the inductively coupled plasmas, ICP. The latter studies indicate that the high electron number densities in ICPs are not efficient quenchers of excited atoms/ions. Temperatures of microwave atmospheric plasmas produced capacitatively and cool metastable N2 discharge produced by a dielectric discharge have also been measured.

  4. Thermal well-test method

    DOEpatents

    Tsang, C.F.; Doughty, C.A.

    1984-02-24

    A well-test method involving injection of hot (or cold) water into a groundwater aquifer, or injecting cold water into a geothermal reservoir is disclosed. By making temperature measurements at various depths in one or more observation wells, certain properties of the aquifer are determined. These properties, not obtainable from conventional well test procedures, include the permeability anisotropy, and layering in the aquifer, and in-situ thermal properties. The temperature measurements at various depths are obtained from thermistors mounted in the observation wells.

  5. Thermal well-test method

    DOEpatents

    Tsang, Chin-Fu; Doughty, Christine A.

    1985-01-01

    A well-test method involving injection of hot (or cold) water into a groundwater aquifer, or injecting cold water into a geothermal reservoir. By making temperature measurements at various depths in one or more observation wells, certain properties of the aquifer are determined. These properties, not obtainable from conventional well test procedures, include the permeability anisotropy, and layering in the aquifer, and in-situ thermal properties. The temperature measurements at various depths are obtained from thermistors mounted in the observation wells.

  6. Adaptive Filtering Using Recurrent Neural Networks

    NASA Technical Reports Server (NTRS)

    Parlos, Alexander G.; Menon, Sunil K.; Atiya, Amir F.

    2005-01-01

    A method for adaptive (or, optionally, nonadaptive) filtering has been developed for estimating the states of complex process systems (e.g., chemical plants, factories, or manufacturing processes at some level of abstraction) from time series of measurements of system inputs and outputs. The method is based partly on the fundamental principles of the Kalman filter and partly on the use of recurrent neural networks. The standard Kalman filter involves an assumption of linearity of the mathematical model used to describe a process system. The extended Kalman filter accommodates a nonlinear process model but still requires linearization about the state estimate. Both the standard and extended Kalman filters involve the often unrealistic assumption that process and measurement noise are zero-mean, Gaussian, and white. In contrast, the present method does not involve any assumptions of linearity of process models or of the nature of process noise; on the contrary, few (if any) assumptions are made about process models, noise models, or the parameters of such models. In this regard, the method can be characterized as one of nonlinear, nonparametric filtering. The method exploits the unique ability of neural networks to approximate nonlinear functions. In a given case, the process model is limited mainly by limitations of the approximation ability of the neural networks chosen for that case. Moreover, despite the lack of assumptions regarding process noise, the method yields minimum-variance filters. In that they do not require statistical models of noise, the neural-network-based state filters of this method are comparable to conventional nonlinear least-squares estimators.
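
    For contrast with the neural-network approach described above, here is a minimal sketch of the standard (linear) Kalman predict/update step that the text says relies on a linear model and zero-mean, white, Gaussian noise; the matrices F, H, Q and R are assumed to be known.

    ```python
    import numpy as np

    def kalman_step(x, P, z, F, H, Q, R):
        """One predict/update cycle of the standard (linear) Kalman filter."""
        # Predict: propagate the state estimate and its covariance through the linear model.
        x_pred = F @ x
        P_pred = F @ P @ F.T + Q
        # Update: blend the prediction with the new measurement z.
        S = H @ P_pred @ H.T + R               # innovation covariance
        K = P_pred @ H.T @ np.linalg.inv(S)    # Kalman gain
        x_new = x_pred + K @ (z - H @ x_pred)
        P_new = (np.eye(len(x)) - K @ H) @ P_pred
        return x_new, P_new
    ```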

  7. Fabrication of oriented crystals as force measurement tips via focused ion beam and microlithography methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Zhigang; Chun, Jaehun; Chatterjee, Sayandev

    Detailed knowledge of the forces between nanocrystals is crucial for understanding many generic (e.g., random aggregation/assembly and rheology) and specific (e.g., oriented attachment) phenomena at macroscopic length scales, especially considering the additional complexities involved in nanocrystals, such as crystal orientation and corresponding orientation-dependent physicochemical properties. Because there are a limited number of methods to directly measure these forces, little is known about the forces that drive the various emergent phenomena. Here we report on two methods of preparing crystals as force measurement tips used in an atomic force microscope (AFM): the focused ion beam method and the microlithography method. The desired crystals are fabricated using these two methods and are fixed to the AFM probe using platinum deposition, ultraviolet epoxy, or resin, which allows for orientation-dependent force measurements. These two methods can be used to attach virtually any solid particles (from the size of a few hundred nanometers to millimeters). We demonstrate force measurements in aqueous media under different conditions such as pH.

  8. Optimal PMU placement using topology transformation method in power systems.

    PubMed

    Rahman, Nadia H A; Zobaa, Ahmed F

    2016-09-01

    Optimal phasor measurement unit (PMU) placement involves minimizing the number of PMUs needed while ensuring that the entire power system remains completely observable. A power system is observable when the voltages of all buses in the system are known. This paper proposes selection rules for a topology transformation method that involves merging a zero-injection bus with one of its neighbors. The outcome of the merging process depends on which bus is selected to merge with the zero-injection bus. The proposed method determines the best candidate bus to merge with the zero-injection bus according to three rules created to find the minimum number of PMUs required for full observability of the power system. In addition, this paper also considers the case of power flow measurements. The problem is formulated as integer linear programming (ILP). The proposed method is simulated in MATLAB for different IEEE bus systems and its operation is demonstrated using the IEEE 14-bus system. The results obtained in this paper prove the effectiveness of the proposed method, since the number of PMUs obtained is comparable with other available techniques.
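
    A minimal sketch of the basic observability integer linear program that underlies PMU placement, before any zero-injection merging rules are applied; the 7-bus topology is hypothetical, and SciPy's milp solver merely stands in for whichever ILP solver the authors used.

    ```python
    import numpy as np
    from scipy.optimize import milp, LinearConstraint, Bounds

    # Bus adjacency for a small illustrative 7-bus system (hypothetical topology).
    edges = [(0, 1), (1, 2), (1, 3), (3, 4), (4, 5), (5, 6)]
    n = 7
    A = np.eye(n)
    for i, j in edges:
        A[i, j] = A[j, i] = 1   # a PMU at a bus observes that bus and its neighbours

    # Minimise the number of PMUs subject to every bus being observed at least once.
    res = milp(c=np.ones(n),
               constraints=LinearConstraint(A, lb=np.ones(n), ub=np.full(n, np.inf)),
               integrality=np.ones(n),
               bounds=Bounds(0, 1))
    print("PMU buses:", np.flatnonzero(res.x > 0.5))
    ```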

  9. Analysis of methods to estimate spring flows in a karst aquifer

    USGS Publications Warehouse

    Sepulveda, N.

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method for predicting spring flows in a karst aquifer. © 2008 National Ground Water Association.
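
    For reference, the standard Darcy-Weisbach relation for head loss in a conduit and the associated Reynolds number are (the friction factors and conduit geometries actually used in the study are not given here):

    $$h_f = f\,\frac{L}{D}\,\frac{v^2}{2g}, \qquad Re = \frac{vD}{\nu},$$

    so a measured head difference $h_f$ between an observation well and the spring pool yields a conduit velocity $v = \sqrt{2gDh_f/(fL)}$ and a flow estimate $Q = vA$ for conduit cross-sectional area $A$.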

  10. Analysis of methods to estimate spring flows in a karst aquifer.

    PubMed

    Sepúlveda, Nicasio

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer.

  11. An intercomparison of carbon monoxide, nitric oxide, and hydroxyl measurement techniques - Overview of results

    NASA Technical Reports Server (NTRS)

    Hoell, J. M.; Gregory, G. L.; Carroll, M. A.; Mcfarland, M.; Ridley, B. A.; Davis, D. D.; Bradshaw, J.; Rodgers, M. O.; Torres, A. L.; Condon, E. P.

    1984-01-01

    Results from an intercomparison of methods to measure carbon monoxide (CO), nitric oxide (NO), and the hydroxyl radical (OH) are discussed. The intercomparison was conducted at Wallops Island, Virginia, in July 1983 and included a laser differential absorption and three grab sample/gas chromatograph methods for CO, a laser-induced fluorescence (LIF) and two chemiluminescence methods for NO, and two LIF methods and a radiocarbon tracer method for OH. The intercomparison was conducted as a field measurement program involving ambient measurements of CO (150-300 ppbv) and NO (10-180 pptv) from a common manifold with controlled injection of CO in incremental steps from 20 to 500 ppbv and NO in steps from 10 to 220 pptv. Only ambient measurements of OH were made. The agreement between the techniques was on the order of 14 percent for CO and 17 percent for NO. Hardware difficulties during the OH tests resulted in a data base with insufficient data and uncertainties too large to permit a meaningful intercomparison.

  12. Service User- and Carer-Reported Measures of Involvement in Mental Health Care Planning: Methodological Quality and Acceptability to Users

    PubMed Central

    Gibbons, Chris J.; Bee, Penny E.; Walker, Lauren; Price, Owen; Lovell, Karina

    2014-01-01

    Background: Increasing service user and carer involvement in mental health care planning is a key healthcare priority but one that is difficult to achieve in practice. To better understand and measure user and carer involvement, it is crucial to have measurement questionnaires that are both psychometrically robust and acceptable to the end user. Methods: We conducted a systematic review using the terms “care plan$,” “mental health,” “user perspective$,” and “user participation” and their linguistic variants as search terms. Databases were searched from inception to November 2012, with an update search at the end of September 2014. We included any articles that described the development, validation or use of user- and/or carer-reported outcome measures of involvement in mental health care planning. We assessed the psychometric quality of each instrument using the “Evaluating the Measurement of Patient-Reported Outcomes” (EMPRO) criteria. Acceptability of each instrument was assessed using novel criteria developed in consultation with a mental health service user and carer consultation group. Results: We identified eleven papers describing the use, development, and/or validation of nine user/carer-reported outcome measures. Psychometric properties were sparsely reported and the questionnaires met few service user/carer-nominated attributes for acceptability. Where reported, basic psychometric statistics were of good quality, indicating that some measures may perform well if subjected to more rigorous psychometric tests. The majority were deemed to be too long for use in practice. Discussion: Multiple instruments are available to measure user/carer involvement in mental health care planning but are either of poor quality or poorly described. Existing measures cannot be considered psychometrically robust by modern standards, and cannot currently be recommended for use. Our review has identified an important knowledge gap, and an urgent need to develop new user and carer measures of care-planning involvement. PMID:25566099

  13. Petz recovery versus matrix reconstruction

    NASA Astrophysics Data System (ADS)

    Holzäpfel, Milan; Cramer, Marcus; Datta, Nilanjana; Plenio, Martin B.

    2018-04-01

    The reconstruction of the state of a multipartite quantum mechanical system represents a fundamental task in quantum information science. At its most basic, it concerns a state of a bipartite quantum system whose subsystems are subjected to local operations. We compare two different methods for obtaining the original state from the state resulting from the action of these operations. The first method involves quantum operations called Petz recovery maps, acting locally on the two subsystems. The second method is called matrix (or state) reconstruction and involves local, linear maps that are not necessarily completely positive. Moreover, we compare the quantities on which the maps employed in the two methods depend. We show that any state that admits Petz recovery also admits state reconstruction. However, the latter is successful for a strictly larger set of states. We also compare these methods in the context of a finite spin chain. Here, the state of a finite spin chain is reconstructed from the reduced states of a few neighbouring spins. In this setting, state reconstruction is the same as the matrix product operator reconstruction proposed by Baumgratz et al. [Phys. Rev. Lett. 111, 020401 (2013)]. Finally, we generalize both these methods so that they employ long-range measurements instead of relying solely on short-range correlations embodied in such local reduced states. Long-range measurements enable the reconstruction of states which cannot be reconstructed from measurements of local few-body observables alone and hereby we improve existing methods for quantum state tomography of quantum many-body systems.
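
    For readers unfamiliar with the first method, the Petz recovery map associated with a quantum channel $\mathcal{N}$ and a reference state $\sigma$ is conventionally written as

    $$\mathcal{P}_{\sigma,\mathcal{N}}(\rho) = \sigma^{1/2}\,\mathcal{N}^{\dagger}\!\left(\mathcal{N}(\sigma)^{-1/2}\,\rho\,\mathcal{N}(\sigma)^{-1/2}\right)\sigma^{1/2},$$

    where $\mathcal{N}^{\dagger}$ denotes the adjoint of $\mathcal{N}$; in the comparison above, maps of this form act locally on each subsystem.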

  14. Quantitative Balance and Gait Measurement in Patients with Frontotemporal Dementia and Alzheimer Diseases: A Pilot Study

    PubMed Central

    Velayutham, Selva Ganapathy; Chandra, Sadanandavalli Retnaswami; Bharath, Srikala; Shankar, Ravi Girikamatha

    2017-01-01

    Introduction: Alzheimer's disease (AD) and frontotemporal dementia (FTD) are common neurodegenerative dementias with a wide prevalence. Falls are a common cause of morbidity in these patients. Identifying subclinical involvement of balance and gait parameters might serve as a tool in the differential analysis of these conditions and also help in planning preventive strategies against falls. Patients and Methods: Eight age- and gender-matched patients in each group were compared with normal controls. Standardized methods of gait and balance assessment were applied to all participants. Results: The results revealed subclinical involvement of gait and balance in all groups, especially during divided attention, with the parameters significantly more affected in patients. Patients with AD and FTD showed involvement of the overall ambulation index; balance was more affected in AD patients, whereas FTD patients showed step-cycle and stride-length abnormalities. Discussion: There is balance and gait involvement in normal ageing as well as in patients with AD and FTD. The pattern of involvement in AD correlates with involvement of the WHERE pathway, and that in FTD with involvement of frontal subcortical circuits. Conclusion: Identification of the differential patterns of involvement at the subclinical stage might help to differentiate normal ageing and the different types of cortical dementias. This could serve as an additional biomarker and also assist in initiating appropriate training methods to prevent future falls. PMID:28515555

  15. Measurement of absolute gamma emission probabilities

    NASA Astrophysics Data System (ADS)

    Sumithrarachchi, Chandana S.; Rengan, Krish; Griffin, Henry C.

    2003-06-01

    The energies and emission probabilities (intensities) of gamma-rays emitted in radioactive decays of particular nuclides are the most important characteristics by which to quantify mixtures of radionuclides. Often, quantification is limited by uncertainties in measured intensities. A technique was developed to reduce these uncertainties. The method involves obtaining a pure sample of a nuclide using radiochemical techniques, and using appropriate fractions for beta and gamma measurements. The beta emission rates were measured using a liquid scintillation counter, and the gamma emission rates were measured with a high-purity germanium detector. Results were combined to obtain absolute gamma emission probabilities. All sources of uncertainties greater than 0.1% were examined. The method was tested with 38Cl and 88Rb.
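
    Schematically (in notation not used in the original abstract, and ignoring corrections such as dead time and coincidence summing), if the liquid-scintillation measurement fixes the decay rate $A$ of the purified sample and a high-purity germanium detector with full-energy-peak efficiency $\varepsilon_\gamma$ records a photopeak count rate $R_\gamma$, the absolute emission probability follows as

    $$P_\gamma = \frac{R_\gamma}{\varepsilon_\gamma\,A}.$$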

  16. Temperature measurements behind reflected shock waves in air. [radiometric measurement of gas temperature in self-absorbing gas flow]

    NASA Technical Reports Server (NTRS)

    Bader, J. B.; Nerem, R. M.; Dann, J. B.; Culp, M. A.

    1972-01-01

    A radiometric method for the measurement of gas temperature in self-absorbing gases has been applied in the study of shock tube generated flows. This method involves making two absolute intensity measurements at identical wavelengths, but for two different pathlengths in the same gas sample. Experimental results are presented for reflected shock waves in air at conditions corresponding to incident shock velocities from 7 to 10 km/s and an initial driven tube pressure of 1 torr. These results indicate that, with this technique, temperature measurements with an accuracy of + or - 5 percent can be carried out. The results also suggest certain facility related problems.
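
    One plausible way to see how two absolute intensities at the same wavelength but different pathlengths fix the temperature, assuming a homogeneous self-absorbing slab and (for simplicity) a pathlength ratio of two: with absorption coefficient $k$ and Planck radiance $B_\lambda(T)$,

    $$I_1 = B_\lambda(T)\left(1 - e^{-kL}\right), \qquad I_2 = B_\lambda(T)\left(1 - e^{-2kL}\right),$$

    so eliminating $k$ gives $B_\lambda(T) = I_1^2/(2I_1 - I_2)$, after which $T$ follows by inverting the Planck function.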

  17. The Electrostatic Gravimeter: An Alternative Way of Measuring Gravitational Acceleration

    NASA Astrophysics Data System (ADS)

    Kashinski, David

    2005-03-01

    In the past, Earth’s gravitational acceleration g has been measured in many ways, including the use of a pendulum as well as other models involving the use of a mass and a spring. We have designed a new method incorporating a spring with a capacitor and a voltmeter. This capacitor model still uses a hanging mass on a spring, but alters the method of determining the change in position of the spring due to the gravitational acceleration. We relate the change in position to the potential difference across the capacitor needed to cause a discharge through parallel plates. By relating this voltage directly to the gravitational acceleration, a new method of measuring g is obtained.

  18. Comparative Evaluation of Cone-beam Computed Tomography versus Direct Surgical Measurements in the Diagnosis of Mandibular Molar Furcation Involvement

    PubMed Central

    Padmanabhan, Shyam; Dommy, Ahila; Guru, Sanjeela R.; Joseph, Ajesh

    2017-01-01

    Aim: Periodontists frequently experience inconvenience in accurate assessment and treatment of furcation areas affected by periodontal disease. Furcation involvement (FI) most commonly affects the mandibular molars. Diagnosis of furcation-involved teeth is mainly by the assessment of probing pocket depth, clinical attachment level, furcation entrance probing, and intraoral periapical radiographs. Three-dimensional imaging has provided advantage to the clinician in assessment of bone morphology. Thus, the present study aimed to compare the diagnostic efficacy of cone-beam computed tomography (CBCT) as against direct intrasurgical measurements of furcation defects in mandibular molars. Subjects and Methods: Study population included 14 patients with 25 mandibular molar furcation sites. CBCT was performed to measure height, width, and depth of furcation defects of mandibular molars with Grade II and Grade III FI. Intrasurgical measurements of the FI were assessed during periodontal flap surgery in indicated teeth which were compared with CBCT measurements. Statistical analysis was done using paired t-test and Bland–Altman plot. Results: The CBCT versus intrasurgical furcation measurements were 2.18 ± 0.86 mm and 2.30 ± 0.89 mm for furcation height, 1.87 ± 0.52 mm and 1.84 ± 0.49 mm for furcation width, and 3.81 ± 1.37 mm and 4.05 ± 1.49 mm for furcation depth, respectively. Results showed that there was no statistical significance between the measured parameters, indicating that the two methods were statistically similar. Conclusion: Accuracy of assessment of mandibular molar FI by CBCT was comparable to that of direct surgical measurements. These findings indicate that CBCT is an excellent adjunctive diagnostic tool in periodontal treatment planning. PMID:29042732

  19. Method for measuring the three-dimensional distribution of a fluorescent dye in a cell membrane

    NASA Astrophysics Data System (ADS)

    Yamamoto, Kazuya; Ishimaru, Ichirou; Fujii, Yoshiki; Yasokawa, Toshiki; Kuriyama, Shigeki; Masaki, Tsutomu; Takegawa, Kaoru; Tanaka, Naotaka

    2007-01-01

    This letter reports on a method for accurately determining the component distribution in a cell membrane over the entire cell surface. This method involves exciting a fluorescent-dyed cell membrane using evanescent light and scanning the entire cell surface by rotating the cell using a noncontact technique, namely, proximal two-beam optical tweezers. To position the cell membrane in the thin evanescent field, the authors designed an optical system capable of precisely positioning the focal position. Using this method, they were able to measure the surface distribution of glycoprotein labeled by lectin in a breast cancer cell membrane.

  20. Conflict monitoring and stimulus categorization processes involved in the prosocial attitude implicit association test: Evidence from event-related potentials.

    PubMed

    Xiao, Fengqiu; Zheng, Zhiwei; Wang, Ya; Cui, Jifang; Chen, Yinghe

    2015-08-01

    The implicit association test (IAT) is a promising method used to assess individual implicit attitudes by indirectly measuring the strengths of associations between target and attribute categories. To date, the cognitive processes involved in the prosocial attitude IAT task have received little attention. The present study examined the temporal dynamics of the IAT that measures prosocial attitude using event-related potentials (ERPs). ERP results revealed enhanced N2 amplitudes for incongruent trials when compared with congruent trials and enhanced P300 amplitudes for congruent trials when compared with incongruent trials. In addition, the N2 amplitude differences were significantly correlated with individual prosocial behavior (the amount of donation). Our findings suggest that conflict monitoring and stimulus categorization processes are involved in the prosocial attitude IAT task and that the ERP indices of IATs that measure prosocial attitude may predict individual prosocial behavior.

  1. A "two-objective, one-area" procedure in absorption microphotometry and its application using an inverted microscope.

    PubMed

    Chaubal, K A

    1988-08-01

    A 'two-objective, one-area' method and related equations are suggested to measure absorbance of microscopic stained objects. In such work, the measuring field invariably includes an image of the object and some clear area surrounding the image. The total intensity in the two areas is measured photometrically, using two different objectives, and substituted in the equation for absorbance. The equation is independent of the term representing intensity from the clear area and hence the error in the measurement of absorbance is reduced. The limitations of the 'two-objective, one-area' method are discussed and its pragmatic operation described with an experimental setup involving an inverted microscope. The method permits measurement of intensity in a part of a stained cell while the rest of the cell remains in the field of view. The method is applied to measure absorbance in Giemsa stained ascites cells and Feulgen stained liver and Human Amnion cells.

  2. Development of a method for measuring femoral torsion using real-time ultrasound.

    PubMed

    Hafiz, Eliza; Hiller, Claire E; Nicholson, Leslie L; Nightingale, E Jean; Clarke, Jillian L; Grimaldi, Alison; Eisenhuth, John P; Refshauge, Kathryn M

    2014-07-01

    Excessive femoral torsion has been associated with various musculoskeletal and neurological problems. To explore this relationship, it is essential to be able to measure femoral torsion accurately in the clinic. Computerized tomography (CT) and magnetic resonance imaging (MRI) are thought to provide the most accurate measurements, but CT involves significant radiation exposure and MRI is expensive. The aim of this study was to design a method for measuring femoral torsion in the clinic and to determine the reliability of this method. Details of the design process, including construction of a jig, the protocol developed, and the reliability of the method are presented. The protocol used ultrasound to image a ridge on the greater trochanter, with a customized jig placed on the femoral condyles as reference points. An inclinometer attached to the customized jig allowed quantification of the degree of femoral torsion. Measurements taken with this protocol had excellent intra- and inter-rater reliability (ICC2,1 = 0.98 and 0.97, respectively), and the method permitted measurement of femoral torsion with a high degree of accuracy. This method is applicable to the research setting and, with minor adjustments, will be applicable to the clinical setting.

  3. Neutron spectrum measurements using proton recoil proportional counters: results of measurements of leakage spectra for the Little Boy assembly

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, E.F.; Yule, T.J.

    1984-01-01

    Measurements of degraded fission-neutron spectra using recoil proportional counters are done routinely for studies involving fast reactor mockups. The same techniques are applicable to measurements of neutron spectra required for personnel dosimetry in fast neutron environments. A brief discussion of current applications of these methods together with the results of a measurement made on the LITTLE BOY assembly at Los Alamos are here described.

  4. Patients' and observers' perceptions of involvement differ. Validation study on inter-relating measures for shared decision making.

    PubMed

    Kasper, Jürgen; Heesen, Christoph; Köpke, Sascha; Fulcher, Gary; Geiger, Friedemann

    2011-01-01

    Patient involvement in medical decisions, as conceived in the shared decision making (SDM) method, is essential in evidence based medicine. However, it is not conclusively evident how best to define, realize and evaluate involvement so as to enable patients to make informed choices. We aimed to investigate the ability of four measures to indicate patient involvement. While the use and reporting of these instruments might imply wide overlap regarding the addressed constructs, this assumption seems questionable with respect to the diversity of the perspectives from which the assessments are administered. The study investigated a nested cohort (N = 79) of a randomized trial evaluating a patient decision aid on immunotherapy for multiple sclerosis. Convergent validities were calculated between observer ratings of videotaped physician-patient consultations (OPTION) and patients' perceptions of the communication (Shared Decision Making Questionnaire, Control Preference Scale and Decisional Conflict Scale). OPTION reliability was high to excellent. Communication performance was low according to OPTION and high according to the three patient-administered measures. No correlations were found between observer and patient judgements, either for means or for single items. Patient-report measures showed some moderate correlations. Existing SDM measures do not refer to a single construct. A gold standard is missing to decide whether any of these measures has the potential to indicate patient involvement. The pronounced heterogeneity of the underpinning constructs implies difficulties regarding the interpretation of existing evidence on the efficacy of SDM. Consideration of communication theory and basic definitions of SDM would recommend an inter-subjective focus of measurement. Controlled-Trials.com ISRCTN25267500.

  5. Strabismus Measurements

    MedlinePlus

    ... method is most accurate and feasible. What is light reflex testing? Light reflex testing (called Hirschberg testing) involves directing a patient to look at a point of light held about three feet from the patient’s face. ...

  6. Image based method for aberration measurement of lithographic tools

    NASA Astrophysics Data System (ADS)

    Xu, Shuang; Tao, Bo; Guo, Yongxing; Li, Gongfa

    2018-01-01

    Information on the lens aberrations of lithographic tools is important, as such aberrations directly affect the intensity distribution in the image plane. Zernike polynomials are commonly used for a mathematical description of lens aberrations. Because of their lower cost and easier implementation, image-based measurement techniques have been widely used. Lithographic tools are typically partially coherent systems that can be described by a bilinear model, which entails time-consuming calculations and does not lend itself to a simple and intuitive relationship between lens aberrations and the resulting images. Previous methods for retrieving lens aberrations in such partially coherent systems involve through-focus image measurements and time-consuming iterative algorithms. In this work, we propose a method for aberration measurement in lithographic tools that only requires measuring two images of the intensity distribution. Two linear formulations are derived in matrix form that directly relate the measured images to the unknown Zernike coefficients. Consequently, an efficient non-iterative solution is obtained.
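
    A minimal sketch of the kind of linear inversion the abstract describes, assuming a precomputed sensitivity matrix S that maps Zernike coefficients to changes in the measured intensity distribution; S, the image vectors and the noise level are hypothetical placeholders rather than the paper's actual formulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_pixels, n_zernike = 400, 9

    # Hypothetical sensitivity matrix: column j = intensity change per unit of Zernike coefficient j.
    S = rng.normal(size=(n_pixels, n_zernike))
    true_z = rng.normal(scale=0.02, size=n_zernike)

    # "Measured" aberrated image minus the nominal (aberration-free) image, with noise.
    delta_I = S @ true_z + rng.normal(scale=0.01, size=n_pixels)

    # Least-squares solution of the linear formulation delta_I ~= S z for the Zernike coefficients.
    z_est, *_ = np.linalg.lstsq(S, delta_I, rcond=None)
    print(np.round(z_est - true_z, 3))
    ```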

  7. Heuristics as a Basis for Assessing Creative Potential: Measures, Methods, and Contingencies

    ERIC Educational Resources Information Center

    Vessey, William B.; Mumford, Michael D.

    2012-01-01

    Studies of creative thinking skills have generally measured a single aspect of creativity, divergent thinking. A number of other processes involved in creative thought have been identified. Effective execution of these processes is held to depend on the strategies applied in process execution, or heuristics. In this article, we review prior…

  8. Physical Activity Measurement Instruments for Children with Cerebral Palsy: A Systematic Review

    ERIC Educational Resources Information Center

    Capio, Catherine M.; Sit, Cindy H. P.; Abernethy, Bruce; Rotor, Esmerita R.

    2010-01-01

    Aim: This paper is a systematic review of physical activity measurement instruments for field-based studies involving children with cerebral palsy (CP). Method: Database searches using PubMed Central, MEDLINE, CINAHL Plus, PsycINFO, EMBASE, Cochrane Library, and PEDro located 12 research papers, identifying seven instruments that met the inclusion…

  9. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure

    ERIC Educational Resources Information Center

    Hall, William J.

    2016-01-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability…

  10. Flux Redux: The Spinning Coil Comes around Again

    ERIC Educational Resources Information Center

    Lund, Daniel; Dietz, Eric; Zou, Xueli; Ard, Christopher; Lee, Jaydie; Kaneshiro, Chris; Blanton, Robert; Sun, Steven

    2017-01-01

    An essential laboratory exercise for our lower-division electromagnetism course involves the measurement of Earth's local magnetic field from the emf induced in a rotating coil of wire. Although many methods exist for the measurement of Earth's field, this one gives our students some practical experience with Faraday's law. The apparatus we had…

  11. Optimal plane search method in blood flow measurements by magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Bargiel, Pawel; Orkisz, Maciej; Przelaskowski, Artur; Piatkowska-Janko, Ewa; Bogorodzki, Piotr; Wolak, Tomasz

    2004-07-01

    This paper offers an algorithm for determining the blood flow parameters in the neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axis. This new approach aims at significantly shortening the time required to complete measurements using Nuclear Magnetic Resonance techniques. Based on a defined error function, the algorithm scans the solution space to find the minimum of the error function, and thus to determine a single plane characterized by a minimum measurement error, which allows for an accurate measurement of blood flow in the four carotid arteries. The paper also describes a practical implementation of this method (as a module of a larger imaging and measurement system), including preliminary research results.
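
    The record only says that the algorithm scans the solution space for the plane minimizing an error function; the exact error function and plane parameterization are not given. The sketch below therefore illustrates a generic grid search over a hypothetical error function in two tilt angles, with all names and numbers assumed for illustration.

    ```python
    import numpy as np

    # Minimal sketch of the "scan the solution space" step: parameterize a
    # candidate measurement plane by two tilt angles, evaluate a hypothetical
    # error function on a grid, and keep the minimizer.
    def plane_error(tilt_x_deg, tilt_y_deg):
        """Hypothetical error function: penalizes deviation of the plane from
        an assumed optimum, standing in for the paper's real criterion."""
        return (tilt_x_deg - 12.0) ** 2 + 0.5 * (tilt_y_deg + 7.0) ** 2

    tilts = np.arange(-30.0, 30.5, 0.5)              # search grid in degrees
    errors = np.array([[plane_error(tx, ty) for ty in tilts] for tx in tilts])
    ix, iy = np.unravel_index(errors.argmin(), errors.shape)
    print(f"optimal plane tilt ~ ({tilts[ix]:.1f} deg, {tilts[iy]:.1f} deg), "
          f"error {errors[ix, iy]:.2f}")
    ```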

  12. Assessment of Robotic Patient Simulators for Training in Manual Physical Therapy Examination Techniques

    PubMed Central

    Ishikawa, Shun; Okamoto, Shogo; Isogai, Kaoru; Akiyama, Yasuhiro; Yanagihara, Naomi; Yamada, Yoji

    2015-01-01

    Robots that simulate patients suffering from joint resistance caused by biomechanical and neural impairments are used to aid the training of physical therapists in manual examination techniques. However, there are few methods for assessing such robots. This article proposes two types of assessment measures based on typical judgments of clinicians. One of the measures involves the evaluation of how well the simulator presents different severities of a specified disease. Experienced clinicians were requested to rate the simulated symptoms in terms of severity, and the consistency of their ratings was used as a performance measure. The other measure involves the evaluation of how well the simulator presents different types of symptoms. In this case, the clinicians were requested to classify the simulated resistances in terms of symptom type, and the average ratios of their answers were used as performance measures. For both types of assessment measures, a higher index implied higher agreement among the experienced clinicians that subjectively assessed the symptoms based on typical symptom features. We applied these two assessment methods to a patient knee robot and achieved positive appraisals. The assessment measures have potential for use in comparing several patient simulators for training physical therapists, rather than as absolute indices for developing a standard. PMID:25923719

  13. Interference detection and correction applied to incoherent-scatter radar power spectrum measurement

    NASA Technical Reports Server (NTRS)

    Ying, W. P.; Mathews, J. D.; Rastogi, P. K.

    1986-01-01

    A median filter based interference detection and correction technique is evaluated and the method applied to the Arecibo incoherent scatter radar D-region ionospheric power spectrum is discussed. The method can be extended to other kinds of data when the statistics involved in the process are still valid.

  14. Research on aviation fuel instability

    NASA Technical Reports Server (NTRS)

    Baker, C. E.; Bittker, D. A.; Cohen, S. M.; Seng, G. T.

    1983-01-01

    The underlying causes of fuel thermal degradation are discussed. Topics covered include: nature of fuel instability and its temperature dependence, methods of measuring the instability, chemical mechanisms involved in deposit formation, and instrumental methods for characterizing fuel deposits. Finally, some preliminary thoughts on design approaches for minimizing the effects of lowered thermal stability are briefly discussed.

  15. Evaluation of Short-Term Bioassays to Predict Functional Impairment. Selected Short-Term Renal Toxicity Tests.

    DTIC Science & Technology

    1980-10-01

    reported using the method of Gentzkow (1942), which involves conversion of urea to ammonia with urease and measurement of the ammonia by...Nesslerization. Methods employing urease are not well suited for automated analysis since an incubation time of about 20 minutes is required for the conversion of

  16. A Comparative Study of Measuring Devices Used During Space Shuttle Processing for Inside Diameters

    NASA Technical Reports Server (NTRS)

    Rodriguez, Antonio

    2006-01-01

    During Space Shuttle processing, discrepancies between vehicle dimensions and per-print dimensions determine whether a part should be refurbished, replaced or accepted "as-is." The engineer's job is to address each discrepancy by choosing the most accurate procedure and tool available, sometimes with tolerances as tight as ten-thousandths of an inch. Four methods of measurement are commonly used at the Kennedy Space Center: 1) caliper, 2) mold impressions, 3) optical comparator, 4) dial bore gage. During a problem report evaluation, uncertainty arose between methods after measuring diameters with variations of up to 0.0004 inch. The results showed that computer-based measuring devices are extremely accurate, but when the human factor is involved in determining points of reference, the results may vary widely compared to more traditional methods.

  17. Development of a new test cell to measure cumulative permeation of water-insoluble pesticides with low vapor pressure through protective clothing and glove materials

    PubMed Central

    SHAW, Anugrah; COLEONE-CARVALHO, Ana Carla; HOLLINGSHURST, Julien; DRAPER, Michael; MACHADO NETO, Joaquim Gonçalves

    2017-01-01

    A collaborative approach, involving resources and expertise from several countries, was used to develop a test cell to measure cumulative permeation by a solid-state collection technique. The new technique was developed to measure the permeation of pesticide active ingredients and other chemicals with low vapor pressure that would otherwise be difficult to test via standard techniques. The development process is described and the results from the final chosen test method are reported. Inter-laboratory studies were conducted to further refine the new method and determine repeatability and reliability. The revised test method has been approved as a new ISO/EN standard to measure permeation of chemicals with low vapor pressure and/or solubility in water. PMID:29033403

  18. Chapter 4: Small Commercial and Residential Unitary and Split System HVAC Heating and Cooling Equipment-Efficiency Upgrade Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W; Jacobson, David; Metoyer, Jarred

    The specific measure described here involves improving the overall efficiency in air-conditioning systems as a whole (compressor, evaporator, condenser, and supply fan). The efficiency rating is expressed as the energy efficiency ratio (EER), seasonal energy efficiency ratio (SEER), and integrated energy efficiency ratio (IEER). The higher the EER, SEER or IEER, the more efficient the unit is.

  19. Terahertz Mapping of Microstructure and Thickness Variations

    NASA Technical Reports Server (NTRS)

    Roth, Donald J.; Seebo, Jeffrey P.; Winfree, William P.

    2010-01-01

    A noncontact method has been devised for mapping or imaging spatial variations in the thickness and microstructure of a layer of a dielectric material. The method involves (1) placement of the dielectric material on a metal substrate, (2) through-the-thickness pulse-echo measurements by use of electromagnetic waves in the terahertz frequency range with a raster scan in a plane parallel to the substrate surface that do not require coupling of any kind, and (3) appropriate processing of the digitized measurement data.

  20. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of image pixels. The IR technique involves recording the IR video image data and analyzing the data using the normalized pixel intensity and temperature contrast analysis method to characterize void-like flaws for depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
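
    The record does not spell out its new contrast definitions, so the sketch below uses one common form of normalized contrast in flash thermography: the flaw-pixel evolution relative to a sound reference region, with pre-flash ("cold") levels subtracted from both. All signals and numbers are synthetic assumptions for illustration.

    ```python
    import numpy as np

    # Illustrative normalized pixel-intensity contrast (a common definition,
    # not necessarily the paper's): compare a flaw pixel against a sound
    # reference region, both referenced to their pre-flash intensities.
    rng = np.random.default_rng(1)
    t = np.arange(1, 201)                                  # frame index after the flash

    I_flaw = 80.0 + 1.15 * 400.0 / np.sqrt(t) + rng.normal(0, 0.5, t.size)  # hypothetical flaw pixel
    I_ref  = 80.0 + 400.0 / np.sqrt(t)        + rng.normal(0, 0.5, t.size)  # hypothetical sound region
    I_flaw_pre, I_ref_pre = 80.0, 80.0                     # pre-flash (cold) intensities

    C = (I_flaw - I_flaw_pre) / (I_ref - I_ref_pre) - 1.0  # normalized contrast evolution
    print(f"peak contrast {C.max():.3f} at frame {t[C.argmax()]}")
    ```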

  1. Comparison of Pathway and Center of Gravity of the Calcaneus on Non-Involved and Involved Sides According to Eccentric and Concentric Strengthening in Patients With Achilles Tendinopathy

    PubMed Central

    Yu, JaeHo; Lee, GyuChang

    2012-01-01

    This study compares the changes in pathway and center of gravity (COG) on the calcaneus of non-involved and involved sides according to eccentric and concentric strengthening in patients with unilateral Achilles tendinopathy. The goal was to define the biomechanical changes according to eccentric strengthening for the development of clinical guidelines. Eighteen patients with Achilles tendinopathy were recruited at the K Rehabilitation Hospital in Seoul. The subjects were instructed to perform 5 sessions of concentric strengthening. The calcaneal pathway was measured using a three-dimensional (3D) motion analyzer, and COG was measured by a force plate. Subsequently, eccentric strengthening was implemented, and identical variables were measured. Concentric and eccentric strengthening was carried out on both the involved and non-involved sides. There was no significant difference in the calcaneal pathway in patients with Achilles tendinopathy during concentric and eccentric strengthening. However, during eccentric strengthening, the calcaneal pathway significantly increased on the involved side compared to the non-involved side for all variables excluding the z-axis. COG significantly decreased on the involved side when compared to the non-involved side in patients with Achilles tendinopathy during eccentric and concentric strengthening. During concentric strengthening, all variables of the COG significantly increased on the involved side compared to the non-involved side. Compared with eccentric strengthening, concentric strengthening decreased the stability of ankle joints and increased the movement distance of the calcaneus in patients with Achilles tendinopathy. Furthermore, eccentric strengthening was verified to be an effective exercise method for prevention of Achilles tendinopathy through the reduction of forward and backward path length of foot pressure. The regular application of eccentric strengthening was found to be effective in the secondary prevention of Achilles tendinopathy in a clinical setting. Key point Compared with eccentric strengthening, concentric strengthening decreased the stability of ankle joints, increasing movement of the calcaneus in patients with Achilles tendinopathy. Eccentric strengthening was shown to be an effective exercise method for preventing Achilles tendinopathy through the reduction of forward and backward path length of foot pressure. It was verified that regular application of eccentric strengthening is effective in secondary prevention of Achilles tendinopathy in the clinical setting. PMID:24149129

  2. Ice Growth Measurements from Image Data to Support Ice Crystal and Mixed-Phase Accretion Testing

    NASA Technical Reports Server (NTRS)

    Struk, Peter M.; Lynch, Christopher J.

    2012-01-01

    This paper describes the imaging techniques as well as the analysis methods used to measure the ice thickness and growth rate in support of ice-crystal icing tests performed at the National Research Council of Canada (NRC) Research Altitude Test Facility (RATFac). A detailed description of the camera setup, which involves both still and video cameras, as well as the analysis methods using the NASA Spotlight software, are presented. Two cases, one from two different test entries, showing significant ice growth are analyzed in detail describing the ice thickness and growth rate which is generally linear. Estimates of the bias uncertainty are presented for all measurements. Finally some of the challenges related to the imaging and analysis methods are discussed as well as methods used to overcome them.

  4. HR-ICPMS ANALYTICAL METHODS DEVELOPMENT

    EPA Science Inventory

    Recent HEASD studies involving atmospheric sampling in remote areas (Barrow, Alaska; Cheeka Peak, Washington; Mauna Loa, Hawaii; Ny-Alesund, Norway; aircraft measurements off Florida's Atlantic coast) and low volume personal exposure monitoring (Baltimore, Maryland; Fresno, Calif...

  5. Contact angle studies on anodic porous alumina.

    PubMed

    Redón, Rocío; Vázquez-Olmos, A; Mata-Zamora, M E; Ordóñez-Medrano, A; Rivera-Torres, F; Saniger, J M

    2005-07-15

    The preparation of nanostructures using porous anodic aluminum oxide (AAO) as templates involves the introduction of dissolved materials into the pores of the membranes; one way to determine which materials are preferred to fill the pores involves the measurement of the contact angles (θ) of different solvents or test liquids on the AAOs. Thus, we present measurements of contact angles of nine solvents on four different AAO sheets by tensiometric and goniometric methods. From the solvents tested, we found dimethyl sulfoxide (DMSO) and N,N′-dimethylformamide (DMF) to interact with the AAOs, the polarity of the solvents and the surfaces being the driving force.

  6. Medical Staff Involvement in Nursing Homes: Development of a Conceptual Model and Research Agenda

    PubMed Central

    Shield, Renée; Rosenthal, Marsha; Wetle, Terrie; Tyler, Denise; Clark, Melissa; Intrator, Orna

    2013-01-01

    Medical staff (physicians, nurse practitioners, physicians' assistants) involvement in nursing homes (NH) is limited by professional guidelines, government policies, regulations, and reimbursements, creating bureaucratic burden. The conceptual NH Medical Staff Involvement Model, based on our mixed methods research, applies the Donabedian structure-process-outcomes framework to the NH, identifying measures for a coordinated research agenda. Quantitative surveys and qualitative interviews conducted with medical directors, administrators, directors of nursing, other experts, residents, and family members were analyzed, together with Minimum Data Set, Online Certification and Reporting System, and Medicare Part B claims data related to NH structure, process, and outcomes. NH control of medical staff, or structure, affects medical staff involvement in care processes and is associated with better outcomes (e.g. symptom management, appropriate transitions, satisfaction). The Model identifies measures clarifying the impact of NH medical staff involvement on care processes and resident outcomes and has strong potential to inform regulatory policies. PMID:24652944

  7. A new method for estimation of involved BSAs for obese and normal-weight patients with burn injury.

    PubMed

    Neaman, Keith C; Andres, L Albert; McClure, Amanda M; Burton, Michael E; Kemmeter, Paul R; Ford, Ronald D

    2011-01-01

    An accurate measurement of the BSA involved in patients injured by burns is critical in determining initial fluid requirements, nutritional needs, and criteria for tertiary center admissions. The rule of nines and the Lund-Browder chart are commonly used to calculate the BSA involved. However, their accuracy in all patient populations, namely obese patients, remains to be proven. Detailed BSA measurements were obtained from 163 adult patients according to linear formulas defined previously for individual body segments. Patients were then grouped based on body mass index (BMI). The contribution of individual body segments to the TBSA was determined based on BMI, and the validity of existing measurement tools was examined. Significant errors were found when comparing all groups with the rule of nines, which overestimated the contribution of the head and arms to the TBSA while underestimating the trunk and legs for all BMI groups. A new rule is proposed to minimize error, assigning 5% of the TBSA to the head and 15% of the TBSA to the arms across all BMI groups, while adjusting the trunk/leg contributions as follows: normal-weight 35/45%, obese 40/40%, and morbidly obese 45/35%. Current modalities used to determine the BSA burned are subject to significant errors, which are magnified as BMI increases. This new method provides increased accuracy in estimating the BSA involved in patients with burn injury regardless of BMI.

  8. Detection of S-Nitrosothiols

    PubMed Central

    Diers, Anne R.; Keszler, Agnes; Hogg, Neil

    2015-01-01

    BACKGROUND S-Nitrosothiols have been recognized as biologically-relevant products of nitric oxide that are involved in many of the diverse activities of this free radical. SCOPE OF REVIEW This review serves to discuss current methods for the detection and analysis of protein S-nitrosothiols. The major methods of S-nitrosothiol detection include chemiluminescence-based methods and switch-based methods, each of which comes in various flavors with advantages and caveats. MAJOR CONCLUSIONS The detection of S-nitrosothiols is challenging and prone to many artifacts. Accurate measurements require an understanding of the underlying chemistry of the methods involved and the use of appropriate controls. GENERAL SIGNIFICANCE Nothing is more important to a field of research than robust methodology that is generally trusted. The field of S-Nitrosation has developed such methods but, as S-nitrosothiols are easy to introduce as artifacts, it is vital that current users learn from the lessons of the past. PMID:23988402

  9. pKa of fentanyl varies with temperature: implications for acid-base management during extremes of body temperature.

    PubMed

    Thurlkill, Richard L; Cross, David A; Scholtz, J Martin; Pace, C Nick

    2005-12-01

    The pKa of fentanyl has not been measured previously at varying extremes of body temperature. The goal of this laboratory investigation was to test the hypothesis that the pKa of fentanyl changes with temperature. The investigation involved measuring the pKa values of aqueous fentanyl at varying temperatures. The investigation was conducted in a controlled laboratory environment. No human or animal subjects were involved. Because no live subjects were involved in the investigation, no interventions were necessary. This paper reports the effect of temperature on the pKa of fentanyl. The pKa of aqueous fentanyl was measured at 15 degrees C, 25 degrees C, 37 degrees C, 42 degrees C, and 47.5 degrees C by potentiometric titration in 0.01 mmol/L of potassium chloride after extensive degassing. Data were analyzed using the least squares method with an appropriately fitting equation. The pKa of fentanyl was found to change in a similar manner to the neutral point of water at varying temperatures. This finding has implications for the bioavailability of fentanyl at extremes of body temperature in association with the clinical acid-base management of the patient. Clinical implications for differing methods of intraoperative acid-base management at varying temperatures are discussed.

  10. Measurement and genetics of human subcortical and hippocampal asymmetries in large datasets.

    PubMed

    Guadalupe, Tulio; Zwiers, Marcel P; Teumer, Alexander; Wittfeld, Katharina; Vasquez, Alejandro Arias; Hoogman, Martine; Hagoort, Peter; Fernandez, Guillen; Buitelaar, Jan; Hegenscheid, Katrin; Völzke, Henry; Franke, Barbara; Fisher, Simon E; Grabe, Hans J; Francks, Clyde

    2014-07-01

    Functional and anatomical asymmetries are prevalent features of the human brain, linked to gender, handedness, and cognition. However, little is known about the neurodevelopmental processes involved. In zebrafish, asymmetries arise in the diencephalon before extending within the central nervous system. We aimed to identify genes involved in the development of subtle, left-right volumetric asymmetries of human subcortical structures using large datasets. We first tested the feasibility of measuring left-right volume differences in such large-scale samples, as assessed by two automated methods of subcortical segmentation (FSL|FIRST and FreeSurfer), using data from 235 subjects who had undergone MRI twice. We tested the agreement between the first and second scan, and the agreement between the segmentation methods, for measures of bilateral volumes of six subcortical structures and the hippocampus, and their volumetric asymmetries. We also tested whether there were biases introduced by left-right differences in the regional atlases used by the methods, by analyzing left-right flipped images. While many bilateral volumes were measured well (scan-rescan r = 0.6-0.8), most asymmetries, with the exception of the caudate nucleus, showed lower repeatabilities. We meta-analyzed genome-wide association scan results for caudate nucleus asymmetry in a combined sample of 3,028 adult subjects but did not detect associations at genome-wide significance (P < 5 × 10^-8). There was no enrichment of genetic association in genes involved in left-right patterning of the viscera. Our results provide important information for researchers who are currently aiming to carry out large-scale genome-wide studies of subcortical and hippocampal volumes, and their asymmetries. Copyright © 2013 Wiley Periodicals, Inc.

  11. Use of a combined oxygen and carbon dioxide transcutaneous electrode in the estimation of gas exchange during exercise.

    PubMed Central

    Sridhar, M K; Carter, R; Moran, F; Banham, S W

    1993-01-01

    BACKGROUND--Accurate and reliable measurement of gas exchange during exercise has traditionally involved arterial cannulation. Non-invasive devices to estimate arterial oxygen (O2) and carbon dioxide (CO2) tensions are now available. A method has been devised and evaluated for measuring gas exchange during exercise with a combined transcutaneous O2 and CO2 electrode. METHODS--Symptom limited exercise tests were carried out in 24 patients reporting effort intolerance and breathlessness. Exercise testing was performed by bicycle ergometry with a specifically designed protocol involving gradual two minute workload increments. Arterial O2 and CO2 tensions were measured at rest and during exercise by direct blood sampling from an indwelling arterial cannula and a combined transcutaneous electrode heated to 45 degrees C. The transcutaneous system was calibrated against values obtained by direct arterial sampling before each test. RESULTS--In all tests the trend of gas exchange measured by the transcutaneous system was true to the trend measured from direct arterial sampling. In the 140 measurements the mean difference between the O2 tensions estimated by direct sampling and the transcutaneous method was 0.08 kPa (0.62 mm Hg, limits of agreement 4.42 and -3.38 mm Hg). The mean difference between the methods for CO2 was 0.02 kPa (0.22 mm Hg, limits of agreement 2.20 and -1.70 mm Hg). There was no morbidity associated with the use of the transcutaneous electrode heated to 45 degrees C. CONCLUSIONS--A combined transcutaneous O2 and CO2 electrode heated to 45 degrees C can be used to provide a reliable estimate of gas exchange during gradual incremental exercise in adults. PMID:8346496
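
    The agreement statistics quoted above (a mean difference with limits of agreement) follow the standard Bland-Altman form: bias plus or minus 1.96 standard deviations of the paired differences. The sketch below computes those quantities on hypothetical paired readings; the numbers are illustrative, not the study's data.

    ```python
    import numpy as np

    def limits_of_agreement(method_a, method_b):
        """Mean difference and 95% limits of agreement (Bland-Altman style)."""
        d = np.asarray(method_a) - np.asarray(method_b)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias, bias - half_width, bias + half_width

    # Hypothetical paired O2 tensions (mm Hg): arterial sample vs. transcutaneous.
    arterial = np.array([95.0, 88.0, 76.0, 102.0, 84.0, 91.0])
    transcut = np.array([93.5, 89.0, 75.0, 100.5, 86.0, 90.0])
    print(limits_of_agreement(arterial, transcut))
    ```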

  12. Method for conducting nonlinear electrochemical impedance spectroscopy

    DOEpatents

    Adler, Stuart B.; Wilson, Jamie R.; Huff, Shawn L.; Schwartz, Daniel T.

    2015-06-02

    A method for conducting nonlinear electrochemical impedance spectroscopy. The method includes quantifying the nonlinear response of an electrochemical system by measuring higher-order current or voltage harmonics generated by moderate-amplitude sinusoidal current or voltage perturbations. The method involves acquisition of the response signal followed by time apodization and fast Fourier transformation of the data into the frequency domain, where the magnitude and phase of each harmonic signal can be readily quantified. The method can be implemented on a computer as a software program.
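
    The patent record describes time apodization followed by a fast Fourier transform and read-out of harmonic magnitude and phase. The sketch below is an assumed, generic version of that workflow on a synthetic weakly nonlinear response; it is not the patented implementation, and the window choice, scaling, and signal are illustrative.

    ```python
    import numpy as np

    # Apodize the measured response, FFT it, and read off magnitude and phase
    # at integer multiples of the drive frequency (assumed workflow).
    fs, f0, T = 10_000.0, 10.0, 2.0                  # sample rate (Hz), drive freq (Hz), duration (s)
    t = np.arange(0, T, 1.0 / fs)

    # Hypothetical weakly nonlinear current response: fundamental plus 2nd/3rd harmonics.
    i_resp = (1.0 * np.sin(2 * np.pi * f0 * t)
              + 0.05 * np.sin(2 * np.pi * 2 * f0 * t + 0.4)
              + 0.02 * np.sin(2 * np.pi * 3 * f0 * t - 1.1))

    window = np.hanning(t.size)                      # time apodization
    spectrum = np.fft.rfft(i_resp * window)
    freqs = np.fft.rfftfreq(t.size, 1.0 / fs)

    for k in (1, 2, 3):                              # fundamental and higher harmonics
        idx = np.argmin(np.abs(freqs - k * f0))
        mag = 2.0 * np.abs(spectrum[idx]) / window.sum()   # amplitude estimate with window correction
        print(f"harmonic {k}: amplitude ~ {mag:.3f}, phase {np.angle(spectrum[idx]):.2f} rad")
    ```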

  13. Development of a new, completely implantable intraventricular pressure meter and preliminary report of its clinical experience

    NASA Technical Reports Server (NTRS)

    Osaka, K.; Murata, T.; Okamoto, S.; Ohta, T.; Ozaki, T.; Maeda, T.; Mori, K.; Handa, H.; Matsumoto, S.; Sakaguchi, I.

    1982-01-01

    A completely implantable intracranial pressure sensor designed for long-term measurement of intraventricular pressure in hydrocephalic patients is described. The measurement principle of the device is discussed along with the electronic and component structure and sources of instrument error. Clinical tests of this implanted pressure device involving both humans and animals showed it to be comparable to other methods of intracranial pressure measurement.

  14. Total ozone observation by sun photometry at Arosa, Switzerland

    NASA Astrophysics Data System (ADS)

    Staehelin, Johannes; Schill, Herbert; Hoegger, Bruno; Viatte, Pierre; Levrat, Gilbert; Gamma, Adrian

    1995-07-01

    The method used for ground-based total ozone observations and the design of two instruments used to monitor atmospheric total ozone at Arosa (Dobson spectrophotometer and Brewer spectrometer) are briefly described. Two different procedures for the calibration of the Dobson spectrometer, both based on the Langley plot method, are presented. Data quality problems that occurred in recent years in the measurements of one Dobson instrument at Arosa are discussed, and two different methods to reassess total ozone observations are compared. Two partially automated Dobson spectrophotometers and two completely automated Brewer spectrometers are currently in operation at Arosa. Careful comparison of the results of the measurements of the different instruments yields valuable information on possible small long-term drifts of the instruments involved in the operational measurements.

  15. Measuring the severity of topical 5-fluorouracil toxicity.

    PubMed

    Korgavkar, Kaveri; Firoz, Elnaz F; Xiong, Michael; Lew, Robert; Marcolivio, Kimberly; Burnside, Nancy; Dyer, Robert; Weinstock, Martin A

    2014-01-01

    Topical 5% 5-fluorouracil (5-FU) is known to cause toxicity, such as erythema, pain, and crusting/erosions. We sought to develop a scale to measure this toxicity and test the scale for reliability. A scale was developed involving four parameters: erythema severity, percentage of face involved in erythema, crusting/erosions severity, and percentage of face involved in crusting/erosions. Thirteen raters graded 99 sets of photographs from the Veterans Affairs Keratinocyte Carcinoma Chemoprevention (VAKCC) Trial using these parameters. Intraclass correlation overall for 13 raters was 0.82 (95% CI 0.77-0.86). There was no statistically significant trend in reliability by level of training in dermatology. This scale is a reliable method of evaluating the severity of toxicity from topical 5-fluorouracil and can be used by dermatologists and nondermatologists alike.

  16. Preschool Racial Attitude Measure II (PRAM II): Technical Report #1: 1970-71 Standardization Study.

    ERIC Educational Resources Information Center

    Williams, John E.

    This report provides detailed technical information concerning the Preschool Racial Attitude Measure II (PRAM II), a method for assessing the attitudes of pre-literate children toward light- and dark-skinned individuals. Several major changes were involved in the PRAM II revision: (1) the length was doubled, (2) the general artistic quality of the…

  17. Modeling individual tree growth by fusing diameter tape and increment core data

    Treesearch

    Erin M. Schliep; Tracy Qi Dong; Alan E. Gelfand; Fan. Li

    2014-01-01

    Tree growth estimation is a challenging task as difficulties associated with data collection and inference often result in inaccurate estimates. Two main methods for tree growth estimation are diameter tape measurements and increment cores. The former involves repeatedly measuring tree diameters with a cloth or metal tape whose scale has been adjusted to give diameter...

  18. Measuring the Quality of Life of University Students. Research Monograph Series. Volume 1.

    ERIC Educational Resources Information Center

    Roberts, Lance W.; Clifton, Rodney A.

    This study sought to develop a valid set of scales in the cognitive and affective domains for measuring the quality of life of university students. In addition the study attempted to illustrate the usefulness of Thomas Piazza's procedures for constructing valid scales in educational research. Piazza's method involves a multi-step construction of…

  19. Development and Validation of an Instrument to Measure Indonesian Pre-Service Teachers' Conceptions of Statistics

    ERIC Educational Resources Information Center

    Idris, Khairiani; Yang, Kai-Lin

    2017-01-01

    This article reports the results of a mixed-methods approach to develop and validate an instrument to measure Indonesian pre-service teachers' conceptions of statistics. First, a phenomenographic study involving a sample of 44 participants uncovered six categories of conceptions of statistics. Second, an instrument of conceptions of statistics was…

  20. Doubling Your Sunsets or How Anyone Can Measure the Earth's Size with Wristwatch and Meterstick.

    ERIC Educational Resources Information Center

    Rawlins, Dennis

    1979-01-01

    Describes a simple method to measure the size of the Earth to an accuracy on the order of 10 percent. The procedure involves finding the time interval between two sunsets: a sunset observed at sea level while lying down, and a sunset viewed at eye height after standing up. (GA)
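
    A back-of-envelope version of the arithmetic behind this two-sunset method is sketched below, using simplified horizon-dip geometry rather than the article's full treatment; the eye height, delay, and latitude are assumed example values.

    ```python
    import math

    # After standing up to eye height h, the horizon dips by alpha ~ sqrt(2h/R);
    # the sun must descend that extra angle, which takes dt seconds at a vertical
    # rate of roughly omega*cos(latitude) near the equinoxes (simplified model).
    h = 1.70                      # eye height gained by standing up (m), assumed
    dt = 11.1                     # measured delay between the two sunsets (s), assumed
    lat = math.radians(25.0)      # observer latitude, assumed

    omega = 2 * math.pi / 86164.0          # Earth's rotation rate (rad/s, sidereal day)
    alpha = omega * dt * math.cos(lat)     # extra dip angle swept during dt
    R = 2 * h / alpha**2                   # small-angle horizon-dip geometry
    print(f"estimated Earth radius ~ {R / 1000:.0f} km")   # roughly 6,300 km here
    ```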

  1. Time-resolved brightness measurements by streaking

    NASA Astrophysics Data System (ADS)

    Torrance, Joshua S.; Speirs, Rory W.; McCulloch, Andrew J.; Scholten, Robert E.

    2018-03-01

    Brightness is a key figure of merit for charged particle beams, and time-resolved brightness measurements can elucidate the processes involved in beam creation and manipulation. Here we report on a simple, robust, and widely applicable method for the measurement of beam brightness with temporal resolution by streaking one-dimensional pepperpots, and demonstrate the technique to characterize electron bunches produced from a cold-atom electron source. We demonstrate brightness measurements with 145 ps temporal resolution and a minimum resolvable emittance of 40 nm rad. This technique provides an efficient method of exploring source parameters and will prove useful for examining the efficacy of techniques to counter space-charge expansion, a critical hurdle to achieving single-shot imaging of atomic scale targets.

  2. Comparison of NAVSTAR satellite L band ionospheric calibrations with Faraday rotation measurements

    NASA Technical Reports Server (NTRS)

    Royden, H. N.; Miller, R. B.; Buennagel, L. A.

    1984-01-01

    It is pointed out that interplanetary navigation at the Jet Propulsion Laboratory (JPL) is performed by analyzing measurements derived from the radio link between spacecraft and earth and, near the target, onboard optical measurements. For precise navigation, corrections for ionospheric effects must be applied, because the earth's ionosphere degrades the accuracy of the radiometric data. These corrections are based on ionospheric total electron content (TEC) determinations. The determinations are based on the measurement of the Faraday rotation of linearly polarized VHF signals from geostationary satellites. Problems arise in connection with the steadily declining number of satellites which are suitable for Faraday rotation measurements. For this reason, alternate methods of determining ionospheric electron content are being explored. One promising method involves the use of satellites of the NAVSTAR Global Positioning System (GPS). The results of a comparative study regarding this method are encouraging.

  3. Features of the non-contact carotid pressure waveform: Cardiac and vascular dynamics during rebreathing

    NASA Astrophysics Data System (ADS)

    Casaccia, S.; Sirevaag, E. J.; Richter, E. J.; O'Sullivan, J. A.; Scalise, L.; Rohrbaugh, J. W.

    2016-10-01

    This report amplifies and extends prior descriptions of the use of laser Doppler vibrometry (LDV) as a method for assessing cardiovascular activity, on a non-contact basis. A rebreathing task (n = 35 healthy individuals) was used to elicit multiple effects associated with changes in autonomic drive as well as blood gases including hypercapnia. The LDV pulse was obtained from two sites overlying the carotid artery, separated by 40 mm. A robust pulse signal was obtained from both sites, in accord with the well-described changes in carotid diameter over the blood pressure cycle. Emphasis was placed on extracting timing measures from the LDV pulse, which could serve as surrogate measures of pulse wave velocity (PWV) and the associated arterial stiffness. For validation purposes, a standard measure of pulse transit time (PTT) to the radial artery was obtained using a tonometric sensor. Two key measures of timing were extracted from the LDV pulse. One involved the transit time along the 40 mm distance separating the two LDV measurement sites. A second measure involved the timing of a late feature of the LDV pulse contour, which was interpreted as reflection wave latency and thus a measure of round-trip travel time. Both LDV measures agreed with the conventional PTT measure, in disclosing increased PWV during periods of active rebreathing. These results thus provide additional evidence that measures based on the non-contact LDV technique might provide surrogate measures for those obtained using conventional, more obtrusive assessment methods that require attached sensors.
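
    The two surrogate measures described above reduce to simple arithmetic: a local pulse wave velocity from the 40 mm site separation and the measured transit time, and a reflection-based velocity if the late contour feature is read as a round-trip travel time. The numbers below are illustrative assumptions, not values from the study.

    ```python
    # Local PWV from the two-site transit time along the carotid segment.
    site_separation_m = 0.040          # distance between the two LDV beams (40 mm)
    transit_time_s = 0.006             # hypothetical pulse delay between the sites
    local_pwv = site_separation_m / transit_time_s
    print(f"local PWV ~ {local_pwv:.1f} m/s")

    # If a late feature of the pulse contour is read as the reflected wave,
    # its latency is a round-trip travel time to the reflection site.
    reflection_latency_s = 0.14        # hypothetical latency of the late feature
    path_to_reflector_m = 0.45         # hypothetical effective path length
    round_trip_pwv = 2 * path_to_reflector_m / reflection_latency_s
    print(f"reflection-based PWV ~ {round_trip_pwv:.1f} m/s")
    ```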

  4. Comparison of an objective method of measuring bulbar redness to the use of traditional grading scales.

    PubMed

    Sorbara, Luigina; Simpson, Trefford; Duench, Stephanie; Schulze, Marc; Fonn, Desmond

    2007-03-01

    The primary objective was to compare measures of bulbar redness obtained objectively using a photometric method with standard grading methods. Measures of redness were made on 24 participants wearing a silicone hydrogel contact lens in one eye for overnight wear. This report compares hyperaemia after 1 week of daily wear (baseline) with redness measured after 6 months of overnight wear. A new method of objectively measuring bulbar conjunctival redness was performed using the Spectrascan650 Photometer by Photo Research under fixed illumination. Photometric measures in CIE u* chromaticity values involve the measurement of chromaticity, a physical analogue of redness, greenness and blueness in the image. This method was validated in Part 1 of the study using repeated measurements on the photographic CCLRU scale. In Part 2 of the study, the photographic grading scale (CCLRU) from 0 (none) to 100 (extreme) was used to make the comparison. Part 1 indicated that the photometer provides a repeatable and reliable measure of bulbar redness (CCC=0.989). A moderately strong and significant correlation was found between the CIE u* chromaticity values and the analogue data (R=0.795, p=0.000) at each measurement session (from baseline to 1 day, 1 week, and 1, 3 and 6 months of overnight wear). This new standardized and objective method of measuring bulbar redness has great potential to replace subjective grading scales, especially in multi-centre studies, where variability between investigators occurs. This method may also detect smaller changes between visits or between eyes.

  5. Measurements of the effective atomic numbers of minerals using bremsstrahlung produced by low-energy electrons

    NASA Astrophysics Data System (ADS)

    Czarnecki, S.; Williams, S.

    2017-12-01

    The accuracy of a method for measuring the effective atomic numbers of minerals using bremsstrahlung intensities has been investigated. The method is independent of detector-efficiency and maximum accelerating voltage. In order to test the method, experiments were performed which involved low-energy electrons incident on thick malachite, pyrite, and galena targets. The resultant thick-target bremsstrahlung was compared to bremsstrahlung produced using a standard target, and experimental effective atomic numbers were calculated using data from a previous study (in which the Z-dependence of thick-target bremsstrahlung was studied). Comparisons of the results to theoretical values suggest that the method has potential for implementation in energy-dispersive X-ray spectroscopy systems.

  6. Assessment of PIV-based unsteady load determination of an airfoil with actuated flap

    NASA Astrophysics Data System (ADS)

    Sterenborg, J. J. H. M.; Lindeboom, R. C. J.; Simão Ferreira, C. J.; van Zuijlen, A. H.; Bijl, H.

    2014-02-01

    For complex experimental setups involving movable structures, it is not trivial to directly measure unsteady loads. An alternative is to deduce unsteady loads indirectly from measured velocity fields using Noca's method. The ultimate aim is to use this method in future work to determine unsteady loads for fluid-structure interaction problems. The focus in this paper is first on the application and assessment of Noca's method for an airfoil with an oscillating trailing edge flap. To the best of our knowledge, Noca's method has not yet been applied to airfoils with moving control surfaces or fluid-structure interaction problems. In addition, wind tunnel corrections for this type of unsteady flow problem are considered.

  7. Spatial Mutual Information Based Hyperspectral Band Selection for Classification

    PubMed Central

    2015-01-01

    The amount of information involved in hyperspectral imaging is large. Hyperspectral band selection is a popular method for reducing dimensionality. Several information based measures such as mutual information have been proposed to reduce information redundancy among spectral bands. Unfortunately, mutual information does not take into account the spatial dependency between adjacent pixels in images thus reducing its robustness as a similarity measure. In this paper, we propose a new band selection method based on spatial mutual information. As validation criteria, a supervised classification method using support vector machine (SVM) is used. Experimental results of the classification of hyperspectral datasets show that the proposed method can achieve more accurate results. PMID:25918742
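
    For reference, the sketch below shows the plain (non-spatial) mutual information between two image bands, estimated from a joint histogram; the spatial mutual information proposed in the record additionally accounts for neighbouring pixels, which this sketch omits. The bands are synthetic.

    ```python
    import numpy as np

    def mutual_information(band_x, band_y, bins=32):
        """Plain mutual information between two image bands from a joint histogram.
        The paper's spatial MI additionally weights neighbouring pixels; that
        refinement is omitted here."""
        joint, _, _ = np.histogram2d(band_x.ravel(), band_y.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    # Two hypothetical, correlated spectral bands of a 64x64 scene.
    rng = np.random.default_rng(2)
    band_a = rng.normal(size=(64, 64))
    band_b = 0.8 * band_a + 0.2 * rng.normal(size=(64, 64))
    print(f"MI(a, b) ~ {mutual_information(band_a, band_b):.3f} nats")
    ```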

  8. Formative Assessment and Increased Student Involvement Increase Grades in an Upper Secondary School Biology Course

    ERIC Educational Resources Information Center

    Granbom, Martin

    2016-01-01

    This study shows that formative methods and increased student participation has a positive influence on learning measured as grades. The study was conducted during the course Biology A in a Swedish Upper Secondary School. The students constructed grade criteria and defined working methods and type of examination within a given topic, Gene…

  9. Paired comparison estimates of willingness to accept versus contingent valuation estimates of willingness to pay

    Treesearch

    John B. Loomis; George Peterson; Patricia A. Champ; Thomas C. Brown; Beatrice Lucero

    1998-01-01

    Estimating empirical measures of an individual's willingness to accept that are consistent with conventional economic theory, has proven difficult. The method of paired comparison offers a promising approach to estimate willingness to accept. This method involves having individuals make binary choices between receiving a particular good or a sum of money....

  10. Oxygen radicals as key mediators in neurological disease: fact or fiction?

    PubMed

    Halliwell, B

    1992-01-01

    A free radical is any species capable of independent existence that contains one or more unpaired electrons. Free radicals and other reactive oxygen species are frequently proposed to be involved in the pathology of several neurological disorders. Criteria for establishing such involvement are presented. Development of new methods for measuring oxidative damage should enable elucidation of the precise role of reactive oxygen species in neurological disorders.

  11. Differences That Make A Difference: A Study in Collaborative Learning

    NASA Astrophysics Data System (ADS)

    Touchman, Stephanie

    Collaborative learning is a common teaching strategy in classrooms across age groups and content areas. It is important to measure and understand the cognitive process involved during collaboration to improve teaching methods involving interactive activities. This research attempted to answer the question: why do students learn more in collaborative settings? Using three measurement tools, 142 participants from seven different biology courses at a community college and at a university were tested before and after collaborating about the biological process of natural selection. Three factors were analyzed to measure their effect on learning at the individual level and the group level. The three factors were: difference in prior knowledge, sex and religious beliefs. Gender and religious beliefs both had a significant effect on post-test scores.

  12. Where and How Does Grammatically Geared Processing Take Place--And Why Is Broca's Area Often Involved. A Coordinated fMRI/ERBP Study of Language Processing

    ERIC Educational Resources Information Center

    Dogil, Grzegorz; Frese, Inga; Haider, Hubert; Rohm, Dietmar; Wokurek, Wolfgang

    2004-01-01

    We address the possibility of combining the results from hemodynamic and electrophysiological methods for the study of cognitive processing of language. The hemodynamic method we use is Event-Related fMRI, and the electrophysiological method measures Event-Related Band Power (ERBP) of the EEG signal. The experimental technique allows us to…

  13. Facioskeletal changes in children with juvenile idiopathic arthritis

    PubMed Central

    Twilt, M; Schulten, A J M; Nicolaas, P; Dülger, A; van Suijlekom‐Smit, L W A

    2006-01-01

    Objective To investigate the facioskeletal morphology in patients with juvenile idiopathic arthritis (JIA) with and without temporomandibular joint (TMJ) involvement. Methods Eighty five patients were included. TMJ involvement was defined by orthopantomogram alterations. Lateral cephalograms were used to determine linear and angular measurements and occlusion. Results Patients regardless of their TMJ status had a 67% chance for retrognathia and a 52% chance for posterior rotation of the mandible and, respectively, 82% and 58% if TMJ involvement were present. Changes were not uniformly distributed among the different subtypes. Conclusion Patients with JIA have an altered facial morphology, especially in the presence of TMJ involvement. PMID:16699052

  14. Flip-angle profile of slice-selective excitation and the measurement of the MR longitudinal relaxation time with steady-state magnetization

    NASA Astrophysics Data System (ADS)

    Hsu, Jung-Jiin

    2015-08-01

    In MRI, the flip angle (FA) of slice-selective excitation is not uniform across the slice-thickness dimension. This work investigates the effect of the non-uniform FA profile on the accuracy of a commonly-used method for the measurement, in which the T1 value, i.e., the longitudinal relaxation time, is determined from the steady-state signals of an equally-spaced RF pulse train. By using the numerical solutions of the Bloch equation, it is shown that, because of the non-uniform FA profile, the outcome of the T1 measurement depends significantly on T1 of the specimen and on the FA and the inter-pulse spacing τ of the pulse train. A new method to restore the accuracy of the T1 measurement is described. Different from the existing approaches, the new method also removes the FA profile effect for the measurement of the FA, which is normally a part of the T1 measurement. In addition, the new method does not involve theoretical modeling, approximation, or modification to the underlying principle of the T1 measurement. An imaging experiment is performed, which shows that the new method can remove the FA-, the τ-, and the T1-dependence and produce T1 measurements in excellent agreement with the ones obtained from a gold standard method (the inversion-recovery method).

  15. Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review

    PubMed Central

    Chung, Stephanie T.; Chacko, Shaji K.; Sunehag, Agneta L.

    2015-01-01

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotopes methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. PMID:26604176

  16. Anthropometry-corrected exposure modeling as a method to improve trunk posture assessment with a single inclinometer.

    PubMed

    Van Driel, Robin; Trask, Catherine; Johnson, Peter W; Callaghan, Jack P; Koehoorn, Mieke; Teschke, Kay

    2013-01-01

    Measuring trunk posture in the workplace commonly involves subjective observation or self-report methods or the use of costly and time-consuming motion analysis systems (current gold standard). This work compared trunk inclination measurements using a simple data-logging inclinometer with trunk flexion measurements using a motion analysis system, and evaluated adding measures of subject anthropometry to exposure prediction models to improve the agreement between the two methods. Simulated lifting tasks (n=36) were performed by eight participants, and trunk postures were simultaneously measured with each method. There were significant differences between the two methods, with the inclinometer initially explaining 47% of the variance in the motion analysis measurements. However, adding one key anthropometric parameter (lower arm length) to the inclinometer-based trunk flexion prediction model reduced the differences between the two systems and accounted for 79% of the motion analysis method's variance. Although caution must be applied when generalizing lower-arm length as a correction factor, the overall strategy of anthropometric modeling is a novel contribution. In this lifting-based study, by accounting for subject anthropometry, a single, simple data-logging inclinometer shows promise for trunk posture measurement and may have utility in larger-scale field studies where similar types of tasks are performed.
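
    The modeling idea above, adding an anthropometric covariate to the inclinometer-based prediction of motion-analysis flexion and comparing explained variance, can be illustrated with a short regression sketch. The data, coefficients, and covariate below are hypothetical, not the study's measurements.

    ```python
    import numpy as np

    # Predict motion-analysis trunk flexion from the inclinometer angle alone,
    # then with lower-arm length added as a covariate, and compare R^2.
    rng = np.random.default_rng(3)
    n = 36
    inclinometer = rng.uniform(0, 60, n)            # trunk inclination (deg)
    lower_arm = rng.normal(0.27, 0.02, n)           # lower-arm length (m), assumed covariate
    motion_capture = (0.8 * inclinometer + 90 * (lower_arm - 0.27)
                      + rng.normal(0, 3, n))        # "gold standard" flexion (deg), synthetic

    def r_squared(X, y):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    X1 = np.column_stack([np.ones(n), inclinometer])              # inclinometer only
    X2 = np.column_stack([np.ones(n), inclinometer, lower_arm])   # + anthropometry
    print(f"R^2 inclinometer only: {r_squared(X1, motion_capture):.2f}")
    print(f"R^2 with lower-arm length: {r_squared(X2, motion_capture):.2f}")
    ```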

  17. 75 FR 70925 - Office of the Assistant Secretary for Planning and Evaluation; Medicare Program; Meeting of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-19

    ... estimation involving economics and actuarial science. Panelists are not restricted, however, in the topics... actuarial and economic assumptions and methods by which Trustees might more accurately measure health...

  18. 76 FR 558 - Office of the Assistant Secretary for Planning and Evaluation; Medicare Program; Meeting of the...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-05

    ... technical aspects of estimation involving economics and actuarial science. Panelists are not restricted... actuarial and economic assumptions and methods by which Trustees might more accurately measure health...

  19. Playing Hardball with Facilities Expenses.

    ERIC Educational Resources Information Center

    Fickes, Michael

    1997-01-01

    Describes one school district manager's tactics for successfully controlling district costs and increasing capital improvements while only marginally increasing the facilities maintenance budget. Highlights guidelines for controlling personnel requirements and cost-reduction methods. Discusses specific cost-control measures involving telephone…

  20. An optimal open/closed-loop control method with application to a pre-stressed thin duralumin plate

    NASA Astrophysics Data System (ADS)

    Nadimpalli, Sruthi Raju

    The suppression of the excessive vibrations of a pre-stressed duralumin plate by a combination of open-loop and closed-loop controls, also known as open/closed-loop control, is studied in this thesis. The two primary steps involved in this process are: Step (I), under the assumption that the closed-loop control law is proportional, obtain the optimal open-loop control by direct minimization of the performance measure, consisting of the energy at terminal time and a penalty on the open-loop control force, via calculus of variations; if the performance measure also involves a penalty on the closed-loop control effort, a Fourier-based method is utilized. Step (II), minimize the energy at terminal time numerically to obtain optimal values of the feedback gains. The optimal closed-loop control gains obtained are used to describe the displacement and the velocity of the open-loop, closed-loop, and open/closed-loop controlled duralumin plate.

  1. Electrical Versus Optical: Comparing Methods for Detecting Terahertz Radiation Using Neon Lamps

    NASA Astrophysics Data System (ADS)

    Slocombe, L. L.; Lewis, R. A.

    2018-05-01

    Terahertz radiation impinging on a lit neon tube causes additional ionization of the encapsulated gas. As a result, the electrical current flowing between the electrodes increases and the glow discharge in the tube brightens. These dual phenomena suggest two distinct modes of terahertz sensing. The electrical mode simply involves measuring the electrical current. The optical mode involves monitoring the brightness of the weakly ionized plasma glow discharge. Here, we directly compare the two detection modes under identical experimental conditions. We measure 0.1-THz radiation modulated at frequencies in the range 0.1-10 kHz, for lamp currents in the range 1-10 mA. We find that electrical detection provides a superior signal-to-noise ratio while optical detection has a faster response. Either method serves as the basis of a compact, robust, and inexpensive room-temperature detector of terahertz radiation.

  2. Laminar Premixed and Diffusion Flames (Ground-Based Study)

    NASA Technical Reports Server (NTRS)

    Dai, Z.; El-Leathy, A. M.; Lin, K.-C.; Sunderland, P. B.; Xu, F.; Faeth, G. M.; Urban, D. L. (Technical Monitor); Yuan, Z.-G. (Technical Monitor)

    2000-01-01

    Ground-based studies of soot processes in laminar flames proceeded in two phases, considering laminar premixed flames and laminar diffusion flames, in turn. The test arrangement for laminar premixed flames involved round flat flame burners directed vertically upward at atmospheric pressure. The test arrangement for laminar jet diffusion flames involved a round fuel port directed vertically upward with various hydrocarbon fuels burning at atmospheric pressure in air. In both cases, coflow was used to prevent flame oscillations and measurements were limited to the flame axes. The measurements were sufficient to resolve soot nucleation, growth and oxidation rates, as well as the properties of the environment needed to evaluate mechanisms of these processes. The experimental methods used were also designed to maintain capabilities for experimental methods used in corresponding space-based experiments. This section of the report will be limited to consideration of flame structure for both premixed and diffusion flames.

  3. Rapid viscosity measurements of powdered thermosetting resins

    NASA Technical Reports Server (NTRS)

    Price, H. L.; Burks, H. D.; Dalal, S. K.

    1978-01-01

    A rapid and inexpensive method of obtaining processing-related data on powdered thermosetting resins has been investigated. The method involved viscosity measurements obtained with a small-specimen (less than 100 mg) parallel plate plastometer. A data acquisition and reduction system was developed which provided values of viscosity and strain rate at roughly 12- to 13-second intervals during a test. The effects of specimen compaction pressure and of reduced adhesion between the specimen and the parallel plates were examined. The plastometer was used to measure some processing-related viscosity changes of an addition polyimide resin, including changes caused by pre-test heat treatment, test temperature, and strain rate.

  4. Validity and Reliability of Perinatal Biomarkers after Storage as Dry Blood Spots on Paper

    PubMed Central

    Mihalopoulos, Nicole L.; Phillips, Terry M.; Slater, Hillarie; Thomson, J. Anne; Varner, Michael W.; Moyer-Mileur, Laurie J.

    2013-01-01

    Objective To validate use of chip-based immunoaffinity capillary electrophoresis on dry blood spot samples (DBSS) to measure obesity-related cytokines. Methods Chip-based immunoaffinity capillary electrophoresis was used to measure adiponectin, leptin and insulin in serum and DBSS in pregnant women, cord blood, and infant heelstick at birth and 6 weeks. Concordance of measurements was determined with Pearson's correlation. Results We report high concordance between results obtained from serum and DBSS, with the exception of cord blood specimens. Conclusions Ease of sample collection and storage makes DBSS an optimal method for use in studies involving neonates and young children. PMID:21735507

  5. An evaluation of condition indices for birds

    USGS Publications Warehouse

    Johnson, D.H.; Krapu, G.L.; Reinecke, K.J.; Jorde, Dennis G.

    1985-01-01

    A Lipid Index, the ratio of fat to fat-free dry weight, is proposed as a measure of fat stores in birds. The estimation of the index from field measurements of live birds is illustrated with data on the sandhill crane (Grus canadensis) and greater white-fronted goose (Anser albifrons). Of the various methods of assessing fat stores, lipid extraction is the most accurate but also the most involved. Water extraction is a simpler laboratory method that provides a good index to fat and can be calibrated to serve as an estimator. Body weight itself is often inadequate as a condition index, but scaling by morphological measurements can markedly improve its value.

  6. Discriminant content validity: a quantitative methodology for assessing content of theory-based measures, with illustrative applications.

    PubMed

    Johnston, Marie; Dixon, Diane; Hart, Jo; Glidewell, Liz; Schröder, Carin; Pollard, Beth

    2014-05-01

    In studies involving theoretical constructs, it is important that measures have good content validity and that there is no contamination of measures by content from other constructs. While reliability and construct validity are routinely reported, to date, there has not been a satisfactory, transparent, and systematic method of assessing and reporting content validity. In this paper, we describe a methodology of discriminant content validity (DCV) and illustrate its application in three studies. Discriminant content validity involves six steps: construct definition, item selection, judge identification, judgement format, single-sample test of content validity, and assessment of discriminant items. In three studies, these steps were applied to a measure of illness perceptions (IPQ-R) and control cognitions. The IPQ-R performed well with most items being purely related to their target construct, although timeline and consequences had small problems. By contrast, the study of control cognitions identified problems in measuring constructs independently. In the final study, direct estimation response formats for theory of planned behaviour constructs were found to have as good DCV as Likert format. The DCV method allowed quantitative assessment of each item and can therefore inform the content validity of the measures assessed. The methods can be applied to assess content validity before or after collecting data to select the appropriate items to measure theoretical constructs. Further, the data reported for each item in Appendix S1 can be used in item or measure selection. Statement of contribution What is already known on this subject? There are agreed methods of assessing and reporting construct validity of measures of theoretical constructs, but not their content validity. Content validity is rarely reported in a systematic and transparent manner. What does this study add? The paper proposes discriminant content validity (DCV), a systematic and transparent method of assessing and reporting whether items assess the intended theoretical construct and only that construct. In three studies, DCV was applied to measures of illness perceptions, control cognitions, and theory of planned behaviour response formats. Appendix S1 gives content validity indices for each item of each questionnaire investigated. Discriminant content validity is ideally applied while the measure is being developed, before using it to measure the construct(s), but can also be applied after using a measure. © 2014 The British Psychological Society.

  7. Low-cost photonic sensors for carbon dioxide exchange rate measurement

    NASA Astrophysics Data System (ADS)

    Bieda, Marcin S.; Sobotka, Piotr; Lesiak, Piotr; Woliński, Tomasz R.

    2017-10-01

    Carbon dioxide (CO2) measurement has an important role in atmosphere monitoring. Usually, two types of measurements are carried out. The first one is based on gas concentration measurement while the second involves gas exchange rate measurement between the earth surface and the atmosphere [1]. There are several methods which allow gas concentration measurement. However, most of them require expensive instrumentation or large devices (i.e. gas chambers). In order to precisely measure either CO2 concentration or CO2 exchange rate, a sensor network should preferably be used. These sensors must have small dimensions, low power consumption, and they should be cost-effective. Therefore, this creates a great demand for a robust low-power and low-cost CO2 sensor [2,3]. As a solution, we propose a photonic sensor that can measure CO2 concentration and also can be used to measure gas exchange by using the Eddy covariance method [1].

  8. Sheet metals characterization using the virtual fields method

    NASA Astrophysics Data System (ADS)

    Marek, Aleksander; Davis, Frances M.; Pierron, Fabrice

    2018-05-01

    In this work, a characterisation method involving a deep-notched specimen subjected to a tensile loading is introduced. This specimen leads to heterogeneous states of stress and strain, the latter being measured using a stereo DIC system (MatchID). This heterogeneity enables the identification of multiple material parameters in a single test. In order to identify material parameters from the DIC data, an inverse method called the Virtual Fields Method is employed. The method, combined with recently developed sensitivity-based virtual fields, makes it possible to optimally locate areas in the test where information about each material parameter is encoded, improving the accuracy of the identification over traditional user-defined virtual fields. It is shown that a single test performed at 45° to the rolling direction is sufficient to obtain all anisotropic plastic parameters, thus reducing the experimental effort involved in characterisation. The paper presents the methodology and some numerical validation.

  9. Chapter 16: Retrocommissioning Evaluation Protocol. The Uniform Methods Project: Methods for Determining Energy Efficiency Savings for Specific Measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurnik, Charles W.; Tiessen, Alex

    Retrocommissioning (RCx) is a systematic process for optimizing energy performance in existing buildings. It specifically focuses on improving the control of energy-using equipment (e.g., heating, ventilation, and air conditioning [HVAC] equipment and lighting) and typically does not involve equipment replacement. Field results have shown proper RCx can achieve energy savings ranging from 5 percent to 20 percent, with a typical payback of two years or less (Thorne 2003). The method presented in this protocol provides direction regarding: (1) how to account for each measure's specific characteristics and (2) how to choose the most appropriate savings verification approach.

  10. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  11. Combining classifiers to predict gene function in Arabidopsis thaliana using large-scale gene expression measurements.

    PubMed

    Lan, Hui; Carson, Rachel; Provart, Nicholas J; Bonner, Anthony J

    2007-09-21

    Arabidopsis thaliana is the model species of current plant genomic research with a genome size of 125 Mb and approximately 28,000 genes. The function of half of these genes is currently unknown. The purpose of this study is to infer gene function in Arabidopsis using machine-learning algorithms applied to large-scale gene expression data sets, with the goal of identifying genes that are potentially involved in plant response to abiotic stress. Using in house and publicly available data, we assembled a large set of gene expression measurements for A. thaliana. Using those genes of known function, we first evaluated and compared the ability of basic machine-learning algorithms to predict which genes respond to stress. Predictive accuracy was measured using ROC50 and precision curves derived through cross validation. To improve accuracy, we developed a method for combining these classifiers using a weighted-voting scheme. The combined classifier was then trained on genes of known function and applied to genes of unknown function, identifying genes that potentially respond to stress. Visual evidence corroborating the predictions was obtained using electronic Northern analysis. Three of the predicted genes were chosen for biological validation. Gene knockout experiments confirmed that all three are involved in a variety of stress responses. The biological analysis of one of these genes (At1g16850) is presented here, where it is shown to be necessary for the normal response to temperature and NaCl. Supervised learning methods applied to large-scale gene expression measurements can be used to predict gene function. However, the ability of basic learning methods to predict stress response varies widely and depends heavily on how much dimensionality reduction is used. Our method of combining classifiers can improve the accuracy of such predictions - in this case, predictions of genes involved in stress response in plants - and it effectively chooses the appropriate amount of dimensionality reduction automatically. The method provides a useful means of identifying genes in A. thaliana that potentially respond to stress, and we expect it would be useful in other organisms and for other gene functions.
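
    A minimal sketch of the weighted-voting idea described above, assuming each base classifier outputs a score in [0, 1] per gene and each weight reflects that classifier's cross-validated performance; the function names and the simple weighted-average scheme are illustrative assumptions, not the authors' exact formulation.

        import numpy as np

        def combine_classifiers(scores, weights):
            """Weighted-voting combination of per-gene stress-response scores.

            scores  : array of shape (n_classifiers, n_genes), each entry in [0, 1]
            weights : array of shape (n_classifiers,), e.g. cross-validated accuracies
            Returns one combined score per gene (higher = more likely stress-responsive).
            """
            scores = np.asarray(scores, dtype=float)
            w = np.asarray(weights, dtype=float)
            w = w / w.sum()                      # normalize so the weights sum to 1
            return w @ scores                    # weighted-average vote per gene

        # Hypothetical usage: three classifiers scoring four genes
        combined = combine_classifiers(
            scores=[[0.9, 0.2, 0.7, 0.1],
                    [0.8, 0.3, 0.6, 0.2],
                    [0.7, 0.1, 0.9, 0.4]],
            weights=[0.82, 0.75, 0.68])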

  12. Differentiation of involved and uninvolved psoriatic skin from healthy skin using noninvasive visual, colorimeter and evaporimeter methods.

    PubMed

    Pershing, L K; Bakhtian, S; Wright, E D; Rallis, T M

    1995-08-01

    Uninvolved skin of psoriasis may not be entirely normal. The object was to characterize healthy, uninvolved psoriatic skin and lesional skin by biophysical methods. Involved and uninvolved psoriatic and age-gender matched healthy skin was measured objectively with a colorimeter and evaporimeter and subjectively with visual assessment in 14 subjects. Visual assessment of erythema (E), scaling (S) and induration (I) as well as the target lesion score at the involved psoriatic skin sites were significantly elevated (p<0.05) above uninvolved psoriatic or healthy skin sites. No difference between uninvolved psoriatic and healthy skin was measured visually. Transepidermal water loss at involved psoriatic skin > uninvolved psoriatic skin > healthy skin (p<0.05). Objective assessment of skin color in 3 color scales, L*, a*, and b*, differentiated involved and uninvolved psoriatic skin from healthy skin sites. Involved psoriatic skin demonstrated higher (p<0.01) a-scale values and lower (p<0.01) L* and b* scale values than uninvolved psoriatic skin. Further, colorimeter L* and a* scale values at uninvolved psoriatic skin sites were lower and higher (p<0.05), respectively, than healthy skin. The individual chromameter parameters (L*, a*, b*) correlated well with the visual parameters (E, S and I). The composite colorimeter description (L* × b*)/a* significantly differentiated healthy skin from both involved and uninvolved psoriatic skin. These collective data highlight that even visually appearing uninvolved psoriatic skin is compromised compared with healthy skin. These objective, noninvasive but differential capabilities of the colorimeter and evaporimeter will aid in the mechanistic quantification of new psoriatic drug therapies and, in conjunction with biochemical studies, add to understanding of the multifactorial pathogenesis of psoriasis.

  13. Fruit Quality Evaluation Using Spectroscopy Technology: A Review

    PubMed Central

    Wang, Hailong; Peng, Jiyu; Xie, Chuanqi; Bao, Yidan; He, Yong

    2015-01-01

    An overview is presented with regard to applications of visible and near infrared (Vis/NIR) spectroscopy, multispectral imaging and hyperspectral imaging techniques for quality attributes measurement and variety discrimination of various fruit species, i.e., apple, orange, kiwifruit, peach, grape, strawberry, jujube, banana, mango and others. Some commonly utilized chemometrics including pretreatment methods, variable selection methods, discriminant methods and calibration methods are briefly introduced. The comprehensive review of applications, which concentrates primarily on Vis/NIR spectroscopy, is arranged according to fruit species. Most of the applications are focused on variety discrimination or the measurement of soluble solids content (SSC), acidity and firmness, but some measurements involving dry matter, vitamin C, polyphenols and pigments have also been reported. The feasibility of different spectral modes, i.e., reflectance, interactance and transmittance, is discussed. Optimal variable selection methods and calibration methods for measuring different attributes of different fruit species are addressed. Special attention is paid to sample preparation and the influence of the environment. Areas where further investigation is needed and problems concerning model robustness and model transfer are identified. PMID:26007736

  14. Gas permeability measurements for film envelope materials

    DOEpatents

    Ludtka, G.M.; Kollie, T.G.; Watkin, D.C.; Walton, D.G.

    1998-05-12

    Method and apparatus for measuring the permeability of polymer film materials such as used in super-insulation powder-filled evacuated panels (PEPs) reduce the time required for testing from several years to weeks or months. The method involves substitution of a solid non-outgassing body having a free volume of between 0% and 25% of its total volume for the usual powder in the PEP to control the free volume of the "body-filled panel." Pressure versus time data for the test piece permit extrapolation to obtain long term performance of the candidate materials. 4 figs.

  15. Gas permeability measurements for film envelope materials

    DOEpatents

    Ludtka, Gerard M.; Kollie, Thomas G.; Watkin, David C.; Walton, David G.

    1998-01-01

    Method and apparatus for measuring the permeability of polymer film materials such as used in super-insulation powder-filled evacuated panels (PEPs) reduce the time required for testing from several years to weeks or months. The method involves substitution of a solid non-outgassing body having a free volume of between 0% and 25% of its total volume for the usual powder in the PEP to control the free volume of the "body-filled panel". Pressure versus time data for the test piece permit extrapolation to obtain long term performance of the candidate materials.

  16. Gross anatomy of network security

    NASA Technical Reports Server (NTRS)

    Siu, Thomas J.

    2002-01-01

    Information security involves many branches of effort, including information assurance, host level security, physical security, and network security. Computer network security methods and implementations are given a top-down description to permit a medically focused audience to anchor this information to their daily practice. The depth of detail of network functionality and security measures, like that of the study of human anatomy, can be highly involved. Presented at the level of major gross anatomical systems, this paper will focus on network backbone implementation and perimeter defenses, then diagnostic tools, and finally the user practices (the human element). Physical security measures, though significant, have been defined as beyond the scope of this presentation.

  17. Problems of millipound thrust measurement. The "Hansen Suspension"

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carta, David G.

    Considered in detail are problems which led to the need and use of the 'Hansen Suspension'. Also discussed are problems which are likely to be encountered in any low level thrust measuring system. The methods of calibration and the accuracies involved are given careful attention. With all parameters optimized and calibration techniques perfected, the system was found capable of a resolution of 10 µlb. A comparison of thrust measurements made by the 'Hansen Suspension' with measurements of a less sophisticated device leads to some surprising results.

  18. Measurement of thermal conductivity and thermal diffusivity using a thermoelectric module

    NASA Astrophysics Data System (ADS)

    Beltrán-Pitarch, Braulio; Márquez-García, Lourdes; Min, Gao; García-Cañadas, Jorge

    2017-04-01

    A proof of concept of using a thermoelectric module to measure both thermal conductivity and thermal diffusivity of bulk disc samples at room temperature is demonstrated. The method involves the calculation of the integral area from an impedance spectrum, which empirically correlates with the thermal properties of the sample through an exponential relationship. This relationship was obtained employing different reference materials. The impedance spectroscopy measurements are performed in a very simple setup, comprising a thermoelectric module, which is soldered at its bottom side to a Cu block (heat sink) and thermally connected with the sample at its top side employing thermal grease. Random and systematic errors of the method were calculated for the thermal conductivity (18.6% and 10.9%, respectively) and thermal diffusivity (14.2% and 14.7%, respectively) employing a BCR724 standard reference material. Although errors are somewhat high, the technique could be useful for screening purposes or high-throughput measurements in its current state. This new method establishes a new application for thermoelectric modules as thermal properties sensors. It involves the use of a very simple setup in conjunction with a frequency response analyzer, which provides a low cost alternative to most of the currently available apparatus on the market. In addition, impedance analyzers are reliable and widespread equipment, which facilitates the sometimes difficult access to thermal conductivity facilities.
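
    The data reduction described above can be sketched as two steps: integrate the impedance spectrum in the complex plane to obtain the enclosed "integral area", then convert that area to a thermal property through an empirically fitted calibration. The exponential form and all parameter names below are assumptions for illustration, not the authors' published fit.

        import numpy as np

        def impedance_integral_area(z_real_ohm, z_imag_ohm):
            """Area enclosed by the impedance spectrum in the complex plane (trapezoidal rule)."""
            return np.abs(np.trapz(z_imag_ohm, x=z_real_ohm))

        def thermal_conductivity_from_area(area, a, b):
            """Hypothetical exponential calibration k = a * exp(b * area),
            with a and b fitted beforehand from reference materials."""
            return a * np.exp(b * area)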

  19. Determination of the human spine curve based on laser triangulation.

    PubMed

    Poredoš, Primož; Čelan, Dušan; Možina, Janez; Jezeršek, Matija

    2015-02-05

    The main objective of the present method was to automatically obtain a spatial curve of the thoracic and lumbar spine based on a 3D shape measurement of a human torso with developed scoliosis. Manual determination of the spine curve, which was based on palpation of the thoracic and lumbar spinous processes, was found to be an appropriate way to validate the method. Therefore a new, noninvasive, optical 3D method for human torso evaluation in medical practice is introduced. Twenty-four patients with confirmed clinical diagnosis of scoliosis were scanned using a specially developed 3D laser profilometer. The measuring principle of the system is based on laser triangulation with one-laser-plane illumination. The measurement took approximately 10 seconds at 700 mm of the longitudinal translation along the back. The single point measurement accuracy was 0.1 mm. Computer analysis of the measured surface returned two 3D curves. The first curve was determined by manual marking (manual curve), and the second was determined by detecting surface curvature extremes (automatic curve). The manual and automatic curve comparison was given as the root mean square deviation (RMSD) for each patient. The intra-operator study involved assessing 20 successive measurements of the same person, and the inter-operator study involved assessing measurements from 8 operators. The results obtained for the 24 patients showed that the typical RMSD between the manual and automatic curve was 5.0 mm in the frontal plane and 1.0 mm in the sagittal plane, which is a good result compared with palpatory accuracy (9.8 mm). The intra-operator repeatability of the presented method in the frontal and sagittal planes was 0.45 mm and 0.06 mm, respectively. The inter-operator repeatability assessment shows that the presented method is invariant to the operator of the computer program. The main novelty of the presented paper is the development of a new, non-contact method that provides a quick, precise and non-invasive way to determine the spatial spine curve for patients with developed scoliosis and the validation of the presented method using the palpation of the spinous processes, where no harmful ionizing radiation is present.
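
    A minimal sketch of the curve-comparison metric used above: the root mean square deviation between the manually marked and automatically detected curves, evaluated separately in the frontal and sagittal planes after both curves have been resampled at the same vertical positions. Variable names are illustrative.

        import numpy as np

        def rmsd(a, b):
            """Root-mean-square deviation between two equally sampled coordinate arrays (mm)."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            return float(np.sqrt(np.mean((a - b) ** 2)))

        # Hypothetical usage: curves resampled at the same heights z, with
        # x = lateral (frontal-plane) and y = anterior-posterior (sagittal-plane) coordinates.
        # rmsd(manual_x, auto_x)  -> frontal-plane deviation
        # rmsd(manual_y, auto_y)  -> sagittal-plane deviation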

  20. Characterization of Bond Strength of U-Mo Fuel Plates Using the Laser Shockwave Technique: Capabilities and Preliminary Results

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. A. Smith; D. L. Cottle; B. H. Rabin

    2013-09-01

    This report summarizes work conducted to-date on the implementation of new laser-based capabilities for characterization of bond strength in nuclear fuel plates, and presents preliminary results obtained from fresh fuel studies on as-fabricated monolithic fuel consisting of uranium-10 wt.% molybdenum alloys clad in 6061 aluminum by hot isostatic pressing. Characterization involves application of two complementary experimental methods, laser-shock testing and laser-ultrasonic imaging, collectively referred to as the Laser Shockwave Technique (LST), that allows the integrity, physical properties and interfacial bond strength in fuel plates to be evaluated. Example characterization results are provided, including measurement of layer thicknesses, elastic properties of the constituents, and the location and nature of generated debonds (including kissing bonds). LST provides spatially localized, non-contacting measurements with minimum specimen preparation, and is ideally suited for applications involving radioactive materials, including irradiated materials. The theoretical principles and experimental approaches employed in characterizing nuclear fuel plates are described, and preliminary bond strength measurement results are discussed, with emphasis on demonstrating the capabilities and limitations of these methods. These preliminary results demonstrate the ability to distinguish bond strength variations between different fuel plates. Although additional development work is necessary to validate and qualify the test methods, these results suggest LST is viable as a method to meet fuel qualification requirements to demonstrate acceptable bonding integrity.

  1. Optical measurement of isolated canine lung filtration coefficients at normal hematocrits.

    PubMed

    Klaesner, J W; Pou, N A; Parker, R E; Finney, C; Roselli, R J

    1997-12-01

    In this study, lung filtration coefficient (Kfc) values were measured in eight isolated canine lung preparations at normal hematocrit values using three methods: gravimetric, blood-corrected gravimetric, and optical. The lungs were kept in zone 3 conditions and subjected to an average venous pressure increase of 10.24 +/- 0.27 (SE) cmH2O. The resulting Kfc (ml . min-1 . cmH2O-1 . 100 g dry lung wt-1) measured with the gravimetric technique was 0.420 +/- 0.017, which was statistically different from the Kfc measured by the blood-corrected gravimetric method (0.273 +/- 0.018) or the product of the reflection coefficient (sigmaf) and Kfc measured optically (0.272 +/- 0.018). The optical method involved the use of a Cellco filter cartridge to separate red blood cells from plasma, which allowed measurement of the concentration of the tracer in plasma at normal hematocrits (34 +/- 1.5). The permeability-surface area product was measured using radioactive multiple indicator-dilution methods before, during, and after venous pressure elevations. Results showed that the surface area of the lung did not change significantly during the measurement of Kfc. These studies suggest that sigmafKfc can be measured optically at normal hematocrits, that this measurement is not influenced by blood volume changes that occur during the measurement, and that the optical sigmafKfc agrees with the Kfc obtained via the blood-corrected gravimetric method.

  2. Josephson frequency meter for millimeter and submillimeter wavelengths

    NASA Technical Reports Server (NTRS)

    Anischenko, S. E.; Larkin, S. Y.; Chaikovsky, V. I.; Kabayev, P. V.; Kamyshin, V. V.

    1995-01-01

    Frequency measurements of electromagnetic oscillations in the millimeter and submillimeter wavebands become increasingly difficult as frequency grows, for a number of reasons. First, these frequencies are near the cutoffs of semiconductor converting devices, so optical measurement methods must be used instead of traditional frequency-transfer methods. Second, resonance measurement methods are limited to relatively narrow bands, while optical methods are limited in frequency and time resolution by the restricted range and velocity of movement of their mechanical elements, and their efficiency decreases with increasing wavelength because of diffraction losses. These methods also require a priori information on the radiation frequency band of the source involved. A method of measuring the frequency of harmonic microwave signals in the millimeter and submillimeter wavebands, based on the ac Josephson effect in superconducting contacts, is free of all the above drawbacks. This approach offers a number of major advantages over the more traditional measurement methods, that is, those based on frequency conversion, resonance, and interferometric techniques. It is characterized by high potential accuracy, a wide range of measurable frequencies, prompt measurement, the possibility of a panoramic display of the results, and full automation of the measuring process.

  3. Rapid measurement of plasma free fatty acid concentration and isotopic enrichment using LC/MS

    PubMed Central

    Persson, Xuan-Mai T.; Błachnio-Zabielska, Agnieszka Urszula; Jensen, Michael D.

    2010-01-01

    Measurements of plasma free fatty acid (FFA) concentration and isotopic enrichment are commonly used to evaluate FFA metabolism. Until now, gas chromatography-combustion-isotope ratio mass spectrometry (GC/C/IRMS) was the best method to measure isotopic enrichment in the methyl derivatives of 13C-labeled fatty acids. Although IRMS is excellent for analyzing enrichment, it requires time-consuming derivatization steps and is not optimal for measuring FFA concentrations. We developed a new, rapid, and reliable method for simultaneous quantification of 13C-labeled fatty acids in plasma using high-performance liquid chromatography-mass spectrometry (HPLC/MS). This method involves a very quick Dole extraction procedure and direct injection of the samples on the HPLC system. After chromatographic separation, the samples are directed to the mass spectrometer for electrospray ionization (ESI) and analysis in the negative mode using single ion monitoring. By employing equipment with two columns connected in parallel to a mass spectrometer, we can double the throughput to the mass spectrometer, reducing the analysis time per sample to 5 min. Palmitate flux measured using this approach agreed well with the GC/C/IRMS method. This HPLC/MS method provides accurate and precise measures of FFA concentration and enrichment. PMID:20526002

  4. The Social Life of Learning Analytics: Cluster Analysis and the 'Performance' of Algorithmic Education

    ERIC Educational Resources Information Center

    Perrotta, Carlo; Williamson, Ben

    2018-01-01

    This paper argues that methods used for the classification and measurement of online education are not neutral and objective, but involved in the creation of the educational realities they claim to measure. In particular, the paper draws on material semiotics to examine cluster analysis as a 'performative device' that, to a significant extent,…

  5. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

  6. Robust Coefficients Alpha and Omega and Confidence Intervals with Outlying Observations and Missing Data: Methods and Software

    ERIC Educational Resources Information Center

    Zhang, Zhiyong; Yuan, Ke-Hai

    2016-01-01

    Cronbach's coefficient alpha is a widely used reliability measure in social, behavioral, and education sciences. It is reported in nearly every study that involves measuring a construct through multiple items. With non-tau-equivalent items, McDonald's omega has been used as a popular alternative to alpha in the literature. Traditional estimation…

  7. Life Span as the Measure of Performance and Learning in a Business Gaming Simulation

    ERIC Educational Resources Information Center

    Thavikulwat, Precha

    2012-01-01

    This study applies the learning curve method of measuring learning to participants of a computer-assisted business gaming simulation that includes a multiple-life-cycle feature. The study involved 249 participants. It verified the workability of the feature and estimated the participants' rate of learning at 17.4% for every doubling of experience.…

  8. Tactile Acuity Charts: A Reliable Measure of Spatial Acuity

    PubMed Central

    Bruns, Patrick; Camargo, Carlos J.; Campanella, Humberto; Esteve, Jaume; Dinse, Hubert R.; Röder, Brigitte

    2014-01-01

    For assessing tactile spatial resolution it has recently been recommended to use tactile acuity charts which follow the design principles of the Snellen letter charts for visual acuity and involve active touch. However, it is currently unknown whether acuity thresholds obtained with this newly developed psychophysical procedure are in accordance with established measures of tactile acuity that involve passive contact with fixed duration and control of contact force. Here we directly compared tactile acuity thresholds obtained with the acuity charts to traditional two-point and grating orientation thresholds in a group of young healthy adults. For this purpose, two types of charts, using either Braille-like dot patterns or embossed Landolt rings with different orientations, were adapted from previous studies. Measurements with the two types of charts were equivalent, but generally more reliable with the dot pattern chart. A comparison with the two-point and grating orientation task data showed that the test-retest reliability of the acuity chart measurements after one week was superior to that of the passive methods. Individual thresholds obtained with the acuity charts agreed reasonably with the grating orientation threshold, but less so with the two-point threshold that yielded relatively distinct acuity estimates compared to the other methods. This potentially considerable amount of mismatch between different measures of tactile acuity suggests that tactile spatial resolution is a complex entity that should ideally be measured with different methods in parallel. The simple test procedure and high reliability of the acuity charts makes them a promising complement and alternative to the traditional two-point and grating orientation thresholds. PMID:24504346

  9. Evaluating the methods used for measuring cerebral blood flow at rest and during exercise in humans.

    PubMed

    Tymko, Michael M; Ainslie, Philip N; Smith, Kurt J

    2018-05-16

    The first accounts of measuring cerebral blood flow (CBF) in humans were made by Angelo Mosso in ~1880, who recorded brain pulsations in patients with skull defects. In 1890, Charles Roy and Charles Sherrington determined in animals that brain pulsations-assessed via a similar method used by Mosso-were altered during a variety of stimuli including sensory nerve stimulation, asphyxia, and pharmacological interventions. Between 1880 and 1944, measurements of CBF in humans typically relied on skull abnormalities. Thereafter, Kety and Schmidt introduced a new methodological approach in 1945 that involved nitrous oxide dilution combined with serial arterial and jugular venous blood sampling. Less than a decade later (1950's), several research groups employed the Kety-Schmidt technique to assess the effects of exercise on global CBF and metabolism; these studies demonstrated an uncoupling of CBF and metabolism during exercise, which was contrary to early hypotheses. However, there were several limitations to this technique related to low temporal resolution and the inability to measure regional CBF. These limitations were overcome in the 1960's when transcranial Doppler ultrasound (TCD) was developed as a method to measure beat-by-beat cerebral blood velocity. Between 1990 and 2010, TCD further progressed our understanding of CBF regulation and allowed for insight into other mechanistic factors, independent of local metabolism, involved in regulating CBF during exercise. Recently, it was discovered that TCD may not be accurate under several physiological conditions. Other measures of indexing CBF such as Duplex ultrasound and magnetic resonance imaging, although not without some limitations, may be more applicable for future investigations.

  10. Study on the effect of measuring methods on incident photon-to-electron conversion efficiency of dye-sensitized solar cells by home-made setup.

    PubMed

    Guo, Xiao-Zhi; Luo, Yan-Hong; Zhang, Yi-Duo; Huang, Xiao-Chun; Li, Dong-Mei; Meng, Qing-Bo

    2010-10-01

    An experimental setup is built for the measurement of monochromatic incident photon-to-electron conversion efficiency (IPCE) of solar cells. With this setup, three kinds of IPCE measuring methods as well as the convenient switching between them are achieved. The setup can also measure the response time and waveform of the short-circuit current of a solar cell. Using this setup, IPCE results of dye-sensitized solar cells (DSCs) are determined and compared under different illumination conditions with each method. It is found that the IPCE values measured by the AC method involving the lock-in technique are strongly influenced by modulation frequency and bias illumination. Measurements of the response time and waveform of the short-circuit current have revealed that this effect can be explained by the slow response of DSCs. To get accurate IPCE values by this method, the measurement should be carried out with a low modulation frequency and under bias illumination. The IPCE values measured by the DC method under bias light illumination will be disturbed since the short-circuit current increases continuously with time due to the temperature rise of the DSC. Therefore, temperature control of the DSC is considered necessary for IPCE measurement, especially in the DC method with bias light illumination. Additionally, high bias light intensity (>2 sun) is found to decrease the IPCE values due to the ion transport limitation of the electrolyte.

  11. [Metacognition in psychotic disorders: from concepts to intervention].

    PubMed

    de Jong, S; van Donkersgoed, R J M; Arends, J; Lysaker, P H; Wunderink, L; van der Gaag, M; Aleman, A; Pijnenborg, G H M

    2016-01-01

    Persons with a psychotic disorder commonly experience difficulties with what is considered to be metacognitive capacity. In this article we discuss several definitions of this concept, the measurement instruments involved and the clinical interventions that target this concept. To present a review of various frequently used definitions of metacognition and related concepts and to describe the measurement instruments involved and the treatment options available for improving the metacognitive capacity of persons with a psychotic disorder. We present an overview of several definitions of metacognition in psychotic disorders and we discuss frequently used measurement instruments and treatment options. The article focuses on recent developments in a model devised by Semerari et al. The measurement instrument involved (Metacognition Assessment Scale - A) is discussed in terms of it being an addition to existing methods. On the basis of the literature it appears that metacognition and related concepts are measurable constructs, although definitions and instruments vary considerably. The new conceptualisation of social information processing also leads to the development of a new form of psychotherapy that aims to help patients suffering from psychotic disorders to improve metacognitive capacity. There seems to be evidence that metacognitive abilities are a possible target for treatment, but further research is needed.

  12. A hybrid method combining the surface integral equation method and ray tracing for the numerical simulation of high frequency diffraction involved in ultrasonic NDT

    NASA Astrophysics Data System (ADS)

    Bonnet, M.; Collino, F.; Demaldent, E.; Imperiale, A.; Pesudo, L.

    2018-05-01

    Ultrasonic Non-Destructive Testing (US NDT) has become widely used in various fields of applications to probe media. By exploiting surface measurements of the echoes of the incident ultrasonic waves after their propagation through the medium, it allows potential defects (cracks and inhomogeneities) to be detected and the medium to be characterized. The understanding and interpretation of those experimental measurements is performed with the help of numerical modeling and simulations. However, classical numerical methods can become computationally very expensive for the simulation of wave propagation in the high frequency regime. On the other hand, asymptotic techniques are better suited to model high frequency scattering over large distances but nevertheless do not allow accurate simulation of complex diffraction phenomena. Thus, neither numerical nor asymptotic methods can individually solve high frequency diffraction problems in large media, such as those involved in US NDT inspections, both quickly and accurately, but their advantages and limitations are complementary. Here we propose a hybrid strategy coupling the surface integral equation method and the ray tracing method to simulate high frequency diffraction under speed and accuracy constraints. This strategy is general and applicable to simulate diffraction phenomena in acoustic or elastodynamic media. We provide its implementation and investigate its performances for the 2D acoustic diffraction problem. The main features of this hybrid method are described and results of 2D computational experiments are discussed.

  13. Quantitative Balance and Gait Measurement in Patients with Frontotemporal Dementia and Alzheimer Diseases: A Pilot Study.

    PubMed

    Velayutham, Selva Ganapathy; Chandra, Sadanandavalli Retnaswami; Bharath, Srikala; Shankar, Ravi Girikamatha

    2017-01-01

    Alzheimer's disease (AD) and frontotemporal dementia (FTD) are common neurodegenerative dementias with a wide prevalence. Falls are a common cause of morbidity in these patients. Identifying subclinical involvement of balance and gait might serve as a tool in the differential analysis of these conditions and also help in planning strategies to prevent falls. Eight age- and gender-matched patients in each group were compared with normal controls. Standardized methods of gait and balance assessment were applied to all participants. Results revealed subclinical involvement of gait and balance in all groups, especially during divided attention, and the parameters were significantly more affected in patients. Patients with AD and FTD showed involvement of the overall ambulation index; balance was more affected in AD patients, whereas FTD patients showed step-cycle and stride-length abnormalities. Balance and gait are involved in normal ageing as well as in patients with AD and FTD. The pattern of involvement in AD correlates with involvement of the WHERE pathway, and in FTD with involvement of frontal subcortical circuits. Identifying these differential patterns of involvement at the subclinical stage might help to differentiate normal ageing from the different types of cortical dementias. This could serve as an additional biomarker and also assist in initiating appropriate training methods to prevent future falls.

  14. Statistically qualified neuro-analytic failure detection method and system

    DOEpatents

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    2002-03-02

    An apparatus and method for monitoring a process involve development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two stages: deterministic model adaption and stochastic model modification of the deterministic model adaptation. Deterministic model adaption involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model modification involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system. Illustrative of the method and apparatus, the method is applied to a peristaltic pump system.

  15. The Impact of Direct Involvement I and Direct Involvement II Experiences on Secondary School Students' Social Capital, as Measured by Co-Cognitive Factors of the Operation Houndstooth Intervention Theory

    ERIC Educational Resources Information Center

    Sands, Michelle M.; Heilbronner, Nancy N.

    2014-01-01

    A mixed-methods study grounded in Renzulli's Operation Houndstooth Intervention Theory examined the impact of different types of volunteer experiences on the six co-cognitive factors (Optimism, Courage, Romance With a Topic/Discipline, Sensitivity to Human Concerns, Physical/Mental Energy, and Vision/Sense of Destiny) associated with the…

  16. In Search of Easy-to-Use Methods for Calibrating ADCP's for Velocity and Discharge Measurements

    USGS Publications Warehouse

    Oberg, K.; ,

    2002-01-01

    A cost-effective procedure for calibrating acoustic Doppler current profilers (ADCP) in the field was presented. The advantages and disadvantages of various methods which are used for calibrating ADCP were discussed. The proposed method requires the use of differential global positioning system (DGPS) with sub-meter accuracy and standard software for collecting ADCP data. The method involves traversing a long (400-800 meter) course at a constant compass heading and speed, while collecting simultaneous DGPS and ADCP data.
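
    The record does not spell out the data reduction for such a constant-heading, constant-speed run, but one plausible sketch (assuming the comparison is between DGPS-derived and ADCP bottom-track boat velocities averaged over the course) is to compute a mean velocity vector from each instrument and report their speed ratio and heading offset. All names below are hypothetical.

        import numpy as np

        def mean_velocity(east_m, north_m, t_s):
            """Mean boat velocity vector (east, north) in m/s over a straight course."""
            dt = t_s[-1] - t_s[0]
            return np.array([(east_m[-1] - east_m[0]) / dt,
                             (north_m[-1] - north_m[0]) / dt])

        def compare_adcp_to_dgps(v_adcp, v_dgps):
            """Speed ratio and heading offset (deg) between ADCP and DGPS mean velocities."""
            ratio = np.linalg.norm(v_adcp) / np.linalg.norm(v_dgps)
            heading_adcp = np.degrees(np.arctan2(v_adcp[1], v_adcp[0]))
            heading_dgps = np.degrees(np.arctan2(v_dgps[1], v_dgps[0]))
            return ratio, heading_adcp - heading_dgps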

  17. A Method of Flight Measurement of Spins

    NASA Technical Reports Server (NTRS)

    Soule, Hartley A; Scudder, Nathan F

    1932-01-01

    A method is described involving the use of recording turn meters and accelerometers and a sensitive altimeter, by means of which all of the physical quantities necessary for the complete determination of the flight path, motion, attitude, forces, and couples of a fully developed spin can be obtained in flight. Data are given for several spins of two training type airplanes which indicate that the accuracy of the results obtained with the method is satisfactory.

  18. Analysis of Water Volume Changes and Temperature Measurement Location Effect to the Accuracy of RTP Power Calibration

    NASA Astrophysics Data System (ADS)

    Lanyau, T.; Hamzah, N. S.; Jalal Bayar, A. M.; Karim, J. Abdul; Phongsakorn, P. K.; Suhaimi, K. Mohammad; Hashim, Z.; Razi, H. Md; Fazli, Z. Mohd; Ligam, A. S.; Mustafa, M. K. A.

    2018-01-01

    Power calibration is one of the important aspects of safe operation of the reactor. In RTP, the calorimetric method has been applied in reactor power calibration. This method involves measurement of the water temperature in the RTP tank. The water volume and the location of the temperature measurement may play an important role in the accuracy of the measurement. In this study, an analysis of the effect of water volume changes and thermocouple location on the power calibration accuracy has been performed. The changes in water volume are controlled by varying the water level in the reactor tank. The water level is measured by an ultrasonic measurement device. Temperature measurement has been done by thermocouples placed at three different locations. The accuracy of the temperature trend from the various measurement conditions has been determined and discussed in this paper.
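
    A worked sketch of the calorimetric reduction implied above: reactor power is estimated from the rate of rise of the tank-water temperature, P ≈ m·c_p·dT/dt, where the water mass m follows directly from the measured volume, which is why both volume and thermocouple placement affect the result. The constants, names and linear-slope reduction below are illustrative assumptions, not RTP-specific values.

        import numpy as np

        def calorimetric_power_kw(time_s, temp_c, water_volume_m3,
                                  rho_kg_m3=998.0, cp_j_kg_k=4186.0):
            """Estimate reactor power (kW) from the slope of a tank-water heat-up curve.

            Assumes negligible heat losses over the measurement window; density,
            heat capacity and the linear fit are illustrative assumptions.
            """
            slope_k_per_s = np.polyfit(time_s, temp_c, 1)[0]   # dT/dt from a linear fit
            mass_kg = rho_kg_m3 * water_volume_m3
            return mass_kg * cp_j_kg_k * slope_k_per_s / 1e3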

  19. Mapping Capacitive Coupling Among Pixels in a Sensor Array

    NASA Technical Reports Server (NTRS)

    Seshadri, Suresh; Cole, David M.; Smith, Roger M.

    2010-01-01

    An improved method of mapping the capacitive contribution to cross-talk among pixels in an imaging array of sensors (typically, an imaging photodetector array) has been devised for use in calibrating and/or characterizing such an array. The method involves a sequence of resets of subarrays of pixels to specified voltages and measurement of the voltage responses of neighboring non-reset pixels.
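
    A minimal sketch of the reset-and-measure idea, under the assumption that a sparse subarray of pixels is stepped by a known reset voltage and the induced change on each non-reset neighbour is read out; the ratio of neighbour response to reset step then approximates the coupling fraction. Array names and the single-step analysis are illustrative assumptions, not the calibrated procedure itself.

        import numpy as np

        def coupling_map(frame_before, frame_after, reset_mask, delta_v_reset):
            """Fractional capacitive coupling of non-reset pixels to a reset-pixel voltage step.

            frame_before/after : 2-D voltage frames bracketing the reset event
            reset_mask         : boolean array, True where pixels were deliberately reset
            delta_v_reset      : magnitude of the applied reset step (V)
            """
            response = frame_after - frame_before
            return np.where(reset_mask, np.nan, response / delta_v_reset)  # NaN at reset pixels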

  20. Devices and tasks involved in the objective assessment of standing dynamic balancing - A systematic literature review.

    PubMed

    Petró, Bálint; Papachatzopoulou, Alexandra; Kiss, Rita M

    2017-01-01

    Static balancing assessment is often complemented with dynamic balancing tasks. Numerous dynamic balancing assessment methods have been developed in recent decades with their corresponding balancing devices and tasks. The aim of this systematic literature review is to identify and categorize existing objective methods of standing dynamic balancing ability assessment with an emphasis on the balancing devices and tasks being used. Three major scientific literature databases (Science Direct, Web of Science, PLoS ONE) and additional sources were used. Studies had to use a dynamic balancing device and a task described in detail. Evaluation had to be based on objectively measureable parameters. Functional tests without instrumentation evaluated exclusively by a clinician were excluded. A total of 63 articles were included. The data extracted during full-text assessment were: author and date; the balancing device with the balancing task and the measured parameters; the health conditions, size, age and sex of participant groups; and follow-up measurements. A variety of dynamic balancing assessment devices were identified and categorized as 1) Solid ground, 2) Balance board, 3) Rotating platform, 4) Horizontal translational platform, 5) Treadmill, 6) Computerized Dynamic Posturography, and 7) Other devices. The group discrimination ability of the methods was explored and the conclusions of the studies were briefly summarized. Due to the wide scope of this search, it provides an overview of balancing devices and do not represent the state-of-the-art of any single method. The identified dynamic balancing assessment methods are offered as a catalogue of candidate methods to complement static assessments used in studies involving postural control.

  1. Rapid analysis of effluents generated by the dairy industry for fat determination by preconcentration in nylon membranes and attenuated total reflectance infrared spectroscopy measurement.

    PubMed

    Moliner Martínez, Y; Muñoz-Ortuño, M; Herráez-Hernández, R; Campíns-Falcó, P

    2014-02-01

    This paper describes a new approach for the determination of fat in the effluents generated by the dairy industry which is based on the retention of fat in nylon membranes and measurement of the absorbances on the membrane surface by ATR-IR spectroscopy. Different options have been evaluated for retaining fat in the membranes using milk samples of different origin and fat content. Based on the results obtained, a method is proposed for the determination of fat in effluents which involves the filtration of 1 mL of the samples through 0.45 µm nylon membranes of 13 mm diameter. The fat content is then determined by measuring the absorbance of the band at 1745 cm(-1). The proposed method can be used for the direct estimation of fat at concentrations in the 2-12 mg/L interval with adequate reproducibility. The intraday precision, expressed as coefficients of variation (CVs), was ≤ 11%, whereas the interday CVs were ≤ 20%. The method shows a good tolerance towards conditions typically found in the effluents generated by the dairy industry. The most relevant features of the proposed method are simplicity and speed, as the samples can be characterized in a few minutes. Sample preparation does not involve either additional instrumentation (such as pumps or vacuum equipment) or organic solvents or other chemicals. Therefore, the proposed method can be considered a rapid, simple and cost-effective alternative to gravimetric methods for controlling fat content in these effluents during production or cleaning processes. © 2013 Published by Elsevier B.V.
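
    The quantification step described above amounts to a univariate calibration: absorbance of the 1745 cm(-1) band measured on the membrane versus known fat concentration of standards, then inversion of that calibration for unknowns. A sketch assuming a simple linear (Beer-Lambert-like) relation; the calibration model actually used in the paper may differ.

        import numpy as np

        def fit_calibration(conc_mg_l, absorbance):
            """Least-squares calibration line A = slope*C + intercept from standard samples."""
            slope, intercept = np.polyfit(conc_mg_l, absorbance, 1)
            return slope, intercept

        def predict_fat(absorbance, slope, intercept):
            """Invert the calibration to estimate fat concentration (mg/L) in an effluent sample."""
            return (np.asarray(absorbance, dtype=float) - intercept) / slope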

  2. [Oxidative stress. Should it be measured in the diabetic patient?].

    PubMed

    Villa-Caballero, L; Nava-Ocampo, A A; Frati-Munari, A C; Ponce-Monter, H

    2000-01-01

    Oxidative stress has been defined as a loss of counterbalance between free radical or reactive oxygen species production and the antioxidant systems, with negative effects on carbohydrates, lipids, and proteins. It is also involved in the progression of different chronic diseases and apoptosis. Diabetes mellitus is associated to a high oxidative stress level through different biochemical pathways, i.e. protein glycosylation, glucose auto-oxidation, and the polyol pathway, mainly induced by hyperglycemia. Oxidative stress could also be involved in the pathogenesis of atherosclerotic lesions and other chronic diabetic complications. Measurement of oxidative stress could be useful to investigate its role in the initiation and development processes of chronic diabetic complications and also to evaluate preventive actions, including antioxidative therapy. Different attempts have been made to obtain a practical, accurate, specific, and sensitive method to evaluate oxidative stress in clinical practice. However, this ideal method is not currently available to date and the usefulness of the current methods needs to be confirmed in daily practice. We suggest quantifying oxidated and reduced glutation (GSSG/GSH) and the thiobarbituric reactive substances (TBARS) with currently alternatives. Currently available alternative methods while we await better options.

  3. Multiple Hypothesis Testing for Experimental Gingivitis Based on Wilcoxon Signed Rank Statistics

    PubMed Central

    Preisser, John S.; Sen, Pranab K.; Offenbacher, Steven

    2011-01-01

    Dental research often involves repeated multivariate outcomes on a small number of subjects for which there is interest in identifying outcomes that exhibit change in their levels over time as well as in characterizing the nature of that change. In particular, periodontal research often involves the analysis of molecular mediators of inflammation for which multivariate parametric methods are highly sensitive to outliers and deviations from Gaussian assumptions. In such settings, nonparametric methods may be favored over parametric ones. Additionally, there is a need for statistical methods that control an overall error rate for multiple hypothesis testing. We review univariate and multivariate nonparametric hypothesis tests and apply them to longitudinal data to assess changes over time in 31 biomarkers measured from the gingival crevicular fluid in 22 subjects in whom gingivitis was induced by temporarily withholding tooth brushing. To identify biomarkers that can be induced to change, multivariate Wilcoxon signed rank tests for a set of four summary measures based upon area under the curve are applied for each biomarker and compared to their univariate counterparts. Multiple hypothesis testing methods with choice of control of the false discovery rate or strong control of the family-wise error rate are examined. PMID:21984957
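
    A minimal sketch of the univariate version of this workflow, assuming paired baseline and induced-gingivitis measurements per subject for each biomarker: a Wilcoxon signed-rank test per biomarker followed by Benjamini-Hochberg control of the false discovery rate. The multivariate signed-rank tests on area-under-the-curve summaries used in the paper are not reproduced here, and the array names are illustrative.

        import numpy as np
        from scipy.stats import wilcoxon
        from statsmodels.stats.multitest import multipletests

        def screen_biomarkers(baseline, induced, alpha=0.05):
            """Paired Wilcoxon signed-rank test per biomarker with FDR correction.

            baseline, induced : arrays of shape (n_subjects, n_biomarkers)
            Returns (rejected flags, adjusted p-values), one per biomarker.
            """
            baseline = np.asarray(baseline, dtype=float)
            induced = np.asarray(induced, dtype=float)
            pvals = np.array([wilcoxon(induced[:, j], baseline[:, j]).pvalue
                              for j in range(baseline.shape[1])])
            rejected, p_adj, _, _ = multipletests(pvals, alpha=alpha, method="fdr_bh")
            return rejected, p_adj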

  4. Development of a test method for carbonyl compounds from stationary source emissions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhihua Fan; Peterson, M.R.; Jayanty, R.K.M.

    1997-12-31

    Carbonyl compounds have received increasing attention because of their important role in ground-level ozone formation. The common method used for the measurement of aldehydes and ketones is 2,4-dinitrophenylhydrazine (DNPH) derivatization followed by high performance liquid chromatography with ultraviolet detection (HPLC-UV). One of the problems associated with this method is the low recovery for certain compounds such as acrolein. This paper presents a study in the development of a test method for the collection and measurement of carbonyl compounds from stationary source emissions. This method involves collection of carbonyl compounds in impingers, conversion of carbonyl compounds to a stable derivative with O-2,3,4,5,6-pentafluorobenzyl hydroxylamine hydrochloride (PFBHA), and separation and measurement by electron capture gas chromatography (GC-ECD). Eight compounds were selected for the evaluation of this method: formaldehyde, acetaldehyde, acrolein, acetone, butanal, methyl ethyl ketone (MEK), methyl isobutyl ketone (MIBK), and hexanal.

  5. Size and shape measurement in contemporary cephalometrics.

    PubMed

    McIntyre, Grant T; Mossey, Peter A

    2003-06-01

    The traditional method of analysing cephalograms--conventional cephalometric analysis (CCA)--involves the calculation of linear distance measurements, angular measurements, area measurements, and ratios. Because shape information cannot be determined from these 'size-based' measurements, an increasing number of studies employ geometric morphometric tools in the cephalometric analysis of craniofacial morphology. Most of the discussions surrounding the appropriateness of CCA, Procrustes superimposition, Euclidean distance matrix analysis (EDMA), thin-plate spline analysis (TPS), finite element morphometry (FEM), elliptical Fourier functions (EFF), and medial axis analysis (MAA) have centred upon mathematical and statistical arguments. Surprisingly, little information is available to assist the orthodontist in the clinical relevance of each technique. This article evaluates the advantages and limitations of the above methods currently used to analyse the craniofacial morphology on cephalograms and investigates their clinical relevance and possible applications.

  6. Minority carrier diffusion length extraction in Cu2ZnSn(Se,S)4 solar cells

    NASA Astrophysics Data System (ADS)

    Gokmen, Tayfun; Gunawan, Oki; Mitzi, David B.

    2013-09-01

    We report measurement of minority carrier diffusion length (Ld) for high performance Cu2ZnSn(S,Se)4 (CZTSSe) solar cells in comparison with analogous Cu(In,Ga)(S,Se)2 (CIGSSe) devices. Our Ld extraction method involves performing systematic measurements of the internal quantum efficiency combined with a separate capacitance-voltage measurement. This method also enables the measurement of the absorption coefficient of the absorber material as a function of wavelength in a finished device. The extracted values of Ld for CZTSSe samples are at least a factor of 2 smaller than those for CIGSSe samples. Combined with minority carrier lifetime (τ) data measured by time-resolved photoluminescence, we deduce the minority carrier mobility (μe), which is also relatively low for the CZTSSe samples.
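
    One common way to implement this kind of extraction (a sketch only; the abstract does not state the exact model used) is to fit the long-wavelength internal quantum efficiency to a Gärtner-type expression, IQE(λ) = 1 - exp(-α(λ)·W) / (1 + α(λ)·Ld), where W is the depletion width obtained from the capacitance-voltage measurement and α is the absorption coefficient. Names and the initial guess below are illustrative.

        import numpy as np
        from scipy.optimize import curve_fit

        def gartner_iqe(alpha_per_cm, depletion_width_cm, diffusion_length_cm):
            """Gartner-model internal quantum efficiency for given absorption coefficients."""
            a = np.asarray(alpha_per_cm, dtype=float)
            return 1.0 - np.exp(-a * depletion_width_cm) / (1.0 + a * diffusion_length_cm)

        def fit_diffusion_length(alpha_per_cm, iqe_measured, depletion_width_cm):
            """Fit Ld (cm) from measured IQE, with W fixed by a separate C-V measurement."""
            model = lambda a, ld: gartner_iqe(a, depletion_width_cm, ld)
            (ld,), _ = curve_fit(model, alpha_per_cm, iqe_measured, p0=[1e-4])
            return ld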

  7. [Electromagnetic field of the mobile phone base station: case study].

    PubMed

    Bieńkowski, Paweł; Zubrzak, Bartłomiej; Surma, Robert

    2011-01-01

    The paper presents changes in the electromagnetic field intensity in a school building and its surroundings after a mobile phone base station was installed on the roof of the school. The comparison of EMF intensity measured before the base station was launched (electromagnetic background measurement) and after starting its operation (two independent control measurements) is discussed. Analyses of the measurements are presented, and the authors also propose a method of adjusting the electromagnetic field distribution in the area of the antennas' side lobes to reduce the EMF level in the base station proximity. The presented method involves adjusting the antenna inclination. On the basis of the measurements, it was found that the EMF intensity increased in the building and its surroundings, but the measured values still meet, with wide margins, the requirements of the Polish law on environmental protection.

  8. A graph-based semantic similarity measure for the gene ontology.

    PubMed

    Alvarez, Marco A; Yan, Changhui

    2011-12-01

    Existing methods for calculating semantic similarities between pairs of Gene Ontology (GO) terms and gene products often rely on external databases like Gene Ontology Annotation (GOA) that annotate gene products using the GO terms. This dependency leads to some limitations in real applications. Here, we present a semantic similarity algorithm (SSA) that relies exclusively on the GO. When calculating the semantic similarity between a pair of input GO terms, SSA takes into account the shortest path between them, the depth of their nearest common ancestor, and a novel similarity score calculated between the definitions of the involved GO terms. In our work, we use SSA to calculate semantic similarities between pairs of proteins by combining pairwise semantic similarities between the GO terms that annotate the involved proteins. The reliability of SSA was evaluated by comparing the resulting semantic similarities between proteins with the functional similarities between proteins derived from expert annotations or sequence similarity. Comparisons with existing state-of-the-art methods showed that SSA is highly competitive with the other methods. SSA provides a reliable measure of semantic similarity independent of external databases of functional-annotation observations.
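
    A minimal graph-only sketch in the spirit of SSA: similarity of two GO terms from the shortest path between them and the depth of their nearest common ancestor, computed with networkx on a toy ontology. The Wu-Palmer-style combination used here is an illustrative stand-in, not the authors' formula, which also scores term definitions.

    ```python
    # Sketch only: term-term similarity from LCA depth and shortest path on a toy GO-like DAG.
    import networkx as nx

    go = nx.DiGraph()                    # edges point parent -> child; "GO:root" is the root
    go.add_edges_from([
        ("GO:root", "GO:A"), ("GO:root", "GO:B"),
        ("GO:A", "GO:A1"), ("GO:A", "GO:A2"), ("GO:B", "GO:B1"),
    ])

    def term_similarity(g, t1, t2, root="GO:root"):
        lca = nx.lowest_common_ancestor(g, t1, t2)              # nearest common ancestor
        depth_lca = nx.shortest_path_length(g, root, lca)       # its depth below the root
        path = nx.shortest_path_length(g.to_undirected(), t1, t2)
        return depth_lca / (depth_lca + path) if (depth_lca + path) else 1.0

    print(term_similarity(go, "GO:A1", "GO:A2"))   # siblings under GO:A
    print(term_similarity(go, "GO:A1", "GO:B1"))   # only the root in common -> 0.0
    ```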

  9. Minimally invasive surgical method to detect sound processing in the cochlear apex by optical coherence tomography

    NASA Astrophysics Data System (ADS)

    Ramamoorthy, Sripriya; Zhang, Yuan; Petrie, Tracy; Fridberger, Anders; Ren, Tianying; Wang, Ruikang; Jacques, Steven L.; Nuttall, Alfred L.

    2016-02-01

    Sound processing in the inner ear involves separation of the constituent frequencies along the length of the cochlea. Frequencies relevant to human speech (100 to 500 Hz) are processed in the apex region. Among mammals, the guinea pig cochlear apex processes similar frequencies and is thus relevant for the study of speech processing in the cochlea. However, the requirement for extensive surgery has challenged the optical accessibility of this area to investigate cochlear processing of signals without significant intrusion. A simple method is developed to provide optical access to the guinea pig cochlear apex in two directions with minimal surgery. Furthermore, all prior vibration measurements in the guinea pig apex involved opening an observation hole in the otic capsule, which has been questioned on the basis of the resulting changes to cochlear hydrodynamics. Here, this limitation is overcome by measuring the vibrations through the unopened otic capsule using phase-sensitive Fourier domain optical coherence tomography. The optically and surgically advanced method described here lays the foundation to perform minimally invasive investigation of speech-related signal processing in the cochlea.

  10. Cultural adaptation and translation of measures: an integrated method.

    PubMed

    Sidani, Souraya; Guruge, Sepali; Miranda, Joyal; Ford-Gilboe, Marilyn; Varcoe, Colleen

    2010-04-01

    Differences in the conceptualization and operationalization of health-related concepts may exist across cultures. Such differences underscore the importance of examining conceptual equivalence when adapting and translating instruments. In this article, we describe an integrated method for exploring conceptual equivalence within the process of adapting and translating measures. The integrated method involves five phases including selection of instruments for cultural adaptation and translation; assessment of conceptual equivalence, leading to the generation of a set of items deemed to be culturally and linguistically appropriate to assess the concept of interest in the target community; forward translation; back translation (optional); and pre-testing of the set of items. Strengths and limitations of the proposed integrated method are discussed. (c) 2010 Wiley Periodicals, Inc.

  11. Method for determining the concentration of atomic species in gases and solids

    DOEpatents

    Loge, G.W.

    1998-02-03

    A method is described for determining the concentration of atomic species in gases and solids. The method involves measurement of at least two emission intensities from a species in a sample that is excited by incident laser radiation, which generates a plasma therein; after a sufficient time period has elapsed, and during a second time period, an instantaneous temperature can be established within the sample. The concentration of the atomic species to be determined is then derived from the known emission intensity of a predetermined concentration of that species in the sample at the measured temperature (a quantity which is measured prior to the determination of the unknown concentration) and the actual measured emission from the unknown species, or from this latter emission and the emission intensity of a species having known concentration within the sample, such as nitrogen for gaseous air samples. 4 figs.

  12. A review on creatinine measurement techniques.

    PubMed

    Mohabbati-Kalejahi, Elham; Azimirad, Vahid; Bahrami, Manouchehr; Ganbari, Ahmad

    2012-08-15

    This paper reviews recent global trends in creatinine measurement. Creatinine biosensors involve complex relationships between biology and the micro-mechatronics to which the blood is subjected. Comparison between new and old methods shows that new techniques (e.g. those based on molecularly imprinted polymers, MIPs) are better than old methods (e.g. ELISA) in terms of stability and linear range. All methods and their details for serum, plasma, urine and blood samples are surveyed. They are categorized into five main approaches: optical, electrochemical, impedimetric, Ion Selective Field-Effect Transistor (ISFET)-based, and chromatographic techniques. Response time, detection limit, linear range and selectivity of the reported sensors are discussed. The potentiometric technique has the lowest response time, 4-10 s, while the lowest detection limit, 0.28 nmol L(-1), belongs to the chromatographic technique. Comparison between the various measurement techniques indicates that the best selectivity belongs to the MIP-based and chromatographic techniques. Copyright © 2012 Elsevier B.V. All rights reserved.

  13. Use of the single-breath method of estimating cardiac output during exercise-stress testing.

    NASA Technical Reports Server (NTRS)

    Buderer, M. C.; Rummel, J. A.; Sawin, C. F.; Mauldin, D. G.

    1973-01-01

    The single-breath cardiac output measurement technique of Kim et al. (1966) has been modified for use in obtaining cardiac output measurements during exercise-stress tests on Apollo astronauts. The modifications involve the use of a respiratory mass spectrometer for data acquisition and a digital computer program for data analysis. The variation of the modified method for triplicate steady-state cardiac output measurements was plus or minus 1 liter/min. The combined physiological and methodological variation seen during a set of three exercise tests on a series of subjects was 1 to 2.5 liter/min. Comparison of the modified method with the direct Fick technique showed that although the single-breath values were consistently low, the scatter of data was small and the correlation between the two methods was high. Possible reasons for the low single-breath cardiac output values are discussed.

  14. A Multicriteria Decision Making Approach for Estimating the Number of Clusters in a Data Set

    PubMed Central

    Peng, Yi; Zhang, Yong; Kou, Gang; Shi, Yong

    2012-01-01

    Determining the number of clusters in a data set is an essential yet difficult step in cluster analysis. Since this task involves more than one criterion, it can be modeled as a multiple criteria decision making (MCDM) problem. This paper proposes an MCDM-based approach to estimate the number of clusters for a given data set. In this approach, MCDM methods consider different numbers of clusters as alternatives and the outputs of any clustering algorithm on validity measures as criteria. The proposed method is examined by an experimental study using three MCDM methods, the well-known k-means clustering algorithm, ten relative measures, and fifteen public-domain UCI machine learning data sets. The results show that MCDM methods work fairly well in estimating the number of clusters in the data and outperform the ten relative measures considered in the study. PMID:22870181
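
    A hedged sketch of the overall idea: each candidate number of clusters is an alternative, several clustering-validity measures act as criteria, and the normalised scores are aggregated. A plain equal-weight sum stands in for the MCDM methods used in the paper; the synthetic data and the three validity measures chosen here are illustrative.

    ```python
    # Sketch only: pick the number of clusters by aggregating several validity criteria.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score, calinski_harabasz_score, davies_bouldin_score

    X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
    ks = range(2, 9)
    scores = []
    for k in ks:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores.append([silhouette_score(X, labels),
                       calinski_harabasz_score(X, labels),
                       -davies_bouldin_score(X, labels)])     # negate: lower DB is better

    S = np.array(scores)
    S = (S - S.min(axis=0)) / (S.max(axis=0) - S.min(axis=0))  # normalise each criterion
    best_k = list(ks)[int(np.argmax(S.mean(axis=1)))]          # equal-weight aggregation
    print(f"estimated number of clusters: {best_k}")
    ```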

  15. Dynamic deformation image de-blurring and image processing for digital imaging correlation measurement

    NASA Astrophysics Data System (ADS)

    Guo, X.; Li, Y.; Suo, T.; Liu, H.; Zhang, C.

    2017-11-01

    This paper proposes a method for de-blurring images captured during the dynamic deformation of materials. De-blurring is achieved with a dynamics-based approach that estimates the Point Spread Function (PSF) over the camera exposure window. The deconvolution process, which involves iterative matrix calculations over the pixels, is then performed on the GPU to decrease the time cost. Compared to the Gauss method and the Lucy-Richardson method, it gives the best image restoration results. The proposed method has been evaluated using the Hopkinson bar loading system. In comparison to the blurry image, the proposed method successfully restores the image. It is also demonstrated from image processing applications that the de-blurring method can improve the accuracy and the stability of the digital imaging correlation measurement.
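
    For comparison context only, the sketch below applies the Lucy-Richardson baseline mentioned above to a synthetically motion-blurred image with scikit-image; the paper's dynamics-based PSF estimation and GPU deconvolution are not reproduced, and the PSF is simply assumed.

    ```python
    # Sketch only: Lucy-Richardson deconvolution of a synthetically motion-blurred image.
    import numpy as np
    from scipy.signal import convolve2d
    from skimage import data, restoration

    image = data.camera().astype(float) / 255.0
    psf = np.zeros((1, 15)); psf[0, :] = 1.0 / 15.0         # horizontal motion-blur kernel (assumed)
    blurred = convolve2d(image, psf, mode="same", boundary="symm")

    deblurred = restoration.richardson_lucy(blurred, psf, 30)   # 30 iterations
    print(blurred.shape, deblurred.shape)
    ```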

  16. Quantitative analysis of anti-inflammatory drugs using FTIR-ATR spectrometry

    NASA Astrophysics Data System (ADS)

    Hassib, Sonia T.; Hassan, Ghaneya S.; El-Zaher, Asmaa A.; Fouad, Marwa A.; Taha, Enas A.

    2017-11-01

    Four simple, accurate, sensitive and economic Attenuated Total Reflectance-Fourier Transform Infrared Spectroscopic (ATR-FTIR) methods have been developed for the quantitative estimation of some non-steroidal anti-inflammatory drugs. The first method involves the determination of Etodolac by direct measurement of the absorbance at 1716 cm⁻¹. In the second method, the second derivative of the IR spectra of Tolfenamic acid and its reported degradation product (2-chlorobenzoic acid) was used, and the amplitudes were measured at 1084.27 cm⁻¹ and 1056.02 cm⁻¹ for Tolfenamic acid and 2-chlorobenzoic acid, respectively. The third method used the first derivative of the IR spectra of Bumadizone and its reported degradation product, N,N-diphenylhydrazine, and the amplitudes were measured at 2874.98 cm⁻¹ and 2160.32 cm⁻¹ for Bumadizone and N,N-diphenylhydrazine, respectively. The fourth method depends on measuring the amplitude of Diacerein at 1059.18 cm⁻¹ and of rhein, its reported degradation product, at 1079.32 cm⁻¹ in their first derivative spectra. The four methods were successfully applied to the pharmaceutical formulations by extracting the active constituent into chloroform; the extract was measured directly in liquid-phase mode using a specific cell. Moreover, validation of these methods was carried out following International Conference on Harmonisation (ICH) guidelines.

  17. Enhanced automated spiral bevel gear inspection

    NASA Technical Reports Server (NTRS)

    Frint, Harold K.; Glasow, Warren

    1992-01-01

    Presented here are the results of a manufacturing and technology program to define, develop, and evaluate an enhanced inspection system for spiral bevel gears. The method uses a multi-axis coordinate measuring machine which maps the working surface of the tooth and compares it with nominal reference values stored in the machine's computer. The enhanced technique features a means for automatically calculating corrective grinding machine settings, involving both first and second order changes, to control the tooth profile to within specified tolerance limits. This enhanced method eliminates the subjective decision making involved in the tooth patterning method, still in use today, which compares contact patterns obtained when the gear is set to run under light load in a rolling test machine. It produces a higher quality gear with significant inspection time and cost savings.

  18. Monte Carlo simulation of air sampling methods for the measurement of radon decay products.

    PubMed

    Sima, Octavian; Luca, Aurelian; Sahagia, Maria

    2017-08-01

    A stochastic model of the processes involved in the measurement of the activity of the ²²²Rn decay products was developed. The distributions of the relevant factors, including air sampling and radionuclide collection, are propagated using Monte Carlo simulation to the final distribution of the measurement results. The uncertainties of the ²²²Rn decay products concentrations in the air are realistically evaluated. Copyright © 2017 Elsevier Ltd. All rights reserved.
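
    A generic Monte Carlo propagation sketch in the same spirit: the inputs of an air-sampling measurement are sampled from assumed distributions and pushed through a toy measurement model to obtain the distribution of the activity concentration. The model and all numbers are illustrative, not the authors' detailed description of ²²²Rn progeny collection and counting.

    ```python
    # Sketch only: Monte Carlo propagation of input distributions to a concentration result.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000
    counts     = rng.poisson(5000, n)              # net counts on the filter
    efficiency = rng.normal(0.30, 0.01, n)         # counting efficiency
    flow_rate  = rng.normal(20.0, 0.5, n)          # sampling flow rate, L/min
    t_sample   = 10.0                              # sampling time, min

    # toy model: concentration ~ counts / (efficiency * sampled air volume)
    concentration = counts / (efficiency * flow_rate * t_sample)

    rel_std = concentration.std() / concentration.mean()
    print(f"mean = {concentration.mean():.3f}, relative std = {rel_std:.1%}")
    ```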

  19. Theoretical framework and methodological development of common subjective health outcome measures in osteoarthritis: a critical review

    PubMed Central

    Pollard, Beth; Johnston, Marie; Dixon, Diane

    2007-01-01

    Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of its i) theoretical framework (was there a definition of what was being assessed and was it part of a theoretical model?) and ii) methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest), and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological development, the clinician report measures appeared less well developed. It would be of value if new measures defined the construct of interest and if that construct were part of a theoretical model. By ensuring that measures are both theoretically and empirically valid, improvements in subjective health outcome measures should be possible. PMID:17343739

  20. Polarisation in spin-echo experiments: Multi-point and lock-in measurements

    NASA Astrophysics Data System (ADS)

    Tamtögl, Anton; Davey, Benjamin; Ward, David J.; Jardine, Andrew P.; Ellis, John; Allison, William

    2018-02-01

    Spin-echo instruments are typically used to measure diffusive processes and the dynamics and motion in samples on ps and ns time scales. A key aspect of the spin-echo technique is to determine the polarisation of a particle beam. We present two methods for measuring the spin polarisation in spin-echo experiments. The current method in use is based on taking a number of discrete readings. The implementation of a new method involves continuously rotating the spin and measuring its polarisation after being scattered from the sample. A control system running on a microcontroller is used to perform the spin rotation and to calculate the polarisation of the scattered beam based on a lock-in amplifier. First experimental tests of the method on a helium spin-echo spectrometer show that it is clearly working and that it has advantages over the discrete approach, i.e., it can track changes of the beam properties throughout the experiment. Moreover, we show that real-time numerical simulations can perfectly describe a complex experiment and can be easily used to develop improved experimental methods prior to a first hardware implementation.
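
    A numerical sketch of the lock-in idea behind the new method: with the spin rotated continuously, the detector signal is demodulated against in-phase and quadrature references at the rotation frequency, and the demodulated amplitude gives the polarisation. The signal model and parameters below are invented for illustration.

    ```python
    # Sketch only: lock-in style demodulation of a continuously modulated polarisation signal.
    import numpy as np

    fs, f_rot = 10_000.0, 50.0                       # sample rate and spin-rotation frequency, Hz
    t = np.arange(0, 2.0, 1.0 / fs)
    polarisation, phase = 0.62, 0.4                  # "true" values to recover
    signal = 1.0 + polarisation * np.cos(2 * np.pi * f_rot * t + phase)
    signal += np.random.default_rng(3).normal(0, 0.05, t.size)

    ref_i = np.cos(2 * np.pi * f_rot * t)
    ref_q = np.sin(2 * np.pi * f_rot * t)
    X = 2 * np.mean(signal * ref_i)                  # averaging acts as the low-pass filter
    Y = 2 * np.mean(signal * ref_q)
    print(f"polarisation ~ {np.hypot(X, Y):.3f}, phase ~ {np.arctan2(-Y, X):.3f} rad")
    ```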

  1. Comparison of pathway and center of gravity of the calcaneus on non-involved and involved sides according to eccentric and concentric strengthening in patients with achilles tendinopathy.

    PubMed

    Yu, Jaeho; Lee, Gyuchang

    2012-01-01

    This study compares the changes in pathway and center of gravity (COG) on the calcaneus of non-involved and involved sides according to eccentric and concentric strengthening in patients with unilateral Achilles tendinopathy. The goal was to define the biomechanical changes according to eccentric strengthening for the development of clinical guidelines. Eighteen patients with Achilles tendinopathy were recruited at the K Rehabilitation Hospital in Seoul. The subjects were instructed to perform 5 sessions of concentric strengthening. The calcaneal pathway was measured using a three-dimensional (3D) motion analyzer, and COG was measured by a force plate. Subsequently, eccentric strengthening was implemented, and identical variables were measured. Concentric and eccentric strengthening was carried out on both the involved and non-involved sides. There was no significant difference in the calcaneal pathway in patients with Achilles tendinopathy during concentric and eccentric strengthening. However, during eccentric strengthening, the calcaneal pathway significantly increased on the involved side compared to the non-involved side for all variables excluding the z-axis. COG significantly decreased on the involved side when compared to the non-involved side in patients with Achilles tendinopathy during eccentric and concentric strengthening. During concentric strengthening, all variables of the COG significantly increased on the involved side compared to the non-involved side. Compared with eccentric strengthening, concentric strengthening decreased the stability of ankle joints and increased the movement distance of the calcaneus in patients with Achilles tendinopathy. Furthermore, eccentric strengthening was verified to be an effective exercise method for prevention of Achilles tendinopathy through the reduction of forward and backward path length of foot pressure. The regular application of eccentric strengthening was found to be effective in the secondary prevention of Achilles tendinopathy in a clinical setting.

  2. Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  3. Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1998-01-01

    The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.

  4. Quantifying the measurement uncertainty of results from environmental analytical methods.

    PubMed

    Moser, J; Wegscheider, W; Sperka-Gottlieb, C

    2001-07-01

    The Eurachem-CITAC Guide Quantifying Uncertainty in Analytical Measurement was put into practice in a public laboratory devoted to environmental analytical measurements. In doing so due regard was given to the provisions of ISO 17025 and an attempt was made to base the entire estimation of measurement uncertainty on available data from the literature or from previously performed validation studies. Most environmental analytical procedures laid down in national or international standards are the result of cooperative efforts and put into effect as part of a compromise between all parties involved, public and private, that also encompasses environmental standards and statutory limits. Central to many procedures is the focus on the measurement of environmental effects rather than on individual chemical species. In this situation it is particularly important to understand the measurement process well enough to produce a realistic uncertainty statement. Environmental analytical methods will be examined as far as necessary, but reference will also be made to analytical methods in general and to physical measurement methods where appropriate. This paper describes ways and means of quantifying uncertainty for frequently practised methods of environmental analysis. It will be shown that operationally defined measurands are no obstacle to the estimation process as described in the Eurachem/CITAC Guide if it is accepted that the dominating component of uncertainty comes from the actual practice of the method as a reproducibility standard deviation.
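
    A minimal sketch of the kind of Eurachem/CITAC-style uncertainty budget the paper applies: relative standard uncertainties of the individual contributions are combined by root-sum-of-squares and an expanded uncertainty is reported with coverage factor k = 2. The listed contributions and their values are hypothetical.

    ```python
    # Sketch only: combine relative standard uncertainties and report an expanded uncertainty.
    import math

    result = 12.4                            # measured concentration, e.g. mg/L (hypothetical)
    relative_u = {
        "reproducibility of the method": 0.06,
        "calibration standard":          0.015,
        "recovery / bias correction":    0.03,
        "sample volume":                 0.005,
    }

    u_rel = math.sqrt(sum(u**2 for u in relative_u.values()))   # combined relative uncertainty
    U = 2 * u_rel * result                                      # expanded, k = 2 (~95 % coverage)
    print(f"result: {result} +/- {U:.2f} mg/L (k = 2)")
    ```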

  5. Picosecond pulse measurements using the active laser medium

    NASA Technical Reports Server (NTRS)

    Bernardin, James P.; Lawandy, N. M.

    1990-01-01

    A simple method for measuring the pulse lengths of synchronously pumped dye lasers which does not require the use of an external nonlinear medium, such as a doubling crystal or two-photon fluorescence cell, to autocorrelate the pulses is discussed. The technique involves feeding the laser pulses back into the dye jet, thus correlating the output pulses with the intracavity pulses to obtain pulse length signatures in the resulting time-averaged laser power. Experimental measurements were performed using a rhodamine 6G dye laser pumped by a mode-locked frequency-doubled Nd:YAG laser. The results agree well with numerical computations, and the method proves effective in determining lengths of picosecond laser pulses.

  6. Long distance measurement-device-independent quantum key distribution with entangled photon sources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Feihu; Qi, Bing; Liao, Zhongfa

    2013-08-05

    We present a feasible method that can make quantum key distribution (QKD) both ultra-long-distance and immune to all attacks on the detection system. This method is called measurement-device-independent QKD (MDI-QKD) with entangled photon sources in the middle. By proposing a model and simulating a QKD experiment, we find that MDI-QKD with one entangled photon source can tolerate 77 dB of loss (367 km of standard fiber) in the asymptotic limit and 60 dB of loss (286 km of standard fiber) in the finite-key case with state-of-the-art detectors. Our general model can also be applied to other non-QKD experiments involving entanglement and Bell state measurements.
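
    A small arithmetic check of the distances quoted above, assuming the roughly 0.21 dB/km attenuation of standard telecom fiber that such simulations typically use.

    ```python
    # Sketch only: convert a tolerable channel loss in dB into a fiber length.
    ATTENUATION_DB_PER_KM = 0.21                 # assumed attenuation of standard fiber
    for loss_db in (77, 60):
        print(f"{loss_db} dB -> {loss_db / ATTENUATION_DB_PER_KM:.0f} km of standard fiber")
    ```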

  7. Comparative analysis of quantitative methodologies for Vibrionaceae biofilms.

    PubMed

    Chavez-Dozal, Alba A; Nourabadi, Neda; Erken, Martina; McDougald, Diane; Nishiguchi, Michele K

    2016-11-01

    Multiple symbiotic and free-living Vibrio spp. grow as a form of microbial community known as a biofilm. In the laboratory, methods to quantify Vibrio biofilm mass include crystal violet staining, direct colony-forming unit (CFU) counting, dry biofilm cell mass measurement, and observation of development of wrinkled colonies. Another approach for bacterial biofilms also involves the use of tetrazolium (XTT) assays (used widely in studies of fungi) that are an appropriate measure of metabolic activity and vitality of cells within the biofilm matrix. This study systematically tested five techniques, among which the XTT assay and wrinkled colony measurement provided the most reproducible, accurate, and efficient methods for the quantitative estimation of Vibrionaceae biofilms.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Onodera, Yasuhito; Bissell, Mina

    Disclosed are methods in which glucose metabolism is correlated to oncogenesis through certain specific pathways; inhibition of certain enzymes is shown to interfere with oncogenic signaling, and measurement of certain enzyme levels is correlated with patient survival. The present methods comprise measuring the level of expression of at least one of the enzymes involved in glucose uptake or metabolism, wherein increased expression of the at least one of the enzymes relative to expression in a normal cell correlates with poor prognosis of disease in a patient. Preferably the genes whose expression level is measured include GLUT3, PFKP, GAPDH, ALDOC, LDHA and GFPT2. Also disclosed are embodiments directed towards downregulating the expression of some genes involved in glucose uptake and metabolism.

  9. Elasticity measurement of nasal cartilage as a function of temperature using optical coherence elastography

    NASA Astrophysics Data System (ADS)

    Liu, Chih Hao; Skryabina, M. N.; Singh, Manmohan; Li, Jiasong; Wu, Chen; Sobol, E.; Larin, Kirill V.

    2015-03-01

    Current clinical methods of reconstruction surgery involve laser reshaping of nasal cartilage. The process of stress relaxation caused by laser heating is the primary mechanism for achieving nasal cartilage reshaping. Based on this, a rapid, non-destructive and accurate elasticity measurement would allow for a more robust reshaping procedure. In this work, we have utilized phase-stabilized swept source optical coherence elastography (PhS-SSOCE) to quantify the Young's modulus of porcine nasal septal cartilage during the relaxation process induced by heating. The results show that PhS-SSOCE was able to monitor changes in elasticity of hyaline cartilage, and this method could potentially be applied in vivo during laser reshaping therapies.

  10. Electrochemical Assay of Gold-Plating Solutions

    NASA Technical Reports Server (NTRS)

    Chiodo, R.

    1982-01-01

    The gold content of a plating solution is assayed by a simple method that requires only ordinary electrochemical laboratory equipment and materials. The technique involves electrodeposition of gold from the solution onto an electrode, the weight gain of which is measured. Suitable fast assay methods are economically and practically necessary in the electronics and decorative-plating industries. If the gold content in the plating bath is too low, poor plating may result, with consequent economic loss to the user.

  11. A Constrained-Clustering Approach to the Analysis of Remote Sensing Data.

    DTIC Science & Technology

    1983-01-01

    One old and two new clustering methods were applied to the constrained-clustering problem of separating different agricultural fields based on multispectral remote sensing satellite data. (Constrained-clustering involves double clustering in multispectral measurement similarity and geographical location.) The results of applying the three methods are provided along with a discussion of their relative strengths and weaknesses and a detailed description of their algorithms.

  12. MULTI-SITE EVALUATIONS OF CANDIDATE METHODOLOGIES FOR DETERMINING COARSE PARTICULATE (PM 10-2.5) CONCENTRATIONS: AUGUST 2005 UPDATED REPORT REGARDING SECOND-GENERATION AND NEW PM 10-2.5 SAMPLERS

    EPA Science Inventory

    Multi-site field studies were conducted to evaluate the performance of sampling methods for measuring the coarse fraction of PM10 (PM10-2.5) in ambient air. The field studies involved the use of both time-integrated filter-based and direct continuous methods. Despite operationa...

  13. A new approach to measuring tortuosity

    NASA Astrophysics Data System (ADS)

    Wert, Amanda; Scott, Sherry E.

    2012-03-01

    The detection and measurement of the tortuosity - i.e. the bending and winding - of vessels has been shown to be potentially useful in the assessment of cancer progression and treatment response. Although several metrics for tortuosity are used, no single measure is able to capture all types of tortuosity. This report presents a new multiscale technique for measuring vessel tortuosity. The approach is based on a method - called the ergodicity defect - which gives a scale-dependent measure of deviation from ergodicity. Ergodicity is a concept that captures the manner in which trajectories or signals sample the space; thus, ergodicity and vessel tortuosity both involve the notion of how a signal samples space. Here we begin to explore this connection. We first apply the ergodicity defect tortuosity measure to both 2D and 3D synthetic data in order to demonstrate the response of the method to three types of tortuosity observed in clinical patterns. We then implement the technique on segmented vessels extracted from brain tumor MRA images. Results indicate that the method can be effectively used to detect and measure several types of vessel tortuosity.

  14. Strategic planning--a plan for excellence for South Haven Health System.

    PubMed

    Urbanski, Joanne; Baskel, Maureen; Martelli, Mary

    2011-01-01

    South Haven Health System has developed an innovative approach to strategic planning. The key to success of this process has been the multidisciplinary involvement of all stakeholders from the first planning session through the final formation of a strategic plan with measurable objectives for each goal. The process utilizes a Conversation Café method for identifying opportunities and establishing goals, Strategic Oversight Teams to address each goal and a Champion for implementation of each objective. Progress is measured quarterly by Strategic Oversight Team report cards. Transparency of communication within the organization and the sharing of information move the plan forward. The feedback from participant evaluations has been overwhelmingly positive. They are involved and excited.

  15. Radius of Curvature of the Cornea--An Experiment for the Life-Science Physics Lab

    ERIC Educational Resources Information Center

    MacLatchy, C. S.

    1978-01-01

    Presents a quantitative laboratory experiment in geometrical optics. It involves the student in the measurement of the radius of curvature of the cornea and is based on an old method devised by Kohlrausch in 1839. (Author/GA)

  16. RAMAN SPECTROSCOPY-BASED METABOLOMICS: EVALUATION OF SAMPLE PREPARATION AND OPTICAL ACCESSORIES

    EPA Science Inventory

    The field of metabonomics/metabolomics involves observing endogenous metabolites from organisms that change in response to exposure to a stressor or chemical of interest. Methods are being developed for measuring the Raman spectra of low-concentration metabolites in urine. The ...

  17. Used Solvent Testing and Reclamation. Volume 1. Cold-Cleaning Solvents

    DTIC Science & Technology

    1988-12-01

    spectrometer, and specific gravity meter involve buying routine cleaning supplies, and should not exceed $50. Consequently, these methods were...in addition to routine cleaning supplies. The K13V measurement requires periodic supplies of Kauri-butanol solution. TLC analysis requires glass

  18. NMR spectroscopy for assessing lipid oxidation

    USDA-ARS?s Scientific Manuscript database

    Although lipid oxidation involves a variety of chemical reactions producing numerous substances, most traditional methods for assessing lipid oxidation measure only one kind of oxidation product. For this reason, in general, one indicator of oxidation is not enough to accurately describe the oxidati...

  19. Reinecke's Salt Revisited. An Undergraduate Project Involving an Unknown Metal Complex.

    ERIC Educational Resources Information Center

    Searle, Graeme H.; And Others

    1989-01-01

    Describes 10 experiments for characterizing the chromium complex Reinecke's Salt. The properties of the complex, experimental procedures, and a discussion are provided. Analyses are presented for chromium, total ammonia, thiocyanate, ammonium ion, and hydrate water. Measurement methods are described. (YP)

  20. SUPERCRITICAL FLUID EXTRACTION OF SEMI-VOLATILE ORGANIC COMPOUNDS FROM PARTICLES

    EPA Science Inventory

    A nitrogen oxide flux chamber was modified to measure the flux of semi-volatile organic compounds (SVOCs). Part of the modification involved the development of methods to extract SVOCs from polyurethane foam (PUF), sand, and soil. Breakthroughs and extraction efficiencies were ...

  1. Determination of gas volume trapped in a closed fluid system

    NASA Technical Reports Server (NTRS)

    Hunter, W. F.; Jolley, J. E.

    1971-01-01

    The technique involves extracting a known volume of fluid and measuring the system before and after the extraction; the volume of entrapped gas is then computed. A formula derived from the ideal gas laws is the basis of this method. The technique is applicable to thermodynamic cycles and hydraulic systems.
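
    A hedged sketch of the ideal-gas relation such a technique can rest on (the abstract does not give the formula): if a known fluid volume dV is withdrawn isothermally and the system pressure is measured before (P1) and after (P2) the extraction, the entrapped gas originally occupied V = P2·dV/(P1 − P2). The numbers in the example are invented.

    ```python
    # Sketch only: isothermal ideal-gas estimate of the trapped gas volume, P1*V = P2*(V + dV).
    def trapped_gas_volume(p1, p2, dv):
        """Return the gas volume V that satisfies P1*V = P2*(V + dV)."""
        return p2 * dv / (p1 - p2)

    # Example with invented numbers: 10 cm^3 withdrawn, pressure drops from 500 kPa to 450 kPa.
    print(f"{trapped_gas_volume(500.0, 450.0, 10.0):.1f} cm^3 of entrapped gas")
    ```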

  2. Reliability Driven Space Logistics Demand Analysis

    NASA Technical Reports Server (NTRS)

    Knezevic, J.

    1995-01-01

    Accurate selection of the quantity of logistic support resources has a strong influence on mission success, system availability and the cost of ownership. At the same time the accurate prediction of these resources depends on the accurate prediction of the reliability measures of the items involved. This paper presents a method for the advanced and accurate calculation of the reliability measures of complex space systems which are the basis for the determination of the demands for logistics resources needed during the operational life or mission of space systems. The applicability of the method presented is demonstrated through several examples.

  3. Disability: a model and measurement technique.

    PubMed Central

    Williams, R G; Johnston, M; Willis, L A; Bennett, A E

    1976-01-01

    Current methods of ranking or scoring disability tend to be arbitrary. A new method is put forward on the hypothesis that disability progresses in regular, cumulative patterns. A model of disability is defined and tested with the use of Guttman scale analysis. Its validity is indicated by data from a community survey and from postsurgical patients, and some factors involved in scale variation are identified. The model provides a simple measurement technique and has implications for the assessment of individual disadvantage, for the prediction of progress in recovery or deterioration, and for evaluation of the outcome of treatment regimes. PMID:953379

  4. Measurements and analysis in imaging for biomedical applications

    NASA Astrophysics Data System (ADS)

    Hoeller, Timothy L.

    2009-02-01

    A Total Quality Management (TQM) approach can be used to analyze data from biomedical optical and imaging platforms of tissues. A shift from individuals to teams, partnerships, and total participation are necessary from health care groups for improved prognostics using measurement analysis. Proprietary measurement analysis software is available for calibrated, pixel-to-pixel measurements of angles and distances in digital images. Feature size, count, and color are determinable on an absolute and comparative basis. Although changes in images of histomics are based on complex and numerous factors, the variation of changes in imaging analysis to correlations of time, extent, and progression of illness can be derived. Statistical methods are preferred. Applications of the proprietary measurement software are available for any imaging platform. Quantification of results provides improved categorization of illness towards better health. As health care practitioners try to use quantified measurement data for patient diagnosis, the techniques reported can be used to track and isolate causes better. Comparisons, norms, and trends are available from processing of measurement data which is obtained easily and quickly from Scientific Software and methods. Example results for the class actions of Preventative and Corrective Care in Ophthalmology and Dermatology, respectively, are provided. Improved and quantified diagnosis can lead to better health and lower costs associated with health care. Systems support improvements towards Lean and Six Sigma affecting all branches of biology and medicine. As an example for use of statistics, the major types of variation involving a study of Bone Mineral Density (BMD) are examined. Typically, special causes in medicine relate to illness and activities; whereas, common causes are known to be associated with gender, race, size, and genetic make-up. Such a strategy of Continuous Process Improvement (CPI) involves comparison of patient results to baseline data using F-statistics. Self-parings over time are also useful. Special and common causes are identified apart from aging in applying the statistical methods. In the future, implementation of imaging measurement methods by research staff, doctors, and concerned patient partners result in improved health diagnosis, reporting, and cause determination. The long-term prospects for quantified measurements are better quality in imaging analysis with applications of higher utility for heath care providers.

  5. MRI to predict nipple-areola complex (NAC) involvement: An automatic method to compute the 3D distance between the NAC and tumor.

    PubMed

    Giannini, Valentina; Bianchi, Veronica; Carabalona, Silvia; Mazzetti, Simone; Maggiorotto, Furio; Kubatzki, Franziska; Regge, Daniele; Ponzone, Riccardo; Martincich, Laura

    2017-12-01

    To assess the role of a newly developed automatic method, which computes the 3D tumor-NAC distance, in predicting nipple-areola complex (NAC) involvement. Ninety-nine patients scheduled for nipple sparing mastectomy (NSM) underwent magnetic resonance (MR) examination at 1.5 T, including sagittal T2w and dynamic contrast enhanced (DCE)-MR imaging. An automatic method was developed to segment the NAC and the tumor and to compute the 3D distance between them. The automatic measurement was compared with manual axial and sagittal 2D measurements. NAC involvement was defined by the presence of invasive ductal or lobular carcinoma and/or ductal carcinoma in situ or ductal intraepithelial neoplasia (DIN1c - DIN3). The tumor-NAC distance was computed in 95/99 patients (25 NAC+), as three tumors were not correctly segmented (sensitivity = 97%) and 1 NAC was not detected (sensitivity = 99%). The automatic 3D distance reached the highest area under the receiver operating characteristic (ROC) curve (0.830) with respect to the manual axial (0.676), sagittal (0.664), and minimum distances (0.664). At the best cut-off point of 21 mm, the 3D distance obtained sensitivity = 72%, specificity = 80%, positive predictive value = 56%, and negative predictive value = 89%. This method could provide a reproducible biomarker to preoperatively select breast cancer patients who are candidates for NSM, thus helping surgical planning and intraoperative management of patients. © 2017 Wiley Periodicals, Inc.
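
    A minimal sketch of the geometric step only: given voxel coordinates (in millimetres) of the segmented NAC and tumor, the 3D tumor-NAC distance is the minimum pairwise Euclidean distance, computed here with a k-d tree. Segmentation itself and the 21 mm threshold come from the paper; the coordinates below are invented.

    ```python
    # Sketch only: minimum 3D Euclidean distance between two segmented voxel sets.
    import numpy as np
    from scipy.spatial import cKDTree

    rng = np.random.default_rng(7)
    nac_voxels   = rng.normal(loc=[0, 0, 0],   scale=4.0, size=(500, 3))   # mm (hypothetical)
    tumor_voxels = rng.normal(loc=[30, 5, 10], scale=6.0, size=(800, 3))   # mm (hypothetical)

    distances, _ = cKDTree(nac_voxels).query(tumor_voxels)   # nearest NAC voxel per tumor voxel
    d3d = distances.min()
    label = "predicted NAC+" if d3d <= 21 else "predicted NAC-"
    print(f"3D tumor-NAC distance = {d3d:.1f} mm -> {label}")
    ```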

  6. Measurement of Surface Tension of Solid Cu by Improved Multiphase Equilibrium

    NASA Astrophysics Data System (ADS)

    Nakamoto, Masashi; Liukkonen, Matti; Friman, Michael; Heikinheimo, Erkki; Hämäläinen, Marko; Holappa, Lauri

    2008-08-01

    The surface tension of solid Cu was measured with the multiphase equilibrium (MPE) method in a Pb-Cu system at 700 °C, 800 °C, and 900 °C. A special focus was on the measurement of angles involved in MPE. First, the effect of reading error in each angle measurement on the final result of surface tension of solid was simulated. It was found that the two groove measurements under atmosphere conditions are the primary sources of error in the surface tension of solid in the present system. Atomic force microscopy (AFM) was applied to these angle measurements as a new method with high accuracy. The obtained surface-tension values of solid Cu in the present work were 1587, 1610, and 1521 mN/m at 700 °C, 800 °C, and 900 °C, respectively, representing reasonable temperature dependence.

  7. Instrumentation and method for measuring NIR light absorbed in tissue during MR imaging in medical NIRS measurements

    NASA Astrophysics Data System (ADS)

    Myllylä, Teemu S.; Sorvoja, Hannu S. S.; Nikkinen, Juha; Tervonen, Osmo; Kiviniemi, Vesa; Myllylä, Risto A.

    2011-07-01

    Our goal is to provide a cost-effective method for examining human tissue, particularly the brain, by the simultaneous use of functional magnetic resonance imaging (fMRI) and near-infrared spectroscopy (NIRS). Due to its compatibility requirements, MRI poses a demanding challenge for NIRS measurements. This paper focuses particularly on presenting the instrumentation and a method for the non-invasive measurement of NIR light absorbed in human tissue during MR imaging. One practical method to avoid disturbances in MR imaging involves using long fibre bundles to enable conducting the measurements at some distance from the MRI scanner. This setup serves in fact a dual purpose, since also the NIRS device will be less disturbed by the MRI scanner. However, measurements based on long fibre bundles suffer from light attenuation. Furthermore, because one of our primary goals was to make the measuring method as cost-effective as possible, we used high-power light emitting diodes instead of more expensive lasers. The use of LEDs, however, limits the maximum output power which can be extracted to illuminate the tissue. To meet these requirements, we improved methods of emitting light sufficiently deep into tissue. We also show how to measure NIR light of a very small power level that scatters from the tissue in the MRI environment, which is characterized by strong electromagnetic interference. In this paper, we present the implemented instrumentation and measuring method and report on test measurements conducted during MRI scanning. These measurements were performed in MRI operating rooms housing 1.5 Tesla-strength closed MRI scanners (manufactured by GE) in the Dept. of Diagnostic Radiology at the Oulu University Hospital.

  8. Josephson frequency meter for millimeter and submillimeter wavelengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anischenko, S.E.; Larkin, S.Y.; Chaikovsky, V.I.

    1994-12-31

    Frequency measurements of electromagnetic oscillations in the millimeter and submillimeter wavebands become more and more difficult as frequency grows, for a number of reasons. First, these frequencies are considered to be cutoff frequencies for semiconductor converting devices, so one has to use optical measurement methods instead of traditional ones based on frequency transfer. Second, resonance measurement methods are characterized by relatively narrow bands, while optical ones are limited in frequency and time resolution by the limited range and velocity of movement of their mechanical elements; moreover, the efficiency of these optical techniques decreases with increasing wavelength due to diffraction losses. That requires a priori information on the radiation frequency band of the source involved. A method of measuring the frequency of harmonic microwave signals in the millimeter and submillimeter wavebands based on the ac Josephson effect in superconducting contacts is devoid of all the above drawbacks. This approach offers a number of major advantages over the more traditional measurement methods, that is, those based on frequency conversion, resonance and interferometric techniques. It can be characterized by high potential accuracy, a wide range of measurable frequencies, prompt measurement and the opportunity to obtain a panoramic display of the results, as well as full automation of the measuring process.
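
    The frequency-to-voltage relation underlying such an instrument is the ac Josephson relation f = K_J·V with K_J = 2e/h, so a junction voltage measurement gives the radiation frequency directly. The sketch below uses the CODATA value from scipy; the example voltage is invented.

    ```python
    # Sketch only: convert a Josephson junction voltage to the measured radiation frequency.
    from scipy.constants import physical_constants

    K_J = physical_constants["Josephson constant"][0]   # 2e/h in Hz per volt, ~4.836e14
    voltage = 0.62e-3                                    # junction voltage, V (example value)
    print(f"f = {K_J * voltage / 1e9:.1f} GHz")
    ```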

  9. Future directions for HxOy detection, executive summary

    NASA Technical Reports Server (NTRS)

    1986-01-01

    New methods for the measurement of OH radicals were assessed, as were currently available and possible future methods for the other HxOy species, HO2 and H2O2. The workshop participants were invited from different groups: modelers of atmospheric photochemistry, experimentalists measuring HxOy species with laser and nonlaser methods, and chemists and physicists familiar with such experiments but not involved in atmospheric monitoring. There were three major conclusions from the workshop concerning the OH radical. First, it was felt that local measurements made by laser techniques would be ready within 2 or 3 years to furnish reliable measurements at the level of 1,000,000 per cu. cm. Second, measurements at this level of sensitivity and with attainable levels of precision could indeed be used to make useful and interesting tests of the fast photochemistry of the troposphere. It is important, however, that the measurements be carefully designed, with respect to spatial and temporal averaging, if there is to be a meaningful comparison between results from two experimental methods or between a measurement and a model. Third, nonlocal measurements using released reactants and tracers would also be very useful. These could be made on a regional or global basis, although they still require experimental design including choice of compounds.

  10. Methods of measurement signal acquisition from the rotational flow meter for frequency analysis

    NASA Astrophysics Data System (ADS)

    Świsulski, Dariusz; Hanus, Robert; Zych, Marcin; Petryka, Leszek

    One of the simplest and most commonly used instruments for measuring the flow of homogeneous substances is the rotational flow meter. The main part of such a device is a rotor (vane or screw) rotating at a speed which is a function of the fluid or gas flow rate. A pulse signal with a frequency proportional to the speed of the rotor is obtained at the sensor output. For measurements under dynamic conditions, the variable interval between pulses hinders analysis of the measurement signal. Therefore, the authors of the article developed a method involving the determination of measured values on the basis of the last inter-pulse interval preceding the moment designated by the timing generator. For larger changes of the measured value within the predetermined time, the value can be determined by extrapolation from the two adjacent inter-pulse intervals, assuming a linear change in the flow. The proposed methods provide the constant spacing between measurements required for analysing the dynamics of changes in the test flow, e.g. using a Fourier transform. To present the advantages of these methods, simulations of flow measurement were carried out with a DRH-1140 rotor flow meter from the company Kobold.
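
    A hedged sketch of the first method described: rotor pulses arrive at irregular times, and at each tick of a uniform timing grid the flow-proportional frequency is taken from the last complete inter-pulse interval preceding that tick, giving equally spaced samples suitable for Fourier analysis. The pulse times are simulated, not measured, and the linear-extrapolation variant is omitted.

    ```python
    # Sketch only: resample irregular pulse intervals onto a uniform grid, then take an FFT.
    import numpy as np

    rng = np.random.default_rng(5)
    pulse_times = np.cumsum(rng.uniform(0.008, 0.012, 2000))   # irregular pulse arrivals, s
    grid = np.arange(0.1, pulse_times[-1], 0.05)               # uniform 20 Hz measurement grid

    freq_on_grid = []
    for t in grid:
        i = np.searchsorted(pulse_times, t) - 1                # index of last pulse before t
        interval = pulse_times[i] - pulse_times[i - 1]         # last complete inter-pulse gap
        freq_on_grid.append(1.0 / interval)                    # frequency ~ flow rate

    spectrum = np.abs(np.fft.rfft(np.asarray(freq_on_grid) - np.mean(freq_on_grid)))
    print(f"{len(grid)} uniform samples, dominant spectral bin: {spectrum.argmax()}")
    ```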

  11. Probe systems for measuring static pressure and turbulence intensity in fluid streams

    NASA Technical Reports Server (NTRS)

    Rossow, Vernon J. (Inventor)

    1993-01-01

    A method and an apparatus for measuring time-averaged static or ambient pressure and turbulence intensity in a turbulent stream are discussed. The procedure involves placing a plurality of probes in the stream. Each probe responds in a different manner to characteristics of the fluid stream, preferably as a result of having varying cross sections. The responses from the probes are used to eliminate unwanted components in the measured quantities for accurate determination of selected characteristics.

  12. Measuring H(+) Pumping and Membrane Potential Formation in Sealed Membrane Vesicle Systems.

    PubMed

    Wielandt, Alex Green; Palmgren, Michael G; Fuglsang, Anja Thoe; Günther-Pomorski, Thomas; Justesen, Bo Højen

    2016-01-01

    The activity of enzymes involved in active transport of matter across lipid bilayers can conveniently be assayed by measuring their consumption of energy, such as ATP hydrolysis, while it is more challenging to directly measure their transport activities as the transported substrate is not converted into a product and only moves a few nanometers in space. Here, we describe two methods for the measurement of active proton pumping across lipid bilayers and the concomitant formation of a membrane potential, applying the dyes 9-amino-6-chloro-2-methoxyacridine (ACMA) and oxonol VI. The methods are exemplified by assaying transport of the Arabidopsis thaliana plasma membrane H(+)-ATPase (proton pump), which after heterologous expression in Saccharomyces cerevisiae and subsequent purification has been reconstituted in proteoliposomes.

  13. AC and DC conductivity due to hopping mechanism in double ion doped ceramics

    NASA Astrophysics Data System (ADS)

    Rizwana, Mahboob, Syed; Sarah, P.

    2018-04-01

    The Sr1-2xNaxNdxBi4Ti4O15 (x = 0.1, 0.2 and 0.4) system was prepared by a sol-gel route involving the Pechini process of the modified polymeric precursor method. Phase identification was done using X-ray diffraction. Conduction in the prepared materials involves different mechanisms and is explained through detailed AC and DC conductivity studies. AC conductivity studies carried out on the samples at different frequencies and temperatures give more information about electrical transport. The exponents used in the two-term power relation help to explain the different hopping mechanisms involved at low as well as high frequencies. Activation energies at different temperatures and frequencies are calculated from the Arrhenius plots. The hopping frequency calculated from the measured data explains the hopping of charge carriers at different temperatures. DC conductivity studies help to clarify the role of oxygen vacancies in conduction.
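
    A hedged sketch of this kind of analysis on synthetic data: the AC conductivity at one temperature is fitted to the two-term Jonscher power law σ(ω) = σ_dc + Aω^s, and an activation energy is taken from the slope of an Arrhenius plot of σ_dc versus 1/T. The functional forms are standard; the numbers are invented.

    ```python
    # Sketch only: Jonscher power-law fit of AC conductivity and an Arrhenius activation energy.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.constants import Boltzmann, e as q_e

    def jonscher(w, sigma_dc, A, s):
        return sigma_dc + A * np.power(w, s)

    w = np.logspace(2, 6, 40)                                  # angular frequency, rad/s
    noise = 1 + 0.02 * np.random.default_rng(2).standard_normal(w.size)
    sigma = jonscher(w, 1e-6, 1e-9, 0.7) * noise               # synthetic conductivity, S/cm
    (sigma_dc, A, s), _ = curve_fit(jonscher, w, sigma, p0=[1e-6, 1e-9, 0.5])
    print(f"sigma_dc = {sigma_dc:.2e} S/cm, exponent s = {s:.2f}")

    # Arrhenius: sigma_dc(T) = sigma_0 * exp(-Ea / kT) -> slope of ln(sigma_dc) vs 1/T gives -Ea/k
    T = np.array([450.0, 500.0, 550.0, 600.0, 650.0])          # K
    sigma_dc_T = 1e2 * np.exp(-0.85 * q_e / (Boltzmann * T))   # synthetic data, Ea = 0.85 eV
    slope, _ = np.polyfit(1.0 / T, np.log(sigma_dc_T), 1)
    print(f"activation energy = {-slope * Boltzmann / q_e:.2f} eV")
    ```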

  14. Pure phase encode magnetic field gradient monitor.

    PubMed

    Han, Hui; MacGregor, Rodney P; Balcom, Bruce J

    2009-12-01

    Numerous methods have been developed to measure MRI gradient waveforms and k-space trajectories. The most promising new strategy appears to be magnetic field monitoring with RF microprobes. Multiple RF microprobes may record the magnetic field evolution associated with a wide variety of imaging pulse sequences. The method involves exciting one or more test samples and measuring the time evolution of magnetization through the FIDs. Two critical problems remain. The gradient waveform duration is limited by the sample T(2)*, while the k-space maxima are limited by gradient dephasing. The method presented is based on pure phase encode FIDs and solves the above two problems in addition to permitting high strength gradient measurement. A small doped water phantom (1-3 mm droplet, T(1), T(2), T(2)* < 100 micros) within a microprobe is excited by a series of closely spaced broadband RF pulses each followed by FID single point acquisition. Two trial gradient waveforms have been chosen to illustrate the technique, neither of which could be measured by the conventional RF microprobe measurement. The first is an extended duration gradient waveform while the other illustrates the new method's ability to measure gradient waveforms with large net area and/or high amplitude. The new method is a point monitor with simple implementation and low cost hardware requirements.

  15. Strategies of experiment standardization and response optimization in a rat model of hemorrhagic shock and chronic hypertension.

    PubMed

    Reynolds, Penny S; Tamariz, Francisco J; Barbee, Robert Wayne

    2010-04-01

    Exploratory pilot studies are crucial to best practice in research but are frequently conducted without a systematic method for maximizing the amount and quality of information obtained. We describe the use of response surface regression models and simultaneous optimization methods to develop a rat model of hemorrhagic shock in the context of chronic hypertension, a clinically relevant comorbidity. A response surface regression model was applied to determine optimal levels of two inputs--dietary NaCl concentration (0.49%, 4%, and 8%) and time on the diet (4, 6, 8 weeks)--to achieve clinically realistic and stable target measures of systolic blood pressure while simultaneously maximizing critical oxygen delivery (a measure of vulnerability to hemorrhagic shock) and body mass M. Simultaneous optimization of the three response variables was performed through a dimensionality reduction strategy involving calculation of a single aggregate measure, the "desirability" function. Optimal conditions for inducing systolic blood pressure of 208 mmHg, critical oxygen delivery of 4.03 mL/min, and M of 290 g were determined to be 4% [NaCl] for 5 weeks. Rats on the 8% diet did not survive past 7 weeks. Response surface regression and simultaneous optimization techniques are commonly used in process engineering but have found little application to date in animal pilot studies. These methods will ensure both the scientific and ethical integrity of experimental trials involving animals and provide powerful tools for the development of novel models of clinically interacting comorbidities with shock.
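
    A small sketch of the simultaneous-optimization step: each response is mapped to a desirability d in [0, 1] and the overall desirability D is their geometric mean, which is then maximised over the design region. The target and limit values below are illustrative placeholders, not the study's fitted response-surface models.

    ```python
    # Sketch only: desirability-function aggregation of several predicted responses.
    import numpy as np

    def d_target(y, low, target, high):
        """Desirability for a target-is-best response (triangular form)."""
        if y <= low or y >= high:
            return 0.0
        return (y - low) / (target - low) if y < target else (high - y) / (high - target)

    def d_maximise(y, low, high):
        """Desirability for a larger-is-better response."""
        return float(np.clip((y - low) / (high - low), 0.0, 1.0))

    # hypothetical predicted responses at one candidate setting (e.g. 4 % NaCl, 5 weeks)
    sbp, do2_crit, mass = 208.0, 4.03, 290.0
    D = (d_target(sbp, 180, 210, 240) *
         d_maximise(do2_crit, 3.0, 4.5) *
         d_maximise(mass, 250, 320)) ** (1.0 / 3.0)     # geometric mean of three desirabilities
    print(f"overall desirability D = {D:.2f}")
    ```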

  16. Assessment of cosmetic ingredients in the in vitro reconstructed human epidermis test method EpiSkin™ using HPLC/UPLC-spectrophotometry in the MTT-reduction assay.

    PubMed

    Alépée, N; Hibatallah, J; Klaric, M; Mewes, K R; Pfannenbecker, U; McNamee, P

    2016-06-01

    Cosmetics Europe recently established HPLC/UPLC-spectrophotometry as a suitable alternative endpoint detection system for measurement of formazan in the MTT-reduction assay of reconstructed human tissue test methods, irrespective of the test system involved. This addressed a known limitation of such test methods, which use optical density for measurement of formazan and may be incompatible with the evaluation of strong MTT reducers and/or coloured chemicals. To build on the original project, Cosmetics Europe has undertaken a second study that focuses on evaluation of chemicals with functionalities relevant to cosmetic products. Such chemicals were primarily identified from the Scientific Committee on Consumer Safety (SCCS) 2010 memorandum (addendum) on the in vitro test EpiSkin™ for skin irritation testing. Fifty test items were evaluated in which both standard photometry and HPLC/UPLC-spectrophotometry were used for endpoint detection. The results obtained in this study: 1) provide further support for the within-laboratory reproducibility of HPLC/UPLC-spectrophotometry for measurement of formazan; 2) demonstrate, through a case study with Basazol C Blue pr. 8056, that HPLC/UPLC-spectrophotometry enables determination of an in vitro classification even when this is not possible using standard photometry; and 3) address the question raised by the SCCS in their 2010 memorandum (addendum) to consider an endpoint detection system not involving optical density quantification in in vitro reconstructed human epidermis skin irritation test methods. Copyright © 2016 Elsevier Ltd. All rights reserved.

  17. Direct nuclear reaction experiments for stellar nucleosynthesis

    NASA Astrophysics Data System (ADS)

    Cherubini, S.

    2017-09-01

    During the last two decades indirect methods were proposed and used in many experiments in order to measure nuclear cross sections between charged particles at stellar energies. These are among the lowest to be measured in nuclear physics. One of these methods, the Trojan Horse method, is based on the Quasi-Free reaction mechanism and has proved to be particularly flexible and reliable. It allowed for the measurement of the cross sections of various reactions of astrophysical interest using stable beams. The use and reliability of indirect methods become even more important when reactions induced by Radioactive Ion Beams are considered, given the much lower intensity generally available for these beams. The first Trojan Horse measurement of a process involving the use of a Radioactive Ion Beam dealt with the ¹⁸F(p,α)¹⁵O process in Nova conditions. To obtain information on this process, in particular about its cross section at Nova energies, the Trojan Horse method was applied to the ¹⁸F(d,α¹⁵O)n three-body reaction. In order to establish the reliability of the Trojan Horse method approach, the Treiman-Yang criterion is an important test and it will be addressed briefly in this paper.

  18. A method for measuring the inertia properties of rigid bodies

    NASA Astrophysics Data System (ADS)

    Gobbi, M.; Mastinu, G.; Previati, G.

    2011-01-01

    A method for the measurement of the inertia properties of rigid bodies is presented. Given a rigid body and its mass, the method allows the centre of gravity location and the inertia tensor to be measured (identified) during a single test. The proposed technique is based on the analysis of the free motion of a multi-cable pendulum to which the body under consideration is connected. The motion of the pendulum and the forces acting on the system are recorded, and the inertia properties are identified by means of a mathematical procedure based on least-squares estimation. After the body is positioned on the test rig, the full identification procedure takes less than 10 min. The natural frequencies of the pendulum and the accelerations involved are quite low, making this method suitable for many practical applications. In this paper, the proposed method is described and two test rigs are presented: the first is developed for bodies up to 3500 kg and the second for bodies up to 400 kg. A validation of the measurement method is performed with satisfactory results. The test rig holds a third-party quality certificate according to the ISO 9001 standard and could be scaled up to measure the inertia properties of huge bodies, such as trucks, airplanes or even ships.

  19. The Role of Psychological and Physiological Factors in Decision Making under Risk and in a Dilemma

    PubMed Central

    Fooken, Jonas; Schaffner, Markus

    2016-01-01

    Different methods to elicit risk attitudes of individuals often provide differing results despite a common theory. Reasons for such inconsistencies may be the different influence of underlying factors in risk-taking decisions. In order to evaluate this conjecture, a better understanding of underlying factors across methods and decision contexts is desirable. In this paper we study the difference in results between two risk elicitation methods by linking estimates of risk attitudes to gender, age, and personality traits, which have been shown to be related to risk attitudes. We also investigate the role of these factors during decision-making in a dilemma situation. For these two decision contexts we also investigate the decision-maker's physiological state during the decision, measured by heart rate variability (HRV), which we use as an indicator of emotional involvement. We found that the two elicitation methods provide different individual risk attitude measures, which is partly reflected in a different gender effect between the methods. Personality traits explain only relatively little in terms of driving risk attitudes and the difference between methods. We also found that risk taking and the physiological state are related for one of the methods, suggesting that more emotionally involved individuals are more risk averse in the experiment. Finally, we found evidence that personality traits are connected to whether individuals made a decision in the dilemma situation, but risk attitudes and the physiological state were not indicative of the ability to decide in this decision context. PMID:26834591

  20. Quasi-Uniform High Speed Foam Crush Testing Using a Guided Drop Mass Impact

    NASA Technical Reports Server (NTRS)

    Jones, Lisa E. (Technical Monitor); Kellas, Sotiris

    2004-01-01

    A relatively simple method for measuring the dynamic crush response of foam materials at various loading rates is described. The method utilizes a drop mass impact configuration with mass and impact velocity selected such that the crush speed remains approximately uniform during the entire sample crushing event. Instrumentation, data acquisition, and data processing techniques are presented, and limitations of the test method are discussed. The objective of the test method is to produce input data for dynamic finite element modeling involving crash and energy absorption characteristics of foam materials.
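
    The mass and velocity selection mentioned above can be reasoned about with a simple energy balance: if the kinetic energy of the drop mass is much larger than the energy absorbed while crushing the specimen, the crush speed changes little during the event. The sketch below illustrates that check; the plateau stress, specimen dimensions, and drop parameters are illustrative assumptions, not values from the report.

```python
import math

# Hedged sketch: verify that the drop mass and impact velocity keep the crush
# speed approximately uniform by checking the fractional velocity loss over the
# crushing event. All numbers are illustrative, not from the test program.

def crush_speed_loss(mass_kg, impact_velocity, plateau_stress, area, crush_depth):
    """Fractional velocity loss over the crushing event (simple energy balance)."""
    ke0 = 0.5 * mass_kg * impact_velocity ** 2
    absorbed = plateau_stress * area * crush_depth       # ~ flat crush plateau
    v_end = math.sqrt(max(2.0 * (ke0 - absorbed) / mass_kg, 0.0))
    return 1.0 - v_end / impact_velocity

# Example: 25 mm thick, 50 x 50 mm foam sample with a 0.5 MPa plateau stress,
# crushed to 80% strain at a nominal 5 m/s with a 20 kg drop mass.
loss = crush_speed_loss(mass_kg=20.0, impact_velocity=5.0,
                        plateau_stress=0.5e6, area=0.05 * 0.05,
                        crush_depth=0.8 * 0.025)
print(f"velocity loss over the event: {loss:.1%}")  # small => quasi-uniform speed
```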

  1. An evaluation of a bioelectrical impedance analyser for the estimation of body fat content.

    PubMed Central

    Maughan, R J

    1993-01-01

    Measurement of body composition is an important part of any assessment of health or fitness. Hydrostatic weighing is generally accepted as the most reliable method for the measurement of body fat content, but is inconvenient. Electrical impedance analysers have recently been proposed as an alternative to the measurement of skinfold thickness. Both these latter methods are convenient, but give values based on estimates obtained from population studies. This study compared values of body fat content obtained by hydrostatic weighing, skinfold thickness measurement and electrical impedance on 50 (28 women, 22 men) healthy volunteers. Mean(s.e.m.) values obtained by the three methods were: hydrostatic weighing, 20.5(1.2)%; skinfold thickness, 21.8(1.0)%; impedance, 20.8(0.9)%. The results indicate that the correlation between the skinfold method and hydrostatic weighing (0.931) is somewhat higher than that between the impedance method and hydrostatic weighing (0.830). This is, perhaps, not surprising given the fact that the impedance method is based on an estimate of total body water which is then used to calculate body fat content. The skinfold method gives an estimate of body density, and the assumptions involved in the conversion from body density to body fat content are the same for both methods. PMID:8457817

  2. Comparing Single-Point and Multi-point Calibration Methods in Modulated DSC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Buskirk, Caleb Griffith

    2017-06-14

    Heat capacity measurements for High Density Polyethylene (HDPE) and Ultra-high Molecular Weight Polyethylene (UHMWPE) were performed using Modulated Differential Scanning Calorimetry (mDSC) over a wide temperature range, -70 to 115 °C, with a TA Instruments Q2000 mDSC. The default calibration method for this instrument involves measuring the heat capacity of a sapphire standard at a single temperature near the middle of the temperature range of interest. However, this method often fails for temperature ranges that exceed a 50 °C interval, likely because of drift or non-linearity in the instrument's heat capacity readings over time or over the temperature range. Therefore, in this study a method was developed to calibrate the instrument using multiple temperatures and the same sapphire standard.
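
    A common way to implement such a multi-point calibration is to compute a temperature-dependent calibration factor from the sapphire runs and interpolate it across the range, rather than applying a single mid-range factor. The sketch below illustrates that idea; the interpolation scheme and all numerical values are illustrative assumptions, not the calibration procedure or data of this study.

```python
import numpy as np

# Hedged sketch of a multi-point heat-capacity calibration: compute a
# temperature-dependent calibration factor K(T) from sapphire runs at several
# temperatures and interpolate between them, instead of using one mid-range
# point. All values below are illustrative, not certified reference data.

T_cal = np.array([-70.0, -25.0, 20.0, 65.0, 115.0])          # deg C
cp_sapphire_ref = np.array([0.50, 0.65, 0.77, 0.86, 0.93])   # J/(g K), illustrative
cp_sapphire_meas = np.array([0.47, 0.63, 0.76, 0.87, 0.96])  # hypothetical mDSC readings

K = cp_sapphire_ref / cp_sapphire_meas                        # calibration factor per point

def calibrate_cp(T, cp_measured):
    """Apply the interpolated multi-point calibration factor to sample data."""
    return np.interp(T, T_cal, K) * cp_measured

# Usage: correct a (hypothetical) measured HDPE heat-capacity curve.
T_sample = np.linspace(-70, 115, 5)
cp_hdpe_meas = np.array([1.20, 1.45, 1.75, 2.10, 2.60])
print(calibrate_cp(T_sample, cp_hdpe_meas))
```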

  3. Engaging parents to increase youth physical activity: a systematic review.

    PubMed

    O'Connor, Teresia M; Jago, Russell; Baranowski, Tom

    2009-08-01

    Parents are often involved in interventions to engage youth in physical activity, but it is not clear which methods for involving parents are effective. A systematic review was conducted of interventions with physical activity and parental components among healthy youth to identify how best to involve parents in physical activity interventions for children. Identified intervention studies were reviewed in 2008 for study design, description of family components, and physical activity outcomes. The quality of reporting was assessed using the CONSORT checklist for reporting on trials of nonpharmacologic treatments. The literature search identified 1227 articles, 35 of which met review criteria. Five of the 14 RCTs met ≥70% of CONSORT checklist items. Five general procedures for involving parents were identified: (1) face-to-face educational programs or parent training, (2) family participatory exercise programs, (3) telephone communication, (4) organized activities, and (5) educational materials sent home. Lack of uniformity in reporting trials, multiple pilot studies, and varied measurements of physical activity outcomes prohibited systematic conclusions. Interventions with educational or training programs during family visits or via telephone communication with parents appear to offer some promise. There is little evidence for effectiveness of family involvement methods in programs for promoting physical activity in children, because of the heterogeneity of study design, study quality, and outcome measures used. There is a need to build an evidence base of more-predictive models of child physical activity that include parent and child mediating variables and procedures that can effect changes in these variables for future family-based physical activity interventions.

  4. Concept for an off-line gain stabilisation method.

    PubMed

    Pommé, S; Sibbens, G

    2004-01-01

    Conceptual ideas are presented for an off-line gain stabilisation method for spectrometry, in particular for alpha-particle spectrometry at low count rate. The method involves list mode storage of individual energy and time stamp data pairs. The 'Stieltjes integral' of measured spectra with respect to a reference spectrum is proposed as an indicator for gain instability. 'Exponentially moving averages' of the latter show the gain shift as a function of time. With this information, the data are relocated stochastically on a point-by-point basis.
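
    The exponentially-moving-average step can be illustrated independently of the specific Stieltjes-integral indicator, which is not specified here in enough detail to reproduce. The sketch below applies an EMA to a per-event gain indicator derived from hypothetical list-mode data; the indicator values are synthetic placeholders.

```python
import numpy as np

# Hedged sketch of the moving-average step only: given list-mode events and
# some per-event gain-shift indicator (e.g. derived by comparing the
# accumulating spectrum with a reference spectrum), track the gain drift over
# time with an exponentially moving average. The indicator is a placeholder.

def exponential_moving_average(values, alpha=0.01):
    """EMA with smoothing factor alpha; smaller alpha = longer memory."""
    ema = np.empty_like(values, dtype=float)
    ema[0] = values[0]
    for i in range(1, len(values)):
        ema[i] = alpha * values[i] + (1.0 - alpha) * ema[i - 1]
    return ema

# Usage with synthetic list-mode data: a slow 1% gain drift plus noise.
rng = np.random.default_rng(0)
n_events = 5000
gain_indicator = 1.0 + 0.01 * np.linspace(0, 1, n_events) + rng.normal(0, 0.05, n_events)
drift_vs_event = exponential_moving_average(gain_indicator, alpha=0.005)
```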

  5. Method for analyzing microbial communities

    DOEpatents

    Zhou, Jizhong [Oak Ridge, TN; Wu, Liyou [Oak Ridge, TN

    2010-07-20

    The present invention provides a method for quantitatively analyzing microbial genes, species, or strains in a sample that contains at least two species or strains of microorganisms. The method involves using an isothermal DNA polymerase to randomly and representatively amplify genomic DNA of the microorganisms in the sample, hybridizing the resultant polynucleotide amplification product to a polynucleotide microarray that can differentiate different genes, species, or strains of microorganisms of interest, and measuring hybridization signals on the microarray to quantify the genes, species, or strains of interest.

  6. Chromatic dispersion estimation based on heterodyne detection for coherent optical communication systems

    NASA Astrophysics Data System (ADS)

    Li, Yong; Yang, Aiying; Guo, Peng; Qiao, Yaojun; Lu, Yueming

    2018-01-01

    We propose an accurate and nondata-aided chromatic dispersion (CD) estimation method involving the use of the cross-correlation function of two heterodyne detection signals for coherent optical communication systems. Simulations are implemented to verify the feasibility of the proposed method for 28-GBaud coherent systems with different modulation formats. The results show that the proposed method has high accuracy for measuring CD and has good robustness against laser phase noise, amplified spontaneous emission noise, and nonlinear impairments.
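
    The core signal-processing step, cross-correlating the two detected signals, can be sketched as below. How the resulting correlation peak (and the shape of the cross-correlation function) is converted into an accumulated-dispersion estimate is specific to the proposed method and is not reproduced; the sample rate and synthetic signals are assumptions.

```python
import numpy as np

# Hedged sketch of the core operation only: cross-correlate two detected
# signals and locate the lag of the correlation peak. The mapping from this
# lag to accumulated chromatic dispersion is specific to the proposed
# estimator and is not reproduced here.

def delay_estimate(x, y, sample_period):
    """Delay of y relative to x (seconds) from the cross-correlation peak."""
    xc = np.correlate(y - y.mean(), x - x.mean(), mode="full")
    lags = np.arange(-len(x) + 1, len(y))
    return lags[np.argmax(np.abs(xc))] * sample_period

# Usage with synthetic data: y is x delayed by 25 samples plus noise.
rng = np.random.default_rng(1)
x = rng.normal(size=4096)
y = np.roll(x, 25) + 0.1 * rng.normal(size=4096)
print(delay_estimate(x, y, sample_period=1 / 56e9))  # ~25 samples at 56 GSa/s
```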

  7. Methods for sampling geographically mobile female traders in an East African market setting

    PubMed Central

    Achiro, Lillian; Kwena, Zachary A.; McFarland, Willi; Neilands, Torsten B.; Cohen, Craig R.; Bukusi, Elizabeth A.; Camlin, Carol S.

    2018-01-01

    Background The role of migration in the spread of HIV in sub-Saharan Africa is well-documented. Yet migration and HIV research have often focused on HIV risks to male migrants and their partners, or migrants overall, often failing to measure the risks to women via their direct involvement in migration. Inconsistent measures of mobility, gender biases in those measures, and limited data sources for sex-specific population-based estimates of mobility have contributed to a paucity of research on the HIV prevention and care needs of migrant and highly mobile women. This study addresses an urgent need for novel methods for developing probability-based, systematic samples of highly mobile women, focusing on a population of female traders operating out of one of the largest open air markets in East Africa. Our method involves three stages: 1.) identification and mapping of all market stall locations using Global Positioning System (GPS) coordinates; 2.) using female market vendor stall GPS coordinates to build the sampling frame using replicates; and 3.) using maps and GPS data for recruitment of study participants. Results The locations of 6,390 vendor stalls were mapped using GPS. Of these, 4,064 stalls occupied by women (63.6%) were used to draw four replicates of 128 stalls each, and a fifth replicate of 15 pre-selected random alternates for a total of 527 stalls assigned to one of five replicates. Staff visited 323 stalls from the first three replicates and from these successfully recruited 306 female vendors into the study for a participation rate of 94.7%. Mobilization strategies and involving traders’ association representatives in participant recruitment were critical to the study’s success. Conclusion The study’s high participation rate suggests that this geospatial sampling method holds promise for development of probability-based samples in other settings that serve as transport hubs for highly mobile populations. PMID:29324780
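
    The replicate-drawing stage (stage 2 above) amounts to partitioning a simple random sample of the female-occupied stalls into fixed-size replicates that are released sequentially during fieldwork. The sketch below illustrates this with the replicate sizes reported in the abstract; the stall identifiers are hypothetical.

```python
import numpy as np

# Hedged sketch of the replicate-drawing stage: from a GPS-mapped sampling
# frame restricted to female-occupied stalls, draw a simple random sample and
# split it into fixed-size replicates (4 x 128 plus 15 alternates, per the
# abstract). Stall IDs are hypothetical.

def draw_replicates(stall_ids, sizes=(128, 128, 128, 128, 15), seed=0):
    rng = np.random.default_rng(seed)
    sampled = rng.choice(stall_ids, size=sum(sizes), replace=False)
    replicates, start = [], 0
    for s in sizes:
        replicates.append(sampled[start:start + s])
        start += s
    return replicates

# Usage: 4,064 hypothetical female-occupied stall IDs from a frame of 6,390.
female_stalls = np.arange(4064)
reps = draw_replicates(female_stalls)
print([len(r) for r in reps])  # [128, 128, 128, 128, 15]
```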

  8. The processing and transmission of EEG data

    NASA Technical Reports Server (NTRS)

    Schulze, A. E.

    1974-01-01

    Interest in sleep research was stimulated by the discovery of a number of physiological changes that occur during sleep and by the observed effects of sleep on physical and mental performance and status. The use of the relatively new methods of EEG measurement, transmission, and automatic scoring makes sleep analysis and categorization feasible. Sleep research involving the use of the EEG as a fundamental input has the potential of answering many unanswered questions involving physical and mental behavior, drug effects, circadian rhythm, and anesthesia.

  9. Enhancing maximum measurable sound reduction index using sound intensity method and strong receiving room absorption.

    PubMed

    Hongisto, V; Lindgren, M; Keränen, J

    2001-01-01

    The sound intensity method is usually recommended instead of the pressure method in the presence of strong flanking transmission. Especially when small and/or heavy specimens are tested, the flanking often causes problems in laboratories practicing only the pressure method. The purpose of this study was to determine experimentally the difference between the maximum sound reduction indices obtained by the intensity method, RI,max, and by the pressure method, Rmax. In addition, the influence of adding room absorption to the receiving room was studied. The experiments were carried out in an ordinary two-room test laboratory. The exact value of RI,max was estimated by applying a fitting equation to the measured data points. The fitting equation involved the dependence of the pressure-intensity indicator on measured acoustical parameters. In an empty receiving room, the difference between RI,max and Rmax was 4-15 dB, depending on frequency. When the average reverberation time was reduced from 3.5 to 0.6 s, the values of RI,max increased by 2-10 dB compared to the results in the empty room. Thus, it is possible to measure wall structures having 9-22 dB better sound reduction index using the intensity method than with the pressure method. This facilitates the measurements of small and/or heavy specimens in the presence of flanking. Moreover, when new laboratories are designed, the intensity method is an alternative to the pressure method which presupposes expensive isolation structures between the rooms.

  10. MERCURY MEASUREMENTS FOR SOLIDS MADE RAPIDLY, SIMPLY, AND INEXPENSIVELY

    EPA Science Inventory

    While traditional methods for determining mercury in solid samples involve the use of aggressive chemicals to dissolve the matrix and the use of other chemicals to properly reduce the mercury to the volatile elemental form, pyrolysis-based analyzers can be used by directly weighi...

  11. 75 FR 29991 - Marine Mammals; receipt of application for permit amendment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... research on cetacean behavior, sound production, and responses to sound. The research methods include... amendment to Permit No. 14241 to conduct research on marine mammals. ADDRESSES: The application and related... animal hears and measures vocalization, behavior, and physiological parameters. Research also involves...

  12. A Novel Yeast Genomics Method for Identifying New Breast Cancer Susceptibility

    DTIC Science & Technology

    2005-05-01

    selectable marker and tracing this marker through several passages in nonselective medium. The selectable marker will be the hygromycin phosphotransferase ... hygromycin and sensitivity to (32), thereby providing both positive and negative selectivity. The assay involved measurement of the frequency of gancyclovir

  13. Delving Deeper: Transforming Shapes Physically and Analytically

    ERIC Educational Resources Information Center

    Rathouz, Margaret; Novak, Christopher; Clifford, John

    2013-01-01

    Constructing formulas "from scratch" for calculating geometric measurements of shapes--for example, the area of a triangle--involves reasoning deductively and drawing connections between different methods (Usnick, Lamphere, and Bright 1992). Visual and manipulative models also play a role in helping students understand the underlying…

  14. GPA, GMAT, and Scale: A Method of Quantification of Admissions Criteria.

    ERIC Educational Resources Information Center

    Sobol, Marion G.

    1984-01-01

    Multiple regression analysis was used to establish a scale, measuring college student involvement in campus activities, work experience, technical background, references, and goals. This scale was tested to see whether it improved the prediction of success in graduate school. (Author/MLW)

  15. Currency crisis indication by using ensembles of support vector machine classifiers

    NASA Astrophysics Data System (ADS)

    Ramli, Nor Azuana; Ismail, Mohd Tahir; Wooi, Hooy Chee

    2014-07-01

    Many methods have been experimented with in the analysis of currency crises. However, not all methods could provide accurate indications. This paper introduces an ensemble of Support Vector Machine classifiers, an approach not previously applied to currency crisis analysis, with the aim of increasing indication accuracy. The proposed ensemble classifiers' performances are measured using percentage of accuracy, root mean squared error (RMSE), area under the Receiver Operating Characteristics (ROC) curve and Type II error. The performance of the ensemble of Support Vector Machine classifiers is compared with that of a single Support Vector Machine classifier, and both classifiers are tested on a data set from 27 countries with 12 macroeconomic indicators for each country. From our analyses, the results show that the ensemble of Support Vector Machine classifiers outperforms the single Support Vector Machine classifier on the problem of indicating a currency crisis in terms of a range of standard measures for comparing the performance of classifiers.
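
    A bagged ensemble of SVM classifiers of the general kind described above can be assembled with scikit-learn as sketched below. The synthetic data, features, and hyperparameters are assumptions for illustration only and do not reproduce the authors' dataset of 27 countries and 12 macroeconomic indicators.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Hedged sketch: compare a bagged ensemble of SVM classifiers with a single SVM
# on synthetic, imbalanced "crisis / no crisis" data with 12 indicator-like
# features. Not the authors' data, features, or tuning.

X, y = make_classification(n_samples=1000, n_features=12, weights=[0.85, 0.15],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y,
                                          random_state=0)

single = SVC(kernel="rbf", probability=True, random_state=0).fit(X_tr, y_tr)
ensemble = BaggingClassifier(SVC(kernel="rbf", probability=True),
                             n_estimators=25, random_state=0).fit(X_tr, y_tr)

for name, model in [("single SVM", single), ("SVM ensemble", ensemble)]:
    proba = model.predict_proba(X_te)[:, 1]
    print(name,
          "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3),
          "ROC AUC:", round(roc_auc_score(y_te, proba), 3))
```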

  16. Reliability improvement methods for sapphire fiber temperature sensors

    NASA Astrophysics Data System (ADS)

    Schietinger, C.; Adams, B.

    1991-08-01

    Mechanical, optical, electrical, and software design improvements can be brought to bear to enhance the reliability of fiber-optic sapphire-fiber temperature measurement tools in harsh environments. The optical fiber thermometry (OFT) equipment discussed is used in numerous process industries and generally involves a sapphire sensor, an optical transmission cable, and a microprocessor-based signal analyzer. OFT technology incorporating sensors for corrosive environments, hybrid sensors, and two-wavelength measurements is discussed.

  17. Optical and mechanical nondestructive tests for measuring tomato fruit firmness

    NASA Astrophysics Data System (ADS)

    Manivel-Chávez, Ricardo A.; Garnica-Romo, M. G.; Arroyo-Correa, Gabriel; Aranda-Sánchez, Jorge I.

    2011-08-01

    Ripening is one of the most important processes occurring in fruits, involving changes in color, flavor, and texture. An important goal in quality control of fruits is to substitute traditional sensory testing methods with reliable nondestructive tests (NDT). In this work we study the firmness of tomato fruits by using optical and mechanical NDT. Optical and mechanical parameters, measured along the tomato shelf life, are shown.

  18. Cleanliness evaluation of rough surfaces with diffuse IR reflectance

    NASA Technical Reports Server (NTRS)

    Pearson, L. H.

    1995-01-01

    Contamination on bonding surfaces has been determined to be a primary cause for degraded bond strength in certain solid rocket motor bondlines. Hydrocarbon and silicone based organic contaminants that are airborne or directly introduced to a surface are a significant source of contamination. Diffuse infrared (IR) reflectance has historically been used as an effective technique for detection of organic contaminants, however, common laboratory methods involving the use of a Fourier transform IR spectrometer (FTIR) are impractical for inspecting the large bonding surface areas found on solid rocket motors. Optical methods involving the use of acousto-optic tunable filters and fixed bandpass optical filters are recommended for increased data acquisition speed. Testing and signal analysis methods are presented which provide for simultaneous measurement of contamination concentration and roughness level on rough metal surfaces contaminated with hydrocarbons.

  19. Analysis of radon and thoron progeny measurements based on air filtration.

    PubMed

    Stajic, J M; Nikezic, D

    2015-02-01

    Measurement of radon and thoron progeny concentrations in air by air filtration was analysed in order to assess the reliability of the method. Changes of radon and thoron progeny activities on the filter during and after air sampling were investigated. Simulation experiments were performed involving realistic measuring parameters. The sensitivity of the results (radon and thoron concentrations in air) to the variations of alpha counting in three and five intervals was studied. The concentration of (218)Po proved to be the most sensitive to these changes, as expected because of its short half-life. The well-known method for measuring progeny concentrations based on air filtration is rather unreliable, and obtaining unrealistic or incorrect results appears to be quite possible. A simple method for quick estimation of the radon potential alpha energy concentration (PAEC), based on measurements of alpha activity in a saturation regime, was proposed. Thoron PAEC can be determined from the saturation activity on the filter, through beta or alpha measurements.
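
    One ingredient of such simulations is the build-up and decay of a single progeny's activity on the filter. For a constant collection rate R and decay constant λ, the activity during sampling is A(t) = R(1 - exp(-λt)) and it decays exponentially after the pump stops. The sketch below implements this for (218)Po; the collection rate and timing are illustrative assumptions, and the full three- and five-interval counting analysis is not reproduced.

```python
import numpy as np

# Hedged sketch of one simulation ingredient: activity of a single short-lived
# progeny (218Po, half-life ~3.05 min) collected on a filter at a constant rate
# during sampling, and its decay after the pump stops. Collection rate and
# timing below are illustrative assumptions.

LAMBDA_PO218 = np.log(2) / (3.05 * 60)   # decay constant, 1/s

def filter_activity(t, sampling_time, collection_rate, lam=LAMBDA_PO218):
    """Activity (Bq) on the filter at time t (s); dN/dt = R - lam*N while sampling."""
    t = np.asarray(t, dtype=float)
    a_build = collection_rate * (1.0 - np.exp(-lam * np.minimum(t, sampling_time)))
    decay = np.exp(-lam * np.clip(t - sampling_time, 0.0, None))
    return a_build * decay

# Usage: 5-minute sampling at a hypothetical collection rate of 10 atoms/s,
# evaluated every 30 s for 20 minutes.
times = np.arange(0, 1200, 30)
print(filter_activity(times, sampling_time=300, collection_rate=10.0))
```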

  20. Techniques For Measuring Absorption Coefficients In Crystalline Materials

    NASA Astrophysics Data System (ADS)

    Klein, Philipp H.

    1981-10-01

    Absorption coefficients smaller than 0.001 cm-1 can, with more or less difficulty, be measured by several techniques. With diligence, all methods can be refined to permit measurement of absorption coefficients as small as 0.00001 cm-1. Spectral data are most readily obtained by transmission (spectrophotometric) methods, using multiple internal reflection to increase effective sample length. Emissivity measurements, requiring extreme care in the elimination of detector noise and stray light, nevertheless afford the most accessible spectral data in the 0.0001 to 0.00001 cm-1 range. Single-wavelength information is most readily obtained with modifications of laser calorimetry. Thermocouple detection of energy absorbed from a laser beam is convenient, but involves dc amplification techniques and is susceptible to stray-light problems. Photoacoustic detection, using ac methods, tends to diminish errors of these types, but at some expense in experimental complexity. Laser calorimetry has been used for measurements of absorption coefficients as small as 0.000003 cm-1. Both transmission and calorimetric data, taken as functions of intensity, have been used for measurement of nonlinear absorption coefficients.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nose, Y.

    Methods were developed for generating an integrated, statistical model of the anatomical structures within the human thorax relevant to radioisotope powered artificial heart implantation. These methods involve measurement and analysis of anatomy in four areas: chest wall, pericardium, vascular connections, and great vessels. A model for the prediction of thorax outline from radiograms was finalized. These models were combined with 100 radiograms to arrive at a size distribution representing the adult male and female populations. (CH)

  2. Revivification of a method for identifying longleaf pine timber and its application to southern pine relicts in southeastern Virginia

    Treesearch

    Thomas L. Eberhardt; Philip M. Sheridan; Arvind A.R. Bhuta

    2011-01-01

    Abstract: Longleaf pine (Pinus palustris Mill.) cannot be distinguished from the other southern pines based on wood anatomy alone. A method that involves measuring pith and second annual ring diameters, reported by Arthur Koehler in 1932 (The Southern Lumberman, 145: 36–37), was revisited as an option for identifying longleaf pine timbers and stumps. Cross-section...

  3. Intracellular recording of action potentials by nanopillar electroporation.

    PubMed

    Xie, Chong; Lin, Ziliang; Hanson, Lindsey; Cui, Yi; Cui, Bianxiao

    2012-02-12

    Action potentials have a central role in the nervous system and in many cellular processes, notably those involving ion channels. The accurate measurement of action potentials requires efficient coupling between the cell membrane and the measuring electrodes. Intracellular recording methods such as patch clamping involve measuring the voltage or current across the cell membrane by accessing the cell interior with an electrode, allowing both the amplitude and shape of the action potentials to be recorded faithfully with high signal-to-noise ratios. However, the invasive nature of intracellular methods usually limits the recording time to a few hours, and their complexity makes it difficult to simultaneously record more than a few cells. Extracellular recording methods, such as multielectrode arrays and multitransistor arrays, are non-invasive and allow long-term and multiplexed measurements. However, extracellular recording sacrifices the one-to-one correspondence between the cells and electrodes, and also suffers from significantly reduced signal strength and quality. Extracellular techniques are not, therefore, able to record action potentials with the accuracy needed to explore the properties of ion channels. As a result, the pharmacological screening of ion-channel drugs is usually performed by low-throughput intracellular recording methods. The use of nanowire transistors, nanotube-coupled transistors and micro gold-spine and related electrodes can significantly improve the signal strength of recorded action potentials. Here, we show that vertical nanopillar electrodes can record both the extracellular and intracellular action potentials of cultured cardiomyocytes over a long period of time with excellent signal strength and quality. Moreover, it is possible to repeatedly switch between extracellular and intracellular recording by nanoscale electroporation and resealing processes. Furthermore, vertical nanopillar electrodes can detect subtle changes in action potentials induced by drugs that target ion channels.

  4. Intracellular recording of action potentials by nanopillar electroporation

    NASA Astrophysics Data System (ADS)

    Xie, Chong; Lin, Ziliang; Hanson, Lindsey; Cui, Yi; Cui, Bianxiao

    2012-03-01

    Action potentials have a central role in the nervous system and in many cellular processes, notably those involving ion channels. The accurate measurement of action potentials requires efficient coupling between the cell membrane and the measuring electrodes. Intracellular recording methods such as patch clamping involve measuring the voltage or current across the cell membrane by accessing the cell interior with an electrode, allowing both the amplitude and shape of the action potentials to be recorded faithfully with high signal-to-noise ratios. However, the invasive nature of intracellular methods usually limits the recording time to a few hours, and their complexity makes it difficult to simultaneously record more than a few cells. Extracellular recording methods, such as multielectrode arrays and multitransistor arrays, are non-invasive and allow long-term and multiplexed measurements. However, extracellular recording sacrifices the one-to-one correspondence between the cells and electrodes, and also suffers from significantly reduced signal strength and quality. Extracellular techniques are not, therefore, able to record action potentials with the accuracy needed to explore the properties of ion channels. As a result, the pharmacological screening of ion-channel drugs is usually performed by low-throughput intracellular recording methods. The use of nanowire transistors, nanotube-coupled transistors and micro gold-spine and related electrodes can significantly improve the signal strength of recorded action potentials. Here, we show that vertical nanopillar electrodes can record both the extracellular and intracellular action potentials of cultured cardiomyocytes over a long period of time with excellent signal strength and quality. Moreover, it is possible to repeatedly switch between extracellular and intracellular recording by nanoscale electroporation and resealing processes. Furthermore, vertical nanopillar electrodes can detect subtle changes in action potentials induced by drugs that target ion channels.

  5. Social cohesion through football: a quasi-experimental mixed methods design to evaluate a complex health promotion program

    PubMed Central

    2010-01-01

    Social isolation and disengagement fragments local communities. Evidence indicates that refugee families are highly vulnerable to social isolation in their countries of resettlement. Research to identify approaches to best address this is needed. Football United is a program that aims to foster social inclusion and cohesion in areas with high refugee settlement in New South Wales, Australia, through skills and leadership development, mentoring, and the creation of links with local community and corporate leaders and organisations. The Social Cohesion through Football study's broad goal is to examine the implementation of a complex health promotion program, and to analyse the processes involved in program implementation. The study will consider program impact on individual health and wellbeing, social inclusion and cohesion, as well as analyse how the program by necessity interacts and adapts to context during implementation, a concept we refer to as plasticity. The proposed study will be the first prospective cohort impact study to our knowledge to assess the impact of a comprehensive integrated program using football as a vehicle for fostering social inclusion and cohesion in communities with high refugee settlement. Methods/design A quasi-experimental cohort study design with treatment partitioning involving four study sites. The study employs a 'dose response' model, comparing those with no involvement in the Football United program with those with lower or higher levels of participation. A range of qualitative and quantitative measures will be used in the study. Study participants' emotional well being, resilience, ethnic identity and other group orientation, feelings of social inclusion and belonging will be measured using a survey instrument complemented by relevant data drawn from in-depth interviews, self reporting measures and participant observation. The views of key informants from the program and the wider community will also be solicited. Discussion The complexity of the Football United program poses challenges for measurement, and requires the study design to be responsive to the dynamic nature of the program and context. Assessment of change is needed at multiple levels, drawing on mixed methods and multidisciplinary approaches in implementation and evaluation. Attention to these challenges has underpinned the design and methods in the Social Cohesion through Football study, which will use a unique and innovative combination of measures that have not been applied together previously in social inclusion/cohesion and sport and social inclusion/cohesion program research. PMID:20920361

  6. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment.

    PubMed

    O'Brien, Katie M; Upson, Kristen; Cook, Nancy R; Weinberg, Clarice R

    2016-02-01

    Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. We compared adjustment methods, including novel approaches, using simulated case-control data. Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals.
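
    The combined urinary adjustment described above can be sketched as follows: standardize the measured concentration by the ratio of observed to covariate-predicted creatinine, then also include creatinine as a covariate in the outcome regression. The simulated data, covariate set, and model forms below are assumptions for illustration, not the authors' simulation design.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Hedged sketch of the combined adjustment: (1) covariate-adjusted
# standardization of the urinary biomarker, (2) creatinine retained as a
# covariate in the case-control model. Data and model forms are assumptions.

rng = np.random.default_rng(0)
n = 2000
age = rng.uniform(20, 70, n)
hydration = rng.normal(0, 1, n)
covariates = np.column_stack([age, hydration])

log_creatinine = 0.01 * age - 0.3 * hydration + rng.normal(0, 0.2, n)
true_exposure = rng.lognormal(0, 0.5, n)
measured = true_exposure * np.exp(log_creatinine)     # dilution-dependent concentration
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.3 * np.log(true_exposure) - 1.5))))

# Step 1: standardize by the ratio of observed to covariate-predicted creatinine.
pred_log_cr = LinearRegression().fit(covariates, log_creatinine).predict(covariates)
standardized = measured / np.exp(log_creatinine - pred_log_cr)

# Step 2: outcome regression with standardized exposure plus creatinine as covariate.
X = np.column_stack([np.log(standardized), log_creatinine, covariates])
model = LogisticRegression(max_iter=1000).fit(X, outcome)
print("exposure log-odds coefficient:", round(model.coef_[0][0], 3))
```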

  7. Father involvement in Mexican origin families: Preliminary development of a culturally-informed measure

    PubMed Central

    Roubinov, Danielle S.; Luecken, Linda J.; Gonzales, Nancy A.; Crnic, Keith A.

    2015-01-01

    Objectives An increasing body of research has documented the significant influence of father involvement on children’s development and overall well-being. However, extant research has predominantly focused on middle-class Caucasian samples with little examination of fathering in ethnic minority and low-income families, particularly during the infancy period. The present study evaluated measures of early father involvement (paternal engagement, accessibility, and responsibility) that were adapted to capture important cultural values relevant to the paternal role in Mexican origin families. Methods A sample of 180 Mexican origin mothers (M age = 28.3) and 83 Mexican origin fathers (M age = 31.5) were interviewed during the perinatal period. Results Descriptive analyses indicated that Mexican origin fathers are involved in meaningful levels of direct interaction with their infant. A two-factor model of paternal responsibility was supported by factor analyses, consisting of a behavioral responsibility factor aligned with previous literature and a culturally derived positive machismo factor. Qualities of the romantic relationship, cultural orientation, and maternal employment status were related to indices of father involvement. Conclusions These preliminary results contribute to understanding of the transition to fatherhood among low-income Mexican origin men and bring attention to the demographic, social, and cultural contexts in which varying levels of father involvement may emerge. PMID:26237543

  8. Novel method for measurement of transistor gate length using energy-filtered transmission electron microscopy

    NASA Astrophysics Data System (ADS)

    Lee, Sungho; Kim, Tae-Hoon; Kang, Jonghyuk; Yang, Cheol-Woong

    2016-12-01

    As the feature size of devices continues to decrease, transmission electron microscopy (TEM) is becoming indispensable for measuring the critical dimension (CD) of structures. Semiconductors consist primarily of silicon-based materials such as silicon, silicon dioxide, and silicon nitride, and the electrons transmitted through a plan-view TEM sample provide diverse information about various overlapped silicon-based materials. This information is exceedingly complex, which makes it difficult to clarify the boundary to be measured. Therefore, we propose a simple measurement method using energy-filtered TEM (EF-TEM). A precise and effective measurement condition was obtained by determining the maximum value of the integrated area ratio of the electron energy loss spectrum at the boundary to be measured. This method employs an adjustable slit allowing only electrons with a certain energy range to pass. EF-TEM imaging showed a sharp transition at the boundary when the energy-filter’s passband centre was set at 90 eV, with a slit width of 40 eV. This was the optimum condition for the CD measurement of silicon-based materials involving silicon nitride. Electron energy loss spectroscopy (EELS) and EF-TEM images were used to verify this method, which makes it possible to measure the transistor gate length in a dynamic random access memory manufactured using 35 nm process technology. This method can be adapted to measure the CD of other non-silicon-based materials using the EELS area ratio of the boundary materials.

  9. The engagement of children with disabilities in health-related technology design processes: identifying methodology.

    PubMed

    Allsop, Matthew J; Holt, Raymond J; Levesley, Martin C; Bhakta, Bipinchandra

    2010-01-01

    This review aims to identify research methodology that is suitable for involving children with disabilities in the design of healthcare technology, such as assistive technology and rehabilitation equipment. A review of the literature included the identification of methodology that is available from domains outside of healthcare and suggested a selection of available methods. The need to involve end users within the design of healthcare technology was highlighted, with particular attention to the need for greater levels of participation from children with disabilities within all healthcare research. Issues that may arise when trying to increase such involvement included the need to consider communication via feedback and tailored information, the need to measure levels of participation occurring in current research, and caution regarding the use of proxy information. Additionally, five suitable methods were highlighted that are available for use with children with disabilities in the design of healthcare technology. The methods identified in the review need to be put into practice to establish effective and, if necessary, novel ways of designing healthcare technology when end users are children with disabilities.

  10. Measurement of charge transport through organic semiconducting devices

    NASA Astrophysics Data System (ADS)

    Klenkler, Richard A.

    2007-12-01

    In this thesis, two important and unexplored areas of organic semiconductor device physics are investigated: The first area involves determining the effect of energy barriers and intermixing at the interfaces between hole transport layers (HTLs). This effect was discerned by first establishing a method of pressure-laminating successive solution-coated HTLs together. It was found that in the range of 0.8--3.0 MPa a pressure-laminated interface between two identical HTLs causes no measurable perturbation to charge transport. By this method, 2 different HTLs can be sandwiched together to create a discrete interface, and by inserting a mixed HTL in the middle an intermixed interface between the 2 HTLs can be simulated. With these sandwiched devices, charge injection across discrete versus intermixed interfaces was compared using time-of-flight measurements. For the hole transport materials investigated, no perturbation to the overall charge transport was observed with the discrete interface; in contrast, the rate of charge transport was clearly reduced through the intermixed interface. The second area that was investigated pertains to the development of a bulk mobility measurement technique that has a higher resolution than existing methods. The approach that was used involved decoupling the charge carrier transient signal from the device charging circuit. With this approach, the RC time constant constraint that limits the resolution of existing methods is eliminated. The resulting method, termed the photoinduced electroluminescence (EL) mobility measurement technique, was then used to compare the electron mobility of the metal chelate AlQ3 to that of the novel triazine material BTB. Results showed that BTB demonstrated an order of magnitude higher mobility than AlQ3. Overall, these findings have broad implications regarding device design. The pressure-lamination method could be used, e.g., as a diagnostic tool to help in the design of multilayer xerographic photoreceptors, such as those that include an abrasion resistant overcoat. Further, the photoinduced EL technique could be used as a tool to help characterize charge flow and balance in organic light emitting devices, among others.
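
    For context, converting a time-of-flight transit time into a mobility uses the standard relation mu = d^2 / (V * t_T), where d is the film thickness, V the applied bias, and t_T the transit time. The sketch below applies this relation with illustrative numbers that are not taken from the thesis.

```python
# Hedged sketch of the standard time-of-flight relation used to convert a
# measured transit time into a drift mobility, mu = d**2 / (V * t_transit).
# Numbers are illustrative, not taken from the thesis.

def tof_mobility(thickness_cm, bias_v, transit_time_s):
    """Drift mobility in cm^2/(V*s) from a time-of-flight transit time."""
    return thickness_cm ** 2 / (bias_v * transit_time_s)

# Example: a 10-um film under a 50 V bias with a 2-us transit time.
print(f"{tof_mobility(10e-4, 50.0, 2e-6):.2e} cm^2/Vs")  # ~1e-2 cm^2/Vs
```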

  11. Simultaneous determination of rosuvastatin and propranolol in their binary mixture by synchronous spectrofluorimetry

    NASA Astrophysics Data System (ADS)

    El-Abasawi, Nasr M.; Attia, Khalid A. M.; Abo-serie, Ahmad A. M.; Morshedy, Samir; Abdel-Fattah, Ashraf

    2018-06-01

    Simultaneous determination of rosuvastatin calcium and propranolol hydrochloride using first-derivative synchronous spectrofluorimetry was described. This method involves measuring the synchronous fluorescence of both drugs in ethanol using Δλ = 60 nm; the first derivative was then recorded and the peak amplitudes were measured at 350 and 374 nm for rosuvastatin calcium and propranolol hydrochloride, respectively. Under the optimum conditions, the linear ranges of rosuvastatin calcium and propranolol hydrochloride were 0.2-2 μg/mL and 0.1-1 μg/mL, respectively. The method was used for quantitative analysis of the drugs in raw materials and pharmaceutical dosage form. The validity of the proposed method was assessed according to International Conference on Harmonisation (ICH) guidelines.
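
    The signal-processing portion of the method, taking the first derivative of the synchronous spectrum and reading the peak amplitudes at 350 and 374 nm, can be sketched as below. The Gaussian bands are synthetic stand-ins for the real spectra, and the wavelength grid is an assumption.

```python
import numpy as np

# Hedged sketch of the data-processing step: differentiate a synchronous
# fluorescence spectrum and read amplitudes at the analytical wavelengths
# (350 nm and 374 nm per the abstract). The bands below are synthetic.

wavelengths = np.arange(300.0, 450.0, 0.5)   # nm

def gaussian(center, width, height):
    return height * np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

synchronous = gaussian(340, 12, 1.0) + gaussian(385, 15, 0.8)   # two overlapping bands
first_derivative = np.gradient(synchronous, wavelengths)

def amplitude_at(wl):
    """First-derivative amplitude at the grid point nearest to wl (nm)."""
    return first_derivative[np.argmin(np.abs(wavelengths - wl))]

print("1D amplitude at 350 nm:", round(amplitude_at(350.0), 4))
print("1D amplitude at 374 nm:", round(amplitude_at(374.0), 4))
```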

  12. A method for measuring sulfide toxicity in the nematode Caenorhabditis elegans.

    PubMed

    Livshits, Leonid; Gross, Einav

    2017-01-01

    Cysteine catabolism by gut microbiota produces high levels of sulfide. Excessive sulfide can interfere with colon function, and therefore may be involved in the etiology and risk of relapse of ulcerative colitis, an inflammatory bowel disease affecting millions of people worldwide. Therefore, it is crucial to understand how cells/animals regulate the detoxification of sulfide generated by bacterial cysteine catabolism in the gut. Here we describe a simple and cost-effective way to explore the mechanism of sulfide toxicity in the nematode Caenorhabditis elegans (C. elegans). • A rapid, cost-effective method to quantify and study sulfide tolerance in C. elegans and other free-living nematodes. • A cost-effective method to measure the concentration of sulfide in the inverted plate assay.

  13. Isosteric heat of hydrogen adsorption on MOFs: comparison between adsorption calorimetry, sorption isosteric method, and analytical models

    NASA Astrophysics Data System (ADS)

    Kloutse, A. F.; Zacharia, R.; Cossement, D.; Chahine, R.; Balderas-Xicohténcatl, R.; Oh, H.; Streppel, B.; Schlichtenmayer, M.; Hirscher, M.

    2015-12-01

    Isosteric heat of adsorption is an important parameter required to describe the thermal performance of adsorptive storage systems. It is most frequently calculated from adsorption isotherms measured over wide ranges of pressure and temperature, using the so-called adsorption isosteric method. Direct quantitative estimation of isosteric heats, on the other hand, is possible using the coupled calorimetric-volumetric method, which involves simultaneous measurement of heat and adsorption. In this work, we compare the isosteric heats of hydrogen adsorption on microporous materials measured by both methods. Furthermore, the experimental data are compared with the isosteric heats obtained using the modified Dubinin-Astakhov, Tóth, and Unilan adsorption analytical models to establish the reliability and limitations of simpler methods and assumptions. To this end, we measure the hydrogen isosteric heats on five prototypical metal-organic frameworks: MOF-5, Cu-BTC, Fe-BTC, MIL-53, and MOF-177 using both experimental methods. For all MOFs, we find a very good agreement between the isosteric heats measured using the calorimetric and isosteric methods throughout the range of loading studied. The models' predictions, on the other hand, deviate from both experiments depending on the MOF studied and the range of loading. At low loadings of less than 5 mol kg-1, the isosteric heat of hydrogen adsorption decreases in the order Cu-BTC > MIL-53 > MOF-5 > Fe-BTC > MOF-177. The order of isosteric heats is coherent with the strength of hydrogen interaction revealed by previous thermal desorption spectroscopy measurements.
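
    The adsorption isosteric method mentioned above rests on a Clausius-Clapeyron-type relation: at constant loading, q_st = -R d(ln P)/d(1/T), so the isosteric heat follows from the slope of ln P versus 1/T across isotherms at several temperatures. The sketch below applies this relation to hypothetical pressures interpolated at a single loading; the numbers are not from the measurements reported here.

```python
import numpy as np

# Hedged sketch of the adsorption isosteric method: at fixed loading, fit
# ln(P) against 1/T across isotherms at several temperatures; the isosteric
# heat follows from q_st = -R * d(ln P)/d(1/T). Pressures below are
# hypothetical values at a single loading, not data from this study.

R = 8.314  # J/(mol K)

def isosteric_heat(temperatures_K, pressures_at_fixed_loading):
    inv_T = 1.0 / np.asarray(temperatures_K, dtype=float)
    ln_P = np.log(pressures_at_fixed_loading)
    slope = np.polyfit(inv_T, ln_P, 1)[0]      # d(ln P)/d(1/T)
    return -R * slope                           # J/mol

# Usage: hypothetical equilibrium pressures (bar) at a loading of 2 mol/kg.
T = [77.0, 87.0, 97.0]
P = [0.10, 0.32, 0.82]
print(f"q_st ~ {isosteric_heat(T, P) / 1000:.1f} kJ/mol")
```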

  14. TELEPHONIC PRESENTATION: MERCURY MEASUREMENTS FOR SOLIDS MADE RAPIDLY, SIMPLY, AND INEXPENSIVELY

    EPA Science Inventory

    While traditional methods for determining mercury in solid samples involve the use of aggressive chemicals to dissolve the matrix and the use of other chemicals to properly reduce the mercury to the volatile elemental form, pyrolysis-based analyzers can be used by directly weighi...

  15. Associations between Sleep Characteristics, Seasonal Depressive Symptoms, Lifestyle, and ADHD Symptoms in Adults

    ERIC Educational Resources Information Center

    Bijlenga, Denise; van der Heijden, Kristiaan B.; Breuk, Minda; van Someren, Eus J. W.; Lie, Maria E. H.; Boonstra, A. Marije; Swaab, Hanna J. T.; Kooij, J. J. Sandra

    2013-01-01

    Objective: The authors explored associations between ADHD symptoms, seasonal depressive symptoms, lifestyle, and health. Method: Adult ADHD patients ("n" = 202) and controls ("n" = 189) completed the ASESA questionnaire involving lifestyle, eating pattern, and physical and psychological health, and validated measures on ADHD…

  16. Evaluating the Fine Arts Program at the Center for Excellence in Disabilities

    ERIC Educational Resources Information Center

    Schlosnagle, Leo; McBean, Amanda L.; Cutlip, Milisa; Panzironi, Helen; Jarmolowicz, David P.

    2014-01-01

    Art programs for people with disabilities may encourage creativity, promote engagement, emphasize inclusion, and extend access and opportunities for community involvement. This mixed methods study utilized quantitative and qualitative data, repeated measures, action research, and stakeholder collaboration to develop and implement an evaluation…

  17. Teaching with Spreadsheets: An Example from Heat Transfer.

    ERIC Educational Resources Information Center

    Drago, Peter

    1993-01-01

    Provides an activity which measures the heat transfer through an insulated cylindrical tank, allowing the student to gain a better knowledge of both the physics involved and the working of spreadsheets. Provides both a spreadsheet solution and a maximum-minimum method of solution for the problem. (MVL)

  18. Single charging events on colloidal particles in a nonpolar liquid with surfactant

    NASA Astrophysics Data System (ADS)

    Schreuer, Caspar; Vandewiele, Stijn; Brans, Toon; Strubbe, Filip; Neyts, Kristiaan; Beunis, Filip

    2018-01-01

    Electrical charging of colloidal particles in nonpolar liquids due to surfactant additives is investigated intensively, motivated by its importance in a variety of applications. Most methods rely on average electrophoretic mobility measurements of many particles, which provide only indirect information on the charging mechanism. In the present work, we present a method that allows us to obtain direct information on the charging mechanism, by measuring the charge fluctuations on individual particles with a precision higher than the elementary charge using optical trapping electrophoresis. We demonstrate the capabilities of the method by studying the influence of added surfactant OLOA 11000 on the charging of single colloidal PMMA particles in dodecane. The particle charge and the frequency of charging events are investigated both below and above the critical micelle concentration (CMC) and with or without applying a DC offset voltage. It is found that at least two separate charging mechanisms are present below the critical micelle concentration. One mechanism is a process in which negatively charged ionic molecules are stripped from the particle. An increase in the charging frequency with increased surfactant concentration suggests a second mechanism that involves single surfactant molecules. Above the CMC, neutral inverse micelles can also be involved in the charging process.

  19. Tissue-Informative Mechanism for Wearable Non-invasive Continuous Blood Pressure Monitoring

    NASA Astrophysics Data System (ADS)

    Woo, Sung Hun; Choi, Yun Young; Kim, Dae Jung; Bien, Franklin; Kim, Jae Joon

    2014-10-01

    Accurate continuous direct measurement of blood pressure is currently available through invasive methods via intravascular needles, and is mostly limited to use during surgical procedures or in the intensive care unit (ICU). Non-invasive methods, mostly based on auscultation or cuff oscillometric principles, do provide relatively accurate measurement of blood pressure. However, they mostly involve physical inconveniences such as pressure or stress on the human body. Here, we introduce a new non-invasive mechanism of tissue-informative measurement, in which an experimental phenomenon called subcutaneous tissue pressure equilibrium is revealed and applied to the detection of absolute blood pressure. A prototype was experimentally verified to provide an absolute blood pressure measurement by wearing a watch-type measurement module that does not cause any discomfort. This work is expected to contribute significantly to the advancement of continuous non-invasive mobile devices for 24/7 daily-life ambulatory blood-pressure monitoring.

  20. Measurement of lung volumes from supine portable chest radiographs.

    PubMed

    Ries, A L; Clausen, J L; Friedman, P J

    1979-12-01

    Lung volumes in supine nonambulatory patients are physiological parameters often difficult to measure with current techniques (plethysmograph, gas dilution). Existing radiographic methods for measuring lung volumes require standard upright chest radiographs. Accordingly, in 31 normal supine adults, we determined helium-dilution functional residual and total lung capacities and measured planimetric lung field areas (LFA) from corresponding portable anteroposterior and lateral radiographs. Low radiation dose methods, which delivered less than 10% of that from standard portable X-ray technique, were utilized. Correlation between lung volume and radiographic LFA was highly significant (r = 0.96, SEE = 10.6%). Multiple-step regressions using height and chest diameter correction factors reduced variance, but weight and radiographic magnification factors did not. In 17 additional subjects studied for validation, the regression equations accurately predicted radiographic lung volume. Thus, this technique can provide accurate and rapid measurement of lung volume in studies involving supine patients.
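
    The regression step described above, predicting helium-dilution lung volume from planimetric lung field area with a height correction, can be sketched as below. The synthetic data and fitted coefficients are assumptions for illustration and are not the regression equations reported in the paper.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hedged sketch of the regression step: predict helium-dilution lung volume
# from planimetric lung field area (LFA), with height as an added correction.
# Data are synthetic; fitted coefficients are not those of the paper.

rng = np.random.default_rng(0)
n = 31
height_cm = rng.uniform(150, 190, n)
lfa_cm2 = rng.uniform(400, 800, n)                    # AP + lateral planimetric area
volume_L = 0.006 * lfa_cm2 + 0.01 * height_cm + rng.normal(0, 0.3, n)

X = np.column_stack([lfa_cm2, height_cm])
model = LinearRegression().fit(X, volume_L)

# Apply to a new supine patient's radiographic measurements.
print(model.predict([[620.0, 172.0]]))                # predicted lung volume, litres
```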

  1. Time-domain system for identification of the natural resonant frequencies of aircraft relevant to electromagnetic compatibility testing

    NASA Astrophysics Data System (ADS)

    Adams, J. W.; Ondrejka, A. R.; Medley, H. W.

    1987-11-01

    A method of measuring the natural resonant frequencies of a structure is described. The measurement involves irradiating the structure, in this case a helicopter, with an impulsive electromagnetic (EM) field and receiving the echo reflected from the helicopter. Resonances are identified by using a mathematical algorithm based on Prony's method to operate on the digitized reflected signal. The measurement system consists of special TEM horns, pulse generators, a time-domain system, and Prony's algorithm. The frequency range covered is 5 MHz to 250 MHz. This range is determined by antenna and circuit characteristics. The measurement system is demonstrated, and measured data from several different helicopters are presented in different forms. These different forms are needed to determine which of the resonant frequencies are real and which are false. The false frequencies are byproducts of Prony's algorithm.
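
    The Prony step referred to above can be sketched in its textbook form: fit a linear-prediction model to the sampled echo, take the roots of the characteristic polynomial, and convert them to damped resonant frequencies. The implementation below is a minimal sketch under those assumptions, not the authors' algorithm; note that, as the abstract states, spurious (false) frequencies appear alongside the real ones.

```python
import numpy as np

# Hedged sketch of the textbook Prony step: linear prediction of order p on the
# sampled echo, roots of the characteristic polynomial, then conversion to
# damped resonant frequencies. Not the authors' implementation.

def prony_frequencies(signal, sample_rate, order):
    x = np.asarray(signal, dtype=float)
    # Linear prediction: x[n] = -sum_k a[k] * x[n-k], solved by least squares.
    rows = len(x) - order
    A = np.column_stack([x[order - k - 1: order - k - 1 + rows] for k in range(order)])
    b = -x[order: order + rows]
    a = np.linalg.lstsq(A, b, rcond=None)[0]
    roots = np.roots(np.concatenate(([1.0], a)))
    freqs = np.angle(roots) * sample_rate / (2 * np.pi)      # Hz
    damping = np.log(np.abs(roots)) * sample_rate            # 1/s (negative = decaying)
    keep = freqs > 0                                          # one of each conjugate pair
    return freqs[keep], damping[keep]

# Usage: synthetic echo with damped resonances at 30 MHz and 80 MHz.
fs = 1e9
t = np.arange(512) / fs
echo = (np.exp(-2e6 * t) * np.cos(2 * np.pi * 30e6 * t)
        + 0.6 * np.exp(-4e6 * t) * np.cos(2 * np.pi * 80e6 * t))
f_hz, _ = prony_frequencies(echo, fs, order=8)
print(np.sort(f_hz) / 1e6)   # MHz; spurious roots accompany the true 30 and 80
```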

  2. Quantifying pCO2 in biological ocean acidification experiments: A comparison of four methods.

    PubMed

    Watson, Sue-Ann; Fabricius, Katharina E; Munday, Philip L

    2017-01-01

    Quantifying the amount of carbon dioxide (CO2) in seawater is an essential component of ocean acidification research; however, equipment for measuring CO2 directly can be costly and involve complex, bulky apparatus. Consequently, other parameters of the carbonate system, such as pH and total alkalinity (AT), are often measured and used to calculate the partial pressure of CO2 (pCO2) in seawater, especially in biological CO2-manipulation studies, including large ecological experiments and those conducted at field sites. Here we compare four methods of pCO2 determination that have been used in biological ocean acidification experiments: 1) Versatile INstrument for the Determination of Total inorganic carbon and titration Alkalinity (VINDTA) measurement of dissolved inorganic carbon (CT) and AT, 2) spectrophotometric measurement of pHT and AT, 3) electrode measurement of pHNBS and AT, and 4) the direct measurement of CO2 using a portable CO2 equilibrator with a non-dispersive infrared (NDIR) gas analyser. In this study, we found these four methods can produce very similar pCO2 estimates, and the three methods often suited to field-based application (spectrophotometric pHT, electrode pHNBS and CO2 equilibrator) produced estimated measurement uncertainties of 3.5-4.6% for pCO2. Importantly, we are not advocating the replacement of established methods to measure seawater carbonate chemistry, particularly for high-accuracy quantification of carbonate parameters in seawater such as open ocean chemistry, for real-time measures of ocean change, nor for the measurement of small changes in seawater pCO2. However, for biological CO2-manipulation experiments measuring differences of over 100 μatm pCO2 among treatments, we find the four methods described here can produce similar results with careful use.

  3. Quantifying pCO2 in biological ocean acidification experiments: A comparison of four methods

    PubMed Central

    Fabricius, Katharina E.; Munday, Philip L.

    2017-01-01

    Quantifying the amount of carbon dioxide (CO2) in seawater is an essential component of ocean acidification research; however, equipment for measuring CO2 directly can be costly and involve complex, bulky apparatus. Consequently, other parameters of the carbonate system, such as pH and total alkalinity (AT), are often measured and used to calculate the partial pressure of CO2 (pCO2) in seawater, especially in biological CO2-manipulation studies, including large ecological experiments and those conducted at field sites. Here we compare four methods of pCO2 determination that have been used in biological ocean acidification experiments: 1) Versatile INstrument for the Determination of Total inorganic carbon and titration Alkalinity (VINDTA) measurement of dissolved inorganic carbon (CT) and AT, 2) spectrophotometric measurement of pHT and AT, 3) electrode measurement of pHNBS and AT, and 4) the direct measurement of CO2 using a portable CO2 equilibrator with a non-dispersive infrared (NDIR) gas analyser. In this study, we found these four methods can produce very similar pCO2 estimates, and the three methods often suited to field-based application (spectrophotometric pHT, electrode pHNBS and CO2 equilibrator) produced estimated measurement uncertainties of 3.5–4.6% for pCO2. Importantly, we are not advocating the replacement of established methods to measure seawater carbonate chemistry, particularly for high-accuracy quantification of carbonate parameters in seawater such as open ocean chemistry, for real-time measures of ocean change, nor for the measurement of small changes in seawater pCO2. However, for biological CO2-manipulation experiments measuring differences of over 100 μatm pCO2 among treatments, we find the four methods described here can produce similar results with careful use. PMID:28957378

  4. Absolute cavity pyrgeometer

    DOEpatents

    Reda, Ibrahim

    2013-10-29

    Implementations of the present disclosure involve an apparatus and method to measure the long-wave irradiance of the atmosphere or long-wave source. The apparatus may involve a thermopile, a concentrator and temperature controller. The incoming long-wave irradiance may be reflected from the concentrator to a thermopile receiver located at the bottom of the concentrator to receive the reflected long-wave irradiance. In addition, the thermopile may be thermally connected to a temperature controller to control the device temperature. Through use of the apparatus, the long-wave irradiance of the atmosphere may be calculated from several measurements provided by the apparatus. In addition, the apparatus may provide an international standard of pyrgeometers' calibration that is traceable back to the International System of Units (SI) rather than to a blackbody atmospheric simulator.

  5. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds

    USGS Publications Warehouse

    Crowell, Sara E.; Wells-Berlin, Alicia M.; Therrien, Ronald E.; Yannuzzi, Sally E.; Carr, Catherine E.

    2016-01-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000−3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  6. Method and apparatus for eliminating luminol interference material

    NASA Technical Reports Server (NTRS)

    Jeffers, E. L.; Thomas, R. R. (Inventor)

    1979-01-01

    A method and apparatus are disclosed for removing, from a fluid sample, porphyrins that are unrelated to the number of bacteria present, prior to combining the sample with luminol reagent to produce a light reaction. The method involves pre-incubating the sample with a dilute concentration of hydrogen peroxide, which inactivates the interfering soluble porphyrins. Further, the light measurement is delayed for a predetermined period after the hydrogen peroxide-treated water sample is combined with the luminol reagent; because the luminescence produced by the reaction of the luminol reagent with ions present in the solution is short lived, it will have died out, so that only porphyrins within the bacteria, released by rupturing the cells with the sodium hydroxide in the luminol reagent, are measured. The measurement thus obtained can then be related to the concentration of live and dead bacteria in the fluid sample.

  7. In-air hearing of a diving duck: A comparison of psychoacoustic and auditory brainstem response thresholds.

    PubMed

    Crowell, Sara E; Wells-Berlin, Alicia M; Therrien, Ronald E; Yannuzzi, Sally E; Carr, Catherine E

    2016-05-01

    Auditory sensitivity was measured in a species of diving duck that is not often kept in captivity, the lesser scaup. Behavioral (psychoacoustics) and electrophysiological [the auditory brainstem response (ABR)] methods were used to measure in-air auditory sensitivity, and the resulting audiograms were compared. Both approaches yielded audiograms with similar U-shapes and regions of greatest sensitivity (2000-3000 Hz). However, ABR thresholds were higher than psychoacoustic thresholds at all frequencies. This difference was least at the highest frequency tested using both methods (5700 Hz) and greatest at 1000 Hz, where the ABR threshold was 26.8 dB higher than the behavioral measure of threshold. This difference is commonly reported in studies involving many different species. These results highlight the usefulness of each method, depending on the testing conditions and availability of the animals.

  8. Frost sensor for use in defrost controls for refrigeration

    DOEpatents

    French, Patrick D.; Butz, James R.; Veatch, Bradley D.; O'Connor, Michael W.

    2002-01-01

    An apparatus and method for measuring the total thermal resistance to heat flow from the air to the evaporative cooler fins of a refrigeration system. The apparatus is a frost sensor that measures the reduction in heat flow due to the added thermal resistance of ice (reduced conduction) as well as the reduction in heat flow due to the blockage of airflow (reduced convection) from excessive ice formation. The sensor triggers a defrost cycle when needed, instead of on a timed interval. The invention is also a method for control of frost in a system that transfers heat from air to a refrigerant along a thermal path. The method involves measuring the thermal conductivity of the thermal path from the air to the refrigerant, recognizing a reduction in thermal conductivity due to the thermal insulation effect of the frost and due to the loss of airflow from excessive ice formation; and controlling the defrosting of the system.

  9. Quantitating Iron in Serum Ferritin by Use of ICP-MS

    NASA Technical Reports Server (NTRS)

    Smith, Scott M.; Gillman, Patricia L.

    2003-01-01

    A laboratory method has been devised to enable measurement of the concentration of iron bound in ferritin from small samples of blood (serum). Derived partly from a prior method that depends on large samples of blood, this method involves the use of an inductively-coupled-plasma mass spectrometer (ICP-MS). Ferritin is a complex of iron with the protein apoferritin. Heretofore, measurements of the concentration of serum ferritin (as distinguished from direct measurements of the concentration of iron in serum ferritin) have been used to assess iron stores in humans. Low levels of serum ferritin could indicate the first stage of iron depletion. High levels of serum ferritin could indicate high levels of iron (for example, in connection with hereditary hemochromatosis, an iron-overload illness that is characterized by progressive organ damage and can be fatal). However, the picture is complicated: A high level of serum ferritin could also indicate stress and/or inflammation instead of (or in addition to) iron overload, and low serum iron concentration could indicate inflammation rather than iron deficiency. Only when concentrations of both serum iron and serum ferritin increase and decrease together can the patient's iron status be assessed accurately. Hence, in enabling accurate measurement of the iron content of serum ferritin, the present method can improve the diagnosis of the patient's iron status. The prior method of measuring the concentration of iron involves the use of an atomic-absorption spectrophotometer with a graphite furnace. The present method incorporates a modified version of the sample-preparation process of the prior method. First, ferritin is isolated; more specifically, it is immobilized by immunoprecipitation with rabbit antihuman polyclonal antibody bound to agarose beads. The ferritin is then separated from other iron-containing proteins and free iron by a series of centrifugation and wash steps. Next, the ferritin is digested with nitric acid to extract its iron content. Finally, a micronebulizer is used to inject the sample of the product of the digestion into the ICP-MS for analysis of its iron content. The sensitivity of the ICP-MS is high enough to enable it to characterize samples smaller than those required in the prior method (samples can be 0.15 to 0.60 mL).

  10. A technique for fast and accurate measurement of hand volumes using Archimedes' principle.

    PubMed

    Hughes, S; Lau, J

    2008-03-01

    A new technique for measuring hand volumes using Archimedes' principle is described. The technique involves the immersion of a hand in a water container placed on an electronic balance. The volume is given by the change in weight divided by the density of water. This technique was compared with the more conventional technique of immersing an object in a container with an overflow spout and collecting and weighing the volume of overflow water. The hand volume of two subjects was measured. Hand volumes were 494 +/- 6 ml and 312 +/- 7 ml for the immersion method and 476 +/- 14 ml and 302 +/- 8 ml for the overflow method for the two subjects respectively. Using plastic test objects, the mean difference between the actual and measured volume was -0.3% and 2.0% for the immersion and overflow techniques respectively. This study shows that hand volumes can be obtained more quickly with the immersion technique than with the overflow method. The technique could find an application in clinics where frequent hand-volume measurements are required.
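
    The volume calculation itself is a single division; the numbers below are assumed example values, not the subjects' data.

    ```python
    # Immersion method: the balance reading rises by the mass of displaced water,
    # so hand volume = (change in balance reading) / (density of water).
    rho_water = 0.998          # g/mL near room temperature (assumed)
    delta_mass_g = 493.0       # hypothetical change in balance reading, g
    hand_volume_ml = delta_mass_g / rho_water
    print(f"Hand volume ~ {hand_volume_ml:.0f} mL")
    ```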

  11. Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review.

    PubMed

    Chung, Stephanie T; Chacko, Shaji K; Sunehag, Agneta L; Haymond, Morey W

    2015-12-01

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotope methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions.

  12. Comparison of Five System Identification Algorithms for Rotorcraft Higher Harmonic Control

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    1998-01-01

    This report presents an analysis and performance comparison of five system identification algorithms. The methods are presented in the context of identifying a frequency-domain transfer matrix for the higher harmonic control (HHC) of helicopter vibration. The five system identification algorithms include three previously proposed methods: (1) the weighted-least-squares-error approach (in moving-block format), (2) the Kalman filter method, and (3) the least-mean-squares (LMS) filter method. In addition, there are two new ones: (4) a generalized Kalman filter method and (5) a generalized LMS filter method. The generalized Kalman filter method and the generalized LMS filter method were derived as extensions of the classic methods to permit identification using more than one measurement per identification cycle. Simulation results are presented for conditions ranging from the ideal case of a stationary transfer matrix and no measurement noise to the more complex cases involving both measurement noise and transfer-matrix variation. Both open-loop identification and closed-loop identification were simulated. Closed-loop identification was more challenging than open-loop identification because the signal-to-noise ratio decreased as the vibration was reduced. The closed-loop simulation considered both local-model identification (with measured vibration feedback) and global-model identification (with feedback of the identified uncontrolled vibration). The algorithms were evaluated in terms of their accuracy, stability, convergence properties, computation speed, and relative ease of implementation.
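
    As a rough illustration of the recursive identification these algorithms perform, the sketch below gives a generic normalized-LMS update for a transfer-matrix estimate. It assumes the usual quasi-static HHC model z = T·θ + noise and is not taken from the report's equations.

    ```python
    import numpy as np

    def lms_update(T_hat, theta, z_meas, mu=0.5):
        """One normalized-LMS step: nudge the transfer-matrix estimate T_hat so
        that T_hat @ theta moves toward the measured vibration vector z_meas.
        mu is an assumed adaptation gain."""
        error = z_meas - T_hat @ theta                              # prediction error
        return T_hat + mu * np.outer(error, theta) / (theta @ theta + 1e-12)
    ```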

  13. Randomized controlled trial of parent-enhanced CBT compared with individual CBT for obsessive-compulsive disorder in young people.

    PubMed

    Reynolds, Shirley A; Clark, Sarah; Smith, Holly; Langdon, Peter E; Payne, Ruth; Bowers, Gemma; Norton, Elisabeth; McIlwham, Harriet

    2013-12-01

    Obsessive-compulsive disorder (OCD) in young people can be effectively treated with Cognitive Behavior Therapy (CBT). Practice guidelines in the United Kingdom recommend that CBT be delivered with parental or family involvement; however, there is no evidence from randomized trials that this enhances effectiveness. The aim of this trial was to assess if CBT with high parental involvement was more effective than CBT with low parental involvement (individual CBT) in reducing symptoms of OCD. Fifty young people ages 12-17 years with OCD were randomly allocated to individual CBT or parent-enhanced CBT. In parent-enhanced CBT parents attended all treatment sessions; in individual CBT, parents attended only Sessions 1, 7, and the final session. Participants received up to 14 sessions of CBT. Data were analyzed using intent-to-treat and per-protocol methods. The primary outcome measure was the Children's Yale-Brown Obsessive Compulsion Scale (Scahill et al., 1997). Both forms of CBT significantly reduced symptoms of OCD and anxiety. Change in OCD symptoms was maintained at 6 months. Per-protocol analysis suggested that parent-enhanced CBT may be associated with significantly larger reductions in anxiety symptoms. High and low parental involvement in CBT for OCD in young people were both effective, and there was no evidence that 1 method of delivery was superior on the primary outcome measure. However, this study was small. Future trials should be adequately powered and examine interactions with the age of the young person and comorbid anxiety disorders.

  14. Flip-flop method: A new T1-weighted flow-MRI for plants studies.

    PubMed

    Buy, Simon; Le Floch, Simon; Tang, Ning; Sidiboulenouar, Rahima; Zanca, Michel; Canadas, Patrick; Nativel, Eric; Cardoso, Maida; Alibert, Eric; Dupont, Guillaume; Ambard, Dominique; Maurel, Christophe; Verdeil, Jean-Luc; Bertin, Nadia; Goze-Bac, Christophe; Coillot, Christophe

    2018-01-01

    Climate warming implies increased stress on plants (drought and torrential rainfall). Understanding plant behavior in this context therefore takes on major importance, and sap flow measurement remains a key issue in plant science. Magnetic Resonance Imaging (MRI), well known as a powerful tool for assessing water content, can also be used to measure moving water. We describe a novel flow-MRI method that takes advantage of inflow slice sensitivity. The method exploits slice selectivity in the context of a multi-slice spin-echo sequence. Two sequences are performed such that a given slice is consecutively inflow- and outflow-sensitive, offering the possibility of performing slow-flow-sensitive imaging in a quite straightforward way. The method's potential is demonstrated by imaging both a slow flow on a test bench (as low as 10 μm s-1) and the Poiseuille profile of xylem sap flow velocity in the xylem tissues of a tomato plant stem.

  15. The study of frequency-scan photothermal reflectance technique for thermal diffusivity measurement

    DOE PAGES

    Hua, Zilong; Ban, Heng; Hurley, David H.

    2015-05-05

    A frequency scan photothermal reflectance technique to measure thermal diffusivity of bulk samples is studied in this manuscript. Similar to general photothermal reflectance methods, an intensity-modulated heating laser and a constant intensity probe laser are used to determine the surface temperature response under sinusoidal heating. The approach involves fixing the distance between the heating and probe laser spots, recording the phase lag of reflected probe laser intensity with respect to the heating laser frequency modulation, and extracting thermal diffusivity using the phase lag versus (frequency)^1/2 relation. The experimental validation is performed on three samples (SiO2, CaF2 and Ge), which have a wide range of thermal diffusivities. The measured thermal diffusivity values agree closely with literature values. Lastly, compared to the commonly used spatial scan method, the experimental setup and operation of the frequency scan method are simplified, and the uncertainty level is equal to or smaller than that of the spatial scan method.
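
    The "phase lag versus (frequency)^1/2" reduction can be summarized in a few lines. The sketch below assumes the standard one-dimensional thermal-wave model (phase lag = offset + r·sqrt(πf/α) for spot separation r) and is not the authors' fitting code.

    ```python
    import numpy as np

    def diffusivity_from_frequency_scan(freqs_hz, phase_lag_rad, offset_m):
        """Fit the unwrapped phase lag against sqrt(frequency); for a fixed
        pump-probe spot separation offset_m, the slope m = r*sqrt(pi/alpha),
        so alpha = pi * r**2 / m**2 (thermal diffusivity in m^2/s)."""
        slope, _intercept = np.polyfit(np.sqrt(np.asarray(freqs_hz, dtype=float)),
                                       np.asarray(phase_lag_rad, dtype=float), 1)
        return np.pi * offset_m**2 / slope**2
    ```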

  16. The study of frequency-scan photothermal reflectance technique for thermal diffusivity measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hua, Zilong; Ban, Heng; Hurley, David H.

    A frequency scan photothermal reflectance technique to measure thermal diffusivity of bulk samples is studied in this manuscript. Similar to general photothermal reflectance methods, an intensity-modulated heating laser and a constant intensity probe laser are used to determine the surface temperature response under sinusoidal heating. The approach involves fixing the distance between the heating and probe laser spots, recording the phase lag of reflected probe laser intensity with respect to the heating laser frequency modulation, and extracting thermal diffusivity using the phase lag versus (frequency)^1/2 relation. The experimental validation is performed on three samples (SiO2, CaF2 and Ge), which have a wide range of thermal diffusivities. The measured thermal diffusivity values agree closely with literature values. Lastly, compared to the commonly used spatial scan method, the experimental setup and operation of the frequency scan method are simplified, and the uncertainty level is equal to or smaller than that of the spatial scan method.

  17. L2 Milestone: Neutron Capture Cross Sections from Surrogate (p, d) Measurements: Determination of the Unknown 87Y(n, g) Cross Section and Assessment of the Method Via the 90Zr(n, g) Benchmark Case: Theory Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Escher, J. E.

    Cross sections for compound-nuclear reactions involving unstable targets are important for many applications, but can often not be measured directly. Here we describe a method for extracting cross sections for neutron capture on unstable isotopes from indirect (surrogate) measurements. The surrogate reaction, which produces the compound nucleus of interest, has to be described and the decay of the nucleus has to be modeled. We outline the approach for one-neutron pickup and report on the determination of the 90Zr(n,γ) reaction from surrogate 92Zr(p,d) data, which is compared to the directly measured capture cross section and thus provides a benchmark for the method. We then apply the method to determine the 87Y(n,γ) cross section, which has not been measured directly. The work was carried out in the context of an LLNL L2 Milestone. This report addresses the theory aspects of the milestone. A complementary document summarizes the experimental efforts [1].

  18. Measurement of vibration using phase only correlation technique

    NASA Astrophysics Data System (ADS)

    Balachandar, S.; Vipin, K.

    2017-08-01

    A novel method for the measurement of vibration is proposed and demonstrated. The experiment is based on laser triangulation and consists of a line laser, the object under test, and a high-speed camera remotely controlled by software. It involves launching a line-laser probe beam perpendicular to the axis of the vibrating object; the reflected probe beam is recorded by the high-speed camera. The dynamic position of the line laser in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase correlation technique, the maximum distance travelled by the probe beam in the CCD plane is measured in pixels using MATLAB, and the actual displacement of the object in mm is obtained by calibration. From the displacement-versus-time data, other vibration-associated quantities such as acceleration, velocity, and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz, and they closely match theoretical values. The advantage of the proposed method is that it is non-destructive, and using the phase correlation algorithm, subpixel displacement in the CCD plane can be measured with high accuracy.
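
    The phase-correlation step can be sketched compactly. The snippet below is a generic NumPy version (the authors used MATLAB) and returns only the integer-pixel shift; the subpixel refinement mentioned in the abstract would interpolate around the correlation peak.

    ```python
    import numpy as np

    def phase_correlation_shift(frame_a, frame_b):
        """Estimate the (row, col) displacement between two camera frames from
        the peak of the phase-only (normalized cross-power) correlation."""
        Fa, Fb = np.fft.fft2(frame_a), np.fft.fft2(frame_b)
        cross = Fa * np.conj(Fb)
        cross /= np.abs(cross) + 1e-12              # keep phase information only
        corr = np.abs(np.fft.ifft2(cross))
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # indices beyond N/2 correspond to negative shifts
        return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
    ```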

  19. Method and apparatus for measuring irradiated fuel profiles

    DOEpatents

    Lee, David M.

    1982-01-01

    A new apparatus is used to substantially instantaneously obtain a profile of an object, for example a spent fuel assembly, which profile (when normalized) has unexpectedly been found to be substantially identical to the normalized profile of the burnup monitor Cs-137 obtained with a germanium detector. That profile can be used without normalization in a new method of identifying and monitoring in order to determine for example whether any of the fuel has been removed. Alternatively, two other new methods involve calibrating that profile so as to obtain a determination of fuel burnup (which is important for complying with safeguards requirements, for utilizing fuel to an optimal extent, and for storing spent fuel in a minimal amount of space). Using either of these two methods of determining burnup, one can reduce the required measurement time significantly (by more than an order of magnitude) over existing methods, yet retain equal or only slightly reduced accuracy.

  20. Measurement of surface tension by sessile drop tensiometer with superoleophobic surface

    NASA Astrophysics Data System (ADS)

    Kwak, Wonshik; Park, Jun Kwon; Yoon, Jinsung; Lee, Sanghyun; Hwang, Woonbong

    2018-03-01

    A sessile drop tensiometer provides a simple and efficient method of determining the surface tension of various liquids. The technique involves obtaining the shape of an axisymmetric liquid droplet and iterative fitting of the Young-Laplace equation, which balances the gravitational deformation of the drop. Since the advent of high quality digital cameras and desktop computers, this process has been automated with precision. However, despite its appealing simplicity, there are complications and limitations in a sessile drop tensiometer, i.e., it must dispense spherical droplets with low surface tension. We propose a method of measuring surface tension using a sessile drop tensiometer with a superoleophobic surface fabricated by acidic etching and anodization for liquids with low surface tension and investigate the accuracy of the measurement by changing the wettability of the measuring plate surface.

  1. Low-frequency sound speed and attenuation in sandy seabottom from long-range broadband acoustic measurements.

    PubMed

    Wan, Lin; Zhou, Ji-Xun; Rogers, Peter H

    2010-08-01

    A joint China-U.S. underwater acoustics experiment was conducted in the Yellow Sea with a very flat bottom and a strong and sharp thermocline. Broadband explosive sources were deployed both above and below the thermocline along two radial lines up to 57.2 km and a quarter circle with a radius of 34 km. Two inversion schemes are used to obtain the seabottom sound speed. One is based on extracting normal mode depth functions from the cross-spectral density matrix. The other is based on the best match between the calculated and measured modal arrival times for different frequencies. The inverted seabottom sound speed is used as a constraint condition to extract the seabottom sound attenuation by three methods. The first method involves measuring the attenuation coefficients of normal modes. In the second method, the seabottom sound attenuation is estimated by minimizing the difference between the theoretical and measured modal amplitude ratios. The third method is based on finding the best match between the measured and modeled transmission losses (TLs). The resultant seabottom attenuation, averaged over three independent methods, can be expressed as α = (0.33 ± 0.02) f^(1.86 ± 0.04) dB/m, with f in kHz, over a frequency range of 80-1000 Hz.
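
    As a worked example of the reported power law (central values only, ignoring the quoted uncertainties):

    ```python
    # alpha = 0.33 * f**1.86 dB/m, with f in kHz (central values from the abstract)
    f_khz = 0.5                                   # 500 Hz, inside the 80-1000 Hz band
    alpha_db_per_m = 0.33 * f_khz ** 1.86
    print(f"{alpha_db_per_m:.3f} dB/m at {f_khz * 1000:.0f} Hz")   # ~0.09 dB/m
    ```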

  2. MEASUREMENT OF THE VISCOELASTIC PROPERTIES OF WATER-SATURATED CLAY SEDIMENTS.

    DTIC Science & Technology

    The complex shear modulus of both kaolin-water and bentonite-water mixtures has been determined in the laboratory. The method involved measuring the...range two to forty-three kHz. Dispersed sediments behaved like Newtonian liquids. Undispersed sediments, however, were viscoelastic in character, and...their shear moduli exhibited no dependence on frequency. For undispersed kaolin mixtures, a typical result is (21.6 + i 1.2) x 1,000 dynes per square

  3. Short- and long-term memory effects in intensified array detectors - Influence on airborne laser fluorosensor measurements

    NASA Astrophysics Data System (ADS)

    Bristow, Michael P.; Edmonds, Curtis M.; Bundy, Donald H.; Turner, Rudolpha M.

    1989-02-01

    Phosphorescence and thermoluminescence memory effects in the phosphors of image intensifiers are investigated, with application to the performance improvement of intensified optical multichannel analyzers. Algorithms have been developed which can be used to remove these effects from airborne measurements of laser-induced fluorescence spectra of aquatic and terrestrial targets. The present method can be adapted to situations involving different gating routines, repetition rates, and diode group sizes.

  4. Spectrofluorimetric Method for Estimation of Curcumin in Rat Blood Plasma: Development and Validation

    NASA Astrophysics Data System (ADS)

    Trivedi, J.; Variya, B.; Gandhi, H.; Rathod, S. P.

    2016-01-01

    Curcumin is a medicinally important phytoconstituent of curcuminoids. The present study describes development of a simple method for estimation of curcumin in rat plasma. This method involves the use of spectrofluorimetry for evaluation of curcumin at 257 (Ex) and 504 nm (Em). Sample preparation involves only two steps: extraction of curcumin and drying the extract. Following this procedure, the samples are reconstituted with ethyl acetate, and relative fluorescence intensity is measured using a spectrofluorimeter. The method was validated as per CDER guidelines. The linearity of the method was found to be in the range of 100-500 ng/mL with accuracy and precision lying within 2% RSD. The LOD and LOQ were found to be 15.3 and 46.1 ng/mL, respectively. The method was applied for pharmacokinetic evaluation in rats, and AUC, Cmax, and Tmax were found to be 5580 ± 1006 h × ng/mL, 1526 ± 209 ng/mL, and 2.97 ± 0.28 h, respectively, with a plasma half-life of 1.14 ± 0.27 h.

  5. Method and apparatus for characterizing and enhancing the functional performance of machine tools

    DOEpatents

    Barkman, William E; Babelay, Jr., Edwin F; Smith, Kevin Scott; Assaid, Thomas S; McFarland, Justin T; Tursky, David A; Woody, Bethany; Adams, David

    2013-04-30

    Disclosed are various systems and methods for assessing and improving the capability of a machine tool. The disclosure applies to machine tools having at least one slide configured to move along a motion axis. Various patterns of dynamic excitation commands are employed to drive the one or more slides, typically involving repetitive short distance displacements. A quantification of a measurable merit of machine tool response to the one or more patterns of dynamic excitation commands is typically derived for the machine tool. Examples of measurable merits of machine tool performance include workpiece surface finish, and the ability to generate chips of the desired length.

  6. Calibration of an arbitrarily arranged projection moiré system for 3D shape measurement

    NASA Astrophysics Data System (ADS)

    Tang, Ying; Yao, Jun; Zhou, Yihao; Sun, Chen; Yang, Peng; Miao, Hong; Chen, Jubing

    2018-05-01

    An arbitrarily arranged projection moiré system is presented for three-dimensional shape measurement. We develop a model for projection moiré system and derive a universal formula expressing the relation between height and phase variation before and after we put the object on the reference plane. With so many system parameters involved, a system calibration technique is needed. In this work, we provide a robust and accurate calibration method for an arbitrarily arranged projection moiré system. The system no longer puts restrictions on the configuration of the optical setup. Real experiments have been conducted to verify the validity of this method.

  7. Determination of the time delay in the case of two-path propagation on the basis of the attenuation characteristics for two adjacent frequencies

    NASA Technical Reports Server (NTRS)

    Gilroi, H. G.

    1979-01-01

    Pronounced fading occurring in the line of sight radio links at frequencies below 10 GHz can be traced to the effects of multipath propagation. Modulation disturbances depend on travel time differences between the direct wave and the wave which is reflected at atmospheric layers. A method described for the determination of the time delay is based on an indirect approach which utilizes the difference in fading at various frequencies. The method was employed in measurements involving a distance of 181 km. The results obtained in the measurement are discussed.

  8. Prediction of slant path rain attenuation statistics at various locations

    NASA Technical Reports Server (NTRS)

    Goldhirsh, J.

    1977-01-01

    The paper describes a method for predicting slant path attenuation statistics at arbitrary locations for variable frequencies and path elevation angles. The method involves the use of median reflectivity factor-height profiles measured with radar as well as the use of long-term point rain rate data and assumed or measured drop size distributions. The attenuation coefficient due to cloud liquid water in the presence of rain is also considered. Absolute probability fade distributions are compared for eight cases: Maryland (15 GHz), Texas (30 GHz), Slough, England (19 and 37 GHz), Fayetteville, North Carolina (13 and 18 GHz), and Cambridge, Massachusetts (13 and 18 GHz).

  9. Testing the MODIS Satellite Retrieval of Aerosol Fine-Mode Fraction

    NASA Technical Reports Server (NTRS)

    Anderson, Theodore L.; Wu, Yonghua; Chu, D. Allen; Schmid, Beat; Redemann, Jens; Dubovik, Oleg

    2005-01-01

    Satellite retrievals of the fine-mode fraction (FMF) of midvisible aerosol optical depth, tau, are potentially valuable for constraining chemical transport models and for assessing the global distribution of anthropogenic aerosols. Here we compare satellite retrievals of FMF from the Moderate Resolution Imaging Spectroradiometer (MODIS) to suborbital data on the submicrometer fraction (SMF) of tau. SMF is a closely related parameter that is directly measurable by in situ techniques. The primary suborbital method uses in situ profiling of SMF combined with airborne Sun photometry both to validate the in situ estimate of ambient extinction and to take into account the aerosol above the highest flight level. This method is independent of the satellite retrieval and has well-known accuracy but entails considerable logistical and technical difficulties. An alternate method uses Sun photometer measurements near the surface and an empirical relation between SMF and the Angstrom exponent, A, a measure of the wavelength dependence of optical depth or extinction. Eleven primary and fifteen alternate comparisons are examined involving varying mixtures of dust, sea salt, and pollution in the vicinity of Korea and Japan. MODIS ocean retrievals of FMF are shown to be systematically higher than suborbital estimates of SMF by about 0.2. The most significant cause of this discrepancy involves the relationship between A and fine-mode partitioning; in situ measurements indicate a systematically different relationship from what is assumed in the satellite retrievals. Based on these findings, we recommend: (1) satellite programs should concentrate on retrieving and validating tau, since an excellent validation program is in place for doing this, and (2) suborbital measurements should be used to derive relationships between A and fine-mode partitioning to allow interpretation of the satellite data in terms of fine-mode aerosol optical depth.

  10. Measuring quality in anatomic pathology.

    PubMed

    Raab, Stephen S; Grzybicki, Dana Marie

    2008-06-01

    This article focuses mainly on diagnostic accuracy in measuring quality in anatomic pathology, noting that measuring any quality metric is complex and demanding. The authors discuss standardization and its variability within and across areas of care delivery and efforts involving defining and measuring error to achieve pathology quality and patient safety. They propose that data linking error to patient outcome are critical for developing quality improvement initiatives targeting errors that cause patient harm in addition to using methods of root cause analysis, beyond those traditionally used in cytologic-histologic correlation, to assist in the development of error reduction and quality improvement plans.

  11. On the consistency among different approaches for nuclear track scanning and data processing

    NASA Astrophysics Data System (ADS)

    Inozemtsev, K. O.; Kushin, V. V.; Kodaira, S.; Shurshakov, V. A.

    2018-04-01

    The article describes various approaches to space radiation track measurement using a CR-39™ detector (Tastrak). The results of comparing different methods of track scanning and data processing are presented, and basic algorithms for determining track parameters are described. Each approach involves an individual set of measured track parameters. For two of the sets, track scanning in the plane of the detector surface (2-D measurement) is sufficient; the third set requires scanning in an additional projection (3-D measurement). An experimental comparison of the considered techniques was made using accelerated heavy ions (Ar, Fe, and Kr).

  12. Applications of the Trojan Horse method in nuclear astrophysics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Spitaleri, Claudio, E-mail: spitaleri@lns.infn.it

    2015-02-24

    The study of energy production in stars and the related nucleosynthesis processes requires increasingly precise knowledge of nuclear reaction cross sections and reaction rates at the relevant interaction energies. In order to overcome the experimental difficulties arising from the small cross sections involved in charged-particle-induced reactions at astrophysical energies, and from the presence of electron screening, it was necessary to introduce indirect methods. Through these methods it is possible to measure cross sections at very small energies and to retrieve information on the electron screening effect when ultra-low-energy direct measurements are available. The Trojan Horse Method (THM) is an indirect technique for determining the bare-nucleus astrophysical S-factor for reactions between charged particles at astrophysical energies. The basic theory of the THM is discussed for the non-resonant case.

  13. Transmission electron microscopy of amyloid fibrils.

    PubMed

    Gras, Sally L; Waddington, Lynne J; Goldie, Kenneth N

    2011-01-01

    Transmission Electron Microscopy of negatively stained and cryo-prepared specimens allows amyloid fibrils to be visualised at high resolution in a dried or a hydrated state, and is an essential method for characterising the morphology of fibrils and pre-fibrillar species. We outline the key steps involved in the preparation and observation of samples using negative staining and cryo-electron preservation. We also discuss methods to measure fibril characteristics, such as fibril width, from electron micrographs.

  14. Automated dynamic analytical model improvement for damped structures

    NASA Technical Reports Server (NTRS)

    Fuh, J. S.; Berman, A.

    1985-01-01

    A method is described to improve a linear, nonproportionally damped analytical model of a structure. The procedure finds the smallest changes in the analytical model such that the improved model matches the measured modal parameters. Features of the method are: (1) the ability to properly treat complex-valued modal parameters of a damped system; (2) applicability to realistically large structural models; and (3) computational efficiency, without involving eigensolutions or inversion of a large matrix.

  15. A new experimental method for determining local airloads on rotor blades in forward flight

    NASA Astrophysics Data System (ADS)

    Berton, E.; Maresca, C.; Favier, D.

    This paper presents a new approach for determining local airloads on helicopter rotor blade sections in forward flight. The method is based on the momentum equation in which all the terms are expressed by means of the velocity field measured by a laser Doppler velocimeter. The relative magnitude of the different terms involved in the momentum and Bernoulli equations is estimated and the results are encouraging.

  16. Replications and Extensions in Arousal Assessment for Sex Offenders with Developmental Disabilities

    ERIC Educational Resources Information Center

    Reyes, Jorge R.; Vollmer, Timothy R.; Hall, Astrid

    2011-01-01

    Three adult male sex offenders with developmental disabilities participated in phallometric assessments that involved repeated measures of arousal when exposed to various stimuli. Arousal assessment outcomes were similar to those obtained by Reyes et al. (2006). Additional data-analysis methods provided further information about sexual…

  17. PRESENTED 04/05/2006: MERCURY MEASUREMENTS FOR SOLIDS MADE RAPIDLY, SIMPLY, AND INEXPENSIVELY

    EPA Science Inventory

    While traditional methods for determining mercury in solid samples involve the use of aggressive chemicals to dissolve the matrix and the use of other chemicals to properly reduce the mercury to the volatile elemental form, pyrolysis-based analyzers can be used by directly weighi...

  18. PRESENTED MAY 10, 2005, MERCURY MEASUREMENTS FOR SOLIDS MADE RAPIDLY, SIMPLY, AND INEXPENSIVELY

    EPA Science Inventory

    While traditional methods for determining mercury in solid samples involve the use of aggressive chemicals to dissolve the matrix and the use of other chemicals to properly reduce the mercury to the volatile elemental form, pyrolysis-based analyzers can be used by directly weighi...

  19. Social Capital: Its Constructs and Survey Development

    ERIC Educational Resources Information Center

    Enfield, Richard P.; Nathaniel, Keith C.

    2013-01-01

    This article reports on experiences and methods of adapting a valid adult social capital assessment to youth audiences in order to measure social capital and sense of place. The authors outline the process of adapting, revising, prepiloting, piloting, and administering a youth survey exploring young people's sense of community, involvement in the…

  20. 78 FR 23743 - Proposed Information Collection; Comment Request; Generic Clearance for Questionnaire Pretesting...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-22

    ...; Generic Clearance for Questionnaire Pretesting Research AGENCY: Census Bureau, Commerce. ACTION: Notice.... This research program will be used by the Census Bureau and survey sponsors to improve questionnaires... involve one of the following methods of identifying measurement problems with the questionnaire or survey...

  1. Are Drinking Motives Associated with Sexual "Hookups" among College Student Drinkers?

    ERIC Educational Resources Information Center

    Dvorak, Robert D.; Kuvaas, Nicholas J.; Kilwein, Tess M.; Wray, Tyler B.; Stevenson, Brittany L.; Sargent, Emily M.

    2016-01-01

    Objective: This study examined associations between drinking motivation, alcohol use, and sexual hookups among college students. Participants: Participants (n = 755 Midwest college student drinkers; 61% female) ranged in age from 18 to 24. Methods: Participants completed online measures of alcohol involvement (use and motives) and sexual activity.…

  2. Examining Teachers' Understanding of Attention Deficit Hyperactivity Disorder

    ERIC Educational Resources Information Center

    Guerra, Federico; Tiwari, Ashwini; Das, Ajay; Cavazos Vela, Javier; Sharma, Manisha

    2017-01-01

    The aim of this study was to examine teachers' knowledge, misconceptions and concerns about students with attention deficit hyperactivity disorder (ADHD). This mixed methods study involved 173 school teachers from five elementary schools. Knowledge of Attention Deficit Disorders Scale (KADDS) was used to measure teachers' knowledge and…

  3. The Relations among Cumulative Risk, Parenting, and Behavior Problems during Early Childhood

    ERIC Educational Resources Information Center

    Trentacosta, Christopher J.; Hyde, Luke W.; Shaw, Daniel S.; Dishion, Thomas J.; Gardner, Frances; Wilson, Melvin

    2008-01-01

    Background: This study examined relations among cumulative risk, nurturant and involved parenting, and behavior problems across early childhood. Methods: Cumulative risk, parenting, and behavior problems were measured in a sample of low-income toddlers participating in a family-centered program to prevent conduct problems. Results: Path analysis…

  4. Dietary Adherence Monitoring Tool for Free-living, Controlled Feeding Studies

    USDA-ARS?s Scientific Manuscript database

    Objective: To devise a dietary adherence monitoring tool for use in controlled human feeding trials involving free-living study participants. Methods: A scoring tool was devised to measure and track dietary adherence for an 8-wk randomized trial evaluating the effects of two different dietary patter...

  5. QUANTIFICATION OF ENTEROVIRUS AND HEPATITIS A VIRUSES IN WELLS AND SPRINGS IN EAST TENNESSEE USING REAL-TIME REVERSE TRANSCIPTION PCR

    EPA Science Inventory

    This project involves development, validation testing and application of a fast, efficient method of quantitatively measuring occurrence and concentration of common human viral pathogens, enterovirus and hepatitis A virus, in ground water samples using real-time reverse transcrip...

  6. Composite Indices of Development and Poverty: An Application to MDGs

    ERIC Educational Resources Information Center

    De Muro, Pasquale; Mazziotta, Matteo; Pareto, Adriano

    2011-01-01

    The measurement of development or poverty as multidimensional phenomena is very difficult because there are several theoretical, methodological and empirical problems involved. The literature of composite indicators offers a wide variety of aggregation methods, all with their pros and cons. In this paper, we propose a new, alternative composite…

  7. Impact of a contextual intervention on child participation and parent competence among children with autism spectrum disorders: a pretest-posttest repeated-measures design.

    PubMed

    Dunn, Winnie; Cox, Jane; Foster, Lauren; Mische-Lawson, Lisa; Tanquary, Jennifer

    2012-01-01

    OBJECTIVE. We tested an occupational therapy contextual intervention for improving participation in children with autism spectrum disorders and for developing parental competence. METHOD. Using a repeated-measures pretest-posttest design, we evaluated the effectiveness of a contextually relevant reflective guidance occupational therapy intervention involving three components: authentic activity settings, family's daily routines, and the child's sensory processing patterns (Sensory Profile). We used these components to coach 20 parents in strategies to support their child's participation. Intervention sessions involved reflective discussion with parents to support them in identifying strategies to meet their goals and make joint plans for the coming week. We measured child participation (Canadian Occupational Performance Measure, Goal Attainment Scaling) and parent competence (Parenting Sense of Competence, Parenting Stress Index). RESULTS. Results indicated that parents felt more competent and children significantly increased participation in everyday life, suggesting that this approach is an effective occupational therapy intervention.

  8. Correlation of two bioadhesion assays: the everted sac technique and the CAHN microbalance.

    PubMed

    Santos, C A; Jacob, J S; Hertzog, B A; Freedman, B D; Press, D L; Harnpicharnchai, P; Mathiowitz, E

    1999-08-27

    This contribution correlates two in vitro methods utilized to determine bioadhesion. One method, the everted intestinal sac technique, is a passive test for bioadhesion involving several polymer microspheres and a section of everted intestinal tissue. The other method, the CAHN microbalance, employs a CAHN dynamic contact angle analyzer with modified software to record the tensile forces measured as a single polymer microsphere is pulled from intestinal tissue. This study demonstrates that CAHN and everted sac experiments yield similar results when used to quantify the bioadhesive nature of polymer microsphere systems. A polymer showing high adhesion in one method also demonstrates high bioadhesion in the other method; polymers that exhibit high fracture strength and tensile work measurements with the CAHN microbalance also yield high binding percentages with the everted sac method. The polymers tested and reported here are poly(caprolactone) and different copolymer ratios of poly(fumaric-co-sebacic anhydride). The results of this correlation demonstrate that each method alone is a valuable indicator of bioadhesion.

  9. Neutron activation analysis of certified samples by the absolute method

    NASA Astrophysics Data System (ADS)

    Kadem, F.; Belouadah, N.; Idiri, Z.

    2015-07-01

    The nuclear reaction analysis technique is mainly based on the relative method or on the use of activation cross sections. In order to validate nuclear data for cross sections evaluated from systematic studies, we used the neutron activation analysis (NAA) technique to determine the constituent concentrations of certified samples of animal blood, milk, and hay. In this analysis, the absolute method is used. The neutron activation technique involves irradiating the sample and subsequently measuring its activity. The fundamental activation equation connects several physical parameters, including the cross section, and is essential for the quantitative determination of the different elements composing the sample without resorting to a standard sample. Called the absolute method, this approach allows a measurement as accurate as the relative method. The results obtained by the absolute method showed that the values are as precise as those of the relative method, which requires a standard sample for each element to be quantified.
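
    The "fundamental activation equation" referred to above has the following standard textbook form (a sketch of the relation, not necessarily the authors' exact notation; the correction for the finite counting interval is omitted):

    ```latex
    A = \frac{m\,N_A\,\theta}{M}\,\sigma\,\Phi\,
        \left(1 - e^{-\lambda t_{\mathrm{irr}}}\right) e^{-\lambda t_{\mathrm{d}}}
    ```

    where A is the measured activity, m the mass of the element, N_A Avogadro's number, θ the isotopic abundance, M the molar mass, σ the activation cross section, Φ the neutron flux, λ the decay constant, and t_irr and t_d the irradiation and decay (cooling) times. Solving this relation for m is what allows quantification without a standard sample.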

  10. Contextualizing and assessing the social capital of seniors in congregate housing residences: study design and methods

    PubMed Central

    Moore, Spencer; Shiell, Alan; Haines, Valerie; Riley, Therese; Collier, Carrie

    2005-01-01

    Background This article discusses the study design and methods used to contextualize and assess the social capital of seniors living in congregate housing residences in Calgary, Alberta. The project is being funded as a pilot project under the Institute of Aging, Canadian Institutes for Health Research. Design/Methods Working with seniors living in 5 congregate housing residences in Calgary, the project uses a mixed method approach to develop grounded measures of the social capital of seniors. The project integrates both qualitative and quantitative methods in a 3-phase research design: 1) qualitative, 2) quantitative, and 3) qualitative. Phase 1 uses gender-specific focus groups; phase 2 involves the administration of individual surveys that include a social network module; and phase 3 uses anomalous-case interviews. Not only does the study design allow us to develop grounded measures of social capital but it also permits us to test how well the three methods work separately, and how well they fit together to achieve project goals. This article describes the selection of the study population, the multiple methods used in the research and a brief discussion of our conceptualization and measurement of social capital. PMID:15836784

  11. Evaluating a novel application of optical fibre evanescent field absorbance: rapid measurement of red colour in winegrape homogenates

    NASA Astrophysics Data System (ADS)

    Lye, Peter G.; Bradbury, Ronald; Lamb, David W.

    Silica optical fibres were used to measure colour (mg anthocyanin/g fresh berry weight) in samples of red wine grape homogenates via optical Fibre Evanescent Field Absorbance (FEFA). Colour measurements from 126 samples of grape homogenate were compared against the standard industry spectrophotometric reference method that involves chemical extraction and subsequent optical absorption measurements of clarified samples at 520 nm. FEFA absorbance on homogenates at 520 nm (FEFA520h) was correlated with the industry reference method measurements of colour (R2 = 0.46, n = 126). Using a simple regression equation, colour could be predicted with a standard error of cross-validation (SECV) of 0.21 mg/g, over a range of 0.6 to 2.2 mg anthocyanin/g and a standard deviation of 0.33 mg/g. With a Ratio of Performance Deviation (RPD) of 1.6, the technique, when utilizing only a single detection wavelength, is not robust enough to apply in a diagnostic sense; however, the results do demonstrate the potential of the FEFA method as a fast and low-cost assay of colour in homogenized samples.
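
    The quoted RPD follows directly from the spread of the reference values and the cross-validation error; a quick arithmetic check:

    ```python
    # RPD (ratio of performance deviation) = SD of reference values / SECV
    sd_ref, secv = 0.33, 0.21      # mg anthocyanin / g, values from the abstract
    print(f"RPD = {sd_ref / secv:.1f}")   # ~1.6, consistent with the reported value
    ```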

  12. In situ high-pressure measurement of crystal solubility by using neutron diffraction

    NASA Astrophysics Data System (ADS)

    Chen, Ji; Hu, Qiwei; Fang, Leiming; He, Duanwei; Chen, Xiping; Xie, Lei; Chen, Bo; Li, Xin; Ni, Xiaolin; Fan, Cong; Liang, Akun

    2018-05-01

    Crystal solubility is one of the most important thermo-physical properties and plays a key role in industrial applications, fundamental science, and geoscientific research. However, high-pressure in situ measurements of crystal solubility remain very challenging. Here, we present a method involving high-pressure neutron diffraction for making high-precision in situ measurements of crystal solubility as a function of pressure over a wide range of pressures. For these experiments, we designed a piston-cylinder cell with a large chamber volume for high-pressure neutron diffraction. The solution pressures are continuously monitored in situ based on the equation of state of the sample crystal. The solubility at a high pressure can be obtained by applying a Rietveld quantitative multiphase analysis. To evaluate the proposed method, we measured the high-pressure solubility of NaCl in water up to 610 MPa. At a low pressure, the results are consistent with the previous results measured ex situ. At a higher pressure, more reliable data could be provided by using an in situ high-pressure neutron diffraction method.

  13. Comparing the Medicaid Retrospective Drug Utilization Review Program Cost-Savings Methods Used by State Agencies.

    PubMed

    Prada, Sergio I

    2017-12-01

    The Medicaid Drug Utilization Review (DUR) program is a 2-phase process conducted by Medicaid state agencies. The first phase is a prospective DUR and involves electronically monitoring prescription drug claims to identify prescription-related problems, such as therapeutic duplication, contraindications, incorrect dosage, or duration of treatment. The second phase is a retrospective DUR and involves ongoing and periodic examinations of claims data to identify patterns of fraud, abuse, underutilization, drug-drug interaction, or medically unnecessary care, implementing corrective actions when needed. The Centers for Medicare & Medicaid Services requires each state to measure prescription drug cost-savings generated from its DUR programs on an annual basis, but it provides no guidance or unified methodology for doing so. To describe and synthesize the methodologies used by states to measure cost-savings using their Medicaid retrospective DUR program in federal fiscal years 2014 and 2015. For each state, the cost-savings methodologies included in the Medicaid DUR 2014 and 2015 reports were downloaded from Medicaid's website. The reports were then reviewed and synthesized. Methods described by the states were classified according to research designs often described in evaluation textbooks. In 2014, the most often used prescription drugs cost-savings estimation methodology for the Medicaid retrospective DUR program was a simple pre-post intervention method, without a comparison group (ie, 12 states). In 2015, the most common methodology used was a pre-post intervention method, with a comparison group (ie, 14 states). Comparisons of savings attributed to the program among states are still unreliable, because of a lack of a common methodology available for measuring cost-savings. There is great variation among states in the methods used to measure prescription drug utilization cost-savings. This analysis suggests that there is still room for improvement in terms of methodology transparency, which is important, because lack of transparency hinders states from learning from each other. Ultimately, the federal government needs to evaluate and improve its DUR program.

  14. Visualization of the equilibrium position of colloidal particles at fluid-water interfaces by deposition of nanoparticles

    NASA Astrophysics Data System (ADS)

    Sabapathy, Manigandan; Kollabattula, Viswas; Basavaraj, Madivala G.; Mani, Ethayaraja

    2015-08-01

    We present a general yet simple method to measure the contact angle of colloidal particles at fluid-water interfaces. In this method, the particles are spread at the required fluid-water interface as a monolayer. In the water phase a chemical reaction involving reduction of a metal salt such as aurochloric acid is initiated. The metal grows as a thin film or islands of nanoparticles on the particle surface exposed to the water side of the interface. Analyzing the images of particles by high resolution scanning microscopy (HRSEM), we trace the three phase contact line up to which deposition of the metal film occurs. From geometrical relations, the three phase contact angle is then calculated. We report the measurements of the contact angle of silica and polystyrene (PS) particles at different interfaces such as air-water, decane-water and octanol-water. We have also applied this method to measure the contact angle of surfactant treated polystyrene particles at the air-water interface, and we find a non-monotonic change of the contact angle with the concentration of the surfactant. Our results are compared with the well-known gel trapping technique and we find good comparison with previous measurements.

  15. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    DOE PAGES

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; ...

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs.

  16. Protecting Privacy of Shared Epidemiologic Data without Compromising Analysis Potential

    PubMed Central

    Cologne, John; Grant, Eric J.; Nakashima, Eiji; Chen, Yun; Funamoto, Sachiyo; Katayama, Hiroaki

    2012-01-01

    Objective. Ensuring privacy of research subjects when epidemiologic data are shared with outside collaborators involves masking (modifying) the data, but overmasking can compromise utility (analysis potential). Methods of statistical disclosure control for protecting privacy may be impractical for individual researchers involved in small-scale collaborations. Methods. We investigated a simple approach based on measures of disclosure risk and analytical utility that are straightforward for epidemiologic researchers to derive. The method is illustrated using data from the Japanese Atomic-bomb Survivor population. Results. Masking by modest rounding did not adequately enhance security but rounding to remove several digits of relative accuracy effectively reduced the risk of identification without substantially reducing utility. Grouping or adding random noise led to noticeable bias. Conclusions. When sharing epidemiologic data, it is recommended that masking be performed using rounding. Specific treatment should be determined separately in individual situations after consideration of the disclosure risks and analysis needs. PMID:22505949

  17. Specter: linear deconvolution for targeted analysis of data-independent acquisition mass spectrometry proteomics.

    PubMed

    Peckner, Ryan; Myers, Samuel A; Jacome, Alvaro Sebastian Vaca; Egertson, Jarrett D; Abelin, Jennifer G; MacCoss, Michael J; Carr, Steven A; Jaffe, Jacob D

    2018-05-01

    Mass spectrometry with data-independent acquisition (DIA) is a promising method to improve the comprehensiveness and reproducibility of targeted and discovery proteomics, in theory by systematically measuring all peptide precursors in a biological sample. However, the analytical challenges involved in discriminating between peptides with similar sequences in convoluted spectra have limited its applicability in important cases, such as the detection of single-nucleotide polymorphisms (SNPs) and alternative site localizations in phosphoproteomics data. We report Specter (https://github.com/rpeckner-broad/Specter), an open-source software tool that uses linear algebra to deconvolute DIA mixture spectra directly through comparison to a spectral library, thus circumventing the problems associated with typical fragment-correlation-based approaches. We validate the sensitivity of Specter and its performance relative to that of other methods, and show that Specter is able to successfully analyze cases involving highly similar peptides that are typically challenging for DIA analysis methods.
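
    This is not Specter's implementation; it is only a toy sketch of the core idea of linear deconvolution against a spectral library, namely expressing an observed DIA mixture spectrum as a non-negative linear combination of library spectra. The library and mixture arrays below are invented for illustration.

```python
import numpy as np
from scipy.optimize import nnls

# Toy spectral library: rows are m/z bins, columns are library peptide spectra
# (fragment intensities). In practice these would come from a spectral library.
library = np.array([
    [0.9, 0.0, 0.1],
    [0.0, 0.8, 0.2],
    [0.5, 0.5, 0.0],
    [0.1, 0.0, 0.9],
])

# Observed DIA mixture spectrum over the same m/z bins: a blend of peptides 0 and 2.
rng = np.random.default_rng(0)
mixture = 2.0 * library[:, 0] + 1.0 * library[:, 2] + rng.normal(0, 0.01, 4)

# Non-negative least squares recovers the contribution (abundance-like coefficient)
# of each library spectrum to the mixture.
coefficients, residual = nnls(library, mixture)
print(coefficients)   # approximately [2.0, 0.0, 1.0]
```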

  18. USGS tethered ACP platforms: New design means more safety and accuracy

    USGS Publications Warehouse

    Morlock, S.E.; Stewart, J.A.; Rehmel, M.S.

    2004-01-01

    The US Geological Survey has developed an innovative tethered platform that supports an Acoustic Current Profiler (ACP) in making stream-flow measurements (use of the term ACP in this article refers to a class of instruments and not a specific brand name or model). The tethered platform reduces the hazards involved in conventional methods of stream-flow measurement. Use of the platform reduces or eliminates time spent by personnel in streams and boats or on bridges and cableways, and stream-flow measurement accuracy is increased.

  19. New determination of the fine structure constant from the electron g value and QED.

    PubMed

    Gabrielse, G; Hanneke, D; Kinoshita, T; Nio, M; Odom, B

    2006-07-21

    Quantum electrodynamics (QED) predicts a relationship between the dimensionless magnetic moment of the electron (g) and the fine structure constant (alpha). A new measurement of g using a one-electron quantum cyclotron, together with a QED calculation involving 891 eighth-order Feynman diagrams, determine alpha(-1)=137.035 999 710 (96) [0.70 ppb]. The uncertainties are 10 times smaller than those of nearest rival methods that include atom-recoil measurements. Comparisons of measured and calculated g test QED most stringently, and set a limit on internal electron structure.

  20. A dye binding method for measurement of total protein in microalgae.

    PubMed

    Servaites, Jerome C; Faeth, Julia L; Sidhu, Sukh S

    2012-02-01

    Protein is a large component of the standing biomass of algae. The total protein content of algae is difficult to measure because of the problems encountered in extracting all of the protein from the cells. Here we modified an existing protein assay, so that it involves little or no extraction of protein from the cells, to measure total protein in microalgal cells. Aliquots of fresh or pretreated cells were spotted onto filter paper strips. After drying, the strips were stained in a 0.1% (w/v) solution of the protein stain Coomassie Brilliant Blue R-250 for 16 to 24 h and then destained. The stained protein spots were cut out from the paper, and dye was eluted in 1% (w/v) sodium dodecyl sulfate (SDS). Absorbance at 600 nm was directly proportional to protein concentration. Cells that were recalcitrant to taking up the dye could be either heated at 80°C for 10 min in 1% SDS or briefly sonicated for 3 min to facilitate penetration of the dye into the cells. Total protein measured in Chlorella vulgaris using this method compared closely with that measured using the total N method. Total protein concentrations were measured successfully in 12 algal species using this dye binding method. Copyright © 2011 Elsevier Inc. All rights reserved.

  1. Solving portfolio selection problems with minimum transaction lots based on conditional-value-at-risk

    NASA Astrophysics Data System (ADS)

    Setiawan, E. P.; Rosadi, D.

    2017-01-01

    Portfolio selection conventionally means 'minimizing the risk, given a certain level of return' from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as the risk measure. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach performs better when we work with non-symmetric return distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
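
    A rough sketch of the approach described above, assuming an empirical CVaR computed from simulated return scenarios and a plain genetic algorithm over integer lot counts; all prices, lot sizes, and GA settings are hypothetical, and the minimum-return constraint is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy problem data (hypothetical): return scenarios, prices, lot sizes ---
n_assets, n_days = 5, 500
returns = rng.normal(0.0005, 0.01, size=(n_days, n_assets))   # simulated return scenarios
prices = np.array([100., 250., 50., 80., 120.])                # price per share
lot_size = np.array([100, 100, 100, 100, 100])                 # shares per transaction lot
budget = 1_000_000.0
alpha = 0.95                                                   # CVaR confidence level

def portfolio_cvar(lots):
    """Empirical CVaR of portfolio loss for an integer vector of lots."""
    value = lots * lot_size * prices                 # money invested per asset
    total = value.sum()
    if total == 0:
        return np.inf
    weights = value / total
    losses = -returns @ weights                      # scenario losses (negative returns)
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()

def fitness(lots):
    """CVaR plus a penalty for violating the budget constraint (to be minimized)."""
    cost = (lots * lot_size * prices).sum()
    penalty = max(0.0, cost - budget) / budget
    return portfolio_cvar(lots) + 10.0 * penalty

# --- A plain genetic algorithm over integer lot counts ---
pop_size, n_gen, max_lots = 60, 200, 30
pop = rng.integers(0, max_lots + 1, size=(pop_size, n_assets))

for _ in range(n_gen):
    scores = np.array([fitness(ind) for ind in pop])
    # tournament selection between two random individuals
    parents = pop[[min(rng.integers(0, pop_size, 2), key=lambda i: scores[i])
                   for _ in range(pop_size)]]
    # single-point crossover
    children = parents.copy()
    for i in range(0, pop_size - 1, 2):
        cut = rng.integers(1, n_assets)
        children[i, cut:], children[i + 1, cut:] = (parents[i + 1, cut:].copy(),
                                                    parents[i, cut:].copy())
    # mutation: randomly replace some lot counts
    mutate = rng.random(children.shape) < 0.1
    children[mutate] = rng.integers(0, max_lots + 1, mutate.sum())
    # elitism: keep the best individual from the previous generation
    children[0] = pop[scores.argmin()]
    pop = children

best = pop[np.array([fitness(ind) for ind in pop]).argmin()]
print("best lots:", best, "CVaR:", round(portfolio_cvar(best), 5))
```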

  2. Comparing two periphyton collection methods commonly used for stream bioassessment and the development of numeric nutrient standards.

    PubMed

    Rodman, Ashley R; Scott, J Thad

    2017-07-01

    Periphyton is an important component of stream bioassessment, yet methods for quantifying periphyton biomass can differ substantially. A case study within the Arkansas Ozarks is presented to demonstrate the potential for linking chlorophyll-a (chl-a) and ash-free dry mass (AFDM) data sets amassed using two frequently used periphyton sampling protocols. Method A involved collecting periphyton from a known area on the top surface of variably sized rocks gathered from relatively swift-velocity riffles without discerning canopy cover. Method B involved collecting periphyton from the entire top surface of cobbles systematically gathered from riffle-run habitat where canopy cover was intentionally avoided. Chl-a and AFDM measurements were not different between methods (p = 0.123 and p = 0.550, respectively), and there was no interaction between method and time in the repeated measures structure of the study. However, significantly different seasonal distinctions were observed for chl-a and AFDM from all streams when data from the methods were combined (p < 0.001 and p = 0.012, respectively), with greater mean biomass in the cooler sampling months. Seasonal trends were likely the indirect results of varying temperatures. Although the size and range of this study were small, results suggest data sets collected using different methods may effectively be used together with some minor considerations due to potential confounding factors. This study provides motivation for the continued investigation of combining data sets derived from multiple methods of data collection, which could be useful in stream bioassessment and particularly important for the development of regional stream nutrient criteria for the southern Ozarks.

  3. Dynamic Nucleation of Supercooled Melts and Measurement of the Surface Tension and Viscosity

    NASA Technical Reports Server (NTRS)

    Trinh, E. H.; Ohsaka, K.

    1999-01-01

    We investigate the phenomenon of acoustic pressure-induced nucleation by using a novel approach involving the large amplitude resonant radial oscillations and collapse of a single bubble intentionally injected into a supercooled liquid. Using a combination of previously developed and proven techniques, the bubble is suspended in a fluid host by an ultrasonic field which supplies both the levitation capability as well as the forcing of the radial oscillations. We observe the effects of an increase in pressure (due to bubble collapse) in a region no larger than 100 microns within the supercooled melt to rigorously probe the hypothesis of pressure-induced nucleation of the solid phase. The use of single bubbles operating in narrow temporal and spatial scales will allow the direct and unambiguous correlation between the origin and location of the generation of the disturbance and the location and timing of the nucleation event. In a companion research effort, we are developing novel techniques for the non-contact measurements of the surface tension and viscosity of highly viscous supercooled liquids. Currently used non-invasive methods of surface tension measurement for undercooled liquids generally rely on the quantitative determination of the resonance frequencies of drop shape oscillations, of the dynamics of surface capillary waves, or of the velocity of streaming flows. These methods become quickly ineffective when the liquid viscosity rises to a significant value. An alternate and accurate method which would be applicable to liquids of significant viscosity is therefore needed. We plan to develop such a capability by measuring the equilibrium shape of levitated undercooled melt droplets as they undergo solid-body rotation. The experimental measurement of the characteristic point of transition (bifurcation point) between axisymmetric and two-lobed shapes will be used to calculate the surface tension of the liquid. Such an approach has already been validated through the experimental verification of numerical modeling results. The experimental approach involves levitation, melting, and solidification of undercooled droplets using a hybrid ultrasonic-electrostatic technique in both a gaseous as well as a vacuum environment. A shape relaxation method will be investigated in order to derive a reliable method to measure the viscosity of undercooled melts. The analysis of the monotonic relaxation to equilibrium shape of a drastically deformed and super-critically damped free drop has been used to derive the interfacial tension of immiscible liquid combinations where one of the components has high viscosity. A standard approach uses the initial elongation of a droplet through shear flows, but an equivalent method could involve the initial deformation of a drop levitated in a gas by ultrasonic radiation pressure, electric stresses, or even solid-body rotation. The dynamic behavior of the free drop relaxing back to equilibrium shape will be modeled, and its characteristic time dependence should provide a quantitative means to evaluate the liquid viscosity.

  4. Extracting the redox orbitals in Li battery materials with high-resolution x-ray compton scattering spectroscopy.

    PubMed

    Suzuki, K; Barbiellini, B; Orikasa, Y; Go, N; Sakurai, H; Kaprzyk, S; Itou, M; Yamamoto, K; Uchimoto, Y; Wang, Yung Jui; Hafiz, H; Bansil, A; Sakurai, Y

    2015-02-27

    We present an incisive spectroscopic technique for directly probing redox orbitals based on bulk electron momentum density measurements via high-resolution x-ray Compton scattering. Application of our method to spinel Li_{x}Mn_{2}O_{4}, a lithium ion battery cathode material, is discussed. The orbital involved in the lithium insertion and extraction process is shown to mainly be the oxygen 2p orbital. Moreover, the manganese 3d states are shown to experience spatial delocalization involving 0.16±0.05 electrons per Mn site during the battery operation. Our analysis provides a clear understanding of the fundamental redox process involved in the working of a lithium ion battery.

  5. Validated stability-indicating spectrofluorimetric methods for the determination of ebastine in pharmaceutical preparations

    PubMed Central

    2011-01-01

    Two sensitive, selective, economic, and validated spectrofluorimetric methods were developed for the determination of ebastine (EBS) in pharmaceutical preparations, depending on reaction with its tertiary amino group. Method I involves condensation of the drug with mixed anhydrides (citric and acetic anhydrides), producing a product with intense fluorescence, which was measured at 496 nm after excitation at 388 nm. Method IIA describes quantitative fluorescence quenching of eosin upon addition of the studied drug, where the decrease in the fluorescence intensity was directly proportional to the concentration of ebastine; the fluorescence quenching was measured at 553 nm after excitation at 457 nm. This method was extended (Method IIB) to first- and second-derivative synchronous spectrofluorimetric methods (FDSFS and SDSFS) for the simultaneous analysis of EBS in the presence of its alkaline, acidic, and UV degradation products. The proposed methods were successfully applied to the determination of the studied compound in its dosage forms. The results obtained were in good agreement with those obtained by a comparison method. Both methods were utilized to investigate the kinetics of the degradation of the drug. PMID:21385439

  6. Contrast computation methods for interferometric measurement of sensor modulation transfer function

    NASA Astrophysics Data System (ADS)

    Battula, Tharun; Georgiev, Todor; Gille, Jennifer; Goma, Sergio

    2018-01-01

    Accurate measurement of image-sensor frequency response over a wide range of spatial frequencies is very important for analyzing pixel array characteristics, such as modulation transfer function (MTF), crosstalk, and active pixel shape. Such analysis is especially significant in computational photography for the purposes of deconvolution, multi-image superresolution, and improved light-field capture. We use a lensless interferometric setup that produces high-quality fringes for measuring MTF over a wide range of frequencies (here, 37 to 434 line pairs per mm). We discuss the theoretical framework, involving Michelson and Fourier contrast measurement of the MTF, addressing phase alignment problems using a moiré pattern. We solidify the definition of Fourier contrast mathematically and compare it to Michelson contrast. Our interferometric measurement method shows high detail in the MTF, especially at high frequencies (above Nyquist frequency). We are able to estimate active pixel size and pixel pitch from measurements. We compare both simulation and experimental MTF results to a lens-free slanted-edge implementation using commercial software.
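
    A small sketch of the two contrast measures on a synthetic fringe profile. The Michelson contrast is the standard (Imax - Imin)/(Imax + Imin); the Fourier contrast shown here, twice the magnitude of the dominant harmonic divided by the DC component, is one common convention and not necessarily the paper's exact definition.

```python
import numpy as np

def michelson_contrast(profile):
    """Michelson contrast of a 1-D fringe profile: (Imax - Imin) / (Imax + Imin)."""
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

def fourier_contrast(profile):
    """Contrast estimated in the frequency domain: the amplitude of the dominant
    (non-DC) harmonic relative to the mean level, 2*|F(f0)| / |F(0)|.
    This is one common convention; the paper defines its own Fourier contrast."""
    spectrum = np.abs(np.fft.rfft(profile))
    fundamental = spectrum[1:].max()
    return 2.0 * fundamental / spectrum[0]

# Synthetic fringe with 8 cycles across the profile and 40% modulation plus noise.
rng = np.random.default_rng(0)
x = np.arange(1024)
profile = 100.0 * (1.0 + 0.4 * np.cos(2 * np.pi * 8 * x / x.size)) \
          + rng.normal(0, 0.5, x.size)

print(michelson_contrast(profile), fourier_contrast(profile))  # both close to 0.4
```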

  7. Surface Tension and Viscosity Measurements in Microgravity: Some Results and Fluid Flow Observations during MSL-1

    NASA Technical Reports Server (NTRS)

    Hyer, Robert W.; Trapaga, G.; Flemings, M. C.

    1999-01-01

    The viscosity of a liquid metal was successfully measured for the first time by a containerless method, the oscillating drop technique. This method also provides a means to obtain a precise, non-contact measurement of the surface tension of the droplet. This technique involves exciting the surface of the molten sample and then measuring the resulting oscillations; the natural frequency of the oscillating sample is determined by its surface tension, and the damping of the oscillations by the viscosity. These measurements were performed in TEMPUS, a microgravity electromagnetic levitator (EML), on the Space Shuttle as a part of the First Microgravity Science Laboratory (MSL-1), which flew in April and July 1997 (STS-83 and STS-94). Some results of the surface tension and viscosity measurements are presented for Pd82Si18. Some observations of the fluid dynamic characteristics (dominant flow patterns, turbulent transition, cavitation, etc.) of levitated droplets are presented and discussed together with magnetohydrodynamic calculations, which were performed to justify these findings.
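
    The abstract does not quote the working equations, but oscillating-drop analyses of this kind conventionally use the Rayleigh frequency relation for surface tension and Lamb's damping law for viscosity of the l = 2 mode of a force-free spherical drop; the sketch below applies those standard relations with hypothetical numbers.

```python
import numpy as np

def surface_tension(mass_kg, freq_hz):
    """Rayleigh relation for the l=2 oscillation of a free spherical drop:
    sigma = (3/8) * pi * m * f^2  (uncharged, force-free drop assumed)."""
    return 3.0 * np.pi * mass_kg * freq_hz**2 / 8.0

def viscosity(mass_kg, radius_m, damping_time_s):
    """Lamb's law for the damping of the l=2 mode:
    eta = 3 m / (20 * pi * R * tau), with tau the 1/e amplitude decay time."""
    return 3.0 * mass_kg / (20.0 * np.pi * radius_m * damping_time_s)

# Hypothetical numbers of the order encountered for a levitated metallic drop:
m, R = 1.0e-3, 4.0e-3          # 1 g drop, 4 mm radius
f, tau = 35.0, 2.0             # 35 Hz surface oscillation, 2 s decay time
print(surface_tension(m, f))   # ~1.44 N/m
print(viscosity(m, R, tau))    # ~6.0e-3 Pa*s
```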

  8. Health-related quality-of-life assessments in diverse population groups in the United States.

    PubMed

    Stewart, A L; Nápoles-Springer, A

    2000-09-01

    Effectiveness research needs to represent the increasing diversity of the United States. Health-related quality-of-life (HRQOL) measures are often included as secondary treatment outcomes. Because most HRQOL measures were developed in nonminority, well-educated samples, we must determine whether such measures are conceptually and psychometrically equivalent in diverse subgroups. Without equivalence, overall findings and observed group differences may contain measurement bias. The objectives of this work were to discuss the nature of diversity, importance of ensuring the adequacy of HRQOL measures in diverse groups, methods for assessing comparability of HRQOL measures across groups, and methodological and analytical challenges. Integration of qualitative and quantitative methods is needed to achieve measurement adequacy in diverse groups. Little research explores conceptual equivalence across US subgroups; of the few studies of psychometric comparability, findings are inconsistent. Evidence is needed regarding whether current measures are comparable or need modifications to meet universality assumptions, and we need to determine the best methods for evaluating this. We recommend coordinated efforts to develop guidelines for assessing measurement adequacy across diverse subgroups, allocate resources for measurement studies in diverse populations, improve reporting of and access to measurement results by subgroups, and develop strategies for optimizing the universality of HRQOL measures and resolving inadequacies. We advocate culturally sensitive research that involves cultural subgroups throughout the research process. Because examining the cultural equivalence of HRQOL measures within the United States is somewhat new, we have a unique opportunity to shape the direction of this work through development and dissemination of appropriate methods.

  9. Computer assessment of atherosclerosis from angiographic images

    NASA Technical Reports Server (NTRS)

    Selzer, R. H.; Blankenhorn, D. H.; Brooks, S. H.; Crawford, D. W.; Cashin, W. L.

    1982-01-01

    A computer method for detection and quantification of atherosclerosis from angiograms has been developed and used to measure lesion change in human clinical trials. The technique involves tracking the vessel edges and measuring individual lesions as well as the overall irregularity of the arterial image. Application of the technique to conventional arterial-injection femoral and coronary angiograms is outlined, and an experimental study to extend the technique to the analysis of intravenous angiograms of the carotid and coronary arteries is described.

  10. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    NASA Technical Reports Server (NTRS)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
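
    A minimal sketch of Chahine's nonlinear relaxation for a triangular kernel of the kind described above: each unknown is updated multiplicatively by the ratio of observed to computed signal at the measurement most sensitive to it. The kernel and profile values are invented for illustration.

```python
import numpy as np

def chahine_invert(K, y_obs, n_iter=200):
    """Chahine's nonlinear relaxation for y = K @ f with a triangular kernel.
    Each unknown f[j] is paired with the measurement whose kernel weights it most
    (here simply measurement j, since the diagonal elements dominate) and is
    updated multiplicatively by the ratio of observed to computed signal."""
    f = np.full(K.shape[1], y_obs.mean() / K.sum(axis=1).mean())  # positive first guess
    for _ in range(n_iter):
        y_calc = K @ f
        f = f * (y_obs / y_calc)
    return f

# Lower-triangular kernel with nonzero diagonal, as in the limb-viewing geometry.
K = np.tril(np.full((6, 6), 0.3)) + np.diag(np.full(6, 0.7))
f_true = np.array([1.0, 2.0, 1.5, 0.5, 3.0, 2.5])
y_obs = K @ f_true

print(np.round(chahine_invert(K, y_obs), 3))   # approaches f_true
```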

  11. Optical contacting of quartz

    NASA Technical Reports Server (NTRS)

    Payne, L. L.

    1982-01-01

    The strength of the bond between optically contacted quartz surfaces was investigated. The Gravity Probe-B (GP-B) experiment to test the theories of general relativity requires extremely precise measurements. The quartz components of the instruments to make these measurements must be held together in a very stable unit. Optical contacting is suggested as a possible method of joining these components. The fundamental forces involved in optical contacting are reviewed and relates calculations of these forces to the results obtained in experiments.

  12. Development of a neural network technique for KSTAR Thomson scattering diagnostics.

    PubMed

    Lee, Seung Hun; Lee, J H; Yamada, I; Park, Jae Sun

    2016-11-01

    Neural networks provide powerful approaches for dealing with nonlinear data and have been successfully applied to fusion plasma diagnostics and control systems. Controlling tokamak plasmas in real time is essential to measure the plasma parameters in situ. However, the χ2 method traditionally used in Thomson scattering diagnostics hampers real-time measurement due to the complexity of the calculations involved. In this study, we applied a neural network approach to Thomson scattering diagnostics in order to calculate the electron temperature, comparing the results to those obtained with the χ2 method. The best results were obtained for 10^3 training cycles and eight nodes in the hidden layer. Our neural network approach shows good agreement with the χ2 method and performs the calculation twenty times faster.
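
    Not the KSTAR network itself, but a generic illustration of the idea: train a small regressor offline to map polychromator channel signals to electron temperature so that online evaluation is a single fast forward pass. The forward model, noise levels, and network settings below are hypothetical stand-ins.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)

def forward_model(te_kev):
    """Toy stand-in for the spectral response of four polychromator channels as a
    function of electron temperature (NOT the real instrument function)."""
    centers = np.array([0.3, 1.0, 3.0, 8.0])             # hypothetical channel centers, keV
    signals = np.exp(-(np.log(te_kev[:, None] / centers)) ** 2)
    return signals / signals.sum(axis=1, keepdims=True)  # normalized channel ratios

# Training set: temperatures spanning the expected range, signals with measurement noise.
te_train = rng.uniform(0.1, 10.0, 20000)
x_train = forward_model(te_train) + rng.normal(0, 0.01, (te_train.size, 4))

net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
net.fit(x_train, te_train)

# Inference on new, noisy channel measurements is a single forward pass (fast).
te_test = np.array([0.5, 2.0, 6.0])
x_test = forward_model(te_test) + rng.normal(0, 0.01, (3, 4))
print(np.round(net.predict(x_test), 2))   # approximately [0.5, 2.0, 6.0]
```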

  13. A methodological pilot: parenting among women in substance abuse treatment.

    PubMed

    Lewin, Linda; Farkas, Kathleen; Niazi, Maryam

    2014-01-01

    Mothers who abuse substances are likely to have insecure emotional attachment with their children, placing their children at risk for social-emotional and psychiatric conditions. Sobriety does not inevitably improve parenting. We tested recruitment methods, audiovisual (AV) recording procedures, the protocol for identifying child abuse risk, the coding of mother-child interactions, and retention of the sample for repeated measures as the first phase in examining the mother-child relational quality of women in substance abuse treatment. This innovative study involved AV recordings to capture the in-vivo mother-child interactional behaviors that were later coded and analyzed for mean scores on the 64-item Parent-Child Relational Quality Assessment. Repeated measurement was planned during treatment and two months after discharge from treatment. The pilot involved a small sample (n = 11) of mother-child (<6 years) dyads. Highest and lowest ratings of interaction behaviors were identified. Mothers showed less enthusiasm and creativity but matched their child's emotional state. The children showed appropriate motor skills and attachment behaviors. The dyad coding showed less mutual enjoyment between the mother and child. Eight of the participants could not be located for the second measurement despite multiple contact methods. AV recordings capture rich, descriptive information that can be coded for interactional quality analysis. Repeated measurement with this cohort was not feasible, indicating a need for additional or more frequent contacts to maintain the sample.

  14. Infrared contrast data analysis method for quantitative measurement and monitoring in flash infrared thermography

    NASA Astrophysics Data System (ADS)

    Koshti, Ajay M.

    2015-04-01

    The paper provides information on a new infrared (IR) image contrast data post-processing method that involves converting raw data to normalized contrast versus time evolutions from the flash infrared thermography inspection video data. Thermal measurement features such as peak contrast, peak contrast time, persistence time, and persistence energy are calculated from the contrast evolutions. In addition, simulation of the contrast evolution is achieved through calibration on measured contrast evolutions from many flat bottom holes in a test plate of the subject material. The measurement features are used to monitor growth of anomalies and to characterize the void-like anomalies. The method was developed to monitor and analyze void-like anomalies in reinforced carbon-carbon (RCC) materials used on the wing leading edge of the NASA Space Shuttle Orbiters, but the method is equally applicable to other materials. The thermal measurement features relate to the anomaly characteristics such as depth and size. Calibration of the contrast is used to provide an assessment of the anomaly depth and width, which correspond to the depth and diameter of the equivalent flat bottom hole (EFBH) from the calibration data. An edge detection technique called the half-max is used to measure the width and length of the anomaly. Results of the half-max width and the EFBH diameter are compared with actual widths to evaluate the utility of the IR Contrast method. Some thermal measurements relate to the gap thickness of the delaminations. Results of the IR Contrast method on RCC hardware are provided. Keywords: normalized contrast, flash infrared thermography.
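
    A simplified sketch of extracting the thermal measurement features named above from a single pixel's post-flash evolution. The normalization and the half-peak definitions of persistence time and persistence energy are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

def contrast_features(defect_evolution, reference_evolution, frame_dt):
    """Extract simple thermal-measurement features from a pixel's surface-intensity
    evolution relative to a sound (anomaly-free) reference region.

    The normalization below and the half-peak definitions of persistence time and
    persistence energy are illustrative choices, not the paper's exact formulation.
    """
    contrast = (defect_evolution - reference_evolution) / reference_evolution
    peak_idx = int(np.argmax(contrast))
    peak_contrast = contrast[peak_idx]
    peak_time = peak_idx * frame_dt
    above_half = contrast >= 0.5 * peak_contrast
    persistence_time = above_half.sum() * frame_dt
    persistence_energy = contrast[above_half].sum() * frame_dt
    return peak_contrast, peak_time, persistence_time, persistence_energy

# Hypothetical post-flash evolutions (arbitrary units), 30 frames at 60 Hz.
t = np.arange(30) / 60.0
reference = 1.0 + 2.0 * np.exp(-8.0 * t)                     # sound-region cooling curve
defect = reference + 0.3 * np.exp(-((t - 0.2) / 0.1) ** 2)   # delayed local hot spot
print(contrast_features(defect, reference, frame_dt=1 / 60.0))
```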

  15. Monitoring occupational exposure to cancer chemotherapy drugs

    NASA Technical Reports Server (NTRS)

    Baker, E. S.; Connor, T. H.

    1996-01-01

    Reports of the health effects of handling cytotoxic drugs and compliance with guidelines for handling these agents are briefly reviewed, and studies using analytical and biological methods of detecting exposure are evaluated. There is little conclusive evidence of detrimental health effects from occupational exposure to cytotoxic drugs. Work practices have improved since the issuance of guidelines for handling these drugs, but compliance with the recommended practices is still inadequate. Of 64 reports published since 1979 on studies of workers' exposure to these drugs, 53 involved studies of changes in cellular or molecular endpoints (biological markers) and 12 described chemical analyses of drugs or their metabolites in urine (2 involved both, and 2 reported the same study). The primary biological markers used were urine mutagenicity, sister chromatid exchange, and chromosomal aberrations; other studies involved formation of micronuclei and measurements of urinary thioethers. The studies had small sample sizes, and the methods were qualitative, nonspecific, subject to many confounders, and possibly not sensitive enough to detect most occupational exposures. Since none of the currently available biological and analytical methods is sufficiently reliable or reproducible for routine monitoring of exposure in the workplace, further studies using these methods are not recommended; efforts should focus instead on widespread implementation of improved practices for handling cytotoxic drugs.

  16. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
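
    A schematic example of forward Monte Carlo propagation through a Newton-Raphson dew-point solution, assuming a simplified two-pressure relation e_s(Td) = (P_chamber / P_saturator) * e_s(T_saturator) with the Magnus saturation-pressure approximation and neglecting enhancement factors; the input values and uncertainties are illustrative, not those of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def e_sat(t_c):
    """Magnus approximation for saturation vapour pressure over water, in hPa."""
    return 6.112 * np.exp(17.62 * t_c / (243.12 + t_c))

def de_sat(t_c):
    """Derivative of e_sat with respect to temperature (hPa per degC)."""
    return e_sat(t_c) * 17.62 * 243.12 / (243.12 + t_c) ** 2

def dew_point(t_sat, p_sat, p_ch, n_iter=20):
    """Newton-Raphson solution of e_sat(Td) = (p_ch / p_sat) * e_sat(t_sat),
    a simplified two-pressure model that neglects enhancement factors."""
    target = (p_ch / p_sat) * e_sat(t_sat)
    td = np.copy(t_sat)                    # starting guess: saturator temperature
    for _ in range(n_iter):
        td = td - (e_sat(td) - target) / de_sat(td)
    return td

# Monte Carlo propagation of the input standard uncertainties (illustrative values).
n = 100_000
t_s = rng.normal(20.0, 0.01, n)      # saturator temperature, degC
p_s = rng.normal(2000.0, 0.5, n)     # saturator pressure, hPa
p_c = rng.normal(1000.0, 0.5, n)     # chamber pressure, hPa
td = dew_point(t_s, p_s, p_c)
print(round(td.mean(), 3), "+/-", round(td.std(ddof=1), 3), "degC")
```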

  17. Precision determination of absolute neutron flux

    DOE PAGES

    Yue, A. T.; Anderson, E. S.; Dewey, M. S.; ...

    2018-06-08

    A technique for establishing the total neutron rate of a highly-collimated monochromatic cold neutron beam was demonstrated using an alpha–gamma counter. The method involves only the counting of measured rates and is independent of neutron cross sections, decay chain branching ratios, and neutron beam energy. For the measurement, a target of 10B-enriched boron carbide totally absorbed the neutrons in a monochromatic beam, and the rate of absorbed neutrons was determined by counting 478 keV gamma rays from neutron capture on 10B with calibrated high-purity germanium detectors. A second measurement based on Bragg diffraction from a perfect silicon crystal was performed to determine the mean de Broglie wavelength of the beam to a precision of 0.024%. With these measurements, the detection efficiency of a neutron monitor based on neutron absorption on 6Li was determined to an overall uncertainty of 0.058%. We discuss the principle of the alpha–gamma method and present details of how the measurement was performed including the systematic effects. We further describe how this method may be used for applications in neutron dosimetry and metrology, fundamental neutron physics, and neutron cross section measurements.

  18. Precision determination of absolute neutron flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yue, A. T.; Anderson, E. S.; Dewey, M. S.

    A technique for establishing the total neutron rate of a highly-collimated monochromatic cold neutron beam was demonstrated using an alpha–gamma counter. The method involves only the counting of measured rates and is independent of neutron cross sections, decay chain branching ratios, and neutron beam energy. For the measurement, a target of 10B-enriched boron carbide totally absorbed the neutrons in a monochromatic beam, and the rate of absorbed neutrons was determined by counting 478 keV gamma rays from neutron capture on 10B with calibrated high-purity germanium detectors. A second measurement based on Bragg diffraction from a perfect silicon crystal was performed to determine the mean de Broglie wavelength of the beam to a precision of 0.024%. With these measurements, the detection efficiency of a neutron monitor based on neutron absorption on 6Li was determined to an overall uncertainty of 0.058%. We discuss the principle of the alpha–gamma method and present details of how the measurement was performed including the systematic effects. We further describe how this method may be used for applications in neutron dosimetry and metrology, fundamental neutron physics, and neutron cross section measurements.

  19. Determining geometric error model parameters of a terrestrial laser scanner through Two-face, Length-consistency, and Network methods

    PubMed Central

    Wang, Ling; Muralikrishnan, Bala; Rachakonda, Prem; Sawyer, Daniel

    2017-01-01

    Terrestrial laser scanners (TLS) are increasingly used in large-scale manufacturing and assembly where required measurement uncertainties are on the order of few tenths of a millimeter or smaller. In order to meet these stringent requirements, systematic errors within a TLS are compensated in-situ through self-calibration. In the Network method of self-calibration, numerous targets distributed in the work-volume are measured from multiple locations with the TLS to determine parameters of the TLS error model. In this paper, we propose two new self-calibration methods, the Two-face method and the Length-consistency method. The Length-consistency method is proposed as a more efficient way of realizing the Network method where the length between any pair of targets from multiple TLS positions are compared to determine TLS model parameters. The Two-face method is a two-step process. In the first step, many model parameters are determined directly from the difference between front-face and back-face measurements of targets distributed in the work volume. In the second step, all remaining model parameters are determined through the Length-consistency method. We compare the Two-face method, the Length-consistency method, and the Network method in terms of the uncertainties in the model parameters, and demonstrate the validity of our techniques using a calibrated scale bar and front-face back-face target measurements. The clear advantage of these self-calibration methods is that a reference instrument or calibrated artifacts are not required, thus significantly lowering the cost involved in the calibration process. PMID:28890607

  20. Statistically Qualified Neuro-Analytic system and Method for Process Monitoring

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vilim, Richard B.; Garcia, Humberto E.; Chen, Frederick W.

    1998-11-04

    An apparatus and method for monitoring a process involves development and application of a statistically qualified neuro-analytic (SQNA) model to accurately and reliably identify process change. The development of the SQNA model is accomplished in two steps: deterministic model adaptation and stochastic model adaptation. Deterministic model adaptation involves formulating an analytic model of the process representing known process characteristics, augmenting the analytic model with a neural network that captures unknown process characteristics, and training the resulting neuro-analytic model by adjusting the neural network weights according to a unique scaled equation error minimization technique. Stochastic model adaptation involves qualifying any remaining uncertainty in the trained neuro-analytic model by formulating a likelihood function, given an error propagation equation, for computing the probability that the neuro-analytic model generates measured process output. Preferably, the developed SQNA model is validated using known sequential probability ratio tests and applied to the process as an on-line monitoring system.

  1. Colors Of Liquid Crystals Used To Measure Surface Shear Stresses

    NASA Technical Reports Server (NTRS)

    Reda, D. C.; Muratore, J. J., Jr.

    1996-01-01

    Developmental method of mapping shear stresses on aerodynamic surfaces involves observation, at multiple viewing angles, of colors of liquid-crystal surface coats illuminated by white light. Report describing method referenced in "Liquid Crystals Indicate Directions Of Surface Shear Stresses" (ARC-13379). Resulting maps of surface shear stresses contain valuable data on magnitudes and directions of skin friction forces associated with surface flows; data used to refine mathematical models of aerodynamics for research and design purposes.

  2. A method for analyzing dynamic stall of helicopter rotor blades

    NASA Technical Reports Server (NTRS)

    Crimi, P.; Reeves, B. L.

    1972-01-01

    A model for each of the basic flow elements involved in the unsteady stall of a two-dimensional airfoil in incompressible flow is presented. The interaction of these elements is analyzed using a digital computer. Computations of the loading during transient and sinusoidal pitching motions are in good qualitative agreement with measured loads. The method was used to confirm that large torsional response of helicopter blades detected in flight tests can be attributed to dynamic stall.

  3. Rapid condition assessment of structural condition after a blast using state-space identification

    NASA Astrophysics Data System (ADS)

    Eskew, Edward; Jang, Shinae

    2015-04-01

    After a blast event, it is important to quickly quantify the structural damage for emergency operations. To improve the speed, accuracy, and efficiency of condition assessments, the authors have previously developed a methodology for rapid assessment of the structural condition of a building after a blast. The method involved determining a post-event equivalent stiffness matrix using vibration measurements and a finite element (FE) model. A structural model was built for the damaged structure based on the equivalent stiffness, and inter-story drifts from the blast are determined using numerical simulations, with forces determined from the blast parameters. The inter-story drifts are then compared to blast design conditions to assess the structure's damage. This method still involved engineering judgment in terms of determining significant frequencies, which can lead to error, especially with noisy measurements. In an effort to improve accuracy and automate the process, this paper will look into a similar method of rapid condition assessment using subspace state-space identification. The accuracy of the method will be tested using a benchmark structural model, as well as experimental testing. The blast damage assessments will be validated using pressure-impulse (P-I) diagrams, which present the condition limits across blast parameters. Comparisons between P-I diagrams generated using the true system parameters and equivalent parameters will show the accuracy of the rapid condition-based blast assessments.

  4. Determination of phosphate in soil extracts in the field: A green chemistry enzymatic method.

    PubMed

    Campbell, Ellen R; Warsko, Kayla; Davidson, Anna-Marie; Bill Campbell, Wilbur H

    2015-01-01

    Measurement of ortho-phosphate in soil extracts usually involves sending dried samples of soil to a laboratory for analysis and waiting several weeks for the results. Phosphate determination methods often involve use of strong acids, heavy metals, and organic dyes. To overcome limitations of this approach, we have developed a phosphate determination method which can be carried out in the field to obtain results on the spot. This new method uses: • Small volumes. • An enzymatic reaction. • Green chemistry. First, the soil sample is extracted with deionized water and filtered. Next, an aliquot of the soil extract (0.5 mL) is transferred to a disposable cuvette, containing 0.5 mL of reaction mixture [200 mM HEPES, pH 7.6, 20 mM MgCl2, with 80 nmol 2-amino-6-mercapto-7-methylpurine ribonucleoside (MESG) and 1 unit of recombinant purine nucleoside phosphorylase (PNP; EC 2.4.2.1)], mixed, and incubated for 10 min at field temperature. Absorbance of the completed reaction is measured at 360 nm in an open-source, portable photometer linked by Bluetooth to a smartphone. The phosphate and phosphorus content of the soil is determined by comparison of its absorbance at 360 nm to a previously prepared standard phosphate curve, which is stored in the smartphone app.

  5. Determination of phosphate in soil extracts in the field: A green chemistry enzymatic method

    PubMed Central

    Campbell, Ellen R.; Warsko, Kayla; Davidson, Anna-Marie; (Bill) Campbell, Wilbur H.

    2015-01-01

    Measurement of ortho-phosphate in soil extracts usually involves sending dried samples of soil to a laboratory for analysis and waiting several weeks for the results. Phosphate determination methods often involve use of strong acids, heavy metals, and organic dyes. To overcome limitations of this approach, we have developed a phosphate determination method which can be carried out in the field to obtain results on the spot. This new method uses: • Small volumes. • An enzymatic reaction. • Green chemistry. First, the soil sample is extracted with deionized water and filtered. Next, an aliquot of the soil extract (0.5 mL) is transferred to a disposable cuvette, containing 0.5 mL of reaction mixture [200 mM HEPES, pH 7.6, 20 mM MgCl2, with 80 nmol 2-amino-6-mercapto-7-methylpurine ribonucleoside (MESG) and 1 unit of recombinant purine nucleoside phosphorylase (PNP; EC 2.4.2.1)], mixed, and incubated for 10 min at field temperature. Absorbance of the completed reaction is measured at 360 nm in an open-source, portable photometer linked by Bluetooth to a smartphone. The phosphate and phosphorus content of the soil is determined by comparison of its absorbance at 360 nm to a previously prepared standard phosphate curve, which is stored in the smartphone app. PMID:26150991
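
    A minimal sketch of the final quantification step, assuming a linear standard curve of absorbance at 360 nm versus known phosphate amounts; all absorbance values and the conversion to mg P per litre of extract are hypothetical.

```python
import numpy as np

# Hypothetical standard curve: absorbance at 360 nm for known phosphate amounts (nmol).
standards_nmol = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
standards_a360 = np.array([0.02, 0.11, 0.20, 0.39, 0.76])

# Linear least-squares fit: A360 = slope * nmol + intercept.
slope, intercept = np.polyfit(standards_nmol, standards_a360, 1)

def phosphate_nmol(a360):
    """Invert the standard curve to get phosphate in the 0.5 mL extract aliquot."""
    return (a360 - intercept) / slope

sample_a360 = 0.27
nmol = phosphate_nmol(sample_a360)
mg_p_per_l = nmol * 1e-9 * 30.97 * 1e3 / 0.5e-3   # nmol -> g of P, per 0.5 mL, as mg/L
print(round(nmol, 1), "nmol ;", round(mg_p_per_l, 3), "mg P / L extract")
```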

  6. New method for GC/FID and GC-C-IRMS Analysis of plasma free fatty acid concentration and isotopic enrichment

    PubMed Central

    Kangani, Cyrous O.; Kelley, David E.; DeLany, James P.

    2008-01-01

    A simple, direct and accurate method for the determination of concentration and enrichment of free fatty acids in human plasma was developed. The validation and comparison to a conventional method are reported. Three amide derivatives, dimethyl, diethyl and pyrrolidide, were investigated in order to achieve optimal resolution of the individual fatty acids. This method involves the use of dimethylamine/Deoxo-Fluor to derivatize plasma free fatty acids to their dimethylamides. This derivatization method is very mild and efficient, and is selective only towards free fatty acids so that no separation from a total lipid extract is required. The direct method gave lower concentrations for palmitic acid and stearic acid and increased concentrations for oleic acid and linoleic acid in plasma as compared to the methyl ester derivative after thin-layer chromatography. The [13C]palmitate isotope enrichment measured using the direct method was significantly higher than that observed with the BF3/MeOH-TLC method. The present method provided accurate and precise measures of concentration as well as enrichment when analyzed with gas chromatography combustion-isotope ratio-mass spectrometry. PMID:18757250

  7. New method for GC/FID and GC-C-IRMS analysis of plasma free fatty acid concentration and isotopic enrichment.

    PubMed

    Kangani, Cyrous O; Kelley, David E; Delany, James P

    2008-09-15

    A simple, direct and accurate method for the determination of concentration and enrichment of free fatty acids (FFAs) in human plasma was developed. The validation and comparison to a conventional method are reported. Three amide derivatives, dimethyl, diethyl and pyrrolidide, were investigated in order to achieve optimal resolution of the individual fatty acids. This method involves the use of dimethylamine/Deoxo-Fluor to derivatize plasma free fatty acids to their dimethylamides. This derivatization method is very mild and efficient, and is selective only towards FFAs so that no separation from a total lipid extract is required. The direct method gave lower concentrations for palmitic acid and stearic acid and increased concentrations for oleic acid and linoleic acid in plasma as compared to the methyl ester derivative after thin-layer chromatography. The [(13)C]palmitate isotope enrichment measured using the direct method was significantly higher than that observed with the BF(3)/MeOH-TLC method. The present method provided accurate and precise measures of concentration as well as enrichment when analyzed with gas chromatography combustion-isotope ratio-mass spectrometry.

  8. Studying Functions of All Yeast Genes Simultaneously

    NASA Technical Reports Server (NTRS)

    Stolc, Viktor; Eason, Robert G.; Poumand, Nader; Herman, Zelek S.; Davis, Ronald W.; Anthony Kevin; Jejelowo, Olufisayo

    2006-01-01

    A method of studying the functions of all the genes of a given species of microorganism simultaneously has been developed in experiments on Saccharomyces cerevisiae (commonly known as baker's or brewer's yeast). It is already known that many yeast genes perform functions similar to those of corresponding human genes; therefore, by facilitating understanding of yeast genes, the method may ultimately also contribute to the knowledge needed to treat some diseases in humans. Because of the complexity of the method and the highly specialized nature of the underlying knowledge, it is possible to give only a brief and sketchy summary here. The method involves the use of unique synthetic deoxyribonucleic acid (DNA) sequences that are denoted as DNA bar codes because of their utility as molecular labels. The method also involves the disruption of gene functions through deletion of genes. Saccharomyces cerevisiae is a particularly powerful experimental system in that multiple deletion strains easily can be pooled for parallel growth assays. Individual deletion strains recently have been created for 5,918 open reading frames, representing nearly all of the estimated 6,000 genetic loci of Saccharomyces cerevisiae. Tagging of each deletion strain with one or two unique 20-nucleotide sequences enables identification of genes affected by specific growth conditions, without prior knowledge of gene functions. Hybridization of bar-code DNA to oligonucleotide arrays can be used to measure the growth rate of each strain over several cell-division generations. The growth rate thus measured serves as an index of the fitness of the strain.

  9. New spectrophotometric assay for pilocarpine.

    PubMed

    El-Masry, S; Soliman, R

    1980-07-01

    A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.

  10. Statistical identification of stimulus-activated network nodes in multi-neuron voltage-sensitive dye optical recordings.

    PubMed

    Fathiazar, Elham; Anemuller, Jorn; Kretzberg, Jutta

    2016-08-01

    Voltage-Sensitive Dye (VSD) imaging is an optical imaging method that allows measuring the graded voltage changes of multiple neurons simultaneously. In neuroscience, this method is used to reveal networks of neurons involved in certain tasks. However, the recorded relative dye fluorescence changes are usually low, and signals are superimposed by noise and artifacts. Therefore, establishing a reliable method to identify which cells are activated by specific stimulus conditions is the first step toward identifying functional networks. In this paper, we present a statistical method to identify stimulus-activated network nodes as cells whose activities during sensory network stimulation differ significantly from the unstimulated control condition. This method is demonstrated based on voltage-sensitive dye recordings from up to 100 neurons in a ganglion of the medicinal leech responding to tactile skin stimulation. Without relying on any prior physiological knowledge, the network nodes identified by our statistical analysis were found to match well with published cell types involved in tactile stimulus processing and to be consistent across stimulus conditions and preparations.
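
    The paper's exact test statistic is not given in the abstract; the sketch below illustrates one plausible implementation of the idea, a per-cell permutation test of stimulated versus control trial responses with Benjamini-Hochberg control of the false-discovery rate. All trial data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_pvalue(stim, ctrl, n_perm=10000):
    """Two-sided permutation test on the difference of mean response amplitudes."""
    observed = stim.mean() - ctrl.mean()
    pooled = np.concatenate([stim, ctrl])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        diff = pooled[:stim.size].mean() - pooled[stim.size:].mean()
        count += abs(diff) >= abs(observed)
    return (count + 1) / (n_perm + 1)

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of cells that are significant at false-discovery rate q."""
    pvals = np.asarray(pvals)
    order = np.argsort(pvals)
    thresh = q * np.arange(1, pvals.size + 1) / pvals.size
    passed = pvals[order] <= thresh
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(pvals.size, dtype=bool)
    mask[order[:k]] = True
    return mask

# Hypothetical per-trial response amplitudes (e.g., relative fluorescence change)
# for 100 cells: 10 stimulated and 10 control trials each; cells 0-9 truly respond.
n_cells, n_trials = 100, 10
stim_trials = rng.normal(0.0, 1.0, (n_cells, n_trials))
stim_trials[:10] += 1.5
ctrl_trials = rng.normal(0.0, 1.0, (n_cells, n_trials))

pvals = [permutation_pvalue(stim_trials[i], ctrl_trials[i]) for i in range(n_cells)]
print("activated cells:", np.nonzero(benjamini_hochberg(pvals))[0])
```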

  11. Quantification of alginate by aggregation induced by calcium ions and fluorescent polycations.

    PubMed

    Zheng, Hewen; Korendovych, Ivan V; Luk, Yan-Yeung

    2016-01-01

    For quantification of polysaccharides, including heparins and alginates, the commonly used carbazole assay involves hydrolysis of the polysaccharide to form a mixture of UV-active dye conjugate products. Here, we describe two efficient detection and quantification methods that make use of the negative charges of the alginate polymer and do not involve degradation of the targeted polysaccharide. The first method utilizes calcium ions to induce formation of hydrogel-like aggregates with alginate polymer; the aggregates can be quantified readily by staining with a crystal violet dye. This method does not require purification of alginate from the culture medium and can measure the large amount of alginate that is produced by a mucoid Pseudomonas aeruginosa culture. The second method employs polycations tethering a fluorescent dye to form suspension aggregates with the alginate polyanion. Encasing the fluorescent dye in the aggregates provides an increased scattering intensity with a sensitivity comparable to that of the conventional carbazole assay. Both approaches provide efficient methods for monitoring alginate production by mucoid P. aeruginosa. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Thermodynamic free energy methods to investigate shape transitions in bilayer membranes.

    PubMed

    Ramakrishnan, N; Tourdot, Richard W; Radhakrishnan, Ravi

    2016-06-01

    The conformational free energy landscape of a system is a fundamental thermodynamic quantity of importance particularly in the study of soft matter and biological systems, in which the entropic contributions play a dominant role. While computational methods to delineate the free energy landscape are routinely used to analyze the relative stability of conformational states, to determine phase boundaries, and to compute ligand-receptor binding energies, their use in problems involving the cell membrane is limited. Here, we present an overview of four different free energy methods to study morphological transitions in bilayer membranes, induced either by the action of curvature remodeling proteins or by the application of external forces. Using a triangulated surface as a model for the cell membrane and using the framework of dynamical triangulation Monte Carlo, we have focused on the methods of Widom insertion, thermodynamic integration, the Bennett acceptance scheme, and umbrella sampling with weighted histogram analysis. We have demonstrated how these methods can be employed in a variety of problems involving the cell membrane. Specifically, we have shown that the chemical potential, computed using Widom insertion, and the relative free energies, computed using thermodynamic integration and the Bennett acceptance method, are excellent measures to study the transition from curvature-sensing to curvature-inducing behavior of membrane-associated proteins. Umbrella sampling and WHAM analysis have been used to study the thermodynamics of tether formation in cell membranes, and the quantitative predictions of the computational model are in excellent agreement with experimental measurements. Furthermore, we also present a method based on WHAM and thermodynamic integration to handle problems related to the end-point catastrophe that are common in most free energy methods.
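
    As a concrete illustration of one of the four methods, the sketch below performs thermodynamic integration, where the free energy difference is the integral over lambda from 0 to 1 of the average of dU/dlambda, on a toy harmonic potential whose answer is known analytically; in the membrane setting the lambda-averages would instead come from dynamical-triangulation Monte Carlo sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
k0, k1 = 1.0, 4.0          # spring constants of the reference and target states

def mean_dU_dlambda(lam, n_samples=200_000):
    """<dU/dlambda> at coupling lambda for U(x; lambda) = 0.5*(k0 + lam*(k1-k0))*x^2.
    For this toy potential x can be sampled exactly from its Boltzmann distribution;
    in a membrane simulation these averages would come from Monte Carlo sampling."""
    k_lam = k0 + lam * (k1 - k0)
    x = rng.normal(0.0, np.sqrt(kT / k_lam), n_samples)
    return 0.5 * (k1 - k0) * np.mean(x ** 2)

lambdas = np.linspace(0.0, 1.0, 11)
integrand = [mean_dU_dlambda(lam) for lam in lambdas]
delta_F = np.trapz(integrand, lambdas)        # thermodynamic integration

# Compare with the exact analytic result (kT/2) * ln(k1/k0) = 0.6931...
print(round(delta_F, 4), "vs exact", round(0.5 * kT * np.log(k1 / k0), 4))
```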

  13. Measurement of the total spectrum of electrons and positrons in the energy range of 300–1500 GeV in the PAMELA experiment with the aid of a sampling calorimeter and a neutron detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karelin, A. V., E-mail: karelin@hotbox.ru; Voronov, S. A.; Galper, A. M.

    2015-03-15

    A method based on the use of a sampling calorimeter was developed for measuring the total energy spectrum of electrons and positrons from high-energy cosmic rays in the PAMELA satellite-borne experiment. This made it possible to extend the range of energies accessible to measurements by the magnetic system of the PAMELA spectrometer. The method involves a procedure for selecting electrons on the basis of features of a secondary-particle shower in the calorimeter. The results obtained by measuring the total spectrum of cosmic-ray electrons and positrons in the energy range of 300–1500 GeV by the method in question are presented on the basis of data accumulated over a period spanning 2006 to 2013.

  14. Machine learning in soil classification.

    PubMed

    Bhattacharya, B; Solomatine, D P

    2006-03-01

    In a number of engineering problems, for example in geotechnics and petroleum engineering, intervals of measured series data (signals) are to be attributed a class while maintaining the constraint of contiguity, and standard classification methods can be inadequate. Classification in this case needs the involvement of an expert who observes the magnitude and trends of the signals in addition to any a priori information that might be available. In this paper, an approach for automating this classification procedure is presented. Firstly, a segmentation algorithm is developed and applied to segment the measured signals. Secondly, the salient features of these segments are extracted using the boundary energy method. Classifiers are then built on the measured data and extracted features to assign classes to the segments; they employ decision trees, artificial neural networks (ANN), and support vector machines. The methodology was tested in classifying sub-surface soil using measured data from cone penetration testing, and satisfactory results were obtained.
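
    A schematic version of the segment-then-classify pipeline, assuming pre-computed segment boundaries and simple summary statistics in place of the boundary energy features; the synthetic CPT-like signal and classifier settings are illustrative only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def segment_features(signal, boundaries):
    """Summary features (mean, std, linear trend) for each contiguous segment.
    The paper extracts features with a boundary-energy method; simple statistics
    are used here only to illustrate the segment-then-classify pipeline."""
    feats = []
    for start, stop in zip(boundaries[:-1], boundaries[1:]):
        seg = signal[start:stop]
        slope = np.polyfit(np.arange(seg.size), seg, 1)[0]
        feats.append([seg.mean(), seg.std(), slope])
    return np.array(feats)

# Synthetic CPT-like signal: contiguous intervals drawn from three soil classes.
labels_per_segment = rng.integers(0, 3, 60)
class_levels = np.array([1.0, 3.0, 6.0])
boundaries = np.arange(0, 61) * 50                      # 60 segments of 50 samples
signal = np.concatenate([
    class_levels[c] + 0.5 * rng.standard_normal(50) for c in labels_per_segment
])

X = segment_features(signal, boundaries)
y = labels_per_segment
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = SVC(kernel="rbf", gamma="scale").fit(X_tr, y_tr)
print("held-out accuracy:", round(clf.score(X_te, y_te), 2))
```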

  15. High resolution neutron Larmor diffraction using superconducting magnetic Wollaston prisms

    DOE PAGES

    Li, Fankang; Feng, Hao; Thaler, Alexander N.; ...

    2017-04-13

    The neutron Larmor diffraction technique has been implemented using superconducting magnetic Wollaston prisms in both single-arm and double-arm configurations. Successful measurements of the coefficient of thermal expansion of a single-crystal copper sample demonstrate that the method works as expected. Our experiment involves a new method of tuning by varying the magnetic field configurations in the device, and the tuning results agree well with previous measurements. The difference between single-arm and double-arm configurations has been investigated experimentally. Here, we conclude that this measurement benchmarks the applications of magnetic Wollaston prisms in Larmor diffraction and shows in principle that the setup can be used for inelastic phonon line-width measurements. The achievable resolution for Larmor diffraction is comparable to that using Neutron Resonance Spin Echo (NRSE) coils. Furthermore, the use of superconducting materials in the prisms allows high neutron polarization and transmission efficiency to be achieved.

  16. Beam profile measurements for target designators

    NASA Astrophysics Data System (ADS)

    Frank, J. D.

    1985-02-01

    An American aerospace company has conducted a number of investigations aimed at improving on the tedious, slow manual methods of measuring pulsed lasers for rangefinders, giving particular attention to beam divergence, which is studied by varying aperture sizes and positions in the laser beam path. Three instruments have been developed to make the work involved easier to perform. One of these, the Automatic Laser Instrumentation and Measurement System (ALIMS), consists of an optical bench, a digital computer, and three bays of associated electronic instruments. ALIMS uses the aperture method to measure laser beam alignment and divergence. The Laser Intensity Profile System (LIPS) consists of a covered optical bench and a two-bay electronic equipment and control console. The Automatic Laser Test Set (ALTS) utilizes a 50 x 50 silicon photodiode array to characterize military laser systems automatically. Details regarding the measurements conducted are discussed.

  17. Investigating the use of patient involvement and patient experience in quality improvement in Norway: rhetoric or reality?

    PubMed Central

    2013-01-01

    Background Patient involvement in health care decision making is part of a wider trend towards a more bottom-up approach to service planning and provision, and patient experience is increasingly conceptualized as a core dimension of health care quality. The aim of this multi-level study is two-fold: 1) to describe and analyze how governmental organizations expect acute hospitals to incorporate patient involvement and patient experiences into their quality improvement (QI) efforts and 2) to analyze how patient involvement and patient experiences are used by hospitals to try to improve the quality of care they provide. Methods This multi-level case study combines analysis of national policy documents and regulations at the macro level with semi-structured interviews and non-participant observation of key meetings and shadowing of staff at the meso and micro levels in two purposively sampled Norwegian hospitals. Fieldwork at the meso and micro levels was undertaken over a 12-month period (2011–2012). Results Governmental documents and regulations at the macro level demonstrated wide-ranging expectations for the integration of patient involvement and patient experiences in QI work in hospitals. The expectations range from the systematic collection of patients’ and family members’ experiences for the purpose of improving service quality, through the establishment of patient-oriented arenas for ongoing collaboration with staff, to support for individual involvement in decision making. However, the extent of involvement of patients and application of patient experiences in QI work was limited at both hospitals. Even though patient involvement was gaining prominence at the meso level, and to a lesser extent at the micro level, relevant tools for measuring and using patient experiences in QI work were lacking, and available measures of patient experience were not being used meaningfully or systematically. Conclusions The relative lack of expertise in Norwegian hospitals in adapting and implementing tools and methods for improving patient involvement and patient experiences at the meso and micro levels marks a need for health care policymakers and hospital leaders to learn from the experiences of other industries and countries that have successfully integrated user experiences into QI work. Hospital managers need to design and implement wider strategies to help their staff members recognize and value the contribution that patient involvement and patient experiences can make to the improvement of healthcare quality. PMID:23742265

  18. Imaging Lenticular Autofluorescence in Older Subjects

    PubMed Central

    Charng, Jason; Tan, Rose; Luu, Chi D.; Sadigh, Sam; Stambolian, Dwight; Guymer, Robyn H.; Jacobson, Samuel G.; Cideciyan, Artur V.

    2017-01-01

    Purpose To evaluate whether a practical method of imaging lenticular autofluorescence (AF) can provide an individualized measure correlated with age-related lens yellowing in older subjects undergoing tests involving shorter wavelength lights. Methods Lenticular AF was imaged with 488-nm excitation using a confocal scanning laser ophthalmoscope (cSLO) routinely used for retinal AF imaging. There were 75 older subjects (ages 47–87) at two sites; a small cohort of younger subjects served as controls. At one site, the cSLO was equipped with an internal reference to allow quantitative AF measurements; at the other site, reduced-illuminance AF imaging (RAFI) was used. In a subset of subjects, lens density index was independently estimated from dark-adapted spectral sensitivities performed psychophysically. Results Lenticular AF intensity was significantly higher in the older eyes than the younger cohort when measured with the internal reference (59.2 ± 15.4 vs. 134.4 ± 31.7 gray levels; P < 0.05) as well as when recorded with RAFI without the internal reference (10.9 ± 1.5 vs. 26.1 ± 5.7 gray levels; P < 0.05). Lenticular AF was positively correlated with age; however, there could also be large differences between individuals of similar age. Lenticular AF intensity correlated well with lens density indices estimated from psychophysical measures. Conclusions Lenticular AF measured with a retinal cSLO can provide a practical and individualized measure of lens yellowing, and may be a good candidate to distinguish between preretinal and retinal deficits involving short-wavelength lights in older eyes. PMID:28973367

  19. Wavelet-based algorithm to the evaluation of contrasted hepatocellular carcinoma in CT-images after transarterial chemoembolization.

    PubMed

    Alvarez, Matheus; de Pina, Diana Rodrigues; Romeiro, Fernando Gomes; Duarte, Sérgio Barbosa; Miranda, José Ricardo de Arruda

    2014-07-26

    Hepatocellular carcinoma (HCC) is a primary tumor of the liver, and its treatment modalities vary according to the tumor stage. After local therapies, tumor response is evaluated using the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational algorithm that measures the maximum diameter of the tumor from the contrast-enhanced area of the lesions. Sixty-three computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. After that, we compared the maximum diameters of the target lesions found by the algorithm against radiologist measurements. Computed results for the maximum diameter are in good agreement with those obtained by radiologist evaluation, indicating that the algorithm was able to detect the tumor limits properly. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large-sized tumors (diameter > 5 cm), whereas differences of less than 1.0 cm were found for small-sized tumors. Differences between algorithm and radiologist measures were accurate for small-sized tumors, with a trend toward a small decrease for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented by non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
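    The diameter-measurement step lends itself to a compact illustration. The sketch below is not the authors' wavelet-based pipeline; it simply computes the maximum in-plane diameter of an already-segmented, contrast-enhanced region, which is the quantity compared against the radiologist's mRECIST measurement.

```python
import numpy as np
from scipy.spatial import ConvexHull
from scipy.spatial.distance import pdist

def max_diameter_cm(mask, pixel_spacing_cm):
    """Maximum in-plane diameter (cm) of a binary lesion mask on one CT slice.

    mask             : 2-D boolean array, True inside the contrast-enhanced region
    pixel_spacing_cm : (row_spacing, col_spacing) taken from the CT header
    """
    rows, cols = np.nonzero(mask)
    if rows.size < 2:
        return 0.0
    # Physical coordinates of the segmented pixels.
    pts = np.column_stack([rows * pixel_spacing_cm[0], cols * pixel_spacing_cm[1]])
    try:
        # The farthest pair of points lies on the convex hull, which keeps pdist cheap.
        pts = pts[ConvexHull(pts).vertices]
    except Exception:
        pass  # degenerate (e.g. collinear) regions: fall back to all points
    return float(pdist(pts).max())
```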

  20. The Developmental Costs and Benefits of Children’s Involvement in Interparental Conflict

    PubMed Central

    Davies, Patrick T.; Coe, Jesse L.; Martin, Meredith J.; Sturge-Apple, Melissa L.; Cummings, E. Mark

    2015-01-01

    Building on empirical documentation of children’s involvement in interparental conflicts as a weak predictor of psychopathology, we tested the hypothesis that involvement in conflict more consistently serves as a moderator of associations between children’s emotional reactivity to interparental conflict and their psychological problems. In Study 1, 263 early adolescents (M age = 12.62 years), mothers, and fathers completed surveys of family and child functioning at two measurement occasions spaced two years apart. In Study 2, 243 preschool children (M age = 4.60 years) participated in a multi-method (i.e., observations, structured interview, surveys) measurement battery to assess family functioning, children’s reactivity to interparental conflict, and their psychological adjustment. Across both studies, latent difference score (LDS) analyses revealed that involvement moderated associations between emotional reactivity and children’s increases in psychological (i.e., internalizing and externalizing) problems. Children’s emotional reactivity to interparental conflict was a significantly stronger predictor of their psychological maladjustment when they were highly involved in the conflicts. In addition, the developmental benefits and costs of involvement varied as a function of emotional reactivity. Involvement in interparental conflict predicted increases in psychological problems for children experiencing high emotional reactivity and decreases in psychological problems when they exhibited low emotional reactivity. We interpret the results in the context of the new formulation of emotional security theory (e.g. Davies & Martin, 2013) and family systems models of children’s parentification (e.g., Byng-Hall, 2002). PMID:26053147

  1. Environmental Chemicals in Urine and Blood: Improving Methods for Creatinine and Lipid Adjustment

    PubMed Central

    O’Brien, Katie M.; Upson, Kristen; Cook, Nancy R.; Weinberg, Clarice R.

    2015-01-01

    Background Investigators measuring exposure biomarkers in urine typically adjust for creatinine to account for dilution-dependent sample variation in urine concentrations. Similarly, it is standard to adjust for serum lipids when measuring lipophilic chemicals in serum. However, there is controversy regarding the best approach, and existing methods may not effectively correct for measurement error. Objectives We compared adjustment methods, including novel approaches, using simulated case–control data. Methods Using a directed acyclic graph framework, we defined six causal scenarios for epidemiologic studies of environmental chemicals measured in urine or serum. The scenarios include variables known to influence creatinine (e.g., age and hydration) or serum lipid levels (e.g., body mass index and recent fat intake). Over a range of true effect sizes, we analyzed each scenario using seven adjustment approaches and estimated the corresponding bias and confidence interval coverage across 1,000 simulated studies. Results For urinary biomarker measurements, our novel method, which incorporates both covariate-adjusted standardization and the inclusion of creatinine as a covariate in the regression model, had low bias and possessed 95% confidence interval coverage of nearly 95% for most simulated scenarios. For serum biomarker measurements, a similar approach involving standardization plus serum lipid level adjustment generally performed well. Conclusions To control measurement error bias caused by variations in serum lipids or by urinary diluteness, we recommend improved methods for standardizing exposure concentrations across individuals. Citation O’Brien KM, Upson K, Cook NR, Weinberg CR. 2016. Environmental chemicals in urine and blood: improving methods for creatinine and lipid adjustment. Environ Health Perspect 124:220–227; http://dx.doi.org/10.1289/ehp.1509693 PMID:26219104
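    The covariate-adjusted standardization idea can be sketched compactly. The code below is a simplified illustration, not the authors' exact model: the variable names and covariate set are placeholders, and in the full approach creatinine is additionally included as a covariate in the outcome regression.

```python
import numpy as np
import statsmodels.api as sm

def covariate_adjusted_standardization(biomarker, creatinine, covariates):
    """Standardize a urinary biomarker by predicted rather than observed dilution.

    biomarker  : measured urinary concentration (per volume)
    creatinine : measured urinary creatinine
    covariates : (n, k) array of creatinine determinants (e.g. age, BMI) -- placeholders
    """
    X = sm.add_constant(covariates)
    # Model creatinine (log scale) as a function of its known determinants.
    predicted = np.exp(sm.OLS(np.log(creatinine), X).fit().fittedvalues)
    # Remove only the "unexplained" part of the dilution from the exposure measure.
    return biomarker / (creatinine / predicted)
```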

  2. Investigating the Dimensionality of Examinee Motivation across Instruction Conditions in Low-Stakes Testing Contexts

    ERIC Educational Resources Information Center

    Finney, Sara J.; Mathers, Catherine E.; Myers, Aaron J.

    2016-01-01

    Research investigating methods to influence examinee motivation during low-stakes assessment of student learning outcomes has involved manipulating test session instructions. The impact of instructions is often evaluated using a popular self-report measure of test-taking motivation. However, the impact of these manipulations on the psychometric…

  3. Homework and Achievement: Using Smartpen Technology to Find the Connection

    ERIC Educational Resources Information Center

    Rawson, Kevin; Stahovich, Thomas F.; Mayer, Richard E.

    2017-01-01

    There is a long history of research efforts aimed at understanding the relationship between homework activity and academic achievement. While some self-report inventories involving homework activity have been useful for predicting academic performance, self-reported measures may be limited or even problematic. Here, we employ a novel method for…

  4. Forest statistics for Southwest Arkansas counties

    Treesearch

    T. Richard Quick; Mary S. Hedlund

    1979-01-01

    These tables were derived from data obtained during a 1978 inventory of 20 counties comprising the Southwest Unit of Arkansas (fig. 1). The data on forest acreage and timber volume were secured by a systematic sampling method involving a forest-nonforest classification on aerial photographs and on-the-ground measurements of trees at sample locations. The sample...

  5. Forest statistics for plateau Tennessee counties

    Treesearch

    Renewable Resources Evaluation Research Work Unit

    1982-01-01

    These tables were derived from data obtained during a 1980 inventory of 16 counties comprising the Plateau Unit of Tennessee (fig. 1). The data on forest acreage and timber volume were secured by a systematic sampling method involving a forest-nonforest classification on aerial photographs and on-the-ground measurements of trees at sample locations. The sample...

  6. Forest statistics for Northwest Louisiana Parishes

    Treesearch

    James F. Rosson; Daniel F. Bertelson

    1985-01-01

    These tables were derived from data obtained during a 1984 inventory of 13 parishes comprising the Northwest unit of Louisiana (fig. 1). The data on forest acreage and timber volume were secured by a systematic sampling method involving a forest-nonforest classification on aerial photographs and on-the-ground measurements of trees at sample locations. The sample...

  7. Forest statistics for Southwest Louisiana parishes

    Treesearch

    James F. Rosson; Daniel F. Bertelson

    1985-01-01

    These tables were derived from data obtained during a 1984 inventory of 11 parishes comprising the Southwest Unit of Louisiana (fig. 1). The data on forest acreage and timber volume were secured by a systematic sampling method involving a forest-nonforest classification on aerial photographs and on-the-ground measurements of trees at sample locations. The sample...

  8. School-Related Predictors of Smoking, Drinking and Drug Use: Evidence from the Belfast Youth Development Study

    ERIC Educational Resources Information Center

    Perra, Oliver; Fletcher, Adam; Bonell, Chris; Higgins, Kathryn; McCrystal, Patrick

    2012-01-01

    Objective: To examine whether students' school engagement, relationships with teachers, educational aspirations and involvement in fights at school are associated with various measures of subsequent substance use. Methods: Data were drawn from the Belfast Youth Development Study (n = 2968). Multivariate logistic models examined associations…

  9. VALIDATION OF A METHOD FOR ESTIMATING POLLUTION EMISSION RATES FROM AREA SOURCES USING OPEN-PATH FTIR SPECTROSCOPY AND DISPERSION MODELING TECHNIQUES

    EPA Science Inventory

    The paper describes a methodology developed to estimate emission factors for a variety of different area sources in a rapid, accurate, and cost-effective manner. The methodology involves using an open-path Fourier transform infrared (FTIR) spectrometer to measure concentrations o...

  10. Analyzing For Light Elements By X-Ray Scattering

    NASA Technical Reports Server (NTRS)

    Ross, H. Richard

    1993-01-01

    Nondestructive method of determining concentrations of low-atomic-number elements in liquids and solids involves measurements of Compton and Rayleigh scattering of x rays. Applied in quantitative analysis of low-atomic-number constituents of alloys, of contaminants and corrosion products on surfaces of alloys, and of fractions of hydrogen in plastics, oils, and solvents.

  11. A Study Exploring Exceptional Education Pre-Service Teachers' Mathematics Anxiety

    ERIC Educational Resources Information Center

    Gresham, Gina

    2010-01-01

    Fifty-two exceptional education pre-service teachers getting a K-6 endorsement were involved in this study that investigated the changes in levels of mathematics anxiety before and after a mathematics methods course for education majors. The changes were measured with respect to the use of manipulatives and other activities to make mathematics…

  12. Social-Psychological Factors Influencing Recreation Demand: Evidence from Two Recreational Rivers

    ERIC Educational Resources Information Center

    Smith, Jordan W.; Moore, Roger L.

    2013-01-01

    Traditional methods of estimating demand for recreation areas involve making inferences about individuals' preferences. Frequently, the assumption is made that recreationists' cost of traveling to a site is a reliable measure of the value they place on that resource and the recreation opportunities it provides. This assumption may ignore other…

  13. Learning the Cardiac Cycle: Simultaneous Observations of Electrical and Mechanical Events.

    ERIC Educational Resources Information Center

    Kenney, Richard Alec; Frey, Mary Anne Bassett

    1980-01-01

    Described is a method for integrating electrical and mechanical events of the cardiac cycle by measuring systolic time intervals, which involves simultaneous recording of the ECG, a phonocardiogram, and the contour of the carotid pulse. Both resting and stress change data are provided as bases for class discussion. (CS)

  14. A variable pressure method for characterizing nanoparticle surface charge using pore sensors.

    PubMed

    Vogel, Robert; Anderson, Will; Eldridge, James; Glossop, Ben; Willmott, Geoff

    2012-04-03

    A novel method using resistive pulse sensors for electrokinetic surface charge measurements of nanoparticles is presented. This method involves recording the particle blockade rate while the pressure applied across a pore sensor is varied. This applied pressure acts in a direction which opposes transport due to the combination of electro-osmosis, electrophoresis, and inherent pressure. The blockade rate reaches a minimum when the velocity of nanoparticles in the vicinity of the pore approaches zero, and the forces on typical nanoparticles are in equilibrium. The pressure applied at this minimum rate can be used to calculate the zeta potential of the nanoparticles. The efficacy of this variable pressure method was demonstrated for a range of carboxylated 200 nm polystyrene nanoparticles with different surface charge densities. Results were of the same order as phase analysis light scattering (PALS) measurements. Unlike PALS results, the sequence of increasing zeta potential for different particle types agreed with conductometric titration.
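    The data-reduction step can be illustrated briefly: the sketch below locates the applied pressure at which the blockade rate is minimal (where the net particle velocity approaches zero) using a simple local quadratic fit to hypothetical sweep data. Converting that balancing pressure into a zeta potential requires pore-geometry and electrolyte constants that are specific to the instrument and are not reproduced here.

```python
import numpy as np

def balancing_pressure(pressures_pa, blockade_rates_hz):
    """Pressure at which the blockade rate is minimal, from a local quadratic fit."""
    a, b, c = np.polyfit(pressures_pa, blockade_rates_hz, deg=2)
    if a <= 0:
        raise ValueError("rate curve has no interior minimum; widen the pressure sweep")
    return -b / (2.0 * a)

# Hypothetical pressure sweep (units: Pa and particles per second).
pressure = np.array([-40.0, -20.0, 0.0, 20.0, 40.0, 60.0])
rate = np.array([5.1, 2.9, 1.2, 0.4, 1.5, 3.8])
print(f"blockade-rate minimum near {balancing_pressure(pressure, rate):.1f} Pa")
```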

  15. A Re-evaluation of the Ferrozine Method for Dissolved Iron: The Effect of Organic Interferences

    NASA Astrophysics Data System (ADS)

    Balind, K.; Barber, A.; Gelinas, Y.

    2016-12-01

    Among the most commonly used analytical methods in geochemistry is the ferrozine method for determining dissolved iron concentrations in water (1). This cheap and easy-to-use spectrophotometric method involves a complexing agent (ferrozine), a reducing agent (hydroxylamine-HCl) and a buffer (ammonium acetate with ammonium hydroxide). Previous studies have demonstrated that complex organic matter (OM) originating from the Suwannee River did not lead to a significant underestimation of the measured iron content in OM-amended iron solutions (2). The authors concluded that this method could be used even in organic-rich (i.e., 25 mg/L) waters. Here we compare the concentration of Fe measured using this spectrophotometric method to the total Fe measured by ICP-MS in the presence/absence of specific organic molecules to ascertain whether they interfere with the ferrozine method. We show that certain molecules with hydroxyl and carboxyl functional groups, as well as multi-dentate chelating species, have a significant effect on the measured iron concentrations. Two mechanisms are likely responsible for the inefficiency of this method in the presence of specific organic molecules: 1) incomplete reduction of Fe(III) bound to organic molecules, or 2) competition between the OM and ferrozine for the available iron. We address these possibilities separately by varying the experimental conditions. These methodological artifacts may have far-reaching implications due to the extensive use of this method. Stookey, L. L., Anal. Chem., 42, 779 (1970). Viollier, E., et al., Applied Geochem., 15, 785 (2000).

  16. Focused Impedance Method (FIM) and Pigeon Hole Imaging (PHI) for localized measurements - a review

    NASA Astrophysics Data System (ADS)

    Siddique-e Rabbani, K.

    2010-04-01

    This paper summarises developments to date in the Focused Impedance Method (FIM) initiated by us. It basically involves taking the sum of two orthogonal tetra-polar impedance measurements around a common central region, giving localized enhanced sensitivity. Although the basic idea requires 8 electrodes, versions with 6 and 4 electrodes were subsequently conceived and developed. The focusing effect has been verified in 2D and 3D phantoms and through numerical analysis. Dynamic stomach emptying and ventilation of localized lung regions have been studied successfully, suggesting further applications in monitoring of gastric acid secretion, artificial respiration, bladder emptying, etc. Multi-frequency FIM may help identify some diseases and disorders, including certain cancers. FIM, being much simpler and requiring fewer electrodes, appears to have the potential to replace EIT for applications involving large and shallow organs. An enhancement of 6-electrode FIM led to Pigeon Hole Imaging (PHI) in a square matrix through back-projection in two orthogonal directions, well suited to localizing one or two well-separated objects.

  17. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments conducted to evaluate effects of sizes of seed particles on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods used to evaluate errors. First experiment focused on measurement of decelerating stagnation streamline of low-speed flow around circular cylinder with two-dimensional afterbody. Second performed in transonic flow and involved measurement of decelerating stagnation streamline of hemisphere with cylindrical afterbody. Concluded, mean-quantity LV measurements subject to large errors directly attributable to sizes of particles. Predictions of particle-response theory showed good agreement with experimental results, indicating velocity-error-correction technique used in study viable for increasing accuracy of laser velocimetry measurements. Technique simple and useful in any research facility in which flow velocities measured.

  18. Validation of a Mobile Device for Acoustic Coordinated Reset Neuromodulation Tinnitus Therapy.

    PubMed

    Hauptmann, Christian; Wegener, Alexander; Poppe, Hendrik; Williams, Mark; Popelka, Gerald; Tass, Peter A

    2016-10-01

    Sound-based tinnitus intervention stimuli include broad-band noise signals with subjectively adjusted bandwidths used as maskers delivered by commercial devices or hearing aids, environmental sounds broadly described and delivered by both consumer devices and hearing aids, music recordings specifically modified and delivered in a variety of different ways, and other stimuli. Acoustic coordinated reset neuromodulation therapy for tinnitus reduction has unique and more stringent requirements compared to all other sound-based tinnitus interventions. These include precise characterization of tinnitus pitch and loudness, and effective provision of patient-controlled daily therapy signals at defined frequencies, levels, and durations outside of the clinic. The purpose of this study was to evaluate an approach to accommodate these requirements including evaluation of a mobile device, validation of an automated tinnitus pitch-matching algorithm and assessment of a patient's ability to control stimuli and collect repeated outcome measures. The experimental design involved direct laboratory measurements of the sound delivery capabilities of a mobile device, comparison of an automated, adaptive pitch-matching method to a traditional manual method and measures of a patient's ability to understand and manipulate a mobile device graphic user interface to both deliver the therapy signals and collect the outcome measures. This study consisted of 5 samples of a common mobile device for the laboratory measures and a total of 30 adult participants: 15 randomly selected normal-hearing participants with simulated tinnitus for validation of a tinnitus pitch-matching algorithm and 15 sequentially selected patients already undergoing tinnitus therapy for evaluation of patient usability. No tinnitus intervention(s) were specifically studied as a component of this study. Data collection involved laboratory measures of mobile devices, comparison of manual and automated adaptive tinnitus pitch-matching psychoacoustic procedures in the same participant analyzed for absolute differences (t test), variance differences (f test), and range comparisons, and assessment of patient usability including questionnaire measures and logs of patient observations. Mobile devices are able to reliably and accurately deliver the acoustic therapy signals. There was no difference in mean pitch matches (t test, p > 0.05) between an automated adaptive method compared to a traditional manual pitch-matching method. However, the variability of the automated pitch-matching method was much less (f test, p < 0.05) with twice as many matches within the predefined error range (±5%) compared to the manual pitch-matching method (80% versus 40%). After a short initial training, all participants were able to use the mobile device effectively and to perform the required tasks without further professional assistance. American Academy of Audiology

  19. A comparison of self-reported leisure-time physical activity and measured moderate-to-vigorous physical activity in adolescents and adults.

    PubMed

    Garriguet, Didier; Colley, Rachel C

    2014-07-01

    Systematic reviews and results of Statistics Canada surveys have shown a discrepancy between self-reported and measured physical activity. This study compares these two methods and examines specific activities to explain the limitations of each method. Data are from cycle 1 (2007 to 2009) and cycle 2 (2009 to 2011) of the Canadian Health Measures Survey. The survey involved an interview in the respondent's home and a visit to a mobile examination centre (MEC) for physical measurements. In a questionnaire, respondents were asked about 21 leisure-time physical activities. They were requested to wear an Actical accelerometer for seven days after the MEC visit. The analysis pertains to respondents aged 12 to 79 who wore the accelerometer for 10 or more hours on at least four days (n = 7,158). Averages of self-reported leisure-time physical activity and moderate-to-vigorous physical activity measured by accelerometer were within a couple of minutes of each other. However, at the individual level, the difference between estimates could exceed 37.5 minutes per day in one direction or the other, and around 40% of the population met physical activity thresholds according to one measurement method, but not according to the other. The disagreement is supported by weak observed correlations. The lack of a systematic trend in the relationship between the two methods of measuring physical activity precludes the creation of correction factors or being confident in using one method instead of the other. Accelerometers and questionnaires measure different aspects of physical activity.

  20. Accidental hypothermia in Poland – estimation of prevalence, diagnostic methods and treatment.

    PubMed

    Kosiński, Sylweriusz; Darocha, Tomasz; Gałązkowski, Robert; Drwiła, Rafał

    2015-02-06

    The incidence of hypothermia is difficult to evaluate, and the data concerning the morbidity and mortality rates do not seem to fully represent the problem. The aim of the study was to estimate the actual prevalence of accidental hypothermia in Poland, as well as the methods of diagnosis and management procedures used in emergency rooms (ERs). A specially designed questionnaire, consisting of 14 questions, was mailed to all 223 emergency rooms in Poland. The questions concerned the incidence, methods of diagnosis and risk factors, as well as the rewarming methods used and available measurement instruments. The analysis involved data from 42 ERs providing emergency healthcare for a population of 5,305,000. The prevalence of accidental hypothermia may have been 5.05 cases per 100,000 residents per year. Among the 268 cases listed, 25% were diagnosed with codes T68, T69 or X31, and in 75% hypothermia was neither included nor assigned a code in the final diagnosis. The most frequent cause of hypothermia was exposure to cold air alongside ethanol abuse (68%). Peripheral temperature was measured in 57% of the patients, and core temperature in 29%. Peripheral temperature was measured most often at the axilla, while core temperature was predominantly taken rectally. Mild hypothermia was diagnosed in 75.5% of the patients, moderate (32-28°C) in 16.5%, and severe hypothermia (less than 28°C) in 8% of the cases. Cardiopulmonary resuscitation was carried out in 7.5% of the patients. The treatment involved mainly warmed intravenous fluids (83.5%) and active external rewarming measures (70%). In no case was extracorporeal rewarming used. The actual incidence of accidental hypothermia in Polish emergency departments may be up to four times higher than the official data suggest. Core temperature is taken in only one third of the patients, the treatment of hypothermic patients is rarely conducted in intensive care wards, and extracorporeal rewarming techniques are not used. It may be expected that personnel education and the development of management procedures will improve the prognosis and increase the survival rate in accidental hypothermia.

  1. Development of car theft crime index in peninsular Malaysia

    NASA Astrophysics Data System (ADS)

    Zulkifli, Malina; Ismail, Noriszura; Razali, Ahmad Mahir; Kasim, Maznah Mat

    2014-06-01

    Vehicle theft is classified as a property crime and is the most frequently reported crime in Malaysia. The rising number of vehicle thefts requires proper control by the relevant authorities, especially through the planning and implementation of strategic and effective measures. Nevertheless, the effort to control this crime would be much easier if there were an indication or index specific to vehicle theft. This study aims to build a crime index specific to vehicle theft. The development of the vehicle theft index proposed in this study requires three main steps; the first involves identification of criteria related to vehicle theft, the second requires calculation of the degrees of importance, or criterion weights, which involves application of correlation and entropy methods, and the third involves building the vehicle theft index using the method of linear combination, or weighted arithmetic average. The results show that the two methods used for determining the weights of the vehicle theft index give similar results. The information generated from the results can be used as a primary source for local authorities to plan strategies for the reduction of vehicle theft and for insurance companies to determine premium rates for automobile insurance.
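    The weighting and aggregation steps can be sketched in a few lines. The example below implements only the entropy-based weights and the weighted arithmetic average; the study also derives weights from correlations, and the criteria and numbers here are purely hypothetical.

```python
import numpy as np

def entropy_weights(criteria):
    """Objective criterion weights from the entropy method.

    criteria : (n_areas, n_criteria) array of positive criterion values.
    """
    p = criteria / criteria.sum(axis=0, keepdims=True)        # column-wise proportions
    n = criteria.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -np.nansum(p * np.log(p), axis=0) / np.log(n)     # normalized entropy per criterion
    d = 1.0 - e                                               # degree of diversification
    return d / d.sum()

def theft_index(criteria, weights):
    """Composite index: weighted arithmetic average of min-max scaled criteria."""
    scaled = (criteria - criteria.min(axis=0)) / np.ptp(criteria, axis=0)
    return scaled @ weights

# Hypothetical data: 4 districts, 3 theft-related criteria.
X = np.array([[120.0, 30.0, 5.2],
              [ 95.0, 22.0, 4.1],
              [210.0, 55.0, 8.9],
              [ 60.0, 10.0, 2.3]])
w = entropy_weights(X)
print("weights:", np.round(w, 3))
print("index:  ", np.round(theft_index(X, w), 3))
```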

  2. Approaches to chronic disease management evaluation in use in Europe: a review of current methods and performance measures.

    PubMed

    Conklin, Annalijn; Nolte, Ellen; Vrijhoef, Hubertus

    2013-01-01

    An overview was produced of the approaches currently used to evaluate chronic disease management in selected European countries. The study aims to describe the methods and metrics used in Europe as a first step toward advancing the methodological basis for their assessment. A common template for the collection of evaluation methods and performance measures was sent to key informants in twelve European countries; responses were summarized in tables based on the template evaluation categories. Extracted data were descriptively analyzed. Approaches to the evaluation of chronic disease management vary widely in objectives, designs, metrics, observation periods, and data collection methods. Half of the reported studies used noncontrolled designs. The majority measure clinical process measures, patient behavior and satisfaction, and cost and utilization; several also used a range of structural indicators. Effects are usually observed over 1 or 3 years in patient populations with a single, commonly prevalent, chronic disease. There is wide variation within and between European countries in approaches to evaluating chronic disease management, in their objectives, designs, indicators, target audiences, and actors involved. This study is the first extensive, international overview of the area reported in the literature.

  3. Project-based teaching and other methods to make learning more attractive

    NASA Astrophysics Data System (ADS)

    Švecová, Libuše; Vlková, Iva

    2017-01-01

    This contribution presents the results of research carried out at secondary schools in the Moravian-Silesian Region. The research involved a total of 120 pupils and focused on project teaching, with an emphasis on pupil inquiry activity and the connection of their knowledge in the fields of physics and biology. To verify pupil inquiry activity, the tasks on the worksheets were designed specifically around measuring physical quantities on the human body using computer-aided measuring processes. To support pupil inquiry activity, group work was selected as the organizational form of teaching. Audio recordings and pedagogical observations were used as the research tools for assessment and a subsequent evaluation of the acquired data.

  4. The 'sniffer-patch' technique for detection of neurotransmitter release.

    PubMed

    Allen, T G

    1997-05-01

    A wide variety of techniques have been employed for the detection and measurement of neurotransmitter release from biological preparations. Whilst many of these methods offer impressive levels of sensitivity, few are able to combine sensitivity with the necessary temporal and spatial resolution required to study quantal release from single cells. One detection method that is seeing a revival of interest and has the potential to fill this niche is the so-called 'sniffer-patch' technique. In this article, specific examples of the practical aspects of using this technique are discussed along with the procedures involved in calibrating these biosensors to extend their applications to provide quantitative, in addition to simple qualitative, measurements of quantal transmitter release.

  5. From the Transits of Venus to the Birth of Experimental Psychology

    NASA Astrophysics Data System (ADS)

    Sheehan, William

    2013-06-01

    I trace the attempts to determine the Earth-Sun distance, which is based on measurements of the solar parallax, from the naked-eye observations of Aristarchus of Samos in antiquity to observations of the transits of Venus in the 18th century, noting the nature of the observational errors involved in them. I then turn to measurements of stellar positions with meridian or transit telescopes in the 17th to 19th centuries using the eye and ear method of observation. I show how an analysis of the observational discrepancies in this method led to the discovery of an observer's "personal equation," and ultimately to the birth of experimental psychology.

  6. Rheological properties, shape oscillations, and coalescence of liquid drops with surfactants

    NASA Technical Reports Server (NTRS)

    Apfel, R. E.; Holt, R. G.

    1990-01-01

    A method was developed to deduce the dynamic interfacial properties of liquid drops. The method involves measuring the frequency and damping of free quadrupole oscillations of an acoustically levitated drop. Experimental results from pure liquid-liquid systems agree well with theoretical predictions. Additionally, the effects of surfactants are considered. Extension of these results to a proposed microgravity experiment on the drop physics module (DPM) in USML-1 is discussed. Efforts are also underway to model the time history of the thickness of the fluid layer between two pre-coalescence drops, and to measure the film thickness experimentally. Preliminary results will be reported, along with plans for coalescence experiments proposed for USML-1.

  7. Alteration of diffusion-tensor MRI measures in brain regions involved in early stages of Parkinson's disease.

    PubMed

    Chen, Nan-Kuei; Chou, Ying-Hui; Sundman, Mark; Hickey, Patrick; Kasoff, Willard S; Bernstein, Adam; Trouard, Theodore P; Lin, Tanya; Rapcsak, Steven Z; Sherman, Scott J; Weingarten, Carol

    2018-06-07

    Many non-motor symptoms (e.g., hyposmia) appear years before the cardinal motor features of Parkinson's disease (PD). It is thus desirable to be able to use noninvasive brain imaging methods, such as magnetic resonance imaging (MRI), to detect brain abnormalities in early PD stages. Among the MRI modalities, diffusion tensor imaging (DTI) is suitable for detecting changes of brain tissue structure due to neurological diseases. The main purpose of this study was to investigate whether DTI signals measured from brain regions involved in early stages of PD differ from those of healthy controls. To answer this question, we analyzed whole-brain DTI data of 30 early-stage PD patients and 30 controls using improved ROI based analysis methods. Results showed that 1) the fractional anisotropy (FA) values in the olfactory tract (connected with the olfactory bulb: one of the first structures affected by PD) are lower in PD patients than healthy controls; 2) FA values are higher in PD patients than healthy controls in the following brain regions: corticospinal tract, cingulum (near hippocampus), and superior longitudinal fasciculus (temporal part). Experimental results suggest that the tissue property, measured by FA, in olfactory regions is structurally modulated by PD with a mechanism that is different from other brain regions.

  8. Comparing the Medicaid Retrospective Drug Utilization Review Program Cost-Savings Methods Used by State Agencies

    PubMed Central

    Prada, Sergio I.

    2017-01-01

    Background The Medicaid Drug Utilization Review (DUR) program is a 2-phase process conducted by Medicaid state agencies. The first phase is a prospective DUR and involves electronically monitoring prescription drug claims to identify prescription-related problems, such as therapeutic duplication, contraindications, incorrect dosage, or duration of treatment. The second phase is a retrospective DUR and involves ongoing and periodic examinations of claims data to identify patterns of fraud, abuse, underutilization, drug–drug interaction, or medically unnecessary care, implementing corrective actions when needed. The Centers for Medicare & Medicaid Services requires each state to measure prescription drug cost-savings generated from its DUR programs on an annual basis, but it provides no guidance or unified methodology for doing so. Objectives To describe and synthesize the methodologies used by states to measure cost-savings using their Medicaid retrospective DUR program in federal fiscal years 2014 and 2015. Method For each state, the cost-savings methodologies included in the Medicaid DUR 2014 and 2015 reports were downloaded from Medicaid's website. The reports were then reviewed and synthesized. Methods described by the states were classified according to research designs often described in evaluation textbooks. Discussion In 2014, the most often used prescription drugs cost-savings estimation methodology for the Medicaid retrospective DUR program was a simple pre-post intervention method, without a comparison group (ie, 12 states). In 2015, the most common methodology used was a pre-post intervention method, with a comparison group (ie, 14 states). Comparisons of savings attributed to the program among states are still unreliable, because of a lack of a common methodology available for measuring cost-savings. Conclusion There is great variation among states in the methods used to measure prescription drug utilization cost-savings. This analysis suggests that there is still room for improvement in terms of methodology transparency, which is important, because lack of transparency hinders states from learning from each other. Ultimately, the federal government needs to evaluate and improve its DUR program. PMID:29403573

  9. Fractional exhaled nitric oxide-measuring devices: technology update

    PubMed Central

    Maniscalco, Mauro; Vitale, Carolina; Vatrella, Alessandro; Molino, Antonio; Bianco, Andrea; Mazzarella, Gennaro

    2016-01-01

    The measurement of exhaled nitric oxide (NO) has been employed in the diagnosis of specific types of airway inflammation, guiding treatment monitoring by predicting and assessing response to anti-inflammatory therapy and monitoring for compliance and detecting relapse. Various techniques are currently used to analyze exhaled NO concentrations under a range of conditions for both health and disease. These include chemiluminescence and electrochemical sensor devices. The cost effectiveness and ability to achieve adequate flexibility in sensitivity and selectivity of NO measurement for these methods are evaluated alongside the potential for use of laser-based technology. This review explores the technologies involved in the measurement of exhaled NO. PMID:27382340

  10. Oxygen consumption measurements during continual centrifugation of mice.

    NASA Technical Reports Server (NTRS)

    Fethke, W.; Cook, K. M.; Porter, S. M.; Wunder, C. C.

    1973-01-01

    A simple method is described for measurement of metabolism of conscious, unrestrained animals, during chronic centrifugation or other conditions of isolation (23.75 hr/day) from the investigators in an essentially normal atmospheric environment for as long as seven days. This involves telemetry of pressure changes in a metabolic chamber. At 7 G's, increased O2 intake lasting two to seven days and a decreased excursion of the day-night difference were measured for male white mice with less effect or even an opposite effect at lower fields. Base-line measurements of metabolic rate per mouse are less affected by animal size than expected from the surface area law.

  11. Detecting Phycocyanin-Pigmented Microbes in Reflected Light

    NASA Technical Reports Server (NTRS)

    Vincent, Robert K.

    2008-01-01

    A recently invented method of measuring concentrations of phycocyanin-pigmented algae and bacteria in water is based on measurement of the spectrum of reflected sunlight. When present in sufficiently high concentrations, phycocyanin-pigmented microorganisms can be hazardous to the health of humans who use, and of animals that depend on, an affected body of water. The present method is intended to satisfy a need for a rapid, convenient means of detecting hazardous concentrations of phycocyanin-pigmented microorganisms. Rapid detection will speed up the issuance of public health warnings and the performance of corrective actions. The method involves the measurement of light reflected from a body of water in at least two, but preferably five, wavelength bands. In one version of the method, the five wavelength bands are bands 1, 3, 4, 5, and 7 of the Thematic Mapper (TM) multispectral imaging instrument aboard the Landsat-7 satellite (see table). In principle, other wavelength bands indicative of phycocyanin could be used alternatively or in addition to these five. Moreover, although the method was originally intended specifically for processing Landsat-7 TM data, it is equally applicable to processing of data from other satellite-borne instruments or from airborne, hand-held, buoy-mounted, tower-mounted, or otherwise mounted instruments that measure radiances of light reflected from water in the wavelength bands of interest.

  12. Optical factors determined by the T-matrix method in turbidity measurement of absolute coagulation rate constants.

    PubMed

    Xu, Shenghua; Liu, Jie; Sun, Zhiwei

    2006-12-01

    Turbidity measurement for the absolute coagulation rate constants of suspensions has been extensively adopted because of its simplicity and easy implementation. A key factor in deriving the rate constant from experimental data is how to theoretically evaluate the so-called optical factor involved in calculating the extinction cross section of doublets formed during aggregation. In a previous paper, we have shown that compared with other theoretical approaches, the T-matrix method provides a robust solution to this problem and is effective in extending the applicability range of the turbidity methodology, as well as increasing measurement accuracy. This paper will provide a more comprehensive discussion of the physical insight for using the T-matrix method in turbidity measurement and associated technical details. In particular, the importance of ensuring the correct value for the refractive indices for colloidal particles and the surrounding medium used in the calculation is addressed, because the indices generally vary with the wavelength of the incident light. The comparison of calculated results with experiments shows that the T-matrix method can correctly calculate optical factors even for large particles, whereas other existing theories cannot. In addition, the data of the optical factor calculated by the T-matrix method for a range of particle radii and incident light wavelengths are listed.

  13. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer- aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
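    As a small, self-contained illustration of the two-camera geometry discussed above (not the CAD-overlay pipeline itself), the sketch below triangulates a single 3-D point from its image coordinates in two calibrated views using the standard linear (DLT) solution under the perspective camera model.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (perspective camera model)
    x1, x2 : (u, v) pixel coordinates of the same object centroid in each image
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]   # de-homogenize; additional cameras simply add row pairs to A
```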

  14. Instrumentation for air quality measurements.

    NASA Technical Reports Server (NTRS)

    Loewenstein, M.

    1973-01-01

    Comparison of the new generation of air quality monitoring instruments with some more traditional methods. The first generation of air quality measurement instruments, based on the use of oxidant coulometric cells, nitrogen oxide colorimetry, carbon monoxide infrared analyzers, and other types of detectors, is compared with new techniques now coming into wide use in the air monitoring field and involving the use of chemiluminescent reactions, optical absorption detectors, a refinement of the carbon monoxide infrared analyzer, electrochemical cells based on solid electrolytes, and laser detectors.

  15. REPLICATIONS AND EXTENSIONS IN AROUSAL ASSESSMENT FOR SEX OFFENDERS WITH DEVELOPMENTAL DISABILITIES

    PubMed Central

    Reyes, Jorge R; Vollmer, Timothy R; Hall, Astrid

    2011-01-01

    Three adult male sex offenders with developmental disabilities participated in phallometric assessments that involved repeated measures of arousal when exposed to various stimuli. Arousal assessment outcomes were similar to those obtained by Reyes et al. (2006). Additional data-analysis methods provided further information about sexual preferences, thus replicating and extending previous research. The results provide preliminary data for establishing a preference gradient by age. Implications for the use of repeated measures and preference gradients in arousal assessments are discussed. PMID:21709795

  16. Beat frequency ultrasonic microsphere contrast agent detection system

    NASA Technical Reports Server (NTRS)

    Pretlow, Robert A., III (Inventor); Yost, William T. (Inventor); Cantrell, John H., Jr. (Inventor)

    1995-01-01

    A system for and method of detecting and measuring concentrations of an ultrasonically-reflective microsphere contrast agent involving detecting non-linear sum and difference beat frequencies produced by the microspheres when two impinging signals with non-identical frequencies are combined by mixing. These beat frequencies can be used for a variety of applications such as detecting the presence of and measuring the flow rates of biological fluids and industrial liquids, including determining the concentration level of microspheres in the myocardium.

  17. Beat frequency ultrasonic microsphere contrast agent detection system

    NASA Technical Reports Server (NTRS)

    Pretlow, III, Robert A. (Inventor); Yost, William T. (Inventor); Cantrell, Jr., John H. (Inventor)

    1997-01-01

    A system for and method of detecting and measuring concentrations of an ultrasonically-reflective microsphere contrast agent involving detecting non-linear sum and difference beat frequencies produced by the microspheres when two impinging signals with non-identical frequencies are combined by mixing. These beat frequencies can be used for a variety of applications such as detecting the presence of and measuring the flow rates of biological fluids and industrial liquids, including determining the concentration level of microspheres in the myocardium.

  18. Usability of calcium carbide gas pressure method in hydrological sciences

    NASA Astrophysics Data System (ADS)

    Arsoy, S.; Ozgur, M.; Keskin, E.; Yilmaz, C.

    2013-10-01

    Soil moisture is a key engineering variable with a major influence on ecological and hydrological processes as well as on climate, weather, agricultural, civil and geotechnical applications. Methods for quantification of soil moisture are classified into three main groups: (i) measurement with remote sensing, (ii) estimation via (soil water balance) simulation models, and (iii) measurement in the field (ground based). Remote sensing and simulation modeling require rapid ground truthing with one of the ground-based methods. The calcium carbide gas pressure (CCGP) method is a rapid measurement procedure for obtaining soil moisture and relies on the chemical reaction of the calcium carbide reagent with the water in soil pores. However, the method is overlooked in hydrological science applications. Therefore, the purpose of this study is to evaluate the usability of the CCGP method in comparison with standard oven-drying and dielectric methods in terms of accuracy, time efficiency, operational ease, cost effectiveness and safety for quantification of soil moisture over a wide range of soil types. The research involved over 250 tests that were carried out on 15 different soil types. It was found that the accuracy of the method is mostly within a ±1% soil-moisture deviation range in comparison to oven-drying, and that the CCGP method has significant advantages over dielectric methods in terms of accuracy, cost, operational ease and time efficiency for the purpose of ground truthing.

  19. Evaluation of an analytic linear Boltzmann transport equation solver for high-density inhomogeneities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lloyd, S. A. M.; Ansbacher, W.; Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6

    2013-01-15

    Purpose: Acuros external beam (Acuros XB) is a novel dose calculation algorithm implemented through the ECLIPSE treatment planning system. The algorithm finds a deterministic solution to the linear Boltzmann transport equation, the same equation commonly solved stochastically by Monte Carlo methods. This work is an evaluation of Acuros XB, by comparison with Monte Carlo, for dose calculation applications involving high-density materials. Existing non-Monte Carlo clinical dose calculation algorithms, such as the analytic anisotropic algorithm (AAA), do not accurately model dose perturbations due to increased electron scatter within high-density volumes. Methods: Acuros XB, AAA, and EGSnrc-based Monte Carlo are used to calculate dose distributions from 18 MV and 6 MV photon beams delivered to a cubic water phantom containing a rectangular high-density (4.0-8.0 g/cm³) volume at its center. The algorithms are also used to recalculate a clinical prostate treatment plan involving a unilateral hip prosthesis, originally evaluated using AAA. These results are compared graphically and numerically using gamma-index analysis. Radiochromic film measurements are presented to augment Monte Carlo and Acuros XB dose perturbation data. Results: Using a 2%/1 mm gamma analysis, between 91.3% and 96.8% of Acuros XB dose voxels containing greater than 50% of the normalized dose were in agreement with Monte Carlo data for virtual phantoms involving 18 MV and 6 MV photons, stainless steel and titanium alloy implants, and on-axis and oblique field delivery. A similar gamma analysis of AAA against Monte Carlo data showed between 80.8% and 87.3% agreement. Comparing Acuros XB and AAA evaluations of a clinical prostate patient plan involving a unilateral hip prosthesis, Acuros XB showed good overall agreement with Monte Carlo, while AAA underestimated dose on the upstream medial surface of the prosthesis due to electron scatter from the high-density material. Film measurements support the dose perturbations demonstrated by Monte Carlo and Acuros XB data. Conclusions: Acuros XB is shown to perform as well as Monte Carlo methods and better than existing clinical algorithms for dose calculations involving high-density volumes.
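    For readers unfamiliar with the comparison metric, the sketch below is a brute-force global gamma analysis (e.g. 2%/1 mm with a 50% low-dose cutoff) on two dose arrays sampled at the same points; clinical implementations add interpolation and search-radius optimizations not shown here.

```python
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, coords,
                    dose_tol=0.02, dist_tol_mm=1.0, low_dose_cutoff=0.5):
    """Brute-force global gamma analysis on two dose sets sampled at the same points.

    ref_dose, eval_dose : 1-D arrays of dose values
    coords              : (n, d) point coordinates in millimetres
    dose_tol            : dose criterion as a fraction of the reference maximum (2% here)
    dist_tol_mm         : distance-to-agreement criterion (1 mm here)
    low_dose_cutoff     : analyse only points above this fraction of the maximum dose
    """
    d_max = ref_dose.max()
    gammas = []
    for i in np.nonzero(ref_dose > low_dose_cutoff * d_max)[0]:
        dist_term = np.sum((coords - coords[i]) ** 2, axis=1) / dist_tol_mm ** 2
        dose_term = ((eval_dose - ref_dose[i]) / (dose_tol * d_max)) ** 2
        gammas.append(np.sqrt(np.min(dist_term + dose_term)))
    return float(np.mean(np.array(gammas) <= 1.0))   # fraction of analysed points passing
```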

  20. Development of a core outcome set for orthodontic trials using a mixed-methods approach: protocol for a multicentre study.

    PubMed

    Tsichlaki, Aliki; O'Brien, Kevin; Johal, Ama; Marshman, Zoe; Benson, Philip; Colonio Salazar, Fiorella B; Fleming, Padhraig S

    2017-08-04

    Orthodontic treatment is commonly undertaken in young people, with over 40% of children in the UK needing treatment and currently one third having treatment, at a cost to the National Health Service in England and Wales of £273 million each year. Most current research about orthodontic care does not consider what patients truly feel about, or want, from treatment, and a diverse range of outcomes is being used with little consistency between studies. This study aims to address these problems, using established methodology to develop a core outcome set for use in future clinical trials of orthodontic interventions in children and young people. This is a mixed-methods study incorporating four distinct stages. The first stage will include a scoping review of the scientific literature to identify primary and secondary outcome measures that have been used in previous orthodontic clinical trials. The second stage will involve qualitative interviews and focus groups with orthodontic patients aged 10 to 16 years to determine what outcomes are important to them. The outcomes elicited from these two stages will inform the third stage of the study in which a long-list of outcomes will be ranked in terms of importance using electronic Delphi surveys involving clinicians and patients. The final stage of the study will involve face-to-face consensus meetings with all stakeholders to discuss and agree on the outcome measures that should be included in the final core outcome set. This research will help to inform patients, parents, clinicians and commissioners about outcomes that are important to young people undergoing orthodontic treatment. Adoption of the core outcome set in future clinical trials of orthodontic treatment will make it easier for results to be compared, contrasted and combined. This should translate into improved decision-making by all stakeholders involved. The project has been registered on the Core Outcome Measures in Effectiveness Trials (COMET) website, January 2016.

  1. Computational synchronization of microarray data with application to Plasmodium falciparum.

    PubMed

    Zhao, Wei; Dauwels, Justin; Niles, Jacquin C; Cao, Jianshu

    2012-06-21

    Microarrays are widely used to investigate the blood stage of Plasmodium falciparum infection. Starting with synchronized cells, gene expression levels are continually measured over the 48-hour intra-erythrocytic developmental cycle (IDC). However, the cell population gradually loses synchrony during the experiment. As a result, the microarray measurements are blurred. In this paper, we propose a generalized deconvolution approach to reconstruct the intrinsic expression pattern, and apply it to P. falciparum IDC microarray data. We develop a statistical model for the decay of synchrony among cells, and reconstruct the expression pattern through statistical inference. The proposed method can handle microarray measurements with noise and missing data. The original gene expression patterns become more apparent in the reconstructed profiles, making it easier to analyze and interpret the data. We hypothesize that reconstructed gene expression patterns represent better temporally resolved expression profiles that can be probabilistically modeled to match changes in expression level to IDC transitions. In particular, we identify transcriptionally regulated protein kinases putatively involved in regulating the P. falciparum IDC. By analyzing publicly available microarray data sets for the P. falciparum IDC, protein kinases are ranked in terms of their likelihood to be involved in regulating transitions between the ring, trophozoite and schizont developmental stages of the P. falciparum IDC. In our theoretical framework, a few protein kinases have high probability rankings, and could potentially be involved in regulating these developmental transitions. This study proposes a new methodology for extracting intrinsic expression patterns from microarray data. By applying this method to P. falciparum microarray data, several protein kinases are predicted to play a significant role in the P. falciparum IDC. Earlier experiments have indeed confirmed that several of these kinases are involved in this process. Overall, these results indicate that further functional analysis of these additional putative protein kinases may reveal new insights into how the P. falciparum IDC is regulated.
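    A toy sketch of the general idea (not the authors' exact statistical model): the observed profile is the intrinsic 48 h profile blurred by a spread in cell-cycle phase that grows over the experiment, and the intrinsic profile is recovered by regularized least squares. All parameters below are assumptions.

```python
import numpy as np

T = 48
t = np.arange(T)                                   # hourly time points

def blur_matrix(sigma0=0.5, rate=0.1):
    """Row i: assumed distribution of cell-cycle phase at sampling time i (periodic)."""
    A = np.empty((T, T))
    for i in range(T):
        sigma = sigma0 + rate * t[i]               # synchrony decays as time passes
        d = np.minimum(np.abs(t - t[i]), T - np.abs(t - t[i]))   # circular distance
        row = np.exp(-0.5 * (d / sigma) ** 2)
        A[i] = row / row.sum()
    return A

A = blur_matrix()
x_true = 1.0 + np.sin(2 * np.pi * t / T)           # hypothetical intrinsic profile
y = A @ x_true + 0.05 * np.random.randn(T)         # blurred, noisy "measurement"

D = np.diff(np.eye(T), axis=0)                     # roughness penalty
x_hat = np.linalg.solve(A.T @ A + 1.0 * D.T @ D, A.T @ y)
print(np.abs(x_hat - x_true).max())                # reconstruction error
```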

  2. The Webcam system: a simple, automated, computer-based video system for quantitative measurement of movement in nonhuman primates.

    PubMed

    Togasaki, Daniel M; Hsu, Albert; Samant, Meghana; Farzan, Bijan; DeLanney, Louis E; Langston, J William; Di Monte, Donato A; Quik, Maryka

    2005-06-30

    Investigations using models of neurologic disease frequently involve quantifying animal motor activity. We developed a simple method for measuring motor activity using a computer-based video system (the Webcam system) consisting of an inexpensive video camera connected to a personal computer running customized software. Images of the animals are captured at half-second intervals and movement is quantified as the number of pixel changes between consecutive images. The Webcam system allows measurement of motor activity of the animals in their home cages, without devices affixed to their bodies. Webcam quantification of movement was validated by correlation with measures simultaneously obtained by two other methods: measurement of locomotion by interruption of infrared beams; and measurement of general motor activity using portable accelerometers. In untreated squirrel monkeys, correlations of Webcam and locomotor activity exceeded 0.79, and correlations with general activity counts exceeded 0.65. Webcam activity decreased after the monkeys were rendered parkinsonian by treatment with 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP), but the correlations with the other measures of motor activity were maintained. Webcam activity also correlated with clinical ratings of parkinsonism. These results indicate that the Webcam system is reliable under both untreated and experimental conditions and is an excellent method for quantifying motor activity in animals.
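    The activity metric itself is simple enough to sketch: count the pixels whose grayscale value changes by more than a threshold between consecutive frames. The threshold and the synthetic frames below are assumptions, not details from the paper.

```python
import numpy as np

def pixel_changes(prev_frame, frame, threshold=15):
    """Number of pixels differing by more than `threshold` gray levels."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return int((diff > threshold).sum())

rng = np.random.default_rng(0)
frames = rng.integers(0, 256, size=(10, 240, 320), dtype=np.uint8)   # stand-in video
activity = [pixel_changes(a, b) for a, b in zip(frames[:-1], frames[1:])]
print(activity)   # one activity count per half-second interval
```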

  3. Ultrasound Velocity Measurement in a Liquid Metal Electrode

    PubMed Central

    Perez, Adalberto; Kelley, Douglas H.

    2015-01-01

    A growing number of electrochemical technologies depend on fluid flow, and often that fluid is opaque. Measuring the flow of an opaque fluid is inherently more difficult than measuring the flow of a transparent fluid, since optical methods are not applicable. Ultrasound can be used to measure the velocity of an opaque fluid, not only at isolated points, but at hundreds or thousands of points arrayed along lines, with good temporal resolution. When applied to a liquid metal electrode, ultrasound velocimetry involves additional challenges: high temperature, chemical activity, and electrical conductivity. Here we describe the experimental apparatus and methods that overcome these challenges and allow the measurement of flow in a liquid metal electrode, as it conducts current, at operating temperature. Temperature is regulated within ±2 °C using a Proportional-Integral-Derivative (PID) controller that powers a custom-built furnace. Chemical activity is managed by choosing vessel materials carefully and enclosing the experimental setup in an argon-filled glovebox. Finally, unintended electrical paths are carefully prevented. An automated system logs control settings and experimental measurements, using hardware trigger signals to synchronize devices. This apparatus and these methods can produce measurements that are impossible with other techniques, and allow optimization and control of electrochemical technologies like liquid metal batteries. PMID:26273726
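    As a rough sketch of the kind of temperature regulation described (a discrete PID loop driving a heater against ambient losses), the code below uses a toy first-order plant; the gains and plant constants are illustrative assumptions, not the actual furnace parameters.

```python
def run_pid(setpoint=500.0, kp=45.0, ki=1.25, kd=20.0, dt=1.0, steps=600):
    temp, integral, prev_err = 20.0, 0.0, setpoint - 20.0
    for _ in range(steps):
        err = setpoint - temp
        integral += err * dt
        derivative = (err - prev_err) / dt
        power = max(0.0, kp * err + ki * integral + kd * derivative)   # heater power, W
        # toy plant: heating proportional to power, loss toward 20 C ambient
        temp += dt * (0.002 * power - 0.01 * (temp - 20.0))
        prev_err = err
    return temp

print(run_pid())   # settles near the 500 C setpoint with these toy values
```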

  4. The Measurement of Fuel-Air Ratio by Analysis for the Oxidized Exhaust Gas

    NASA Technical Reports Server (NTRS)

    Gerrish, Harold C.; Meem, J. Lawrence, Jr.

    1943-01-01

    An investigation was made to determine a method of measuring fuel-air ratio that could be used for test purposes in flight and for checking conventional equipment in the laboratory. Two single-cylinder test engines equipped with typical commercial engine cylinders were used. The fuel-air ratio of the mixture delivered to the engines was determined by direct measurement of the quantity of air and of fuel supplied and also by analysis of the oxidized exhaust gas and of the normal exhaust gas. Five fuels were used: gasoline that complied with Army-Navy Fuel Specification No. AN-VV-F-781 and four mixtures of this gasoline with toluene, benzene, and xylene. The method of determining the fuel-air ratio described in this report involves the measurement of the carbon-dioxide content of the oxidized exhaust gas and the use of graphs or the presented equation. This method is considered useful in aircraft, in the field, or in the laboratory for a range of fuel-air ratios from 0.047 to 0.124.

  5. The Measurement of Fuel-air Ratio by Analysis of the Oxidized Exhaust Gas

    NASA Technical Reports Server (NTRS)

    Memm, J. Lawrence, Jr.

    1943-01-01

    An investigation was made to determine a method of measuring fuel-air ratio that could be used for test purposes in flight and for checking conventional equipment in the laboratory. Two single-cylinder test engines equipped with typical commercial engine cylinders were used. The fuel-air ratio of the mixture delivered to the engines was determined by direct measurement of the quantity of air and of fuel supplied and also by analysis of the oxidized exhaust gas and of the normal exhaust gas. Five fuels were used: gasoline that complied with Army-Navy Fuel Specification No. AN-VV-F-781 and four mixtures of this gasoline with toluene, benzene, and xylene. The method of determining the fuel-air ratio described in this report involves the measurement of the carbon-dioxide content of the oxidized exhaust gas and the use of graphs or the presented equation. This method is considered useful in aircraft, in the field, or in the laboratory for a range of fuel-air ratios from 0.047 to 0.124.
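    The following is an illustrative stoichiometric calculation of how a dry CO2 fraction maps to fuel-air ratio, assuming a fuel CH_y, complete oxidation, a lean-to-stoichiometric mixture and air as 20.95% O2 / 79.05% N2 by volume; it is not the report's graphs or equation.

```python
def fuel_air_ratio_from_co2(co2_dry_fraction, h_per_c=1.85):
    """Estimate the mass fuel-air ratio from the dry CO2 mole fraction of oxidized exhaust."""
    m_fuel_per_c = 12.011 + 1.008 * h_per_c        # g of fuel per mol of carbon
    m_air = 28.96                                  # g per mol of air
    # Dry products per mole of air: n_C mol CO2, 0.7905 mol N2 and the excess O2,
    # 0.2095 - n_C * (1 + y/4); the denominator simplifies to 1 - n_C * y / 4.
    n_c = co2_dry_fraction / (1.0 + co2_dry_fraction * h_per_c / 4.0)
    return n_c * m_fuel_per_c / m_air

print(round(fuel_air_ratio_from_co2(0.153), 4))    # ~0.068, near stoichiometric for gasoline
```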

  6. Landslides Monitoring on Salt Deposits Using Geophysical Methods, Case study - Slanic Prahova, Romania

    NASA Astrophysics Data System (ADS)

    Ovidiu, Avram; Rusu, Emil; Maftei, Raluca-Mihaela; Ulmeanu, Antonio; Scutelnicu, Ioan; Filipciuc, Constantina; Tudor, Elena

    2017-12-01

    Electrometry is the geophysical method most frequently applied to examine dynamic phenomena related to the presence of massive salt, owing to the resistivity contrasts between salt, salt breccia and the covering geological formations. On the vertical resistivity sections obtained with VES devices these three compartments are clearly differentiated: high resistivity for the massive salt, very low for the salt breccia and variable for the covering formations. When the land surface is inclined, shallow formations move gravitationally over the salt back, producing a landslide. Landslide monitoring involves periodically repeated measurements of geoelectrical profiles over a grid covering the unstable surface, under the same conditions (climate, electrode positions, instrument and measurement parameters). The purpose of monitoring landslides in the Slanic Prahova area was to detect changes in the resistivity distribution of the upper part of the subsoil between profiles measured in 2014 and 2015. The measurement grid includes several cross-sections that are representative from a landslide-susceptibility point of view. The results are represented graphically as changes in topography and as resistivity differences between the two sets of geophysical measurements.
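    As a trivial illustration of the differencing step (values invented), repeated apparent-resistivity sections acquired on the same grid can simply be subtracted to highlight changes between the two campaigns.

```python
import numpy as np

rho_2014 = np.array([[120.0,  95.0, 300.0],
                     [ 15.0,  18.0, 250.0]])   # ohm-m, rows = depth levels, columns = stations
rho_2015 = np.array([[110.0,  90.0, 310.0],
                     [ 22.0,  30.0, 245.0]])

print(rho_2015 - rho_2014)                      # positive values: resistivity increased
```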

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Metz, Lori A.; Friese, Judah I.; Finn, Erin C.

    Critical assemblies provide one method of achieving a fast neutron spectrum that is close to a 235U fission-energy neutron spectrum for nuclear data measurements. Previous work has demonstrated the use of a natural boron carbide capsule for spectral tailoring in a mixed-spectrum reactor as an alternate and complementary method for performing fission-energy neutron experiments. Previous fission product measurements showed that the neutron spectrum achievable with natural boron carbide was not as hard as what can be achieved with critical assemblies. New measurements performed with the Washington State University TRIGA reactor using a boron carbide capsule 96% enriched in 10B for irradiations resulted in a neutron spectrum very similar to a critical assembly and a pure 235U fission spectrum. The current work describes an experiment involving a highly enriched uranium target irradiated under the new 10B4C capsule. Fission product yields were measured following radiochemical separations and are presented here. Reactor dosimetry measurements for characterizing neutron spectra and fluence for the enriched boron carbide capsule and critical assemblies are also discussed.

  8. THE USE OF A PHOSPHORUS-POLYTHENE MIXTURE FOR FAST NEUTRON MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stephens, K.G.; Williams, G.H.

    1962-06-01

    A convenient way of measuring relative fast fluxes in a small reactor core using foils of a phosphorus-polythene mixture is described. Determination of the disintegration rate of these foils involves the measurement of the beta-ray self-absorption of the foils as a function of thickness. Accurate disintegration rate measurements after irradiation for 6 hr using 7/16 in. dia foils enabled fission fluxes of 5 × 10^5 n/cm^2·sec or more to be measured absolutely to within plus or minus 6 per cent, and fluxes as low as 5 × 10^4 n/cm^2·sec to within plus or minus 20 per cent when the counter background was 80 c/min. By reducing this background and increasing the foil size, both of these limits are lowered by about a factor of 50. The method compares favorably with methods using the 32S(n,p)32P threshold reaction. (auth)
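    A hedged sketch of the underlying activation relation (flux from disintegration rate through a spectrum-averaged cross-section and a saturation factor); the numerical values are placeholders, not data from the report.

```python
import math

def fast_flux(disint_rate_per_s, n_target_atoms, sigma_cm2, half_life_s, t_irr_s):
    """phi = A / (N * sigma * (1 - exp(-lambda * t_irr)))."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)
    return disint_rate_per_s / (n_target_atoms * sigma_cm2 * saturation)

# placeholder numbers for a 6 hr irradiation
print(f"{fast_flux(2.0e2, 5.0e20, 3.0e-26, 9.4e3, 6 * 3600):.2e} n/cm^2/s")
```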

  9. Non-Aqueous Titration Method for Determining Suppressor Concentration in the MCU Next Generation Solvent (NGS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taylor-Pashow, Kathryn M. L.; Jones, Daniel H.

    A non-aqueous titration method has been used for quantifying the suppressor concentration in the MCU solvent hold tank (SHT) monthly samples since the Next Generation Solvent (NGS) was implemented in 2013. The titration method measures the concentration of the NGS suppressor (TiDG) as well as the residual tri-n-octylamine (TOA) that is a carryover from the previous solvent. As the TOA concentration has decreased over time, it has become difficult to resolve the TiDG equivalence point as the TOA equivalence point has moved closer. In recent samples, the TiDG equivalence point could not be resolved, and therefore the TiDG concentration was determined by subtracting the TOA concentration, as measured by semi-volatile organic analysis (SVOA), from the total base concentration as measured by titration. In order to improve the titration method so that the TiDG concentration can be measured directly, without the need for the SVOA data, a new method has been developed that involves spiking the sample with additional TOA to further separate the two equivalence points in the titration. This method has been demonstrated on four recent SHT samples, and comparison to results obtained using the SVOA TOA-subtraction method shows good agreement. Therefore, it is recommended that the titration procedure be revised to include the TOA spike addition and that this become the primary method for quantifying TiDG.
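    The SVOA-subtraction fallback described above is simple arithmetic; the numbers below are hypothetical and only illustrate the bookkeeping.

```python
total_base = 2.45e-3      # mol/L, total base by non-aqueous titration (hypothetical)
toa_by_svoa = 0.30e-3     # mol/L, residual TOA measured by SVOA (hypothetical)
tidg = total_base - toa_by_svoa
print(f"TiDG ~ {tidg:.2e} mol/L")
```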

  10. High temperature pressurized high frequency testing rig and test method

    DOEpatents

    De La Cruz, Jose; Lacey, Paul

    2003-04-15

    An apparatus is described which permits the lubricity of fuel compositions to be evaluated at or near the temperatures and pressures experienced by compression-ignition fuel injector components during operation in a running engine. The apparatus consists of means to apply a measured force between two surfaces and oscillate them at high frequency while wetted with a sample of the fuel composition heated to an operator-selected temperature. Provision is made to permit operation at or near the flash point of the fuel compositions. Additionally, a method of using the subject apparatus to simulate ASTM Test Method D6079 is disclosed, said method involving using the disclosed apparatus to contact the faces of prepared workpieces under a measured load, sealing the workface contact point into the disclosed apparatus while immersing said contact point between said workfaces in a lubricating medium to be tested, pressurizing and heating the chamber and thereby the fluid and workfaces therewithin, using the disclosed apparatus to impart a differential linear motion between the workpieces at their contact point until a measurable scar is imparted to at least one workpiece workface, and then evaluating the workface scar.

  11. Passive field reflectance measurements

    NASA Astrophysics Data System (ADS)

    Weber, Christian; Schinca, Daniel C.; Tocho, Jorge O.; Videla, Fabian

    2008-10-01

    The results of reflectance measurements performed with a three-band passive radiometer with independent channels for solar irradiance reference are presented. Comparative operation between the traditional method, which uses downward-looking field and reference white-panel measurements, and the new approach involving duplicated downward- and upward-looking spectral channels (each with its own diffuser) is analyzed. The results indicate that the latter method agrees very well with the standard method and is more suitable for passive sensors under rapidly changing atmospheric conditions (such as clouds, dust, mist, smog and other scatterers), since a more reliable synchronous recording of reference and incident light is achieved. In addition, having separate channels for the reference and the signal allows a better balancing of gains in the amplifiers for each spectral channel. We show results for the normalized difference vegetation index (NDVI) obtained in 2004-2007 field experiments on weed detection in soybean stubble and fertilizer-level assessment in wheat. The method may be used to refine sensor-based nitrogen fertilizer rate recommendations and to determine suitable zones for herbicide applications.
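    A minimal sketch of how the duplicated-channel readings feed an NDVI value: per-band reflectance is the ratio of the downward-looking (target) signal to the upward-looking (incident) signal, and NDVI follows from the red and near-infrared reflectances. All readings below are invented.

```python
import numpy as np

def reflectance(target_signal, incident_signal):
    """Per-band reflectance from paired downward/upward-looking channels."""
    return target_signal / incident_signal

red = reflectance(np.array([12.0, 9.5]), np.array([100.0, 98.0]))
nir = reflectance(np.array([48.0, 52.0]), np.array([95.0, 96.0]))

ndvi = (nir - red) / (nir + red)
print(ndvi.round(3))
```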

  12. Application of a High-Fidelity Icing Analysis Method to a Model-Scale Rotor in Forward Flight

    NASA Technical Reports Server (NTRS)

    Narducci, Robert; Orr, Stanley; Kreeger, Richard E.

    2012-01-01

    An icing analysis process involving the loose coupling of OVERFLOW-RCAS for rotor performance prediction with LEWICE3D for thermal analysis and ice accretion is applied to a model-scale rotor for validation. The process offers high-fidelity rotor analysis for non-iced and iced rotor performance evaluation that accounts for the interaction of nonlinear aerodynamics with blade elastic deformations. Ice accumulation prediction also involves loosely coupled data exchanges between OVERFLOW and LEWICE3D to produce accurate ice shapes. Validation of the process uses data collected in the 1993 icing test involving Sikorsky's Powered Force Model. Non-iced and iced rotor performance predictions are compared to experimental measurements, as are predicted ice shapes.

  13. Gain Switching for a Detection System to Accommodate a Newly Developed MALDI-Based Quantification Method

    NASA Astrophysics Data System (ADS)

    Ahn, Sung Hee; Hyeon, Taeghwan; Kim, Myung Soo; Moon, Jeong Hee

    2017-09-01

    In matrix-assisted laser desorption ionization time-of-flight mass spectrometry (MALDI-TOF), matrix-derived ions are routinely deflected away to avoid problems with ion detection. This, however, limits the use of a quantification method that utilizes the analyte-to-matrix ion abundance ratio. In this work, we will show that it is possible to measure this ratio by a minor instrumental modification of a simple form of MALDI-TOF. This involves detector gain switching.
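    Quantification via the analyte-to-matrix ion abundance ratio typically reduces to a calibration line of ratio against concentration; the sketch below (with invented numbers) shows that step, not the instrumental gain switching itself.

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # analyte concentration, arbitrary units
ratio = np.array([0.11, 0.21, 0.40, 0.83, 1.59])    # analyte/matrix ion abundance ratio

slope, intercept = np.polyfit(conc, ratio, 1)
unknown_ratio = 0.65
print((unknown_ratio - intercept) / slope)           # estimated concentration of an unknown
```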

  14. Corrosion Control Through a Better Understanding of the Metallic Substrate/Organic Coating/Interface.

    DTIC Science & Technology

    1981-11-01

    and the cyclohexane over molecular sieves, and then measuring the ppm of water in the cyclohexane using the Karl Fischer ... potentiometric titration using a method outlined by Bell [J. Polymer Sci. 8, 417-36 (1970)]. The procedure for epoxy groups involves the reaction with ... with time, earliest stages of coating delamination, and secondary processes that occur was obtained by time lapse photography. Electrical methods are

  15. A solid state tunable laser for resonance measurements of atmospheric sodium

    NASA Technical Reports Server (NTRS)

    Philbrick, C. R.; Bufton, J. L.; Gardner, C. S.

    1985-01-01

    The measurement of wave dynamics in the upper mesosphere using a solid-state laser to excite the resonance fluorescence line of sodium is examined. Two Nd:YAG lasers are employed to produce the sodium resonance line. The method involves mixing the 1064 nm radiation with that from a second Nd:YAG operating at 1319 nm in a nonlinear infrared crystal to directly produce 589 nm radiation by sum frequency generation. The use of the transmitter to measure the sodium layer from the Space Shuttle Platform is proposed. A diagram of the laser transmitter is presented.
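    The sum-frequency arithmetic is easy to check: the generated wavenumber is the sum of the two pump wavenumbers.

```python
lam1, lam2 = 1064.0, 1319.0                 # nm, the two Nd:YAG lines
lam3 = 1.0 / (1.0 / lam1 + 1.0 / lam2)      # 1/lambda_3 = 1/lambda_1 + 1/lambda_2
print(f"{lam3:.1f} nm")                     # ~589 nm, at the sodium D resonance
```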

  16. A convenient method for X-ray analysis in TEM that measures mass thickness and composition

    NASA Astrophysics Data System (ADS)

    Statham, P.; Sagar, J.; Holland, J.; Pinard, P.; Lozano-Perez, S.

    2018-01-01

    We consider a new approach for quantitative analysis in transmission electron microscopy (TEM) that offers the same convenience as single-standard quantitative analysis in scanning electron microscopy (SEM). Instead of a bulk standard, a thin film with known mass thickness is used as a reference. The procedure involves recording an X-ray spectrum from the reference film for each session of acquisitions on real specimens. There is no need to measure the beam current; the current only needs to be stable for the duration of the session. A new reference standard with a large (1 mm × 1 mm) area of uniform thickness of 100 nm silicon nitride is used to reveal regions of X-ray detector occlusion that would give misleading results for any X-ray method that measures thickness. Unlike previous methods, the new X-ray method does not require an accurate beam current monitor but delivers equivalent accuracy in mass thickness measurement. Quantitative compositional results are also automatically corrected for specimen self-absorption. The new method is tested using a wedge specimen of Inconel 600 that is used to calibrate the high-angle annular dark field (HAADF) signal to provide a thickness reference, and results are compared with electron energy-loss spectrometry (EELS) measurements. For the new X-ray method, element composition results are consistent with the expected composition for the alloy and the mass thickness measurement is shown to provide an accurate alternative to EELS for thickness determination in TEM without the uncertainty associated with mean free path estimates.
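    A deliberately simplified version of the reference-film idea, assuming the integrated X-ray count rate scales linearly with mass thickness at a stable beam current and ignoring the composition and self-absorption corrections the real method applies; all counts are invented.

```python
ref_mass_thickness = 3.17 * 100e-7    # g/cm^2 for 100 nm of Si3N4 (rho ~ 3.17 g/cm^3)
counts_ref = 5.2e4                    # integrated counts from the reference film
counts_spec = 1.3e4                   # integrated counts from the specimen region
print(f"{ref_mass_thickness * counts_spec / counts_ref:.2e} g/cm^2")
```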

  17. Novel Technique for Making Measurements of SO2 with a Standalone Sonde

    NASA Astrophysics Data System (ADS)

    Flynn, J. H., III; Morris, G. A.; Kotsakis, A.; Alvarez, S. L.

    2017-12-01

    A novel technique has been developed to measure SO2 using the existing electrochemical concentration cell (ECC) ozonesonde technology. An interference in the ozone measurement occurs when SO2 is introduced to the iodide redox reaction, causing the signal to decrease and go to zero when [O3] < [SO2]. The original method of measuring SO2 with ozonesondes involves launching two ozonesondes together, one unmodified and one fitted with an SO2 filter [Morris et al., 2010]. By taking the difference between these profiles, the SO2 profile can be determined as long as [O3] > [SO2]. A new method allows a direct measurement of SO2 without the need for the dual payload by modifying the existing design. The ultimate goal is to be able to measure SO2 vertical profiles in the atmosphere, such as in plumes from anthropogenic or natural sources (e.g., volcanic eruptions). The benefits of an SO2 sonde include the ability to make measurements where aircraft cannot safely fly, such as in volcanic plumes, and to provide validation of SO2 columns from satellites.
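    The dual-sonde difference method reduces to a subtraction: the filtered sonde reports O3 while the unfiltered sonde reports roughly O3 - SO2, so their difference estimates SO2 while [O3] > [SO2]. The profile values below are hypothetical.

```python
import numpy as np

o3_filtered = np.array([40.0, 55.0, 70.0, 80.0])     # ppbv, SO2-filtered ozonesonde
o3_unfiltered = np.array([38.0, 47.0, 66.0, 79.0])   # ppbv, standard ozonesonde
so2 = np.clip(o3_filtered - o3_unfiltered, 0.0, None)
print(so2)
```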

  18. Measurement of 14CO2 Assimilation in Soils: an Experiment for the Biological Exploration of Mars

    PubMed Central

    Hubbard, Jerry S.; Hobby, George L.; Horowitz, Norman H.; Geiger, Paul J.; Morelli, Frank A.

    1970-01-01

    A method is described for the measurement of 14CO2 assimilation by microorganisms in soils. A determination involves exposing soil to 14CO2, pyrolyzing the exposed soil, trapping the organic pyrolysis products on a column of firebrick coated with CuO, combusting the trapped organics by heating, and measuring the radioactivity in the CO2 produced in the combustion. The detection of significant levels of 14C in the trapped organic fraction appears to be an unambiguous indication of biological activity. The 14CO2 which is adsorbed or exchanged into soils by nonbiological processes does not interfere. The method easily detects the 14CO2 fixed by 10^2 to 10^3 algae after light exposure for 3 to 24 hr. Assimilation of 14C is also demonstrable in dark-exposed soils containing 10^5 to 10^6 heterotrophic bacteria. Possible applications of the method in the biological exploration of Mars are discussed. PMID:16349879

  19. Measurement by reversed-phase high-performance liquid chromatography of malondialdehyde in normal human urine following derivatisation with 2,4-dinitrophenylhydrazine.

    PubMed

    Korchazhkina, Olga; Exley, Christopher; Andrew Spencer, Stephen

    2003-09-05

    A selective and sensitive method based on derivatisation with 2,4-dinitrophenylhydrazine (DNPH) and consecutive HPLC gradient separation is described for the determination of malondialdehyde (MDA) in urine. Preparation of urine samples involved a one-step derivatisation/extraction procedure. Separation was achieved using a Waters Symmetry C18 column (3.9 × 150 mm) and a linear gradient of acetonitrile in water (from 30% to 70% in 30 min). The overall detection limit of the method was 56 nM of MDA in urine. The recovery of MDA was 94.3 ± 8.6%. MDA in the urine of healthy volunteers, measured using the method of standard additions, was 0.019 ± 0.012 µM/mmol creatinine. MDA in the same samples measured using the 2-thiobarbituric acid (TBA) assay was 0.181 ± 0.063 µM/mmol creatinine. We demonstrate that the commonly used TBA assay in conjunction with HPLC may overestimate the MDA concentration in human urine by almost 10-fold.

  20. Application of Energy Function as a Measure of Error in the Numerical Solution for Online Transient Stability Assessment

    NASA Astrophysics Data System (ADS)

    Sarojkumar, K.; Krishna, S.

    2016-08-01

    Online dynamic security assessment (DSA) is a computationally intensive task. In order to reduce the amount of computation, screening of contingencies is performed. Screening involves analyzing the contingencies with the system described by a simpler model so that the computation requirement is reduced. Screening identifies those contingencies which are sure not to cause instability and hence can be eliminated from further scrutiny. The numerical method and the step size used for screening should be chosen with a compromise between speed and accuracy. This paper proposes the use of an energy function as a measure of error in the numerical solution used for screening contingencies. The proposed measure of error can be used to determine the most accurate numerical method satisfying the time constraint of online DSA. Case studies on a 17-generator system are reported.
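    A sketch of the idea on a single-machine infinite-bus system: the transient energy function is conserved along exact trajectories of the undamped swing equation, so its drift along a numerical trajectory serves as an error measure for the integration method and step size. The parameters are illustrative per-unit values, not the paper's 17-generator case.

```python
import numpy as np

M, Pm, Pmax = 0.1, 0.8, 1.5                     # inertia constant, mechanical and peak electrical power

def energy(delta, omega):
    """Transient energy function; constant along exact undamped trajectories."""
    return 0.5 * M * omega**2 - Pm * delta - Pmax * np.cos(delta)

def euler_step(delta, omega, dt):
    """One forward-Euler step of the swing equation."""
    return delta + dt * omega, omega + dt * (Pm - Pmax * np.sin(delta)) / M

delta, omega = 0.7, 1.0                         # post-fault initial condition
v0, drift = energy(delta, omega), 0.0
for _ in range(2000):                           # 2 s with a 1 ms step
    delta, omega = euler_step(delta, omega, 0.001)
    drift = max(drift, abs(energy(delta, omega) - v0))
print(f"max energy drift: {drift:.3e}")         # smaller drift -> more accurate scheme/step
```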
