Science.gov

Sample records for conditions measurement methodology

  1. Development of the methodology of exhaust emissions measurement under RDE (Real Driving Emissions) conditions for non-road mobile machinery (NRMM) vehicles

    NASA Astrophysics Data System (ADS)

    Merkisz, J.; Lijewski, P.; Fuc, P.; Siedlecki, M.; Ziolkowski, A.

    2016-09-01

    The paper analyzes the exhaust emissions from farm vehicles based on research performed under field conditions (RDE) according to the NTE procedure. This analysis has shown that it is difficult to meet the NTE requirements under field conditions (engine operation in the NTE zone for at least 30 seconds). Due to the very high variability of engine operating conditions, the share of valid NTE windows remains small throughout the entire field test. For this reason, a modification of the measurement and exhaust emissions calculation methodology has been proposed for farm vehicles of the NRMM group. A test has been developed composed of the following phases: a trip to the operation site (paved roads) and field operations (including U-turns and maneuvering). The range of operation time shares in the individual test phases has been determined. A change in the method of calculating real exhaust emissions has also been introduced relative to the NTE procedure.
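    The NTE-window validity criterion described above (engine operation in the NTE zone for at least 30 seconds) can be sketched as a simple run-length check. This is an illustrative reading, not the regulatory procedure: the real NTE zone is defined by speed and torque boundaries, and `in_nte_zone` is a hypothetical per-second flag.

```python
# Sketch of valid NTE-window detection, assuming 1 Hz engine-state samples
# and a hypothetical in_nte_zone flag per sample; the real NTE procedure
# involves speed/torque boundary conditions not modeled here.

def count_valid_nte_windows(in_nte_zone, min_duration_s=30):
    """Count maximal runs of consecutive in-zone samples lasting
    at least min_duration_s seconds (1 sample = 1 second)."""
    valid = 0
    run = 0
    for flag in in_nte_zone:
        if flag:
            run += 1
        else:
            if run >= min_duration_s:
                valid += 1
            run = 0
    if run >= min_duration_s:
        valid += 1
    return valid

# Example: one 35 s run and one 10 s run -> only the first is valid
trace = [True] * 35 + [False] * 5 + [True] * 10
print(count_valid_nte_windows(trace))  # 1
```

    With highly variable field operation, runs rarely reach the 30 s threshold, which is the motivation the abstract gives for modifying the calculation methodology.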

  2. Structural health monitoring methodology for aircraft condition-based maintenance

    NASA Astrophysics Data System (ADS)

    Saniger, Jordi; Reithler, Livier; Guedra-Degeorges, Didier; Takeda, Nobuo; Dupuis, Jean Pierre

    2001-06-01

    Reducing maintenance costs while maintaining a constant level of safety is a major issue for air forces and airlines. The long-term perspective is to implement condition-based maintenance to guarantee a constant safety level while decreasing maintenance costs. To this end, the development of a generalized Structural Health Monitoring System (SHMS) is needed. The objective of such a system is to localize damage and assess its severity with enough accuracy to allow low-cost corrective actions. The present paper describes an SHMS based on acoustic emission technology, chosen for its reliability and wide use in the aerospace industry. The described SHMS uses a new learning methodology that relies on the generation of artificial acoustic emission events on the structure and on an acoustic emission sensor network. The calibrated acoustic emission events picked up by the sensors constitute the knowledge set on which the system relies. With this methodology, the anisotropy of composite structures is taken into account, avoiding the major source of error in classical localization methods. Moreover, the approach adapts to different structures because it relies not on any particular model but on measured data. The acquired data are processed, and each event's location and corrected amplitude are computed. The methodology has been demonstrated, and experimental tests on elementary samples showed a localization accuracy of 1 cm.

  3. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted on alumina specimens with a resinoid-bonded diamond wheel under two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions, provided that the base wavelet, the decomposition level, and the GA parameters are properly selected.
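    A minimal sketch of the feature-extraction step, assuming a Haar wavelet and detail-band energies as the discriminant features; the paper's actual base wavelet, decomposition level, and genetic clustering algorithm are not reproduced here.

```python
# Hedged sketch of wavelet-based feature extraction for AE signals,
# using a Haar wavelet and relative energy-per-level features.
import math

def haar_dwt(signal):
    """One level of the Haar DWT: approximation and detail coefficients."""
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_energy_features(signal, levels=3):
    """Relative energy of the detail band at each decomposition level,
    a common discriminant feature for 'sharp' vs 'dull' wheel states."""
    energies = []
    approx = list(signal)
    for _ in range(levels):
        approx, detail = haar_dwt(approx)
        energies.append(sum(d * d for d in detail))
    total = sum(energies) or 1.0
    return [e / total for e in energies]

# A smooth (low-frequency) segment concentrates energy in deeper levels
smooth = [math.sin(2 * math.pi * i / 64) for i in range(256)]
print(wavelet_energy_features(smooth))
```

    Feature vectors of this kind, computed per AE segment, are what a clustering algorithm would then separate into wheel-condition states.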

  4. Directional reflectance characterization facility and measurement methodology

    NASA Astrophysics Data System (ADS)

    McGuckin, B. T.; Haner, D. A.; Menzies, R. T.; Esproles, C.; Brothers, A. M.

    1996-08-01

    A precision reflectance characterization facility, constructed specifically for the measurement of the bidirectional reflectance properties of Spectralon panels planned for use as in-flight calibrators on the NASA Multiangle Imaging Spectroradiometer (MISR) instrument, is described. The incident linearly polarized radiation is provided at three laser wavelengths: 442, 632.8, and 859.9 nm. Each beam is collimated when incident on the Spectralon. The illuminated area of the panel is viewed with a silicon photodetector that revolves around the panel (360°) on a 30-cm boom extending from a common rotational axis. The reflected radiance detector signal is ratioed with the signal from a reference detector to minimize the effect of amplitude instabilities in the laser sources. This and other measures adopted to reduce noise have resulted in a bidirectional reflectance function (BRF) calibration facility with a BRF measurement precision of 0.002 at the 1σ confidence level. The Spectralon test piece panel is held in a computer-controlled three-axis rotational assembly capable of a full 360° rotation in the horizontal plane and 90° in the vertical. The angular positioning system has a repeatability and resolution of 0.001°. Design details and an outline of the measurement methodology are presented.

  5. PIMM: A Performance Improvement Measurement Methodology

    SciTech Connect

    Not Available

    1994-05-15

    This report presents a Performance Improvement Measurement Methodology (PIMM) for measuring and reporting the mission performance of organizational elements of the U.S. Department of Energy to comply with the Chief Financial Officer's Act (CFOA) of 1990 and the Government Performance and Results Act (GPRA) of 1993. The PIMM is illustrated by application to the Morgantown Energy Technology Center (METC), a Research, Development and Demonstration (RD&D) field center of the Office of Fossil Energy, along with limited applications to the Strategic Petroleum Reserve Office and the Office of Fossil Energy. METC is now implementing the first year of a pilot project under GPRA using the PIMM. The PIMM process is applicable to all elements of the Department; organizations may customize measurements to their specific missions. The PIMM has four aspects: (1) an achievement measurement that applies to any organizational element, (2) key indicators that apply to institutional elements, (3) a risk reduction measurement that applies to all RD&D elements and to elements with long-term activities leading to risk-associated outcomes, and (4) a cost performance evaluation. Key indicators show how close the institution is to attaining long-range goals. Risk reduction analysis is especially relevant to RD&D. Product risk is defined as the chance that the product of new technology will not meet the requirements of the customer. RD&D is conducted to reduce technology risks to acceptable levels. The PIMM provides a profile to track risk reduction as RD&D proceeds. Cost performance evaluations provide a measurement of the expected costs of outcomes relative to their actual costs.

  6. Informationally complete measurements from compressed sensing methodology

    NASA Astrophysics Data System (ADS)

    Kalev, Amir; Riofrio, Carlos; Kosut, Robert; Deutsch, Ivan

    2015-03-01

    Compressed sensing (CS) is a technique to faithfully estimate an unknown signal from relatively few data points when the measurement samples satisfy a restricted isometry property (RIP). Recently this technique has been ported to quantum information science to perform tomography with a substantially reduced number of measurement settings. In this work we show that the constraint that a physical density matrix is positive semidefinite provides a rigorous connection between the RIP and the informational completeness (IC) of a POVM used for state tomography. This enables us to construct IC measurements that are robust to noise using tools provided by the CS methodology. The exact recovery no longer hinges on a particular convex optimization program; solving any optimization, constrained on the cone of positive matrices, effectively results in a CS estimation of the state. From a practical point of view, we can therefore employ fast algorithms developed to handle large dimensional matrices for efficient tomography of quantum states of a large dimensional Hilbert space. Supported by the National Science Foundation.
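    The structural point above is that constraining the estimate to the cone of positive matrices does the work, and that rests on being able to project onto that cone. A hedged sketch of the projection for a 2×2 real symmetric matrix (clipping negative eigenvalues), with the full CS tomography pipeline out of scope:

```python
# Sketch of the key structural step in PSD-constrained state estimation:
# projecting a 2x2 real symmetric matrix onto the cone of positive
# semidefinite matrices by clipping negative eigenvalues. POVM design
# and the convex program itself are not modeled here.
import math

def eig_sym_2x2(a, b, c):
    """Eigenvalues/eigenvectors of [[a, b], [b, c]]."""
    mean = (a + c) / 2.0
    r = math.hypot((a - c) / 2.0, b)
    lam1, lam2 = mean + r, mean - r
    if abs(b) < 1e-15:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        n = math.hypot(lam1 - c, b)
        v1 = ((lam1 - c) / n, b / n)   # eigenvector for lam1
    v2 = (-v1[1], v1[0])              # orthogonal complement
    return (lam1, v1), (lam2, v2)

def project_psd_2x2(a, b, c):
    """Clip negative eigenvalues to zero and rebuild the matrix."""
    (l1, v1), (l2, v2) = eig_sym_2x2(a, b, c)
    l1, l2 = max(l1, 0.0), max(l2, 0.0)
    # M = l1 * v1 v1^T + l2 * v2 v2^T
    m00 = l1 * v1[0] * v1[0] + l2 * v2[0] * v2[0]
    m01 = l1 * v1[0] * v1[1] + l2 * v2[0] * v2[1]
    m11 = l1 * v1[1] * v1[1] + l2 * v2[1] * v2[1]
    return m00, m01, m11

# A noisy "density matrix" estimate with one negative eigenvalue
print(project_psd_2x2(0.9, 0.0, -0.1))  # -> (0.9, 0.0, 0.0)
```

    In real tomography the same clipping is applied in the eigenbasis of an n×n Hermitian estimate, typically followed by trace renormalization.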

  7. Operant Theory and Methodology in Infant Vocal Conditioning.

    ERIC Educational Resources Information Center

    Poulson, Claire L.

    1984-01-01

    Aims to clarify the distinction between elicitation and reinforcement discussed in Bloom (1984); to make explicit theoretical and methodological assumptions about the experimental analysis of infant behavior as shown in components of Poulson (1983); and to clarify differences in interpretation of other infant vocal conditioning research.…

  8. Measuring Program Outcomes: Using Retrospective Pretest Methodology.

    ERIC Educational Resources Information Center

    Pratt, Clara C.; McGuigan, William M.; Katsev, Aphra R.

    2000-01-01

    Used longitudinal data from 307 mothers of firstborn infants participating in a home-visitation, child abuse prevention program in a retrospective pretest methodology. Results show that when response shift bias was present, the retrospective pretest methodology produced a more legitimate assessment of program outcomes than did the traditional…
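    The response-shift logic can be illustrated with invented numbers: if participants initially overrate themselves, the traditional pre/post gain understates the program effect, while the retrospective pretest, answered in the posttest frame of reference, recovers a larger gain. All values below are hypothetical.

```python
# Hypothetical illustration of response-shift bias. The numbers are
# invented for illustration only, not the study's data.
traditional_pre = [4.0, 4.2, 3.9]    # self-ratings before the program
retrospective_pre = [2.8, 3.0, 2.7]  # "looking back" ratings at posttest
post = [4.4, 4.5, 4.1]

def mean_gain(pre, post):
    return sum(b - a for a, b in zip(pre, post)) / len(pre)

print(round(mean_gain(traditional_pre, post), 2))    # 0.3 (understated)
print(round(mean_gain(retrospective_pre, post), 2))  # 1.5 (shift-adjusted)
```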

  9. Relative Hazard and Risk Measure Calculation Methodology

    SciTech Connect

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.

    2004-03-20

    The relative hazard (RH) and risk measure (RM) methodology and computer code is a health-risk-based tool designed to allow managers and environmental decision makers to readily consider human health risks (i.e., public and worker risks) in their screening-level analysis of alternative cleanup strategies. Environmental management decisions involve consideration of costs, schedules, regulatory requirements, health hazards, and risks. The RH-RM tool is a risk-based environmental management decision tool that allows managers to predict and track health hazards and risks over time as they change in relation to mitigation and cleanup actions. Analysis of the hazards and risks associated with planned mitigation and cleanup actions provides a baseline against which alternative strategies can be compared. The tool allows managers to explore "what if" scenarios to better understand the impact of alternative mitigation and cleanup actions (i.e., alternatives to the planned actions) on health hazards and risks, and to screen alternatives on the basis of human health risk and compare the results with cost and other factors pertinent to the decision. Once an alternative or a narrow set of alternatives is selected, it will then be more cost-effective to perform the detailed risk analysis necessary for programmatic and regulatory acceptance of the selected alternative. The RH-RM code has been integrated into the PNNL-developed Framework for Risk Analysis in Multimedia Environmental Systems (FRAMES) to allow the input and output data of the RH-RM code to be readily shared with more comprehensive risk analysis models, such as the PNNL-developed Multimedia Environmental Pollutant Assessment System (MEPAS).

  10. Methodology and Process for Condition Assessment at Existing Hydropower Plants

    SciTech Connect

    Zhang, Qin Fen; Smith, Brennan T; Cones, Marvin; March, Patrick; Dham, Rajesh; Spray, Michael

    2012-01-01

    The Hydropower Advancement Project (HAP) was initiated by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy to develop and implement a systematic process with a standard methodology for identifying opportunities for performance improvement at existing hydropower facilities and for predicting and trending the overall condition and improvement opportunity within the U.S. hydropower fleet. The concept of performance for the HAP focuses on water use efficiency: how well a plant or individual unit converts potential energy to electrical energy over a long-term averaging period of a year or more. Performance improvement involves not only optimization of plant dispatch and scheduling but also enhancement of efficiency and availability through advanced technology and asset upgrades, and thus requires inspection and condition assessment of equipment, control systems, and other generating assets. This paper discusses the standard methodology and process for condition assessment of approximately 50 nationwide facilities, including sampling techniques to ensure valid expansion of the 50 assessment results to the entire hydropower fleet. The application and refinement process and the results from three demonstration assessments are also presented.

  11. Methodological Issues in Measuring Health Disparities

    PubMed Central

    Keppel, Kenneth; Pamuk, Elsie; Lynch, John; Carter-Pokras, Olivia; Kim, Insun; Mays, Vickie; Pearcy, Jeffrey; Schoenbach, Victor; Weissman, Joel S.

    2013-01-01

    Objectives: This report discusses six issues that affect the measurement of disparities in health between groups in a population: (1) selecting a reference point from which to measure disparity; (2) measuring disparity in absolute or in relative terms; (3) measuring in terms of favorable or adverse events; (4) measuring in pair-wise or in summary fashion; (5) choosing whether to weight groups according to group size; and (6) deciding whether to consider any inherent ordering of the groups. These issues represent choices that are made when disparities are measured. Methods: Examples are used to highlight how these choices affect specific measures of disparity. Results: These choices can affect the size and direction of disparities measured at a point in time and conclusions about the size and direction of changes in disparity over time. Eleven guidelines for measuring disparities are presented. Conclusions: Choices concerning the measurement of disparity should be made deliberately, recognizing that each choice will affect the results. When results are presented, the choices on which the measurements are based should be described clearly and justified appropriately. PMID:16032956
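    The first few choices can be illustrated with hypothetical rates; note how the relative disparity changes sharply when the same data are framed as the adverse rather than the favorable event:

```python
# Toy illustration of absolute vs relative disparity, and favorable vs
# adverse framing. All rates are hypothetical.
ref_rate, group_rate = 0.90, 0.80   # fraction vaccinated (favorable event)

abs_disp_favorable = ref_rate - group_rate    # absolute gap
rel_disp_favorable = ref_rate / group_rate    # rate ratio

# Same data framed as the adverse event (fraction unvaccinated)
abs_disp_adverse = (1 - group_rate) - (1 - ref_rate)
rel_disp_adverse = (1 - group_rate) / (1 - ref_rate)

print(round(abs_disp_favorable, 3), round(rel_disp_favorable, 3))  # 0.1 1.125
print(round(abs_disp_adverse, 3), round(rel_disp_adverse, 3))      # 0.1 2.0
```

    The absolute disparity is invariant to the framing, while the relative disparity jumps from 1.125 to 2.0, which is exactly the kind of choice-dependence the report warns about.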

  12. Quantum Measurement and Initial Conditions

    NASA Astrophysics Data System (ADS)

    Stoica, Ovidiu Cristinel

    2016-03-01

    Quantum measurement finds the observed system in a collapsed state, rather than in the state predicted by the Schrödinger equation. Yet there is a fairly widespread opinion that the wavefunction collapse can be explained by unitary evolution (for instance in the decoherence approach, if we take the environment into account). This article proves a mathematical result that severely restricts the initial conditions for which measurements have definite outcomes, if pure unitary evolution is assumed. This no-go theorem remains true even if we take the environment into account. The result does not forbid a unitary description of the measurement process; it only shows that such a description is possible only for very restricted initial conditions. The existence of such restrictions on the initial conditions can be understood in the four-dimensional block-universe perspective as a requirement of global self-consistency of the solutions of the Schrödinger equation.

  13. Methodological Challenges in Measuring Child Maltreatment

    ERIC Educational Resources Information Center

    Fallon, Barbara; Trocme, Nico; Fluke, John; MacLaurin, Bruce; Tonmyr, Lil; Yuan, Ying-Ying

    2010-01-01

    Objective: This article reviewed the different surveillance systems used to monitor the extent of reported child maltreatment in North America. Methods: Key measurement and definitional differences between the surveillance systems are detailed, along with their potential impact on the measured rate of victimization. The infrastructure…

  14. Wastewater Sampling Methodologies and Flow Measurement Techniques.

    ERIC Educational Resources Information Center

    Harris, Daniel J.; Keffer, William J.

    This document provides a ready source of information about water/wastewater sampling activities using various commercial sampling and flow measurement devices. The report consolidates the findings and summarizes the activities, experiences, sampling methods, and field measurement techniques conducted by the Environmental Protection Agency (EPA),…

  15. A methodological review of resilience measurement scales

    PubMed Central

    2011-01-01

    Background The evaluation of interventions and policies designed to promote resilience, and research to understand the determinants and associations, require reliable and valid measures to ensure data quality. This paper systematically reviews the psychometric rigour of resilience measurement scales developed for use in general and clinical populations. Methods Eight electronic abstract databases and the internet were searched and reference lists of all identified papers were hand searched. The focus was to identify peer reviewed journal articles where resilience was a key focus and/or is assessed. Two authors independently extracted data and performed a quality assessment of the scale psychometric properties. Results Nineteen resilience measures were reviewed; four of these were refinements of the original measure. All the measures had some missing information regarding the psychometric properties. Overall, the Connor-Davidson Resilience Scale, the Resilience Scale for Adults and the Brief Resilience Scale received the best psychometric ratings. The conceptual and theoretical adequacy of a number of the scales was questionable. Conclusion We found no current 'gold standard' amongst 15 measures of resilience. A number of the scales are in the early stages of development, and all require further validation work. Given increasing interest in resilience from major international funders, key policy makers and practice, researchers are urged to report relevant validation statistics when using the measures. PMID:21294858

  16. Experimental comparison of exchange bias measurement methodologies

    SciTech Connect

    Hovorka, Ondrej; Berger, Andreas; Friedman, Gary

    2007-05-01

    Measurements performed on all-ferromagnetic bilayer systems and supported by model calculation results are used to compare different exchange bias characterization methods. We demonstrate that the accuracy of the conventional two-point technique based on measuring the sum of the coercive fields depends on the symmetry properties of hysteresis loops. On the other hand, the recently proposed center of mass method yields results independent of the hysteresis loop type and coincides with the two-point measurement only if the loops are symmetric. Our experimental and simulation results clearly demonstrate a strong correlation between loop asymmetry and the difference between these methods.
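    The contrast between the two estimators can be sketched on a synthetic symmetric loop shifted by a known bias field. The tanh branch shapes and the center-of-mass formula used here (field average weighted by the difference between branches) are modeling assumptions for illustration, not the authors' exact formulation.

```python
# Hedged sketch: two-point vs center-of-mass exchange bias estimates on
# a synthetic symmetric hysteresis loop shifted by H_EB.
import math

H_EB, H_C = -0.3, 1.0   # loop shift and half-width (arbitrary units)

def ascending(h):   # sweep up:  switches near H_EB + H_C
    return math.tanh(4 * (h - (H_EB + H_C)))

def descending(h):  # sweep down: switches near H_EB - H_C
    return math.tanh(4 * (h - (H_EB - H_C)))

fields = [-5 + i * 0.01 for i in range(1001)]

# Two-point method: midpoint of the two coercive fields (zero crossings)
hc_up = min(fields, key=lambda h: abs(ascending(h)))
hc_dn = min(fields, key=lambda h: abs(descending(h)))
two_point = (hc_up + hc_dn) / 2

# Center-of-mass method: field average weighted by the branch difference
dm = [descending(h) - ascending(h) for h in fields]
com = sum(h * d for h, d in zip(fields, dm)) / sum(dm)

print(round(two_point, 2), round(com, 2))  # both recover ~ -0.3
```

    For this symmetric loop the two estimates coincide, which matches the abstract's observation; they diverge once the loop branches are made asymmetric.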

  17. Toward a new methodology for measuring the threshold Shields number

    NASA Astrophysics Data System (ADS)

    Rousseau, Gauthier; Dhont, Blaise; Ancey, Christophe

    2016-04-01

    A number of bedload transport equations involve the threshold Shields number (corresponding to the threshold of incipient motion for particles resting on the streambed). Different methods have been developed for determining this threshold Shields number; they usually assume that the initial streambed is plane prior to sediment transport. Yet, there are many instances in real-world scenarios, in which the initial streambed is not free of bed forms. We are interested in developing a new methodology for determining the threshold of incipient motion in gravel-bed streams in which smooth bed forms (e.g., anti-dunes) develop. Experiments were conducted in a 10-cm wide, 2.5-m long flume, whose initial inclination was 3%. Flows were supercritical and fully turbulent. The flume was supplied with water and sediment at fixed rates. As bed forms developed and migrated, and sediment transport rates exhibited wide fluctuations, measurements had to be taken over long times (typically 10 hr). Using a high-speed camera, we recorded the instantaneous bed load transport rate at the outlet of the flume by taking top-view images. In parallel, we measured the evolution of the bed slope, water depth, and shear stress by filming through a lateral window of the flume. These measurements allowed for the estimation of the space and time-averaged slope, from which we deduced the space and time-averaged Shields number under incipient bed load transport conditions. In our experiments, the threshold Shields number was strongly dependent on streambed morphology. Experiments are under way to determine whether taking the space and time average of incipient motion experiments leads to a more robust definition of the threshold Shields number. If so, this new methodology will perform better than existing approaches at measuring the threshold Shields number.
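    The space- and time-averaged Shields number the authors deduce can be sketched from the standard definitions θ = τ / ((ρs − ρ) g d) with τ = ρ g h S for steady uniform flow; all sample values below are illustrative, not the experiment's data.

```python
# Sketch of a space- and time-averaged Shields number from slope and
# depth records, using tau = rho*g*h*S and theta = tau/((rho_s-rho)*g*d).
# Grain size, densities, and the sampled series are illustrative only.
RHO, RHO_S, G, D = 1000.0, 2650.0, 9.81, 0.006  # water, gravel, SI units

def shields(depth_m, slope):
    tau = RHO * G * depth_m * slope           # bed shear stress, Pa
    return tau / ((RHO_S - RHO) * G * D)      # dimensionless Shields number

# Hypothetical samples taken as bed forms migrate through the flume
depths = [0.030, 0.034, 0.028, 0.032]
slopes = [0.030, 0.029, 0.031, 0.030]

theta_avg = sum(shields(h, s) for h, s in zip(depths, slopes)) / len(depths)
print(round(theta_avg, 3))
```

    Averaging over long records is what tames the wide bedload fluctuations the abstract describes; a single instantaneous sample would not define a stable threshold.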

  18. Evaluation of UT Wall Thickness Measurements and Measurement Methodology

    SciTech Connect

    Weier, Dennis R.; Pardini, Allan F.

    2007-10-01

    CH2M HILL has requested that PNNL examine the ultrasonic methodology utilized in the inspection of the Hanford double shell waste tanks. Specifically, PNNL is to evaluate the UT process variability and capability to detect changes in wall thickness and to document the UT operator's techniques and methodology in the determination of the reported minimum and average UT data and how it compares to the raw (unanalyzed) UT data.

  19. A Methodology for Absolute Isotope Composition Measurement

    NASA Astrophysics Data System (ADS)

    Shen, J. J.; Lee, D.; Liang, W.

    2007-12-01

    The double spike technique is a well-established method for measuring, by TIMS, the isotope composition of samples that exhibit natural mass fractionation, but defining the isotope composition of the double spike itself remains a problem. In this study, we modified the classical double spike technique and found that the modified technique can solve for the "true" isotope composition of the double spike itself. Given the true isotope composition of the double spike, we can then measure the absolute isotope composition of a sample with natural fractionation. A new vector analytical method has been developed to obtain the true isotopic composition of a 42Ca-48Ca double spike; this is achieved by using two different sample-spike mixtures combined with the double spike and the natural Ca data. Because the natural sample, the two mixtures, and the spike should all lie on a single mixing line, we are able to constrain the true isotopic composition of our double spike using this new approach. This method can be used not only in the Ca system but also in the Ti, Cr, Fe, Ni, Zn, Mo, Ba, and Pb systems. The absolute double spike isotopic ratio is important, as it can save considerable time when checking different reference standards. For Pb in particular, a radiogenic isotope system in which three of the four naturally occurring isotopes are decay products, it is otherwise difficult to obtain the true isotopic ratios needed for absolute dating.

  20. Methodological aspects of EEG and body dynamics measurements during motion

    PubMed Central

    Reis, Pedro M. R.; Hebenstreit, Felix; Gabsteiger, Florian; von Tscharner, Vinzenz; Lochmann, Matthias

    2014-01-01

    EEG involves the recording, analysis, and interpretation of voltages recorded on the human scalp that originate from brain gray matter. EEG is one of the most popular methods of studying and understanding the processes that underlie behavior, because it is relatively cheap, easy to wear, lightweight, and has high temporal resolution. In terms of behavior, this encompasses actions, such as movements, that are performed in response to the environment. However, methodological difficulties, such as movement artifacts, can occur when recording EEG during movement. Thus, most studies of the human brain have examined activations during static conditions. This article attempts to compile and describe relevant methodological solutions that have emerged for measuring body and brain dynamics during motion. These descriptions cover suggestions on how to avoid and reduce motion artifacts, as well as hardware, software, and techniques for synchronously recording EEG, EMG, kinematics, kinetics, and eye movements during motion. Additionally, we present various recording systems, EEG electrodes, caps, and methods for determining real/custom electrode positions. We conclude that it is possible to record and analyze synchronized brain and body dynamics related to movement or exercise tasks. PMID:24715858

  1. DEVELOPMENT OF PETROLEUM RESIDUA SOLUBILITY MEASUREMENT METHODOLOGY

    SciTech Connect

    Per Redelius

    2006-03-01

    In the present study an existing spectrophotometry system was upgraded to provide high-resolution ultraviolet (UV), visible (Vis), and near infrared (NIR) analyses of test solutions to measure the relative solubilities of petroleum residua dissolved in eighteen test solvents. Test solutions were prepared by dissolving ten percent petroleum residue in a given test solvent, agitating the mixture, followed by filtration and/or centrifugation to remove insoluble materials. These solutions were finally diluted with a good solvent resulting in a supernatant solution that was analyzed by spectrophotometry to quantify the degree of dissolution of a particular residue in the suite of test solvents that were selected. Results obtained from this approach were compared with spot-test data (to be discussed) obtained from the cosponsor.

  2. Diffusions conditioned on occupation measures

    NASA Astrophysics Data System (ADS)

    Angeletti, Florian; Touchette, Hugo

    2016-02-01

    A Markov process fluctuating away from its typical behavior can be represented in the long-time limit by another Markov process, called the effective or driven process, having the same stationary states as the original process conditioned on the fluctuation observed. We construct here this driven process for diffusions spending an atypical fraction of their evolution in some region of state space, corresponding mathematically to stochastic differential equations conditioned on occupation measures. As an illustration, we consider the Langevin equation conditioned on staying for a fraction of time in different intervals of the real line, including the positive half-line which leads to a generalization of the Brownian meander problem. Other applications related to quasi-stationary distributions, metastable states, noisy chemical reactions, queues, and random walks are discussed.
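    As a toy companion to the occupation-measure conditioning, one can estimate the typical fraction of time an overdamped Langevin diffusion spends on the positive half-line by direct Euler-Maruyama simulation. This samples only the unconditioned process; constructing the driven process itself requires the generator-level machinery described above. The dynamics and parameters are illustrative assumptions, not the paper's examples.

```python
# Toy Euler-Maruyama simulation of the overdamped Langevin equation
# dX = -X dt + sqrt(2) dW, estimating the fraction of time spent on the
# positive half-line (the occupation observable being conditioned on).
import random

random.seed(0)
dt, n_steps = 0.01, 200_000
x, time_positive = 0.0, 0

for _ in range(n_steps):
    x += -x * dt + (2 * dt) ** 0.5 * random.gauss(0.0, 1.0)
    if x > 0:
        time_positive += 1

occupation_fraction = time_positive / n_steps
print(round(occupation_fraction, 2))  # typically near 0.5 by symmetry
```

    Conditioning on an atypical occupation fraction (say, 0.9) then asks for the effective process whose typical trajectories spend that fraction of time at x > 0.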

  3. Advanced quantitative measurement methodology in physics education research

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    The dissertation consists of three parts. The first part compares item response theory (IRT) and classical test theory (CTT). Both theories provide test item statistics for educational inferences and decisions, and both are applied to Force Concept Inventory data obtained from students enrolled at The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory, and that the IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation concerns measures of association for binary data. In quantitative assessment, binary data are often encountered because of their simplicity, yet the currently popular measures of association fail under some extremely unbalanced conditions, and the occurrence of such conditions is not rare in educational data. Two popular association measures, the Pearson correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages, and special attention to the test and data conditions is necessary. The last part of the dissertation focuses on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed, and typical misunderstandings and misuses of EFA are explored. The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment of scientific reasoning skills.
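    The CTT/IRT contrast in the first part can be illustrated minimally: CTT item difficulty is a sample-dependent proportion correct, while an IRT item response function (here the one-parameter logistic/Rasch model, a common choice, not necessarily the dissertation's) models success probability as a function of latent ability. The response data are invented.

```python
# Minimal CTT vs IRT contrast on invented 0/1 response data.
import math

# Hypothetical responses of five students to one item
responses = [1, 1, 0, 1, 0]

# CTT: item difficulty = proportion answering correctly (sample-dependent)
ctt_difficulty = sum(responses) / len(responses)

# IRT (Rasch): P(correct | ability theta) for an item of difficulty b
def rasch_p(theta, b):
    return 1.0 / (1.0 + math.exp(-(theta - b)))

print(ctt_difficulty)                # 0.6
print(round(rasch_p(0.0, 0.0), 2))   # 0.5: average student, average item
print(round(rasch_p(1.0, 0.0), 2))   # 0.73: higher-ability student
```

    The IRT parameters (theta, b) are defined on a common latent scale rather than tied to the particular sample, which is one sense in which they separate item features from examinee features.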

  4. Issues in Measurement and Methodology: CSE's 1978 Conference.

    ERIC Educational Resources Information Center

    Burry, James, Ed.; Quellmalz, Edys S., Ed.

    1978-01-01

    Abstracts are presented of the major conference papers and thematic discussions delivered at the 1978 Measurement and Methodology Conference. The titles of the presentations are: Policy-Responsive Evaluation (Wiley); When Educators Set Standards (Glass); Comments on Wiley and Glass (Schutz); Key Standard-Setting Considerations for Minimal…

  5. Operant theory and methodology in infant vocal conditioning.

    PubMed

    Poulson, C L

    1984-08-01

    The numerous mechanisms of behavior change in infant development are sometimes difficult to distinguish. Although it is agreed that elicitation and reinforcement both influence infant learning, the distinction between these two learning mechanisms was clarified in response to K. Bloom's (1984, Journal of Experimental Child Psychology, 38, 93-102) commentary. The theoretical and methodological assumptions of a functional analysis of infant behavior were made explicit in the context of the C. L. Poulson study (1983, Journal of Experimental Child Psychology, 36, 471-489). The rationale for the use of DRO schedules to control for elicitation effects of continuous reinforcement, and the inadequacy of noncontingent schedules for this purpose, were also discussed.

  6. Holdup measurements under realistic conditions

    SciTech Connect

    Sprinkel, J.K. Jr.; Marshall, R.; Russo, P.A.; Siebelist, R.

    1997-11-01

    This paper reviews the documentation of the precision and bias of holdup (residual nuclear material remaining in processing equipment) measurements and presents previously unreported results. Precision and bias results from training seminars with simulated holdup, which represent the best achievable results, are reported and compared to actual plutonium processing facility measurements. Holdup measurements for plutonium and uranium processing plants are also compared to reference values. Recommendations for measuring holdup are provided for highly enriched uranium facilities and for low enriched uranium facilities. The random error component of holdup measurements is smaller than the systematic error component. The most likely source of measurement error is incorrect assumptions about the measurement, such as background, measurement geometry, or signal attenuation. Measurement precision on the order of 10% can be achieved, though with some difficulty, and the bias of poor-quality holdup measurements can also be improved. However, for most facilities, holdup measurement errors have no significant impact on inventory difference, sigma, or safety (criticality, radiation, or environmental); it is therefore difficult to justify allocating more resources to improving holdup measurements. 25 refs., 10 tabs.

  7. Research Methodology: Endocrinologic Measurements in Exercise Science and Sports Medicine

    PubMed Central

    Hackney, Anthony C; Viru, Atko

    2008-01-01

    Objective: To provide background information on methodologic factors that influence and add variance to endocrine outcome measurements. Our intent is to aid and improve the quality of exercise science and sports medicine research endeavors of investigators inexperienced in endocrinology. Background: Numerous methodologic factors influence human endocrine (hormonal) measurements and, consequently, can dramatically compromise the accuracy and validity of exercise and sports medicine research. These factors can be categorized into those that are biologic and those that are procedural-analytic in nature. Recommendations: Researchers should design their studies to monitor, control, and adjust for the biologic and procedural-analytic factors discussed within this paper. By doing so, they will find less variance in their hormonal outcomes and thereby will increase the validity of their physiologic data. These actions can assist the researcher in the interpretation and understanding of endocrine data and, in turn, make their research more scientifically sound. PMID:19030142

  8. National working conditions surveys in Latin America: comparison of methodological characteristics

    PubMed Central

    Merino-Salazar, Pamela; Artazcoz, Lucía; Campos-Serna, Javier; Gimeno, David; Benavides, Fernando G.

    2015-01-01

    Background: High-quality and comparable data to monitor working conditions and health in Latin America are not currently available. In 2007, multiple Latin American countries started implementing national working conditions surveys. However, little is known about their methodological characteristics. Objective: To identify commonalities and differences in the methodologies of working conditions surveys (WCSs) conducted in Latin America through 2013. Methods: The study critically examined WCSs in Latin America between 2007 and 2013. Sampling design, data collection, and questionnaire content were compared. Results: Two types of surveys were identified: (1) surveys covering the entire working population and administered at the respondent's home and (2) surveys administered at the workplace. There was considerable overlap in the topics covered by the dimensions of employment and working conditions measured, but less overlap in terms of health outcomes, prevention resources, and activities. Conclusions: Although WCSs from Latin America are similar, there was heterogeneity across surveyed populations and location of the interview. Reducing differences in surveys between countries will increase comparability and allow for a more comprehensive understanding of occupational health in the region. PMID:26079314

  9. Operant conditioning methodology in the assessment of food preferences: introductory comments.

    PubMed

    Rashotte, M E; Smith, J C

    1984-01-01

    This paper provides a general introduction to the methodology used in the next three papers of this monograph to test dogs' food preferences. This "operant conditioning" methodology was developed in behavioral laboratories concerned with learning in animals. The logic of the methodology is described, some technical and practical aspects are summarized, and its application to the hedonic scaling of qualitatively different foods is discussed. The relation between this new methodology and the traditional two-bowl preference test is reviewed.

  10. From lab to field conditions: a pilot study on EEG methodology in applied sports sciences.

    PubMed

    Reinecke, Kirsten; Cordes, Marjolijn; Lerch, Christiane; Koutsandréou, Flora; Schubert, Michael; Weiss, Michael; Baumeister, Jochen

    2011-12-01

    Although neurophysiological aspects have become more important in sports and exercise sciences in recent years, it has not been possible to measure cortical activity during performance outside a laboratory, mainly due to equipment limits and movement artifacts. With this pilot study we investigated whether electroencephalography (EEG) data obtained during a laboratory golf putting task differ from those obtained during a comparable putting task under field conditions. Parameters of the working memory (frontal Theta and parietal Alpha 2 power) were recorded during these two conditions. Statistical calculations demonstrated a significant difference only for Theta power at F4 between the two putting conditions "field" and "laboratory". These findings support the idea that brain activity patterns obtained under laboratory conditions are comparable, but not equivalent, to those obtained under field conditions. Additionally, we were able to show that EEG methodology seems to be a reliable tool for observing brain activity under field conditions in a golf putting task. However, considering the still existing problems of movement artifacts during EEG measurements, eligible sports and exercises are limited to those that are relatively motionless during execution. Further studies are needed to confirm these pilot results. PMID:21800184
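Band powers of the kind reported here (frontal theta, parietal alpha-2) are conventionally computed by integrating a Welch power spectrum over the band limits; a minimal sketch on synthetic data (the sampling rate and signal are assumed for illustration, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

fs = 250  # Hz, assumed sampling rate
rng = np.random.default_rng(0)
t = np.arange(10 * fs) / fs
# Synthetic single-channel trace: a 6 Hz theta rhythm buried in noise (illustrative only).
eeg = 2.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0.0, 1.0, t.size)

f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
df = f[1] - f[0]
theta = psd[(f >= 4) & (f < 8)].sum() * df     # theta band power (4-8 Hz)
alpha2 = psd[(f >= 10) & (f < 13)].sum() * df  # alpha-2 band power (10-13 Hz)
print(theta > alpha2)  # the injected theta rhythm dominates
```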

  11. Methodology of high-resolution photography for mural condition database

    NASA Astrophysics Data System (ADS)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques to record the condition of cultural heritage. Recently, high-resolution images become increasingly useful because it is possible to show general views of mural paintings and also detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been an impediment for researchers and conservators. However, the recent development of graphic software makes its operation simpler and less expensive. In this paper, we suggest a new approach to make digital heritage inventories without special instruments, based on our recent our research project in Üzümlü church in Cappadocia, Turkey. This method enables us to achieve a high-resolution image database with low costs, short time, and limited human resources.

  12. Physiological outflow boundary conditions methodology for small arteries with multiple outlets: a patient-specific hepatic artery haemodynamics case study.

    PubMed

    Aramburu, Jorge; Antón, Raúl; Bernal, Nebai; Rivas, Alejandro; Ramos, Juan Carlos; Sangro, Bruno; Bilbao, José Ignacio

    2015-04-01

    Physiological outflow boundary conditions are necessary to carry out computational fluid dynamics simulations that reliably represent the blood flow through arteries. When dealing with complex three-dimensional trees of small arteries, and therefore with multiple outlets, the robustness and speed of convergence are also important. This study derives physiological outflow boundary conditions for cases in which the physiological values at those outlets are not known (neither in vivo measurements nor literature-based values are available) and in which the tree exhibits symmetry to some extent. The inputs of the methodology are the three-dimensional domain and the flow rate waveform and the systolic and diastolic pressures at the inlet. The derived physiological outflow boundary conditions, which are a physiological pressure waveform for each outlet, are based on the results of a zero-dimensional model simulation. The methodology assumes symmetrical branching and is able to tackle the flow distribution problem when the domain outlets are at branches with a different number of upstream bifurcations. The methodology is applied to a group of patient-specific arteries in the liver. The methodology is considered to be valid because the pulsatile computational fluid dynamics simulation with the inflow flow rate waveform (input of the methodology) and the derived outflow boundary conditions lead to physiological results, that is, the resulting systolic and diastolic pressures at the inlet match the inputs of the methodology, and the flow split is also physiological.
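The zero-dimensional step can be pictured with a lumped-parameter sketch: flow splits symmetrically at each bifurcation, and each outlet is closed with a two-element Windkessel whose pressure is integrated over several cycles. This is not the study's actual model; the waveform and all parameter values below are hypothetical:

```python
import numpy as np

# Illustrative inlet flow waveform (mL/s) over a 1 s cardiac cycle.
dt = 0.001
t = np.arange(0.0, 1.0, dt)
q_in = 2.0 + 1.5 * np.maximum(np.sin(2 * np.pi * t), 0.0)

# Symmetric branching: an outlet behind n bifurcations receives q_in / 2**n.
n_bif = 3
q_out = np.tile(q_in / 2**n_bif, 10)  # repeat 10 cycles to approach a periodic state

# Two-element Windkessel at the outlet: C dp/dt = q - p / R (explicit Euler).
R = 320.0  # mmHg*s/mL, hypothetical peripheral resistance
C = 0.01   # mL/mmHg, hypothetical compliance
p = np.empty(q_out.size)
p[0] = 80.0
for i in range(p.size - 1):
    p[i + 1] = p[i] + dt * (q_out[i] - p[i] / R) / C

last_cycle = p[-t.size:]
print(f"outlet pressure ~ {last_cycle.min():.0f}-{last_cycle.max():.0f} mmHg")
```

The resulting pressure waveform settles into a physiological range set by the product of mean outlet flow and resistance, which is the sense in which the derived outflow conditions can be checked against the systolic and diastolic inlet pressures.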

  14. Methodology for measurement in schools and kindergartens: experiences.

    PubMed

    Fojtíková, I; Navrátilová Rovenská, K

    2015-06-01

    In more than 1500 schools and preschool facilities, long-term radon measurement was carried out in the last 3 y. The negative effect of thermal retrofitting on the resulting long-term radon averages is evident. In some of the facilities, low ventilation rates and correspondingly high radon levels were found, so it was recommended to change ventilation habits. However, some of the facilities had high radon levels due to its ingress from soil gas. Technical measures should be undertaken to reduce radon exposure in this case. The paper presents the long-term experiences with the two-stage measurement methodology for investigation of radon levels in school and preschool facilities and its possible improvements. PMID:25979748

  16. Select Methodology for Validating Advanced Satellite Measurement Systems

    NASA Technical Reports Server (NTRS)

    Larar, Allen M.; Zhou, Daniel K.; Liu, Xi; Smith, William L.

    2008-01-01

    Advanced satellite sensors are tasked with improving global measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring capability, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns including satellite under-flights with well calibrated FTS sensors aboard high-altitude aircraft are an essential part of the validation task. This presentation focuses on an overview of validation methodology developed for assessment of high spectral resolution infrared systems, and includes results of preliminary studies performed to investigate the performance of the Infrared Atmospheric Sounding Interferometer (IASI) instrument aboard the MetOp-A satellite.

  17. Methodological considerations for measuring glucocorticoid metabolites in feathers.

    PubMed

    Berk, Sara A; McGettrick, Julie R; Hansen, Warren K; Breuner, Creagh W

    2016-01-01

    In recent years, researchers have begun to use corticosteroid metabolites in feathers (fCORT) as a metric of stress physiology in birds. However, there remain substantial questions about how to measure fCORT most accurately. Notably, small samples contain artificially high amounts of fCORT per millimetre of feather (the small sample artefact). Furthermore, it appears that fCORT is correlated with circulating plasma corticosterone only when levels are artificially elevated by the use of corticosterone implants. Here, we used several approaches to address current methodological issues with the measurement of fCORT. First, we verified that the small sample artefact exists across species and feather types. Second, we attempted to correct for this effect by increasing the amount of methanol relative to the amount of feather during extraction. We consistently detected more fCORT per millimetre or per milligram of feather in small samples than in large samples even when we adjusted methanol:feather concentrations. We also used high-performance liquid chromatography to identify hormone metabolites present in feathers and measured the reactivity of these metabolites against the most commonly used antibody for measuring fCORT. We verified that our antibody is mainly identifying corticosterone (CORT) in feathers, but other metabolites have significant cross-reactivity. Lastly, we measured faecal glucocorticoid metabolites in house sparrows and correlated these measurements with corticosteroid metabolites deposited in concurrently grown feathers; we found no correlation between faecal glucocorticoid metabolites and fCORT. We suggest that researchers should be cautious in their interpretation of fCORT in wild birds and should seek alternative validation methods to examine species-specific relationships between environmental challenges and fCORT.

  20. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  1. Innovative methodologies and technologies for thermal energy release measurement.

    NASA Astrophysics Data System (ADS)

    Marotta, Enrica; Peluso, Rosario; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Chiodini, Giovanni; Mangiacapra, Annarita; Petrillo, Zaccaria; Sansivero, Fabio; Vilardo, Giuseppe; Marfe, Barbara

    2016-04-01

    Volcanoes exchange heat, gases and other fluids between the interior of the Earth and its atmosphere, influencing processes both at the surface and above it. This work is devoted to improving knowledge of the parameters that control the anomalies in heat flux and chemical species emissions associated with the diffuse degassing processes of volcanic and hydrothermal zones. We are studying and developing innovative medium-range remote sensing technologies to measure the variations through time of heat flux and chemical emissions, in order to improve the definition of the activity state of a volcano and allow a better assessment of the related hazard and risk mitigation. The current methodologies used to measure heat flux (i.e. CO2 flux or temperature gradient) are either inefficient or ineffective, and are unable to detect short- to medium-term (days to months) variation trends in the heat flux. Remote sensing of these parameters will allow measurements faster than the already accredited methods; it will therefore be both more effective and more efficient in an emergency, and it can be used for quick routine monitoring. We are currently developing a method based on drone-borne IR cameras to measure the ground surface temperature which, in a purely conductive regime, is directly correlated with the shallow temperature gradient. The use of flying drones will allow us to quickly map areas with thermal anomalies and to measure their temperature at distances on the order of hundreds of metres. Further development of remote sensing will involve the use, on flying drones, of multispectral and/or hyperspectral sensors and UV scanners, in order to detect the amount of chemical species released into the atmosphere.
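In the purely conductive regime mentioned above, a surface temperature reading converts to heat flux through Fourier's law; the conductivity and probe depth below are assumed, illustrative values, not field data:

```python
# Conductive heat flux from a shallow temperature gradient (Fourier's law):
# q = k * (T_surface - T_depth) / depth. Values are illustrative, not field data.
K_SOIL = 0.8  # W/(m*K), assumed thermal conductivity of dry soil
DEPTH = 0.10  # m, assumed depth of a shallow temperature probe

def heat_flux(t_surface_c, t_depth_c):
    """Conductive flux (W/m^2) from an IR surface reading and a probe at DEPTH."""
    gradient = (t_surface_c - t_depth_c) / DEPTH
    return K_SOIL * gradient

# A 55 degC surface anomaly over a 40 degC probe reading 10 cm down:
print(heat_flux(55.0, 40.0))  # ~120 W/m^2
```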

  2. Measuring persistence: A literature review focusing on methodological issues

    SciTech Connect

    Wolfe, A.K.; Brown, M.A.; Trumble, D.

    1995-03-01

    This literature review was conducted as part of a larger project to produce a handbook on the measurement of persistence. The past decade has marked the development of the concept of persistence and a growing recognition that the long-term impacts of demand-side management (DSM) programs warrant careful assessment. Although Increasing attention has been paid to the topic of persistence, no clear consensus has emerged either about its definition or about the methods most appropriate for its measurement and analysis. This project strives to fill that gap by reviewing the goals, terminology, and methods of past persistence studies. It was conducted from the perspective of a utility that seeks to acquire demand-side resources and is interested in their long-term durability; it was not conducted from the perspective of the individual consumer. Over 30 persistence studies, articles, and protocols were examined for this report. The review begins by discussing the underpinnings of persistence studies: namely, the definitions of persistence and the purposes of persistence studies. Then. it describes issues relevant to both the collection and analysis of data on the persistence of energy and demand savings. Findings from persistence studies also are summarized. Throughout the review, four studies are used repeatedly to illustrate different methodological and analytical approaches to persistence so that readers can track the data collection. data analysis, and findings of a set of comprehensive studies that represent alternative approaches.

  3. Adaptability of laser diffraction measurement technique in soil physics methodology

    NASA Astrophysics Data System (ADS)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

    Around the world there are efforts to harmonize soils' particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), the PSDs of the sedimentation methods (which follow different standards) are themselves dissimilar and can hardly be harmonized with each other. A need therefore arose to build a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of the statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM data, the clay fraction was highly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. Based on previous surveys, and in order to optimize the agreement between the two datasets, the clay/silt boundary for the LDM was changed. Comparing the pipette-method PSDs with those of the LDM, the modified size limits gave higher similarity for the clay and silt fractions. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm (and correspondingly shifting the lower size limit of the silt fraction) makes the pipette method and LDM more readily comparable. With the modified limit, higher correlations were also found between clay content and water vapour adsorption and specific surface area, and the texture classes were less dissimilar. The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking other routinely analysed soil parameters into account.
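The effect of moving the clay/silt boundary can be sketched by re-reading fractions off a cumulative PSD curve at different size limits; the curve below is hypothetical, not the study's data:

```python
import numpy as np

# Hypothetical cumulative LDM particle-size distribution: percent finer than each size (mm).
sizes = np.array([0.0002, 0.002, 0.0066, 0.02, 0.05, 0.25, 2.0])
pct_finer = np.array([4.0, 14.0, 27.0, 48.0, 71.0, 93.0, 100.0])

def fractions(clay_limit):
    """Clay/silt/sand percentages for a given clay upper size limit (mm)."""
    clay = np.interp(clay_limit, sizes, pct_finer)
    silt = np.interp(0.05, sizes, pct_finer) - clay  # silt ends at 0.05 mm here
    sand = 100.0 - clay - silt
    return clay, silt, sand

print(fractions(0.002))   # conventional 0.002 mm boundary
print(fractions(0.0066))  # boundary the study proposes for LDM
```

Shifting the boundary from 0.002 to 0.0066 mm moves mass from the silt fraction into the clay fraction, which is exactly the direction needed to offset the LDM's clay underestimation relative to the pipette method.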

  4. Optimization of Preparation Conditions for Lysozyme Nanoliposomes Using Response Surface Methodology and Evaluation of Their Stability.

    PubMed

    Wu, Zhipan; Guan, Rongfa; Lyu, Fei; Liu, Mingqi; Gao, Jianguo; Cao, Guozou

    2016-01-01

    The main purpose of this study was to optimize the preparation of lysozyme nanoliposomes using response surface methodology and to measure their stability. The stability of lysozyme nanoliposomes in simulated gastric fluid (SGF) and simulated intestinal fluid (SIF), as well as under varying pH, temperature and sonication treatment times, was evaluated. Reverse-phase evaporation is a simple, fast, and effective approach for nanoliposome preparation and optimization. The optimal preparative conditions were as follows: phosphatidylcholine-to-cholesterol ratio of 3.86, lysozyme concentration of 1.96 mg/mL, magnetic stirring time of 40.61 min, and ultrasound time of 14.15 min. At the optimal point, encapsulation efficiency and particle size were found to be 75.36% ± 3.20% and 245.6 nm ± 5.2 nm, respectively. The lysozyme nanoliposomes demonstrated reasonable stability in SGF and SIF at 37 °C for 4 h, and short sonication times sufficed to obtain nano-scaled liposomes. Under conditions of high temperature, acidity or alkalinity, however, lysozyme nanoliposomes are unstable. PMID:27338315
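Response-surface optimization of this kind fits a second-order polynomial to a designed experiment and solves for its stationary point. A two-factor sketch on a synthetic response (coded levels and all values are hypothetical, not the study's data):

```python
import numpy as np
from itertools import product

# Hypothetical two-factor design in coded levels (-1, 0, +1): e.g. stirring time
# and ultrasound time, with a synthetic encapsulation-efficiency response.
levels = [-1.0, 0.0, 1.0]
X = np.array(list(product(levels, levels)))

def true_response(x1, x2):
    # Synthetic surface with its maximum at (0.2, 0.4) and no interaction term.
    return 75.0 - 4.0 * (x1 - 0.2) ** 2 - 6.0 * (x2 - 0.4) ** 2

y = np.array([true_response(x1, x2) for x1, x2 in X])

# Fit the second-order model y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b = np.linalg.lstsq(A, y, rcond=None)[0]

# Stationary point of the fitted quadratic (exact here because the fitted b12 is 0).
x1_opt = -b[1] / (2.0 * b[3])
x2_opt = -b[2] / (2.0 * b[4])
print(round(x1_opt, 3), round(x2_opt, 3))  # recovers 0.2 0.4
```

The same machinery, with real responses in place of the synthetic ones, yields optima like the stirring and ultrasound times reported above.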

  5. [Measuring the intracoronary pressure gradient--value and methodologic limitations].

    PubMed

    Sievert, H; Kaltenbach, M

    1987-06-01

    Measurements of pressure gradients were performed in a fluid-filled model. The hydrostatically regulated perfusion pressure, the diameter of the tube segments, and the regulation of flow by peripheral resistance were comparable to conditions in human coronary arteries. Pressure gradients above 20 mm Hg were only measured with a reduction in cross-sectional area of more than 90%. Even after increasing the flow four-fold, which corresponds to the human coronary flow reserve, and after probing the stenosis with different catheters (2F-5F), gradients greater than 20 mm Hg were only recorded with high-grade stenoses (more than 80% reduction in cross-sectional area). The findings in this model demonstrate that measurement of pressure gradients allows only a qualitative differentiation between high-grade (greater than 80%) and low-grade (less than 80%) stenoses. The catheter itself can substantially contribute to the gradient by vessel obstruction, depending on the diameter of the catheter and of the coronary vessel. A quantitative assessment of the stenosis therefore requires knowledge of the pre- and post-stenotic vessel diameter as well as of the catheter diameter. However, pressure measurements during transluminal coronary angioplasty should not be abandoned. They can be useful to aid catheter positioning and to estimate dilatation efficacy. Moreover, measurement of coronary capillary wedge pressure during balloon expansion provides valuable information about the extent of collateralisation. PMID:2957862
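The steep dependence of the gradient on stenosis severity follows from flow continuity plus a simplified Bernoulli relation; this is an idealized estimate (viscous losses and the catheter's own obstruction are ignored), and the velocity and density below are assumed values, not the study's:

```python
# Simplified Bernoulli estimate of the trans-stenotic gradient:
# dP = (rho/2) * v0**2 * ((A0/As)**2 - 1), with v0 the pre-stenotic velocity.
RHO = 1060.0           # blood density, kg/m^3
V0 = 0.25              # assumed pre-stenotic velocity, m/s
PA_PER_MMHG = 133.322

def gradient_mmhg(area_reduction):
    ratio = 1.0 / (1.0 - area_reduction)  # A0 / As from flow continuity
    dp_pa = 0.5 * RHO * V0 ** 2 * (ratio ** 2 - 1.0)
    return dp_pa / PA_PER_MMHG

for red in (0.80, 0.90, 0.95):
    print(f"{red:.0%} area reduction -> {gradient_mmhg(red):.0f} mmHg")
```

Even in this idealized form, the gradient only clears 20 mm Hg somewhere beyond 80-90% area reduction, consistent with the binary high-grade/low-grade discrimination the model study reports.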

  6. Conditional Probabilities and Collapse in Quantum Measurements

    NASA Astrophysics Data System (ADS)

    Laura, Roberto; Vanni, Leonardo

    2008-09-01

    We show that including both the system and the apparatus in the quantum description of the measurement process, and using the concept of conditional probabilities, it is possible to deduce the statistical operator of the system after a measurement with a given result, which gives the probability distribution for all possible consecutive measurements on the system. This statistical operator, representing the state of the system after the first measurement, is in general not the same that would be obtained using the postulate of collapse.
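The deduction described here can be illustrated numerically: the conditional (post-measurement) statistical operator follows from projecting the density operator and renormalizing, and it then yields the probability distribution for a consecutive measurement. A minimal qubit sketch (the specific states and observables are chosen for illustration):

```python
import numpy as np

# Qubit prepared in |+>; first measure Z, then X. The state conditional on a
# Z = +1 result follows from projecting the density operator and renormalizing.
plus = np.array([1.0, 1.0]) / np.sqrt(2.0)
rho = np.outer(plus, plus)                 # statistical operator of |+>

P_up = np.array([[1.0, 0.0], [0.0, 0.0]])  # projector onto |0> (Z = +1)
prob_up = np.trace(P_up @ rho @ P_up)
rho_after = P_up @ rho @ P_up / prob_up    # conditional state after the first result

# Conditional probability of a subsequent X = +1 outcome given Z = +1.
P_xplus = np.outer(plus, plus)             # projector onto |+> (X = +1)
prob_x_given_up = np.trace(P_xplus @ rho_after)
print(round(prob_up, 12), round(prob_x_given_up, 12))  # 0.5 0.5
```

For this projective case the conditional state coincides with what the collapse postulate would give, which is the comparison the abstract draws.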

  7. The Role of Measuring in the Learning of Environmental Occupations and Some Aspects of Environmental Methodology

    ERIC Educational Resources Information Center

    István, Lüko

    2016-01-01

    Methodology is a neglected area of pedagogy, and within it the teaching methodology for environmental specialists is the least cultivated part. In this article I attempt to present environmental methodology through one part of the University of West Environmental Education workshop, based on the measurement of experiential learning as an…

  8. Responsive Surface Methodology Optimizes Extraction Conditions of Industrial by-products, Camellia japonica Seed Cake

    PubMed Central

    Kim, Jae Kyeom; Lim, Ho-Jeong; Kim, Mi-So; Choi, Soo Jung; Kim, Mi-Jeong; Kim, Cho Rong; Shin, Dong-Hoon; Shin, Eui-Cheol

    2016-01-01

    Background: The central nervous system is easily damaged by oxidative stress due to high oxygen consumption and poor defensive capacity. Hence, multiple studies have demonstrated that inhibiting oxidative stress-induced damage, through an antioxidant-rich diet, might be a reasonable approach to prevent neurodegenerative disease. Objective: In the present study, response surface methodology was utilized to optimize the extraction of neuro-protective constituents of Camellia japonica byproducts. Materials and Methods: Rat pheochromocytoma cells were used to evaluate the protective potential of Camellia japonica byproducts. Results: Optimum conditions were 33.84 min, 75.24%, and 75.82°C for time, ethanol concentration and temperature, respectively. Further, we demonstrated that major organic acid contents were significantly impacted by the extraction conditions, which may explain the varying magnitude of protective potential between fractions. Conclusions: Given the paucity of information regarding defatted C. japonica seed cake and its health-promoting potential, our results provide interesting preliminary data for the utilization of this byproduct of oil processing in both academic and industrial applications. SUMMARY: Neuro-protective potential of C. japonica seed cake on cell viability was affected by extraction conditions; extraction conditions effectively influenced the active constituents of C. japonica seed cake; biological activity of C. japonica seed cake was optimized by response surface methodology. Abbreviations used: GC-MS: Gas chromatography-mass spectrometer; MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide; PC12 cells: Pheochromocytoma; RSM: Response surface methodology. PMID:27601847

  10. Weak measurement and Bohmian conditional wave functions

    SciTech Connect

    Norsen, Travis; Struyve, Ward

    2014-11-15

    It was recently pointed out and demonstrated experimentally by Lundeen et al. that the wave function of a particle (more precisely, the wave function possessed by each member of an ensemble of identically-prepared particles) can be “directly measured” using weak measurement. Here it is shown that if this same technique is applied, with appropriate post-selection, to one particle from a perhaps entangled multi-particle system, the result is precisely the so-called “conditional wave function” of Bohmian mechanics. Thus, a plausibly operationalist method for defining the wave function of a quantum mechanical sub-system corresponds to the natural definition of a sub-system wave function which Bohmian mechanics uniquely makes possible. Similarly, a weak-measurement-based procedure for directly measuring a sub-system’s density matrix should yield, under appropriate circumstances, the Bohmian “conditional density matrix” as opposed to the standard reduced density matrix. Experimental arrangements to demonstrate this behavior–and also thereby reveal the non-local dependence of sub-system state functions on distant interventions–are suggested and discussed. - Highlights: • We study a “direct measurement” protocol for wave functions and density matrices. • Weakly measured states of entangled particles correspond to Bohmian conditional states. • Novel method of observing quantum non-locality is proposed.

  11. Bone drilling methodology and tool based on position measurements.

    PubMed

    Díaz, Iñaki; Gil, Jorge Juan; Louredo, Marcos

    2013-11-01

    Bone drilling, despite being a very common procedure in hospitals around the world, becomes very challenging when performed close to organs such as the cochlea or when depth control is critical for avoiding damage to surrounding tissue. To date, several mechatronic prototypes have been proposed to assist surgeons by automatically detecting bone layer transitions and breakthroughs. However, none of them is currently accurate enough to be part of the surgeon's standard equipment. The present paper shows a test bench specially designed to evaluate prior methodologies and analyze their drawbacks. Afterward, a new layer detection methodology with improved performance is described and tested. Finally, the prototype of a portable mechatronic bone drill that takes advantage of the proposed detection algorithm is presented. PMID:23522964

  12. Methodology of C factor verification in conditions of the Czech Republic

    NASA Astrophysics Data System (ADS)

    Dvorakova, T.; Dostal, T.; David, V.; Kavka, P.; Krasa, J.; Koudelka, P.

    2012-04-01

    The Universal Soil Loss Equation (USLE) is a widely used tool for the assessment of soil erosion in the Czech Republic as well as in many other countries. The C factor is one of the six factors composing the USLE; it represents vegetation cover and management on agricultural land. Its values were derived by comparing the soil loss from a plot with a given crop and management against the soil loss from a plot kept as continuous fallow. The influence of vegetation cover on soil loss varies during the growing season, so it is important to determine a representative value of the C factor for each vegetative stage. According to the original methodology, the year is divided into 5 or 6 periods and the expected protective effect for each period is calculated separately. C factor values were first published in the Czech Republic in 1984 and have never been revised since. The values were taken from a US catalog, and it is uncertain whether they were ever verified for our conditions; in fact, the C factor depends strongly on the technology and climate of the particular country. Crop varieties are bred differently and their protective effect may differ, and for some crops no C factor value exists at all. The Department of Irrigation, Drainage and Landscape Engineering, Faculty of Civil Engineering, CTU in Prague has dealt with the protection of soil and water for many years. For several years it has operated an experimental basin containing three erosion plots identical to those from which the USLE was derived. On these plots the C factor can be measured easily, but compiling a new catalog of values this way would take a very long time, so the Department acquired a mobile rainfall simulator to obtain a larger data set quickly. The parameters of this instrument and the methodology for determining individual C factor values with the mobile rainfall simulator are the subject of this contribution. Water erosion is most affected by rainfall events
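
The season-weighted averaging behind catalog C factor values can be sketched as follows: per-period soil-loss ratios (cropped plot vs. continuous fallow) are weighted by each period's share of annual rainfall erosivity. All numbers below are purely illustrative, not values from the Czech catalog or the USLE handbook:

```python
def c_factor(soil_loss_ratios, erosivity_fractions):
    """Season-weighted cover-management factor: per-period soil-loss
    ratios (cropped vs. continuous fallow) weighted by each period's
    share of annual rainfall erosivity."""
    assert abs(sum(erosivity_fractions) - 1.0) < 1e-9
    return sum(s * r for s, r in zip(soil_loss_ratios, erosivity_fractions))

# Five crop-stage periods (seedbed ... residue); numbers are invented.
slr = [0.65, 0.50, 0.35, 0.20, 0.15]
ei_share = [0.10, 0.20, 0.35, 0.25, 0.10]
c = c_factor(slr, ei_share)
```

A rainfall-simulator campaign would measure the soil-loss ratios directly on the paired plots; the erosivity shares come from long-term precipitation records.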

  13. Mass-based condition measures and their relationship with fitness: in what condition is condition?

    PubMed Central

    Barnett, Craig A.; Suzuki, Toshitaka N.; Sakaluk, Scott K.; Thompson, Charles F.

    2015-01-01

    Mass or body-size measures of ‘condition’ are of central importance to the study of ecology and evolution, and it is often assumed that differences in condition measures are positively and linearly related to fitness. Using examples drawn from ecological studies, we show that indices of condition frequently are unlikely to be related to fitness in a linear fashion. Researchers need to be more explicit in acknowledging the limitations of mass-based condition measures and accept that, under some circumstances, they may not relate to fitness as traditionally assumed. Any relationship between a particular condition measure and fitness should first be empirically validated before condition is used as a proxy for fitness. In the absence of such evidence, researchers should explicitly acknowledge that assuming such a relationship may be unrealistic. PMID:26019406

  14. Calibration methodology for proportional counters applied to yield measurements of a neutron burst.

    PubMed

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
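
The core charge-to-count estimate described in the abstract can be sketched in a few lines: a pulse-mode calibration gives the single-event charge distribution, and the burst's accumulated charge divided by the mean single-event charge estimates the number of detected events. The Gaussian charge model and all numbers are illustrative assumptions, not the paper's calibration data:

```python
import random

def estimate_event_count(burst_charge, cal_charges):
    """Estimate the number of detected events in a burst from the
    accumulated charge, using the single-event charges recorded in a
    pulse-mode calibration run (hypothetical helper)."""
    n_cal = len(cal_charges)
    mean_q = sum(cal_charges) / n_cal
    n_hat = burst_charge / mean_q
    # Single-event charge spread plus Poisson counting statistics.
    var_q = sum((q - mean_q) ** 2 for q in cal_charges) / (n_cal - 1)
    rel_err = ((var_q / mean_q ** 2) / n_hat + 1.0 / n_hat) ** 0.5
    return n_hat, rel_err

random.seed(1)
# Calibration: resolved single-event charges (arbitrary units).
cal = [random.gauss(10.0, 2.0) for _ in range(5000)]
# Burst: 1000 events too close in time to be resolved individually.
burst_charge = sum(random.gauss(10.0, 2.0) for _ in range(1000))
n_hat, rel_err = estimate_event_count(burst_charge, cal)
```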

  15. Measurement-based auralization methodology for the assessment of noise mitigation measures

    NASA Astrophysics Data System (ADS)

    Thomas, Pieter; Wei, Weigang; Van Renterghem, Timothy; Botteldooren, Dick

    2016-09-01

    The effect of noise mitigation measures is generally expressed by noise levels only, neglecting the listener's perception. In this study, an auralization methodology is proposed that enables an auditive preview of noise abatement measures for road traffic noise, based on the direction dependent attenuation of a priori recordings made with a dedicated 32-channel spherical microphone array. This measurement-based auralization has the advantage that all non-road traffic sounds that create the listening context are present. The potential of this auralization methodology is evaluated through the assessment of the effect of an L-shaped mound. The angular insertion loss of the mound is estimated by using the ISO 9613-2 propagation model, the Pierce barrier diffraction model and the Harmonoise point-to-point model. The realism of the auralization technique is evaluated by listening tests, indicating that listeners had great difficulty in differentiating between a posteriori recordings and auralized samples, which shows the validity of the followed approaches.
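
A toy version of the direction-dependent attenuation step might look like this; the channel count, insertion-loss values, and broadband gain-per-direction model are assumptions for illustration (real work would derive the angular insertion loss from ISO 9613-2, the Pierce model, or Harmonoise, and filter per frequency band):

```python
import numpy as np

rng = np.random.default_rng(0)
n_dirs, n_samples = 8, 1000
# Stand-ins for beamformed direction channels from the spherical array.
signals = rng.normal(size=(n_dirs, n_samples))
# Angular insertion loss of the mound per direction, in dB (invented).
il_db = np.array([12.0, 10.0, 8.0, 5.0, 3.0, 2.0, 1.0, 0.0])
gains = 10.0 ** (-il_db / 20.0)        # dB attenuation -> linear gain
mitigated = gains[:, None] * signals   # attenuate each direction
auralized = mitigated.sum(axis=0)      # downmix for the listening test
```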

  16. Methodology for determining time-dependent mechanical properties of tuff subjected to near-field repository conditions

    SciTech Connect

    Blacic, J.D.; Andersen, R.

    1983-01-01

    We have established a methodology to determine the time dependence of strength and transport properties of tuff under conditions appropriate to a nuclear waste repository. Exploratory tests to determine the approximate magnitudes of thermomechanical property changes are nearly complete. In this report we describe the capabilities of an apparatus designed to precisely measure the time-dependent deformation and permeability of tuff at simulated repository conditions. Preliminary tests with this new apparatus indicate that microclastic creep failure of tuff occurs over a narrow strain range, with little precursory tertiary creep behavior. In one test, deformation under conditions of slowly decreasing effective pressure resulted in failure, whereas some strain indicators showed a decreasing rate of strain.

  17. Conditional Density Estimation in Measurement Error Problems.

    PubMed

    Wang, Xiao-Feng; Ye, Deping

    2015-01-01

    This paper is motivated by a wide range of background correction problems in gene array data analysis, where the raw gene expression intensities are measured with error. Estimating a conditional density function from the contaminated expression data is a key aspect of statistical inference and visualization in these studies. We propose re-weighted deconvolution kernel methods to estimate the conditional density function in an additive error model, when the error distribution is known as well as when it is unknown. Theoretical properties of the proposed estimators are investigated with respect to the mean absolute error from a "double asymptotic" view. Practical rules are developed for the selection of smoothing-parameters. Simulated examples and an application to an Illumina bead microarray study are presented to illustrate the viability of the methods. PMID:25284902

  18. Alcohol Measurement Methodology in Epidemiology: Recent Advances and Opportunities

    PubMed Central

    Greenfield, Thomas K.; Kerr, William C.

    2009-01-01

    Aim: To review and discuss measurement issues in survey assessment of alcohol consumption for epidemiological studies. Methods: The following areas are considered: implications of cognitive studies of question answering, such as self-referenced schemata of drinking, reference period, and retrospective recall, as well as the assets and liabilities of types of current (e.g., food frequency, quantity-frequency, graduated frequencies, and heavy drinking indicators) and lifetime drinking measures. Finally, we consider units of measurement and improving measurement by detailing the ethanol content of drinks in natural settings. Results and conclusions: Cognitive studies suggest inherent limitations in the measurement enterprise, yet diary studies show promise of broadly validating methods that assess a range of drinking amounts per occasion; improvements in survey measures of drinking over the life course are indicated; attending in detail to on- and off-premise drink pour sizes and ethanol concentrations of various beverages shows promise of narrowing the coverage gap plaguing survey alcohol measurement. PMID:18422826

  19. Response Surface Methodology: An Extensive Potential to Optimize in vivo Photodynamic Therapy Conditions

    SciTech Connect

    Tirand, Loraine; Bastogne, Thierry; Bechet, Denise M.Sc.; Linder, Michel; Thomas, Noemie; Frochot, Celine; Guillemin, Francois; Barberi-Heyob, Muriel

    2009-09-01

    Purpose: Photodynamic therapy (PDT) is based on the interaction of a photosensitizing (PS) agent, light, and oxygen. Few new PS agents reach the in vivo stage, partly because of the difficulty of finding the right treatment conditions. Response surface methodology, an empirical modeling approach based on data resulting from a set of designed experiments, was suggested as a rational way to select in vivo PDT conditions, using a new PS agent conjugated to a peptide targeting neuropilin-1. Methods and Materials: A Doehlert experimental design was selected to model the effects and interactions of PS dose, fluence, and fluence rate on the growth of U87 human malignant glioma cell xenografts in nude mice, using a fixed drug-light interval. All experimental results were computed with Nemrod-W software and Matlab. Results: The intrinsic diameter growth rate, a tumor growth parameter independent of the initial tumor volume, was selected as the response variable and was compared to tumor growth delay and relative tumor volumes. With only 13 experimental conditions tested, an optimal PDT condition was selected (PS agent dose, 2.80 mg/kg; fluence, 120 J/cm²; fluence rate, 85 mW/cm²). Treatment of glioma-bearing mice with the peptide-conjugated PS agent, followed by the optimized PDT condition, showed a statistically significant improvement in delaying tumor growth compared with animals that received PDT with the nonconjugated PS agent. Conclusions: Response surface methodology appears to be a useful experimental approach for rapid testing of different treatment conditions and determination of optimal values of PDT factors for any PS agent.

  20. A methodology for hard/soft information fusion in the condition monitoring of aircraft

    NASA Astrophysics Data System (ADS)

    Bernardo, Joseph T.

    2013-05-01

    Condition-based maintenance (CBM) refers to the philosophy of performing maintenance when the need arises, based upon indicators of deterioration in the condition of the machinery. Traditionally, CBM involves equipping machinery with electronic sensors that continuously monitor components and collect data for analysis. The addition of the multisensory capability of human cognitive functions (i.e., sensemaking, problem detection, planning, adaptation, coordination, naturalistic decision making) to traditional CBM may create a fuller picture of machinery condition. Cognitive systems engineering techniques provide an opportunity to utilize a dynamic resource: people acting as soft sensors. The literature is extensive on techniques to fuse data from electronic sensors, but little work exists on fusing data from humans with that from electronic sensors (i.e., hard/soft fusion). The purpose of my research is to explore, observe, investigate, analyze, and evaluate the fusion of pilot and maintainer knowledge, experiences, and sensory perceptions with digital maintenance resources. Hard/soft information fusion has the potential to increase problem detection capability, improve flight safety, and increase mission readiness. This proposed project consists of the creation of a methodology based upon the Living Laboratories framework, a research methodology built upon cognitive engineering principles [1]. This study performs a critical assessment of the concept, which will support development of activities to demonstrate hard/soft information fusion in operationally relevant scenarios of aircraft maintenance. It consists of fieldwork and knowledge elicitation to inform a simulation and a prototype.

  1. Application of Response Surface Methodology to Evaluation of Bioconversion Experimental Conditions

    PubMed Central

    Cheynier, Véronique; Feinberg, Max; Chararas, Constantin; Ducauze, Christian

    1983-01-01

    Using Candida tenuis, a yeast isolated from the digestive tube of the larva of Phoracantha semipunctata (Cerambycidae, Coleoptera), we were able to demonstrate the bioconversion of citronellal to citronellol. Response surface methodology was used to achieve the optimization of the experimental conditions for that bioconversion process. To study the proposed second-order polynomial model, we used a central composite experimental design with multiple linear regression to estimate the model coefficients of the five selected factors believed to influence the bioconversion process. Only four were demonstrated to be predominant: the incubation pH, temperature, time, and the amount of substrate. The best reduction yields (close to 90%) were obtained with alkaline pH conditions (pH 7.5), a low temperature (25°C), a small amount of substrate (15 μl), and short incubation time (16 h). This methodology was very efficient: only 36 experiments were necessary to assess these conditions, and model adequacy was very satisfactory as the coefficient of determination was 0.9411. PMID:16346211
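
The model-fitting step behind this kind of optimization can be sketched as follows: fit a second-order polynomial to responses from a central composite design by least squares, then solve for the stationary point of the fitted surface. The two-factor design, simulated yields, and optimum location below are invented for illustration (the study itself used five factors and 36 runs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-factor central composite design (coded units).
X = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
              [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
              [0, 0], [0, 0], [0, 0]])

def true_yield(x1, x2):          # simulated response, optimum at (0.5, -0.3)
    return 90 - 4 * (x1 - 0.5) ** 2 - 6 * (x2 + 0.3) ** 2

y = true_yield(X[:, 0], X[:, 1]) + rng.normal(0, 0.2, len(X))

# Second-order model: b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0] ** 2, X[:, 1] ** 2, X[:, 0] * X[:, 1]])
b, *_ = np.linalg.lstsq(A, y, rcond=None)

# Stationary point of the fitted surface: solve the gradient system.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
optimum = np.linalg.solve(H, -b[1:3])
```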

  2. Race/ethnicity and socioeconomic status: measurement and methodological issues.

    PubMed

    Williams, D R

    1996-01-01

    This article considers the ways in which race/ethnicity and socioeconomic status (SES) relate to each other and combine to affect racial variations in health status. The author reviews a number of methodological issues concerning the assessment of race in the United States that importantly affect the quality of the available data on racial differences in health. These issues include the discrepancy between self-identification and observer-reported race, changing racial classification categories and racial identification, the difficulties in categorizing persons of mixed racial parentage, and census undercount. In discussing the complex interactions between race and SES, the author first describes the relationship between race and SES and assesses the role of SES in accounting for racial differences in health, then shows how the failure of SES to completely account for racial variations in health status emphasizes the need for health researchers to give more systematic attention to the unique factors linked to race that affect health. These factors include racism, migration, acculturation, and the comprehensive assessment of SES.

  3. Causality analysis in business performance measurement system using system dynamics methodology

    NASA Astrophysics Data System (ADS)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map with its unidirectional causality feature. Despite its apparent popularity, criticisms of this causality have been rigorously discussed by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using an econometric analysis, the Granger causality test, on the 45 data points. However, well-established causality models proved insufficient, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations with insufficient historical data. The Delphi method was selected and conducted to obtain consensus on the existence of causality among 15 selected experts over 3 rounds of questionnaires. The study revealed that only 20% of the propositions were not supported. Both methods identified bidirectional causality, which demonstrates significant dynamic environmental complexity through interaction among measures. With that, a computer model and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity, and extreme-condition tests were conducted on the developed SD model to ensure its ability to mimic reality and its robustness and validity as a causality analysis platform. This study applied a theoretical service management model within the BSC domain to a practical situation using SD methodology, where very limited work has been done.
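
A minimal bivariate Granger-style check of the kind applied to such time series can be sketched with plain least squares: compare the residual sum of squares of an autoregression of y with and without a lagged x term. The simulated series and single-lag F statistic below are illustrative assumptions, not the study's data:

```python
import numpy as np

def granger_f_stat(y, x, lag=1):
    """F statistic for 'x Granger-causes y' with one lag: compare an
    AR(1) model of y against one augmented with lagged x."""
    y_t, y_lag, x_lag = y[lag:], y[:-lag], x[:-lag]
    n = len(y_t)
    restricted = np.column_stack([np.ones(n), y_lag])
    unrestricted = np.column_stack([np.ones(n), y_lag, x_lag])
    ssr = []
    for design in (restricted, unrestricted):
        beta, *_ = np.linalg.lstsq(design, y_t, rcond=None)
        resid = y_t - design @ beta
        ssr.append(resid @ resid)
    # One restriction; n - 3 residual degrees of freedom.
    return (ssr[0] - ssr[1]) / (ssr[1] / (n - 3))

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):               # y is driven by lagged x
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal(scale=0.1)

f_xy = granger_f_stat(y, x)           # large: x helps predict y
f_yx = granger_f_stat(x, y)           # small: y does not help predict x
```

A large F statistic in one direction only is the pattern a unidirectional Strategy Map link would predict; bidirectional significance is the complication the study reports.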

  4. Optimization of hydrolysis conditions for bovine plasma protein using response surface methodology.

    PubMed

    Seo, Hyun-Woo; Jung, Eun-Young; Go, Gwang-Woong; Kim, Gap-Don; Joo, Seon-Tea; Yang, Han-Sul

    2015-10-15

    The purpose of this study was to establish optimal conditions for the hydrolysis of bovine plasma protein. Response surface methodology was used to model and optimize responses [degree of hydrolysis (DH), 2,2-diphenyl-1-picrydrazyl (DPPH) radical-scavenging activity and Fe(2+)-chelating activity]. Hydrolysis conditions, such as hydrolysis temperature (46.6-63.4 °C), hydrolysis time (98-502 min), and hydrolysis pH (6.32-9.68) were selected as the main processing conditions in the hydrolysis of bovine plasma protein. Optimal conditions for maximum DH (%), DPPH radical-scavenging activity (%) and Fe(2+)-chelating activity (%) of the hydrolyzed bovine plasma protein, were respectively established. We discovered the following three conditions for optimal hydrolysis of bovine plasma: pH of 7.82-8.32, temperature of 54.1 °C, and time of 338.4-398.4 min. We consequently succeeded in hydrolyzing bovine plasma protein under these conditions and confirmed the various desirable properties of optimal hydrolysis.

  5. Aqueduct: a methodology to measure and communicate global water risks

    NASA Astrophysics Data System (ADS)

    Gassert, Francis; Reig, Paul

    2013-04-01

    , helping non-expert audiences better understand and evaluate risks facing water users. This presentation will discuss the methodology used to combine the indicator values into aggregated risk scores and lessons learned from working with diverse audiences in academia, development institutions, and the public and private sectors.

  6. Measuring Individual Differences in Decision Biases: Methodological Considerations.

    PubMed

    Aczel, Balazs; Bago, Bence; Szollosi, Aba; Foldes, Andrei; Lukacs, Bence

    2015-01-01

    Individual differences in people's susceptibility to heuristics and biases (HB) are often measured by multiple-bias questionnaires consisting of one or a few items for each bias. This research approach relies on the assumptions that (1) different versions of a decision bias task measure are interchangeable as they measure the same cognitive failure; and (2) that some combination of these tasks measures the same underlying construct. Based on these assumptions, in Study 1 we developed two versions of a new decision bias survey for which we modified 13 HB tasks to increase their comparability, construct validity, and the participants' motivation. The analysis of the responses (N = 1279) showed weak internal consistency within the surveys and a great level of discrepancy between the extracted patterns of the underlying factors. To explore these inconsistencies, in Study 2 we used three original examples of HB tasks for each of seven biases. We created three decision bias surveys by allocating one version of each HB task to each survey. The participants' responses (N = 527) showed a similar pattern as in Study 1, questioning the assumption that the different examples of the HB tasks are interchangeable and that they measure the same underlying construct. These results emphasize the need to understand the domain-specificity of cognitive biases as well as the effect of the wording of the cover story and the response mode on bias susceptibility before employing them in multiple-bias questionnaires. PMID:26635677
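
The internal-consistency assessment referred to above is conventionally a Cronbach's alpha computation over the item scores. A sketch follows; the simulated respondent data are illustrative, not the study's survey responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix:
    the standard internal-consistency statistic behind claims like
    'weak internal consistency within the surveys'."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=300)
# Items sharing a latent trait -> high alpha; unrelated items -> alpha near 0.
coherent = latent[:, None] + rng.normal(0, 1.0, size=(300, 5))
noisy = rng.normal(size=(300, 5))
a_coherent = cronbach_alpha(coherent)
a_noisy = cronbach_alpha(noisy)
```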

  8. Measure of Landscape Heterogeneity by Agent-Based Methodology

    NASA Astrophysics Data System (ADS)

    Wirth, E.; Szabó, Gy.; Czinkóczky, A.

    2016-06-01

    With the rapid increase of the world's population, efficient food production is one of the keys to human survival. Since biodiversity and heterogeneity are the basis of sustainable agriculture, the authors tried to measure the heterogeneity of a chosen landscape. The EU farming and subsidizing policies (EEA, 2014) support landscape heterogeneity and diversity; nevertheless, exact measurements and calculations, apart from statistical parameters (standard deviation, mean), do not really exist. In the present paper the authors' goal is to find an objective, dynamic method that measures landscape heterogeneity. This is achieved with so-called agent-based modelling, where randomly dispatched dynamic scouts record the observed land-cover parameters and sum up the features of a new type of land. During the simulation the agents accumulate a Monte Carlo integral as a diversity landscape potential, which can be considered the unit of the 'greening' measure. As a final product of the ABM method, a landscape potential map is obtained that can serve as a tool for objective decision making to support agricultural diversity.
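
The scout-sampling idea can be sketched as a Monte Carlo estimate of Shannon diversity over a land-cover grid; the grid, land-cover classes, and one-cell-per-scout behavior below are deliberate simplifications invented for illustration, not the authors' model:

```python
import random
from collections import Counter
from math import log

def scout_diversity(grid, n_scouts, rng):
    """Monte Carlo estimate of landscape Shannon diversity: each scout
    records the land-cover class of one randomly visited cell."""
    rows, cols = len(grid), len(grid[0])
    samples = [grid[rng.randrange(rows)][rng.randrange(cols)]
               for _ in range(n_scouts)]
    counts = Counter(samples)
    return -sum((c / n_scouts) * log(c / n_scouts) for c in counts.values())

rng = random.Random(42)
mono = [["wheat"] * 20 for _ in range(20)]                  # monoculture
mosaic = [[rng.choice(["wheat", "forest", "meadow", "water"])
           for _ in range(20)] for _ in range(20)]          # 4-class mosaic
h_mono = scout_diversity(mono, 500, rng)
h_mosaic = scout_diversity(mosaic, 500, rng)
```

The monoculture scores zero diversity, while the four-class mosaic approaches ln 4; mapping such estimates per neighborhood would yield the landscape potential map the abstract describes.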

  9. Conceptual and Methodological Issues in Treatment Integrity Measurement

    ERIC Educational Resources Information Center

    McLeod, Bryce D.; Southam-Gerow, Michael A.; Weisz, John R.

    2009-01-01

    This special series focused on treatment integrity in the child mental health and education field is timely. The articles do a laudable job of reviewing (a) the current status of treatment integrity research and measurement, (b) existing conceptual models of treatment integrity, and (c) the limitations of prior research. Overall, this thoughtful…

  10. Theoretical and methodological considerations of self-concept measurement.

    PubMed

    Wayment, H; Zetlin, A G

    1989-01-01

    Partially replicating a study by Savin-Williams and Jaquish (1981), assessment of self-concept was explored by investigating the relationships of "presented" and "experienced" selves among seven adolescent girls participating in a team sport at a high school in Southern California. Behavior observations and self- and peer ratings were used to assess three dimensions of self (self-confidence, popularity, and athletic skill) and examine relationships between these multimethods of self-concept measurement. In general, significant correlations between behavior observations and peer ratings were found, but not between behavior observations and self-ratings, or peer and self-ratings. A behavioral approach to measuring self-concept across situations appeared to be more indicative of the multidimensionality of the self than sole reliance on self-report. The authors concluded that self-concept measurement requires increased sensitivity to definition of, saliency of, and vacillation within a domain, the reference group used for social comparison, and the impact of previous experience on current views of self.

  11. A Methodology for Robust Multiproxy Paleoclimate Reconstructions and Modeling of Temperature Conditional Quantiles

    PubMed Central

    Janson, Lucas; Rajaratnam, Bala

    2014-01-01

    Great strides have been made in the field of reconstructing past temperatures based on models relating temperature to temperature-sensitive paleoclimate proxies. One of the goals of such reconstructions is to assess if current climate is anomalous in a millennial context. These regression based approaches model the conditional mean of the temperature distribution as a function of paleoclimate proxies (or vice versa). Some of the recent focus in the area has considered methods which help reduce the uncertainty inherent in such statistical paleoclimate reconstructions, with the ultimate goal of improving the confidence that can be attached to such endeavors. A second important scientific focus in the subject area is the area of forward models for proxies, the goal of which is to understand the way paleoclimate proxies are driven by temperature and other environmental variables. One of the primary contributions of this paper is novel statistical methodology for (1) quantile regression with autoregressive residual structure, (2) estimation of corresponding model parameters, (3) development of a rigorous framework for specifying uncertainty estimates of quantities of interest, yielding (4) statistical byproducts that address the two scientific foci discussed above. We show that by using the above statistical methodology we can demonstrably produce a more robust reconstruction than is possible by using conditional-mean-fitting methods. Our reconstruction shares some of the common features of past reconstructions, but we also gain useful insights. More importantly, we are able to demonstrate a significantly smaller uncertainty than that from previous regression methods. In addition, the quantile regression component allows us to model, in a more complete and flexible way than least squares, the conditional distribution of temperature given proxies. This relationship can be used to inform forward models relating how proxies are driven by temperature. PMID:25587203
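
The quantile-regression building block (without the autoregressive residual structure the paper adds) can be sketched via subgradient descent on the pinball loss; the heteroscedastic toy data, single covariate, and optimizer settings are illustrative assumptions:

```python
import numpy as np

def quantile_fit(x, y, tau, iters=20000, lr=0.05):
    """Fit y ~ a + b*x at quantile tau by subgradient descent on the
    pinball (check) loss; a bare-bones stand-in for full quantile
    regression (no autoregressive residuals, no uncertainty bands)."""
    a = b = 0.0
    for t in range(iters):
        resid = y - (a + b * x)
        # Subgradient of rho_tau(r) = r*(tau - 1[r<0]) w.r.t. prediction.
        grad = np.where(resid < 0, 1.0 - tau, -tau)
        step = lr / (1.0 + 1e-3 * t)
        a -= step * grad.mean()
        b -= step * (grad * x).mean()
    return a, b

rng = np.random.default_rng(0)
x = rng.uniform(0, 2, 500)
# Heteroscedastic noise: spread grows with x, so upper quantiles steepen.
y = 1.0 + 2.0 * x + rng.normal(0, 1, 500) * (0.2 + 0.4 * x)
a50, b50 = quantile_fit(x, y, 0.5)
a90, b90 = quantile_fit(x, y, 0.9)
```

Because the noise spread grows with x, the fitted 0.9-quantile line is steeper than the median line, illustrating how conditional quantiles capture more of the distribution than a conditional-mean fit.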

  12. Bayesian methodology to estimate and update safety performance functions under limited data conditions: a sensitivity analysis.

    PubMed

    Heydari, Shahram; Miranda-Moreno, Luis F; Lord, Dominique; Fu, Liping

    2014-03-01

    In road safety studies, decision makers must often cope with limited data conditions. In such circumstances, the maximum likelihood estimation (MLE), which relies on asymptotic theory, is unreliable and prone to bias. Moreover, it has been reported in the literature that (a) Bayesian estimates might be significantly biased when using non-informative prior distributions under limited data conditions, and that (b) the calibration of limited data is plausible when existing evidence in the form of proper priors is introduced into analyses. Although the Highway Safety Manual (2010) (HSM) and other research studies provide calibration and updating procedures, the data requirements can be very taxing. This paper presents a practical and sound Bayesian method to estimate and/or update safety performance function (SPF) parameters combining the information available from limited data with the SPF parameters reported in the HSM. The proposed Bayesian updating approach has the advantage of requiring fewer observations to get reliable estimates. This paper documents this procedure. The adopted technique is validated by conducting a sensitivity analysis through an extensive simulation study with 15 different models, which include various prior combinations. This sensitivity analysis contributes to our understanding of the comparative aspects of a large number of prior distributions. Furthermore, the proposed method contributes to unification of the Bayesian updating process for SPFs. The results demonstrate the accuracy of the developed methodology. Therefore, the suggested approach offers considerable promise as a methodological tool to estimate and/or update baseline SPFs and to evaluate the efficacy of road safety countermeasures under limited data conditions.
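    The updating idea can be illustrated with a hedged sketch: a conjugate Gamma-Poisson update of a mean crash rate, with the informative prior moment-matched to an external estimate (e.g., an HSM-based SPF prediction). Actual SPFs are typically negative binomial regression models; this simplification and all numbers are illustrative.

    ```python
    def update_crash_rate(prior_mean, prior_var, counts):
        """Closed-form Gamma-Poisson update of a mean crash frequency.

        The informative Gamma(alpha, beta) prior is moment-matched to an
        external estimate; observed site-year crash counts then update it
        analytically, which is what makes reliable estimates possible
        with few observations.
        """
        beta = prior_mean / prior_var        # Gamma rate parameter
        alpha = prior_mean * beta            # Gamma shape parameter
        alpha_post = alpha + sum(counts)
        beta_post = beta + len(counts)
        return alpha_post / beta_post, alpha_post / beta_post ** 2

    # Limited data: only five site-years of observed crashes
    post_mean, post_var = update_crash_rate(prior_mean=2.0, prior_var=1.0,
                                            counts=[1, 3, 2, 0, 2])
    ```

    The posterior mean is a weighted compromise between the prior mean and the observed average, with the prior dominating when data are scarce.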

  13. Optimization of hull-less pumpkin seed roasting conditions using response surface methodology.

    PubMed

    Vujasinović, Vesna; Radočaj, Olga; Dimić, Etelka

    2012-05-01

    Response surface methodology (RSM) was applied to optimize hull-less pumpkin seed roasting conditions before seed pressing, to maximize the biochemical composition and antioxidant capacity of the virgin pumpkin oils obtained using a hydraulic press. Hull-less pumpkin seeds were roasted for various lengths of time (30 to 70 min) at various roasting temperatures (90 to 130 °C), resulting in 9 different oil samples; the responses were phospholipid content, total phenol content, α- and γ-tocopherols, and antioxidative activity [by the 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical assay]. Mathematical models showed that roasting conditions influenced all dependent variables at P < 0.05. Higher roasting temperatures had a significant effect (P < 0.05) on phospholipid, phenol, and α-tocopherol contents, while longer roasting times had a significant effect (P < 0.05) on γ-tocopherol content and antioxidant capacity among the samples prepared under different roasting conditions. The optimum conditions for roasting the hull-less pumpkin seeds were 120 °C for a duration of 49 min, which resulted in these oil concentrations: phospholipids 0.29%, total phenols 23.06 mg/kg, α-tocopherol 5.74 mg/100 g, γ-tocopherol 24.41 mg/100 g, and an antioxidative activity (EC(50)) of 27.18 mg oil/mg DPPH.
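    The RSM workflow above (fit a second-order model, then locate its stationary point) can be sketched as follows; the synthetic response and its optimum at 49 min and 120 °C are assumptions chosen to mirror the reported values.

    ```python
    import numpy as np

    def fit_quadratic_surface(x1, x2, y):
        """Least-squares fit of a full second-order (RSM) model in two factors."""
        A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def stationary_point(b):
        """Solve grad = 0 of b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
        H = np.array([[2.0 * b[3], b[5]], [b[5], 2.0 * b[4]]])
        return np.linalg.solve(H, -np.array([b[1], b[2]]))

    # Synthetic roasting response with an assumed optimum at 49 min, 120 °C
    rng = np.random.default_rng(1)
    time_min = rng.uniform(30, 70, 40)
    temp_c = rng.uniform(90, 130, 40)
    quality = (100.0 - 0.02 * (time_min - 49.0) ** 2
               - 0.01 * (temp_c - 120.0) ** 2 + rng.normal(0.0, 0.1, 40))
    b = fit_quadratic_surface(time_min, temp_c, quality)
    t_opt, T_opt = stationary_point(b)
    ```

    In practice one would also check that the Hessian is negative definite (a maximum, not a saddle) before accepting the stationary point as the optimum.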

  14. Made-to-measure galaxy models - I. Methodology

    NASA Astrophysics Data System (ADS)

    Long, R. J.; Mao, S.

    2010-06-01

    We rederive the made-to-measure (M2M) method of Syer & Tremaine for modelling stellar systems and individual galaxies, and demonstrate how extensions to the M2M method may be implemented and used. We illustrate the enhanced M2M method by determining the mass-to-light ratio of a galaxy modelled as a Plummer sphere. From the standard galactic observables of surface brightness and line-of-sight velocity dispersion, together with the h4 Gauss-Hermite coefficient of the line-of-sight velocity distribution, we successfully recover the true mass-to-light ratio of our toy galaxy. Using kinematic data from Kleyna et al., we then estimate the mass-to-light ratio of the dwarf spheroidal galaxy Draco, achieving a V-band value of 539 +/- 136 Msolar/Lsolar. We describe the main aspects of creating an M2M galaxy model and show how the key modelling parameters may be determined.
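    A toy version of the M2M weight-adjustment loop, assuming a fixed particle library and a single binned-density observable (the full method evolves orbits and typically adds entropy regularization and kinematic observables):

    ```python
    import numpy as np

    # Fixed "orbit library": particle positions drawn once and never moved
    rng = np.random.default_rng(2)
    x = rng.uniform(0.0, 1.0, 5000)
    w = np.full(x.size, 1.0 / x.size)          # initial equal weights

    # Target observable: binned mass of an assumed density p(x) = 2x
    edges = np.linspace(0.0, 1.0, 11)
    target = edges[1:] ** 2 - edges[:-1] ** 2  # integral of 2x over each bin
    idx = np.clip(np.digitize(x, edges) - 1, 0, 9)

    eps = 0.05                                  # step size of the weight evolution
    for _ in range(1000):
        model = np.bincount(idx, weights=w, minlength=10)
        # Nudge each particle's weight toward reducing its bin's residual
        w *= 1.0 + eps * (target - model)[idx] / target[idx]
        w /= w.sum()

    model = np.bincount(idx, weights=w, minlength=10)
    ```

    The multiplicative update keeps weights positive, and the binned model converges to the target observable as the residuals decay.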

  15. [Measuring nursing care times--methodologic and documentation problems].

    PubMed

    Bartholomeyczik, S; Hunstein, D

    2001-08-01

    The time needed for nursing care is an important measure and a basis for financing care. In Germany, the Long Term Care Insurance (LTCI) reimburses nursing care depending on the time family caregivers need to complete selected activities. The LTCI recommends certain time ranges for these activities, which are wholly compensatory, as a basis for assessment. The purpose is to enhance the fairness and comparability of assessments. Using the example of a German research project that investigated the duration of these activities and the reasons for differences between them, questions are raised about definition and interpretation problems. There are definition problems, since caring activities, especially in private households, are almost never performed as clearly defined modules. Moreover, different activities are often performed simultaneously. However, the most important question is what exactly time figures can say about the essentials of nursing care. PMID:12385262

  16. Effective speed and agility conditioning methodology for random intermittent dynamic type sports.

    PubMed

    Bloomfield, Jonathan; Polman, Remco; O'Donoghue, Peter; McNaughton, Lars

    2007-11-01

    Different coaching methods are often used to improve performance. This study compared the effectiveness of 2 methodologies for speed and agility conditioning for random, intermittent, and dynamic activity sports (e.g., soccer, tennis, hockey, basketball, rugby, and netball) and the necessity for specialized coaching equipment. Two groups received either a programmed method (PC) or a random method (RC) of conditioning, with a third group receiving no conditioning (NC). PC participants used the speed, agility, quickness (SAQ) conditioning method, and RC participants played supervised small-sided soccer games. PC was further subdivided into 2 groups whose participants used either specialized SAQ equipment or no equipment. A total of 46 (25 males and 21 females) untrained participants received (mean +/- SD) 12.2 +/- 2.1 hours of physical conditioning over 6 weeks between pre- and post-intervention administrations of a battery of speed and agility field tests. Two-way analysis of variance results indicated that both conditioning groups showed a significant decrease in body mass and body mass index, although PC achieved significantly greater improvements in acceleration, deceleration, leg power, dynamic balance, and the overall summation of % increases when compared to RC and NC (p < 0.05). PC in the form of SAQ exercises appears to be a superior method for improving speed and agility parameters; however, this study found that specialized SAQ equipment was not a requirement for observing significant improvements. Further research is required to establish whether these benefits transfer to sport-specific tasks, as well as the underlying mechanisms resulting in improved performance.

  17. Methodology for reliability based condition assessment. Application to concrete structures in nuclear plants

    SciTech Connect

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period.
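    The Monte Carlo evaluation of time-dependent reliability can be sketched as follows, under assumed (purely illustrative) strength, degradation, and load distributions rather than the report's plant-specific models:

    ```python
    import numpy as np

    def lifetime_failure_probability(n_sim=100_000, years=40, seed=3):
        """Monte Carlo estimate of the probability that a degrading
        component fails at least once during its service life.

        Initial strength, the linear degradation rate, and the annual
        extreme-load distribution are illustrative assumptions.
        """
        rng = np.random.default_rng(seed)
        R0 = rng.normal(100.0, 10.0, n_sim)          # initial strength
        failed = np.zeros(n_sim, dtype=bool)
        for t in range(1, years + 1):
            strength = R0 * (1.0 - 0.005 * t)        # 0.5 %/year degradation
            load = rng.gumbel(40.0, 8.0, n_sim)      # annual extreme load
            failed |= load > strength
        return failed.mean()

    pf = lifetime_failure_probability()
    ```

    Comparing this lifetime estimate against a target failure probability is the basis for scheduling inspection and maintenance in the approach described above.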

  18. Measurement of plasma adenosine concentration: methodological and physiological considerations

    SciTech Connect

    Gewirtz, H.; Brown, P.; Most, A.S.

    1987-05-01

    This study tested the hypothesis that measurements of plasma adenosine concentration made on samples of blood obtained in dipyridamole and EHNA (i.e., stopping solution) may be falsely elevated as a result of ongoing in vitro production and accumulation of adenosine during sample processing. Studies were performed with samples of anticoagulated blood obtained from anesthetized domestic swine. The adenosine concentration of ultrafiltered plasma was determined by HPLC. The following parameters were evaluated: (i) rate of clearance of [3H]adenosine added to plasma, (ii) endogenous adenosine concentration of matched blood samples obtained in stopping solution alone, stopping solution plus EDTA, and perchloric acid (PCA), (iii) plasma and erythrocyte endogenous adenosine concentration in nonhemolyzed samples, and (iv) plasma adenosine concentration of samples hemolyzed in the presence of stopping solution alone or stopping solution plus EDTA. We observed that (i) greater than or equal to 95% of [3H]adenosine added to plasma is removed from it by formed elements of the blood in less than 20 s, (ii) plasma adenosine concentration of samples obtained in stopping solution alone is generally 10-fold greater than that of matched samples obtained in stopping solution plus EDTA, (iii) deliberate mechanical hemolysis of blood samples obtained in stopping solution alone resulted in substantial augmentation of plasma adenosine levels in comparison with matched nonhemolyzed specimens--addition of EDTA to stopping solution prevented this, and (iv) adenosine content of blood samples obtained in PCA agreed closely with the sum of plasma and erythrocyte adenosine content of samples obtained in stopping solution plus EDTA.

  19. Optimization of culture conditions for flexirubin production by Chryseobacterium artocarpi CECT 8497 using response surface methodology.

    PubMed

    Venil, Chidambaram Kulandaisamy; Zakaria, Zainul Akmar; Ahmad, Wan Azlina

    2015-01-01

    Flexirubins are a unique type of bacterial pigment produced by bacteria of the genus Chryseobacterium, which are used in the treatment of chronic skin diseases such as eczema and may serve as a chemotaxonomic marker. Chryseobacterium artocarpi CECT 8497, a yellowish-orange pigment producing strain, was investigated for maximum production of pigment by optimizing medium composition employing response surface methodology (RSM). Culture conditions affecting pigment production were optimized statistically in shake flask experiments. Lactose, l-tryptophan and KH2PO4 were the most significant variables affecting pigment production. A Box-Behnken design (BBD) and RSM analysis were adopted to investigate the interactions between variables and determine the optimal values for maximum pigment production. Evaluation of the experimental results showed that the optimum conditions for maximum production of pigment (521.64 mg/L) in a 50 L bioreactor were lactose 11.25 g/L, l-tryptophan 6 g/L and KH2PO4 650 ppm. Production under optimized conditions increased 7.23-fold compared to production prior to optimization. The results of this study show that statistical optimization of medium composition and the interaction effects enabled shortlisting of the significant factors influencing maximum pigment production from Chryseobacterium artocarpi CECT 8497. In addition, this is the first report optimizing the process parameters for flexirubin-type pigment production from Chryseobacterium artocarpi CECT 8497.

  20. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    NASA Astrophysics Data System (ADS)

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E.; Pavez, Cristian; Soto, Leopoldo

    2015-03-01

    This work introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from detection of the burst of neutrons. An improvement of more than one order of magnitude in the accuracy of a paraffin wax moderated 3He-filled tube is obtained by using this methodology with respect to previous calibration methods.
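    The charge-to-counts estimation step can be sketched as follows; the simple variance model (single-event charge spread plus Poisson counting) and all numbers are illustrative assumptions, not the authors' full statistical model:

    ```python
    def estimate_event_count(total_charge, mean_q, std_q):
        """Estimate the number of detected neutrons in a burst from the
        accumulated charge, given single-event charge statistics
        calibrated in pulse mode.

        Returns the point estimate and an approximate 1-sigma uncertainty.
        """
        n_hat = total_charge / mean_q
        # Relative variance: charge spread per event plus counting statistics
        rel_var = (std_q / mean_q) ** 2 / n_hat + 1.0 / n_hat
        return n_hat, n_hat * rel_var ** 0.5

    # Illustrative values: 5 nC accumulated, ~1 pC mean charge per event
    n_hat, sigma_n = estimate_event_count(total_charge=5.0e-9,
                                          mean_q=1.0e-12, std_q=0.3e-12)
    ```

    Calibrating mean_q and std_q carefully in pulse mode is what drives the order-of-magnitude accuracy improvement reported above.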

  1. Optimization of microwave-assisted hot air drying conditions of okra using response surface methodology.

    PubMed

    Kumar, Deepak; Prasad, Suresh; Murthy, Ganti S

    2014-02-01

    Okra (Abelmoschus esculentus) was dried to a moisture level of 0.1 g water/g dry matter using a microwave-assisted hot air dryer. Response surface methodology was used to optimize the drying conditions based on specific energy consumption and quality of dried okra. The drying experiments were performed using a central composite rotatable design for three variables: air temperature (40-70 °C), air velocity (1-2 m/s) and microwave power level (0.5-2.5 W/g). The quality of dried okra was determined in terms of color change, rehydration ratio and hardness of texture. A second-order polynomial model was well fitted to all responses and high R(2) values (>0.8) were observed in all cases. The color change of dried okra was found higher at high microwave power and air temperatures. Rehydration properties were better for okra samples dried at higher microwave power levels. Specific energy consumption decreased with increase in microwave power due to decrease in drying time. The drying conditions of 1.51 m/s air velocity, 52.09 °C air temperature and 2.41 W/g microwave power were found optimum for product quality and minimum energy consumption for microwave-convective drying of okra.

  2. Optimization conditions for anthocyanin and phenolic content extraction from purple sweet potato using response surface methodology.

    PubMed

    Ahmed, Maruf; Akter, Mst Sorifa; Eun, Jong-Bang

    2011-02-01

    Purple sweet potato flour could be used to enhance bioactive components such as phenolic compounds and anthocyanin content, which might be used as nutraceutical ingredients for formulated foods. Optimization of the anthocyanin and phenolic contents of purple sweet potato was investigated using response surface methodology. A face-centered cube design was used to investigate the effects of three independent variables: drying temperature (55-65°C), citric acid concentration (1-3% w/v) and soaking time (1-3 min). The optimal conditions for anthocyanin and phenolic contents were 62.91°C, 1.38%, 2.53 min and 60.94°C, 1.04%, 2.24 min, respectively. However, optimal conditions for anthocyanin content were not apparent. The experimental value of anthocyanin content was 19.78 mg/100 g and total phenolic content was 61.55 mg/g. These data showed that the experimental responses were reasonably close to the predicted responses. Therefore, the results showed that treated flours could be used to enhance the antioxidant activities of functional foods.

  3. Optimization of osmotic dehydration conditions of peach slices in sucrose solution using response surface methodology.

    PubMed

    Yadav, Baljeet Singh; Yadav, Ritika B; Jatain, Monika

    2012-10-01

    Osmotic dehydration (OD) conditions of peach slices were optimized using response surface methodology (RSM) with respect to sucrose concentration (50-70°B), immersion time (2-4 h) and process temperature (35-55 °C), for maximum water loss (WL), minimum solute gain (SG) and maximum rehydration ratio (RR) as response variables. A central composite rotatable design (CCRD) was used as the experimental design. The models developed for all responses were significant. All model terms were significant for WL except the quadratic levels of sucrose concentration and temperature, whereas for SG, the linear term of time and the linear and quadratic terms of temperature were significant. All terms except the linear term of time and the interaction of time and sucrose concentration were significant for RR. The optimized conditions were sucrose concentration = 69.9°B, time = 3.97 h and temperature = 37.63 °C, in order to obtain WL of 28.42 (g/100 g of fresh weight), SG of 8.39 (g/100 g of fresh weight) and RR of 3.38. PMID:24082265

  5. Methodology of Blade Unsteady Pressure Measurement in the NASA Transonic Flutter Cascade

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; McFarland, E. R.; Capece, V. R.; Jett, T. A.; Senyitko, R. G.

    2002-01-01

    In this report the methodology adopted to measure unsteady pressures on blade surfaces in the NASA Transonic Flutter Cascade under conditions of simulated blade flutter is described. The previous work done in this cascade reported that the oscillating cascade produced waves, which for some interblade phase angles reflected off the wind tunnel walls back into the cascade, interfered with the cascade unsteady aerodynamics, and contaminated the acquired data. To alleviate the problems with data contamination due to the back wall interference, a method of influence coefficients was selected for the future unsteady work in this cascade. In this approach only one blade in the cascade is oscillated at a time. The majority of the report is concerned with the experimental technique used and the experimental data generated in the facility. The report presents a list of all test conditions for the small amplitude of blade oscillations, and shows examples of some of the results achieved. The report does not discuss data analysis procedures like ensemble averaging, frequency analysis, and unsteady blade loading diagrams reconstructed using the influence coefficient method. Finally, the report presents the lessons learned from this phase of the experimental effort, and suggests the improvements and directions of the experimental work for tests to be carried out for large oscillation amplitudes.

  6. The Methodology of Doppler-Derived Central Blood Flow Measurements in Newborn Infants

    PubMed Central

    de Waal, Koert A.

    2012-01-01

    Central blood flow (CBF) measurements are measurements in and around the heart. It incorporates cardiac output, but also measurements of cardiac input and assessment of intra- and extracardiac shunts. CBF can be measured in the central circulation as right or left ventricular output (RVO or LVO) and/or as cardiac input measured at the superior vena cava (SVC flow). Assessment of shunts incorporates evaluation of the ductus arteriosus and the foramen ovale. This paper describes the methodology of CBF measurements in newborn infants. It provides a brief overview of the evolution of Doppler ultrasound blood flow measurements, basic principles of Doppler ultrasound, and an overview of all used methodology in the literature. A general guide for interpretation and normal values with suggested cutoffs of CBFs are provided for clinical use. PMID:22291718
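    RVO and LVO are computed from the Doppler velocity-time integral (VTI), the vessel cross-sectional area, and heart rate, indexed to body weight. A minimal sketch with illustrative neonatal values (the specific numbers are assumptions, not normal-value recommendations):

    ```python
    import math

    def left_ventricular_output(vti_cm, outflow_diam_cm, heart_rate, weight_kg):
        """Doppler LVO in mL/kg/min: stroke distance (VTI) times outflow-tract
        cross-sectional area gives stroke volume; multiply by heart rate and
        index to body weight."""
        csa_cm2 = math.pi * (outflow_diam_cm / 2.0) ** 2
        stroke_volume_ml = vti_cm * csa_cm2              # 1 cm^3 = 1 mL
        return stroke_volume_ml * heart_rate / weight_kg

    # Illustrative neonatal values (assumed)
    lvo = left_ventricular_output(vti_cm=10.0, outflow_diam_cm=0.8,
                                  heart_rate=150, weight_kg=3.5)
    ```

    Because the diameter enters squared, small errors in measuring the outflow tract dominate the overall uncertainty of the calculation.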

  7. Tracking MOV operability under degraded voltage condition by periodic test measurements

    SciTech Connect

    Hussain, B.; Behera, A.K.; Alsammarae, A.J.

    1996-12-31

    The purpose of this paper is to develop a methodology for evaluating the operability of an alternating current (AC) motor operated valve (MOV) under degraded voltage conditions, based on the seating parameters measured during surveillance/testing. This approach will help resolve the Nuclear Regulatory Commission's (NRC's) concern about verifying the design basis capability of AC MOVs through periodic testing.

  8. Optimization extraction conditions for improving phenolic content and antioxidant activity in Berberis asiatica fruits using response surface methodology (RSM).

    PubMed

    Belwal, Tarun; Dhyani, Praveen; Bhatt, Indra D; Rawal, Ranbeer Singh; Pande, Veena

    2016-09-15

    This study was designed, for the first time, to optimize the extraction of phenolic compounds and the antioxidant potential of Berberis asiatica fruits using response surface methodology (RSM). Solvent selection was based on preliminary experiments, and a five-factor, three-level central composite design (CCD) was used. Extraction temperature (X1), sample-to-solvent ratio (X3) and solvent concentration (X5) significantly affected the response variables. The quadratic model fitted well for all responses. Under optimal extraction conditions, the dried fruit sample was mixed with 80% methanol (pH 3.0) at a ratio of 1:50 and the mixture was heated at 80 °C for 30 min; the measured parameters were found to be in accordance with the predicted values. High performance liquid chromatography (HPLC) analysis at the optimized conditions revealed 6 phenolic compounds. The results suggest that optimization of the extraction conditions is critical for accurate quantification of phenolics and antioxidants in Berberis asiatica fruits, which may further be utilized for industrial extraction procedures.

  9. A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei

    2015-10-01

    In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optics axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the timing of the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optics axis drift of the laser was analyzed and measured and did not exceed 0.06 mrad. The measurement results indicate that the overall error and drift of the measurement methodology is less than 0.275 mrad. The methodology can satisfy measurement of the dynamic angle of sight at higher precision and larger scale.
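    The quoted 0.275 mrad bound is consistent with a root-sum-square combination of the three angular error components (an assumption on how the budget was combined; the 21 μs synchronization delay enters as time rather than angle and is excluded here):

    ```python
    def rss(*components_mrad):
        """Root-sum-square combination of independent error components."""
        return sum(c ** 2 for c in components_mrad) ** 0.5

    # CCD measurement, laser-spot, and optics-axis terms from the abstract (mrad)
    total_mrad = rss(0.26, 0.065, 0.06)
    ```

    The RSS total is dominated by the largest component (CCD error), which is why controlling it below 0.26 mrad matters most for the overall budget.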

  10. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of the elementary power conditioning units (buck, boost, and buck-boost converters, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurement of input-output impedance, and global stability. A simulation example is given.

  11. Optimization of Electrochemical Treatment Process Conditions for Distillery Effluent Using Response Surface Methodology.

    PubMed

    Arulmathi, P; Elangovan, G; Begum, A Farjana

    2015-01-01

    Distillery industry is recognized as one of the most polluting industries in India with a large amount of annual effluent production. In this present study, the optimization of electrochemical treatment process variables was reported to treat the color and COD of distillery spent wash using Ti/Pt as an anode in a batch mode. Process variables such as pH, current density, electrolysis time, and electrolyte dose were selected as operation variables and chemical oxygen demand (COD) and color removal efficiency were considered as response variable for optimization using response surface methodology. Indirect electrochemical-oxidation process variables were optimized using Box-Behnken response surface design (BBD). The results showed that electrochemical treatment process effectively removed the COD (89.5%) and color (95.1%) of the distillery industry spent wash under the optimum conditions: pH of 4.12, current density of 25.02 mA/cm(2), electrolysis time of 103.27 min, and electrolyte (NaCl) concentration of 1.67 g/L, respectively.
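    Generating the coded Box-Behnken design used for such a four-factor optimization can be sketched as follows (the factor names in the comment are from the abstract; the generation rule is the standard coded construction):

    ```python
    from itertools import combinations

    def box_behnken(k, n_center=3):
        """Coded Box-Behnken design: all +/-1 combinations for each factor
        pair with the remaining factors held at 0, plus center-point runs."""
        runs = []
        for i, j in combinations(range(k), 2):
            for a in (-1, 1):
                for b in (-1, 1):
                    row = [0] * k
                    row[i], row[j] = a, b
                    runs.append(row)
        runs.extend([[0] * k for _ in range(n_center)])
        return runs

    # Four factors: pH, current density, electrolysis time, electrolyte dose
    design = box_behnken(4)
    ```

    For four factors this yields 24 edge runs plus the center points, and it avoids the extreme corners of the factor space, which is useful when corner conditions (e.g., very low pH at high current density) are impractical to run.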

  12. Optimization of Electrochemical Treatment Process Conditions for Distillery Effluent Using Response Surface Methodology

    PubMed Central

    Arulmathi, P.; Elangovan, G.; Begum, A. Farjana

    2015-01-01

    Distillery industry is recognized as one of the most polluting industries in India with a large amount of annual effluent production. In this present study, the optimization of electrochemical treatment process variables was reported to treat the color and COD of distillery spent wash using Ti/Pt as an anode in a batch mode. Process variables such as pH, current density, electrolysis time, and electrolyte dose were selected as operation variables and chemical oxygen demand (COD) and color removal efficiency were considered as response variable for optimization using response surface methodology. Indirect electrochemical-oxidation process variables were optimized using Box-Behnken response surface design (BBD). The results showed that electrochemical treatment process effectively removed the COD (89.5%) and color (95.1%) of the distillery industry spent wash under the optimum conditions: pH of 4.12, current density of 25.02 mA/cm2, electrolysis time of 103.27 min, and electrolyte (NaCl) concentration of 1.67 g/L, respectively. PMID:26491716

  14. An Updated Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    SciTech Connect

    Ramuhalli, Pradeep; Hirt, Evelyn H.; Coles, Garill A.; Bonebrake, Christopher A.; Ivans, William J.; Wootan, David W.; Mitchell, Mark R.

    2014-07-18

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs based on modularization of advanced reactor concepts. Enhancing the affordability of AdvSMRs will be critical to ensuring wider deployment: AdvSMRs suffer from the loss of economies of scale inherent in small reactors when compared to large (greater than ~600 MWe output) reactors, and their controllable day-to-day costs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are therefore important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors cannot support such capability requirements because they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide the required capabilities and meet the goals of controlling O&M costs. The report describes research results on augmenting an initial methodology for enhanced risk monitors that integrates real-time information about equipment condition and probability of failure (POF) into risk monitors. Methods to propagate uncertainty through the enhanced risk monitor are evaluated. Available data to quantify the level of uncertainty and the POF of key components are examined for relevance, and a status update of this data evaluation is described. Finally, we describe potential targets for developing new risk metrics that may be useful for studying trade-offs for economic

  15. Technical Report on Preliminary Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    SciTech Connect

    Ramuhalli, Pradeep; Coles, Garill A.; Coble, Jamie B.; Hirt, Evelyn H.

    2013-09-17

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs based on modularization of advanced reactor concepts. AdvSMRs may provide a longer-term alternative to traditional light-water reactors (LWRs) and SMRs based on integral pressurized water reactor concepts currently being considered. Enhancing the affordability of AdvSMRs will be critical to ensuring wider deployment. AdvSMRs suffer from the loss of economies of scale inherent in small reactors when compared to large (greater than ~600 MWe output) reactors. Some of this loss can be recovered through reduced capital costs from smaller size, fewer components, modular fabrication processes, and the opportunity for modular construction. However, the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors cannot support such capability requirements because they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments that are a step towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide the required capabilities and meet the goals of controlling O&M costs. The report describes research results from an initial methodology for enhanced risk monitors that integrates real-time information about equipment condition and probability of failure (POF) into risk monitors.
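    The integration idea (condition-adjusted component probabilities of failure feeding a point-in-time system risk estimate) can be sketched as follows, with an assumed condition-scaling rule and a simple series-system model rather than the report's methodology:

    ```python
    def condition_adjusted_pof(baseline_pof, condition_index):
        """Scale a baseline probability of failure by an equipment condition
        index in (0, 1], where 1.0 means as-new. The inverse scaling rule
        is an illustrative assumption."""
        return min(1.0, baseline_pof / condition_index)

    def point_in_time_risk(pofs):
        """Risk for a simple series system: probability that any one of the
        components fails."""
        survive = 1.0
        for p in pofs:
            survive *= 1.0 - p
        return 1.0 - survive

    # Three components with degrading condition indices (assumed values)
    pofs = [condition_adjusted_pof(0.001, c) for c in (1.0, 0.8, 0.5)]
    risk = point_in_time_risk(pofs)
    ```

    Recomputing this estimate as condition indices update is what distinguishes an enhanced risk monitor from one driven by static failure-rate data.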

  16. The influence of construction methodology on structural brain network measures: A review.

    PubMed

    Qi, Shouliang; Meesters, Stephan; Nicolay, Klaas; Romeny, Bart M Ter Haar; Ossenblok, Pauly

    2015-09-30

    Structural brain networks based on diffusion MRI and tractography show robust attributes such as small-worldness, hierarchical modularity, and rich-club organization. However, there are large discrepancies in the reports about specific network measures. It is hypothesized that these discrepancies result from the influence of construction methodology. We surveyed the methodological options and their influences on network measures. It is found that most network measures are sensitive to the scale of brain parcellation, MRI gradient schemes and orientation model, and the tractography algorithm, which is in accordance with the theoretical analysis of the small-world network model. Different network weighting schemes represent different attributes of brain networks, which makes these schemes incomparable between studies. Methodology choice depends on the specific study objectives and a clear understanding of the pros and cons of a particular methodology. Because there is no way to eliminate these influences, it seems more practical to quantify them, optimize the methodologies, and construct structural brain networks with multiple spatial resolutions, multiple edge densities, and multiple weighting schemes.
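    As a toy illustration of why weighting schemes are incomparable between studies, the sketch below computes a clustering coefficient for the same node under a binary scheme and under Onnela's weighted variant (one common choice; the 4-node graph and its weights are invented):

```python
# Same network, two weighting schemes, two different clustering values.

def binary_clustering(adj, i):
    """Fraction of neighbour pairs of node i that are connected (edges as 0/1)."""
    nbrs = [j for j, w in enumerate(adj[i]) if w > 0 and j != i]
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for a in nbrs for b in nbrs if a < b and adj[a][b] > 0)
    return 2 * links / (k * (k - 1))

def onnela_clustering(adj, i):
    """Onnela weighted clustering: geometric mean of triangle weights,
    normalized by the maximum weight in the network."""
    wmax = max(w for row in adj for w in row)
    nbrs = [j for j, w in enumerate(adj[i]) if w > 0 and j != i]
    k = len(nbrs)
    if k < 2:
        return 0.0
    s = sum(((adj[i][a] / wmax) * (adj[i][b] / wmax) * (adj[a][b] / wmax)) ** (1 / 3)
            for a in nbrs for b in nbrs if a != b and adj[a][b] > 0)
    return s / (k * (k - 1))

# Symmetric weighted adjacency (0 = no edge); node 0 has one weak edge.
W = [[0.0, 1.0, 0.2, 0.0],
     [1.0, 0.0, 0.9, 0.3],
     [0.2, 0.9, 0.0, 0.0],
     [0.0, 0.3, 0.0, 0.0]]

cb = binary_clustering(W, 0)   # treats every edge as 1
cw = onnela_clustering(W, 0)   # discounts the weak 0.2 edge
```

    The binary scheme reports a perfect triangle (1.0) while the weighted scheme reports roughly 0.56, so the two numbers measure different attributes and cannot be compared across studies.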

  17. The Infant Vocal-Conditioning Literature: A Theoretical and Methodological Critique.

    ERIC Educational Resources Information Center

    Poulson, Claire L.; Nunes, Leila R. P.

    1988-01-01

    Focuses on the experimental designs and methodology of 15 studies, most of which used only part of the methodology for the experimental analysis of behavior. Argues that the failure to fully use available research technology may have contributed to researchers' failure to make experimental contact with the definition of reinforcement. (RH)

  18. Towards an Integrated Methodology for Measuring and Monitoring Soil Carbon at Regional Scales (Invited)

    NASA Astrophysics Data System (ADS)

    Izaurralde, R. C.

    2009-12-01

    Soil carbon accounts for the second largest stock of the biosphere. Agricultural soils contain an important fraction of the total stock and, depending on management and environmental conditions, can behave as sources or sinks of atmospheric carbon dioxide. Implementation of improved agricultural practices and restoration of land productivity can lead to soil carbon sequestration and therefore contribute not only to mitigating climate change but also to satisfying the food, fiber, and bioenergy demands of future generations. Further, increasing soil carbon and, thus, soil organic matter would improve the adaptive capacity of agricultural soils to withstand or attenuate the negative effects of climate change. At the site scale, soil carbon has been measured, monitored, and modeled, at times with great precision, often using long-term observations. At the regional level, however, soil carbon changes are usually modeled using accounting methods, biogeochemical simulations, and remotely sensed data. These procedures generate uncertainty in the estimation of soil carbon change due to the lack of local observations, the spatial and temporal scales at which these studies are conducted, and the lack of information concerning the kinetic status of the soil carbon pools. Recent advances in techniques to measure and map soil carbon, conduct spatial simulations of soil carbon, and remotely sense biophysical variables such as yield, net primary productivity, residue cover, and soil moisture promise to enhance our capability to develop an integrated methodology to measure and monitor soil carbon changes at regional scales. Implementation and testing of such integrated technologies will be crucial for building robust soil carbon accounting systems, whether regional or national in scope.

  19. Validation of a CFD Methodology for Variable Speed Power Turbine Relevant Conditions

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Giel, Paul W.; McVetta, Ashlie B.

    2013-01-01

    Analysis tools are needed to investigate aerodynamic performance of Variable-Speed Power Turbines (VSPT) for rotorcraft applications. The VSPT operates at low Reynolds numbers (transitional flow) and over a wide range of incidence. Previously, the capability of a published three-equation turbulence model to predict accurately the transition location for three-dimensional heat transfer problems was assessed. In this paper, the results of a post-diction exercise using a three-dimensional flow in a transonic linear cascade comprising VSPT blading are presented. The measured blade pressure distributions and exit total pressure and flow angles for two incidence angles corresponding to cruise (i = 5.8deg) and takeoff (i = -36.7deg) were used for this study. For the higher loading condition of cruise and the negative incidence condition of takeoff, overall agreement with data may be considered satisfactory but areas of needed improvement are also indicated.

  20. Integrating Multi-Tiered Measurement Outcomes for Special Education Eligibility with Sequential Decision-Making Methodology

    ERIC Educational Resources Information Center

    Decker, Scott L.; Englund, Julia; Albritton, Kizzy

    2012-01-01

    Changes to federal guidelines for the identification of children with disabilities have supported the use of multi-tiered models of service delivery. This study investigated the impact of measurement methodology as used across numerous tiers in determining special education eligibility. Four studies were completed using a sample of inner-city…

  1. Extended Axiomatic Conjoint Measurement: A Solution to a Methodological Problem in Studying Fertility-Related Behaviors.

    ERIC Educational Resources Information Center

    Nickerson, Carol A.; McClelland, Gary H.

    1988-01-01

    A methodology is developed based on axiomatic conjoint measurement to accompany a fertility decision-making model. The usefulness of the model is then demonstrated via an application to a study of contraceptive choice (N=100 male and female family-planning clinic clients). Finally, the validity of the model is evaluated. (TJH)

  2. Measuring College Students' Alcohol Consumption in Natural Drinking Environments: Field Methodologies for Bars and Parties

    ERIC Educational Resources Information Center

    Clapp, John D.; Holmes, Megan R.; Reed, Mark B.; Shillington, Audrey M.; Freisthler, Bridget; Lange, James E.

    2007-01-01

    In recent years researchers have paid substantial attention to the issue of college students' alcohol use. One limitation of the current literature is an over-reliance on retrospective, self-report survey data. This article presents field methodologies for measuring college students' alcohol consumption in natural drinking environments.…

  3. Measuring Discrimination in Education: Are Methodologies from Labor and Markets Useful?

    ERIC Educational Resources Information Center

    Holzer, Harry J.; Ludwig, Jens

    2003-01-01

    Reviews the methodologies most frequently used by social scientists when measuring discrimination in housing and labor markets, assessing their usefulness for analyzing discrimination in education. The paper focuses on standard statistical methods, methods using more complete data, experimental/audit methods, and natural experiments based on…

  4. 39 CFR 3055.5 - Changes to measurement systems, service standards, service goals, or reporting methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 39 Postal Service 1 2011-07-01 2011-07-01 false Changes to measurement systems, service standards, service goals, or reporting methodologies. 3055.5 Section 3055.5 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL SERVICE PERFORMANCE AND CUSTOMER SATISFACTION REPORTING Annual Reporting of...

  5. Translating Oral Health-Related Quality of Life Measures: Are There Alternative Methodologies?

    ERIC Educational Resources Information Center

    Brondani, Mario; He, Sarah

    2013-01-01

    Translating existing sociodental indicators to another language involves a rigorous methodology, which can be costly. Free-of-charge online translator tools are available, but have not been evaluated in the context of research involving quality of life measures. To explore the value of using online translator tools to develop oral health-related…

  6. Measuring DNA Replication in Hypoxic Conditions.

    PubMed

    Foskolou, Iosifina P; Biasoli, Deborah; Olcina, Monica M; Hammond, Ester M

    2016-01-01

    It is imperative that dividing cells maintain replication fork integrity in order to prevent DNA damage and cell death. The investigation of DNA replication is of high importance, as alterations in this process can lead to genomic instability, a known causative factor of tumor development. A simple, sensitive, and informative technique that enables the study of DNA replication is the DNA fiber assay, an adaptation of which is described in this chapter. The DNA fiber method is a powerful tool that allows quantitative and qualitative analysis of DNA replication at the single-molecule level. The sequential pulse labeling of live cells with two thymidine analogues, followed by detection with specific antibodies and fluorescence imaging, allows direct examination of sites of DNA synthesis. In this chapter, we describe how this assay can be performed under conditions of low oxygen levels (hypoxia), a physiologically relevant stress that occurs in most solid tumors. Moreover, we suggest ways to overcome the technical problems that arise when using hypoxic chambers.
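    A typical downstream calculation for fiber data converts labeled track lengths into fork speeds. The sketch below assumes the commonly used conversion of ~2.59 kb per micron of stretched DNA and a 20-minute analogue pulse; neither value nor the track lengths are taken from this chapter:

```python
# Fork speed from fiber track length: speed = length_um * kb_per_um / pulse_min.
# The 2.59 kb/um constant is a conventional value for stretched DNA fibers;
# the 20-min pulse and all track lengths below are illustrative.

KB_PER_MICRON = 2.59

def fork_speed_kb_per_min(track_len_um, pulse_min=20.0):
    return track_len_um * KB_PER_MICRON / pulse_min

def median(xs):
    s = sorted(xs)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

normoxia = [8.1, 7.5, 9.0, 8.4]   # track lengths in microns (invented)
hypoxia  = [4.2, 3.9, 5.1, 4.5]   # shorter tracks under low oxygen (invented)
ratio = (fork_speed_kb_per_min(median(hypoxia))
         / fork_speed_kb_per_min(median(normoxia)))   # relative fork slowing
```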

  7. Measuring fracture energy under coseismic conditions

    NASA Astrophysics Data System (ADS)

    Nielsen, Stefan; Spagnuolo, Elena; Violay, Marie; Smith, Steven; Scarlato, Pier-Giorgio; Romeo, Gianni; Di Felice, Fabio; Di Toro, Giulio

    2013-04-01

    Experiments performed on rocks at deformation conditions typical of seismic slip show an extremely low friction coefficient, the activation of lubrication processes, and a power-law strength decay from a peak value to a residual, steady-state value. The weakening curve has an initially very abrupt decay which can be approximated by a power law. The resulting experimental fracture energy (defined, for a given slip amount u, as the integral between the frictional curve and the minimum frictional level reached, τf(u)) scales over most of the slip range as G ∝ u^α, a power law in some respects in agreement with the seismological estimate G′ ∝ u^1.28 proposed by Abercrombie and Rice (2005). The values of G and G′ are comparable for slips of about u = 1 cm (G ≈ 10^4 J/m^2). Both gradually increase with slip up to about 10^6 J/m^2; however, it appears that the fracture energy G′ is slightly larger than G in the slip range 0.1 < u < 10. The effective G′ observed at the seismological scale should implicitly incorporate energy sinks other than frictional dissipation alone, which we discuss (anelastic damage due to high off-fault dynamic stress close to the rupture tip; dissipation during the slip-localizing process within fault gouge of finite thickness; strain accommodating fault roughness at different scales). Since G′ is obtained by estimating the amount of dissipation with respect to strain energy and radiated energy, it implicitly incorporates the sum of all dissipative processes due to rupture propagation and fault slip. From the comparison of G obtained in the lab and in earthquakes, it appears that friction alone explains most of the dissipation, except perhaps at the largest magnitudes.
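    The fracture-energy definition used here (the area between the friction curve and the minimum friction level reached over the slip interval) can be evaluated numerically. The power-law weakening curve and its parameters below are illustrative stand-ins for the experimental data, not values from the study:

```python
# G(u) = integral over [0, u] of (tau(u') - tau_min) du', with tau_min the
# minimum friction level reached on the interval. Weakening parameters
# (10 MPa peak, 1 MPa residual, 5 cm decay distance) are invented.

def tau(u, tau_p=10e6, tau_r=1e6, d_c=0.05, alpha=1.0):
    """Peak-to-residual power-law weakening (Pa); u in metres."""
    return tau_r + (tau_p - tau_r) / (1.0 + u / d_c) ** alpha

def fracture_energy(u, n=20000):
    du = u / n
    us = [i * du for i in range(n + 1)]
    taus = [tau(x) for x in us]
    tau_min = min(taus)
    # trapezoidal integration of (tau - tau_min)
    return sum(0.5 * (taus[i] + taus[i + 1] - 2 * tau_min) * du
               for i in range(n))

g_1cm = fracture_energy(0.01)   # a few 1e3 J/m^2 for these toy parameters
```

    As in the abstract, G computed this way grows monotonically with the slip amount u, since both the integration interval widens and the reference minimum keeps dropping.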

  8. Theoretical framework and methodological development of common subjective health outcome measures in osteoarthritis: a critical review.

    PubMed

    Pollard, Beth; Johnston, Marie; Dixon, Diane

    2007-03-07

    Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing the value of a measure. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of its i) theoretical framework (was there a definition of what was being assessed and was it part of a theoretical model?) and ii) methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest), and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological development, the

  9. Surface monitoring measurements of materials on environmental change conditions

    NASA Astrophysics Data System (ADS)

    Tornari, Vivi; Bernikola, Eirini; Bellendorf, Paul; Bertolin, Chiara; Camuffo, Dario; Kotova, Lola; Jacobs, Daniela; Zarnic, Roko; Rajcic, Vlatka; Leissner, Johanna

    2013-05-01

    Climate change is one of the most critical global challenges of our time, and Europe's cultural heritage is particularly vulnerable if left unprotected. The Climate for Culture project examines the damage impact of climate change on cultural heritage at the regional scale. In this paper, the progress of the study is described, combining in situ measurements and investigations at cultural heritage sites throughout Europe with laboratory simulations. Cultural works of art are susceptible to deterioration: environmental changes cause an imperceptibly slow but steady accumulation of damage that directly affects structural integrity. A laser holographic interference method is employed to provide remote, non-destructive, field-wise detection of the structural differences occurring as climate responses. The first results from the climate simulation of South East Europe (Crete) are presented. A full study of the four climate regions of Europe is foreseen to provide values for the development of a precise and integrated model of thermographic building simulations for evaluating the impact of climate change. A third-generation, user-interface-software-optimised portable metrology system (DHSPI II) is designed to record, at custom intervals, the surface of materials witnessing reactions under simulated climatic conditions, both in the field and in the laboratory. The climate conditions refer to real data-logger readings representing a characteristic historical building in selected climate zones. New-generation impact sensors, termed Glass Sensors and Free Water Sensors, are employed in the monitoring procedure to cross-correlate climate data with deformation data. Results from the combined methodology are additionally presented.

  10. Analytical Methodology Used To Assess/Refine Observatory Thermal Vacuum Test Conditions For the Landsat 8 Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Fantano, Louis

    2015-01-01

    Thermal and Fluids Analysis Workshop, Silver Spring, MD, NCTS 21070-15. The Landsat 8 Data Continuity Mission, which is part of the United States Geologic Survey (USGS), launched February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine the proposed thermal vacuum test conditions, and the issues encountered in attempting to satisfy this requirement.

  11. A system and methodology for measuring volatile organic compounds produced by hydroponic lettuce in a controlled environment

    NASA Technical Reports Server (NTRS)

    Charron, C. S.; Cantliffe, D. J.; Wheeler, R. M.; Manukian, A.; Heath, R. R.

    1996-01-01

    A system and methodology were developed for the nondestructive qualitative and quantitative analysis of volatile emissions from hydroponically grown 'Waldmann's Green' leaf lettuce (Lactuca sativa L.). Photosynthetic photon flux (PPF), photoperiod, and temperature were automatically controlled and monitored in a growth chamber modified for the collection of plant volatiles. The lipoxygenase pathway products (Z)-3-hexenal, (Z)-3-hexenol, and (Z)-3-hexenyl acetate were emitted by lettuce plants after the transition from the light period to the dark period. The volatile collection system developed in this study enabled measurements of volatiles emitted by intact plants, from planting to harvest, under controlled environmental conditions.

  12. The development and application of an automatic boundary segmentation methodology to evaluate the vaporizing characteristics of diesel spray under engine-like conditions

    NASA Astrophysics Data System (ADS)

    Ma, Y. J.; Huang, R. H.; Deng, P.; Huang, S.

    2015-04-01

    Studying the vaporizing characteristics of diesel sprays could greatly help to reduce engine emissions and improve performance. High-speed schlieren imaging is an important optical technique for investigating the macroscopic vaporizing morphological evolution of liquid fuel, and pre-combustion constant volume combustion bombs are often used to simulate the high-pressure and high-temperature conditions occurring in diesel engines. Complicated background schlieren noise makes it difficult to segment the spray region in schlieren spray images. To tackle this problem, this paper develops a vaporizing spray boundary segmentation methodology based on an automatic threshold determination algorithm. The methodology was also used to quantify the macroscopic characteristics of vaporizing sprays, including tip penetration, near-field and far-field angles, and projected spray area and spray volume. The spray boundary segmentation methodology was realized in a MATLAB-based program. Comparisons were made between the spray characteristics obtained using the program and those acquired using a manual method and the Hiroyasu prediction model. It is demonstrated that the methodology can segment and measure vaporizing sprays precisely and efficiently. Furthermore, the experimental results show that the spray angles were only slightly affected by injection pressure under the high-temperature, high-pressure, inert conditions tested. A higher injection pressure leads to longer spray tip penetration and a larger projected area and volume, while elevating the ambient temperature significantly promotes the evaporation of cold fuel.
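    One common automatic threshold choice for this kind of segmentation is Otsu's between-class-variance criterion; whether the paper's own algorithm matches it is an assumption. The sketch applies Otsu's method to a toy grayscale frame and then extracts two of the quoted macroscopic characteristics, projected area and tip penetration, in pixel units:

```python
# Automatic threshold segmentation of a toy "schlieren" frame.

def otsu_threshold(pixels, levels=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w_b += hist[t]
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        m_b = sum_b / w_b
        m_f = (total_sum - sum_b) / (total - w_b)
        var = w_b * (total - w_b) * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Toy frame: dark spray (~30) entering from the left on a bright background (~200).
frame = [[200, 200, 200, 200, 200, 200],
         [200,  30,  30, 200, 200, 200],
         [ 30,  30,  30,  30, 200, 200],
         [200, 200, 200, 200, 200, 200]]
flat = [p for row in frame for p in row]
t = otsu_threshold(flat)
mask = [[p <= t for p in row] for row in frame]       # spray = dark pixels
area = sum(sum(row) for row in mask)                  # projected spray area (px)
tip = max(c for row in mask for c, hit in enumerate(row) if hit)  # penetration (px)
```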

  13. Uncertainty in urban stormwater quality modelling: the influence of likelihood measure formulation in the GLUE methodology.

    PubMed

    Freni, Gabriele; Mannina, Giorgio; Viviani, Gapare

    2009-12-15

    In recent years, attention to the integrated analysis of sewer networks, wastewater treatment plants and receiving waters has been growing. However, the common lack of data in the urban water-quality field and the incomplete knowledge regarding the interpretation of the main phenomena taking part in integrated urban water systems draw attention to the necessity of evaluating the reliability of model results. Uncertainty analysis can provide useful hints and information regarding the best model approach to be used by assessing its degree of significance and reliability. Few studies deal with uncertainty assessment in the integrated urban-drainage field. In order to fill this gap, there has been a general trend towards transferring knowledge and methodologies from other fields. In this respect, the Generalised Likelihood Uncertainty Estimation (GLUE) methodology, which is widely applied in the field of hydrology, is a possible candidate for providing a solution to the above problem. However, the methodology relies on several user-defined hypotheses in the selection of a specific formulation of the likelihood measure. This paper presents a survey aimed at evaluating the influence of the likelihood measure formulation on the assessment of uncertainty in integrated urban-drainage modelling. To accomplish this objective, a home-made integrated urban-drainage model was applied to the Savena case study (Bologna, IT). In particular, the integrated urban-drainage model uncertainty was evaluated employing different likelihood measures. The results demonstrate that the subjective selection of the likelihood measure greatly affects the GLUE uncertainty analysis.
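    The sensitivity to the likelihood formulation can be reproduced in miniature: weighting the same three-member ensemble with a Nash-Sutcliffe likelihood (zero behavioural cutoff) versus an exponential-of-error-variance likelihood (shaping factor N = 5, a hypothetical choice) gives clearly different GLUE weights. All series are synthetic:

```python
import math

def nse(obs, sim):
    """Nash-Sutcliffe efficiency of a simulated series."""
    mo = sum(obs) / len(obs)
    sse = sum((o - s) ** 2 for o, s in zip(obs, sim))
    return 1.0 - sse / sum((o - mo) ** 2 for o in obs)

def exp_likelihood(obs, sim, shape=5.0):
    """exp(-N * mean squared error), N = user-chosen shaping factor."""
    mse = sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs)
    return math.exp(-shape * mse)

def glue_weights(obs, sims, likelihood):
    raw = [max(0.0, likelihood(obs, s)) for s in sims]  # behavioural cutoff at 0
    total = sum(raw)
    return [r / total for r in raw]

obs = [10.0, 12.0, 15.0, 14.0, 11.0]
sims = [[10.2, 12.1, 14.8, 14.3, 11.1],   # close fit
        [11.0, 13.0, 16.0, 15.0, 12.0],   # constant +1 bias
        [14.0,  8.0, 19.0, 10.0, 15.0]]   # poor fit
w_nse = glue_weights(obs, sims, nse)
w_exp = glue_weights(obs, sims, exp_likelihood)
```

    The NSE weighting spreads posterior mass across the first two members, while the exponential form concentrates it almost entirely on the best one, exactly the kind of subjective difference the survey quantifies.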

  14. Optimization of flocculation conditions for kaolin suspension using the composite flocculant of MBFGA1 and PAC by response surface methodology.

    PubMed

    Yang, Zhao-Hui; Huang, Jing; Zeng, Guang-Ming; Ruan, Min; Zhou, Chang-Sheng; Li, Lu; Rong, Zong-Gen

    2009-09-01

    The response surface methodology (RSM) was employed to study the treatment of a kaolin suspension by the composite flocculant of MBFGA1 and PAC. Two quadratic models of the five factors were established, with the flocculating rate and floc size as the target responses. The optimal flocculating conditions are MBFGA1 99.75 mg/L, PAC 121 mg/L, pH 7.3, CaCl(2) 27 mg/L, and a stirring speed of 163 rpm, obtained as a compromise between the two desirable responses, a flocculating rate of 100% and a floc size of 0.7 mm, deduced from the frequency of responses. By means of zeta potential measurements and experiments on the flocculating process, it could be concluded that PAC is more capable of changing the colloid potential, while MBFGA1 excels at adsorption and bridging. Combining these two complementary strengths significantly enhances the flocculating rate, reduces flocculant costs, and decreases secondary pollution. PMID:19419858
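    The core RSM step, fitting a second-order polynomial to coded factor levels and locating its stationary point, can be sketched for a single factor; the dose-response numbers below are invented and are not the paper's data:

```python
# Least-squares quadratic fit y = b0 + b1*x + b2*x^2 and its stationary point.

def solve3(a, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    m = [row[:] + [bi] for row, bi in zip(a, b)]
    for c in range(3):
        piv = max(range(c, 3), key=lambda r: abs(m[r][c]))
        m[c], m[piv] = m[piv], m[c]
        for r in range(3):
            if r != c:
                f = m[r][c] / m[c][c]
                m[r] = [x - f * y for x, y in zip(m[r], m[c])]
    return [m[i][3] / m[i][i] for i in range(3)]

def fit_quadratic(xs, ys):
    # normal equations X^T X b = X^T y with design columns [1, x, x^2]
    s = lambda k: sum(x ** k for x in xs)
    xtx = [[len(xs), s(1), s(2)], [s(1), s(2), s(3)], [s(2), s(3), s(4)]]
    xty = [sum(ys), sum(x * y for x, y in zip(xs, ys)),
           sum(x * x * y for x, y in zip(xs, ys))]
    return solve3(xtx, xty)

dose = [-2, -1, 0, 1, 2]                # coded flocculant-dose levels (invented)
rate = [82.0, 93.0, 98.0, 96.0, 87.0]   # flocculating rate, % (invented)
b0, b1, b2 = fit_quadratic(dose, rate)
x_opt = -b1 / (2 * b2)                  # stationary (maximum) point, coded units
```

    In a full five-factor RSM the same idea applies with interaction terms added, and the contour/desirability analysis picks the compromise between the two responses.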

  15. ASSESSING THE CONDITION OF SOUTH CAROLINA'S ESTUARIES: A NEW APPROACH INVOLVING INTEGRATED MEASURES OF CONDITION

    EPA Science Inventory

    The South Carolina Estuarine and Coastal Assessment Program (SCECAP) was initiated in 1999 to assess the condition of the state's coastal habitats using multiple measures of water quality, sediment quality, and biological condition. Sampling has subsequently been expanded to incl...

  16. [Roaming through methodology. XXV. Outcome measures, surrogate outcomes, and intermediate measures].

    PubMed

    Boers, M

    2000-10-01

    Clinical medicine is aimed at decreasing the current or future burden of disease. Disease outcome is best expressed as the burden of disease or change thereof, and as such is the only true 'outcome measure' in clinical research. However, outcome research is expensive in cost and time, and sometimes impossible. Therefore study results are often expressed in intermediate measures. These measures tell us something about the disease process and the pathophysiological consequences of the disease, and should have a relation with outcome. If this relation is strong, the measure is called a 'surrogate outcome'. Intermediate measures and surrogate outcomes have advantages and disadvantages. The reader of clinical trial results first has to decide whether the answer to the study question is relevant to his personal situation. If so, the applicability of a measure can be simply appraised by answering 3 questions: 'Is the measure truthful (relevant, unbiased)?'; 'Does it discriminate between situations that are of interest?'; 'Is the measure feasible in my setting?' PMID:11048558

  17. An inverse problem for a class of conditional probability measure-dependent evolution equations

    NASA Astrophysics Data System (ADS)

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-09-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by partial differential equation models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach.
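    The discretization idea can be illustrated with a toy forward map that is linear in the measure: approximate the unknown measure by Dirac masses at fixed nodes and estimate the weights by least squares over the probability simplex. The brute-force grid search below is far cruder than the paper's scheme, and the model, nodes, and data are all invented:

```python
import math

def model_output(weights, nodes, t):
    # toy forward map, linear in the measure: sum_i w_i * exp(-node_i * t)
    return sum(w * math.exp(-x * t) for w, x in zip(weights, nodes))

def estimate_weights(nodes, times, data, step=0.01):
    """Least squares over the 3-node weight simplex (w_i >= 0, sum w_i = 1)
    by exhaustive grid search."""
    best, best_err = None, float("inf")
    n = int(round(1 / step))
    for i in range(n + 1):
        for j in range(n + 1 - i):
            w = (i * step, j * step, 1.0 - (i + j) * step)
            err = sum((model_output(w, nodes, t) - d) ** 2
                      for t, d in zip(times, data))
            if err < best_err:
                best, best_err = w, err
    return best

nodes = [0.5, 1.0, 2.0]        # fixed Dirac-mass locations
times = [0.0, 0.5, 1.0, 2.0]   # observation times
true_w = (0.2, 0.5, 0.3)
data = [model_output(true_w, nodes, t) for t in times]
w_hat = estimate_weights(nodes, times, data)
```

    With noise-free synthetic data the estimator recovers the generating weights, which is the consistency property the paper proves in the Prohorov metric framework.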

  18. A practical methodology to measure unbiased gas chromatographic retention factor vs. temperature relationships.

    PubMed

    Peng, Baijie; Kuo, Mei-Yi; Yang, Panhia; Hewitt, Joshua T; Boswell, Paul G

    2014-12-29

    Compound identification continues to be a major challenge. Gas chromatography-mass spectrometry (GC-MS) is a primary tool used for this purpose, but the GC retention information it provides is underutilized because existing retention databases are experimentally restrictive and unreliable. A methodology called "retention projection" has the potential to overcome these limitations, but it requires the retention factor (k) vs. T relationship of a compound to calculate its retention time. Direct methods of measuring k vs. T relationships from a series of isothermal runs are tedious and time-consuming. Instead, a series of temperature programs can be used to quickly measure the k vs. T relationships, but they are generally not as accurate when measured this way because they are strongly biased by non-ideal behavior of the GC system in each of the runs. In this work, we overcome that problem by using the retention times of 25 n-alkanes to back-calculate the effective temperature profile and hold-up time vs. T profiles produced in each of the six temperature programs. When the profiles were measured this way and taken into account, the k vs. T relationships measured from each of two different GC-MS instruments were nearly as accurate as the ones measured isothermally, showing less than two-fold more error. Furthermore, temperature-programmed retention times calculated in five other laboratories from the new k vs. T relationships had the same distribution of error as when they were calculated from k vs. T relationships measured isothermally. Free software was developed to make the methodology easy to use. The new methodology potentially provides a relatively fast and easy way to measure unbiased k vs. T relationships.
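    Retention projection uses a k vs. T relationship roughly as follows: integrate the fractional migration dt / (t_M (1 + k(T(t)))) along the temperature program until it reaches 1, at which point the compound elutes. The two-parameter ln k = a + b/T model, the constant hold-up time, and all numbers below are illustrative assumptions, not the paper's calibration:

```python
import math

def k_of_T(T_kelvin, a=-12.0, b=5500.0):
    """Two-parameter retention model ln k = a + b/T (illustrative values)."""
    return math.exp(a + b / T_kelvin)

def projected_rt(t0_celsius=40.0, ramp=10.0, t_m=1.0, dt=0.001):
    """Retention time for a linear ramp (deg C/min); t_m = hold-up time (min),
    held constant here for simplicity."""
    t, migrated = 0.0, 0.0
    while migrated < 1.0:
        T = 273.15 + t0_celsius + ramp * t
        migrated += dt / (t_m * (1.0 + k_of_T(T)))
        t += dt
    return t

rt = projected_rt()   # elutes once the ramp has driven k low enough
```

    The paper's contribution is making the k vs. T inputs to this integral unbiased by back-calculating the instrument's effective temperature and hold-up profiles from n-alkane retention times.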

  19. Methodologies for measuring travelers' risk perception of infectious diseases: A systematic review.

    PubMed

    Sridhar, Shruti; Régner, Isabelle; Brouqui, Philippe; Gautret, Philippe

    2016-01-01

    Numerous studies in the past have stressed the importance of travelers' psychology and perception in the implementation of preventive measures. The aim of this systematic review was to identify the methodologies used in studies reporting on travelers' risk perception of infectious diseases. A systematic search for relevant literature was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. There were 39 studies identified. In 35 of the 39 studies, the methodology used was that of a knowledge, attitude and practice (KAP) survey based on questionnaires. One study used a combination of questionnaires and a visual psychometric measuring instrument called the 'pictorial representation of illness and self-measurement' (PRISM). One study used a self-representation model (SRM) method. Two studies measured psychosocial factors. Valuable information was obtained from KAP surveys showing an overall lack of knowledge among travelers about the most frequent travel-associated infections and the associated preventive measures. This methodological approach, however, is mainly descriptive, addressing knowledge, attitudes, and practices separately and lacking an examination of the interrelationships between these three components. Another limitation of the KAP method is that it underestimates psychosocial variables that have proved influential in health-related behaviors, including the perceived benefits and costs of preventive measures, perceived social pressure, perceived personal control, unrealistic optimism and risk propensity. Future risk perception studies in travel medicine should consider psychosocial variables with inferential and multivariate statistical analyses. The use of implicit measurements of attitudes could also provide new insights into the field of travelers' risk perception of travel-associated infectious diseases. PMID:27238906

  20. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SUPPLIERS Requirements for Certification and Designation and Conditions for Coverage: Organ Procurement Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a..., territories, or possessions, an OPO must meet all 3 of the following outcome measures: (1) The OPO's...

  1. Direct sample positioning and alignment methodology for strain measurement by diffraction

    NASA Astrophysics Data System (ADS)

    Ratel, N.; Hughes, D. J.; King, A.; Malard, B.; Chen, Z.; Busby, P.; Webster, P. J.

    2005-05-01

    An ISO (International Organization for Standardization) TTA (Technology Trends Assessment) was published in 2001 for the determination of residual stress using neutron diffraction which identifies sample alignment and positioning as a key source of strain measurement error. Although the measurement uncertainty by neutron and synchrotron x-ray diffraction for an individual measurement of lattice strain is typically of the order of 10-100×10^-6, specimens commonly exhibit strain gradients of 1000×10^-6 mm^-1 or more, making sample location a potentially considerable source of error. An integrated approach to sample alignment and positioning is described which incorporates standard base-plates and sample holders, instrument alignment procedures, accurate digitization using a coordinate measuring machine and automatic generation of instrument control scripts. The methodology that has been developed is illustrated by the measurement of the transverse residual strain field in a welded steel T-joint using neutrons.
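    The arithmetic behind the positioning concern is simple: with a strain gradient of 1000×10^-6 per mm, a 0.1 mm positioning error already shifts the measured strain by as much as the quoted diffraction uncertainty. A minimal sketch (the example d-spacings are invented):

```python
# Lattice strain and the apparent strain error from sample mispositioning.

def lattice_strain(d, d0):
    """epsilon = (d - d0) / d0, from measured and stress-free d-spacings."""
    return (d - d0) / d0

def positioning_strain_error(gradient_per_mm, offset_mm):
    """Apparent strain shift caused by measuring at the wrong location."""
    return gradient_per_mm * offset_mm

eps = lattice_strain(1.17023, 1.17000)        # example d-spacings (Angstrom)
err = positioning_strain_error(1000e-6, 0.1)  # 1e-4 strain from a 0.1 mm offset
```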

  2. Direct sample positioning and alignment methodology for strain measurement by diffraction

    SciTech Connect

    Ratel, N.; Hughes, D.J.; King, A.; Malard, B.; Chen, Z.; Busby, P.; Webster, P.J.

    2005-05-15

    An ISO (International Organization for Standardization) TTA (Technology Trends Assessment) was published in 2001 for the determination of residual stress using neutron diffraction which identifies sample alignment and positioning as a key source of strain measurement error. Although the measurement uncertainty by neutron and synchrotron x-ray diffraction for an individual measurement of lattice strain is typically of the order of 10-100×10^-6, specimens commonly exhibit strain gradients of 1000×10^-6 mm^-1 or more, making sample location a potentially considerable source of error. An integrated approach to sample alignment and positioning is described which incorporates standard base-plates and sample holders, instrument alignment procedures, accurate digitization using a coordinate measuring machine and automatic generation of instrument control scripts. The methodology that has been developed is illustrated by the measurement of the transverse residual strain field in a welded steel T-joint using neutrons.

  3. Optimization of processing conditions for the sterilization of retorted short-rib patties using the response surface methodology.

    PubMed

    Choi, Su-Hee; Cheigh, Chan-Ick; Chung, Myong-Soo

    2013-05-01

    The aim of this study was to determine the optimum sterilization conditions for short-rib patties in retort trays by considering microbiological safety, nutritive value, sensory characteristics, and textural properties. In total, 27 sterilization conditions with various temperatures, times, and processing methods were tested using a 3³ factorial design. The response surface methodology (RSM) and contour analysis were applied to find the optimum sterilization conditions for the patties. Quality attributes were significantly affected by the sterilization temperature, time, and processing method. From RSM and contour analysis, the final optimum sterilization condition of the patties that simultaneously satisfied all specifications was determined to be 119.4°C for 18.55 min using a water-cascading rotary mode. The findings of the present study suggest that using optimized sterilization conditions will improve the microbial safety, sensory attributes, and nutritional retention for retorted short-rib patties.
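The RSM step described in this record can be sketched in code: fit a full quadratic model to observations from a 3×3 factorial design (coded factor levels), then locate the optimum by a grid search over the fitted surface, a stand-in for contour analysis. The response function, factor levels, and optimum below are invented for the sketch, not the study's data.

```python
def fit_quadratic(points, y):
    # Least-squares fit of y = b0 + b1*a + b2*b + b3*a^2 + b4*b^2 + b5*a*b
    # (coded factor levels) via the normal equations and Gaussian elimination.
    X = [[1.0, a, b, a*a, b*b, a*b] for a, b in points]
    n = 6
    A = [[sum(r[i]*r[j] for r in X) for j in range(n)] for i in range(n)]
    v = [sum(r[i]*yk for r, yk in zip(X, y)) for i in range(n)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(A[r][i]))   # partial pivoting
        A[i], A[p] = A[p], A[i]
        v[i], v[p] = v[p], v[i]
        for r in range(i + 1, n):
            f = A[r][i] / A[i][i]
            for c in range(i, n):
                A[r][c] -= f * A[i][c]
            v[r] -= f * v[i]
    coef = [0.0] * n
    for i in reversed(range(n)):
        coef[i] = (v[i] - sum(A[i][j]*coef[j] for j in range(i + 1, n))) / A[i][i]
    return coef

def predict(coef, a, b):
    return sum(c*u for c, u in zip(coef, [1.0, a, b, a*a, b*b, a*b]))

# Hypothetical quality response on coded (-1..1) temperature/time factors,
# with a known optimum at (0.2, -0.1)
def quality(a, b):
    return 5.0 - 0.3 * (a - 0.2) ** 2 - 0.5 * (b + 0.1) ** 2

levels = (-1.0, 0.0, 1.0)                       # 3x3 factorial design
points = [(a, b) for a in levels for b in levels]
coef = fit_quadratic(points, [quality(a, b) for a, b in points])
# Grid search over the fitted surface (the contour-analysis step)
score, a_opt, b_opt = max((predict(coef, -1 + 0.01*i, -1 + 0.01*j),
                           -1 + 0.01*i, -1 + 0.01*j)
                          for i in range(201) for j in range(201))
```

Because the true response here is itself quadratic, the fit recovers it essentially exactly and the grid search lands on the known optimum.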

  4. An in-situ soil structure characterization methodology for measuring soil compaction

    NASA Astrophysics Data System (ADS)

    Dobos, Endre; Kriston, András; Juhász, András; Sulyok, Dénes

    2016-04-01

    The agricultural cultivation has several direct and indirect effects on the soil properties, among which soil structure degradation is the best known and most detectable one. Soil structure degradation leads to several water and nutrient management problems, which reduce the efficiency of agricultural production. There are several innovative technological approaches aiming to reduce these negative impacts on the soil structure. The testing, validation and optimization of these methods require an adequate technology to measure the impacts on the complex soil system. This study aims to develop an in-situ soil structure and root development testing methodology, which can be used in field experiments and which allows one to follow the real-time changes in the soil structure (evolution / degradation) and their quantitative characterization. The method is adapted from remote sensing image processing technology. A specially adapted A4-size scanner is placed into the soil at a safe depth that cannot be reached by the agrotechnical treatments. Only the scanner's USB cable comes to the surface, allowing image acquisition without any soil disturbance. Several images of the same place can be taken throughout the vegetation season to follow the soil consolidation and structure development after the last tillage treatment for seedbed preparation. The scanned image of the soil profile is classified using supervised image classification, namely the maximum likelihood classification algorithm. The resulting image has two principal classes, soil matrix and pore space, and other complementary classes to cover the occurring thematic classes, such as roots and stones. The calculated data are calibrated with field-sampled porosity data. As the scanner is buried under the soil with no changes in light conditions, the image processing can be automated for better temporal comparison. Besides the total porosity, each pore size fraction and its distribution can be calculated for
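The maximum likelihood classification step can be illustrated with a toy two-class (pore vs. matrix) grayscale model; the pixel statistics below are invented for the sketch and stand in for training areas digitized on the scanned soil profile.

```python
import math
import random

def ml_classify(pixel, classes):
    # Assign the class whose 1-D Gaussian model gives the highest likelihood
    def loglik(x, mu, sd):
        return -0.5 * ((x - mu) / sd) ** 2 - math.log(sd)
    return max(classes, key=lambda c: loglik(pixel, *classes[c]))

random.seed(1)
# Synthetic grayscale profile image: dark pores (~40), bright matrix (~180)
pixels = ([random.gauss(40, 15) for _ in range(3000)] +
          [random.gauss(180, 25) for _ in range(7000)])
classes = {"pore": (40.0, 15.0), "matrix": (180.0, 25.0)}  # from training areas
labels = [ml_classify(p, classes) for p in pixels]
porosity = labels.count("pore") / len(labels)   # total porosity estimate
```

With well-separated class statistics the classified pore fraction tracks the true 30% pore share of the synthetic image; in the real workflow this estimate would then be calibrated against field-sampled porosity.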

  5. A methodology to determine boundary conditions from forced convection experiments using liquid crystal thermography

    NASA Astrophysics Data System (ADS)

    Jakkareddy, Pradeep S.; Balaji, C.

    2016-05-01

    This paper reports the results of an experimental study to estimate the heat flux and convective heat transfer coefficient using liquid crystal thermography and Bayesian inference in a heat generating sphere, enclosed in a cubical Teflon block. The geometry considered for the experiments comprises a heater inserted in a hollow hemispherical aluminium ball, resulting in a volumetric heat generation source that is placed at the center of the Teflon block. Calibrated thermochromic liquid crystal sheets are used to capture the temperature distribution at the front face of the Teflon block. The forward model is the three dimensional conduction equation which is solved within the Teflon block to obtain steady state temperatures, using COMSOL. Match-up experiments are carried out for various velocities by minimizing the residual between TLC and simulated temperatures for every assumed loss coefficient, to obtain a correlation of average Nusselt number against Reynolds number. This is used for prescribing the boundary condition for the solution to the forward model. A surrogate model, obtained by an artificial neural network built upon the data from COMSOL simulations, is used to drive a Markov Chain Monte Carlo based Metropolis-Hastings algorithm to generate the samples. Bayesian inference is adopted to solve the inverse problem for determination of heat flux and heat transfer coefficient from the measured temperature field. Point estimates of the posterior like the mean, maximum a posteriori and standard deviation of the retrieved heat flux and convective heat transfer coefficient are reported. Additionally, the effect of the number of samples on the performance of the estimation process has been investigated.
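The inverse step can be sketched as a random-walk Metropolis-Hastings sampler driven by a cheap surrogate forward model. The linear surrogate, prior bounds, noise level, and "true" heat flux below are invented placeholders for the ANN surrogate trained on COMSOL data in the study.

```python
import math
import random

random.seed(0)
true_q = 500.0                              # hypothetical heat flux
surrogate = lambda q: 30.0 + 0.02 * q       # stand-in for the ANN surrogate
sigma = 0.5                                 # assumed measurement noise, deg C
data = [surrogate(true_q) + random.gauss(0, sigma) for _ in range(20)]

def log_post(q):
    if not 0.0 <= q <= 2000.0:              # uniform prior support
        return -math.inf
    m = surrogate(q)
    return -sum((d - m) ** 2 for d in data) / (2 * sigma ** 2)

q = 1000.0                                  # deliberately poor starting point
lp = log_post(q)
samples = []
for _ in range(20000):                      # random-walk Metropolis-Hastings
    q_new = q + random.gauss(0, 20.0)
    lp_new = log_post(q_new)
    if math.log(random.random()) < lp_new - lp:
        q, lp = q_new, lp_new
    samples.append(q)
post = samples[5000:]                       # discard burn-in
mean_q = sum(post) / len(post)
sd_q = (sum((s - mean_q) ** 2 for s in post) / len(post)) ** 0.5
```

The posterior mean recovers the flux used to generate the data, and the posterior standard deviation plays the role of the reported spread of the retrieved quantity.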

  6. Methodology for the calibration of and data acquisition with a six-degree-of-freedom acceleration measurement device

    NASA Astrophysics Data System (ADS)

    Lee, Harvey; Plank, Gordon; Weinstock, Herbert; Coltman, Michael

    1989-06-01

    Described here is a methodology for calibrating and gathering data with a six-degree-of-freedom acceleration measurement device that is intended to measure head acceleration of anthropomorphic dummies and human volunteers in automotive crash testing and head impact trauma studies. Error models (system equations) were developed for systems using six accelerometers in a coplanar (3-2-1) configuration, nine accelerometers in a coplanar (3-3-3) configuration and nine accelerometers in a non-coplanar (3-2-2-2) configuration and the accuracy and stability of these systems were compared. The model was verified under various input and computational conditions. Results of parametric sensitivity analyses which included parameters such as system geometry, coordinate system location, data sample rate and accelerometer cross axis sensitivities are presented. Recommendations to optimize data collection and reduction are given. Complete source listings of all of the software developed are presented.

  7. Sensitive Measures of Condition Change in EEG Data

    SciTech Connect

    Hively, L.M.; Gailey, P.C.; Protopopescu, V.

    1999-03-10

    We present a new, robust, model-independent technique for measuring condition change in nonlinear data. We define indicators of condition change by comparing distribution functions (DF) defined on the attractor for time windowed data sets via L₁-distance and χ² statistics. The new measures are applied to EEG data with the objective of detecting the transition between non-seizure and epileptic brain activity in an accurate and timely manner. We find a clear superiority of the new metrics in comparison to traditional nonlinear measures as discriminators of condition change.
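The window-to-window comparison can be sketched as follows: bin two time windows into distribution functions, then score change with the L₁ distance and a symmetric χ² statistic. The Gaussian windows below are illustrative stand-ins for the paper's attractor-based DFs.

```python
import random

def hist(data, edges):
    counts = [0] * (len(edges) - 1)
    for x in data:
        for i in range(len(edges) - 1):
            if edges[i] <= x < edges[i + 1]:
                counts[i] += 1
                break
    return counts

def normalize(counts):
    total = sum(counts)
    return [c / total for c in counts]

def l1_distance(p, q):
    return sum(abs(a - b) for a, b in zip(p, q))

def chi2_distance(p, q):
    # Symmetric chi-squared statistic between two binned DFs
    return sum((a - b) ** 2 / (a + b) for a, b in zip(p, q) if a + b > 0)

random.seed(7)
edges = [-4 + 0.5 * i for i in range(17)]              # bins on [-4, 4)
baseline = normalize(hist([random.gauss(0, 1) for _ in range(2000)], edges))
quiet    = normalize(hist([random.gauss(0, 1) for _ in range(2000)], edges))
changed  = normalize(hist([random.gauss(1, 1) for _ in range(2000)], edges))
```

A window drawn from the same regime as the baseline scores near zero on both indicators, while the shifted window scores far higher, which is the discrimination the abstract describes.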

  8. An innovative methodology for measurement of stress distribution of inflatable membrane structures

    NASA Astrophysics Data System (ADS)

    Zhao, Bing; Chen, Wujun; Hu, Jianhui; Chen, Jianwen; Qiu, Zhenyu; Zhou, Jinyu; Gao, Chengjun

    2016-02-01

    The inflatable membrane structure has been widely used in the fields of civil building, industrial building, airship, super pressure balloon and spacecraft. It is important to measure the stress distribution of the inflatable membrane structure because it influences the safety of the structural design. This paper presents an innovative methodology for the measurement and determination of the stress distribution of the inflatable membrane structure under different internal pressures, combining photogrammetry and the force-finding method. The shape of the inflatable membrane structure is maintained by the use of pressurized air, and the internal pressure is controlled and measured by means of an automatic pressure control system. The 3D coordinates of the marking points pasted on the membrane surface are acquired by three photographs captured from three cameras based on photogrammetry. After digitizing the markings on the photographs, the 3D curved surfaces are rebuilt. The continuous membrane surfaces are discretized into quadrilateral mesh and simulated by membrane links to calculate the stress distributions using the force-finding method. The internal pressure is simplified to the external node forces in the normal direction according to the contributory area of the node. Once the geometry x, the external force r and the topology C are obtained, the unknown force densities q in each link can be determined. Therefore, the stress distributions of the inflatable membrane structure can be calculated, combining the linear adjustment theory and the force density method based on the force equilibrium of inflated internal pressure and membrane internal force without considering the mechanical properties of the constitutive material. As the use of the inflatable membrane structure is attractive in the field of civil building, an ethylene-tetrafluoroethylene (ETFE) cushion is used with the measurement model to validate the proposed methodology. 
The comparisons between the
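The force-equilibrium idea behind the force-finding step can be sketched on a single node: with the geometry, link topology, and external (pressure-equivalent) nodal force known, the force densities q must balance the node. The pyramid geometry and unit load below are invented for illustration.

```python
def link_force(q, xi, xj):
    # Force exerted on node i by link (i, j) with force density q = s / l
    return tuple(q * (b - a) for a, b in zip(xi, xj))

def residual(node, neighbors, qs, external):
    # Equilibrium residual: sum of link forces plus the external nodal force
    f = list(external)
    for q, xj in zip(qs, neighbors):
        for k, comp in enumerate(link_force(q, node, xj)):
            f[k] += comp
    return f

# Apex of a shallow pyramid held by four links against a unit normal load
apex = (0.0, 0.0, 1.0)
corners = [(1.0, 1.0, 0.0), (1.0, -1.0, 0.0), (-1.0, 1.0, 0.0), (-1.0, -1.0, 0.0)]
load = (0.0, 0.0, 1.0)           # pressure resultant acting along the normal
qs = [0.25, 0.25, 0.25, 0.25]    # equal force densities balance this node
res = residual(apex, corners, qs, load)
```

By symmetry the in-plane components cancel and each link carries a quarter of the normal load, so the equilibrium residual vanishes; on a full mesh the same balance equations are solved for the unknown q in every link.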

  9. The Epidemiology of Lead Toxicity in Adults: Measuring Dose and Consideration of Other Methodologic Issues

    PubMed Central

    Hu, Howard; Shih, Regina; Rothenberg, Stephen; Schwartz, Brian S.

    2007-01-01

    We review several issues of broad relevance to the interpretation of epidemiologic evidence concerning the toxicity of lead in adults, particularly regarding cognitive function and the cardiovascular system, which are the subjects of two systematic reviews that are also part of this mini-monograph. Chief among the recent methodologic advances has been the refinement of concepts and methods for measuring individual lead dose, in terms of appreciating distinctions between recent versus cumulative doses and the use of biological markers to measure these parameters in epidemiologic studies of chronic disease. Attention is focused particularly on bone lead levels measured by K-shell X-ray fluorescence as a relatively new biological marker of cumulative dose that has been used in many recent epidemiologic studies to generate insights into lead's impact on cognition and risk of hypertension, as well as on the alternative method of estimating cumulative dose using available repeated measures of blood lead to calculate an individual's cumulative blood lead index. We review the relevance and interpretation of these lead biomarkers in the context of the toxicokinetics of lead. In addition, we discuss methodologic challenges that arise in studies of occupationally and environmentally exposed subjects, and those concerning race/ethnicity, socioeconomic status, and other important covariates. PMID:17431499
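The cumulative blood lead index mentioned in this record is, in essence, the time integral of repeated blood lead measurements; a trapezoidal sketch with invented numbers:

```python
def cumulative_blood_lead_index(years, levels):
    """Trapezoidal integral of blood lead (ug/dL) over time (years)."""
    total = 0.0
    for i in range(1, len(years)):
        total += (years[i] - years[i - 1]) * (levels[i] + levels[i - 1]) / 2.0
    return total

# Hypothetical worker: 10 ug/dL at hire, peaking at 20 in year 5, back to 10 by year 10
cbli = cumulative_blood_lead_index([0.0, 5.0, 10.0], [10.0, 20.0, 10.0])
```

Two trapezoids of area 75 each give an index of 150 ug/dL-years, a dose metric that can then be related to chronic outcomes.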

  10. A methodology to condition distorted acoustic emission signals to identify fracture timing from human cadaver spine impact tests.

    PubMed

    Arun, Mike W J; Yoganandan, Narayan; Stemper, Brian D; Pintar, Frank A

    2014-12-01

    While studies have used acoustic sensors to determine fracture initiation time in biomechanical studies, a systematic procedure for processing the acoustic signals has not been established. The objective of the study was to develop a methodology to condition distorted acoustic emission data using signal processing techniques to identify fracture initiation time. The methodology was developed from testing a human cadaver lumbar spine column. Acoustic sensors were glued to all vertebrae, high-rate impact loading was applied, load-time histories were recorded (load cell), and fracture was documented using CT. Compression fracture occurred to L1 while the other vertebrae were intact. FFTs of the raw voltage-time traces were used to determine an optimum frequency range associated with high decibel levels. Signals were bandpass filtered in this range. A bursting pattern was found in the fractured vertebra while signals from the other vertebrae were silent. The bursting time was associated with the time of fracture initiation. The force at fracture was determined using this time and the force-time data. The methodology is independent of selecting parameters a priori, such as fixing voltage levels or a bandpass frequency and/or using the force-time signal, and it allows determination of force based on the time identified during signal processing. The methodology can be used for different body regions in cadaver experiments.
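The burst-detection idea can be sketched with a synthetic record: background noise plus a decaying sinusoidal burst at a hypothetical fracture time, located with a sliding-RMS onset detector. This is a simplified stand-in for the paper's FFT-guided bandpass step; all signal parameters are invented.

```python
import math
import random

random.seed(3)
fs = 10000.0                                  # sampling rate, Hz
t_frac = 0.12                                 # hypothetical fracture time, s
signal = []
for i in range(2000):
    t = i / fs
    x = random.gauss(0.0, 0.05)               # background noise
    if t >= t_frac:                           # decaying 1.5 kHz burst at fracture
        x += math.exp(-(t - t_frac) / 0.01) * math.sin(2 * math.pi * 1500 * (t - t_frac))
    signal.append(x)

def burst_onset(sig, fs, win=50, k=6.0):
    # First window whose RMS exceeds k times the pre-event baseline RMS
    base = math.sqrt(sum(s * s for s in sig[:200]) / 200)
    for i in range(len(sig) - win):
        rms = math.sqrt(sum(s * s for s in sig[i:i + win]) / win)
        if rms > k * base:
            return i / fs
    return None

onset = burst_onset(signal, fs)
```

The detected onset sits at the planted burst time to within a window length, and in the cadaver methodology that time is then read against the load cell trace to obtain the force at fracture.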

  11. The methodologies and instruments of vehicle particulate emission measurement for current and future legislative regulations

    NASA Astrophysics Data System (ADS)

    Otsuki, Yoshinori; Nakamura, Hiroshi; Arai, Masataka; Xu, Min

    2015-09-01

    Since the health risks associated with fine particles whose aerodynamic diameters are smaller than 2.5 μm were first proven, regulations restricting particulate matter (PM) mass emissions from internal combustion engines have become increasingly severe. Accordingly, the gravimetric method of PM mass measurement is facing its lower limit of detection as the emissions from vehicles are further reduced. For example, the variation in the adsorption of gaseous components such as hydrocarbons from unburned fuel and lubricant oil and the presence of agglomerated particles, which are not directly generated in engine combustion but are re-entrained particulates from the walls of sampling pipes, can cause uncertainty in measurement. The PM mass measurement systems and methodologies have been continuously refined in order to improve measurement accuracy. As an alternative metric, the particle measurement programme (PMP) within the United Nations Economic Commission for Europe (UNECE) developed a solid particle number measurement method in order to improve the sensitivity of particulate emission measurement from vehicles. Consequently, particle number (PN) limits were implemented into the regulations in Europe from 2011. Recently, portable emission measurement systems (PEMS) for in-use vehicle emission measurements are also attracting attention, currently in North America and Europe, and real-time PM mass and PN instruments are under evaluation.

  12. Structural acoustic control of plates with variable boundary conditions: design methodology.

    PubMed

    Sprofera, Joseph D; Cabell, Randolph H; Gibbs, Gary P; Clark, Robert L

    2007-07-01

    A method for optimizing a structural acoustic control system subject to variations in plate boundary conditions is provided. The assumed modes method is used to build a plate model with varying levels of rotational boundary stiffness to simulate the dynamics of a plate with uncertain edge conditions. A transducer placement scoring process, involving Hankel singular values, is combined with a genetic optimization routine to find spatial locations robust to boundary condition variation. Predicted frequency response characteristics are examined, and theoretically optimized results are discussed in relation to the range of boundary conditions investigated. Modeled results indicate that it is possible to minimize the impact of uncertain boundary conditions in active structural acoustic control by optimizing the placement of transducers with respect to those uncertainties. PMID:17614487

  13. Contingency spaces and measures in classical and instrumental conditioning.

    PubMed

    Gibbon, J; Berryman, R; Thompson, R L

    1974-05-01

    The contingency between conditional and unconditional stimuli in classical conditioning paradigms, and between responses and consequences in instrumental conditioning paradigms, is analyzed. The results are represented in two- and three-dimensional spaces in which points correspond to procedures, or procedures and outcomes. Traditional statistical and psychological measures of association are applied to data in classical conditioning. Root mean square contingency, φ, is proposed as a measure of contingency characterizing classical conditioning effects at asymptote. In instrumental training procedures, traditional measures of association are inappropriate, since one degree of freedom, response probability, is yielded to the subject. Further analysis of instrumental contingencies yields a surprising result. The well-established "Matching Law" in free-operant concurrent schedules subsumes the "Probability Matching" finding of mathematical learning theory, and both are equivalent to zero contingency between responses and consequences.
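The proposed asymptotic measure can be written down directly: for a 2×2 contingency table of procedure versus outcome, the root mean square contingency φ is the standard phi coefficient. The example tables are illustrative.

```python
import math

def phi_coefficient(a, b, c, d):
    """Root mean square contingency for the 2x2 table [[a, b], [c, d]],
    e.g. rows = CS present/absent, columns = US present/absent."""
    den = math.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / den if den else 0.0
```

A perfectly predictive pairing (US if and only if CS) gives φ = 1, independence of CS and US gives φ = 0, and intermediate tables fall in between, which is what makes φ usable as a graded measure of classical conditioning contingency.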

  14. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared memory multiprocessors by accurately predicting the effects of source code level modifications. Measurements on a single processor are initially used for identifying parts of code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. Minimal runtime information for modeling cache performance of a selected code block includes: base virtual addresses of arrays, virtual addresses of variables, and loop bounds for that code block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful to the exploration of various "what-if" scenarios regarding the cache performance impact for alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
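The kind of what-if exploration described can be miniaturized: a direct-mapped cache model fed with the address trace of a matrix traversal predicts how loop order changes the miss rate. Cache geometry and matrix size below are arbitrary choices for the sketch, not the paper's configuration.

```python
class DirectMappedCache:
    def __init__(self, lines=64, line_bytes=64):
        self.lines, self.line_bytes = lines, line_bytes
        self.tags = [None] * lines
        self.hits = self.misses = 0

    def access(self, addr):
        block = addr // self.line_bytes
        index, tag = block % self.lines, block // self.lines
        if self.tags[index] == tag:
            self.hits += 1
        else:                       # miss: fill the line
            self.tags[index] = tag
            self.misses += 1

def traversal_miss_rate(n=256, elem=8, row_major=True):
    # Address trace of visiting every element of an n x n matrix of
    # elem-byte values, in row-major or column-major order
    cache = DirectMappedCache()
    for i in range(n):
        for j in range(n):
            r, c = (i, j) if row_major else (j, i)
            cache.access((r * n + c) * elem)
    return cache.misses / (cache.hits + cache.misses)
```

Row-major traversal misses once per 64-byte line (8 elements), a 12.5% miss rate, while the large-stride column-major traversal thrashes the small cache and misses on essentially every access, the kind of prediction used to pick between alternative code structures.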

  15. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SUPPLIERS Requirements for Certification and Designation and Conditions for Coverage: Organ Procurement Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a... donation rate of eligible donors as a percentage of eligible deaths is no more than 1.5 standard...

  16. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions.

    PubMed

    Kazys, Rymantas; Sliteris, Reimondas; Rekuviene, Regina; Zukauskas, Egidijus; Mazeika, Liudas

    2015-08-07

    An ultrasonic technique, invariant to temperature changes, for a density measurement of different liquids under in situ extreme conditions is presented. The influence of geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave reflected from the interface between the solid and the liquid medium under investigation. In order to enhance sensitivity, the use of a quarter-wavelength acoustic matching layer is proposed, which increases the sensitivity of the measurement system significantly. Density measurements quite often must be performed under extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between the piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic density measurement technique is suitable for density measurement in different materials, including liquids and polymer melts under extreme conditions. A new calibration algorithm was proposed. The metrological evaluation of the measurement method was performed. The expanded measurement uncertainty is Uρ = 7.4 × 10⁻³ g/cm³ (1%).
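The reflection-amplitude principle can be sketched as a pulse-echo calculation: the echo amplitude ratio with and without liquid loading gives the reflection coefficient, hence the liquid's acoustic impedance and, with a known sound speed, its density. The steel and water values below are textbook approximations, not the paper's calibration data.

```python
def liquid_density(A_free, A_loaded, Z_guide, c_liquid):
    # Echo amplitude ratio -> reflection coefficient magnitude at the
    # waveguide/liquid interface; R = (Z_guide - Z_liq) / (Z_guide + Z_liq)
    R = A_loaded / A_free
    Z_liq = Z_guide * (1 - R) / (1 + R)
    return Z_liq / c_liquid           # density = impedance / sound speed

Z_steel = 45.0e6       # steel waveguide impedance, rayl (approximate)
Z_water = 1.48e6       # water impedance, rayl (approximate)
c_water = 1480.0       # sound speed in water, m/s
# Simulate the measured amplitudes: the free (air-backed) interface reflects fully
A_loaded = (Z_steel - Z_water) / (Z_steel + Z_water)
rho = liquid_density(1.0, A_loaded, Z_steel, c_water)
```

Inverting the simulated reflection coefficient recovers water's density of 1000 kg/m³; the hard part in practice, which the paper's matching layer and calibration address, is measuring the small change in echo amplitude accurately.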

  17. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions

    PubMed Central

    Kazys, Rymantas; Sliteris, Reimondas; Rekuviene, Regina; Zukauskas, Egidijus; Mazeika, Liudas

    2015-01-01

    An ultrasonic technique, invariant to temperature changes, for a density measurement of different liquids under in situ extreme conditions is presented. The influence of geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave reflected from the interface between the solid and the liquid medium under investigation. In order to enhance sensitivity, the use of a quarter-wavelength acoustic matching layer is proposed, which increases the sensitivity of the measurement system significantly. Density measurements quite often must be performed under extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between the piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic density measurement technique is suitable for density measurement in different materials, including liquids and polymer melts under extreme conditions. A new calibration algorithm was proposed. The metrological evaluation of the measurement method was performed. The expanded measurement uncertainty is Uρ = 7.4 × 10⁻³ g/cm³ (1%). PMID:26262619

  18. Dielectric Barrier Discharge (DBD) Plasma Actuators Thrust-Measurement Methodology Incorporating New Anti-Thrust Hypothesis

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2014-01-01

    We discuss thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuator devices used for aerodynamic active flow control. After a review of our experience with conventional thrust measurement and the significant non-repeatability of its results, we devised a suspended actuator test setup, and now present a methodology of thrust measurements with decreased uncertainty. The methodology consists of frequency scans at constant voltages. The procedure consists of increasing the frequency in a step-wise fashion from several Hz to the maximum frequency of several kHz, followed by a frequency decrease back down to the start frequency of several Hz. This sequence is performed first at the highest voltage of interest, then repeated at lower voltages. The data in the descending frequency direction are more consistent and are selected for reporting. Sample results show a strong dependence of thrust on humidity, which also affects the consistency and fluctuations of the measurements. We also observed negative values of thrust, or "anti-thrust", at low frequencies from 4 Hz up to 64 Hz. The anti-thrust is proportional to the mean-squared voltage and is frequency independent. Departures from the parabolic anti-thrust curve are correlated with the appearance of visible plasma discharges. We propose the anti-thrust hypothesis. It states that the measured thrust is the sum of plasma thrust and anti-thrust, and assumes that the anti-thrust exists at all frequencies and voltages. The anti-thrust depends on actuator geometry and materials and on the test installation. It enables the separation of the plasma thrust from the measured total thrust. This approach enables more meaningful comparisons between actuators at different installations and laboratories. The dependence on test installation was validated by surrounding the actuator with a large diameter, grounded, metal sleeve.
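The anti-thrust hypothesis lends itself to a small numerical sketch: fit the frequency-independent anti-thrust as a parabola in voltage from low-frequency data (where, per the hypothesis, plasma thrust is absent), then add it back to correct the total measured thrust. All numbers below are invented for illustration.

```python
def fit_antithrust_k(volts, thrusts):
    # Least-squares slope through the origin for T_anti = -k * V^2
    num = sum(v * v * t for v, t in zip(volts, thrusts))
    den = sum(v ** 4 for v in volts)
    return -num / den

def plasma_thrust(total, volts, k):
    # Measured total = plasma thrust + anti-thrust, so add k*V^2 back
    return total + k * volts * volts

v_low = [5.0, 10.0, 15.0]                    # kV, low-frequency scan points
t_low = [-0.002 * v * v for v in v_low]      # anti-thrust-only readings, mN
k = fit_antithrust_k(v_low, t_low)
thrust = plasma_thrust(1.0, 10.0, k)         # total of 1.0 mN measured at 10 kV
```

With the parabolic coefficient pinned down at low frequency, the same correction can be applied across the whole frequency scan, which is what makes actuator comparisons across installations meaningful under the hypothesis.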

  19. Reducing ultraviolet radiation exposure to prevent skin cancer methodology and measurement.

    PubMed

    Glanz, Karen; Mayer, Joni A

    2005-08-01

    Skin cancer is the most common type of cancer, and is also one of the most preventable. This paper builds on an evidence review of skin cancer prevention interventions that was conducted for the Guide to Community Preventive Services (n=85 studies), and summarizes the state of knowledge about research methodology and measurement in studies of the effectiveness of interventions to reduce ultraviolet radiation (UVR) exposure. As this field advances, researchers should strive to minimize threats to validity in their study designs, as well as to consider the balance between internal and external validity. There is a need for longer-duration interventions, and for follow-up periods that make possible conclusions about the potential of these interventions to affect intermediate markers of skin cancer or at least sustained behavior change. Also, more work is needed to minimize attrition and to characterize nonresponders and study dropouts. Verbal report measures of behavior are the most widely used measures of solar protection behavior. Given their limitations, investigators should routinely collect data about the reliability and validity of those measures. They should also increase efforts to complement verbal data with objective measures including observations, skin reflectance, personal dosimetry, skin swabbing, and inspection of moles. Measures of environments and policies should incorporate observations, documentation, and direct measures of ambient UVR and shade. This article places the data derived from the evidence review in the context of needs and recommendations for future research in skin cancer prevention. PMID:16005810

  20. A comparison of river water quality sampling methodologies under highly variable load conditions.

    PubMed

    Facchi, A; Gandolfi, C; Whelan, M J

    2007-01-01

    When river water quality fluctuates over relatively short periods of time with respect to the sampling frequency, the collection of grab samples may be inappropriate for characterising average water quality. This paper presents the results of a water quality monitoring study carried out on a stretch of the river Lambro (northern Italy) dominated by a periodically overloaded sewage treatment works (STW) located near its upstream end. Water quality was strongly influenced by a pronounced diurnal cycle in pollutant loads caused by the regular emission of untreated waste water during periods of high domestic flow (daytime). Two different sampling techniques were employed: grab sampling and 24-h composite sampling using automatic samplers. Samples were collected at the plant overflow and at several sites along the river and analysed for two common ingredients of household detergents, linear alkylbenzene sulphonate (LAS) and boron (B) and for routine water quality variables. The results obtained show that: (1) The diurnal variability of point-source-derived chemical concentrations in the river downstream of the undersized STW increased with increasing removal efficiency in sewage treatment. (2) The shape of the diurnal concentration signal remained relatively intact for a considerable distance downstream of the STW for several water quality variables, suggesting that hydrodynamic dispersion plays a relatively minor role in controlling concentration patterns in this river. (3) In-stream degradation of LAS was consistent with first order kinetics with a rate constant of 0.05-0.06 h⁻¹. (4) Grab sampling is a relatively inefficient methodology for capturing mean concentrations for rivers subjected to highly variable loads, especially when it is restricted to office hours. The inefficiency of grab sampling is more marked for substances (e.g. LAS) which are effectively removed during sewage treatment than for substances which are not. 
(5) For LAS, diurnal variability in the
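Finding (4) can be reproduced with a toy diurnal signal: a daytime office-hours grab sample versus an equally weighted 24-h composite. The sinusoidal concentration profile is an illustrative assumption, not the Lambro data.

```python
import math

def concentration(hour):
    # Hypothetical diurnal signal peaking in the afternoon (mg/L)
    return 1.0 + 0.8 * math.sin(2 * math.pi * (hour - 10.0) / 24.0)

samples_24h = [concentration(h / 4.0) for h in range(96)]   # 15-min composite
composite_mean = sum(samples_24h) / len(samples_24h)
grab_afternoon = concentration(16.0)   # single grab during office hours
```

The composite recovers the true daily mean of 1.0 mg/L, while the afternoon grab overestimates it by 80% because it samples near the peak of the untreated-overflow cycle.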

  1. Gas concentration measurement by optical similitude absorption spectroscopy: methodology and experimental demonstration.

    PubMed

    Anselmo, Christophe; Welschinger, Jean-Yves; Cariou, Jean-Pierre; Miffre, Alain; Rairoux, Patrick

    2016-06-13

    We propose a new methodology to measure gas concentration by light-absorption spectroscopy when the light source spectrum is larger than the spectral width of one or several molecular gas absorption lines. We named it optical similitude absorption spectroscopy (OSAS), as the gas concentration is derived from a similitude between the light source and the target gas spectra. The main novelty of OSAS lies in the development of a robust inversion methodology, based on the Newton-Raphson algorithm, which allows retrieving the target gas concentration from spectrally-integrated differential light-absorption measurements. As a proof, OSAS is applied in the laboratory to the 2ν₃ methane absorption band at 1.66 µm, with uncertainties revealed by the Allan variance. OSAS has also been applied to non-dispersive infra-red and optical correlation spectroscopy arrangements. This all-optics gas concentration retrieval does not require the use of a gas calibration cell and opens new tracks for the monitoring of atmospheric gas pollution and greenhouse gas sources.
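The inversion idea can be sketched for a toy broadband source: transmission is a weighted sum of Beer-Lambert terms over the source spectrum, and Newton-Raphson recovers the concentration from one spectrally integrated measurement. The spectrum weights and cross-sections below are invented placeholders, not OSAS calibration data.

```python
import math

weights = [0.2, 0.3, 0.3, 0.2]      # discretized, normalized source spectrum
sigmas = [0.0, 1.2, 2.5, 0.4]       # absorption per unit (concentration * length)
L = 1.0                             # optical path length

def transmission(C):
    # Spectrally integrated Beer-Lambert transmission at concentration C
    return sum(w * math.exp(-s * C * L) for w, s in zip(weights, sigmas))

def retrieve_concentration(T_obs, C0=0.1, tol=1e-12):
    C = C0
    for _ in range(100):            # Newton-Raphson on transmission(C) - T_obs
        f = transmission(C) - T_obs
        df = sum(-s * L * w * math.exp(-s * C * L) for w, s in zip(weights, sigmas))
        step = f / df
        C -= step
        if abs(step) < tol:
            break
    return C

C_retrieved = retrieve_concentration(transmission(0.7))
```

Because the integrated transmission is monotone in C, the Newton iteration converges quickly and the retrieval round-trips the concentration used to synthesize the measurement, with no calibration cell in the loop.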

  2. Gas concentration measurement by optical similitude absorption spectroscopy: methodology and experimental demonstration.

    PubMed

    Anselmo, Christophe; Welschinger, Jean-Yves; Cariou, Jean-Pierre; Miffre, Alain; Rairoux, Patrick

    2016-06-13

    We propose a new methodology to measure gas concentration by light-absorption spectroscopy when the light source spectrum is larger than the spectral width of one or several molecular gas absorption lines. We named it optical similitude absorption spectroscopy (OSAS), as the gas concentration is derived from a similitude between the light source and the target gas spectra. The main novelty of OSAS lies in the development of a robust inversion methodology, based on the Newton-Raphson algorithm, which allows retrieving the target gas concentration from spectrally-integrated differential light-absorption measurements. As a proof, OSAS is applied in the laboratory to the 2ν₃ methane absorption band at 1.66 µm, with uncertainties revealed by the Allan variance. OSAS has also been applied to non-dispersive infra-red and optical correlation spectroscopy arrangements. This all-optics gas concentration retrieval does not require the use of a gas calibration cell and opens new tracks for the monitoring of atmospheric gas pollution and greenhouse gas sources. PMID:27410280

  3. Deception detection with behavioral, autonomic, and neural measures: Conceptual and methodological considerations that warrant modesty.

    PubMed

    Meijer, Ewout H; Verschuere, Bruno; Gamer, Matthias; Merckelbach, Harald; Ben-Shakhar, Gershon

    2016-05-01

    The detection of deception has attracted increased attention among psychological researchers, legal scholars, and ethicists during the last decade. Much of this has been driven by the possibility of using neuroimaging techniques for lie detection. Yet, neuroimaging studies addressing deception detection are clouded by lack of conceptual clarity and a host of methodological problems that are not unique to neuroimaging. We review the various research paradigms and the dependent measures that have been adopted to study deception and its detection. In doing so, we differentiate between basic research designed to shed light on the neurocognitive mechanisms underlying deceptive behavior and applied research aimed at detecting lies. We also stress the distinction between paradigms attempting to detect deception directly and those attempting to establish involvement by detecting crime-related knowledge, and discuss the methodological difficulties and threats to validity associated with each paradigm. Our conclusion is that the main challenge of future research is to find paradigms that can isolate cognitive factors associated with deception, rather than the discovery of a unique (brain) correlate of lying. We argue that the Comparison Question Test currently applied in many countries has weak scientific validity, which cannot be remedied by using neuroimaging measures. Other paradigms are promising, but the absence of data from ecologically valid studies poses a challenge for legal admissibility of their outcomes. PMID:26787599

  4. A method of measuring dynamic strain under electromagnetic forming conditions

    NASA Astrophysics Data System (ADS)

    Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng

    2016-04-01

    Dynamic strain measurement is rather important for the characterization of mechanical behaviors in the electromagnetic forming process, but it has been hindered for years by high strain rates and serious electromagnetic interference. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (~5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10^5 s^-1, depending on the discharging conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading conditions (~10^4 s^-1). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why the formability is much larger under electromagnetic forming conditions than in conventional forming processes.
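As a rough illustration of the strain-rate figure quoted above, a measured strain trace can be differentiated numerically; the pulse below is synthetic, with an illustrative ~0.2 µs rise time chosen so the peak rate lands in the 10^5 s^-1 regime reported for electromagnetic loading.

```python
import numpy as np

# Synthetic strain pulse resembling an electromagnetic-forming event
# (numbers are illustrative): ~2% strain reached with a 0.2 microsecond
# exponential rise time.
t = np.linspace(0.0, 5e-6, 5001)                # time, s
strain = 0.02 * (1.0 - np.exp(-t / 2e-7))       # dimensionless strain

# Strain rate is the time derivative of the strain trace.
strain_rate = np.gradient(strain, t)            # s^-1
peak_rate = strain_rate.max()                   # on the order of 1e5 s^-1
```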

  5. Absolute Radiation Measurements in Earth and Mars Entry Conditions

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.

    2014-01-01

    This paper reports on the measurement of radiative heating for shock-heated flows which simulate conditions for Mars and Earth entries. Radiation measurements are made in NASA Ames' Electric Arc Shock Tube at velocities from 3-15 km/s in mixtures of N2/O2 and CO2/N2/Ar. The technique and limitations of the measurement are summarized in some detail. The absolute measurements are discussed with regard to spectral features, radiative magnitude and spatiotemporal trends. Via analysis of spectra it is possible to extract properties such as electron density, and rotational, vibrational and electronic temperatures. Relaxation behind the shock is analyzed to determine how these properties relax to equilibrium, and the results are used to validate and refine kinetic models. It is found that, for some conditions, some of these values diverge from equilibrium, indicating a lack of similarity between the shock tube and free-flight conditions. Possible reasons for this are discussed.

  6. Using intensive indicators of wetland condition to evaluate a rapid assessment methodology in Oregon tidal wetlands

    EPA Science Inventory

    In 2011, the US EPA and its partners will conduct the first-ever national survey on the condition of the Nation’s wetlands. This survey will utilize a three-tiered assessment approach that includes landscape level indicators (Level 1), rapid indicators (Level 2), and intensive, ...

  7. Hyper-X Mach 10 Engine Flowpath Development: Fifth Entry Test Conditions and Methodology

    NASA Technical Reports Server (NTRS)

    Bakos, R. J.; Tsai, C.-Y.; Rogers, R. C.; Shih, A. T.

    2001-01-01

    A series of Hyper-X Mach 10 flowpath ground tests are underway to obtain engine performance and operation data and to confirm and refine the flowpath design methods. The model used is a full-scale height, partial-width replica of the Hyper-X Research Vehicle propulsive flowpath with truncated forebody and aftbody. This is the fifth test entry for this model in the NASA-HYPULSE facility at GASL. For this entry the facility nozzle and model forebody were modified to better simulate the engine inflow conditions at the target flight conditions. The forebody was modified to be a wide flat plate with no flow fences, the facility nozzle Mach number was increased, and the model was positioned to be tested in a semi-direct-connect arrangement. This paper presents a review of the test conditions, model calibrations, and a description of steady flow confirmation. The test series included runs using hydrogen fuel, and a silane-in-hydrogen fuel mixture. Other test parameters included the model mounting angle (relative to the tunnel flow), and the test gas oxygen fraction to account for the presence of [NO] in the test gas at the M10 conditions.

  8. [Methodology and Implementation of Forced Oscillation Technique for Respiratory Mechanics Measurement].

    PubMed

    Zhang, Zhengbo; Ni, Lu; Liu, Xiaoli; Li, Deyu; Wang, Weidong

    2015-11-01

    The forced oscillation technique (FOT) is a noninvasive method for respiratory mechanics measurement. In the FOT, external signals (e.g. forced oscillations around 4-40 Hz) are used to drive the respiratory system, and the mechanical characteristics of the respiratory system can be determined with linear system identification theory. Thus, respiratory mechanical properties and components at different frequencies and locations in the airway can be explored by specifically developed forcing waveforms. In this paper, the theory, methodology and clinical application of the FOT are reviewed, including measurement theory, driving signals, models of the respiratory system, algorithms for impedance identification, and requirements on the apparatus. Finally, the future development of this technique is also discussed. PMID:27066685
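A minimal sketch of the identification step, assuming a single-compartment resistance-inertance-elastance model and synthetic pressure/flow signals (the model and all parameter values are illustrative, not from the review): the respiratory impedance at each forcing frequency is estimated non-parametrically as the ratio of the FFT coefficients of pressure and flow.

```python
import numpy as np

# Single-compartment respiratory model (illustrative values):
R, I, E = 3.0, 0.01, 20.0          # resistance, inertance, elastance
fs, T = 400.0, 8.0                 # sample rate (Hz), record length (s)
t = np.arange(0.0, T, 1.0 / fs)
freqs = np.array([4, 8, 16, 24, 32])   # forcing frequencies within 4-40 Hz

# Flow input: sum of sinusoids at the forcing frequencies (exact FFT bins).
q = sum(np.sin(2 * np.pi * f * t + 0.3 * k) for k, f in enumerate(freqs))

# Pressure predicted by the model impedance Z(f) = R + j(2*pi*f*I - E/(2*pi*f)).
p = np.zeros_like(t)
for k, f in enumerate(freqs):
    w = 2 * np.pi * f
    Z = R + 1j * (w * I - E / w)
    p += abs(Z) * np.sin(w * t + 0.3 * k + np.angle(Z))

# Non-parametric impedance estimate: FFT-coefficient ratio at each frequency.
P, Q = np.fft.rfft(p), np.fft.rfft(q)
fbins = np.fft.rfftfreq(len(t), 1.0 / fs)
Z_hat = np.array([P[np.argmin(abs(fbins - f))] / Q[np.argmin(abs(fbins - f))]
                  for f in freqs])
```

The real part of the estimated impedance recovers the model resistance at every forcing frequency.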

  10. Optimization of fermentation conditions for 1,3-propanediol production by marine Klebsiella pneumonia HSL4 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Li, Lili; Zhou, Sheng; Ji, Huasong; Gao, Ren; Qin, Qiwei

    2014-09-01

    The industrially important organic compound 1,3-propanediol (1,3-PDO) is mainly used as a building block for the production of various polymers. In the present study, a response surface methodology protocol was followed to determine and optimize fermentation conditions for the maximum production of 1,3-PDO using marine-derived Klebsiella pneumoniae HSL4. Four nutritional supplements together with three independent culture conditions were optimized as follows: 29.3 g/L glycerol, 8.0 g/L K2HPO4, 7.6 g/L (NH4)2SO4, 3.0 g/L KH2PO4, pH 7.1, cultivation at 35°C for 12 h. Under the optimal conditions, a maximum 1,3-PDO concentration of 14.5 g/L, a productivity of 1.21 g/(L·h) and a glycerol conversion of 0.49 g/g were obtained. In comparison with the control conditions, fermentation under the optimized conditions achieved an increase of 38.8% in 1,3-PDO concentration, 39.0% in productivity and 25.7% in glycerol conversion in flask culture. This enhancement was further confirmed when the fermentation was conducted in a 5-L fermentor. The optimized fermentation conditions could be an important basis for developing low-cost, large-scale methods for industrial production of 1,3-PDO in the future.
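The response-surface step can be sketched generically (this is not the paper's data; the two-factor surface, its optimum and all numbers below are invented for illustration): fit a second-order polynomial to the design points by least squares, then solve the gradient-zero condition for the stationary point.

```python
import numpy as np

# Hypothetical two-factor response surface: yield as a quadratic in two
# coded factors (say, glycerol x1 and temperature x2) with a known optimum.
rng = np.random.default_rng(0)
x1, x2 = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
x1, x2 = x1.ravel(), x2.ravel()
y = 14.5 - 1.2 * (x1 - 0.5) ** 2 - 0.8 * (x2 + 0.3) ** 2 \
    + rng.normal(0.0, 0.05, x1.size)

# Second-order model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: solve [[2*b11, b12], [b12, 2*b22]] @ x = -[b1, b2]
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(H, -b[1:3])     # close to the true optimum (0.5, -0.3)
```

Ridge analysis, as used in the abstract, generalizes this stationary-point search to constrained radii around the design center.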

  11. Refinement of current monitoring methodology for electroosmotic flow assessment under low ionic strength conditions.

    PubMed

    Saucedo-Espinosa, Mario A; Lapizco-Encinas, Blanca H

    2016-05-01

    Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. This method relies on monitoring the time response of the electric current when a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in the ionic concentration of the two solutions defines the dynamic time response of the electric current and, hence, the current signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases, leading to a reduced electric current signal that may be masked by noise during the measuring process, making accurate estimation of the EO mobility difficult. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, producing an increase in the signal-to-noise ratio of the electric current to be measured and increasing the estimation accuracy. The use of this parallel approach is particularly useful for estimating the EO mobility in systems where low-conductivity media are required, such as insulator-based dielectrophoresis devices. PMID:27375813
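The benefit of parallel channels can be illustrated with a toy simulation (all numbers invented), under the assumption that the channel currents add coherently while their noise is uncorrelated, so the signal-to-noise ratio grows roughly as the square root of the channel count.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_channels, noise_sd = 2000, 16, 1.0
signal = np.linspace(10.0, 12.0, n_samples)   # ideal current ramp (arb. units)

# One noisy channel vs. the summed current of 16 nominally identical channels:
# the signal adds coherently (x16) while uncorrelated noise adds as sqrt(16).
one = signal + rng.normal(0.0, noise_sd, n_samples)
total = n_channels * signal \
    + rng.normal(0.0, noise_sd, (n_channels, n_samples)).sum(axis=0)

# SNR gain of the parallel arrangement relative to a single channel.
snr_gain = (n_channels * np.std(one - signal)) \
    / np.std(total - n_channels * signal)      # close to sqrt(16) = 4
```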

  13. Noncontacting measurement technologies for space propulsion condition monitoring

    NASA Astrophysics Data System (ADS)

    Randall, M. R.; Barkhoudarian, S.; Collins, J. J.; Schwartzbart, A.

    1987-12-01

    This paper describes four noncontacting measurement technologies that can be used in a turbopump condition monitoring system. The isotope wear analyzer, fiberoptic deflectometer, brushless torque-meter, and fiberoptic pyrometer can be used to monitor component wear, bearing degradation, instantaneous shaft torque, and turbine blade cracking, respectively. A complete turbopump condition monitoring system including these four technologies could predict remaining component life, thus reducing engine operating costs and increasing reliability.

  15. A new metric for measuring condition in large predatory sharks.

    PubMed

    Irschick, D J; Hammerschlag, N

    2014-09-01

    A simple metric (span condition analysis; SCA) is presented for quantifying the condition of sharks based on four measurements of body girth relative to body length. Data on 104 live sharks from four species that vary in body form, behaviour and habitat use (Carcharhinus leucas, Carcharhinus limbatus, Ginglymostoma cirratum and Galeocerdo cuvier) are given. Condition shows similar levels of variability among individuals within each species. Carcharhinus leucas showed a positive relationship between condition and body size, whereas the other three species showed no relationship. There was little evidence for strong differences in condition between males and females, although more male sharks are needed for some species (e.g. G. cuvier) to verify this finding. SCA is potentially viable for other large marine or terrestrial animals that are captured live and then released. PMID:25130454

  16. Mass measuring instrument for use under microgravity conditions

    SciTech Connect

    Fujii, Yusaku; Yokota, Masayuki; Hashimoto, Seiji; Sugita, Yoichi; Ito, Hitomi; Shimada, Kazuhito

    2008-05-15

    A prototype instrument for measuring astronaut body mass under microgravity conditions has been developed and its performance was evaluated in parabolic flight tests. The instrument, called the space scale, is applied as follows. Connect the subject astronaut to the space scale with a rubber cord. Use a force transducer to measure the force acting on the subject and an optical interferometer to measure the velocity of the subject. The subject's mass is then calculated as the impulse divided by the velocity change, i.e., M = ∫F dt / Δv. Parabolic flight using a jet aircraft produces a zero-gravity condition lasting approximately 20 s. The performance of the prototype space scale was evaluated during such a flight by measuring the mass of a sample object.
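The mass computation above, M = ∫F dt / Δv, can be sketched numerically; the force profile, cord behavior and subject mass below are illustrative, not flight data.

```python
import numpy as np

def trapz(y, x):
    """Trapezoidal integral (kept explicit to avoid version-specific names)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

# Illustrative force pulse from the rubber cord acting on a 70 kg subject.
M_true = 70.0
t = np.linspace(0.0, 2.0, 4001)          # time, s
F = 50.0 * np.exp(-t / 0.5)              # force, N

# The subject's velocity follows from Newton's second law: v(t) = (1/M)∫F dt.
v = np.concatenate([[0.0],
                    np.cumsum(0.5 * (F[1:] + F[:-1]) * np.diff(t))]) / M_true

# Space-scale estimate: mass = impulse / velocity change.
M_est = trapz(F, t) / (v[-1] - v[0])     # recovers the 70 kg input mass
```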

  17. Methodology for a sub-millimeter near-field beam pattern measurement system

    NASA Astrophysics Data System (ADS)

    Davis, Kristina K.; Groppi, Chris; Mani, Hamdi; Wheeler, Caleb; Walker, Chris

    2014-07-01

    Here we present the methodology and initial results for a new near-field antenna radiation measurement system for submillimeter receivers. The system is based on a 4-port vector network analyzer with two synthesized sources. This method improves on similar systems employing this technique through the use of the network analyzer, which reduces the cost and complexity of the system. Furthermore, a single set of test equipment can analyze multiple receivers with different central frequencies; the frequency range of the system is limited by the output range of the network analyzer and/or the power output of the source signal. The amplitude and phase stability of the system in one configuration at 350 GHz were measured and found to be accurate enough to permit near-field antenna measurements. The proper characterization of phase drifts across multiple test configurations demonstrates system reliability. These initial results will determine the parameters necessary for implementing a near-field radiation pattern measurement of a Schottky diode receiver operating between 340-360 GHz.

  18. Design of measurement methodology for the evaluation of human exposure to vibration in residential environments.

    PubMed

    Sica, G; Peris, E; Woodcock, J S; Moorhouse, A T; Waddington, D C

    2014-06-01

    Exposure-response relationships are important tools for policy makers to assess the impact of an environmental stressor on the populace. Their validity lies partly in their statistical strength which is greatly influenced by the size of the sample from which the relationship is derived. As such, the derivation of meaningful exposure-response relationships requires estimates of vibration exposure at a large number of receiver locations. In the United Kingdom a socio-vibrational survey has been conducted with the aim of deriving exposure-response relationships for annoyance due to vibration from (a) railway traffic and (b) the construction of a new light rail system. Response to vibration was measured via a questionnaire conducted face-to-face with residents in their own homes and vibration exposure was estimated using data from a novel measurement methodology. In total, 1281 questionnaires were conducted: 931 for vibration from railway traffic and 350 for vibration from construction sources. Considering the interdisciplinary nature of this work along with the volume of experimental data required, a number of significant technical and logistical challenges needed to be overcome through the planning and implementation of the fieldwork. Four of these challenges are considered in this paper: the site identification for providing a robust sample of the residents affected, the strategies used for measuring both exposure and response and the coordination between the teams carrying out the social survey and the vibration measurements.

  19. Methodology for Intraoperative Laser Doppler Vibrometry Measurements of Ossicular Chain Reconstruction

    PubMed Central

    Sokołowski, Jacek; Lachowska, Magdalena; Bartoszewicz, Robert; Niemczyk, Kazimierz

    2016-01-01

    Objectives Despite the increasing number of studies concerning applications of Laser Doppler Vibrometry (LDV) in medicine, its usefulness is still under discussion. The aim of this study is to present a methodology developed in our Department for the intraoperative LDV assessment of ossicular chain reconstruction. Methods Ten patients who underwent “second look” tympanoplasty were involved in the study. The measurements of the acoustic conductivity of the middle ear were performed using the LDV system. Tone bursts with carrier frequencies of 500, 1,000, 2,000, and 4,000 Hz set the ossicular chain in motion. The study was divided into four experiments that examined the intra- and interindividual reproducibility, the utility of the posterior tympanotomy, the impact of changes in the laser beam angle, and the influence of reflective tape presence on measurements. Results There were no statistically significant differences between the two measurements performed in the same patient. However, interindividual differences were significant. In all cases, posterior tympanotomy proved to be useful for LDV measurements of the ossicular prosthesis vibrations. In most cases, changing the laser beam angle decreased the signal amplitude by about 1.5% (a nonsignificant change). The reflective tape was necessary to achieve adequate reflection of the laser beam. Conclusion LDV proved to be a valuable noncontact intraoperative tool for measuring the mobility of the middle ear conductive system, with very good intraindividual repeatability. Neither a small change in the angle of the laser beam nor performing the measurements through posterior tympanotomy showed a significant influence on the results. Reflective tape was necessary to obtain good quality responses in LDV measurements. PMID:27090282

  20. Methodology for application of field rainfall simulator to revise c-factor database for conditions of the Czech Republic

    NASA Astrophysics Data System (ADS)

    Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan

    2016-04-01

    The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) for the universal soil loss equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the Central European region, while also taking into account different agrotechnical methods. By using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to execute due to the variability and unpredictability of natural rainfall. Due to the number of measurements needed, two identical simulators will be used, operated by two independent teams with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Due to the wide range of variable crops and soils, it is not possible to execute the measurements for all possible combinations. We therefore decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of the surface runoff and the amount of sediment will be measured in their temporal distribution, as well as several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally; the remaining values will be determined by interpolation or by model analogy. There are several methods used for calculating the C-factor from measured experimental data, some of which are not suitable for the type of data gathered. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used.
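For context, the standard USLE bookkeeping that such phenophase measurements feed into computes the annual C-factor as the erosivity-weighted mean of per-period soil-loss ratios; the abstract does not give its computation explicitly, and the numbers below are purely illustrative.

```python
# Standard USLE-style C-factor computation (values illustrative): the annual
# C-factor weights the soil-loss ratio (SLR) of each crop-stage period by the
# fraction of annual rainfall erosivity (EI30) falling in that period.
slr = [0.55, 0.35, 0.20, 0.10, 0.30]       # soil-loss ratio per phenophase
ei_frac = [0.10, 0.25, 0.30, 0.20, 0.15]   # EI30 fraction per period (sums to 1)

c_factor = sum(s * e for s, e in zip(slr, ei_frac))
```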

  1. Study on optical measurement conditions for noninvasive blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Xu, Kexin; Chen, Wenliang; Jiang, Jingying; Qiu, Qingjun

    2004-05-01

    Utilizing near-infrared spectroscopy for noninvasive glucose concentration sensing has been a topic of focus in biomedical optics. In this paper, a study on the conditions for spectroscopic measurement on the human body is carried out and a series of experiments on glucose concentration sensing are conducted. First, the Monte Carlo method is applied to simulate and calculate photons' penetration depth within skin tissues at 1600 nm. The simulation results indicate that, with our designed optical probe, the detected photons can penetrate the epidermis of the palm and meet the glucose-sensing requirements within the dermis. Second, we analyze the influence of variations in the measured position, and of the contact pressure between the optical fiber probe and the measured position, on the spectrum measured from a human body; a measurement-conditions reproduction system is introduced to enhance measurement repeatability. Furthermore, through a series of transmittance experiments on aqueous glucose solutions ranging from simple to complex, we found that although some glucose absorption information can be obtained using NIR spectroscopy, under the same measuring conditions and with the same modeling method the ability to resolve the measured components decreases as the complexity of the mixture increases, causing reduced prediction accuracy. Finally, OGTT experiments were performed, and a PLS (Partial Least Squares) mathematical model for a single experiment was built. We can readily obtain a prediction error, expressed as RMSEP (Root Mean Square Error of Prediction), of 0.5-0.8 mmol/dl, but the model's extended application and reliability need more investigation.
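The figure of merit quoted above is straightforward to compute; a minimal sketch with invented reference and predicted glucose values (not the paper's data), chosen so the error lands in the reported 0.5-0.8 mmol/dl band:

```python
import numpy as np

# RMSEP: root-mean-square error between predicted and reference concentrations
# over a prediction set (values illustrative, in mmol/dl).
reference = np.array([4.8, 5.6, 7.2, 8.9, 10.4])
predicted = np.array([5.3, 5.1, 7.9, 8.2, 11.0])

rmsep = np.sqrt(np.mean((predicted - reference) ** 2))
```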

  2. Patient-reported outcomes measurement and management with innovative methodologies and technologies.

    PubMed

    Chang, Chih-Hung

    2007-01-01

    Successful integration of modern psychometrics and advanced informatics in patient-reported outcomes (PRO) measurement and management can potentially maximize the value of health outcomes research and optimize the delivery of quality patient care. Unlike the traditional labor-intensive paper-and-pencil data collection method, item response theory-based computerized adaptive testing methodologies coupled with novel technologies provide an integrated environment to collect, analyze and present ready-to-use PRO data for informed and shared decision-making. This article describes the needs, challenges and solutions for accurate, efficient and cost-effective PRO data acquisition and dissemination means in order to provide critical and timely PRO information necessary to actively support and enhance routine patient care in busy clinical settings.
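The item-selection core of the computerized adaptive testing mentioned above can be sketched under a Rasch model (the item bank, difficulties and helper names are hypothetical, for illustration only): each step administers the unused item with maximum Fisher information at the current ability estimate.

```python
import math

def prob(theta, b):
    """Rasch model: probability of a correct response for ability theta."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def info(theta, b):
    """Item Fisher information; peaks where difficulty b matches ability."""
    p = prob(theta, b)
    return p * (1.0 - p)

def next_item(theta, bank, used):
    """Adaptive selection: the unused item with maximum information."""
    candidates = [i for i in range(len(bank)) if i not in used]
    return max(candidates, key=lambda i: info(theta, bank[i]))

bank = [-2.0, -1.0, -0.4, 0.0, 0.5, 1.1, 2.0]   # item difficulties (logits)
item = next_item(0.4, bank, used={4})           # item 4 (b=0.5) already given
```

With the best-matched item already used, selection falls to the next-closest difficulty, which is why adaptive tests need far fewer items than fixed forms.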

  3. Absolute gain measurement of microstrip antennas under mismatched conditions

    NASA Technical Reports Server (NTRS)

    Lee, R. Q.; Baddour, M. F.

    1988-01-01

    The gain of a single microstrip patch and a two-layer parasitic array is measured using the image method under mismatched conditions. This method produces accurate results, even in the case of low-gain microstrip antennas. The advantages of this method over the gain comparison technique are discussed.

  4. Curriculum-based measurement of math problem solving: a methodology and rationale for establishing equivalence of scores.

    PubMed

    Montague, Marjorie; Penfield, Randall D; Enders, Craig; Huang, Jia

    2010-02-01

    The purpose of this article is to discuss curriculum-based measurement (CBM) as it is currently utilized in research and practice and to propose a new approach for developing measures to monitor the academic progress of students longitudinally. To accomplish this, we first describe CBM and provide several exemplars of CBM in reading and mathematics. Then, we present the research context for developing a set of seven curriculum-based measures for monitoring student progress in math problem solving. The rationale for and advantages of using statistical equating methodology are discussed. Details of the methodology as it was applied to the development of these math problem solving measures are provided.
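One common equating approach consistent with the aims above is mean-sigma linear equating; the abstract does not specify which equating method was adopted, and the score sets below are invented for illustration.

```python
import statistics as st

# Mean-sigma linear equating: place Form B raw scores on the Form A scale so
# that scores from different probe forms are longitudinally comparable.
form_a = [12, 15, 17, 20, 22, 25, 28]   # illustrative Form A scores
form_b = [10, 12, 14, 16, 18, 20, 22]   # illustrative Form B scores

mu_a, sd_a = st.mean(form_a), st.pstdev(form_a)
mu_b, sd_b = st.mean(form_b), st.pstdev(form_b)

def equate(x_b):
    """Form B raw score -> Form A equivalent (matches means and SDs)."""
    return mu_a + (sd_a / sd_b) * (x_b - mu_b)
```

By construction, a score at the Form B mean maps to the Form A mean, and z-scores are preserved across forms.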

  5. Moving to Capture Children's Attention: Developing a Methodology for Measuring Visuomotor Attention.

    PubMed

    Hill, Liam J B; Coats, Rachel O; Mushtaq, Faisal; Williams, Justin H G; Aucott, Lorna S; Mon-Williams, Mark

    2016-01-01

    Attention underpins many activities integral to a child's development. However, methodological limitations currently make large-scale assessment of children's attentional skill impractical, costly and lacking in ecological validity. Consequently we developed a measure of 'Visual Motor Attention' (VMA)-a construct defined as the ability to sustain and adapt visuomotor behaviour in response to task-relevant visual information. In a series of experiments, we evaluated the capability of our method to measure attentional processes and their contributions in guiding visuomotor behaviour. Experiment 1 established the method's core features (ability to track stimuli moving on a tablet-computer screen with a hand-held stylus) and demonstrated its sensitivity to principled manipulations in adults' attentional load. Experiment 2 standardised a format suitable for use with children and showed construct validity by capturing developmental changes in executive attention processes. Experiment 3 tested the hypothesis that children with and without coordination difficulties would show qualitatively different response patterns, finding an interaction between the cognitive and motor factors underpinning responses. Experiment 4 identified associations between VMA performance and existing standardised attention assessments and thereby confirmed convergent validity. These results establish a novel approach to measuring childhood attention that can produce meaningful functional assessments that capture how attention operates in an ecologically valid context (i.e. attention's specific contribution to visuomanual action). PMID:27434198

  7. Optimizing conditions for E-and Z-ajoene formation from garlic juice using response surface methodology.

    PubMed

    Yoo, Miyoung; Lee, Sanghee; Kim, Sunyoung; Shin, Dongbin

    2014-09-01

    The optimum conditions for the formation of E- and Z-ajoene from garlic juice mixed with soybean oil were determined using response surface methodology. A central composite design was used to investigate the effects of three independent variables: temperature (°C, X1), reaction time (hours, X2), and oil volume (multiplied by weight, X3). The dependent variables were Z-ajoene (Y1) and E-ajoene (Y2) in oil-macerated garlic. The optimal conditions for E- and Z-ajoene using ridge analysis were 98.80°C, 6.87 h, and weight multiplied by weight 2.57, and 42.24°C, 9.71 h, and weight multiplied by weight 3.08, respectively. These conditions resulted in predicted E- and Z-ajoene values of 234.17 and 752.62 μg/g from garlic juice, respectively. The experimental values of E- and Z-ajoene were 222.75 and 833.59 μg/g, respectively. The estimated maximum values at the predicted optimum conditions were in good agreement with experimental values.

  9. Operant conditioning and discrimination of alpha: some methodological limitations inherent in response-discrimination experiments.

    PubMed

    Cott, A; Pavloski, R P; Black, A H

    1981-09-01

    Studies on the operant conditioning of central nervous system activity have produced results interpreted as demonstrating that responses, certain properties of responses, or response-produced stimuli can function as discriminative stimuli. It is assumed that the feedback stimulus in biofeedback makes the subject aware of the internal response and that by becoming aware of the response, the subject can acquire voluntary control over it. In this context, awareness is operationally defined as the ability to use the response as a discriminative stimulus. Since direct evidence for the assumed relationship between control and discrimination is lacking, an attempt was made to test the hypothesis that discrimination of a response automatically leads to control over that response. The discriminative stimuli were the presence and absence of occipital alpha electroencephalograph (EEG) activity. Data from two experiments are reported. The first study, employing naive subjects, was designed to answer the following questions: (a) Since pilot data indicated that subjects seemed to match their responses to the more probable type of trial, would increases in the probability of a correct response result when the probabilities of alpha and nonalpha trials were held near .50? (b) If correct responding does increase, would performance of these subjects in an alpha feedback task be enhanced relative to that of subjects not previously given discrimination training? and (c) If subjects could not learn the discrimination task, would feedback training enhance their performance in a subsequent discrimination task? Results from this study indicate that holding the probabilities of alpha and nonalpha discrimination trials near .50 results in an absence of learning curves, but leaves open the possibility that sophisticated subjects are capable of discriminating alpha and nonalpha activity. The second study deals with two questions: (a) Can sophisticated subjects learn to discriminate occipital

  10. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    SciTech Connect

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
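    At its core, a single MCP algorithm correlates the target-site record with a reference station over the concurrent measurement period, then applies that relation to the reference's long-term record; the hybrid method combines several such algorithm/station pairings. A sketch of the basic linear step, with synthetic data standing in for real station records:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: one year of concurrent hourly wind speeds (m/s)
ref_concurrent = rng.gamma(2.0, 4.0, 8760)
target_concurrent = 0.8 * ref_concurrent + 1.2 + rng.normal(0, 0.5, 8760)

# Correlate: ordinary least-squares fit, target = a*ref + b
a, b = np.polyfit(ref_concurrent, target_concurrent, 1)

# Predict: apply the relation to the reference station's 10-year record
ref_longterm = rng.gamma(2.0, 4.0, 87600)
target_longterm = a * ref_longterm + b
print(f"slope={a:.3f} offset={b:.3f} "
      f"estimated long-term mean={target_longterm.mean():.2f} m/s")
```

    The paper's point is that this step alone is insufficient: direction-dependent relations and the choice of algorithm per reference station strongly affect the long-term estimate, which motivates the hybrid weighting.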

  11. A new methodology for measurement of sludge residence time distribution in a paddle dryer using X-ray fluorescence analysis.

    PubMed

    Charlou, Christophe; Milhé, Mathieu; Sauceau, Martial; Arlabosse, Patricia

    2015-02-01

    Drying is a necessary step before the energetic valorization of sewage sludge. Paddle dryers make it possible to process such a complex material. However, little is known about sludge flow in this kind of process. This study intends to set up an original methodology for sludge residence time distribution (RTD) measurement in a continuous paddle dryer, based on the detection of mineral tracers by X-ray fluorescence. This accurate analytical technique offers a linear response to tracer concentration in dry sludge; the protocol leads to a good repeatability of RTD measurements. Its equivalence to RTD measurement by NaCl conductivity in sludge leachates is assessed. Moreover, it is shown that tracer solubility has no influence on RTD: liquid and solid phases have the same flow pattern. The application of this technique to sludge with different storage durations at 4 °C emphasizes the influence of this parameter on sludge RTD, and thus on paddle dryer performance: the mean residence time in a paddle dryer is almost doubled between 24 and 48 h of storage for identical operating conditions.
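    Once the tracer signal at the dryer outlet has been measured, the RTD moments follow directly: the mean residence time is the first moment of the normalized exit-age distribution. A brief sketch with a hypothetical tracer pulse (illustrative numbers, not the paper's data):

```python
import numpy as np

# Hypothetical tracer response at the dryer outlet: sampling times (min)
# and XRF tracer signal in the dried sludge (arbitrary units)
t = np.linspace(0, 240, 49)
c = np.exp(-((t - 80.0) ** 2) / (2 * 25.0 ** 2))   # pulse centred at 80 min

# Normalise to discrete exit-age weights E(t_i), then take moments
E = c / c.sum()
t_mean = (t * E).sum()                       # mean residence time
variance = (((t - t_mean) ** 2) * E).sum()   # spread of the RTD
print(f"mean residence time = {t_mean:.1f} min")
```

    Comparing these moments across storage durations is how an effect like the reported doubling of mean residence time between 24 and 48 h of storage would show up.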

  12. Application of nitric oxide measurements in clinical conditions beyond asthma

    PubMed Central

    Malinovschi, Andrei; Ludviksdottir, Dora; Tufvesson, Ellen; Rolla, Giovanni; Bjermer, Leif; Alving, Kjell; Diamant, Zuzana

    2015-01-01

    Fractional exhaled nitric oxide (FeNO) is a convenient, non-invasive method for the assessment of active, mainly Th2-driven, airway inflammation, which is sensitive to treatment with standard anti-inflammatory therapy. Consequently, FeNO serves as a valued tool to aid diagnosis and monitoring in several asthma phenotypes. More recently, FeNO has been evaluated in several other respiratory, infectious, and/or immunological conditions. In this short review, we provide an overview of several clinical studies and discuss the status of potential applications of NO measurements in clinical conditions beyond asthma. PMID:26672962

  13. [Measuring quality in the German Guideline Programme in Oncology (GGPO)—methodology and implementation].

    PubMed

    Nothacker, Monika; Muche-Borowski, Cathleen; Kopp, Ina B

    2014-01-01

    The German Guideline Programme in Oncology (GGPO) is a joint initiative between the German Cancer Society, the Association of the Scientific Medical Societies in Germany and German Cancer Aid. In accordance with the aims of the German National Cancer Plan, the GGPO supports the systematic development of high-quality guidelines. To enhance implementation and evaluation, the suggestion of performance measures (PMs) derived from guideline recommendations following a standardised methodology is obligatory within the GGPO. For this purpose, PM teams are convened representing the multidisciplinary guideline development groups including clinical experts, methodologists and patient representatives as well as those organisations that take an active part in and share responsibility for documentation and quality improvement, i.e., clinical cancer registries, certified cancer centres and, if appropriate, the institution responsible for external quality assurance according to the German Social Code (SGB). The primary selection criteria for PMs include strength of the underlying recommendation (strong, grade A), existing potential for improvement of care and measurability. The premises of data economy and standardised documentation are taken into account. Between May 2008 and July 2014, 12 guidelines with suggestions for 100 PMs have been published. The majority of the suggested performance measures is captured by the specific documentation requirements of the clinical cancer registries and certified cancer centres. This creates a solid basis for an active quality management and re-evaluation of the suggested PMs. In addition, the suspension of measures should be considered if improvement has been achieved on a broad scale and for a longer period in order to concentrate on a quality-oriented, economic documentation.

  15. Analysis and methodology for measuring oxygen concentration in liquid sodium with a plugging meter

    SciTech Connect

    Nollet, B. K.; Hvasta, M.; Anderson, M.

    2012-07-01

    Oxygen concentration in liquid sodium is a critical measurement in assessing the potential for corrosion damage in sodium-cooled fast reactors (SFRs). There has been little recent work on sodium reactors and oxygen detection. Thus, the technical expertise dealing with oxygen measurements within sodium is no longer readily available in the U.S. Two methods of oxygen detection that have been investigated are the plugging meter and the galvanic cell. One of the overall goals of the Univ. of Wisconsin's sodium research program is to develop an affordable, reliable galvanic cell oxygen sensor. Accordingly, attention must first be dedicated to a well-known standard known as a plugging meter. Therefore, a sodium loop has been constructed on campus in an effort to develop the plugging meter technique and gain experience working with liquid metal. The loop contains both a galvanic cell test section and a plugging meter test section. Consistent plugging results have been achieved below 20 [wppm], and a detailed process for achieving effective plugging has been developed. This paper will focus both on an accurate methodology to obtain oxygen concentrations from a plugging meter, and on how to easily control the oxygen concentration of sodium in a test loop. Details of the design, materials, manufacturing, and operation will be presented. Data interpretation will also be discussed, since a modern discussion of plugging data interpretation does not currently exist. (authors)
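    A plugging meter infers oxygen content from the temperature at which sodium oxide precipitates and plugs a cooled orifice, so converting a plugging temperature into a concentration requires an oxide solubility correlation. A minimal sketch using Noden's widely cited correlation (assumed here for illustration; the paper's own data reduction may differ):

```python
import math

def oxygen_wppm_from_plugging_temp(T_celsius):
    """Saturation oxygen concentration at a given plugging temperature,
    via Noden's solubility correlation: log10 C [wppm] = 6.2571 - 2444.5 / T[K].
    Assumes the plug forms when sodium oxide reaches saturation."""
    T = T_celsius + 273.15
    return 10 ** (6.2571 - 2444.5 / T)

for Tp in (150, 200, 250):
    print(f"plugging at {Tp:3d} C -> ~{oxygen_wppm_from_plugging_temp(Tp):.1f} wppm O")
```

    The steep temperature dependence is why careful control of the orifice cooling rate matters: a few degrees of error in the plugging temperature translates into a large relative error in the inferred oxygen concentration.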

  16. Extremely low frequency electromagnetic field measurements at the Hylaty station and methodology of signal analysis

    NASA Astrophysics Data System (ADS)

    Kulak, Andrzej; Kubisz, Jerzy; Klucjasz, Slawomir; Michalec, Adam; Mlynarczyk, Janusz; Nieckarz, Zenon; Ostrowski, Michal; Zieba, Stanislaw

    2014-06-01

    We present the Hylaty geophysical station, a high-sensitivity and low-noise facility for extremely low frequency (ELF, 0.03-300 Hz) electromagnetic field measurements, which enables a variety of geophysical and climatological research related to atmospheric, ionospheric, magnetospheric, and space weather physics. The first systematic observations of ELF electromagnetic fields at the Jagiellonian University were undertaken in 1994. At the beginning the measurements were carried out sporadically, during expeditions to sparsely populated areas of the Bieszczady Mountains in the southeast of Poland. In 2004, an automatic Hylaty ELF station was built there, in a very low electromagnetic noise environment, which enabled continuous recording of the magnetic field components of the ELF electromagnetic field in the frequency range below 60 Hz. In 2013, after 8 years of successful operation, the station was upgraded by extending its frequency range up to 300 Hz. In this paper we show the station's technical setup, and how it has changed over the years. We discuss the design of ELF equipment, including antennas, receivers, the time control circuit, and power supply, as well as antenna and receiver calibration. We also discuss the methodology we developed for observations of the Schumann resonance and wideband observations of ELF field pulses. We provide examples of various kinds of signals recorded at the station.

  17. Development of plant condition measurement - The Jimah Model

    NASA Astrophysics Data System (ADS)

    Evans, Roy F.; Syuhaimi, Mohd; Mazli, Mohammad; Kamarudin, Nurliyana; Maniza Othman, Faiz

    2012-05-01

    The Jimah Model is an information management model. The model has been designed to facilitate analysis of machine condition by integrating diagnostic data with quantitative and qualitative information. The model treats data as a single strand of information - metaphorically a 'genome' of data. The 'Genome' is structured to be representative of plant function and identifies the condition of selected components (or genes) in each machine. To date in industry, computer-aided work processes used with traditional industrial practices have been unable to consistently deliver a standard of information suitable for holistic evaluation of machine condition and change. Significantly, the reengineered site strategies necessary for implementation of this "data genome concept" have resulted in enhanced knowledge and management of plant condition. In large plants with high initial equipment costs and subsequent high maintenance costs, accurate measurement of major component condition becomes central to whole-of-life management and replacement decisions. A case study following implementation of the model at a major power station site in Malaysia (Jimah) shows that modeling of plant condition and wear (in real time) can be made a practical reality.

  18. Measurement of laminar burning speeds and Markstein lengths using a novel methodology

    SciTech Connect

    Tahtouh, Toni; Halter, Fabien; Mounaim-Rousselle, Christine

    2009-09-15

    Three different methodologies used for the extraction of laminar information are compared and discussed. Starting from an asymptotic analysis assuming a linear relation between the propagation speed and the stretch acting on the flame front, temporal radius evolutions of spherically expanding laminar flames are postprocessed to obtain laminar burning velocities and Markstein lengths. The first methodology fits the temporal radius evolution with a polynomial function, while the new methodology proposed uses the exact solution of the linear relation linking the flame speed and the stretch as a fit. The last methodology consists in an analytical resolution of the problem. To test the different methodologies, experiments were carried out in a stainless steel combustion chamber with methane/air mixtures at atmospheric pressure and ambient temperature. The equivalence ratio was varied from 0.55 to 1.3. The classical shadowgraph technique was used to detect the reaction zone. The new methodology has proven to be the most robust and provides the most accurate results, while the polynomial methodology induces some errors due to the differentiation process. As original radii are used in the analytical methodology, it is more affected by the experimental radius determination. Finally, laminar burning velocity and Markstein length values determined with the new methodology are compared with results reported in the literature. (author)
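    The linear stretch relation for a spherical flame can be written dr/dt = S_b·r/(r + 2·L_b), whose exact implicit solution is t(r) = (r − r0)/S_b + (2·L_b/S_b)·ln(r/r0); fitting this exact solution to the radius record, rather than a polynomial, is the essence of the new methodology. A sketch with synthetic radius data (values illustrative only, not the paper's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Exact solution of the linear stretch model for a spherical flame:
# t(r) = (r - r0)/S_b + (2*L_b/S_b) * ln(r/r0)
def t_of_r(r, S_b, L_b, r0=5e-3):
    return (r - r0) / S_b + (2 * L_b / S_b) * np.log(r / r0)

# Hypothetical radius record (m) generated with S_b = 2.0 m/s, L_b = 1.0 mm
r = np.linspace(5e-3, 25e-3, 60)
rng = np.random.default_rng(1)
t_obs = t_of_r(r, 2.0, 1.0e-3) + rng.normal(0, 2e-5, r.size)

# Fit the exact solution directly to the (r, t) data
(S_b, L_b), _ = curve_fit(lambda r, S, L: t_of_r(r, S, L), r, t_obs, p0=[1.0, 1e-4])
print(f"flame speed S_b = {S_b:.3f} m/s, Markstein length L_b = {L_b * 1e3:.2f} mm")
```

    Because no numerical differentiation of the radius record is needed, this fit avoids the noise amplification that the paper attributes to the polynomial methodology.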

  19. [Methodological Approaches to the Organization of Counter Measures Taking into Account Landscape Features of Radioactively Contaminated Territories].

    PubMed

    Kuznetsov, V K; Sanzharova, N I

    2016-01-01

    Methodological approaches to the organization of countermeasures are considered, taking into account the landscape features of radioactively contaminated territories. The current status of and new requirements for the organization of countermeasures in contaminated agricultural areas are analyzed. The basic principles, objectives and problems of the formation of countermeasures with regard to the landscape characteristics of the territory are presented, and the organization and optimization of countermeasures in radioactively contaminated agricultural landscapes are substantiated.

  1. Phoretic and Radiometric Force Measurements on Microparticles in Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Davis, E. James

    1996-01-01

    Thermophoretic, diffusiophoretic and radiometric forces on microparticles are being measured over a wide range of gas phase and particle conditions using electrodynamic levitation of single particles to simulate microgravity conditions. The thermophoretic force, which arises when a particle exists in a gas having a temperature gradient, is measured by levitating an electrically charged particle between heated and cooled plates mounted in a vacuum chamber. The diffusiophoretic force arising from a concentration gradient in the gas phase is measured in a similar manner except that the heat exchangers are coated with liquids to establish a vapor concentration gradient. These phoretic forces and the radiation pressure force acting on a particle are measured directly in terms of the change in the dc field required to levitate the particle with and without the force applied. The apparatus developed for the research and the experimental techniques are discussed, and results obtained by thermophoresis experiments are presented. The determination of the momentum and energy accommodation coefficients associated with collisions between gas molecules and particles and the measurement of the interaction between electromagnetic radiation and small particles are of particular interest.

  2. Methodologies for measuring the temporal evolution of the lava flow geometry and monitoring the related hazard

    NASA Astrophysics Data System (ADS)

    Marsella, Maria; Junior Valentino D'Aranno, Peppe; Martino, Michele; Nardinocchi, Carla; Scifoni, Silvia; Scutti, Marianna; Sonnessa, Alberico; Coltelli, Mauro; Proietti, Cristina

    2016-04-01

    Lava flow propagation can pose a high risk both to the territory and to the infrastructure located on the flanks of an active volcano. The presented study is aimed at analyzing different methodologies for measuring the evolution of a flow field by combining ground and aerial observations, as well as satellite data. Data acquired by ground sensors must be appropriately processed to extract orthorectified images. In addition, during volcanic crises, the frequent acquisition of oblique digital images from helicopter allows for quasi-real-time monitoring to support mitigation actions by civil protection. These images can be processed through a straightforward photogrammetric approach to generate digital orthophotos from single-view oblique images. Moreover, airborne remote sensing systems, such as those based on digital photogrammetry and laser scanner sensors, can also be adopted to monitor lava emplacement processes in active volcanic areas. The capability of extracting accurate topographic data from ground and aerial acquisitions makes it possible to use these methods to constrain the regular and more frequent measurements derived from satellite observations. The presented work analyses the discrepancy among the different datasets in terms of accuracy and resolution and attempts to provide an approach for combining them. The measured evolution of a flow field can be used to evaluate the related hazard by applying a numerical model to identify the areas potentially at risk of lava invasion. Numerical simulations can also be aimed at evaluating the reduction of the hazard by simulating the application of different containment systems for mitigating the effect of lava flow propagation.

  3. Body temperature as a conditional response measure for pavlovian fear conditioning.

    PubMed

    Godsil, B P; Quinn, J J; Fanselow, M S

    2000-01-01

    On six days rats were exposed to each of two contexts. They received an electric shock in one context and nothing in the other. Rats were tested later in each environment without shock. The rats froze and defecated more often in the shock-paired environment; they also exhibited a significantly larger elevation in rectal temperature in that environment. The rats discriminated between each context, and we suggest that the elevation in temperature is the consequence of associative learning. Thus, body temperature can be used as a conditional response measure in Pavlovian fear conditioning experiments that use footshock as the unconditional stimulus.

  4. Effect of Culture Condition Variables on Human Endostatin Gene Expression in Escherichia coli Using Response Surface Methodology

    PubMed Central

    Mohajeri, Abbas; Pilehvar-Soltanahmadi, Yones; Abdolalizadeh, Jalal; Karimi, Pouran; Zarghami, Nosratollah

    2016-01-01

    Background: Recombinant human endostatin (rhES) is an angiogenesis inhibitor used as a specific drug for the treatment of non-small-cell lung cancer. As mRNA concentration affects the recombinant protein expression level, any factor affecting mRNA concentration can alter the protein expression level. Response surface methodology (RSM) based on the Box-Behnken design (BBD) is a statistical tool for experimental design and for optimizing biotechnological processes. Objectives: This investigation aimed to predict and develop the optimal culture conditions for mRNA expression of the synthetic human endostatin (hES) gene in Escherichia coli BL21 (DE3). Materials and Methods: The hES gene was amplified, cloned, and expressed in the E. coli expression system. Three factors, including isopropyl β-D-1-thiogalactopyranoside (IPTG) concentration, post-induction time, and cell density before induction, were selected as important factors. The mRNA expression level was determined using real-time PCR. The expression levels of hES mRNA under the different growth conditions were analyzed. SDS-PAGE and western blot analyses were carried out for further confirmation of interest-gene expression. Results: A maximum rhES mRNA level of 376.16% was obtained under the following conditions: 0.6 mM IPTG, 7 hours post-induction time, and 0.9 cell density before induction. The level of rhES mRNA was significantly correlated with post-induction time, IPTG concentration, and cell density before induction (P < 0.05). The expression of the hES gene was confirmed by western blot. Conclusions: The obtained results indicate that RSM is an effective method for the optimization of culture conditions for hES gene expression in E. coli. PMID:27800134

  5. Current psychometric and methodological issues in the measurement of overgeneral autobiographical memory.

    PubMed

    Griffith, James W; Sumner, Jennifer A; Raes, Filip; Barnhofer, Thorsten; Debeer, Elise; Hermans, Dirk

    2012-12-01

    Autobiographical memory is a multifaceted construct that is related to psychopathology and other difficulties in functioning. Across many studies, a variety of methods have been used to study autobiographical memory. The relationship between overgeneral autobiographical memory (OGM) and psychopathology has been of particular interest, and many studies of this cognitive phenomenon rely on the Autobiographical Memory Test (AMT) to assess it. In this paper, we examine several methodological approaches to studying autobiographical memory, and focus primarily on methodological and psychometric considerations in OGM research. We pay particular attention to what is known about the reliability, validity, and methodological variations of the AMT. The AMT has adequate psychometric properties, but there is great variability in methodology across studies that use it. Methodological recommendations and suggestions for future studies are presented.

  6. Innovative Methodologies for thermal Energy Release Measurement: case of La Solfatara volcano (Italy)

    NASA Astrophysics Data System (ADS)

    Marfe`, Barbara; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Marotta, Enrica; Peluso, Rosario

    2015-04-01

    This work is devoted to improving the knowledge of the parameters that control the heat flux anomalies associated with the diffuse degassing processes of volcanic and hydrothermal areas. The methodologies currently used to measure heat flux (i.e. CO2 flux or temperature gradient) are either poorly efficient or ineffective, and are unable to detect short- to medium-term (days to months) variation trends in the heat flux. A new method, based on the use of thermal imaging cameras, has been applied to estimate the heat flux and its time variations. This approach will allow faster heat flux measurement than already accredited methods, improving in this way the definition of the activity state of a volcano and allowing a better assessment of the related hazard and risk mitigation. The idea is to extrapolate the heat flux from the ground surface temperature that, in a purely conductive regime, is directly correlated to the shallow temperature gradient. We use thermal imaging cameras, at short distances (meters to hundreds of meters), to quickly obtain a mapping of areas with thermal anomalies and a measure of their temperature. Preliminary studies have been carried out throughout the La Solfatara crater in order to investigate a possible correlation between the surface temperature and the shallow thermal gradient. We have used a FLIR SC640 thermal camera and K-type thermocouples to take the two measurements at the same time. Results suggest a good correlation between the shallow temperature gradient ΔTs and the background-corrected surface temperature Ts, and although the campaigns took place over a period of a few years, this correlation appears to be stable over time. This is an extremely motivating result for further development of a measurement method based only on the use of a small-range thermal imaging camera. Surveys with thermal cameras may be done manually using a tripod to take thermal images of small contiguous areas and then joining

  7. Extension of laboratory-measured soil spectra to field conditions

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; Baumgardner, M. F.; Weismiller, R. A.; Biehl, L. L.; Robinson, B. F.

    1982-01-01

    Spectral responses of two glaciated soils, Chalmers silty clay loam and Fincastle silt loam, formed under prairie grass and forest vegetation, respectively, were measured in the laboratory under controlled moisture equilibria using an Exotech Model 20C spectroradiometer under artificial illumination. The same spectroradiometer was used outdoors under solar illumination to obtain spectral response from dry and moistened field plots with and without corn residue cover, representing the two different soils. Results indicate that laboratory-measured spectra of moist soil are directly proportional to the spectral response of the same field-measured moist bare soil over the 0.52 micrometer to 1.75 micrometer wavelength range. The magnitudes of difference in spectral response between identically treated Chalmers and Fincastle soils are greatest in the 0.6 micrometer to 0.8 micrometer transition region between the visible and near infrared, regardless of field condition or laboratory preparation studied.

  8. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the ways in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.

  10. Thermal-Diffusivity Measurements of Mexican Citrus Essential Oils Using Photoacoustic Methodology in the Transmission Configuration

    NASA Astrophysics Data System (ADS)

    Muñoz, G. A. López; González, R. F. López; López, J. A. Balderas; Martínez-Pérez, L.

    2011-05-01

    Photoacoustic methodology in the transmission configuration (PMTC) was used to study the thermophysical properties of Mexican citrus essential oils and their relation to composition, demonstrating the viability of photothermal techniques for quality control and for authentication of oils and detection of their adulteration. Linear relations for the amplitude (on a semi-log scale) and phase, as functions of the sample's thickness, were obtained through a theoretical model fit to the experimental data for thermal-diffusivity measurements in Mexican orange, pink grapefruit, mandarin, and lime type A centrifuged essential oils, and in Mexican distilled lime essential oil. Gas chromatography for distilled lime essential oil and centrifuged lime essential oil type A is reported to complement the study. Experimental results showed close thermal-diffusivity values among the Mexican citrus essential oils obtained by centrifugation, but a significant difference between this physical property for distilled lime oil and the corresponding value obtained by centrifugation, owing to the different chemical compositions resulting from the two extraction processes.

  11. Statistical optimization of ultraviolet irradiation conditions for vitamin D₂ synthesis in oyster mushrooms (Pleurotus ostreatus) using response surface methodology.

    PubMed

    Wu, Wei-Jie; Ahn, Byung-Yong

    2014-01-01

    Response surface methodology (RSM) was used to determine the optimum vitamin D2 synthesis conditions in oyster mushrooms (Pleurotus ostreatus). A preliminary experiment identified ultraviolet B (UV-B) as the most efficient irradiation source and established the levels of the three independent variables: ambient temperature (25-45°C), exposure time (40-120 min), and irradiation intensity (0.6-1.2 W/m2). The statistical analysis indicated that, within the ranges studied, irradiation intensity was the most critical factor affecting vitamin D2 synthesis in oyster mushrooms. Under optimal conditions (ambient temperature of 28.16°C, UV-B intensity of 1.14 W/m2, and exposure time of 94.28 min), the experimental vitamin D2 content of 239.67 µg/g (dry weight) was in very good agreement with the predicted value of 245.49 µg/g, which verified the practicability of this strategy. Compared to fresh mushrooms, lyophilized mushroom powder can synthesize a remarkably higher level of vitamin D2 (498.10 µg/g) within a much shorter UV-B exposure time (10 min), and thus should receive attention from the food processing industry. PMID:24736742
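    The RSM workflow summarized above can be sketched numerically: fit a second-order model to responses observed over the factor ranges, then solve for the stationary point. The data below are simulated around the study's reported optimum purely for illustration; none of the simulated responses are the paper's measurements, and this sketch omits interaction terms.

```python
import numpy as np

rng = np.random.default_rng(1)
# Factors sampled over the studied ranges
T = rng.uniform(25, 45, 60)      # ambient temperature, degrees C
t = rng.uniform(40, 120, 60)     # exposure time, min
I = rng.uniform(0.6, 1.2, 60)    # UV-B intensity, W/m2

# Simulated vitamin D2 response with a built-in optimum (illustrative only)
y = (240 - 0.05 * (T - 28) ** 2 - 0.002 * (t - 94) ** 2
     - 300 * (I - 1.14) ** 2 + rng.normal(0, 0.5, 60))

# Second-order response surface (pure quadratic terms, no interactions)
X = np.column_stack([np.ones_like(T), T, t, I, T**2, t**2, I**2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of each quadratic: x* = -b_linear / (2 * b_quadratic)
T_opt = -b[1] / (2 * b[4])
t_opt = -b[2] / (2 * b[5])
I_opt = -b[3] / (2 * b[6])
```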

  12. Thermophysical Properties Measurement of High-Temperature Liquids Under Microgravity Conditions in Controlled Atmospheric Conditions

    NASA Technical Reports Server (NTRS)

    Watanabe, Masahito; Ozawa, Shumpei; Mizuno, Akotoshi; Hibiya, Taketoshi; Kawauchi, Hiroya; Murai, Kentaro; Takahashi, Suguru

    2012-01-01

    Microgravity conditions are advantageous for measuring the surface tension and viscosity of metallic liquids by the oscillating drop method with an electromagnetic levitation (EML) device. We are therefore preparing thermophysical property measurements using the Materials Science Laboratory ElectroMagnetic Levitator (MSL-EML) facilities on the International Space Station (ISS). Recently, it has been recognized that the dependence of surface tension on oxygen partial pressure (Po2) must be considered for industrial application of surface tension values; the effect of Po2 on surface tension would apparently change the viscosity derived from the damping oscillation model. Therefore, surface tension and viscosity must be measured simultaneously under the same atmospheric conditions. Moreover, the effect of the electromagnetic force (EMF) on the surface oscillations must be clarified to obtain the ideal surface oscillation, because the EMF acts as an external force on the oscillating liquid droplets, and an excessive EMF makes the apparent viscosity values large. Using the parabolic flight levitation experimental facilities (PFLEX), our group systematically investigated the effect of Po2 and external EMF on the surface oscillation of levitated liquid droplets for precise measurements of the surface tension and viscosity of high-temperature liquids in preparation for future ISS experiments. We observed the surface oscillations of levitated liquid alloys using PFLEX in flight experiments on board a Gulfstream II (G-II) airplane operated by DAS. These observations were performed under controlled Po2 and suitable EMF conditions. In these experiments, we obtained density, viscosity, and surface tension values for liquid Cu. From these results, we discuss the agreement with previously reported data as well as the differences in surface oscillations observed as the EMF conditions were changed.
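    For reference, the oscillating drop method extracts surface tension from the l = 2 (Rayleigh) oscillation frequency and viscosity from the damping time of that oscillation (Lamb's law). The sketch below applies these standard relations to invented droplet values; the numbers are not measurements from the PFLEX or MSL-EML campaigns.

```python
import math

# Assumed droplet parameters (illustrative, not measured values)
m = 1.0e-3    # droplet mass, kg
f2 = 40.0     # l = 2 surface-oscillation frequency, Hz
tau = 0.8     # damping time of the oscillation, s
R = 0.004     # droplet radius, m

# Rayleigh relation for the l = 2 mode: sigma = 3 * pi * m * f2**2 / 8
sigma = 3 * math.pi * m * f2 ** 2 / 8      # surface tension, N/m

# Lamb's law for the l = 2 mode: eta = 3 * m / (20 * pi * R * tau)
eta = 3 * m / (20 * math.pi * R * tau)     # viscosity, Pa*s
```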

  13. Detection of Chamber Conditioning Through Optical Emission and Impedance Measurements

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.; Rao, M. V. V. S.; Sharma, Surendra P.; Meyyappan, Meyya

    2001-01-01

    During oxide etch processes, buildup of fluorocarbon residues on reactor sidewalls can cause run-to-run drift and will necessitate some time for conditioning and seasoning of the reactor. Though diagnostics can be applied to study and understand these phenomena, many of them are not practical for use in an industrial reactor. For instance, measurements of ion fluxes and energy by mass spectrometry show that the buildup of insulating fluorocarbon films on the reactor surface will cause a shift in both ion energy and current in an argon plasma. However, such a device cannot be easily integrated into a processing system. The shift in ion energy and flux will be accompanied by an increase in the capacitance of the plasma sheath. The shift in sheath capacitance can be easily measured by a common commercially available impedance probe placed on the inductive coil. A buildup of film on the chamber wall is expected to affect the production of fluorocarbon radicals, and thus the presence of such species in the optical emission spectrum of the plasma can be monitored as well. These two techniques are employed on a GEC (Gaseous Electronics Conference) Reference Cell to assess the validity of optical emission and impedance monitoring as a metric of chamber conditioning. These techniques are applied to experimental runs with CHF3 and CHF3/O2/Ar plasmas, with intermediate monitoring of pure argon plasmas as a reference case for chamber conditions.

  14. Classification of heart valve condition using acoustic measurements

    SciTech Connect

    Clark, G.

    1994-11-15

    Prosthetic heart valves and the many great strides in valve design have been responsible for extending the life spans of many people with serious heart conditions. Even though prosthetic valves are extremely reliable, they are eventually susceptible to the long-term fatigue and structural failure effects expected of mechanical devices operating over long periods of time. The purpose of our work is to classify the condition of in vivo Bjork-Shiley Convexo-Concave (BSCC) heart valves by processing acoustic measurements of heart valve sounds. The structural failure of interest for BSCC valves is called single leg separation (SLS). SLS can occur if the outlet strut cracks and separates from the main structure of the valve. We measure acoustic opening and closing sounds (waveforms) using high-sensitivity contact microphones on the patient's thorax. For our analysis, we focus our processing and classification efforts on the opening sounds because they yield direct information about outlet strut condition with minimal distortion caused by energy radiated from the valve disc.

  15. Non-invasive ultrasound based temperature measurements at reciprocating screw plastication units: Methodology and applications

    NASA Astrophysics Data System (ADS)

    Straka, Klaus; Praher, Bernhard; Steinbichler, Georg

    2015-05-01

    Previous attempts to accurately measure the real polymer melt temperature in the screw chamber as well as in the screw channels have failed on account of the challenging metrological boundary conditions (high pressure, high temperature, rotational and axial screw movement). We developed a novel ultrasound system, based on reflection measurements, for the online determination of these important process parameters. Using available pressure-volume-temperature (pvT) data for a polymer, it is possible to estimate the density and adiabatic compressibility of the material and therefore the pressure- and temperature-dependent longitudinal ultrasound velocity. From the measured ultrasonic reflection times from the screw root and barrel wall, together with the pressure, it is possible to calculate the mean temperature in the screw channel or in the chamber in front of the screw (in contrast to flush-mounted infrared or thermocouple probes). By means of the system described above, we are able to measure axial profiles of the mean temperature in the screw chamber. The data gathered by the measurement system can be used to develop control strategies for the plastication process to reduce temperature gradients within the screw chamber, or as input data for injection moulding simulation.
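    The inversion chain described above, from reflection time to sound velocity to mean temperature, can be sketched as follows. The gap, transit time, and the linear velocity-temperature calibration are assumed numbers for illustration only; a real system would derive the calibration from the polymer's pvT data and the measured pressure, as the abstract describes.

```python
# Assumed geometry and measurement (illustrative values only)
gap_m = 0.012          # acoustic path across the screw channel, m
t_round_s = 2.18e-5    # measured round-trip reflection time, s

# Mean longitudinal sound velocity in the melt from time of flight
c = 2 * gap_m / t_round_s

# Assumed local linear calibration c(T) = c0 + k * (T - T0) from pvT data
c0, k, T0 = 1100.0, -2.5, 200.0    # m/s, m/(s*degC), degC
T_mean = T0 + (c - c0) / k         # mean melt temperature, degC
```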

  16. The application of conditioning paradigms in the measurement of pain.

    PubMed

    Li, Jun-Xu

    2013-09-15

    Pain is a private experience that involves both sensory and emotional components. In animal studies, pain can only be inferred from responses, and therefore the measurement of reflexive responses has dominated the pain literature for nearly a century. It has been argued that although reflexive responses are important to unveil the sensory nature of pain in organisms, pain affect is equally important but largely ignored in pain studies, primarily due to the lack of validated animal models. One strategy to begin to understand pain affect is to use conditioning principles to indirectly reveal the affective component of pain. This review critically analyzes several procedures that are thought to measure affective learning of pain. The current knowledge regarding these procedures, their applications, and their advantages and disadvantages in pain research are discussed. It is proposed that these procedures should be combined with traditional reflex-based pain measurements in future studies of pain, which could greatly benefit both the understanding of the neural underpinnings of pain and the preclinical assessment of novel analgesics.

  17. The application of conditioning paradigms in the measurement of pain

    PubMed Central

    Li, Jun-Xu

    2013-01-01

    Pain is a private experience that involves both sensory and emotional components. In animal studies, pain can only be inferred from responses, and therefore the measurement of reflexive responses has dominated the pain literature for nearly a century. It has been argued that although reflexive responses are important to unveil the sensory nature of pain in organisms, pain affect is equally important but largely ignored in pain studies, primarily due to the lack of validated animal models. One strategy to begin to understand pain affect is to use conditioning principles to indirectly reveal the affective component of pain. This review critically analyzes several procedures that are thought to measure affective learning of pain. The current knowledge regarding these procedures, their applications, and their advantages and disadvantages in pain research are discussed. It is proposed that these procedures should be combined with traditional reflex-based pain measurements in future studies of pain, which could greatly benefit both the understanding of the neural underpinnings of pain and the preclinical assessment of novel analgesics. PMID:23500202

  18. Closed-loop snowplow applicator control using road condition measurements

    NASA Astrophysics Data System (ADS)

    Erdogan, Gurkan; Alexander, Lee; Rajamani, Rajesh

    2011-04-01

    Closed-loop control of a snowplow applicator, based on direct measurement of the road surface condition, is a valuable technology for the optimisation of winter road maintenance costs and for the protection of the environment from the negative impacts of excessive usage of de-icing chemicals. To this end, a novel friction measurement wheel is designed to provide a continuous measurement of the road friction coefficient, which is, in turn, utilised to control the applicator automatically on a snowplow. It is desired that the automated snowplow applicator deploy de-icing materials right from the beginning of any slippery surface detected by the friction wheel, so that no portion of the slippery road surface is left untreated as the snowplow travels over it at a reasonably high speed. This paper describes the developed wheel-based measurement system, the friction estimation algorithm and the expected performance of the closed-loop applicator system. Conventional and zero-velocity applicators are introduced and their hardware time delays are measured in addition to the time delay of the friction estimation algorithm. The overall performance of the closed-loop applicator control system is shown to be reliable at typical snowplowing speeds if the zero-velocity applicator is used.

  19. High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.

    1997-01-01

    To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors which are designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons that will be explored in this paper, the Anderson current loop is the preferred method of signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel, and is detailed in this paper.
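    Once the current-loop conditioning delivers an accurate resistance, the RTD reading still has to be converted to temperature. A common conversion for platinum RTDs is the Callendar-Van Dusen equation; the sketch below inverts its simplified form (valid for T >= 0 degrees C) using the standard IEC 60751 Pt100 coefficients. This illustrates the conversion step only; the paper's sensors and coefficients may differ.

```python
import math

# IEC 60751 coefficients for a standard Pt100 element
R0, A, B = 100.0, 3.9083e-3, -5.775e-7

def pt100_temp(R):
    """Invert R = R0 * (1 + A*T + B*T**2) for T (valid for T >= 0 C)."""
    return (-A + math.sqrt(A * A - 4 * B * (1 - R / R0))) / (2 * B)
```

For example, a measured resistance of 138.5055 ohms corresponds to very nearly 100 degrees C.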

  20. Assessment of hygienic conditions of ground pepper (Piper nigrum L.) on the market in Sao Paulo City, by means of two methodologies for detecting the light filth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Pepper should be collected, processed, and packed under optimum conditions to avoid the presence of foreign matter. The hygienic conditions of ground pepper marketed in São Paulo city were assessed by determining the presence of foreign matter by means of two extraction methodologies. This study...

  1. Safety Assessment for a Surface Repository in the Chernobyl Exclusion Zone - Methodology for Assessing Disposal under Intervention Conditions - 13476

    SciTech Connect

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-07-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, RWDF Buryakovka is still being operated, but its maximum capacity is nearly reached. Plans to enlarge the facility have existed for more than 10 years but have not yet been implemented. In the framework of a European Commission project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) based on the planned enlargement. Due to its history, RWDF Buryakovka does not fully comply with today's best international practices and the latest Ukrainian regulations in this area. The most critical aspects are its inventory of long-lived radionuclides and its non-existent multi-barrier waste confinement system. A significant part of the project was therefore dedicated to the development of a methodology for the safety assessment that takes into consideration the facility's special situation, and to reaching an agreement with all stakeholders involved in the later review and approval procedure of the safety analysis reports. The main aspect of the agreed methodology was to analyze safety not strictly against regulatory requirements but based on an assessment of the actual situation of the facility, including its location within the Exclusion Zone. For both safety analysis reports, SAR and PSAR, the assessment of long-term safety led to results that were either within regulatory limits or within limits allowing for a specific situational evaluation by the regulator. (authors)

  2. Conditional measurements, quantum feedback, and cold atoms in cavity QED

    NASA Astrophysics Data System (ADS)

    Reiner, Joseph Earl

    Two-time correlation functions are equivalent to conditional measurements in the sense that, given a fluctuation at time t, they give the evolution of the system at time t + tau. Conditional measurements are well described by the formalism of quantum trajectories, which provides a "measurement friendly" means for understanding the evolution of a quantum system. The quantum system studied in this thesis is the strongly coupled atom-cavity QED system, which consists of N atoms coupled to a single electromagnetic field mode of a Fabry-Perot cavity. When the cavity emits a single photon, the intra-cavity field undergoes large fluctuations. The coherent evolution of the intra-cavity field following a photoemission reduces the cavity field noise below the shot-noise limit. A connection exists between this reduction, known as squeezing, and the conditioned field evolution: the cosine Fourier transform of the conditioned field evolution and the spectrum of squeezing are proportional. In the first part of my thesis I use this connection, along with quantum trajectory theory, to study the dynamic origins of the spectrum of squeezing. This led to a better understanding of previous experimental results in our cavity QED system. In the second and third parts of my thesis I used quantum trajectories to formulate two different quantum feedback schemes for a strongly coupled cavity QED system. In both feedback proposals it is the experimenter's knowledge of the system, and the detection of a single photon, that is used to control the evolution of the cavity QED system. We have implemented the first of these feedback proposals, which conditions feedback upon single-photon detections from our low-intensity cavity QED system. Previous experimental realizations have used a thermal beam to place the atoms inside the cavity, which degrades the effectiveness of the feedback proposals and the detection of quantum fluctuations. The final portion of my thesis

  3. "MARK I" MEASUREMENT METHODOLOGY FOR POLLUTION PREVENTION PROGRESS OCCURRING AS A RESULT OF PRODUCT DECISIONS

    EPA Science Inventory

    A methodology for assessing progress in pollution prevention resulting from product redesign, reformulation or replacement is described. The method compares the pollution generated by the original product with that from the modified or replacement product, taking into account, if...

  4. Suspended matter concentrations in coastal waters: Methodological improvements to quantify individual measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Röttgers, Rüdiger; Heymann, Kerstin; Krasemann, Hajo

    2014-12-01

    Measurements of total suspended matter (TSM) concentration and the discrimination of the particulate inorganic (PIM) and organic matter fractions by the loss-on-ignition method are susceptible to significant and contradictory bias errors from: (a) retention of sea salt in the filter (despite washing with deionized water), and (b) filter material loss during washing and combustion procedures. Several methodological procedures are described to avoid or correct errors associated with these biases, but no analysis of the final uncertainty of the overall mass concentration determination has yet been performed. Typically, the exact values of these errors are unknown and can only be estimated. Measurements were performed in coastal and estuarine waters of the German Bight that allowed the individual error for each sample to be determined with respect to a systematic mass offset. This was achieved by using different volumes of the sample and analyzing the mass-over-volume relationship by linear regression. The results showed that the variation in the mass offset is much larger than expected (mean mass offset: 0.85 ± 0.84 mg, range: -2.4 - 7.5 mg) and that it often leads to rather large relative errors even when TSM concentrations were high. Similarly large variations were found for the mass offset in PIM measurements. Correction with a mean offset determined with procedural control filters reduced the maximum error to <60%. The determination error for the TSM concentration was <40% when three different volumes were used, and for the majority of the samples the error was <10%. When six different volumes were used and outliers removed, the error was always <25%; very often errors of only a few percent were obtained. The approach proposed here can determine the individual determination error for each sample, is independent of bias errors, can be used for TSM and PIM determination, and allows individual quality control for samples from coastal and estuarine waters. It should
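    The mass-over-volume regression used above is simple to reproduce: filter several volumes of the same sample, regress filter mass gain on volume, and read the TSM concentration from the slope and the per-filter mass offset from the intercept. The numbers below are invented for illustration, not data from the German Bight campaign.

```python
import numpy as np

# Hypothetical replicate filtrations of one sample at different volumes:
# true concentration 20 mg/L, constant mass offset 0.9 mg, small weighing noise
volume_l = np.array([0.1, 0.2, 0.3, 0.5, 0.8, 1.0])
mass_mg = 20.0 * volume_l + 0.9 + np.array([0.05, -0.1, 0.08, -0.04, 0.1, -0.06])

# Slope = bias-free TSM concentration (mg/L); intercept = mass offset (mg)
slope, intercept = np.polyfit(volume_l, mass_mg, 1)
```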

  5. Practical Experience of Discharge Measurement in Flood Conditions with ADP

    NASA Astrophysics Data System (ADS)

    Vidmar, A.; Brilly, M.; Rusjan, S.

    2009-04-01

    Accurate discharge estimation is important for efficient river basin management and especially for flood forecasting. The traditional way of estimating discharge in hydrological practice is to measure the water stage and to convert the recorded water stage values into discharge by using a single-valued rating curve. The relationship between the stage and discharge values of the rating curve for extreme events is usually extrapolated by using different mathematical methods and is not directly measured. Our practice shows that by using the Acoustic Doppler Profiler (ADP) instrument we can record the actual relation between the water stage and the flow velocity during flood waves very successfully. Measurement in flood conditions is not an easy task, because of the high water surface velocity, the large amount of sediment in the water, and floating objects on the surface such as branches, bushes, trees, and piles, which can easily damage the ADP instrument. We made several measurements in such extreme events on the Sava River down to the Krško nuclear power plant, where we have installed a fixed cableway. During several measurements with the traditional "moving-boat" technique, a moving-bed phenomenon was clearly seen. To measure flow accurately with an ADP using the "moving-boat" technique, the system needs a fixed reference against which to relate water velocities; this reference is the river bed, which must not move. During flood events we had difficulty finding a static bed surface to which to relate water velocities. This is caused by motion of the surface layer of bed material, or by sediment suspended very densely in the water near the bed. Thus the traditional "moving-boat" measurement techniques that we normally use completely fail. Using the stationary measurement method to make individual velocity profile measurements with an Acoustic Doppler Profiler (ADP) at fixed locations across the width of the stream gave

  6. Evaluation of methodological aspects of digestibility measurements in ponies fed different grass hays.

    PubMed

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; Wartena, F C; Zoon, M V; Blok, M C; Hendriks, W H

    2015-10-01

    Methodological aspects of digestibility measurements of feedstuffs for equines were studied in four Welsh pony geldings consuming four grass-hay diets in a 4 × 4 Latin square design. Diets contained either a low (L), medium (M), high (H), or very high (VH) ADF content (264, 314, 375, or 396 g·kg⁻¹ DM, respectively). Diets were supplemented with minerals, vitamins, and TiO₂ (3.9 g Ti·d⁻¹). Daily feces excreted were collected quantitatively over 10 consecutive days and analyzed for moisture, ash, ADL, AIA, and titanium (Ti). Minimum duration of total fecal collection (TFC) required for an accurate estimation of apparent organic matter digestibility (OMD) of grass hay was assessed. Based on literature and the calculated cumulative OMD assessed over 10 consecutive days of TFC, a minimum duration of at least 5 consecutive days of fecal collection is recommended for accurate estimation of dry matter digestibility (DMD) and OMD in ponies. The 5-d collection should be preceded by a 14-d adaptation period to allow the animals to adapt to the diets and become accustomed to the collection procedures. Mean fecal recovery over 10 d across diets for ADL, AIA, and Ti was 93.1% (SE 1.9), 98.9% (SE 5.5), and 97.1% (SE 1.8), respectively. Evaluation of CV of mean fecal recoveries obtained by ADL, AIA, and Ti showed that variation in fecal Ti (6.8) and ADL excretion (7.0) was relatively low compared to AIA (12.3). In conclusion, the use of internal ADL and externally supplemented Ti are preferred as markers to be used in digestibility trials in equine fed grass-hay diets.
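    The fecal marker recoveries and digestibility values reported above come from straightforward ratios. The sketch below shows the arithmetic with invented daily totals; only the 3.9 g/d Ti dose is taken from the study, and the other quantities are hypothetical.

```python
# Marker recovery: percentage of ingested marker recovered in feces
ti_intake_g = 3.9      # Ti ingested per day (study's dosing)
ti_feces_g = 3.79      # hypothetical Ti excreted per day
recovery_pct = 100 * ti_feces_g / ti_intake_g

# Apparent organic matter digestibility from total fecal collection
om_intake_g = 4200.0   # hypothetical OM intake per day
om_feces_g = 1850.0    # hypothetical OM excretion per day
omd_pct = 100 * (om_intake_g - om_feces_g) / om_intake_g
```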

  7. Nuclear Structure Measurements of Fermium-254 and Advances in Target Production Methodologies

    NASA Astrophysics Data System (ADS)

    Gothe, Oliver Ralf

    The Berkeley Gas-filled Separator (BGS) has been upgraded with a new gas control system. It allows for accurate control of hydrogen and helium gas mixtures. This greatly increases the capabilities of the separator by reducing background signals in the focal plane detector for asymmetric nuclear reactions. It has also been shown that gas mixtures can be used to focus the desired reaction products into a smaller area, thereby increasing the experimental efficiency. A new electrodeposition cell has been developed to produce metal oxide targets for experiments at the BGS. The new cell has been characterized and was used to produce americium targets for the production of element 115 in the reaction 243Am(48Ca,3n)288115. Additionally, a new method of producing targets for nuclear reactions was explored. A procedure for producing targets via Polymer Assisted Deposition (PAD) was developed, and targets produced via this method were tested using the nuclear reaction 208Pb(40Ar,4n)244Fm to determine their in-beam performance. It was determined that the silicon nitride backings used in this procedure are not feasible due to their crystal structures, and alternative backing materials have been tested and proposed. A previously unknown level in 254Fm has been identified at 985.7 keV utilizing a newly developed low-background coincidence apparatus. 254No was produced in the reaction 208Pb(48Ca,2n)254No. Reaction products were guided to the two-clover low-background detector setup via a recoil transfer chamber. The new level has been assigned a spin of 2- and has tentatively been identified as the octupole vibration in 254Fm. Transporting evaporation residues to a two-clover, low-background detector setup can effectively be used to perform gamma-spectroscopy measurements of nuclei that are not accessible by current common methodologies. This technique provides an excellent addition to previously available tools such as in-beam spectroscopy and gamma-ray tracking arrays.

  8. Evaluation of methodological aspects of digestibility measurements in ponies fed different grass hays.

    PubMed

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; Wartena, F C; Zoon, M V; Blok, M C; Hendriks, W H

    2015-10-01

    Methodological aspects of digestibility measurements of feedstuffs for equines were studied in four Welsh pony geldings consuming four grass-hay diets in a 4 × 4 Latin square design. Diets contained either a low (L), medium (M), high (H), or very high (VH) ADF content (264, 314, 375, or 396 g·kg⁻¹ DM, respectively). Diets were supplemented with minerals, vitamins, and TiO₂ (3.9 g Ti·d⁻¹). Daily feces excreted were collected quantitatively over 10 consecutive days and analyzed for moisture, ash, ADL, AIA, and titanium (Ti). Minimum duration of total fecal collection (TFC) required for an accurate estimation of apparent organic matter digestibility (OMD) of grass hay was assessed. Based on literature and the calculated cumulative OMD assessed over 10 consecutive days of TFC, a minimum duration of at least 5 consecutive days of fecal collection is recommended for accurate estimation of dry matter digestibility (DMD) and OMD in ponies. The 5-d collection should be preceded by a 14-d adaptation period to allow the animals to adapt to the diets and become accustomed to the collection procedures. Mean fecal recovery over 10 d across diets for ADL, AIA, and Ti was 93.1% (SE 1.9), 98.9% (SE 5.5), and 97.1% (SE 1.8), respectively. Evaluation of CV of mean fecal recoveries obtained by ADL, AIA, and Ti showed that variation in fecal Ti (6.8) and ADL excretion (7.0) was relatively low compared to AIA (12.3). In conclusion, the use of internal ADL and externally supplemented Ti are preferred as markers to be used in digestibility trials in equine fed grass-hay diets. PMID:26523567

  9. An analysis of the pilot point methodology for automated calibration of an ensemble of conditionally simulated transmissivity fields

    USGS Publications Warehouse

    Cooley, R.L.

    2000-01-01

    An analysis of the pilot point method for automated calibration of an ensemble of conditionally simulated transmissivity fields was conducted on the basis of the simplifying assumption that the flow model is a linear function of log transmissivity. The analysis shows that the pilot point and conditional simulation method of model calibration and uncertainty analysis can produce accurate uncertainty measures if it can be assumed that errors of unknown origin in the differences between observed and model-computed water pressures are small. When this assumption is not met, the method could yield significant errors from overparameterization and the neglect of potential sources of model inaccuracy. The conditional simulation part of the method is also shown to be a variant of the percentile bootstrap method, so that when applied to a nonlinear model, the method is subject to bootstrap errors. These sources of error must be considered when using the method.
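    Since the conditional simulation step is identified above as a variant of the percentile bootstrap, its mechanics are easy to illustrate: resample, recompute the statistic, and take empirical percentiles. The data and statistic below are invented for illustration; the paper applies the idea to conditionally simulated log-transmissivity fields rather than a simple mean.

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(5.0, 2.0, 200)    # hypothetical log-transmissivity observations

# Percentile bootstrap for the mean: resample with replacement, recompute,
# then take the empirical 2.5th and 97.5th percentiles as a 95% interval
boot = np.array([rng.choice(data, data.size, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
```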

  10. Collector probe measurements of ohmic conditioning discharges in TFTR

    SciTech Connect

    Kilpatrick, S.J.; Dylla, H.F.; Manos, D.M.; Cohen, S.A.; Wampler, W.R.; Bastasz, R.

    1989-03-01

    Special limiter conditioning techniques using low density deuterium or helium discharges have produced enhanced plasma confinement in TFTR. Measurements with a rotatable collector probe have been made to increase our understanding of the boundary layer during these conditioning sequences. A set of silicon films behind slits facing in the ion and electron drift directions was exposed to four different D/sup +/ and He/sup 2 +/ discharge sequences. The amounts of deuterium and impurities trapped in the surface regions of the samples have been measured by different analytical techniques, including nuclear reaction analysis for retained deuterium, Rutherford backscattering spectroscopy for carbon and metals, and Auger electron spectroscopy for carbon, oxygen, and metals. Up to 1.9 /times/ 10/sup 17/ cm/sup /minus/2/ of deuterium was detected in codeposited carbon layers with D/C generally in the range of the bulk saturation limit. Radial profiles and ion drift/electron drift asymmetries are discussed. 21 refs., 3 figs., 1 tab.

  11. Measurement in Learning Games Evolution: Review of Methodologies Used in Determining Effectiveness of "Math Snacks" Games and Animations

    ERIC Educational Resources Information Center

    Trujillo, Karen; Chamberlin, Barbara; Wiburg, Karin; Armstrong, Amanda

    2016-01-01

    This article captures the evolution of research goals and methodologies used to assess the effectiveness and impact of a set of mathematical educational games and animations for middle-school aged students. The researchers initially proposed using a mixed model research design of formative and summative measures, such as user-testing,…

  12. Development of a methodology to evaluate probable maximum precipitation (PMP) under changing climate conditions: Application to southern Quebec, Canada

    NASA Astrophysics Data System (ADS)

    Rousseau, Alain N.; Klein, Iris M.; Freudiger, Daphné; Gagnon, Patrick; Frigon, Anne; Ratté-Fortin, Claudie

    2014-11-01

    Climate change (CC) needs to be accounted for in the estimation of probable maximum floods (PMFs). However, there is no unique way to estimate PMFs and, furthermore, the challenge in estimating them is that they should neither be underestimated, for safety reasons, nor overestimated, for economic ones. By estimating PMFs without accounting for CC, the risk of underestimation could be high for Quebec, Canada, since future climate simulations indicate that in all likelihood extreme precipitation events will intensify. In this paper, simulation outputs from the Canadian Regional Climate Model (CRCM) are used to develop a methodology to estimate probable maximum precipitations (PMPs) while accounting for changing climate conditions for the southern region of the Province of Quebec, Canada. The Kénogami and Yamaska watersheds are herein of particular interest, since dam failures could lead to major downstream impacts. Precipitable water (w) represents one of the key variables in the estimation process of PMPs. Results of stationarity tests indicate that CC will not only affect precipitation and temperature but also the monthly maximum precipitable water, wmax, and the ensuing maximization ratio used for the estimation of PMPs. An up-to-date computational method is developed to maximize w using a non-stationary frequency analysis, and then calculate the maximization ratios. The ratios estimated this way are deemed reliable since they rarely exceed threshold values set for Quebec, and, therefore, provide consistent PMP estimates. The results show an overall significant increase of the PMPs throughout the current century compared to the recent past.
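    The moisture-maximization step described above amounts to scaling an observed storm by the ratio of maximum to storm precipitable water, subject to a threshold. A toy sketch of that arithmetic follows; every number, including the cap, is hypothetical and not a value from the study:

```python
# Illustrative moisture-maximization arithmetic for a PMP estimate.
# All numbers are hypothetical, not values from the study.
observed_precip_mm = 120.0   # storm rainfall depth
w_storm_mm = 30.0            # precipitable water during the storm
w_max_mm = 45.0              # maximum precipitable water (e.g. from a frequency analysis)

ratio = w_max_mm / w_storm_mm   # maximization ratio
ratio = min(ratio, 2.0)         # cap at an assumed threshold value

pmp_mm = observed_precip_mm * ratio
print(f"maximization ratio = {ratio:.2f}, PMP estimate = {pmp_mm:.0f} mm")
```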

  13. Reverse micellar extraction of lectin from black turtle bean (Phaseolus vulgaris): optimisation of extraction conditions by response surface methodology.

    PubMed

    He, Shudong; Shi, John; Walid, Elfalleh; Zhang, Hongwei; Ma, Ying; Xue, Sophia Jun

    2015-01-01

    Lectin from black turtle bean (Phaseolus vulgaris) was extracted and purified by reverse micellar extraction (RME) method. Response surface methodology (RSM) was used to optimise the processing parameters for both forward and backward extraction. Hemagglutinating activity analysis, SDS-PAGE, RP-HPLC and FTIR techniques were used to characterise the lectin. The optimum extraction conditions were determined as 77.59 mM NaCl, pH 5.65, AOT 127.44 mM sodium bis (2-ethylhexyl) sulfosuccinate (AOT) for the forward extraction; and 592.97 mM KCl, pH 8.01 for the backward extraction. The yield was 63.21 ± 2.35 mg protein/g bean meal with a purification factor of 8.81 ± 0.17. The efficiency of RME was confirmed by SDS-PAGE and RP-HPLC, respectively. FTIR analysis indicated there were no significant changes in the secondary protein structure. Comparison with conventional chromatographic method confirmed that the RME method could be used for the purification of lectin from the crude extract. PMID:25053033

  15. Inference of human affective states from psychophysiological measurements extracted under ecologically valid conditions

    PubMed Central

    Betella, Alberto; Zucca, Riccardo; Cetnarski, Ryszard; Greco, Alberto; Lanatà, Antonio; Mazzei, Daniele; Tognetti, Alessandro; Arsiwalla, Xerxes D.; Omedas, Pedro; De Rossi, Danilo; Verschure, Paul F. M. J.

    2014-01-01

    Compared to standard laboratory protocols, the measurement of psychophysiological signals in real world experiments poses technical and methodological challenges due to external factors that cannot be directly controlled. To address this problem, we propose a hybrid approach based on an immersive and human accessible space called the eXperience Induction Machine (XIM), that incorporates the advantages of a laboratory within a life-like setting. The XIM integrates unobtrusive wearable sensors for the acquisition of psychophysiological signals suitable for ambulatory emotion research. In this paper, we present results from two different studies conducted to validate the XIM as a general-purpose sensing infrastructure for the study of human affective states under ecologically valid conditions. In the first investigation, we recorded and classified signals from subjects exposed to pictorial stimuli corresponding to a range of arousal levels, while they were free to walk and gesticulate. In the second study, we designed an experiment that follows the classical conditioning paradigm, a well-known procedure in the behavioral sciences, with the additional feature that participants were free to move in the physical space, as opposed to similar studies measuring physiological signals in constrained laboratory settings. Our results indicate that, by using our sensing infrastructure, it is indeed possible to infer human event-elicited affective states through measurements of psychophysiological signals under ecological conditions. PMID:25309310

  16. Measurement of leukocyte rheology in vascular disease: clinical rationale and methodology. International Society of Clinical Hemorheology.

    PubMed

    Wautier, J L; Schmid-Schönbein, G W; Nash, G B

    1999-01-01

    The measurement of leukocyte rheology in vascular disease is a recent development with a wide range of new opportunities. The International Society of Clinical Hemorheology has asked an expert panel to propose guidelines for the investigation of leukocyte rheology in clinical situations. This article first discusses the mechanical, adhesive and related functional properties of leukocytes (especially neutrophils) which influence their circulation, and establishes the rationale for clinically-related measurements of parameters which describe them. It is concluded that quantitation of leukocyte adhesion molecules, and of their endothelial receptors, may assist understanding of leukocyte behaviour in vascular disease, along with measurements of flow resistance of leukocytes, free radical production, degranulation and gene expression. For instance, vascular cell adhesion molecule (VCAM-1) is abnormally present on endothelial cells in atherosclerosis, diabetes mellitus and inflammatory conditions. Soluble forms of intercellular adhesion molecule (ICAM-1) or VCAM can be found at elevated levels in the blood of patients with rheumatoid arthritis or infectious diseases. In the second part of the article, possible technical approaches are presented and possible avenues for leukocyte rheological investigations are discussed. PMID:10517484

  17. A METHODOLOGY TO INTEGRATE MAGNETIC RESONANCE AND ACOUSTIC MEASUREMENTS FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Jorge O. Parra; Chris L. Hackert; Lorna L. Wilson

    2002-09-20

    The work reported herein represents the third year of development efforts on a methodology to interpret magnetic resonance and acoustic measurements for reservoir characterization. In this last phase of the project we characterize a vuggy carbonate aquifer in the Hillsboro Basin, Palm Beach County, South Florida, using two data sets--the first generated by velocity tomography and the second generated by reflection tomography. First, we integrate optical macroscopic (OM), scanning electron microscope (SEM) and x-ray computed tomography (CT) images, as well as petrography, as a first step in characterizing the aquifer pore system. This pore scale integration provides information with which to evaluate nuclear magnetic resonance (NMR) well log signatures for NMR well log calibration, interpret ultrasonic data, and characterize flow units at the field scale between two wells in the aquifer. Saturated and desaturated NMR core measurements estimate the irreducible water in the rock and the variable T₂ cut-offs for the NMR well log calibration. These measurements establish empirical equations to extract permeability from NMR well logs. Velocity and NMR-derived permeability and porosity relationships integrated with velocity tomography (based on crosswell seismic measurements recorded between two wells 100 m apart) capture two flow units that are supported with pore scale integration results. Next, we establish a more detailed picture of the complex aquifer pore structures and the critical role they play in water movement, which aids in our ability to characterize not only carbonate aquifers, but reservoirs in general. We analyze petrography and cores to reveal relationships between the rock physical properties that control the compressional and shear wave velocities of the formation. A digital thin section analysis provides the pore size distributions of the rock matrix, which allows us to relate pore structure to permeability and to characterize flow units at the
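    A common form for the empirical NMR-permeability equations of the kind mentioned above is an SDR-type relation, k = a·φ⁴·T₂lm². The sketch below uses placeholder inputs and a hypothetical calibration constant, not values derived in the study:

```python
import numpy as np

def sdr_permeability(porosity, t2_log_mean_ms, a=4.0):
    """SDR-type NMR permeability estimate, k = a * phi**4 * T2lm**2.

    `a` is a formation-specific calibration constant; the value used
    here is a placeholder, not one derived in the study.
    """
    return a * porosity ** 4 * t2_log_mean_ms ** 2

phi = np.array([0.15, 0.20, 0.30])    # fractional porosity
t2lm = np.array([50.0, 80.0, 120.0])  # T2 log-mean, in ms
k_md = sdr_permeability(phi, t2lm)    # nominally in millidarcies
print(k_md)
```

    In practice `a` (and sometimes the exponents) would be fitted against core permeability measurements, as in the calibration step the abstract describes.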

  18. A novel methodology to measure methane bubble sizes in the water column

    NASA Astrophysics Data System (ADS)

    Hemond, H.; Delwiche, K.; Senft-Grupp, S.; Manganello, T.

    2014-12-01

    The fate of methane ebullition from lake sediments is dependent on initial bubble size. Rising bubbles are subject to dissolution, reducing the fraction of methane that ultimately enters the atmosphere while increasing concentrations of aqueous methane. Smaller bubbles not only rise more slowly, but also dissolve more rapidly than larger bubbles. Thus, understanding methane bubble size distributions in the water column is critical to predicting atmospheric methane emissions from ebullition. However, current methods of measuring methane bubble sizes in-situ are resource-intensive, typically requiring divers, video equipment, sonar, or hydroacoustic instruments. The complexity and cost of these techniques point to the strong need for a simple, autonomous device that can measure bubble size distributions and be deployed unattended over long periods of time. We describe a bubble sizing device that can be moored in the subsurface and can intercept and measure the size of bubbles as they rise. The instrument uses a novel optical measurement technique with infrared LEDs and IR-sensitive photodetectors combined with a custom-designed printed circuit board. An on-board microcomputer handles raw optical signals and stores the relevant information needed to calculate bubble volume. The electronics are housed within a pressure case fabricated from standard PVC fittings and are powered by size C alkaline batteries. The bill of materials cost is less than $200, allowing us to deploy multiple sensors at various locations within Upper Mystic Lake, MA. This novel device will provide information on how methane bubble sizes may vary both spatially and temporally. We present data from tests under controlled laboratory conditions and from deployments in Upper Mystic Lake.
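    As a rough illustration of why initial bubble size matters, terminal rise velocity in the Stokes regime scales with the radius squared, so small bubbles rise slowly and have more time to dissolve. Real methane bubbles in lakes are usually beyond the Stokes regime, so this is only a qualitative sketch with assumed water properties:

```python
# Qualitative sketch: Stokes-regime terminal rise velocity of a small
# bubble scales with radius squared, so smaller bubbles rise more slowly
# (and thus have more time to dissolve). Real methane bubbles in lakes
# are usually beyond the Stokes regime; water properties are assumed.
G = 9.81        # gravity, m/s^2
RHO_W = 1000.0  # water density, kg/m^3
MU = 1.0e-3     # dynamic viscosity of water, Pa*s

def stokes_rise_velocity(radius_m):
    # buoyancy-driven terminal velocity of a small sphere; gas density neglected
    return 2.0 * RHO_W * G * radius_m ** 2 / (9.0 * MU)

radii_mm = (0.1, 0.5, 1.0)
velocities = [stokes_rise_velocity(r * 1e-3) for r in radii_mm]
for r, v in zip(radii_mm, velocities):
    print(f"r = {r} mm -> v ~ {v * 100:.1f} cm/s")
```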

  19. Optimization of Extraction Conditions for Maximal Phenolic, Flavonoid and Antioxidant Activity from Melaleuca bracteata Leaves Using the Response Surface Methodology.

    PubMed

    Hou, Wencheng; Zhang, Wei; Chen, Guode; Luo, Yanping

    2016-01-01

    Melaleuca bracteata is a yellow-leaved tree belonging to the Melaleuca genus. Species from this genus are known to be good sources of natural antioxidants, for example, the "tea tree oil" derived from M. alternifolia is used in food processing to extend the shelf life of products. In order to determine whether M. bracteata contains novel natural antioxidants, the components of M. bracteata ethanol extracts were analyzed by gas chromatography-mass spectrometry. Total phenolic and flavonoid contents were extracted and the antioxidant activities of the extracts evaluated. Single-factor experiments, central composite rotatable design (CCRD) and response surface methodology (RSM) were used to optimize the extraction conditions for total phenolic content (TPC) and total flavonoid content (TFC). Ferric reducing power (FRP) and 1,1-Diphenyl-2-picrylhydrazyl radical (DPPH·) scavenging capacity were used as the evaluation indices of antioxidant activity. The results showed that the main components of M. bracteata ethanol extracts are methyl eugenol (86.86%) and trans-cinnamic acid methyl ester (6.41%). The single-factor experiments revealed that the ethanol concentration is the key factor determining the TPC, TFC, FRP and DPPH·scavenging capacity. RSM results indicated that the optimal condition of all four evaluation indices was achieved by extracting for 3.65 days at 53.26°C in 34.81% ethanol. Under these conditions, the TPC, TFC, FRP and DPPH·scavenging capacity reached values of 88.6 ± 1.3 mg GAE/g DW, 19.4 ± 0.2 mg RE/g DW, 2.37 ± 0.01 mM Fe2+/g DW and 86.0 ± 0.3%, respectively, which were higher than those of the positive control, methyl eugenol (FRP 0.97 ± 0.02 mM, DPPH·scavenging capacity 58.6 ± 0.7%) at comparable concentrations. Therefore, the extracts of M. bracteata leaves have high antioxidant activity, which is not attributable solely to methyl eugenol. Further research could lead to the development of a potent new natural antioxidant. PMID

  20. Optimization of Extraction Conditions for Maximal Phenolic, Flavonoid and Antioxidant Activity from Melaleuca bracteata Leaves Using the Response Surface Methodology

    PubMed Central

    Hou, Wencheng; Zhang, Wei; Chen, Guode; Luo, Yanping

    2016-01-01

    Melaleuca bracteata is a yellow-leaved tree belonging to the Melaleuca genus. Species from this genus are known to be good sources of natural antioxidants, for example, the “tea tree oil” derived from M. alternifolia is used in food processing to extend the shelf life of products. In order to determine whether M. bracteata contains novel natural antioxidants, the components of M. bracteata ethanol extracts were analyzed by gas chromatography–mass spectrometry. Total phenolic and flavonoid contents were extracted and the antioxidant activities of the extracts evaluated. Single-factor experiments, central composite rotatable design (CCRD) and response surface methodology (RSM) were used to optimize the extraction conditions for total phenolic content (TPC) and total flavonoid content (TFC). Ferric reducing power (FRP) and 1,1-Diphenyl-2-picrylhydrazyl radical (DPPH·) scavenging capacity were used as the evaluation indices of antioxidant activity. The results showed that the main components of M. bracteata ethanol extracts are methyl eugenol (86.86%) and trans-cinnamic acid methyl ester (6.41%). The single-factor experiments revealed that the ethanol concentration is the key factor determining the TPC, TFC, FRP and DPPH·scavenging capacity. RSM results indicated that the optimal condition of all four evaluation indices was achieved by extracting for 3.65 days at 53.26°C in 34.81% ethanol. Under these conditions, the TPC, TFC, FRP and DPPH·scavenging capacity reached values of 88.6 ± 1.3 mg GAE/g DW, 19.4 ± 0.2 mg RE/g DW, 2.37 ± 0.01 mM Fe2+/g DW and 86.0 ± 0.3%, respectively, which were higher than those of the positive control, methyl eugenol (FRP 0.97 ± 0.02 mM, DPPH·scavenging capacity 58.6 ± 0.7%) at comparable concentrations. Therefore, the extracts of M. bracteata leaves have high antioxidant activity, which is not attributable solely to methyl eugenol. Further research could lead to the development of a potent new natural antioxidant. PMID

  1. Measurement of Two-Phase Flow Characteristics Under Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.; Lin, C. S.; Edwards, L. G.; Knapp, J.; Harrison, M. E.; Xhang, X.

    1999-01-01

    This paper describes the technical approach and initial results of a test program for studying two-phase annular flow under the simulated microgravity conditions of KC-135 aircraft flights. A helical coil flow channel orientation was utilized in order to circumvent the restrictions normally associated with drop tower or aircraft flight tests with respect to two-phase flow, namely spatial restrictions preventing channel lengths of sufficient size to accurately measure pressure drops. Additionally, the helical coil geometry is of interest in itself, considering that a microgravity environment vastly simplifies the two-phase flows that occur in coiled flow channels under 1-g conditions, for virtually any orientation. Pressure drop measurements were made across four stainless steel coil test sections, having a range of inside tube diameters (0.95 to 1.9 cm), coil diameters (25 - 50 cm), and length-to-diameter ratios (380 - 720). High-speed video photographic flow observations were made in the transparent straight sections immediately preceding and following the coil test sections. A transparent coil of tygon tubing of 1.9 cm inside diameter was also used to obtain flow visualization information within the coil itself. Initial test data have been obtained from one set of KC-135 flight tests, along with benchmark ground tests. Preliminary results appear to indicate that accurate pressure drop data is obtainable using a helical coil geometry that may be related to straight channel flow behavior. Also, video photographic results appear to indicate that the observed slug-annular flow regime transitions agree quite reasonably with the Dukler microgravity map.

  2. A methodological evaluation of volumetric measurement techniques including three-dimensional imaging in breast surgery.

    PubMed

    Hoeffelin, H; Jacquemin, D; Defaweux, V; Nizet, J L

    2014-01-01

    Breast surgery currently remains very subjective and each intervention depends on the ability and experience of the operator. To date, no objective measurement of this anatomical region can codify surgery. In this light, we wanted to compare and validate a new technique for 3D scanning (LifeViz 3D) and its clinical application. We tested the use of the 3D LifeViz system (Quantificare) to perform volumetric calculations in various settings (in situ in cadaveric dissection, of control prostheses, and in clinical patients) and we compared this system to other techniques (CT scanning and Archimedes' principle) under the same conditions. We were able to identify the benefits (feasibility, safety, portability, and low patient stress) and limitations (underestimation of the in situ volume, subjectivity of contouring, and patient selection) of the LifeViz 3D system, concluding that the results are comparable with other measurement techniques. The prospects of this technology seem promising in numerous applications in clinical practice to limit the subjectivity of breast surgery. PMID:24511536

  3. Behavioral Observation Scales for Measuring Children's Distress: The Effects of Increased Methodological Rigor.

    ERIC Educational Resources Information Center

    Jay, Susan M.; Elliott, Charles

    1984-01-01

    Evaluated the effects of increased methodological rigor on the validity of the Observation Scale of Behavioral Distress and on findings concerning whether children habituate to painful procedures. Data were scored with and without refinements. Results indicated that children do habituate but that refinements had little effect on validity. (BH)

  4. The Measure of a Nation: The USDA and the Rise of Survey Methodology

    ERIC Educational Resources Information Center

    Mahoney, Kevin T.; Baker, David B.

    2007-01-01

    Survey research has played a major role in American social science. An outgrowth of efforts by the United States Department of Agriculture in the 1930s, the Division of Program Surveys (DPS) played an important role in the development of survey methodology. The DPS was headed by the ambitious and entrepreneurial Rensis Likert, populated by young…

  5. Measuring Sense of Community: A Methodological Interpretation of the Factor Structure Debate

    ERIC Educational Resources Information Center

    Peterson, N. Andrew; Speer, Paul W.; Hughey, Joseph

    2006-01-01

    Instability in the factor structure of the Sense of Community Index (SCI) was tested as a methodological artifact. Confirmatory factor analyses, tested with two data sets, supported neither the proposed one-factor nor the four-factor (needs fulfillment, group membership, influence, and emotional connection) SCI. Results demonstrated that the SCI…

  6. Measuring the Regional Economic Importance of Early Care and Education: The Cornell Methodology Guide

    ERIC Educational Resources Information Center

    Ribeiro, Rosaria; Warner, Mildred

    2004-01-01

    This methodology guide is designed to help study teams answer basic questions about how to conduct a regional economic analysis of the child care sector. Specific examples are drawn from a local study, Tompkins County, NY, and two state studies, Kansas and New York, which the Cornell team conducted. Other state and local studies are also…

  7. Measuring Team Shared Understanding Using the Analysis-Constructed Shared Mental Model Methodology

    ERIC Educational Resources Information Center

    Johnson, Tristan E.; O'Connor, Debra L.

    2008-01-01

    Teams are an essential part of successful performance in learning and work environments. Analysis-constructed shared mental model (ACSMM) methodology is a set of techniques where individual mental models are elicited and sharedness is determined not by the individuals who provided their mental models but by an analytical procedure. This method…

  8. Defining and Measuring Engagement and Learning in Science: Conceptual, Theoretical, Methodological, and Analytical Issues

    ERIC Educational Resources Information Center

    Azevedo, Roger

    2015-01-01

    Engagement is one of the most widely misused and overgeneralized constructs found in the educational, learning, instructional, and psychological sciences. The articles in this special issue represent a wide range of traditions and highlight several key conceptual, theoretical, methodological, and analytical issues related to defining and measuring…

  9. Use of Response Surface Methodology to Optimize Culture Conditions for Hydrogen Production by an Anaerobic Bacterial Strain from Soluble Starch

    NASA Astrophysics Data System (ADS)

    Kieu, Hoa Thi Quynh; Nguyen, Yen Thi; Dang, Yen Thi; Nguyen, Binh Thanh

    2016-05-01

    Biohydrogen is a clean energy source that produces no harmful byproducts during combustion, making it a potential sustainable energy carrier for the future. Therefore, biohydrogen produced by anaerobic bacteria via dark fermentation has attracted attention worldwide as a renewable energy source. However, the hydrogen production capability of these bacteria depends on major factors such as substrate, iron-containing hydrogenase, reduction agent, pH, and temperature. In this study, the response surface methodology (RSM) with central composite design (CCD) was employed to improve the hydrogen production by an anaerobic bacterial strain isolated from animal waste in Phu Linh, Soc Son, Vietnam (PL strain). The hydrogen production process was investigated as a function of three critical factors: soluble starch concentration (8 g L-1 to 12 g L-1), ferrous iron concentration (100 mg L-1 to 200 mg L-1), and l-cysteine concentration (300 mg L-1 to 500 mg L-1). RSM analysis showed that all three factors significantly influenced hydrogen production. Among them, the ferrous iron concentration presented the greatest influence. The optimum hydrogen concentration of 1030 mL L-1 medium was obtained with 10 g L-1 soluble starch, 150 mg L-1 ferrous iron, and 400 mg L-1 l-cysteine after 48 h of anaerobic fermentation. The hydrogen concentration produced by the PL strain was doubled after using RSM. These results indicate that RSM with CCD can be used as a technique to optimize culture conditions for enhancement of hydrogen production by the selected anaerobic bacterial strain. Hydrogen production from low-cost organic substrates such as soluble starch using anaerobic fermentation methods may be one of the most promising approaches.
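    A central composite design of the kind used above combines factorial corners, axial (star) points and a center point. A minimal sketch of generating the coded design points for three factors follows; the mapping to starch, ferrous iron and L-cysteine levels is illustrative only:

```python
from itertools import product

import numpy as np

def central_composite_design(k, alpha=None):
    """Coded points of a rotatable central composite design for k factors.

    2**k factorial corners at +/-1, 2k axial (star) points at +/-alpha,
    and one center point; alpha = (2**k) ** 0.25 gives rotatability.
    """
    if alpha is None:
        alpha = (2 ** k) ** 0.25
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    center = np.zeros((1, k))
    return np.vstack([corners, axial, center])

# e.g. coded levels for starch, ferrous iron and L-cysteine concentrations
design = central_composite_design(3)
print(design.shape)  # (15, 3): 8 corners + 6 axial points + 1 center
```

    In a real study the center point is usually replicated to estimate pure error, and each coded level is mapped back to a physical concentration range.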

  10. Measurements of optical underwater turbulence under controlled conditions

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.; Gladysz, S.; Almeida de Sá Barros, R.; Matt, S.; Nootz, G. A.; Josset, D. B.; Hou, W.

    2016-05-01

    Laser beam propagation underwater is becoming an important research topic because of high demand for its potential applications. Namely, the ability to image underwater at long distances is highly desired for scientific and military purposes, including submarine awareness, diver visibility, and mine detection. Optical communication in the ocean can provide covert data transmission with much higher rates than that available with acoustic techniques, and it is now desired for certain military and scientific applications that involve sending large quantities of data. Unfortunately, the underwater environment presents serious challenges for the propagation of laser beams. Even in clean ocean water, extinction due to absorption and scattering theoretically limits the useful range to a few attenuation lengths. However, extending the laser light propagation range to the theoretical limit leads to significant beam distortions due to optical underwater turbulence. Experiments show that the magnitude of the distortions caused by water temperature and salinity fluctuations can significantly exceed the magnitude of the beam distortions due to atmospheric turbulence, even for relatively short propagation distances. We are presenting direct measurements of optical underwater turbulence under the controlled conditions of a laboratory water tank, using two separate techniques involving a wavefront sensor and an LED array. These independent approaches will enable development of an underwater turbulence power spectrum model based directly on the spatial domain measurements and will lead to accurate predictions of underwater beam propagation.

  11. A methodology for successfully producing global translations of patient reported outcome measures for use in multiple countries.

    PubMed

    Two, Rebecca; Verjee-Lorenz, Aneesa; Clayson, Darren; Dalal, Mehul; Grotzinger, Kelly; Younossi, Zobair M

    2010-01-01

    The production of accurate and culturally relevant translations of patient reported outcome (PRO) measures is essential for the success of international clinical trials. Although there are many published reports regarding the translation of PRO measures, the techniques used to produce single translations for use in multiple countries (global translations) are not well documented. This article addresses this apparent lack of documentation and presents the methodology used to create global translations of the Chronic Liver Disease Questionnaire-Hepatitis C Virus (CLDQ-HCV). The challenges of creating a translation for use in multiple countries are discussed, and the criteria for a global translation project explained. Based on a thorough translation and linguistic validation methodology including a concept elaboration, multiple forward translations, two back translations, reviews by in-country clinicians and the instrument developer, pilot testing in each target country and multiple sets of proofreading, the key concept of the global translation methodology is consistent international harmonization, achieved through the involvement of linguists from each target country at every stage of the process. This methodology enabled the successful resolution of the translation issues encountered, and resulted in consistent translations of the CLDQ-HCV that were linguistically and culturally appropriate for all target countries. PMID:19695006

  13. A new methodology of determining directional albedo models from Nimbus 7 ERB scanning radiometer measurements

    NASA Technical Reports Server (NTRS)

    House, Frederick B.

    1986-01-01

    The Nimbus 7 Earth Radiation Budget (ERB) data set is reviewed to examine its strong and weak points. In view of the timing of this report relative to the processing schedule of Nimbus 7 ERB observations, emphasis is placed on the methodology of interpreting the scanning radiometer data to develop directional albedo models. These findings enhance the value of the Nimbus 7 ERB data set and can be applied to the interpretation of both the scanning and nonscanning radiometric observations.

  14. Water depression storage under different tillage conditions: measuring and modelling

    NASA Astrophysics Data System (ADS)

    Giménez, R.; Campo, M. A.; González-Audicana, M.; Álvarez-Mozos, J.; Casalí, J.

    2012-04-01

    Water storage in surface depressions (DS) is an important process which affects infiltration, runoff and erosion. Since DS is driven by micro-relief, in agricultural soils DS is much affected by tillage and by the direction of tillage rows in relation to the main slope. A direct and accurate measurement of DS requires making the soil surface waterproof -soil is very permeable, especially under tillage- while preserving all details of the soil roughness, including aggregates over the soil surface (micro-roughness). All this is a very laborious and time-consuming task. That is why hydrological and erosion models for DS estimation normally use either empirical relationships based on some roughness index or numerical approaches. The aim of this work was (i) to measure directly in the field the DS of a soil under different tillage conditions and (ii) to assess the performance of existing empirical 2D models and of a numerical 2D algorithm for DS estimation. Three tillage classes (mouldboard+roller, roller-compacted and chisel) in 2 tillage directions (parallel and perpendicular to the main slope) were assessed on an experimental hillslope (10% slope), defining 6 treatments. Experiments were carried out in twelve 1-m2 micro-plots delimited by metal sheets; that is, a pair of replicates for each treatment. In each plot, the soil surface was gently impregnated with a waterproof white paint without altering micro-roughness. A known amount of water (stained with a blue dye) was poured over the surface with a measuring cup. The excess water was captured in a gutter and measured. Soon after finishing the experiment, pictures of the surface were taken in order to analyze the water storage pattern (from the stained water) by image processing. In addition, longitudinal height profiles were measured using a laser profilometer. Finally, infiltration rate was measured near the plot using a double-ring infiltrometer. For all the treatments, DS ranged from 2 mm to 17 mm. For the

  15. A methodological note on the application of the generalized grid technique in the measurement of perceived intercultural distance.

    PubMed

    Puddifoot, J E

    1996-09-01

    A renewed interest in the concept of perceived 'social distance' has recently occurred in relation to intercultural perception. The research reported here examines the utility of the generalized grid technique in measuring intercultural distance in a sample of English adolescents. The methodology described offers some advantages in ease of administration and comprehension by participants and a form of analysis that allows for the representation and examination of patterns of response of numerous individuals collectively. PMID:8856952

  16. Martian dust threshold measurements: Simulations under heated surface conditions

    NASA Technical Reports Server (NTRS)

    White, Bruce R.; Greeley, Ronald; Leach, Rodman N.

    1991-01-01

    Diurnal changes in solar radiation on Mars set up a cycle of cooling and heating of the planetary boundary layer; this effect strongly influences the wind field. The stratification of the air layer is stable in early morning, since the ground is cooler than the air above it. When the ground is heated and becomes warmer than the air, its heat is transferred to the air above it. The heated parcels of air near the surface will, in effect, increase the near-surface wind speed, and thus the aeolian stress the wind exerts upon the surface, when compared to an unheated or cooled surface. This means that for the same wind speed at a fixed height above the surface, ground-level shear stress will be greater for a heated surface than for an unheated one. Thus, it is possible to reach saltation threshold conditions at lower mean wind speeds when the surface is heated. Even though the mean wind speed is less when the surface is heated, the surface shear stress required to initiate particle movement remains the same in both cases. To investigate this phenomenon, low-density surface dust aeolian threshold measurements have been made in the MARSWIT wind tunnel located at NASA Ames Research Center, Moffett Field, California. The first series of tests examined threshold values of the 100-micron sand material. At 13 mb surface pressure, the unheated surface had a threshold friction speed of 2.93 m/s (corresponding approximately to a velocity of 41.4 m/s at a height of 1 meter), while the heated surface, at an equivalent bulk Richardson number of -0.02, yielded a threshold friction speed of 2.67 m/s (corresponding approximately to a velocity of 38.0 m/s at a height of 1 meter). This change represents an 8.8 percent decrease in threshold conditions for the heated case. The values of velocities are well within the threshold range observed by Arvidson et al., 1983. As the surface was heated the threshold decreased. At a value of bulk Richardson number equal to -0.02 the threshold
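The reported reduction in threshold friction speed follows directly from the two measured values; a quick arithmetic check (speeds taken from the abstract; rounding of the quoted speeds explains the small difference from the stated 8.8 percent):

```python
# Threshold friction speeds from the abstract (m/s)
u_star_unheated = 2.93
u_star_heated = 2.67

# Relative decrease in threshold friction speed for the heated surface
decrease_pct = (u_star_unheated - u_star_heated) / u_star_unheated * 100
print(f"{decrease_pct:.1f}% decrease")  # ~8.9% with the rounded speeds
```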

  17. Methodological challenges in measuring vaccine effectiveness using population cohorts in low resource settings

    PubMed Central

    King, C.; Beard, J.; Crampin, A.C.; Costello, A.; Mwansambo, C.; Cunliffe, N.A.; Heyderman, R.S.; French, N.; Bar-Zeev, N.

    2015-01-01

    Post-licensure real-world evaluation of vaccine implementation is important for establishing evidence of vaccine effectiveness (VE) and programme impact, including indirect effects. Large cohort studies offer an important epidemiological approach for evaluating VE, but have inherent methodological challenges. Since March 2012, we have conducted an open prospective cohort study in two sites in rural Malawi to evaluate the post-introduction effectiveness of 13-valent pneumococcal conjugate vaccine (PCV13) against all-cause post-neonatal infant mortality and monovalent rotavirus vaccine (RV1) against diarrhoea-related post-neonatal infant mortality. Our study sites cover a population of 500,000, with a baseline post-neonatal infant mortality of 25 per 1000 live births. We conducted a methodological review of cohort studies for vaccine effectiveness in a developing country setting, applied to our study context. Based on published literature, we outline key considerations when defining the denominator (study population), exposure (vaccination status) and outcome ascertainment (mortality and cause of death) of such studies. We assess various definitions in these three domains in terms of their impact on power, effect size and potential biases and their direction, using our cohort study for illustration. Based on this iterative process, we discuss the pros and cons of our final per-protocol analysis plan. Since no single set of definitions or analytical approach accounts for all possible biases, we propose sensitivity analyses to interrogate our assumptions and methodological decisions. In the poorest regions of the world, where routine vital birth and death surveillance is frequently unavailable and the burden of disease and death is greatest, we conclude that, provided the balance between definitions and their overall assumed impact on estimated VE is acknowledged, such large-scale real-world cohort studies can provide crucial information to policymakers by providing

  18. A supervised vibration-based statistical methodology for damage detection under varying environmental conditions & its laboratory assessment with a scale wind turbine blade

    NASA Astrophysics Data System (ADS)

    Gómez González, A.; Fassois, S. D.

    2016-03-01

    The problem of vibration-based damage detection under varying environmental conditions and uncertainty is considered, and a novel, supervised, PCA-type statistical methodology is postulated. The methodology employs vibration data records from the healthy and damaged states of a structure under various environmental conditions. Unlike standard PCA-type methods in which a feature vector corresponding to the least important eigenvalues is formed in a single step, the postulated methodology uses supervised learning in which damaged-state data records are employed to sequentially form a feature vector by appending a transformed scalar element at a time under the condition that it optimally, among all remaining elements, improves damage detectability. This leads to the formulation of feature vectors with optimized sensitivity to damage, and thus high damage detectability. Within this methodology three particular methods, two non-parametric and one parametric, are formulated. These are validated and comparatively assessed via a laboratory case study focusing on damage detection on a scale wind turbine blade under varying temperature and the potential presence of sprayed water. Damage detection performance is shown to be excellent based on a single vibration response sensor and a limited frequency bandwidth.
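The sequential, supervised feature-building step described above can be illustrated with a greedy selection loop. The detectability score below (standardized separation between healthy- and damaged-state feature values) is a hypothetical stand-in for the paper's optimality criterion, and the data records are invented:

```python
# Greedy supervised feature selection: append one element at a time,
# keeping the candidate that best separates healthy from damaged records.

def detectability(healthy, damaged, feats):
    # Score a feature set by summed per-feature standardized mean separation.
    score = 0.0
    for f in feats:
        h = [rec[f] for rec in healthy]
        d = [rec[f] for rec in damaged]
        mh, md = sum(h) / len(h), sum(d) / len(d)
        spread = (max(h + d) - min(h + d)) or 1.0
        score += abs(mh - md) / spread
    return score

def build_feature_vector(healthy, damaged, candidates, k):
    chosen = []
    for _ in range(k):
        best = max((f for f in candidates if f not in chosen),
                   key=lambda f: detectability(healthy, damaged, chosen + [f]))
        chosen.append(best)
    return chosen

# Toy records: feature 1 barely changes with damage, feature 0 changes a lot.
healthy = [{0: 1.0, 1: 5.0, 2: 2.0}, {0: 1.1, 1: 5.1, 2: 2.1}]
damaged = [{0: 3.0, 1: 5.05, 2: 2.5}, {0: 3.2, 1: 5.0, 2: 2.6}]
print(build_feature_vector(healthy, damaged, [0, 1, 2], 2))  # → [0, 2]
```

With these toy values the most damage-sensitive feature (0) is chosen first, then the next most informative (2), mirroring the "append the element that optimally improves detectability" idea.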

  19. Neurolinguistic measures of typological effects in multilingual transfer: introducing an ERP methodology

    PubMed Central

    Rothman, Jason; Alemán Bañón, José; González Alonso, Jorge

    2015-01-01

    This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial stages transfer of morphosyntax, one goal of this program is to show how initial stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for and demonstrate how this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments. PMID:26300800

  20. Neurolinguistic measures of typological effects in multilingual transfer: introducing an ERP methodology.

    PubMed

    Rothman, Jason; Alemán Bañón, José; González Alonso, Jorge

    2015-01-01

    This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial stages transfer of morphosyntax, one goal of this program is to show how initial stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for and demonstrate how this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments. PMID:26300800

  1. Measurements methodology for evaluation of Digital TV operation in VHF high-band

    NASA Astrophysics Data System (ADS)

    Pudwell Chaves de Almeida, M.; Vladimir Gonzalez Castellanos, P.; Alfredo Cal Braz, J.; Pereira David, R.; Saboia Lima de Souza, R.; Pereira da Soledade, A.; Rodrigues Nascimento Junior, J.; Ferreira Lima, F.

    2016-07-01

    This paper describes the experimental setup of field measurements carried out to evaluate the operation of the ISDB-TB (Integrated Services Digital Broadcasting, Terrestrial, Brazilian version) digital TV standard in the VHF high band. Measurements were performed in urban and suburban areas of a medium-sized Brazilian city. Besides direct measurements of received power and environmental noise, a measurement procedure involving the injection of additive Gaussian noise was employed to find the signal-to-noise ratio threshold at each measurement site. The analysis includes results of static reception measurements for evaluating the received field strength and the signal-to-noise ratio thresholds for correct signal decoding.

  2. Methodological issues in measuring patient-reported outcomes: the agenda of the Work Group on Outcomes Assessment.

    PubMed

    Fowler, F J; Cleary, P D; Magaziner, J; Patrick, D L; Benjamin, K L

    1994-07-01

    The primary goal of the Inter-PORT work group on Outcomes Assessment is to foster methodological knowledge about the implications of various measurement and design decisions for studies of the outcomes of treatment. A number of key methodological issues are currently unresolved and in need of further study. These include: 1) the best questions to ask to assess how patients are affected by their treatments; 2) the comparative advantages of various study designs, including prospective cohorts, retrospective studies, and randomized clinical trials; 3) the way data collection decisions, such as mode of data collection and use of proxy respondents, affect study results; and 4) the best way to assess the significance of observed effects from patient, provider, and public policy perspectives. Studies conducted by the Patient Outcomes Research Teams (PORTs) use diverse designs, measures, and data collection procedures. They provide a unique opportunity to further knowledge about methods of obtaining information about treatment outcomes. Through meetings, conferences, and publications, the Inter-PORT work group on Outcomes Assessment is trying to stimulate analyses aimed at the methodological issues summarized in this paper and to ensure that new knowledge about methods is disseminated to a wide audience.

  3. A Novel Methodology for Measurements of an LED's Heat Dissipation Factor

    NASA Astrophysics Data System (ADS)

    Jou, R.-Y.; Haung, J.-H.

    2015-12-01

    Heat generation is an inevitable byproduct of high-power light-emitting diode (LED) lighting. The increase in junction temperature that accompanies the heat generation sharply degrades the optical output of the LED and has a significant negative influence on its reliability and durability. For these reasons, the heat dissipation factor, Kh, is an important factor in the modeling and thermal design of LED installations. In this study, a methodology is proposed and experiments are conducted to determine LED heat dissipation factors. Experiments are conducted for two different brands of LED. The average heat dissipation factor is 0.69 for the Edixeon LED and 0.60 for the OSRAM LED. By using the developed test method and comparing the results to luminous fluxes calculated from theoretical equations, the interdependence of optical, electrical, and thermal powers can be predicted with reasonable accuracy. The difference between the theoretical and experimental values is less than 9%.

  4. Creating ‘obesogenic realities’; do our methodological choices make a difference when measuring the food environment?

    PubMed Central

    2013-01-01

    Background The use of Geographical Information Systems (GIS) to objectively measure ‘obesogenic’ food environment (foodscape) exposure has become common-place. This increase in usage has coincided with the development of a methodologically heterogeneous evidence-base, with subsequent perceived difficulties for inter-study comparability. However, when used together in previous work, different types of food environment metric have often demonstrated some degree of covariance. Differences and similarities between density and proximity metrics, and within methodologically different conceptions of density and proximity metrics need to be better understood. Methods Frequently used measures of food access were calculated for North East England, UK. Using food outlet data from local councils, densities of food outlets per 1000 population and per km2 were calculated for small administrative areas. Densities (counts) were also calculated based on population-weighted centroids of administrative areas buffered at 400/800/1000m street network and Euclidean distances. Proximity (street network and Euclidean distances) from these centroids to the nearest food outlet were also calculated. Metrics were compared using Spearman’s rank correlations. Results Measures of foodscape density and proximity were highly correlated. Densities per km2 and per 1000 population were highly correlated (rs = 0.831). Euclidean and street network based measures of proximity (rs = 0.865) and density (rs = 0.667-0.764, depending on neighbourhood size) were also highly correlated. Density metrics based on administrative areas and buffered centroids of administrative areas were less strongly correlated (rs = 0.299-0.658). Conclusions Density and proximity metrics were largely comparable, with some exceptions. Whilst results suggested a substantial degree of comparability across existing studies, future comparability could be ensured by moving towards a more standardised set of
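The metric comparisons reported above rest on Spearman's rank correlation between pairs of foodscape measures. A self-contained sketch (the per-area `density` and `proximity` values are invented for illustration, not the study's data):

```python
# Spearman's rank correlation between two foodscape metrics,
# e.g. outlet density per km2 vs distance to the nearest outlet.

def ranks(values):
    # 1-based average ranks, handling ties.
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    # Pearson correlation of the rank vectors.
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Illustrative per-area values: outlets per km2 and metres to nearest outlet
density = [12.0, 3.5, 8.1, 0.9, 5.2]
proximity = [150.0, 900.0, 300.0, 1400.0, 620.0]
print(round(spearman(density, proximity), 3))  # → -1.0 (perfectly inverse toy ranks)
```

A strongly negative rho is what one would expect here, since areas dense in outlets tend to be close to the nearest outlet; the study reports the analogous correlations across real metrics.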

  5. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Jorge O.; Hackert, Chris L.; Collier, Hughbert A.; Bennett, Michael

    2002-01-29

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate NMR techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This is accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging are being linked with a balanced petrographical analysis of the core and theoretical model.

  6. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, J.O.

    2001-01-26

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate magnetic resonance (MR) techniques and acoustic measurements to improve predictability of the pay zone in two hydrocarbon reservoirs. This was accomplished by extracting the fluid property parameters using MR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurements were compared with petrographic analysis results to determine the relative roles of petrographic elements such as porosity type, mineralogy, texture, and distribution of clay and cement in creating permeability heterogeneity.

  7. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Ph.D., Jorge O.

    2002-06-10

    The objective of the project was to develop an advanced imaging method, including pore scale imaging, to integrate nuclear magnetic resonance (NMR) techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This will be accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging were linked with a balanced petrographical analysis of cores and theoretical modeling.

  8. Grapevine Yield and Leaf Area Estimation Using Supervised Classification Methodology on RGB Images Taken under Field Conditions

    PubMed Central

    Diago, Maria-Paz; Correa, Christian; Millán, Borja; Barreiro, Pilar; Valero, Constantino; Tardaguila, Javier

    2012-01-01

    The aim of this research was to implement a methodology based on a supervised classifier using the Mahalanobis distance to characterize the grapevine canopy and assess leaf area and yield from RGB images. The method automatically processes sets of images and calculates the areas (number of pixels) corresponding to seven different classes (Grapes, Wood, Background, and four classes of Leaf of increasing leaf age). Each class is initialized by the user, who selects a set of representative pixels in order to induce clustering around them. The proposed methodology was evaluated with 70 grapevine (V. vinifera L. cv. Tempranillo) images, acquired in a commercial vineyard located in La Rioja (Spain) after several defoliation and de-fruiting events on 10 vines, with a conventional RGB camera and no artificial illumination. The segmentation results showed a performance of 92% for leaves and 98% for clusters, and made it possible to assess the grapevine’s leaf area and yield with R2 values of 0.81 (p < 0.001) and 0.73 (p = 0.002), respectively. This methodology, which operates with a simple image acquisition setup and guarantees the right number and kind of pixel classes, has proven suitable and robust enough to provide valuable information for vineyard management. PMID:23235443
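A minimal sketch of a Mahalanobis-distance pixel classifier in the spirit of the method above, reduced to two color features and two classes so the 2x2 covariance can be inverted in closed form. The training pixels and class names are invented; the paper's classifier uses full RGB and seven classes:

```python
# Classify pixels by Mahalanobis distance to user-initialized class statistics.

def class_stats(pixels):
    # Mean and 2x2 covariance of (R, G) training pixels for one class.
    n = len(pixels)
    mr = sum(p[0] for p in pixels) / n
    mg = sum(p[1] for p in pixels) / n
    srr = sum((p[0] - mr) ** 2 for p in pixels) / n
    sgg = sum((p[1] - mg) ** 2 for p in pixels) / n
    srg = sum((p[0] - mr) * (p[1] - mg) for p in pixels) / n
    return (mr, mg), (srr, srg, sgg)

def mahalanobis2(p, mean, cov):
    # Squared Mahalanobis distance via the closed-form 2x2 inverse.
    srr, srg, sgg = cov
    det = srr * sgg - srg * srg
    dr, dg = p[0] - mean[0], p[1] - mean[1]
    return (sgg * dr * dr - 2 * srg * dr * dg + srr * dg * dg) / det

# Representative pixels selected by the user for each class: (R, G)
training = {
    "Grapes": [(60, 30), (70, 40), (65, 25), (55, 35)],
    "Leaf":   [(40, 120), (50, 130), (45, 115), (55, 125)],
}
stats = {name: class_stats(px) for name, px in training.items()}

def classify(pixel):
    return min(stats, key=lambda c: mahalanobis2(pixel, *stats[c]))

print(classify((62, 33)), classify((48, 128)))  # → Grapes Leaf
```

Each unknown pixel is assigned to the class whose training cloud it is closest to in Mahalanobis terms, which accounts for per-class spread and channel correlation rather than raw Euclidean distance.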

  9. Evaluation of constraint methodologies applied to a shallow-flaw cruciform bend specimen tested under biaxial loading conditions

    SciTech Connect

    Bass, B.R.; McAfee, W.J.; Williams, P.T.; Pennell, W.E.

    1998-01-01

    A technology to determine shallow-flaw fracture toughness of reactor pressure vessel (RPV) steels is being developed for application to the safety assessment of RPVs containing postulated shallow surface flaws. Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on the fracture initiation toughness of two-dimensional (constant depth), shallow surface flaws. The cruciform beam specimens were developed at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial stress component in the test section that approximates the nonlinear stresses resulting from pressurized-thermal-shock or pressure-temperature loading of an RPV. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. The cruciform fracture toughness data were used to evaluate fracture methodologies for predicting the observed effects of biaxial loading on shallow-flaw fracture toughness. Initial emphasis was placed on the assessment of stress-based methodologies, namely the J-Q formulation, the Dodds-Anderson toughness scaling model, and the Weibull approach. Applications of these methodologies based on the hydrostatic stress fracture criterion indicated an effect of loading biaxiality on fracture toughness, whereas the conventional maximum principal stress criterion indicated no effect.

  10. A methodology for investigating interdependencies between measured throughfall, meteorological variables and canopy structure on a small catchment.

    NASA Astrophysics Data System (ADS)

    Maurer, Thomas; Gustavos Trujillo Siliézar, Carlos; Oeser, Anne; Pohle, Ina; Hinz, Christoph

    2016-04-01

    In evolving initial landscapes, vegetation development depends on a variety of feedback effects. One of the less understood feedback loops is the interaction between throughfall and plant canopy development. The amount of throughfall is governed by the characteristics of the vegetation canopy, whereas vegetation pattern evolution may in turn depend on the spatio-temporal distribution of throughfall. Meteorological factors that may influence throughfall, while at the same time interacting with the canopy, include wind speed, wind direction and rainfall intensity. Our objective is to investigate how throughfall, vegetation canopy and meteorological variables interact in an exemplary eco-hydrological system in its initial development phase, in which the canopy is very heterogeneous and rapidly changing. For that purpose, we developed a methodological approach combining field methods, raster image analysis and multivariate statistics. The research area for this study is the Hühnerwasser ('Chicken Creek') catchment in Lower Lusatia, Brandenburg, Germany, where after eight years of succession the spatial distribution of plant species is highly heterogeneous, leading to increasingly differentiated throughfall patterns. The constructed 6-ha catchment offers ideal conditions for our study due to the rapidly changing vegetation structure and the availability of complementary monitoring data. Throughfall data were obtained by 50 tipping-bucket rain gauges arranged in two transects and connected via a wireless sensor network, covering the predominant vegetation types on the catchment (locust copses, dense sallow thorn bushes and reeds, base herbaceous and medium-rise small-reed vegetation, and open areas covered by moss and lichens). The spatial configuration of the vegetation canopy at each measurement site was described via digital image analysis of hemispheric photographs of the canopy using the ArcGIS Spatial Analyst, GapLight and ImageJ software. Meteorological data

  11. Experimental Measurements And Evaluation Of Indoor Microclimate Conditions

    NASA Astrophysics Data System (ADS)

    Kraliková, Ružena; Sokolová, Hana

    2015-07-01

    The paper deals with monitoring of a workplace where technological equipment produces heat during hot summer days. The thermo-hygric microclimate measurement took place during a daily work shift and was carried out at 5 chosen measuring points. Since radiant heat was present in the workplace and workers worked at different places, the thermal environment was classified as heterogeneous and non-stationary. The measurement, result processing and interpretation were carried out according to the valid legislation of the Slovak Republic.

  12. Negotiating Measurement: Methodological and Interpersonal Considerations in the Choice and Interpretation of Instruments

    ERIC Educational Resources Information Center

    Braverman, Marc T.

    2013-01-01

    Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…

  13. Methodologies for Investigating Item- and Test-Level Measurement Equivalence in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; Olson, Brent F.; Ercikan, Kadriye; Zumbo, Bruno D.

    2012-01-01

    In this study, the Canadian English and French versions of the Problem-Solving Measure of the Programme for International Student Assessment 2003 were examined to investigate their degree of measurement comparability at the item- and test-levels. Three methods of differential item functioning (DIF) were compared: parametric and nonparametric item…

  14. Advanced Ultrasonic Measurement Methodology for Non-Invasive Interrogation and Identification of Fluids in Sealed Containers

    SciTech Connect

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-16

    The Hazardous Materials Response Unit (HMRU) and the Counterterrorism and Forensic Science Research Unit (CTFSRU), Laboratory Division, Federal Bureau of Investigation (FBI) have been mandated to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection device (HAZAID) that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The HAZAID prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the HAZAID prototype. High-bandwidth ultrasonic transducers combined with the advanced pulse compression technique allowed researchers to 1) impart large amounts of energy, 2) obtain high signal-to-noise ratios, and 3) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of this feasibility study demonstrated that the HAZAID experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
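The pulse-compression TOF estimation at the heart of such measurements can be sketched as a cross-correlation peak search. This is a generic illustration with a synthetic, noiselessly delayed pulse, not the HAZAID signal chain; sample rate, pulse shape and delay are invented:

```python
# Time-of-flight by cross-correlation (pulse compression): correlate the
# received signal against the transmitted reference and take the lag of
# the correlation peak.
import math

fs = 1_000_000  # sample rate, Hz (assumed)
# Windowed tone burst as the reference pulse (50 kHz carrier, Gaussian window)
ref = [math.sin(2 * math.pi * 50_000 * t / fs) * math.exp(-((t - 32) / 12) ** 2)
       for t in range(64)]

true_delay = 150  # samples
rx = [0.0] * true_delay + ref + [0.0] * 64  # received = delayed reference

def tof_samples(received, reference):
    # Brute-force cross-correlation; the peak lag is the TOF in samples.
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(received) - len(reference) + 1):
        val = sum(r * x for r, x in zip(reference, received[lag:]))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

lag = tof_samples(rx, ref)
print(lag, "samples =", lag / fs * 1e6, "us")  # recovers the 150-sample delay
```

Given a known propagation path length, sound velocity in the fluid then follows as path length divided by TOF, which is the quantity used for material identification.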

  15. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    NASA Astrophysics Data System (ADS)

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection prototype that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  16. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    SciTech Connect

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-05-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection prototype that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  17. Use of Balanced Scorecard Methodology for Performance Measurement of the Health Extension Program in Ethiopia.

    PubMed

    Teklehaimanot, Hailay D; Teklehaimanot, Awash; Tedella, Aregawi A; Abdella, Mustofa

    2016-05-01

In 2004, Ethiopia introduced a community-based Health Extension Program to deliver basic and essential health services. We developed a comprehensive performance scoring methodology to assess the performance of the program. A balanced scorecard with six domains and 32 indicators was developed. Data collected from 1,014 service providers, 433 health facilities, and 10,068 community members sampled from 298 villages were used to generate weighted national, regional, and agroecological zone scores for each indicator. The national median indicator scores ranged from 37% to 98% with poor performance in commodity availability, workforce motivation, referral linkage, infection prevention, and quality of care. Indicator scores showed significant differences by region (P < 0.001). Regional performance varied across indicators, suggesting that each region had specific areas of strength and deficiency, with Tigray and the Southern Nations, Nationalities and Peoples Region being the best performers while the mainly pastoral regions of Gambela, Afar, and Benishangul-Gumuz were the worst. The findings of this study suggest the need for strategies aimed at improving specific elements of the program and its performance in specific regions to achieve quality and equitable health services.

  19. A Novel Instrument and Methodology for the In-Situ Measurement of the Stress in Thin Films

    NASA Technical Reports Server (NTRS)

    Broadway, David M.; Omokanwaye, Mayowa O.; Ramsey, Brian D.

    2014-01-01

We introduce a novel methodology for the in-situ measurement of mechanical stress during thin film growth utilizing a highly sensitive non-contact variation of the classic spherometer. By exploiting the known spherical deformation of the substrate, the value of the stress-induced curvature is inferred from the measurement of only one point on the substrate's surface: the sagitta. From the known curvature the stress can be calculated using the well-known Stoney equation. Based on this methodology, a stress sensor has been designed which is simple, highly sensitive, compact, and low cost. As a result of its compact nature, the sensor can be mounted in any orientation to accommodate a given deposition geometry without the need for extensive modification to an already existing deposition system. The technique employs a double-side-polished substrate that offers good specular reflectivity and is isotropic in its mechanical properties, such as <111>-oriented crystalline silicon or amorphous soda lime glass, for example. The measurement of the displacement of the uncoated side during deposition is performed with a high-resolution (i.e., 5 nm), commercially available, inexpensive fiber optic sensor which can be used in both high vacuum and high temperature environments (i.e., 10⁻⁷ Torr and 480 °C, respectively). A key attribute of this instrument lies in its potential to achieve sensitivity that rivals other measurement techniques such as the micro-cantilever method but, due to the comparatively larger substrate area, offers a more robust and practical alternative for subsequent measurement of additional characteristics of the film that might be correlated with film stress. We present measurement results of nickel films deposited by magnetron sputtering which show good qualitative agreement with the known behavior of polycrystalline films previously reported by Hoffman.
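The two-step computation the abstract describes — sagitta to curvature, then curvature to stress via the Stoney equation — fits in a few lines. The substrate and film numbers below are illustrative assumptions (229 GPa is roughly the biaxial modulus of Si(111)), not values from the paper.

```python
def stress_from_sagitta(sag, r_meas, t_sub, t_film, biaxial_modulus):
    """Film stress (Pa): curvature inferred from a single sagitta
    measurement on a spherically bent substrate, then Stoney's equation."""
    # radius of curvature of a spherical cap from its sagitta
    R = (r_meas**2 + sag**2) / (2.0 * sag)
    # Stoney: sigma_f = [E_s/(1 - nu_s)] * t_s**2 / (6 * t_f * R)
    return biaxial_modulus * t_sub**2 / (6.0 * t_film * R)

# illustrative: 50 nm sagitta over a 25 mm measurement radius,
# 500 um Si substrate, 100 nm film -> stress of order 15 MPa
sigma = stress_from_sagitta(50e-9, 25e-3, 500e-6, 100e-9, 229e9)
```

The nanometre-scale displacement resolution quoted in the abstract is what makes this single-point approach viable: a 50 nm sagitta over a 25 mm aperture corresponds to a radius of curvature of several kilometres.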

  20. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    NASA Technical Reports Server (NTRS)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  1. Comparison of efficiency of distance measurement methodologies in mango (Mangifera indica) progenies based on physicochemical descriptors.

    PubMed

    Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X

    2012-03-14

We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measurements, five mango progeny (total of 25 genotypes) were analyzed, using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measurements were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measurements were, except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measurements and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measurement and analysis method therefore influences the outcome of the multivariate analyses.
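The study's core comparison — Spearman rank correlation between distance matrices built from different measures — can be reproduced in outline. The data below are a synthetic stand-in for the 25 genotypes × 6 descriptors, and only two of the seven distance measures are shown.

```python
import numpy as np

def pairwise_dists(X, metric):
    """Flattened upper-triangle distance matrix for rows of X."""
    n = X.shape[0]
    return np.array([metric(X[i], X[j])
                     for i in range(n) for j in range(i + 1, n)])

def spearman(a, b):
    """Spearman rank correlation (no ties expected for continuous data)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra * rb).sum() / np.sqrt((ra * ra).sum() * (rb * rb).sum()))

rng = np.random.default_rng(0)
X = rng.normal(size=(25, 6))   # stand-in: 25 genotypes x 6 standardized descriptors

euclid = pairwise_dists(X, lambda u, v: np.sqrt(((u - v) ** 2).sum()))
manhat = pairwise_dists(X, lambda u, v: np.abs(u - v).sum())
rs = spearman(euclid, manhat)  # high rank agreement, as the study found
```

Euclidean and Manhattan distances typically rank genotype pairs almost identically, mirroring the study's rs ≥ 0.91 for all measures except Mahalanobis' distance, which incorporates the descriptor covariance structure.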

  2. Conditioning of sexual proceptivity in female quail: Measures of conditioned place preference

    PubMed Central

    Gutiérrez, Germán; Domjan, Michael

    2011-01-01

    The present experiments were conducted to explore the nature of conditioned sexual proceptivity in female quail. Females exposed to males subsequently approached the area where the males were previously housed (Experiment 1). This increased preference for the male’s area reflected an increase in female sexual proceptivity and not an increase in non-directed locomotor activity (Experiment 2). These findings provide the first evidence that female quail show conditioned responses that may be considered to be proceptive responses toward male conspecifics. The proceptive responses are expressed as tonic changes in preference for areas where males have been observed in the past rather than as specific phasic conditioned responses. PMID:21664442

  3. Measuring Science Teachers' Stress Level Triggered by Multiple Stressful Conditions

    ERIC Educational Resources Information Center

    Halim, Lilia; Samsudin, Mohd Ali; Meerah, T. Subahan M.; Osman, Kamisah

    2006-01-01

    The complexity of science teaching requires science teachers to encounter a range of tasks. Some tasks are perceived as stressful while others are not. This study aims to investigate the extent to which different teaching situations lead to different stress levels. It also aims to identify the easiest and most difficult conditions to be regarded…

  4. Measurement of Phonated Intervals during Four Fluency-Inducing Conditions

    ERIC Educational Resources Information Center

    Davidow, Jason H.; Bothe, Anne K.; Andreatta, Richard D.; Ye, Jun

    2009-01-01

    Purpose: Previous investigations of persons who stutter have demonstrated changes in vocalization variables during fluency-inducing conditions (FICs). A series of studies has also shown that a reduction in short intervals of phonation, those from 30 to 200 ms, is associated with decreased stuttering. The purpose of this study, therefore, was to…

  5. [The condition of crowding and spacing. Measuring or estimation?].

    PubMed

    Reukers, H A; Kuijpers-Jagtman, A M; van 't Hof, M A

    1994-10-01

Two methods for the assessment of the amount of crowding or spacing (measuring and assessment by eye) are compared. Both methods are comparable and reproducible. Assessment by eye has the practical advantage that it takes considerably less time.

  6. Measurements of aerosol chemical composition in boreal forest summer conditions

    NASA Astrophysics Data System (ADS)

Äijälä, M.; Junninen, H.; Ehn, M.; Petäjä, T.; Vogel, A.; Hoffmann, T.; Corrigan, A.; Russell, L.; Makkonen, U.; Virkkula, A.; Mäntykenttä, J.; Kulmala, M.; Worsnop, D.

    2012-04-01

Boreal forests are an important biome, covering vast areas of the northern hemisphere and affecting global climate change via various feedbacks [1]. Despite having relatively few anthropogenic primary aerosol sources, they always contain a non-negligible aerosol population [2]. This study describes aerosol chemical composition measurements using an Aerodyne Aerosol Mass Spectrometer (C-ToF AMS, [3]), carried out at a boreal forest area in Hyytiälä, Southern Finland. The site, Helsinki University SMEAR II measurement station [4], is situated at a homogeneous Scots pine (Pinus sylvestris) forest stand. In addition to the station's permanent aerosol, gas phase and meteorological instruments, during the HUMPPA (Hyytiälä United Measurements of Photochemistry and Particles in Air) campaign in July 2010, a very comprehensive set of atmospheric chemistry measurement instrumentation was provided by the Max Planck Institute for chemistry, Johannes Gutenberg-University, University of California and the Finnish Meteorological institute. In this study aerosol chemical composition measurements from the campaign are presented. The dominant aerosol chemical species during the campaign were the organics, although periods with elevated amounts of particulate sulfates were also seen. The overall AMS measured particle mass concentrations varied from near zero to 27 μg/m³, observed during a forest fire smoke episode. The AMS measured aerosol mass loadings were found to agree well with DMPS derived mass concentrations (r²=0.998). The AMS data was also compared with three other aerosol instruments. The Marga instrument [5] was used to provide a quantitative semi-online measurement of inorganic chemical compounds in particle phase. Fourier Transform Infrared Spectroscopy (FTIR) analysis was performed on daily filter samples, enabling the identification and quantification of organic aerosol subspecies. Finally an Atmospheric Pressure Chemical Ionization Ion Trap Mass Spectrometer (APCI

  7. Towards a Methodology for Validation of Centrality Measures in Complex Networks

    PubMed Central

    2014-01-01

Background Living systems are associated with social networks — networks made up of nodes, some of which may be more important in various aspects as compared to others. While different quantitative measures labeled as “centralities” have previously been used in the network analysis community to find out influential nodes in a network, it is debatable how valid the centrality measures actually are. In other words, the research question that remains unanswered is: how exactly do these measures perform in the real world? So, as an example, if a centrality of a particular node identifies it to be important, is the node actually important? Purpose The goal of this paper is not just to perform a traditional social network analysis but rather to evaluate different centrality measures by conducting an empirical study analyzing how network centralities correlate with data from published multidisciplinary network data sets. Method We take standard published network data sets while using a random network to establish a baseline. These data sets included Zachary's Karate Club network, the dolphin social network and a neural network of the nematode Caenorhabditis elegans. Each of the data sets was analyzed in terms of different centrality measures and compared with existing knowledge from associated published articles to review the role of each centrality measure in the determination of influential nodes. Results Our empirical analysis demonstrates that in the chosen network data sets, nodes which had a high Closeness Centrality also had a high Eccentricity Centrality. Likewise high Degree Centrality also correlated closely with a high Eigenvector Centrality. Betweenness Centrality, in contrast, varied according to network topology and did not demonstrate any noticeable pattern. In terms of identification of key nodes, we discovered that as compared with other centrality measures, Eigenvector and Eccentricity Centralities were better able to identify important nodes
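The kind of comparison the paper performs — does a node ranked high by degree also rank high by eigenvector centrality? — can be sketched on a toy graph. The adjacency matrix below is an illustrative example, not one of the paper's data sets.

```python
import numpy as np

# toy undirected graph: a densely connected core (nodes 0-3)
# with a pendant chain (nodes 4-5)
A = np.array([
    [0, 1, 1, 1, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [1, 0, 1, 0, 1, 0],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 0],
], dtype=float)

degree = A.sum(axis=1)          # degree centrality (unnormalized)

# eigenvector centrality: principal eigenvector of A via power iteration
x = np.ones(A.shape[0])
for _ in range(200):
    x = A @ x
    x /= np.linalg.norm(x)
```

On this graph the two rankings agree at the top: the highest-degree nodes (the triangle-rich core) also carry the largest eigenvector-centrality weight, while the chain nodes score low on both — the same correlation pattern the paper reports for its empirical data sets.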

  8. Doctoral Training in Statistics, Measurement, and Methodology in Psychology: Replication and Extension of Aiken, West, Sechrest, and Reno's (1990) Survey of PhD Programs in North America

    ERIC Educational Resources Information Center

    Aiken, Leona S.; West, Stephen G.; Millsap, Roger E.

    2008-01-01

    In a survey of all PhD programs in psychology in the United States and Canada, the authors documented the quantitative methodology curriculum (statistics, measurement, and research design) to examine the extent to which innovations in quantitative methodology have diffused into the training of PhDs in psychology. In all, 201 psychology PhD…

  9. Dual Wavelength Laser Writing and Measurement Methodology for High Resolution Bimetallic Grayscale Photomasks

    NASA Astrophysics Data System (ADS)

    Qarehbaghi, Reza

Grayscale bimetallic photomasks consist of bi-layer thermal resists (bismuth-on-indium or tin-on-indium) which become controllably transparent when exposed to a focused laser beam, as a function of the absorbed power, changing from ~3 OD (unexposed) to <0.22 OD (fully exposed). To achieve high-accuracy grayscale patterns, the OD must be measured and controlled while writing. This thesis investigates using two wavelengths separated from a multi-line argon ion laser source, one for mask writing (514.5 nm) and one for OD measurement (457.9 nm): a Dual Wavelength Writing and Measurement System. The writing laser profile was modified to a top-hat using a beam shaper. Several mask patterns were written to test the creation of high resolution grayscale masks. Finally, for the creation of 3D structures in photoresist, the relationship between mask transparency and resist thickness was formulated, and linear slope patterns were successfully created.

  10. A methodology suitable for TEM local measurements of carbon concentration in retained austenite

    SciTech Connect

    Kammouni, A.; Saikaly, W. Dumont, M.; Marteau, C.; Bano, X.; Charai, A.

    2008-09-15

Carbon concentration in retained austenite grains is of great importance in determining the mechanical properties of hot-rolled TRansformation Induced Plasticity steels. Among the different techniques available to measure such concentrations, Kikuchi lines obtained in Transmission Electron Microscopy provide a relatively easy and accurate method. The major problem, however, is locating an austenitic grain in the observed Transmission Electron Microscopy thin foil. Focused Ion Beam milling in combination with Scanning Electron Microscopy was used to successfully prepare a thin foil for Transmission Electron Microscopy and carbon concentration measurements from a 700 nm retained austenite grain.

  11. The Sensor Fish: Measuring Fish Passage in Severe Hydraulic Conditions

    SciTech Connect

    Carlson, Thomas J. ); Duncan, Joanne P. ); Gilbride, Theresa L. )

    2003-05-28

This article describes PNNL's efforts to develop the Sensor Fish, a waterproof sensor package that travels through the turbines or spillways of hydroelectric dams to collect pressure and acceleration data on the conditions experienced by live salmon smolts during dam passage. Sensor Fish development is sponsored by the DOE Advanced Hydropower Turbine Survival Program. The article also gives two recent examples of Sensor Fish use: turbine passage at a McNary Kaplan turbine and spill passage in top spill at Rock Island Dam.

  12. Rigorous Measures of Implementation: A Methodological Framework for Evaluating Innovative STEM Programs

    ERIC Educational Resources Information Center

    Cassata-Widera, Amy; Century, Jeanne; Kim, Dae Y.

    2011-01-01

    The practical need for multidimensional measures of fidelity of implementation (FOI) of reform-based science, technology, engineering, and mathematics (STEM) instructional materials, combined with a theoretical need in the field for a shared conceptual framework that could support accumulating knowledge on specific enacted program elements across…

  13. METHODOLOGY FOR MEASURING PM 2.5 SEPARATOR CHARACTERISTICS USING AN AEROSIZER

    EPA Science Inventory

    A method is presented that enables the measurement of the particle size separation characteristics of an inertial separator in a rapid fashion. Overall penetration is determined for discrete particle sizes using an Aerosizer (Model LD, TSI, Incorporated, Particle Instruments/Am...

  14. METHODOLOGICAL APPROACH FOR MEASURING PRIORITY DBPS IN REVERSE OSMOSIS CONCENTRATED DRINKING WATER

    EPA Science Inventory

    Many disinfection by-products (DBPs) are formed when drinking water is chlorinated, but only a few are routinely measured or regulated. Various studies have revealed a plethora of DBPs for which sensitive and quantitative analytical methods have always been a major limiting facto...

  15. Rotational vibrational-rotational Raman differential absorption lidar for atmospheric ozone measurements: methodology and experiment.

    PubMed

    Reichardt, J; Bisson, S E; Reichardt, S; Weitkamp, C; Neidhart, B

    2000-11-20

    A single-laser Raman differential absorption lidar (DIAL) for ozone measurements in clouds is proposed. An injection-locked XeCl excimer laser serves as the radiation source. The ozone molecule number density is calculated from the differential absorption of the anti-Stokes rotational Raman return signals from molecular nitrogen and oxygen as the on-resonance wavelength and the vibrational-rotational Raman backscattering from molecular nitrogen or oxygen as the off-resonance wavelength. Model calculations show that the main advantage of the new rotational vibrational-rotational (RVR) Raman DIAL over conventional Raman DIAL is a 70-85% reduction in the wavelength-dependent effects of cloud-particle scattering on the measured ozone concentration; furthermore the complexity of the apparatus is reduced substantially. We describe a RVR Raman DIAL setup that uses a narrow-band interference-filter polychromator as the lidar receiver. Single-laser ozone measurements in the troposphere and lower stratosphere are presented, and it is shown that on further improvement of the receiver performance, ozone measurements in clouds are attainable with the filter-polychromator approach.
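The retrieval underlying any DIAL scheme, including the RVR Raman variant described here, is the range derivative of the log ratio of the off- and on-resonance returns, divided by twice the differential absorption cross section. Below is a simplified synthetic sketch of that inversion; the ozone profile and cross sections are illustrative, and the wavelength-dependent extinction and cloud-scattering corrections that are the paper's focus are deliberately omitted.

```python
import numpy as np

z = np.linspace(500, 5000, 200)                  # altitude grid (m)
n_true = 8e17 * np.exp(-((z - 3000) / 800) ** 2) # synthetic O3 density (m^-3)
sig_on, sig_off = 5e-23, 1e-23                   # absorption cross sections (m^2), assumed

# forward model: two-way attenuated returns (1/z^2 geometry cancels in the ratio)
dz = z[1] - z[0]
tau_on = 2 * np.cumsum(sig_on * n_true) * dz     # two-way optical depth, on-resonance
tau_off = 2 * np.cumsum(sig_off * n_true) * dz
P_on = np.exp(-tau_on) / z**2
P_off = np.exp(-tau_off) / z**2

# DIAL inversion: n(z) = d/dz ln(P_off/P_on) / (2 * (sig_on - sig_off))
n_ret = np.gradient(np.log(P_off / P_on), z) / (2 * (sig_on - sig_off))
```

Away from the grid edges the inversion recovers the synthetic profile; in practice the paper's point is that particle scattering in clouds perturbs the on/off ratio wavelength-dependently, which the closely spaced RVR Raman wavelengths largely suppress.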

  16. A novel, non-invasive transdermal fluid sampling methodology: IGF-I measurement following exercise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study tested the hypothesis that transdermal fluid (TDF) provides a more sensitive and accurate measure of exercise-induced increases in insulin-like growth factor-I (IGF-I) than serum, and that these increases are detectable proximal, but not distal, to the exercising muscle. A novel, noninvas...

  17. Measuring the Impact of University Technology Transfer: A Guide to Methodologies, Data Needs, and Sources

    ERIC Educational Resources Information Center

    Lowe, Robert A.; Quick, Suzanne K.

    2005-01-01

This paper discusses measures that capture the impact of university technology transfer activities on a university's local and regional economies (economic impact). Such assessments are of increasing interest to policy makers, researchers and technology transfer professionals, yet there have been few published discussions of the merits of various…

  18. The Sidewalk Survey: A Field Methodology to Measure Late-Night College Drinking

    ERIC Educational Resources Information Center

    Johnson, Mark B.; Lange, James E.; Voas, Robert B.; Clapp, John D.; Lauer, Elizabeth; Snowden, Cecelia B.

    2006-01-01

    Alcohol use is highly prevalent among U.S. college students, and alcohol-related problems are often considered the most serious public health threat on American college campuses. Although empirical examinations of college drinking have relied primarily on self-report measures, several investigators have implemented field studies to obtain…

  19. Statistical Methodology for Classifying Units on the Basis of Multiple Related Measures

    PubMed Central

    Teixeira-Pinto, Armando; Normand, Sharon-Lise T.

    2008-01-01

SUMMARY Both the private and public sectors have begun giving financial incentives to health care providers, such as hospitals, delivering superior "quality of care". Quality of care is assessed through a set of disease-specific measures that characterize the performance of health care providers. These measures are then combined into a unidimensional composite score. Most of the programs that reward superior performance use raw averages of the measures as the composite score. Scores based on raw averages fail to take into account typical characteristics of data used for performance evaluation, such as within-patient and within-hospital correlation, variable numbers of measures available in different hospitals, and missing data. In this article we contrast two different versions of composites based on raw average scores with a model-based score constructed using a latent variable model. We also present two methods to identify hospitals with superior performance. The methods are illustrated using national data collected to evaluate quality of care delivered by US acute care hospitals. PMID:18181221

  20. Difference in blood pressure measurements between arms: methodological and clinical implications.

    PubMed

    Clark, Christopher E

    2015-01-01

Differences in blood pressure measurements between arms are commonly encountered in clinical practice. If such differences are not excluded they can delay the diagnosis of hypertension and can lead to poorer control of blood pressure levels. Differences in blood pressure measurements between arms are associated cross-sectionally with other signs of vascular disease such as peripheral arterial disease or cerebrovascular disease. Differences are also associated prospectively with increased cardiovascular mortality and morbidity and all-cause mortality. Numbers of publications on the inter-arm difference are rising year on year, indicating a growing interest in the phenomenon. The prevalence of an inter-arm difference varies widely between reports, and is correlated with the underlying cardiovascular risk of the population studied. Prevalence is also sensitive to the method of measurement used. This review discusses the prevalence of an inter-arm difference in different populations and addresses current best practice for its detection and measurement. The evidence for clinical and vascular associations of an inter-arm difference is presented in consideration of the emerging role of an inter-arm blood pressure difference as a novel risk factor for increased cardiovascular morbidity and mortality. Competing aetiological explanations for an inter-arm difference are explored, and gaps in our current understanding of this sign, along with areas in need of further research, are considered.

  1. Radio Weak Lensing Shear Measurement in the Visibility Domain - I. Methodology

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Miller, L.; Makhathini, S.; Abdalla, F. B.

    2016-08-01

The high sensitivity of the new generation of radio telescopes such as the Square Kilometre Array (SKA) will allow cosmological weak lensing measurements at radio wavelengths that are competitive with optical surveys. We present an adaptation to radio data of lensfit, a method for galaxy shape measurement originally developed and used for optical weak lensing surveys. This likelihood method uses an analytical galaxy model and makes a Bayesian marginalisation of the likelihood over uninteresting parameters. It has the feature of working directly in the visibility domain, which is the natural approach to adopt with radio interferometer data, avoiding systematics introduced by the imaging process. As a proof of concept, we provide results for visibility simulations of individual galaxies with flux density S ≥ 10 μJy at the phase centre of the proposed SKA1-MID baseline configuration, adopting 12 frequency channels in the band 950-1190 MHz. Weak lensing shear measurements from a population of galaxies with realistic flux and scalelength distributions are obtained after natural gridding of the raw visibilities. Shear measurements are expected to be affected by 'noise bias': we estimate the bias in the method as a function of signal-to-noise ratio (SNR). We obtain additive and multiplicative bias values that are comparable to SKA1 requirements for SNR > 18 and SNR > 30, respectively. The multiplicative bias for SNR > 10 is comparable to that found in ground-based optical surveys such as CFHTLenS, and we anticipate that similar shear measurement calibration strategies to those used for optical surveys may be used to good effect in the analysis of SKA radio interferometer data.

  2. Radon-222 activity flux measurement using activated charcoal canisters: revisiting the methodology.

    PubMed

    Alharbi, Sami H; Akber, Riaz A

    2014-03-01

The measurement of radon (²²²Rn) activity flux using activated charcoal canisters was examined to investigate the distribution of the adsorbed ²²²Rn in the charcoal bed and the relationship between ²²²Rn activity flux and exposure time. The activity flux of ²²²Rn from five sources of varying strengths was measured for exposure times of one, two, three, five, seven, 10, and 14 days. The distribution of the adsorbed ²²²Rn in the charcoal bed was obtained by dividing the bed into six layers and counting each layer separately after the exposure. ²²²Rn activity decreased in the layers that were away from the exposed surface. Nevertheless, the results demonstrated that only a small correction might be required in the actual application of charcoal canisters for activity flux measurement, where calibration standards were often prepared by the uniform mixing of radium (²²⁶Ra) in the matrix. This was because the diffusion of ²²²Rn in the charcoal bed and the detection efficiency as a function of the charcoal depth tended to counterbalance each other. The influence of exposure time on the measured ²²²Rn activity flux was observed in two situations of the canister exposure layout: (a) canister sealed to an open bed of the material and (b) canister sealed over a jar containing the material. The measured ²²²Rn activity flux decreased as the exposure time increased. The change in the former situation was significant with an exponential decrease as the exposure time increased. In the latter case, lesser reduction was noticed in the observed activity flux with respect to exposure time. This reduction might have been related to certain factors, such as absorption site saturation or the back diffusion of ²²²Rn gas occurring at the canister-soil interface.
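The flux calculation implied by the abstract — converting the activity adsorbed over an exposure window into a surface flux, with a correction for radon decay during the exposure — can be sketched as follows. The numerical inputs in the usage line are illustrative, not values from the study.

```python
import math

RN222_HALF_LIFE_D = 3.8232                    # Rn-222 half-life (days)
LAMBDA = math.log(2) / RN222_HALF_LIFE_D      # decay constant (1/day)

def radon_flux(activity_bq, area_m2, exposure_d):
    """Rn-222 activity flux (Bq m^-2 s^-1) from the activity found on a
    charcoal canister, correcting for decay during the exposure window.

    A constant flux J into the canister gives adsorbed activity
    A = J * S * (1 - exp(-lambda*t)) / lambda, which is inverted here.
    """
    effective_days = (1.0 - math.exp(-LAMBDA * exposure_d)) / LAMBDA
    return activity_bq / (area_m2 * effective_days * 86400.0)

# illustrative: 100 Bq collected on a 100 cm^2 canister over 3 days
j = radon_flux(100.0, 0.01, 3.0)
```

The decay correction saturates for long exposures (the effective collection time approaches 1/λ ≈ 5.5 days), which is one reason the measured flux appears to drop as exposure time grows — on top of the adsorption-site saturation and back-diffusion effects the study identifies.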

  3. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using a neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors could be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing elements. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
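The paper identifies the error model with a neural network. As a minimal stand-in illustrating the same comprehensive-calibration idea (not the authors' model), an affine compensation fitted by least squares recovers a synthetic cross-coupled six-axis system exactly; the coupling magnitudes below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic six-axis truth (3 angular rates + 3 accelerations, normalized)
truth = rng.uniform(-1, 1, size=(500, 6))
# raw readings corrupted by scale error, cross-coupling, and bias
coupling = np.eye(6) + 0.05 * rng.normal(size=(6, 6))
bias = 0.02 * rng.normal(size=6)
raw = truth @ coupling.T + bias

# calibration: fit an affine compensation map raw -> truth by least squares
X = np.hstack([raw, np.ones((raw.shape[0], 1))])
W, *_ = np.linalg.lstsq(X, truth, rcond=None)

corrected = X @ W   # compensated outputs
```

Because the synthetic corruption here is purely affine, the fit is exact; the paper's neural network extends the same identification step to the nonlinear couplings of the real thermal-gas device.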

  5. Auditory evoked potential measurement methodology for odontocetes and a comparison of measured thresholds with those obtained using psychophysical techniques

    NASA Astrophysics Data System (ADS)

    Nachtigall, Paul E.; Yuen, Michelle; Mooney, T. Aran; Taylor, Kristen

    2005-04-01

    Most measurements of the hearing capabilities of toothed whales and dolphins have been made using traditional psychophysical procedures, in which the animals are maintained in laboratory environments and trained to behaviorally report the sensation of, or difference between, acoustic stimuli. Because of the advantages of rapid data collection, increased opportunities, and new methods, auditory evoked potentials (AEPs) have become increasingly used to measure audition. The use of this new procedure calls into question the comparability of the established literature with the new results collected using AEPs. The results of behavioral and AEP methods have been directly compared for basic audiogram measurements and shown to produce similar (but not identical) values when the envelope-following-response procedure is used and the length of the stimulus is taken into account. AEP methods allow audiometric opportunities beyond those available with conventional psychophysics, including: (1) the measurement of stranded dolphins and whales that may never be kept in laboratories, (2) the testing of stranded animals for hearing deficits perhaps caused by overexposure to noise, and (3) passive testing of hearing mechanisms while animals actively echolocate. [Work supported by the Office of Naval Research and NOAA-NMFS.]

  6. Measuring ICT Use and Contributing Conditions in Primary Schools

    ERIC Educational Resources Information Center

    Vanderlinde, Ruben; Aesaert, Koen; van Braak, Johan

    2015-01-01

    Information and communication technology (ICT) use became of major importance for primary schools across the world as ICT has the potential to foster teaching and learning processes. ICT use is therefore a central measurement concept (dependent variable) in many ICT integration studies. This data paper presents two datasets (2008 and 2011) that…

  7. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  8. Muscle stiffness measured under conditions simulating natural sound production.

    PubMed

    Dobrunz, L E; Pelletier, D G; McMahon, T A

    1990-08-01

    Isolated whole frog gastrocnemius muscles were electrically stimulated to peak twitch tension while held isometrically in a bath at 4 °C. A quartz hydrophone detected vibrations of the muscle by measuring the pressure fluctuations caused by muscle movement. A small steel collar was slipped over the belly of the muscle. Transient forces, including plucks and steady sinusoidal driving, were applied to the collar by causing currents to flow in a coil held near the collar. The instantaneous resonant frequencies measured by the pluck and driving techniques were the same at various times during a twitch contraction cycle. The strain produced by the plucking technique in the outermost fibers was less than 1.6 × 10(-4)%, a strain three orders of magnitude less than that required to drop the tension to zero in quick-length-change experiments. Because the pressure transients recorded by the hydrophone during plucks and naturally occurring sounds were of comparable amplitude, strains in the muscle due to naturally occurring sound must also be of the order of 10(-3)%. A simple model assuming that the muscle is an elastic bar under tension was used to calculate the instantaneous elastic modulus E as a function of time during a twitch, given the tension and resonant frequency. The result for Emax, the peak value of E during a twitch, was typically 2.8 × 10(6) N/m². The methods used here for measuring muscle stiffness are unusual in that the apparatus used for measuring stiffness is separate from the apparatus controlling and measuring force and length. PMID:2207252

  9. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    NASA Astrophysics Data System (ADS)

    Renbaum-Wolff, L.; Grayson, J. W.; Bertram, A. K.

    2012-10-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10(-3) and 10(3) Pascal seconds (Pa s) in samples that cover a range of chemical properties and with real-time relative humidity and temperature control; hence, the technique should be well-suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.
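
    The calibration step can be sketched as a power-law fit between bead circulation rate and viscosity; the standards below are invented, not the paper's calibration data.

```python
import numpy as np

# Invented calibration data: bead circulation rate (1/s) observed for
# standards of known viscosity (Pa s).
eta_std = np.array([1e-3, 1e-2, 1e-1, 1.0, 10.0, 100.0])      # Pa s
rate_std = np.array([2.0e3, 2.1e2, 1.9e1, 2.0, 0.21, 0.019])  # 1/s

# The calibration is close to a power law, so fit log(rate) vs log(eta).
b, a = np.polyfit(np.log10(eta_std), np.log10(rate_std), 1)

def viscosity_from_rate(rate):
    """Invert the calibration curve: circulation rate -> viscosity (Pa s)."""
    return 10 ** ((np.log10(rate) - a) / b)

print(viscosity_from_rate(0.5))   # unknown sample circulating at 0.5 1/s
```

    Inverting the fitted curve is exactly how a calibration constructed from standards would be applied to an unknown chamber sample.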

  11. High frequency measurement of P- and S-wave velocities on crystalline rock massif surface - methodology of measurement

    NASA Astrophysics Data System (ADS)

    Vilhelm, Jan; Slavík, Lubomír

    2014-05-01

    For the purpose of non-destructive monitoring of rock properties in underground excavations it is possible to perform repeated high-accuracy P- and S-wave velocity measurements. This contribution deals with preliminary results gained during the preparation of a micro-seismic long-term monitoring system. The field velocity measurements were made by the pulse-transmission technique directly on the rock outcrop (granite) in the Bedrichov gallery (northern Bohemia). The gallery at the experimental site was excavated using a TBM (Tunnel Boring Machine) and is used for drinking water supply, which is conveyed in a pipe. The requirement for a stable measuring system with automatic operation led to the use of piezoceramic transducers both as the seismic source and as the receiver. The length of the measuring base at the gallery wall ranged from 0.5 to 3 meters. Different transducer coupling possibilities were tested, namely with regard to the repeatability of the velocity determination. The arrangement of the measuring system on the surface of the rock massif gives the S-transducers better sensitivity for P-wave measurement than the P-transducers; similarly, P-transducers were found more suitable for S-wave velocity determination than S-transducers. The frequency-dependent attenuation of the fresh rock massif limits the frequency content of the registered seismic signals. It was found that at source-receiver distances of 0.5 m and more, frequency components above 40 kHz are significantly attenuated; therefore, 100 kHz transducers are most suitable for exciting the seismic wave. The limited frequency range should also be taken into account in choosing the shape of the electric impulse used to excite the piezoceramic transducer. A spike pulse generates a broadband seismic signal that is short in the time domain; however, its energy after low-pass filtration in the rock is significantly lower than the energy of the seismic signal generated by a square-wave pulse. Acknowledgments: This work was partially
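
    The travel-time side of such a pulse-transmission measurement can be sketched as follows; the trace, the threshold pick, and the velocity are synthetic stand-ins for a real recording, and a field system would use a more robust onset detector.

```python
import numpy as np

fs = 1.0e6                      # sampling rate, 1 MHz
base = 1.5                      # source-receiver distance on the wall, m
v_true = 5500.0                 # assumed P-wave velocity, m/s
delay = base / v_true           # true travel time, s

# Synthetic receiver trace: silence, then a decaying 100 kHz wavelet
# (the transducer frequency suggested by the abstract).
t = np.arange(0, 2e-3, 1.0 / fs)
trace = np.zeros_like(t)
onset = int(round(delay * fs))
tw = t[:200]
trace[onset:onset + tw.size] = np.sin(2 * np.pi * 1e5 * tw) * np.exp(-tw / 5e-5)

# First-arrival pick at 10% of peak amplitude, then v = distance / time.
i_pick = np.argmax(np.abs(trace) > 0.1 * np.abs(trace).max())
v_est = base / (i_pick / fs)
print(f"picked travel time {i_pick / fs * 1e6:.1f} us, v = {v_est:.0f} m/s")
```

    Repeatability of the velocity then reduces to repeatability of the onset pick, which is why transducer coupling matters so much in the study.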

  12. [New methodology for heavy metals measurement in water samples by PGNAA-XRF].

    PubMed

    Jia, Wen-Bao; Zhang, Yan; Hei, Da-Qian; Ling, Yong-Sheng; Shan, Qing; Cheng, Can

    2014-11-01

    In the present paper, a new combined detection method using prompt gamma neutron activation analysis (PGNAA) and characteristic X-ray fluorescence is proposed to improve the accuracy of heavy metals measurement in the in-situ analysis of environmental water rejects by PGNAA technology. In particular, the characteristic X-ray fluorescence (XRF) of the heavy metals is induced directly by the prompt gamma rays instead of by a traditional excitation source. A combined measurement facility with a (241)AmBe neutron source, a BGO detector, and a NaI-Be detector was developed to analyze the pollutants in water, the two detectors being used to record the prompt gamma rays and the characteristic X-ray fluorescence of the heavy metals, respectively. The prompt gamma-ray intensity (I(γ)) and the characteristic X-ray fluorescence intensity (I(x)) were determined by MCNP calculations for different concentrations (c(i)) of chromium (Cr), cadmium (Cd), mercury (Hg), and lead (Pb). The simulation results showed a good linear relationship between I(γ) and c(i) and between I(x) and c(i). An empirical formula for the combined detection method is given based on these calculations. By comparing and analyzing I(γ) and I(x), the combined detection method was found to be more sensitive for high-atomic-number heavy metals such as Hg and Pb than for low-atomic-number metals such as Cr and Cd. The limits of detection for Hg and Pb with the combined measurement instrument were 17.4 and 24.2 mg·kg(-1), respectively. PMID:25752071
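
    The reported linear intensity-concentration relationship and detection limits suggest a calibration of the following shape. All numbers are invented, and the 3σ/slope detection-limit convention is an assumption, not necessarily the authors' definition.

```python
import numpy as np

# Invented calibration points: characteristic X-ray intensity I_x versus
# concentration c_i for one element.
c = np.array([0.0, 50.0, 100.0, 200.0, 400.0])    # mg/kg
I_x = np.array([12.1, 31.9, 52.3, 92.0, 171.8])   # counts/s

slope, intercept = np.polyfit(c, I_x, 1)          # linear calibration
sigma_blank = 3.2                                 # counts/s, invented blank noise
lod = 3.0 * sigma_blank / slope                   # 3-sigma detection limit, mg/kg
print(f"slope {slope:.3f} (counts/s)/(mg/kg), LOD = {lod:.1f} mg/kg")
```

    A steeper slope (higher sensitivity, as reported for Hg and Pb) directly lowers the detection limit in this formulation.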

  14. [Nitrous oxide emissions from municipal solid waste landfills and its measuring methodology: a review].

    PubMed

    Jia, Ming-Sheng; Wang, Xiao-Jun; Chen, Shao-Hua

    2014-06-01

    Nitrous oxide (N2O) is one of the three major greenhouse gases and the dominant ozone-depleting substance. Landfilling is the major approach to the treatment and disposal of municipal solid waste (MSW), and MSW landfills can be an important anthropogenic source of N2O emissions. Measurements at lab-scale and full-scale landfills have demonstrated that N2O can be emitted in substantial amounts from MSW landfills; however, reported emission values vary widely. The mechanisms of N2O production and emission in landfills, and their contribution to global warming, still lack sufficient study. Meanwhile, obtaining reliable N2O flux data in landfills remains difficult with existing in-situ measurement techniques. This paper summarizes the relevant literature and analyzes the potential N2O production and emission mechanisms in a traditional anaerobic sanitary landfill, treating the buried MSW and the cover soil separately. The corresponding mechanisms in nitrogen-removal bioreactor landfills are also analyzed. Finally, the applicability of existing in-situ approaches for measuring N2O fluxes in landfills, such as chamber and micrometeorological methods, is discussed, and areas in which further research on landfill N2O emissions is urgently required are proposed. PMID:25223043
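
    For the closed-chamber method mentioned above, the flux is commonly computed as F = (V/A) · dC/dt, with dC/dt taken from a linear fit of the concentration rise after chamber deployment. A minimal sketch with invented chamber data:

```python
import numpy as np

V = 0.030                                          # chamber volume, m^3
A = 0.090                                          # covered soil area, m^2
t = np.array([0.0, 5.0, 10.0, 15.0, 20.0]) * 60.0  # time since closure, s
C = np.array([0.42, 0.55, 0.69, 0.81, 0.95])       # mg N2O / m^3

dCdt, _ = np.polyfit(t, C, 1)   # concentration rise rate, mg m^-3 s^-1
F = (V / A) * dCdt              # flux, mg N2O m^-2 s^-1
print(f"flux = {F * 3600:.3f} mg N2O m^-2 h^-1")
```

    Nonlinearity in C(t) (e.g., from chamber feedback on the concentration gradient) is one of the reliability problems with chamber measurements that the review discusses.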

  15. Vertical profiles of aerosol volume from high-spectral-resolution infrared transmission measurements. I. Methodology.

    PubMed

    Eldering, A; Irion, F W; Chang, A Y; Gunson, M R; Mills, F P; Steele, H M

    2001-06-20

    The wavelength-dependent aerosol extinction in the 800-1250-cm(-1) region has been derived from ATMOS (atmospheric trace molecule spectroscopy) high-spectral-resolution IR transmission measurements. Using models of aerosol and cloud extinction, we have performed weighted nonlinear least-squares fitting to determine the aerosol-volume columns and vertical profiles of stratospheric sulfate aerosol and cirrus cloud volume. Modeled extinction by use of cold-temperature aerosol optical constants for a 70-80% sulfuric-acid-water solution shows good agreement with the measurements, and the derived aerosol volumes for a 1992 occultation are consistent with data from other experiments after the eruption of Mt. Pinatubo. The retrieved sulfuric acid aerosol-volume profiles are insensitive to the aerosol-size distribution and somewhat sensitive to the set of optical constants used. Data from the nonspherical cirrus extinction model agree well with a 1994 mid-latitude measurement indicating the presence of cirrus clouds at the tropopause.
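
    As a simplified stand-in for the paper's weighted nonlinear least-squares retrieval, the sketch below solves the linear special case: fitting a measured extinction spectrum as a weighted combination of two synthetic component spectra (standing in for sulfate aerosol and cirrus models). The spectral shapes and volumes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
nu = np.linspace(800.0, 1250.0, 90)     # wavenumber grid, cm^-1

# Arbitrary smooth component spectra (not real optical-constant models).
k_sulfate = 1.0 + 0.8 * np.exp(-((nu - 1170.0) / 60.0) ** 2)
k_cirrus = 1.5 - 0.0005 * (nu - 800.0)

v_true = np.array([0.7, 0.3])           # component "volumes" to recover
sigma = 0.02 * np.ones_like(nu)         # per-point measurement uncertainty
y = v_true[0] * k_sulfate + v_true[1] * k_cirrus \
    + sigma * rng.standard_normal(nu.size)

# Weighted least squares: scale each row by 1/sigma, then solve.
A = np.column_stack([k_sulfate, k_cirrus])
v_fit, *_ = np.linalg.lstsq(A / sigma[:, None], y / sigma, rcond=None)
print(v_fit)
```

    The actual retrieval is nonlinear because the extinction model depends on size distribution and composition, but the weighting logic is the same.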

  17. Hydrogen isotope measurement of bird feather keratin, one laboratory's response to evolving methodologies.

    PubMed

    Fan, Majie; Dettman, David L

    2015-01-01

    Hydrogen in organic tissue resides in a complex mixture of molecular contexts. Some hydrogen, called non-exchangeable (H(non)), is strongly bound, and its isotopic ratio is fixed when the tissue is synthesized. Other pools of hydrogen, called exchangeable hydrogen (H(ex)), constantly exchange with ambient water vapor. The measurement of the δ(2)H(non) in organic tissues such as hair or feather therefore requires an analytical process that accounts for exchangeable hydrogen. In this study, swan feather and sheep wool keratin were used to test the effects of sample drying and capsule closure on the measurement of δ(2)H(non) values, and the rate of back-reaction with ambient water vapor. Homogeneous feather or wool keratins were also calibrated at room temperature for use as control standards to correct for the effects of exchangeable hydrogen on feathers. Total δ(2)H values of both feather and wool samples showed large changes throughout the first ∼6 h of drying. Desiccant plus low vacuum seems to be more effective than room temperature vacuum pumping for drying samples. The degree of capsule closure affects exchangeable hydrogen equilibration and drying, with closed capsules responding more slowly. Using one control keratin standard to correct for the δ(2)H(ex) value for a batch of samples leads to internally consistent δ(2)H(non) values for other calibrated keratins run as unknowns. When placed in the context of other recent improvements in the measurement of keratin δ(2)H(non) values, we make recommendations for sample handling, data calibration and the reporting of results. PMID:25358407
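
    The control-standard correction rests on the two-pool mass balance δ(2)H(total) = f(ex)·δ(2)H(ex) + (1 − f(ex))·δ(2)H(non). A minimal sketch, with the exchangeable fraction and all δ values invented:

```python
# A control keratin of known d_non, equilibrated alongside the unknowns,
# reveals the d_ex imposed by the ambient vapor; that value then corrects
# every unknown in the same batch.  All numbers here are invented.
f_ex = 0.15                  # exchangeable fraction of keratin hydrogen

d_non_control = -110.0       # per mil, calibrated value of the standard
d_total_control = -98.0      # per mil, measured in this batch
d_ex = (d_total_control - (1 - f_ex) * d_non_control) / f_ex

def d_non(d_total):
    """Non-exchangeable delta(2)H of an unknown run in the same batch."""
    return (d_total - f_ex * d_ex) / (1 - f_ex)

print(d_ex, d_non(-75.0))
```

    Because d_ex is set by local vapor, the same unknown measured in two labs gives the same d_non only after this batch-specific correction, which is the internal-consistency point the abstract makes.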

  19. A new methodology for phase-locking value: a measure of true dynamic functional connectivity

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty; Roberts, Timothy P. L.

    2012-03-01

    Phase-locking value (PLV) is used to measure the phase synchrony of narrowband signals and can therefore provide a measure of dynamic functional connectivity (DFC) of brain interactions. Currently used PLV methods compute the convolution of the signal at the target frequency with a complex Gabor wavelet centered at that frequency. The phase of this convolution is extracted in time-bins over trials for a pair of neural signals. These time-bins set a limit on the temporal resolution of PLV, and hence of DFC, so these methods cannot provide true DFC in a strict sense. PLV is defined as the absolute value of the characteristic function of the difference of the instantaneous phases (IP) of two analytic signals, evaluated at s = 1; it is a function of time. For a narrowband signal in stationary Gaussian white noise, we investigated the statistics of (i) its phase, (ii) the maximum-likelihood estimate of its phase, and (iii) the phase-lock-loop (PLL) measurement of its phase; derived the analytic form of the probability density function (pdf) of the IP difference; and expressed this pdf in terms of the signal-to-noise ratio (SNR) of the signals. PLV is finally given by analytic formulas in terms of the SNRs of the pair of neural signals under investigation. In this new approach, the SNR, and hence the PLV, is evaluated at any time instant over repeated trials; thus the approach can provide true DFC via PLV. This paper presents detailed derivations of the approach and results obtained from simulations of magnetoencephalography (MEG) data.
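
    The PLV definition quoted above (absolute value of the characteristic function of the instantaneous-phase difference) can be computed directly from analytic signals. A minimal NumPy sketch using an FFT-based Hilbert transform (not the paper's SNR-based analytic formulas):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (one-sided spectrum doubling)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[1:n // 2] = 2.0
        h[n // 2] = 1.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def plv(x1, x2):
    """|mean over time of exp(i * (phi1 - phi2))| for two signals."""
    p1 = np.angle(analytic_signal(x1))
    p2 = np.angle(analytic_signal(x2))
    return np.abs(np.mean(np.exp(1j * (p1 - p2))))

# Two narrowband signals with a fixed phase lag give PLV near 1.
n = 1024
t = np.arange(n)
x1 = np.cos(2 * np.pi * 8 * t / n)
x2 = np.cos(2 * np.pi * 8 * t / n - 0.7)
print(plv(x1, x2))
```

    The mean over time here is what the paper replaces with an evaluation over repeated trials at each instant, which is what restores true temporal resolution.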

  20. Conceptual and methodological challenges to measuring political commitment to respond to HIV

    PubMed Central

    2011-01-01

    Background Researchers have long recognized the importance of a central government’s political “commitment” in order to mount an effective response to HIV. The concept of political commitment remains ill-defined, however, and little guidance has been given on how to measure this construct and its relationship with HIV-related outcomes. Several countries have experienced declines in HIV infection rates, but conceptual difficulties arise in linking these declines to political commitment as opposed to underlying social and behavioural factors. Methods This paper first presents a critical review of the literature on existing efforts to conceptualize and measure political commitment to respond to HIV and the linkages between political commitment and HIV-related outcomes. Based on the elements identified in this review, the paper then develops and presents a framework to assist researchers in making choices about how to assess a government's level of political commitment to respond to HIV and how to link political commitment to HIV-related outcomes. Results The review of existing studies identifies three components of commitment (expressed, institutional and budgetary commitment) as different dimensions along which commitment can be measured. The review also identifies normative and ideological aspects of commitment and a set of variables that mediate and moderate political commitment that need to be accounted for in order to draw valid inferences about the relationship between political commitment and HIV-related outcomes. The framework summarizes a set of steps that researchers can follow in order to assess a government's level of commitment to respond to HIV and suggests ways to apply the framework to country cases. Conclusions Whereas existing studies have adopted a limited and often ambiguous conception of political commitment, we argue that conceiving of political commitment along a greater number of dimensions will allow researchers to draw a more complete

  1. An evaluation of advantages and cost measurement methodology for leasing in the health care industry.

    PubMed

    Henry, J B; Roenfeldt, R L

    1977-01-01

    Lease financing in hospitals is growing rapidly. Many articles published on the topic of lease financing point only to the benefits that may be derived. Very few articles actually analyze the pros and cons of leasing from a financial cost measurement point of view, which includes real-world parameters. This article critically evaluates two articles published in this issue which lead the reader to believe leasing is, for the most part, a bargain when compared to debt financing. The authors discuss some misconceptions in these articles and point out some facts viewed from a financial analyst's position. PMID:840066
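
    The kind of financial cost measurement the authors call for is, in the standard textbook form, a comparison of after-tax cash flows discounted at the after-tax borrowing rate. All figures below are invented and purely illustrative:

```python
# Lease-versus-borrow cost comparison (illustrative numbers only).
price = 100_000.0          # equipment cost if purchased
lease_pmt = 23_000.0       # annual lease payment, end of year
years = 5
tax = 0.40                 # marginal tax rate
r = 0.10 * (1 - tax)       # after-tax borrowing rate

annuity = (1 - (1 + r) ** -years) / r          # present-value annuity factor

# Leasing: payments are tax-deductible.
pv_lease = lease_pmt * (1 - tax) * annuity

# Owning: pay the price now, recover a straight-line depreciation tax shield.
dep_shield = (price / years) * tax
pv_buy = price - dep_shield * annuity

print(f"PV cost of leasing {pv_lease:,.0f}  vs  PV cost of owning {pv_buy:,.0f}")
```

    With these made-up parameters leasing happens to win; the authors' point is precisely that the verdict flips with realistic parameters, so the discounted comparison must actually be done rather than assumed.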

  3. Acceptance Plan and Performance Measurement Methodology for the ITER Cryoline System

    NASA Astrophysics Data System (ADS)

    Badgujar, S.; Bonneton, M.; Shah, N.; Chalifour, M.; Chang, H.-S.; Fauve, E.; Forgeas, A.; Navion-Maillot, N.; Sarkar, B.

    The cryoline (CL) system of ITER consists of a complex network of vacuum-insulated multi- and single-process pipe lines distributed over three different areas, with a total length of about 5 km. The thermal performance of the CL system will be measured during the final acceptance tests using the ITER cryoplant and cryodistribution (CD) infrastructure. The proposed method is based on temperature measurements of a small calibrated cryogenic helium flow through the lines. The cryoplant will be set to establish constant pressure and temperature, whereas a dedicated heater and valves in the CD will be used to generate a stable mass flow rate.
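
    The measurement principle reduces to an enthalpy balance on the calibrated helium flow, Q = ṁ · cp · ΔT. A sketch with illustrative (non-ITER) numbers; note that a real evaluation would use tabulated helium properties, since cp varies strongly near 5 K, rather than the ideal-gas constant used here:

```python
cp_he = 5193.0        # J/(kg K), ideal-gas value for helium (assumption)
mdot = 0.002          # kg/s, small calibrated mass flow (invented)
T_in = 4.8            # K, measured line inlet temperature (invented)
T_out = 5.3           # K, measured line outlet temperature (invented)

Q = mdot * cp_he * (T_out - T_in)   # heat load picked up along the line, W
print(f"estimated heat load: {Q:.2f} W")
```

    Holding supply pressure and temperature constant, as the abstract describes, is what makes the single ΔT measurement attributable to the line's heat load.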

  4. The organizational stress measure: an integrated methodology for assessing job-stress and targeting organizational interventions.

    PubMed

    Spurgeon, Peter; Mazelan, Patti; Barwell, Fred

    2012-02-01

    This paper briefly describes the OSM (Organizational Stress Measure) which was developed over a decade ago and has evolved to become a well-established practical method not only for assessing wellbeing at work but also as a cost-effective strategy to tackle workplace stress. The OSM measures perceived organizational pressures and felt individual strains within the same instrument, and provides a rich and subtle picture of both the organizational culture and the personal perspectives of the constituent staff groups. There are many types of organizational pressure that may impact upon the wellbeing and potential effectiveness of staff including skill shortages, ineffective strategic planning and poor leadership, and these frequently result in reduced performance, absenteeism, high turnover and poor staff morale. These pressures may increase the probability of some staff reacting negatively and research with the OSM has shown that increased levels of strain for small clusters of staff may be a leading indicator of future organizational problems. One of the main benefits of using the OSM is the ability to identify 'hot-spots', where organizational pressures are triggering high levels of personal strain in susceptible clusters of staff. In this way, the OSM may act as an 'early warning alarm' for potential organizational problems. PMID:22323666

  5. Mitotic spindle asymmetry in rodents and primates: 2D vs. 3D measurement methodologies

    PubMed Central

    Delaunay, Delphine; Robini, Marc C.; Dehay, Colette

    2015-01-01

    Recent data have uncovered that spindle size asymmetry (SSA) is a key component of asymmetric cell division (ACD) in the mouse cerebral cortex (Delaunay et al., 2014). In the present study we show that SSA is independent of spindle orientation and also occurs during cortical progenitor divisions in the ventricular zone (VZ) of the macaque cerebral cortex, pointing to a conserved mechanism in the mammalian lineage. Because SSA magnitude is smaller in cortical precursors than in invertebrate neuroblasts, the unambiguous demonstration of volume differences between the two half spindles is considered to require 3D reconstruction of the mitotic spindle (Delaunay et al., 2014). Although straightforward, the 3D analysis of SSA is time consuming, which is likely to hinder SSA identification and prevent further explorations of SSA related mechanisms in generating ACD. We therefore set out to develop an alternative method for accurately measuring spindle asymmetry. Based on the mathematically demonstrated linear relationship between 2D and 3D analysis, we show that 2D assessment of spindle size in metaphase cells is as accurate and reliable as 3D reconstruction provided a specific procedure is applied. We have examined the experimental accuracy of the two methods by applying them to different sets of in vivo and in vitro biological data, including mouse and primate cortical precursors. Linear regression analysis demonstrates that the results from 2D and 3D reconstructions are equally powerful. We therefore provide a reliable and efficient technique to measure SSA in mammalian cells. PMID:25709568

  6. A scuba diving direct sediment sampling methodology on benthic transects in glacial lakes: procedure description, safety measures, and tests results.

    PubMed

    Pardo, Alfonso

    2014-11-01

This work presents an in situ sediment sampling method on benthic transects, specifically intended for scientific scuba diver teams. It was originally designed and developed to sample benthic surface and subsurface sediments and subaqueous soils in glacial lakes up to a maximum depth of 25 m. Tests were conducted on the Sabocos and Baños tarns (i.e., cirque glacial lakes) in the Spanish Pyrenees. Two 100 m transects, ranging from 24.5 m to 0 m deep in Sabocos and from 14 m to 0 m deep in Baños, were conducted. In each test, 10 sediment samples of 1 kg each were successfully collected and transported to the surface. This sampling method proved operative even in low visibility conditions (<2 m). Additional ice diving sampling tests were conducted in the Sabocos and Truchas tarns. This sampling methodology can be easily adapted to accomplish underwater sampling campaigns in nonglacial lakes and other continental water or marine environments.

  7. Conditions necessary for low-level measurements of reactive oxidants

    SciTech Connect

    Nakareseisoon, S.

    1988-01-01

Chlorine dioxide and ozone are considered to be the alternatives to chlorine for the disinfection of drinking water supplies and also for the treatment of wastewaters prior to discharge. Chlorine dioxide, under normal circumstances, is reduced to chlorite ion, which is toxic. The recommended seven-day suggested no-adverse-response level (SNARL) for chlorite ion is 0.007 mg/l (7 ppb). Chlorite ion at these low levels cannot be satisfactorily determined by existing methods, so it became necessary to develop an analytical method for determining ppb levels of chlorite ion. Such a method can be developed using differential pulse polarography (DPP). The electrochemical reduction of chlorite ion has been studied between pH 3.7-14 and in an ionic strength range of 0.05-3.0 M. The optimum conditions are pH 4.1-4.4 and an ionic strength of 0.45 M. The current under these conditions is a linear function of chlorite ion concentration ranging from 2.77 × 10^-7 to 2.80 × 10^-4 M (19 ppb to 19 ppm). The imprecision is better than ±1.0% and ±3.4% at concentrations of 2.87 × 10^-5 M and 1.74 × 10^-6 M, respectively, with a detection limit of 1 × 10^-7 M (7 ppb). The rate of ozone decomposition has been studied in highly basic solutions (8-15 M NaOH), where ozone becomes stable. A mechanism of ozone regeneration was proposed to explain the observed kinetics and to clarify the contradiction concerning the very slow observed rate of ozone decomposition in basic solution.
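The linear current-concentration relationship underpinning the DPP method can be sketched as a calibration fit over the reported linear range. The sensitivity and the "unknown" current below are hypothetical, purely for illustration.

```python
# Linear calibration of simulated DPP peak current vs chlorite concentration.
import numpy as np

sens = 3.0e4                                            # hypothetical sensitivity (signal per M)
conc = np.array([2.77e-7, 1e-6, 1e-5, 1e-4, 2.80e-4])   # standards, M (reported linear range)
rng = np.random.default_rng(1)
current = sens * conc + rng.normal(0.0, 1e-4, conc.size)  # simulated readings with noise

slope, intercept = np.polyfit(conc, current, 1)         # least-squares calibration line

# Back-calculate an unknown concentration from its measured current.
i_unknown = 0.05
c_unknown = (i_unknown - intercept) / slope
print(f"slope={slope:.3e}, unknown ~ {c_unknown:.2e} M")
```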

  8. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Jorge O.; Hackert, Chris L.; Ni, Qingwen; Collier, Hughbert A.

    2000-09-22

This report contains eight sections. Some individual subsections contain lists of references as well as figures and conclusions when appropriate. The first section includes the introduction and summary of the first-year project efforts. The next section describes the results of the project tasks: (1) implementation of theoretical relations between effective dispersion and the stochastic medium, (2) imaging analyses using core and well log data, (3) construction of dispersion and attenuation models at the core and borehole scales in poroelastic media, (4) petrophysics and a catalog of core and well log data from Siberia Ridge field, (5) acoustic/geotechnical measurements and CT imaging of core samples from Florida carbonates, and (6) development of an algorithm to predict pore size distribution from NMR core data. The last section includes a summary of accomplishments, technology transfer activities and follow-on work for Phase II.

  9. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  10. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  11. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  12. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  13. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  14. Thermal conductivity measurements of particulate materials under Martian conditions

    NASA Technical Reports Server (NTRS)

    Presley, M. A.; Christensen, P. R.

    1993-01-01

The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurements. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas molecules are then more likely to collide with the solid particles than with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes and formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs. particle size, a linear heat source apparatus, similar to that of Cremers, was assembled to
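The mean-free-path argument above can be made concrete with standard kinetic theory. In the sketch below, the temperature, pressure and molecular diameter are assumed round values for CO2 at the Martian surface, not numbers from the paper.

```python
# Kinetic-theory mean free path of CO2 gas at assumed Martian surface conditions,
# illustrating why it rivals typical inter-particle gaps in fine regolith.
import math

k_B = 1.380649e-23    # Boltzmann constant, J/K
T = 210.0             # assumed surface temperature, K
P = 600.0             # assumed surface pressure, Pa (~4.5 torr)
d = 3.3e-10           # assumed CO2 kinetic diameter, m

lam = k_B * T / (math.sqrt(2) * math.pi * d ** 2 * P)
print(f"mean free path ~ {lam * 1e6:.0f} micrometres")
```

A path length on the order of ten micrometres is comparable to the gaps between fine sand and dust grains, which is why gas-mediated conduction, and hence thermal inertia, is so sensitive to particle size at these pressures.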

  15. The Ocean Colour Climate Change Initiative: I. A Methodology for Assessing Atmospheric Correction Processors Based on In-Situ Measurements

    NASA Technical Reports Server (NTRS)

    Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic; Platt, Trevor; Regner, Peter; Sathyendranath, Shubha; Steinmetz, Francois; Swinton, John

    2015-01-01

The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparing in-situ with satellite-derived water-leaving reflectance spectra is extended by a ranking system. In principle, statistical parameters such as root mean square error and bias, together with measures of goodness of fit, are transformed into relative scores, which evaluate the relative quality of the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focusses on the scope of the methodology rather than the properties of the individual processors.
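The score-plus-bootstrap idea can be sketched in a few lines: each processor's error statistic against in-situ data becomes a relative score, and resampling the match-up database shows how stable the scores are. The data and the scoring rule below are illustrative, not the OC-CCI implementation.

```python
# Toy ranking of two simulated processors with bootstrap uncertainty on the scores.
import numpy as np

rng = np.random.default_rng(42)
n = 200
insitu = rng.uniform(0.001, 0.02, n)              # in-situ reflectances (a.u.)
proc_a = insitu + rng.normal(0, 0.001, n)         # processor A: small error
proc_b = insitu + rng.normal(0, 0.003, n)         # processor B: larger error

def rmse(x, y):
    return np.sqrt(np.mean((x - y) ** 2))

def scores(idx):
    """Relative scores on one bootstrap resample: lowest RMSE scores 1.0."""
    errs = np.array([rmse(proc_a[idx], insitu[idx]),
                     rmse(proc_b[idx], insitu[idx])])
    return errs.min() / errs

boot = np.array([scores(rng.integers(0, n, n)) for _ in range(500)])
print("mean scores:", boot.mean(axis=0))
print("score std:  ", boot.std(axis=0))
```

The spread of each score across resamples plays the role of the uncertainty assessment described in the abstract: if two processors' score distributions overlap heavily, the database cannot distinguish them.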

  16. Development of a cognitive bias methodology for measuring low mood in chimpanzees.

    PubMed

    Bateson, Melissa; Nettle, Daniel

    2015-01-01

    There is an ethical and scientific need for objective, well-validated measures of low mood in captive chimpanzees. We describe the development of a novel cognitive task designed to measure 'pessimistic' bias in judgments of expectation of reward, a cognitive marker of low mood previously validated in a wide range of species, and report training and test data from three common chimpanzees (Pan troglodytes). The chimpanzees were trained on an arbitrary visual discrimination in which lifting a pale grey paper cone was associated with reinforcement with a peanut, whereas lifting a dark grey cone was associated with no reward. The discrimination was trained by sequentially presenting the two cone types until significant differences in latency to touch the cone types emerged, and was confirmed by simultaneously presenting both cone types in choice trials. Subjects were subsequently tested on their latency to touch unrewarded cones of three intermediate shades of grey not previously seen. Pessimism was indicated by the similarity between the latency to touch intermediate cones and the latency to touch the trained, unreinforced, dark grey cones. Three subjects completed training and testing, two adult males and one adult female. All subjects learnt the discrimination (107-240 trials), and retained it during five sessions of testing. There was no evidence that latencies to lift intermediate cones increased over testing, as would have occurred if subjects learnt that these were never rewarded, suggesting that the task could be used for repeated testing of individual animals. There was a significant difference between subjects in their relative latencies to touch intermediate cones (pessimism index) that emerged following the second test session, and was not changed by the addition of further data. The most dominant male subject was least pessimistic, and the female most pessimistic. We argue that the task has the potential to be used to assess longitudinal changes in sub

  17. Development of a cognitive bias methodology for measuring low mood in chimpanzees

    PubMed Central

    Nettle, Daniel

    2015-01-01

    There is an ethical and scientific need for objective, well-validated measures of low mood in captive chimpanzees. We describe the development of a novel cognitive task designed to measure ‘pessimistic’ bias in judgments of expectation of reward, a cognitive marker of low mood previously validated in a wide range of species, and report training and test data from three common chimpanzees (Pan troglodytes). The chimpanzees were trained on an arbitrary visual discrimination in which lifting a pale grey paper cone was associated with reinforcement with a peanut, whereas lifting a dark grey cone was associated with no reward. The discrimination was trained by sequentially presenting the two cone types until significant differences in latency to touch the cone types emerged, and was confirmed by simultaneously presenting both cone types in choice trials. Subjects were subsequently tested on their latency to touch unrewarded cones of three intermediate shades of grey not previously seen. Pessimism was indicated by the similarity between the latency to touch intermediate cones and the latency to touch the trained, unreinforced, dark grey cones. Three subjects completed training and testing, two adult males and one adult female. All subjects learnt the discrimination (107–240 trials), and retained it during five sessions of testing. There was no evidence that latencies to lift intermediate cones increased over testing, as would have occurred if subjects learnt that these were never rewarded, suggesting that the task could be used for repeated testing of individual animals. There was a significant difference between subjects in their relative latencies to touch intermediate cones (pessimism index) that emerged following the second test session, and was not changed by the addition of further data. The most dominant male subject was least pessimistic, and the female most pessimistic. We argue that the task has the potential to be used to assess longitudinal changes in

  18. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10(exp 6) and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. 
For several

  19. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2014-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-inch chord, 2-D straight wing with NACA 23012 airfoil section. For six ice accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 x 10(exp 6) and a Mach number of 0.18 with an 18-inch chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For four of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3% with corresponding differences in stall angle of approximately one degree or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several of the ice

  20. The measurement of the normal thorax using the Haller index methodology at multiple vertebral levels.

    PubMed

    Archer, James E; Gardner, Adrian; Berryman, Fiona; Pynsent, Paul

    2016-10-01

    The Haller index is a ratio of thoracic width and height, measured from an axial CT image and used to describe the internal dimensions of the thoracic cage. Although the Haller index for a normal thorax has been established (Haller et al. 1987; Daunt et al. 2004), this is only at one undefined vertebral level in the thorax. What is not clear is how the Haller index describes the thorax at every vertebral level in the absence of sternal deformity, or how this is affected by age. This paper documents the shape of the thorax using the Haller index calculated from the thoracic width and height at all vertebral levels of the thorax between 8 and 18 years of age. The Haller Index changes with vertebral level, with the largest ratio seen in the most cranial levels of the thorax. Increasing age alters the shape of the thorax, with the most cranial vertebral levels having a greater Haller index over the mid thorax, which does not change. A slight increase is seen in the more caudal vertebral levels. These data highlight that a 'one size fits all' rule for chest width and depth ratio at all ages and all thoracic levels is not appropriate. The normal range for width to height ratio should be based on a patient's age and vertebral level.
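The index itself is a simple ratio, which a one-line sketch makes explicit. The measurements below are illustrative values, not normal ranges from the paper.

```python
# The Haller index: internal transverse width of the thorax divided by its
# antero-posterior height, measured on an axial CT slice.
def haller_index(width_mm: float, height_mm: float) -> float:
    """Ratio of thoracic width to height at a given vertebral level."""
    return width_mm / height_mm

print(haller_index(240.0, 100.0))   # -> 2.4
```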

  1. Biochemical quantification of sympathetic nervous activity in humans using radiotracer methodology: fallibility of plasma noradrenaline measurements

    SciTech Connect

    Esler, M.; Leonard, P.; O'Dea, K.; Jackman, G.; Jennings, G.; Korner, P.

    1982-01-01

We have developed radiotracer techniques for studying noradrenaline kinetics, to better assess sympathetic nervous system function in humans. Tritiated l-noradrenaline was infused intravenously (0.35 microCi/m2/min) to plateau plasma concentration. Noradrenaline plasma clearance was calculated from plasma tritiated noradrenaline concentration at steady state, and the rate of spillover of noradrenaline to plasma derived from plasma noradrenaline specific radioactivity. Mean noradrenaline spillover at rest in 34 normal subjects was 0.33 micrograms/m2/min (range 0.17-0.61 micrograms/m2/min). Predictably, noradrenaline spillover was reduced in patients with subnormal sympathetic nervous system activity, 0.16 +/- 0.09 micrograms/m2/min in eight patients with idiopathic peripheral autonomic insufficiency, and 0.11 +/- 0.07 micrograms/m2/min (mean +/- SD) in six patients with essential hypertension treated with clonidine (0.45 mg daily). Noradrenaline plasma clearance in normal subjects was 1.32 +/- 0.28 L/m2/min. Clearance fell with age, causing the previously described rise in plasma noradrenaline concentration with aging. Unexpected effects of drugs were encountered, for example chronic beta-adrenergic blockade in patients with essential hypertension reduced noradrenaline clearance. Plasma noradrenaline concentration measurements were not in agreement with noradrenaline release rate values, and do not reliably indicate sympathetic nervous system activity, in instances such as these where noradrenaline clearance is abnormal.
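The steady-state tracer arithmetic behind these numbers is compact: clearance is the tracer infusion rate divided by the plateau plasma tracer concentration, and spillover is clearance times the endogenous plasma concentration. The plateau and plasma values below are hypothetical, chosen so the results land near the group means quoted in the abstract.

```python
# Steady-state radiotracer arithmetic for noradrenaline (NA) kinetics.
infusion_rate = 0.35      # tritiated NA infusion, microCi/m2/min
plateau_tracer = 0.265    # hypothetical plateau tracer concentration, microCi/L

clearance = infusion_rate / plateau_tracer    # L/m2/min
plasma_na = 0.25                              # hypothetical endogenous NA, micrograms/L
spillover = clearance * plasma_na             # micrograms/m2/min

print(f"clearance = {clearance:.2f} L/m2/min")
print(f"spillover = {spillover:.2f} micrograms/m2/min")
```

The same arithmetic shows why plasma concentration alone misleads: if clearance falls (as with age or beta-blockade), concentration rises even when the spillover (release) rate is unchanged.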

  2. Binding interaction between sorafenib and calf thymus DNA: Spectroscopic methodology, viscosity measurement and molecular docking

    NASA Astrophysics Data System (ADS)

    Shi, Jie-Hua; Chen, Jun; Wang, Jing; Zhu, Ying-Yao

    2015-02-01

The binding interaction of sorafenib with calf thymus DNA (ct-DNA) was studied using UV-vis absorption spectroscopy, fluorescence emission spectroscopy, circular dichroism (CD), viscosity measurement and molecular docking methods. The experimental results revealed that there was an obvious binding interaction between sorafenib and ct-DNA. The binding constant (Kb) of sorafenib with ct-DNA was 5.6 × 10^3 M^-1 at 298 K. The enthalpy and entropy changes (ΔH° and ΔS°) in the binding process were -27.66 kJ mol^-1 and -21.02 J mol^-1 K^-1, respectively, indicating that the main binding forces were van der Waals forces and hydrogen bonding. The docking results suggested that sorafenib preferred to bind on the minor groove of A-T rich DNA and that the binding site was 4 base pairs long. A conformation change of sorafenib in the sorafenib-DNA complex was clearly observed, and this change was closely related to the structure of the DNA, implying that the flexibility of the sorafenib molecule played an important role in the formation of the stable sorafenib-ct-DNA complex.
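The reported thermodynamic values can be checked for internal consistency: the Gibbs energy derived from the binding constant, ΔG = -RT ln Kb, should match ΔG = ΔH - TΔS from the calorimetric quantities.

```python
# Consistency check of the reported binding thermodynamics at 298 K.
import math

R, T = 8.314, 298.0          # gas constant J/(mol K), temperature K
Kb = 5.6e3                   # binding constant, M^-1
dH = -27.66e3                # enthalpy change, J/mol
dS = -21.02                  # entropy change, J/(mol K)

dG_from_K = -R * T * math.log(Kb)   # Gibbs energy from Kb
dG_from_HS = dH - T * dS            # Gibbs energy from delta H and delta S
print(round(dG_from_K), round(dG_from_HS))   # both ~ -21.4 kJ/mol, in J
```

The two routes agree to within a few tens of J/mol, so the quoted Kb, ΔH and ΔS are mutually consistent.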

  3. Retrieval of stratospheric aerosol distributions from SCIAMACHY limb measurements: methodology, sensitivity studies and first results

    NASA Astrophysics Data System (ADS)

    Ernst, Florian; von Savigny, Christian; Rozanov, Alexei; Rozanov, Vladimir; Bovensmann, Heinrich; Burrows, John P.

Stratospheric aerosols play an important role for the global radiation budget and may significantly affect the retrieval of trace gases from satellite observations. SAGE I-III provided a 25-year record of stratospheric aerosols by means of the solar occultation technique. Since the demise of SAGE II and III in 2005/2006, the long-term stratospheric aerosol satellite record is jeopardized. The main goal of this work is to demonstrate that aerosol extinction profiles can be retrieved from SCIAMACHY limb scatter measurements to sustain the time series. Since the eruption of Pinatubo in 1991 was the last large source of volcanic aerosols in the stratosphere, we now have the opportunity to retrieve background aerosol profiles. The radiative transfer model and retrieval package SCIATRAN is used to derive aerosol extinction profiles from SCIAMACHY limb data. The algorithm is based on a color-index ratio using limb radiance profiles at 470 nm and 750 nm wavelength. The algorithm, sensitivity studies and first results are presented here.

  4. The measurement of the normal thorax using the Haller index methodology at multiple vertebral levels.

    PubMed

    Archer, James E; Gardner, Adrian; Berryman, Fiona; Pynsent, Paul

    2016-10-01

    The Haller index is a ratio of thoracic width and height, measured from an axial CT image and used to describe the internal dimensions of the thoracic cage. Although the Haller index for a normal thorax has been established (Haller et al. 1987; Daunt et al. 2004), this is only at one undefined vertebral level in the thorax. What is not clear is how the Haller index describes the thorax at every vertebral level in the absence of sternal deformity, or how this is affected by age. This paper documents the shape of the thorax using the Haller index calculated from the thoracic width and height at all vertebral levels of the thorax between 8 and 18 years of age. The Haller Index changes with vertebral level, with the largest ratio seen in the most cranial levels of the thorax. Increasing age alters the shape of the thorax, with the most cranial vertebral levels having a greater Haller index over the mid thorax, which does not change. A slight increase is seen in the more caudal vertebral levels. These data highlight that a 'one size fits all' rule for chest width and depth ratio at all ages and all thoracic levels is not appropriate. The normal range for width to height ratio should be based on a patient's age and vertebral level. PMID:27240848

  5. Antioxidant Activity of Selected Thyme (Thymus L.) Species and Study of the Equivalence of Different Measuring Methodologies.

    PubMed

    Orłowska, Marta; Kowalska, Teresa; Sajewicz, Mieczysław; Pytlakowska, Katarzyna; Bartoszek, Mariola; Polak, Justyna; Waksmundzka-Hajnos, Monika

    2015-01-01

This study presents the results of a comparative evaluation of the antioxidant activity of the phenolic fraction exhaustively extracted with aqueous methanol from 18 different thyme (Thymus L.) specimens and species. The evaluation uses the same free radical source (the DPPH• radical), three different free radical scavenging models (gallic acid, ascorbic acid, and Trolox), and three different measuring techniques (the dot blot test, UV-Vis spectrophotometry, and electron paramagnetic resonance spectroscopy, EPR). A comparison of the equivalence of these three measuring techniques (performed using hierarchical clustering with Euclidean distance as the similarity measure and Ward's linkage) is particularly important given that different laboratories use different antioxidant activity measuring techniques, which makes interlaboratory comparison hardly possible. The results obtained confirm a semiquantitative equivalence among the three compared methodologies, and a simple, cost-effective dot blot test is proposed that uses the DPPH• radical and provides differentiation of the antioxidant activity of herbal matter comparable with the results of UV-Vis spectrophotometry and EPR.
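The equivalence question can be sketched on synthetic data. The paper uses hierarchical clustering (Euclidean distance, Ward's linkage); here pairwise correlation serves as a simpler equivalence check so the example needs only NumPy. All readings are invented.

```python
# Three simulated techniques measuring the same property with technique-specific noise;
# high pairwise correlations indicate (semiquantitative) equivalence.
import numpy as np

rng = np.random.default_rng(7)
true_activity = rng.uniform(10, 100, 18)          # 18 thyme samples (a.u.)

dot_blot = true_activity + rng.normal(0, 5, 18)   # noisiest technique
uv_vis   = true_activity + rng.normal(0, 3, 18)
epr      = true_activity + rng.normal(0, 4, 18)

corr = np.corrcoef(np.vstack([dot_blot, uv_vis, epr]))
print(corr.round(3))                              # high off-diagonal values
```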

  6. Importance of methodological standardization for the ektacytometric measures of red blood cell deformability in sickle cell anemia.

    PubMed

    Renoux, Céline; Parrow, Nermi; Faes, Camille; Joly, Philippe; Hardeman, Max; Tisdale, John; Levine, Mark; Garnier, Nathalie; Bertrand, Yves; Kebaili, Kamila; Cuzzubbo, Daniela; Cannas, Giovanna; Martin, Cyril; Connes, Philippe

    2016-01-01

    Red blood cell (RBC) deformability is severely decreased in patients with sickle cell anemia (SCA), which plays a role in the pathophysiology of the disease. However, investigation of RBC deformability from SCA patients demands careful methodological considerations. We assessed RBC deformability by ektacytometry (LORRCA MaxSis, Mechatronics, The Netherlands) in 6 healthy individuals and 49 SCA patients and tested the effects of different heights of the RBC diffraction patterns, obtained by altering the camera gain of the LORRCA, on the result of RBC deformability measurements, expressed as Elongation Index (EI). Results indicate that the pattern of RBCs from control subjects adopts an elliptical shape under shear stress, whereas the pattern of RBCs from individuals with SCA adopts a diamond shape arising from the superposition of elliptical and circular patterns. The latter represent rigid RBCs. While the EI measures did not change with the variations of the RBC diffraction pattern heights in the control subjects, we observed a decrease of EI when the RBC diffraction pattern height is increased in the SCA group. The differences in SCA EI values measured at 5 Pa between the different diffraction pattern heights correlated with the percent of hemoglobin S and the percent of sickled RBC observed by microscopy. Our study confirms that the camera gain or aperture of the ektacytometer should be used to standardize the size of the RBC diffraction pattern height when measuring RBC deformability in sickle cell patients and underscores the potential clinical utility of this technique. PMID:26444610
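The Elongation Index reported by ektacytometers is commonly computed from the long (A) and short (B) axes of the laser-diffraction pattern as EI = (A - B)/(A + B); the axis values below are illustrative.

```python
# Common elongation-index formula used in laser-diffraction ektacytometry.
def elongation_index(a: float, b: float) -> float:
    """EI from the major (a) and minor (b) axes of the diffraction pattern."""
    return (a - b) / (a + b)

print(elongation_index(1.6, 1.0))    # deformable cells: elongated ellipse, higher EI
print(elongation_index(1.05, 1.0))   # rigid cells: near-circular pattern, EI near 0
```

This makes the diamond-shaped SCA pattern problem concrete: superposing a near-circular (rigid-cell) pattern on an ellipse changes the apparent axes, and hence the EI, depending on how much of the pattern the camera gain captures.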

  7. Importance of methodological standardization for the ektacytometric measures of red blood cell deformability in sickle cell anemia.

    PubMed

    Renoux, Céline; Parrow, Nermi; Faes, Camille; Joly, Philippe; Hardeman, Max; Tisdale, John; Levine, Mark; Garnier, Nathalie; Bertrand, Yves; Kebaili, Kamila; Cuzzubbo, Daniela; Cannas, Giovanna; Martin, Cyril; Connes, Philippe

    2016-01-01

    Red blood cell (RBC) deformability is severely decreased in patients with sickle cell anemia (SCA), which plays a role in the pathophysiology of the disease. However, investigation of RBC deformability from SCA patients demands careful methodological considerations. We assessed RBC deformability by ektacytometry (LORRCA MaxSis, Mechatronics, The Netherlands) in 6 healthy individuals and 49 SCA patients and tested the effects of different heights of the RBC diffraction patterns, obtained by altering the camera gain of the LORRCA, on the result of RBC deformability measurements, expressed as Elongation Index (EI). Results indicate that the pattern of RBCs from control subjects adopts an elliptical shape under shear stress, whereas the pattern of RBCs from individuals with SCA adopts a diamond shape arising from the superposition of elliptical and circular patterns. The latter represent rigid RBCs. While the EI measures did not change with the variations of the RBC diffraction pattern heights in the control subjects, we observed a decrease of EI when the RBC diffraction pattern height is increased in the SCA group. The differences in SCA EI values measured at 5 Pa between the different diffraction pattern heights correlated with the percent of hemoglobin S and the percent of sickled RBC observed by microscopy. Our study confirms that the camera gain or aperture of the ektacytometer should be used to standardize the size of the RBC diffraction pattern height when measuring RBC deformability in sickle cell patients and underscores the potential clinical utility of this technique.

  8. Measuring the habitat as an indicator of socioeconomic position: methodology and its association with hypertension

    PubMed Central

    Galobardes, B; Morabia, A

    2003-01-01

Study objectives: (1) to develop an indicator of socioeconomic position based on the social standing of the habitat (SSH), that is, the residential building, its immediate surroundings, and local neighbourhood; (2) to assess the relation of SSH to two usual markers of socioeconomic position (education and occupation) and a known, socially determined health outcome (hypertension). Design: Population survey measuring SSH, detailed educational and occupational histories, and blood pressure. The SSH is a standardised assessment of the external and internal aspects of a person's building (or house), and of the characteristics of its immediate surroundings and local neighbourhood. Setting: A sample of participants in the Bus Santé survey between 1993 and 1998, in Geneva, Switzerland. Participants: 588 men and women, aged 35 to 74. Main results: The SSH index was highly reproducible (κ = 0.8). Concordance of SSH with education or occupation was good for people of either high or low socioeconomic position, but not for those with medium education and/or occupation. There was a higher prevalence of hypertension in the lowest groups compared with the highest groups defined on the basis of education or occupation, but the SSH was the only indicator that showed a higher prevalence of hypertension among people in the middle of the social spectrum. Conclusions: People of medium education or occupation are heterogeneous with respect to their habitat. Those living in habitats of medium social standing may be most affected by hypertension, but this association could not be revealed on the basis of education and occupation alone. The habitat seems to capture aspects of socioeconomic position different from those captured by the usual indicators of social class. PMID:12646538

  9. Phoretic Force Measurement for Microparticles Under Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Davis, E. J.; Zheng, R.

    1999-01-01

    This theoretical and experimental investigation of the collisional interactions between gas molecules and solid and liquid surfaces of microparticles involves fundamental studies of the transfer of energy, mass and momentum between gas molecules and surfaces. The numerous applications include particle deposition on semiconductor surfaces and on surfaces in combustion processes, containerless processing, the production of nanophase materials, pigments and ceramic precursors, and pollution abatement technologies such as desulfurization of gaseous effluents from combustion processes. Of particular emphasis are the forces exerted on microparticles present in a nonuniform gas, that is, in gaseous surroundings involving temperature and concentration gradients. These so-called phoretic forces become the dominant forces when the gravitational force is diminished, and they are strongly dependent on the momentum transfer between gas molecules and the surface. The momentum transfer, in turn, depends on the gas and particle properties and the mean free path and kinetic energy of the gas molecules. The experimental program involves the particle levitation system shown. A micrometer size particle is held between two heat exchangers enclosed in a vacuum chamber by means of ac and dc electric fields. The ac field keeps the particle centered on the vertical axis of the chamber, and the dc field balances the gravitational force and the thermophoretic force. Some measurements of the thermophoretic force are presented in this paper.

  10. Arduino-based control system for measuring ammonia in air using conditionally-deployed diffusive samplers

    NASA Astrophysics Data System (ADS)

    Ham, J. M.; Williams, C.; Shonkwiler, K. B.

    2012-12-01

Arduino microcontrollers, wireless modules, and other low-cost hardware were used to develop a new type of air sampler for monitoring ammonia at strong areal sources such as dairies, cattle feedlots, and waste treatment facilities. Ammonia was sampled at multiple locations on the periphery of an operation using Radiello diffusive passive samplers (Cod. RAD168 and RAD1201, Sigma-Aldrich). However, the samplers were not continuously exposed to the air. Instead, each sampling station included two diffusive samplers housed in specialized tubes that sealed the cartridges from the atmosphere. If a user-defined set of wind and weather conditions was met, the Radiellos were deployed into the air using a micro linear actuator. Each station was solar-powered and controlled by Arduinos that were linked to a central weather station using XBee wireless modules (Digi International Inc.). The Arduinos also measured the total time of exposure, using Hall-effect sensors to verify the position of the cartridge (i.e., deployed or retracted). The decision to expose or retract the samplers was made every five minutes based on wind direction, wind speed, and time of day. Typically, the diffusive samplers were replaced with fresh cartridges every two weeks, and the used samplers were analyzed in the laboratory using ion chromatography. Initial studies were conducted at a commercial dairy in northern Colorado. Ammonia emissions along the Front Range of Colorado can be transported into the mountains, where atmospheric deposition of nitrogen can impact alpine ecosystems. Therefore, low-cost air quality monitoring equipment is needed that can be widely deployed in the region. Initial work at the dairy showed that ammonia concentrations ranged between 600 and 1200 ppb during the summer; the highest concentrations were downwind of a large anaerobic lagoon. Time-averaged ammonia concentrations were also used to approximate emissions using inverse dispersion models. This methodology provides a
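The five-minute deploy/retract decision described above can be sketched as a simple predicate on wind direction, wind speed, and time of day. The thresholds, sector geometry, and function name below are assumptions for illustration, not the authors' firmware logic.

```python
# Hedged sketch of a conditional-deployment decision: expose the sampler
# only when wind comes from a target sector, exceeds a minimum speed, and
# the time of day is within a sampling window. All thresholds are
# illustrative assumptions.

def should_deploy(wind_dir_deg, wind_speed_ms, hour,
                  target_dir_deg=180.0, sector_halfwidth=45.0,
                  min_speed=0.5, start_hour=6, end_hour=20):
    """Return True if the diffusive sampler should be exposed."""
    # Smallest angular difference between wind direction and sector centre
    diff = abs((wind_dir_deg - target_dir_deg + 180.0) % 360.0 - 180.0)
    in_sector = diff <= sector_halfwidth
    return in_sector and wind_speed_ms >= min_speed and start_hour <= hour < end_hour

print(should_deploy(190.0, 2.0, 12))  # True: downwind of the source, daytime
print(should_deploy(10.0, 2.0, 12))   # False: wind from the wrong direction
```

Accumulating only the minutes for which the predicate is true gives the effective exposure time needed to convert the cartridge's adsorbed mass to a time-averaged concentration.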

  11. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    PubMed Central

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors, such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure external human postural attitude in the frontal and sagittal planes, with proper validity and reliability analyses. METHODS The postures of 78 subjects (36 men, 42 women; age 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen's kappa coefficient (> 0.87) and Pearson's correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173
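The reliability statistic reported above, Cohen's kappa, corrects observed inter-rater agreement for the agreement expected by chance. A minimal sketch follows; the rating labels and data are illustrative, not the study's measurements.

```python
# Hedged sketch: Cohen's kappa for inter-rater agreement between two
# categorical ratings. The example ratings are synthetic illustrations.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters labelled identically
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal label frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_expected = sum(ca[c] * cb[c] for c in categories) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

a = ["normal", "forward", "normal", "forward", "normal", "normal"]
b = ["normal", "forward", "normal", "normal", "normal", "normal"]
print(round(cohens_kappa(a, b), 3))  # 0.571
```

Kappa above roughly 0.8, as reported in the study, is conventionally read as near-perfect agreement.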

  12. Methodology for evaluating lateral boundary conditions in the regional chemical transport model MATCH (v5.5.0) using combined satellite and ground-based observations

    NASA Astrophysics Data System (ADS)

    Andersson, E.; Kahnert, M.; Devasthale, A.

    2015-11-01

Hemispheric transport of air pollutants can have a significant impact on regional air quality, as well as on the effect of air pollutants on regional climate. An accurate representation of hemispheric transport in regional chemical transport models (CTMs) depends on the specification of the lateral boundary conditions (LBCs). This study focuses on the methodology for evaluating LBCs of two moderately long-lived trace gases, carbon monoxide (CO) and ozone (O3), for the European model domain and over a 7-year period, 2006-2012. The method is based on combining the use of satellite observations at the lateral boundary with the use of both satellite and in situ ground observations within the model domain. The LBCs are generated by the global European Monitoring and Evaluation Programme Meteorological Synthesizing Centre - West (EMEP MSC-W) model; they are evaluated at the lateral boundaries by comparison with satellite observations of the Terra-MOPITT (Measurements Of Pollution In The Troposphere) sensor (CO) and the Aura-OMI (Ozone Monitoring Instrument) sensor (O3). The LBCs from the global model lie well within the satellite uncertainties for both CO and O3. The biases increase below 700 hPa for both species. However, the satellite retrievals below this height are strongly influenced by the a priori data; hence, they are less reliable than at, e.g., 500 hPa. CO is, on average, underestimated by the global model, while O3 tends to be overestimated during winter, and underestimated during summer. A regional CTM is run with (a) the validated monthly climatological LBCs from the global model; (b) dynamical LBCs from the global model; and (c) constant LBCs based on in situ ground observations near the domain boundary. The results are validated against independent satellite retrievals from the Aqua-AIRS (Atmospheric InfraRed Sounder) sensor at 500 hPa, and against in situ ground observations from the Global Atmospheric Watch (GAW) network. It is found that (i) the use of

  13. Measurement of heat stress conditions at cow level and comparison to climate conditions at stationary locations inside a dairy barn.

    PubMed

    Schüller, Laura K; Heuwieser, Wolfgang

    2016-08-01

The objectives of this study were to examine heat stress conditions at cow level and to investigate the relationship to the climate conditions at 5 different stationary locations inside a dairy barn. In addition, we compared the climate conditions at cow level between primiparous and multiparous cows for a period of 1 week after regrouping. The temperature-humidity index (THI) differed significantly between all stationary loggers. The lowest THI was measured at the window logger in the experimental stall and the highest THI was measured at the central logger in the experimental stall. The THI at the mobile cow loggers was 2.33 THI points higher than at the stationary loggers. Furthermore, the mean daily THI was higher at the mobile cow loggers than at the stationary loggers on all experimental days. The THI in the experimental pen was 0.44 THI points lower when the experimental cow group was located inside the milking parlour. The THI measured at the mobile cow loggers was 1.63 THI points higher when the experimental cow group was located inside the milking parlour. However, there was no significant difference in any climate variable between primiparous and multiparous cows. These results indicate that there is a wide range of climate conditions inside a dairy barn, and that areas far from a fresh air supply in particular carry an increased risk for the occurrence of heat stress conditions. Furthermore, heat stress conditions are even more pronounced at cow level: cows not only influence their climatic environment but also generate microclimates at different locations inside the barn. Therefore, climate conditions should be measured at cow level to evaluate the heat stress conditions to which dairy cows are actually exposed. PMID:27600964
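A common form of the temperature-humidity index compared above combines dry-bulb temperature and relative humidity. The study does not state which THI variant was used, so the NRC-type formula below is an assumption for illustration.

```python
# Hedged sketch of a widely used THI formula (NRC-type), with temperature
# in degrees Celsius and relative humidity in percent. This is an assumed
# variant; the paper does not specify its exact THI equation.

def thi(temp_c: float, rh_percent: float) -> float:
    t_f = 1.8 * temp_c + 32.0  # convert to Fahrenheit
    return t_f - (0.55 - 0.0055 * rh_percent) * (t_f - 58.0)

# Warmer air at cow level yields a higher THI than at a stationary logger:
print(round(thi(25.0, 60.0), 1))  # 72.8
print(round(thi(27.0, 60.0), 1))  # 75.6
```

A difference of a couple of THI points, as reported between cow-level and stationary loggers, thus corresponds to a meaningful difference in effective heat load.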

  14. Optimisation of immobilisation conditions for chick pea β-galactosidase (CpGAL) to alkylamine glass using response surface methodology and its applications in lactose hydrolysis.

    PubMed

    Kishore, Devesh; Kayastha, Arvind M

    2012-10-01

Response surface methodology was used to optimally immobilise a β-galactosidase from chick pea onto alkylamine glass using a Box-Behnken experimental design, resulting in an overall immobilisation efficiency of 91%. Analysis of variance was performed to determine the adequacy and significance of the quadratic model. The immobilised enzyme showed a shift in the optimum pH; the optimum temperature, however, remained unaffected. Thermal denaturation kinetics demonstrated a significant improvement in the thermal stability of the enzyme after immobilisation. Galactose competitively inhibits the enzyme in both its soluble and immobilised forms. Lactose in milk whey was hydrolysed at a comparatively higher rate than that in milk. The immobilised enzyme showed excellent reusability, retaining more than 82% of its enzymatic activity after 15 uses. The immobilised enzyme was also fairly stable in both dry and wet conditions for three months, retaining more than 80% residual activity. PMID:25005995

  15. Treatment of wastewater effluents from paper-recycling plants by coagulation process and optimization of treatment conditions with response surface methodology

    NASA Astrophysics Data System (ADS)

    Birjandi, Noushin; Younesi, Habibollah; Bahramifar, Nader

    2014-09-01

In the present study, a coagulation process was used to treat paper-recycling wastewater with alum coupled with poly aluminum chloride (PACl) as coagulants. The effect of each of four factors, viz. the dosages of alum and PACl, pH and chemical oxygen demand (COD), on the treatment efficiency was investigated. The influence of these four parameters was described using response surface methodology under a central composite design. The reduction efficiencies for turbidity and COD and the sludge volume index (SVI) were considered the responses. The optimum conditions for high treatment efficiency of paper-recycling wastewater under experimental conditions were reached by numerical optimization of the coagulant doses and pH, at 1,550 mg/l alum, 1,314 mg/l PACl and pH 9.5, respectively, yielding a reduction of 80.02 % in COD, a reduction of 83.23 % in turbidity, and an SVI of 140 ml/g.
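Central-composite-design optimisation as used above rests on fitting a second-order (quadratic) response surface and locating its stationary point. A minimal sketch follows with synthetic data in coded units; the factor names, values, and optimum are illustrative assumptions, not the coagulation measurements from the study.

```python
# Hedged sketch: fit a quadratic response surface by least squares and
# solve for its stationary point. Data are synthetic, in coded units.
import numpy as np

def fit_quadratic_surface(x1, x2, y):
    """Fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

# Synthetic response with a known optimum at (1, -1) in coded units
rng = np.random.default_rng(0)
x1 = rng.uniform(-2, 2, 60)
x2 = rng.uniform(-2, 2, 60)
y = 80 - 3 * (x1 - 1) ** 2 - 2 * (x2 + 1) ** 2

b = fit_quadratic_surface(x1, x2, y)
# Stationary point: set the gradient of the fitted surface to zero
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(A, -np.array([b[1], b[2]]))
print(np.round(opt, 2))  # approximately [ 1. -1.]
```

In a real study, each coded coordinate is then mapped back to its natural units (dose, pH) via the design's centre point and step size.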

  16. New Methodology for Evaluating Optimal Pricing for Primary Regulation of Deregulated Power Systems under Steady State Condition

    NASA Astrophysics Data System (ADS)

    Satyaramesh, P. V.; RadhaKrishna, C.

    2013-06-01

A generalized pricing structure for the procurement of power under the frequency ancillary service is developed in this paper. It is a frequency-linked price model suitable for a deregulated market environment. The model takes into consideration governor characteristics and the frequency characteristics of generators as additional parameters in the load flow method. The main objective of the new approach proposed in this paper is to establish a bidding price structure for frequency regulation services in competitive ancillary electricity markets under steady-state conditions. A substantial literature is available on calculating frequency deviations with respect to load changes using dynamic simulation methods; in this paper, however, the model computes the frequency deviations for additional power requirements under steady state while taking the power system network topology into account. An attempt is also made to develop an optimal bidding price structure for frequency-regulated systems. This signals to traders and bidders that power demand can be assessed more accurately, much closer to real time, and helps participants bid more accurate quantities on the day-ahead market. Recent trends in the frequency-linked price model existing in Indian power systems, and the issues requiring attention, are also dealt with in this paper. Test calculations have been performed on a 30-bus system, and the paper also explains the adaptability of this model to the practical Indian power system. The results presented are analyzed and useful conclusions are drawn.
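The governor characteristic that such a model builds on is the standard steady-state droop relation: a frequency deviation Δf produces a proportional change in generator output. The sketch below illustrates that relation only; the droop value, ratings, and function name are illustrative assumptions, not the paper's pricing model.

```python
# Hedged sketch of steady-state primary frequency regulation (governor
# droop): delta_P = -(delta_f / f_nominal) / R * P_rated. Parameters are
# illustrative assumptions.

def primary_response_mw(delta_f_hz, p_rated_mw, droop=0.05, f_nominal=50.0):
    """Steady-state change in unit output for a frequency deviation."""
    return -(delta_f_hz / f_nominal) / droop * p_rated_mw

# A 0.1 Hz under-frequency event on a 500 MW unit with 5% droop:
extra_mw = primary_response_mw(-0.1, 500.0)
print(round(extra_mw, 1))  # 20.0 MW of additional generation
```

A frequency-linked price then attaches a value to each megawatt of this regulation response, which is what the bidding structure in the paper prices.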

  17. A new, simple method for the production of meat-curing pigment under optimised conditions using response surface methodology.

    PubMed

    Soltanizadeh, Nafiseh; Kadivar, Mahdi

    2012-12-01

The production of cured meat pigment using nitrite and ascorbate in acidic conditions was evaluated. HCl, ascorbate and nitrite concentrations were optimised at three levels using the response surface method (RSM). The effects of the process variables on the nitrosoheme yield, the wavelength of maximum absorbance (λ(max)), and the L*, a* and b* values were evaluated. The response surface equations indicate that the variables exerted a significant effect on all dependent factors. The optimum combination for the reaction was HCl = -0.8, ascorbate = 0.46 and nitrite = 1.00 (as coded values) for the conversion of 1 mM hemin to nitrosoheme, by which a pigment yield of 100%, similar to the predicted value of 99.5%, was obtained. Likewise, the other parameters were not significantly different from the predicted values: the λ(max), L*, a* and b* values were 558 nm, 47.03, 45.17 and 17.20, respectively. The structure of the pigment was identified using FTIR and ESI/MS.

  18. Biodegradation of cyanide by a new isolated strain under alkaline conditions and optimization by response surface methodology (RSM)

    PubMed Central

    2014-01-01

Background Biodegradation of free cyanide from industrial wastewaters has been proven as a viable and robust method for the treatment of wastewaters containing cyanide. Results Cyanide-degrading bacteria were isolated from a wastewater treatment plant for coke-oven-gas condensate by an enrichment culture technique. Five strains were able to use cyanide as the sole nitrogen source under alkaline conditions and, among them, one strain (C2) was selected for further studies on the basis of its higher efficiency of cyanide degradation. The bacterium was able to tolerate free cyanide at concentrations of up to 500 ppm, which makes it a potentially good candidate for the biological treatment of cyanide-contaminated residues. Cyanide degradation corresponded with growth and reached a maximum of 96% during the exponential phase. The highest growth rate (1.23 × 10^8) was obtained on day 4 of the incubation time. Both glucose and fructose were suitable carbon sources for cyanotrophic growth. No growth was detected in media with cyanide as the sole carbon source. Four control factors, including pH, temperature, agitation speed and glucose concentration, were optimized according to a central composite design in the response surface method. Cyanide degradation was optimum at 34.2°C, pH 10.3 and a glucose concentration of 0.44 g/l. Conclusions Bacterial species degrade cyanide into less toxic products, as they are able to use the cyanide as a nitrogen source, forming ammonia and carbon dioxide as end products. The alkaliphilic bacterial strains screened in this study evidently showed the potential to possess degradative activities that can be harnessed to remediate cyanide wastes. PMID:24921051

  19. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials

    PubMed Central

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-01-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials. PMID:27194379

  20. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials.

    PubMed

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-01-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials. PMID:27194379

  1. A scuba diving direct sediment sampling methodology on benthic transects in glacial lakes: procedure description, safety measures, and tests results.

    PubMed

    Pardo, Alfonso

    2014-11-01

    This work presents an in situ sediment sampling method on benthic transects, specifically intended for scientific scuba diver teams. It was originally designed and developed to sample benthic surface and subsurface sediments and subaqueous soils in glacial lakes up to a maximum depth of 25 m. Tests were conducted on the Sabocos and Baños tarns (i.e., cirque glacial lakes) in the Spanish Pyrenees. Two 100 m transects, ranging from 24.5 to 0 m of depth in Sabocos and 14 m to 0 m deep in Baños, were conducted. In each test, 10 sediment samples of 1 kg each were successfully collected and transported to the surface. This sampling method proved operative even in low visibility conditions (<2 m). Additional ice diving sampling tests were conducted in Sabocos and Truchas tarns. This sampling methodology can be easily adapted to accomplish underwater sampling campaigns in nonglacial lakes and other continental water or marine environments. PMID:24943883

  2. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials

    NASA Astrophysics Data System (ADS)

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-05-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials.

  3. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    NASA Astrophysics Data System (ADS)

    Jahanshahi, Mohammad R.; Masri, Sami F.

    2013-03-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.
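The depth-perception idea described above can be illustrated with a simple pinhole camera model: a crack spanning n pixels on the sensor back-projects to a metric width proportional to the camera-to-surface distance. This is an assumed simplification for illustration; the paper's actual quantification algorithm is more involved.

```python
# Hedged sketch: back-projecting a pixel-measured crack width to metric
# units using a pinhole camera model, w = n * pixel_pitch * Z / f.
# All numeric parameters below are illustrative assumptions.

def crack_width_mm(n_pixels, depth_mm, focal_length_mm, pixel_pitch_mm):
    """Metric crack width from its pixel span, the measured working
    distance (depth), the lens focal length, and the sensor pixel pitch."""
    return n_pixels * pixel_pitch_mm * depth_mm / focal_length_mm

# 3-pixel-wide crack, 500 mm working distance, 8 mm lens, 5-micron pixels:
print(round(crack_width_mm(3, 500.0, 8.0, 0.005), 3))  # 0.938 mm
```

Because the depth term supplies the scale, no reference object needs to be attached to the inspected region, which is the property that suits autonomous mobile inspection.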

  4. Optimization of Reflux Conditions for Total Flavonoid and Total Phenolic Extraction and Enhanced Antioxidant Capacity in Pandan (Pandanus amaryllifolius Roxb.) Using Response Surface Methodology

    PubMed Central

    Ghasemzadeh, Ali; Jaafar, Hawa Z. E.

    2014-01-01

    Response surface methodology was applied to optimization of the conditions for reflux extraction of Pandan (Pandanus amaryllifolius Roxb.) in order to achieve a high content of total flavonoids (TF), total phenolics (TP), and high antioxidant capacity (AC) in the extracts. Central composite experimental design with three factors and three levels was employed to consider the effects of the operation parameters, including the methanol concentration (MC, 40%–80%), extraction temperature (ET, 40–70°C), and liquid-to-solid ratio (LS ratio, 20–40 mL/g) on the properties of the extracts. Response surface plots showed that increasing these operation parameters induced the responses significantly. The TF content and AC could be maximized when the extraction conditions (MC, ET, and LS ratio) were 78.8%, 69.5°C, and 32.4 mL/g, respectively, whereas the TP content was optimal when these variables were 75.1%, 70°C, and 31.8 mL/g, respectively. Under these optimum conditions, the experimental TF and TP content and AC were 1.78, 6.601 mg/g DW, and 87.38%, respectively. The optimized model was validated by a comparison of the predicted and experimental values. The experimental values were found to be in agreement with the predicted values, indicating the suitability of the model for optimizing the conditions for the reflux extraction of Pandan. PMID:25147852

  5. Demonstration of Extractive Cryocooled Inert Preconcentration with FTIR spectroscopy instrumentation and methodology for autonomous measurements of atmospheric organics

    NASA Astrophysics Data System (ADS)

    Buckely, Patrick I.

At present, researchers exclusively use gas chromatography (GC) systems to monitor multiple volatile organic compounds (VOCs) in either real or near-real time. We have designed, developed, and constructed an experimental atmospheric air-quality monitoring system capable of measuring low-concentration VOCs using advanced optical techniques. This system uses a commercial Fourier transform infrared spectrometer (FTIR), a commercial long-path gas cell, a commercial acoustic Stirling cryocooler, and a custom cryogen-free cryotrap to autonomously monitor a multi-pollutant suite of VOCs with on-board quality assurance/quality control (QA/QC) calibration. Every four hours, the system records a five-minute co-added FTIR interferogram using preconcentrated batch samples that are thermally desorbed from the cryotrap into the gas cell. From this interferogram, the spectral processing algorithm calculates a corresponding absorption spectrum and derives trace gas concentrations using a peak-fitting technique to achieve compound-specific detection limits of 6-60 parts per trillion by volume (pptv). During the calibration cycle, the system acquires QA/QC measurements made in a similar fashion using high-purity calibration gas bottles. The laboratory results presented show that the system is capable of measuring single- and multi-component calibration gas mixtures within the manufacturer's accuracy specifications. In situ canister samples analyzed using gas chromatography with electron capture and flame ionization detection (GC/ECD/FID) provide a narrow background range for possible VOC concentrations at the National Space Science and Technology Center (NSSTC) in Huntsville, AL. In situ observations by the FTIR-based system showcase its capability to run fully autonomously and analyze a complex atmospheric mixture with a high degree of fidelity. A detailed error analysis highlights the shortfalls of this methodology and presents quantitative correction factors for a number of
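The conversion from a fitted absorbance peak to a mixing ratio in such systems follows the Beer-Lambert law, A = σNL, with the preconcentration factor divided back out. The cross section, path length, and preconcentration value below are illustrative assumptions, not the instrument's actual parameters.

```python
# Hedged sketch: trace-gas mixing ratio from an FTIR absorbance peak via
# the Beer-Lambert law, A = sigma * N * L. Numeric values are illustrative
# assumptions only.

def mixing_ratio_pptv(absorbance, cross_section_cm2, path_cm,
                      n_air=2.46e19, preconcentration=1.0):
    """Number density N = A / (sigma * L), expressed as pptv of ambient
    air after dividing out the cryotrap preconcentration factor."""
    n_molec = absorbance / (cross_section_cm2 * path_cm)
    return n_molec / (n_air * preconcentration) * 1e12

# Weak line (A = 1e-4), cross section 1e-19 cm^2, 100 m multipass cell,
# assumed 1000x cryotrap preconcentration:
ppt = mixing_ratio_pptv(1e-4, 1e-19, 1.0e4, preconcentration=1000.0)
print(round(ppt, 1))  # 4.1
```

Numbers of this order are consistent with the 6-60 pptv detection limits quoted above: preconcentration and a long optical path together turn a barely measurable absorbance into a pptv-level ambient sensitivity.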

  6. Measurement of the pro-hormone of brain type natriuretic peptide (proBNP): methodological considerations and pathophysiological relevance.

    PubMed

    Clerico, Aldo; Vittorini, Simona; Passino, Claudio

    2011-08-26

Recent studies have demonstrated that large amounts of the pro-hormone peptide of brain natriuretic peptide (proBNP) can be detected in the plasma of healthy subjects and, in particular, of patients with heart failure. As a result, a large part of the B-type natriuretic peptides measured in patients with cardiovascular disease may be devoid of biological activity. These findings stimulated the development of specific immunoassay methods for the measurement of the intact proBNP peptide. The aim of this review article is to discuss the methodological characteristics and the possible clinical relevance of specific immunoassay methods for the measurement of the proBNP peptide. From an analytical point of view, a fully automated immunoassay of proBNP has some theoretical advantages (e.g., a more stable molecule with a higher molecular weight than the derived peptides) compared to the active hormone BNP. Recent studies support the concept that the precursor proBNP might actually be considered a circulating prohormone, which can be cleaved by specific plasma proteases into BNP, the active hormone, and NT-proBNP, an inactive peptide. The peripheral processing of circulating proBNP is likely subject to regulatory mechanisms, which might be impaired in patients with heart failure, opening new perspectives in the treatment of heart failure (e.g., by studying drugs that induce the cleavage of the prohormone into active BNP). Furthermore, as a future perspective, the specific assay in the same plasma sample of the intact precursor proBNP and of the biologically active peptide BNP could allow a more accurate estimation of the production/secretion of B-type related peptides from cardiomyocytes and of global cardiac endocrine function.

  7. Three-dimensional welding residual stresses evaluation based on the eigenstrain methodology via X-ray measurements at the surface

    NASA Astrophysics Data System (ADS)

    Ogawa, Masaru

    2014-12-01

    In order to assure the structural integrity of operating welded structures, it is necessary to evaluate the crack growth rate and crack propagation direction for each observed crack non-destructively. Here, three-dimensional (3D) welding residual stresses must be evaluated to predict crack propagation. Today, X-ray diffraction is used, and the ultrasonic method has been proposed, as non-destructive methods to measure residual stresses; however, they cannot determine residual stress distributions in the thickness direction. Although residual stresses through a depth of several tens of millimeters can be evaluated non-destructively by neutron diffraction, it cannot be used as an on-site measurement technique, because neutron diffraction is only available in special irradiation facilities. The author focuses on the bead flush method, which is based on the eigenstrain methodology. In this method, 3D welding residual stresses are calculated by an elastic finite element method (FEM) analysis from eigenstrains, which are evaluated by an inverse analysis from the strains released when the weld reinforcement is removed, as measured by strain gauges. The removal of the excess metal can be regarded as a non-destructive treatment, because the weld toe, which can act as a crack initiation site, is eliminated. The effectiveness of the method has been proven for welded plates and pipes, even with relatively low bead height. In actual measurements, stress evaluation accuracy becomes poorer because strain-gauge readings are affected by machining-induced strains on the machined surface. In previous studies, the author developed a variant of the bead flush method that is free from the influence of these strains by using surface residual strains measured by X-ray diffraction. However, stress evaluation accuracy is still not good enough, because of the relatively poor measurement accuracy of X-ray diffraction. In this study, a method to improve the estimation accuracy of residual stresses in this method is
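    The core numerical step of the eigenstrain methodology described above is a linear inverse problem: released strains are related to unknown eigenstrain basis coefficients through a sensitivity matrix computed by elastic FEM. A minimal sketch of that inversion is shown below; the function name, the ridge stabilization, and the example matrices are illustrative assumptions, not the author's actual implementation.

    ```python
    import numpy as np

    def estimate_eigenstrains(sensitivity, released_strains, ridge=1e-8):
        """Least-squares inverse step: recover eigenstrain basis coefficients
        from the strains released on bead removal.

        sensitivity      : (m, n) matrix from an elastic FEM analysis -- the
                           released strain at each of m measurement points per
                           unit of each of n eigenstrain basis functions
        released_strains : (m,) measured released strains
        ridge            : small Tikhonov term to stabilize a noisy inversion
                           (an assumption, not from the paper)
        """
        a = np.asarray(sensitivity, dtype=float)
        b = np.asarray(released_strains, dtype=float)
        n = a.shape[1]
        # Solve the regularized normal equations (A^T A + ridge I) x = A^T b
        return np.linalg.solve(a.T @ a + ridge * np.eye(n), a.T @ b)
    ```

    With noise-free synthetic data the true coefficients are recovered; in practice the ridge term trades bias for robustness against gauge noise.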

  8. Intra-rater reliability when using a tympanic thermometer under different self-measurement conditions.

    PubMed

    Yoo, Won-Gyu

    2016-07-01

    [Purpose] This study investigated intra-rater reliability when using a tympanic thermometer under different self-measurement conditions. [Subjects and Methods] Ten males participated. Intra-rater reliability was assessed by comparing the values under three conditions of measurement using a tympanic thermometer. Intraclass correlation coefficients were used to assess intra-rater reliability. [Results] According to the intraclass correlation coefficient analysis, reliability could be ranked according to the conditions of measurement. [Conclusion] The results showed that self-measurement of body temperature is more precise when combined with common sense and basic education about the anatomy of the eardrum.
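    The abstract above reports intraclass correlation coefficients for repeated self-measurements. As an illustration only (the paper does not state which software or ICC form it used), a minimal two-way ICC(2,1) computation from a subjects-by-conditions matrix can be sketched as:

    ```python
    import numpy as np

    def icc2_1(ratings):
        """ICC(2,1): two-way random effects, absolute agreement, single measure.
        ratings: (n_subjects, k_conditions) array of repeated measurements."""
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        row_means = ratings.mean(axis=1)   # per-subject means
        col_means = ratings.mean(axis=0)   # per-condition means
        # Two-way ANOVA sums of squares
        ss_total = ((ratings - grand) ** 2).sum()
        ss_rows = k * ((row_means - grand) ** 2).sum()
        ss_cols = n * ((col_means - grand) ** 2).sum()
        ss_err = ss_total - ss_rows - ss_cols
        msr = ss_rows / (n - 1)
        msc = ss_cols / (k - 1)
        mse = ss_err / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
    ```

    For perfectly consistent repeated measurements the coefficient equals 1; values near 0 indicate that measurement conditions dominate the variance.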

  9. Intra-rater reliability when using a tympanic thermometer under different self-measurement conditions

    PubMed Central

    Yoo, Won-gyu

    2016-01-01

    [Purpose] This study investigated intra-rater reliability when using a tympanic thermometer under different self-measurement conditions. [Subjects and Methods] Ten males participated. Intra-rater reliability was assessed by comparing the values under three conditions of measurement using a tympanic thermometer. Intraclass correlation coefficients were used to assess intra-rater reliability. [Results] According to the intraclass correlation coefficient analysis, reliability could be ranked according to the conditions of measurement. [Conclusion] The results showed that self-measurement of body temperature is more precise when combined with common sense and basic education about the anatomy of the eardrum. PMID:27512269

  11. Optimization and Validation of High-Performance Chromatographic Condition for Simultaneous Determination of Adapalene and Benzoyl Peroxide by Response Surface Methodology

    PubMed Central

    Huang, Yaw-Bin; Wu, Pao-Chu

    2015-01-01

    The aim of this study was to develop a simple and reliable high-performance liquid chromatographic (HPLC) method for simultaneous analysis of adapalene and benzoyl peroxide in a pharmaceutical formulation by response surface methodology (RSM). An optimized mobile phase composed of acetonitrile, tetrahydrofuran, and water containing 0.1% acetic acid at a ratio of 25:50:25 by volume was successfully predicted by using RSM, and an isocratic separation was achieved under this condition. Furthermore, the analytical method was validated in terms of specificity, linearity, accuracy, and precision in a range of 80% to 120% of the expected concentration. Finally, the method was successfully applied to the analysis of a commercial product. PMID:25793581

  12. Measuring the payback of research activities: a feasible ex-post evaluation methodology in epidemiology and public health.

    PubMed

    Aymerich, Marta; Carrion, Carme; Gallo, Pedro; Garcia, Maria; López-Bermejo, Abel; Quesada, Miquel; Ramos, Rafel

    2012-08-01

    Most ex-post evaluations of research funding programs are based on bibliometric methods and, although this approach has been widely used, it only examines one facet of a project's impact, that is, scientific productivity. More comprehensive models of payback assessment of research activities are designed for large-scale projects with extensive funding. The purpose of this study was to design and implement a methodology for the ex-post evaluation of small-scale projects that would take into account both the fulfillment of projects' stated objectives and wider benefits to society as payback measures. We used a two-phase ex-post approach to appraise impact for 173 small-scale projects funded in 2007 and 2008 by a Spanish network center for research in epidemiology and public health. In the internal phase we used a questionnaire to query the principal investigator (PI) on the outcomes as well as the actual and potential impact of each project; in the external phase we sent a second questionnaire to external reviewers with the aim of assessing (by peer review) the performance of each individual project. Overall, 43% of the projects were rated as having completed their objectives "totally", and 40% "considerably". The research activities funded were reported by PIs as socially beneficial, with their greatest impact on research capacity (50% of payback to society) and on knowledge translation (above 11%). The method proposed showed a good discriminating ability that makes it possible to measure reliably the extent to which a project's objectives were met, as well as the degree to which the project contributed to enhancing the group's scientific performance and its social payback.

  13. Optimization of Extraction Condition of Bee Pollen Using Response Surface Methodology: Correlation between Anti-Melanogenesis, Antioxidant Activity, and Phenolic Content.

    PubMed

    Kim, Seon Beom; Jo, Yang Hee; Liu, Qing; Ahn, Jong Hoon; Hong, In Pyo; Han, Sang Mi; Hwang, Bang Yeon; Lee, Mi Kyeong

    2015-11-02

    Bee pollen is flower pollen agglutinated with nectar and salivary substances of bees, and it is rich in essential components. Bee pollen showed antioxidant and tyrosinase inhibitory activity in our assay system. To maximize the antioxidant and tyrosinase inhibitory activity of bee pollen, extraction conditions, such as extraction solvent, extraction time, and extraction temperature, were optimized using response surface methodology. Regression analysis showed a good fit of this model and yielded second-order polynomial regressions for tyrosinase inhibition and antioxidant activity. Among the extraction variables, extraction solvent greatly affected the activity. The optimal condition was determined as an EtOAc concentration in MeOH of 69.6%, a temperature of 10.0 °C, and an extraction time of 24.2 h; the tyrosinase inhibitory and antioxidant activities under this optimal condition were found to be 57.9% and 49.3%, respectively. Further analysis showed a close correlation between the activities and phenolic content, suggesting that phenolic compounds are the active constituents of bee pollen for tyrosinase inhibition and antioxidant activity. Taken together, these results provide useful information about bee pollen as a cosmetic therapeutic to reduce oxidative stress and hyperpigmentation.

  14. Experiential self-referential and selfless processing in mindfulness and mental health: Conceptual model and implicit measurement methodology.

    PubMed

    Hadash, Yuval; Plonsker, Reut; Vago, David R; Bernstein, Amit

    2016-07-01

    We propose that Experiential Self-Referential Processing (ESRP)-the cognitive association of present-moment subjective experience (e.g., sensations, emotions, thoughts) with the self-underlies various forms of maladaptation. We theorize that mindfulness contributes to mental health by engendering Experiential Selfless Processing (ESLP)-processing present-moment subjective experience without self-referentiality. To help advance understanding of these processes, we aimed to develop an implicit, behavioral measure of ESRP and ESLP of fear, to experimentally validate this measure, and to test the relations between ESRP and ESLP of fear, mindfulness, and key psychobehavioral processes underlying (mal)adaptation. One hundred thirty-eight adults were randomized to 1 of 3 conditions: control, meta-awareness with identification, or meta-awareness with disidentification. We then measured ESRP and ESLP of fear by experimentally eliciting a subjective experience of fear while concurrently measuring each participant's cognitive association between self and fear by means of a Single Category Implicit Association Test; we refer to this measurement as the Single Experience & Self Implicit Association Test (SES-IAT). We found preliminary experimental and correlational evidence suggesting that the fear SES-IAT measures ESLP of fear and 2 forms of ESRP: identification with fear and negative self-referential evaluation of fear. Furthermore, we found evidence that ESRP and ESLP are associated with meta-awareness (a core process of mindfulness), as well as key psychobehavioral processes underlying (mal)adaptation. These findings indicate that the cognitive association of self with experience (i.e., ESRP) may be an important substrate of the sense of self and an important determinant of mental health. (PsycINFO Database Record PMID:27078181

  15. Mass spectrometry for the measurement of intramyocardial gas tensions: methodology and application to the study of myocardial ischemia.

    PubMed

    Khuri, S F; O'Riordan, J; Flaherty, J T; Brawley, R K; Donahoo, J S; Gott, V L

    1975-01-01

    The methodology for use of the mass spectrometer for the measurement of intramyocardial gas tensions in the canine preparation is described. Baseline studies were carried out initially in 36 animals, and control levels for myocardial oxygen tension and myocardial carbon dioxide tension were 19 mm Hg (S.D. 6 mm Hg) and 43 mm Hg (S.D. 10 mm Hg), respectively. Myocardial oxygen tension was not altered significantly by varying the arterial oxygen tension between 65 and 300 mm Hg. However, myocardial carbon dioxide tension increased linearly with increased arterial carbon dioxide tension. In 15 dogs placed on total cardiopulmonary bypass, a perfusion pressure 40-60 mm Hg lower than the control mean arterial pressure resulted in myocardial ischemia with a decrease in myocardial oxygen tension and an increase in myocardial carbon dioxide tension. A subsequent increase in perfusion pressure to control levels resulted in resolution of ischemia and return of myocardial oxygen and carbon dioxide tensions to their control levels. In another series of open-chest dogs on cardiopulmonary bypass, a proximal constriction applied to the left coronary circumflex artery resulted in a marked decrease in myocardial oxygen tensions and a marked increase in myocardial carbon dioxide tensions in the region supplied by the constricted vessel. In yet another series of open-chest dogs, it was found that incremental decreases in coronary flow established by constriction of the circumflex artery resulted in an exponential increase in both myocardial carbon dioxide tensions and ST-segment elevation as determined by a 25-gauge multi-contact plunge electrode placed in the posterior left ventricular wall. It appears that mass spectrometry techniques for evaluating myocardial ischemia have several advantages over myocardial biopsy techniques for assay of ATP and lactate, and also over the technique of coronary sinus lactate determination. PMID:1209001

  16. Thermally assisted mechanical dewatering (TAMD) of suspensions of fine particles: analysis of the influence of the operating conditions using the response surface methodology.

    PubMed

    Mahmoud, Akrama; Fernandez, Aurora; Chituchi, Toma-Mihai; Arlabosse, Patricia

    2008-08-01

    Thermally assisted mechanical dewatering (TAMD) is a new process for energy-efficient liquid/solids separation which enhances the efficiency of conventional devices. The main idea of this process is to supply a flow of heat during mechanical dewatering to favour the reduction of the liquid content. This is not a new idea, but the proposed combination, especially the chosen operating conditions (temperature <100 degrees C and pressure <3000 kPa), constitutes an original approach and a significant energy saving, since the liquid is kept in the liquid state. Response surface methodology was used to evaluate the effects of the processing parameters of TAMD on the final dry solids content, which is a fundamental dewatering parameter and an excellent indicator of the extent of TAMD. In this study, a two-factor central composite rotatable design was used to establish the optimum conditions for the TAMD of suspensions of fine particles. Significant regression models, describing changes in final dry solids content with respect to the independent variables, were established with regression coefficients (usually called determination coefficients), R(2), greater than 80%. Experiments were carried out on a laboratory filtration/compression cell, firstly on different compressible materials: synthetic mineral suspensions such as talc and synthetic organic suspensions such as cellulose, and then on industrial materials, such as bentonite sludge provided by the Soletanche Bachy Company. The experiments showed that the extent of TAMD for a given material depends strongly on its physical and chemical properties as well as on the processing parameters.

  17. Modelling and optimising of physicochemical features of walnut-oil beverage emulsions by implementation of response surface methodology: effect of preparation conditions on emulsion stability.

    PubMed

    Homayoonfal, Mina; Khodaiyan, Faramarz; Mousavi, Mohammad

    2015-05-01

    The major purpose of this study is to apply response surface methodology to model and optimise processing conditions for the preparation of beverage emulsions with maximum emulsion stability and viscosity, minimum particle size, turbidity loss rate, size index and peroxide value changes. A three-factor, five-level central composite design was conducted to estimate the effects of three independent variables: ultrasonic time (UT, 5-15 min), walnut-oil content (WO, 4-10% (w/w)) and Span 80 content (S80, 0.55-0.8). The results demonstrated that the empirical models were satisfactorily (p < 0.0001) fitted to the experimental data. Evaluation of the responses by analysis of variance indicated high coefficients of determination. The overall optimum preparation conditions were a UT of 14.630 min, a WO content of 8.238% (w/w), and an S80 content of 0.782% (w/w). Under this optimum region, the responses were found to be 219.198, 99.184, 0.008, 0.008, 2.43 and 16.65 for particle size, emulsion stability, turbidity loss rate, size index, viscosity and peroxide value changes, respectively.

  18. Process optimization of deposition conditions of PbS thin films grown by a successive ionic layer adsorption and reaction (SILAR) method using response surface methodology

    NASA Astrophysics Data System (ADS)

    Yücel, Ersin; Yücel, Yasin; Beleli, Buse

    2015-07-01

    In this study, lead sulfide (PbS) thin films were synthesized by a successive ionic layer adsorption and reaction (SILAR) method with different pH values, dipping times and numbers of dipping cycles. Response surface methodology (RSM) and central composite design (CCD) were successfully used to optimize the PbS film deposition parameters and understand the significance and interaction of the factors affecting film quality. A 5-level-3-factor central composite design was employed to evaluate the effects of the deposition parameters (pH, dipping time and dipping cycles) on the response (the optical band gap of the films). Data obtained from RSM were subjected to analysis of variance (ANOVA) and fitted with a second-order polynomial equation. The optimal conditions for PbS film deposition were found to be: pH of 9.1, dipping time of 10 s and 10 dipping cycles. The predicted band gap of the PbS film was 2.13 eV under the optimal conditions, and a verification experiment (2.24 eV) confirmed the validity of the predicted model. The film structures were characterized by X-ray diffraction (XRD). Morphological properties of the films were studied with a scanning electron microscope (SEM). The optical properties of the films were investigated using a UV-visible spectrophotometer.
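    Several records in this list fit a second-order polynomial to central composite design data. The sketch below shows the generic machinery for two coded factors: building CCD points and fitting the quadratic model by least squares. The design, coefficients and data are illustrative, not taken from any of the studies above.

    ```python
    import numpy as np

    def design_matrix(x):
        """Second-order model terms for two factors: 1, x1, x2, x1^2, x2^2, x1*x2."""
        x1, x2 = x[:, 0], x[:, 1]
        return np.column_stack([np.ones(len(x)), x1, x2, x1**2, x2**2, x1 * x2])

    def fit_second_order(x, y):
        """Ordinary least-squares fit of the quadratic response surface."""
        coef, *_ = np.linalg.lstsq(design_matrix(x), y, rcond=None)
        return coef

    # Two-factor central composite design in coded units:
    # factorial points, axial points (alpha = sqrt(2)), and a centre point
    alpha = np.sqrt(2.0)
    ccd = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                    [-alpha, 0], [alpha, 0], [0, -alpha], [0, alpha], [0, 0]])
    ```

    With responses measured at these nine points, the fitted coefficients give the stationary point of the surface, i.e. the predicted optimum operating condition.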

  19. Evaluation of optimum conditions for pachyman encapsulated in poly(d,l-lactic acid) nanospheres by response surface methodology and results of a related in vitro study

    PubMed Central

    Zheng, Sisi; Luo, Li; Bo, Ruonan; Liu, Zhenguang; Xing, Jie; Niu, Yale; Hu, Yuanliang; Liu, Jiaguo; Wang, Deyun

    2016-01-01

    This study aimed to optimize the preparation conditions of pachyman (PHY)-loaded poly(d,l-lactic acid) (PLA) (PHYP) nanospheres by response surface methodology, explore their characteristics, and assess their effects on splenic lymphocytes. Double emulsion solvent evaporation was used to synthesize PHYP nanospheres, and the optimal preparation conditions were identified as a concentration of poloxamer 188 (F68) (w/v) of 0.33%, a concentration of PLA of 30 mg/mL, and a ratio of PLA to drug (w/w) of 10.25:1 required to reach the highest encapsulation efficiency, which was calculated to be 59.10%. PHYP had a spherical shape with a smooth surface and uniform size and an evident effect of sustained release and relative stability. Splenic lymphocytes are crucial and multifunctional cells in the immune system, and their immunological properties could be enhanced significantly by PHYP treatment. This study confirmed that PHY encapsulated in PLA nanospheres had comparatively steady properties and exerted obvious immune enhancement. PMID:27729787

  20. Optimization on condition of epigallocatechin-3-gallate (EGCG) nanoliposomes by response surface methodology and cellular uptake studies in Caco-2 cells

    NASA Astrophysics Data System (ADS)

    Luo, Xiaobo; Guan, Rongfa; Chen, Xiaoqiang; Tao, Miao; Ma, Jieqing; Zhao, Jin

    2014-06-01

    The major component of green tea polyphenols, epigallocatechin-3-gallate (EGCG), has been demonstrated to prevent carcinogenesis. To improve the effectiveness of EGCG, liposomes were used as a carrier in this study. The reverse-phase evaporation method combined with response surface methodology is a simple, rapid, and effective approach for liposome preparation and optimization. The optimal preparation conditions were as follows: phosphatidylcholine-to-cholesterol ratio of 4.00, EGCG concentration of 4.88 mg/mL, Tween 80 concentration of 1.08 mg/mL, and rotary evaporation temperature of 34.51°C. Under these conditions, the experimental encapsulation efficiency and size of the EGCG nanoliposomes were 85.79% ± 1.65% and 180 nm ± 4 nm, close to the predicted values. The malondialdehyde value and the in vitro release test indicated that the prepared EGCG nanoliposomes were stable and suitable for more widespread application. Furthermore, compared with free EGCG, encapsulation of EGCG enhanced its inhibitory effect on tumor cell viability at higher concentrations.

  1. Dynamic measurement of physical conditions in daily life by body area network sensing system

    NASA Astrophysics Data System (ADS)

    Takayama, S.; Tanaka, T.; Takahashi, N.; Matsuda, Y.; Kariya, K.

    2010-07-01

    This paper presents a measurement system to monitor physical conditions dynamically in daily life. A measurement system for physical conditions in motion must be wearable and wirelessly connected. The body area network sensing system (BANSS) is such a system. BANSS is constructed from a host system and plural sensing nodes. Each sensing node consists of sensors, an analogue/digital converter (ADC), a peripheral interface controller (PIC), memory and a near-field communication device (NFCD). The NFCD in this system is Zigbee, which is well suited to constructing a wireless network easily. BANSS is not only a system for measuring physical parameters: it also reports the user's current physical condition and advises the user on maintaining a suitable exercise intensity. As an application of BANSS, a system managing heart rate during walking is shown. By using this system, users can exercise at a constant physical intensity.
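    The advisory logic of such a heart-rate management application reduces to keeping the measured rate inside a target zone. A minimal sketch is given below; the function name and the threshold values are hypothetical, not taken from the paper.

    ```python
    def pace_advice(heart_rate_bpm, target_low=110, target_high=130):
        """Advise the walker so that heart rate stays inside a target zone.
        Thresholds are illustrative; a real system would derive them from
        the user's age and fitness level."""
        if heart_rate_bpm < target_low:
            return "speed up"
        if heart_rate_bpm > target_high:
            return "slow down"
        return "keep pace"
    ```

    A sensing node would stream heart-rate samples to the host, which evaluates this rule each measurement cycle and feeds the advice back to the user.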

  2. Multi-Population Invariance with Dichotomous Measures: Combining Multi-Group and MIMIC Methodologies in Evaluating the General Aptitude Test in the Arabic Language

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.

    2015-01-01

    The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…

  3. On the optical measurement of corneal thickness. II. The measuring conditions and sources of error.

    PubMed

    Olsen, T; Nielsen, C B; Ehlers, N

    1980-12-01

    The optical measurement of corneal thickness based on oblique viewing of the optical section of the cornea is complicated by the finite width of the incident slit beam. In this report the theoretical and practical aspects of the effect of the slit width on the thickness reading are analysed. In practice, it was not possible to make slit-width-independent thickness readings that were reproducible from one observer to another. In addition, the observed slit-width error was found to vary from one patient to another. The lack of a reproducible estimate of the corneal thickness is attributed to difficulties associated with an exact definition of the edges of the visible bands of the optical section, which are determined by biological properties of the cornea as well as perceptive properties of the observer. Although inter-observer errors up to 0.02 mm were found, the intra-observer error amounted to only 0.005-0.006 mm (SD) between consecutive readings. Presumably this high intra-observer reproducibility is the result of the auxiliary pin-lights used. Changes in corneal thickness, measured by the same observer, can therefore be determined with great accuracy.

  4. Human and Methodological Sources of Variability in the Measurement of Urinary 8-Oxo-7,8-dihydro-2′-deoxyguanosine

    PubMed Central

    Møller, Peter; Henriksen, Trine; Mistry, Vilas; Koppen, Gudrun; Rossner, Pavel; Sram, Radim J.; Weimann, Allan; Poulsen, Henrik E.; Nataf, Robert; Andreoli, Roberta; Manini, Paola; Marczylo, Tim; Lam, Patricia; Evans, Mark D.; Kasai, Hiroshi; Kawai, Kazuaki; Li, Yun-Shan; Sakai, Kazuo; Singh, Rajinder; Teichert, Friederike; Farmer, Peter B.; Rozalski, Rafal; Gackowski, Daniel; Siomek, Agnieszka; Saez, Guillermo T.; Cerda, Concha; Broberg, Karin; Lindh, Christian; Hossain, Mohammad Bakhtiar; Haghdoost, Siamak; Hu, Chiung-Wen; Chao, Mu-Rong; Wu, Kuen-Yuh; Orhan, Hilmi; Senduran, Nilufer; Smith, Raymond J.; Santella, Regina M.; Su, Yali; Cortez, Czarina; Yeh, Susan; Olinski, Ryszard; Loft, Steffen

    2013-01-01

    Abstract Aims: Urinary 8-oxo-7,8-dihydro-2′-deoxyguanosine (8-oxodG) is a widely used biomarker of oxidative stress. However, variability between chromatographic and ELISA methods hampers interpretation of data, and this variability may increase should urine composition differ between individuals, leading to assay interference. Furthermore, optimal urine sampling conditions are not well defined. We performed inter-laboratory comparisons of 8-oxodG measurement between mass spectrometric-, electrochemical- and ELISA-based methods, using common within-technique calibrants to analyze 8-oxodG-spiked phosphate-buffered saline and urine samples. We also investigated human subject- and sample collection-related variables, as potential sources of variability. Results: Chromatographic assays showed high agreement across urines from different subjects, whereas ELISAs showed far more inter-laboratory variation and generally overestimated levels, compared to the chromatographic assays. Excretion rates in timed ‘spot’ samples showed strong correlations with 24 h excretion (the ‘gold’ standard) of urinary 8-oxodG (rp 0.67–0.90), although the associations were weaker for 8-oxodG adjusted for creatinine or specific gravity (SG). The within-individual excretion of 8-oxodG varied only moderately between days (CV 17% for 24 h excretion and 20% for first void, creatinine-corrected samples). Innovation: This is the first comprehensive study of both human and methodological factors influencing 8-oxodG measurement, providing key information for future studies with this important biomarker. Conclusion: ELISA variability is greater than chromatographic assay variability, and cannot determine absolute levels of 8-oxodG. Use of standardized calibrants greatly improves intra-technique agreement and, for the chromatographic assays, importantly allows integration of results for pooled analyses. If 24 h samples are not feasible, creatinine- or SG-adjusted first morning samples
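    The abstract above compares 24 h excretion with creatinine- and specific-gravity-adjusted spot samples. The two dilution adjustments are conventional and can be sketched as follows; the formulas are the standard ones (ratio to creatinine, and Levine-Fahy-style specific-gravity normalisation to a reference SG of 1.020), not something specific to this paper.

    ```python
    def creatinine_adjusted(analyte_nmol_per_l, creatinine_mmol_per_l):
        """Express the analyte per mmol creatinine to correct for urine dilution."""
        return analyte_nmol_per_l / creatinine_mmol_per_l

    def sg_adjusted(analyte_nmol_per_l, specific_gravity, reference_sg=1.020):
        """Specific-gravity normalisation: scale the concentration to what it
        would be at the reference specific gravity."""
        return analyte_nmol_per_l * (reference_sg - 1.0) / (specific_gravity - 1.0)
    ```

    Both adjustments rescale a spot-sample concentration so it can be compared across urine samples of different dilution; the study found the unadjusted timed excretion rates correlated somewhat better with the 24 h gold standard.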

  5. Objective measures for predicting speech intelligibility in noisy conditions based on new band-importance functions

    PubMed Central

    Ma, Jianfen; Hu, Yi; Loizou, Philipos C.

    2009-01-01

    The articulation index (AI), speech-transmission index (STI), and coherence-based intelligibility metrics have been evaluated primarily in steady-state noisy conditions and have not been tested extensively in fluctuating noise conditions. The aim of the present work is to evaluate the performance of new speech-based STI measures, modified coherence-based measures, and AI-based measures operating on short-term (30 ms) intervals in realistic noisy conditions. Much emphasis is placed on the design of new band-importance weighting functions which can be used in situations wherein speech is corrupted by fluctuating maskers. The proposed measures were evaluated with intelligibility scores obtained by normal-hearing listeners in 72 noisy conditions involving noise-suppressed speech (consonants and sentences) corrupted by four different maskers (car, babble, train, and street interferences). Of all the measures considered, the modified coherence-based measures and speech-based STI measures incorporating signal-specific band-importance functions yielded the highest correlations (r=0.89–0.94). The modified coherence measure, in particular, that only included vowel/consonant transitions and weak consonant information yielded the highest correlation (r=0.94) with sentence recognition scores. The results from this study clearly suggest that the traditional AI and STI indices could benefit from the use of the proposed signal- and segment-dependent band-importance functions. PMID:19425678
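    To make the role of a band-importance function concrete, a simplified AI-style computation is sketched below: per-band audibility is obtained by mapping the band SNR linearly from [-15, +15] dB onto [0, 1], then weighting by the band-importance function. This is the classical AI form, not the modified short-term measures proposed in the paper.

    ```python
    import numpy as np

    def articulation_index(snr_db, band_importance):
        """Simplified AI: clip each band's SNR to [-15, +15] dB, map to a
        [0, 1] audibility, and take the band-importance-weighted average."""
        audibility = np.clip((np.asarray(snr_db, dtype=float) + 15.0) / 30.0, 0.0, 1.0)
        w = np.asarray(band_importance, dtype=float)
        return float(np.dot(w / w.sum(), audibility))
    ```

    Signal- and segment-dependent band-importance functions, as proposed in the paper, would replace the fixed weights with weights recomputed per speech segment.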

  6. Methodology and measures for preventing unacceptable flow-accelerated corrosion thinning of pipelines and equipment of NPP power generating units

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.

    2016-10-01

    Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment of the condensate-feeding and wet-steam paths of NPP power-generating units (PGU) are examined. The goals, objectives, and main principles are formulated for the implementation of an integrated program of AO Concern Rosenergoatom for the prevention of unacceptable FAC thinning and for increasing the operational FAC resistance of NPP equipment and pipelines (EaP) (hereinafter, the Program). The role and potential of Russian software packages for evaluating and predicting FAC rates are shown for the practical task of timely detection of unacceptable FAC thinning in elements of the EaP of the secondary circuit of NPP PGUs. Information is given concerning the structure, properties, and functions of the software systems for plant-personnel support in the monitoring and planning of the in-service inspection of FAC-thinned elements of pipelines and equipment of the secondary circuit of NPP PGUs, which have been created and implemented at some Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. It is noted that one of the most important practical results of the software packages for supporting NPP personnel on the issue of flow-accelerated corrosion consists in revealing elements at hazard of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in using the software systems to support personnel in the early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of work on the Program are presented, and the new tasks set in 2012 as part of the updated Program are described. The prospects of the developed methods and tools within the scope of the Program measures at the design and construction stages of NPP PGUs are discussed. The main directions of the work on solving the problems of flow
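    The screening step that such personnel-support software performs, flagging elements whose wall thickness is approaching the unacceptable level, reduces to a remaining-life estimate under a measured thinning rate. The sketch below is a generic illustration with hypothetical names and units, not the logic of the Russian software packages described above.

    ```python
    def years_to_minimum_thickness(current_mm, minimum_allowable_mm,
                                   fac_rate_mm_per_year):
        """Remaining life of a pipeline element under a constant measured FAC
        rate. Elements with a small result are prioritised for the next
        in-service inspection."""
        margin = current_mm - minimum_allowable_mm
        if margin <= 0.0:
            return 0.0                 # already at or below the allowable limit
        if fac_rate_mm_per_year <= 0.0:
            return float("inf")        # no measurable thinning
        return margin / fac_rate_mm_per_year
    ```

    In practice the rate itself would come from an FAC prediction code or from successive thickness measurements, and the result would be compared against the inspection interval.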

  7. Photovoltaic module energy rating methodology development

    SciTech Connect

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L.; Whitaker, C.; Newmiller, J.

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.
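    An energy-rating calculation of the kind described weights module performance over a weather profile. A minimal sketch is shown below using a generic temperature-corrected power model, P = P_STC · (G/1000) · (1 + γ·(T_cell − 25)); the model form, the default coefficient, and the function name are common illustrative assumptions, not the consensus methodology itself.

    ```python
    def module_energy_kwh(irradiance_w_m2, cell_temp_c,
                          p_stc_w=300.0, gamma_per_c=-0.004, step_h=1.0):
        """Sum a temperature-corrected DC power model over an hourly weather
        profile (irradiance in W/m^2, cell temperature in deg C)."""
        energy_wh = 0.0
        for g, t in zip(irradiance_w_m2, cell_temp_c):
            p = p_stc_w * (g / 1000.0) * (1.0 + gamma_per_c * (t - 25.0))
            energy_wh += max(p, 0.0) * step_h
        return energy_wh / 1000.0
    ```

    Running this over the weather profiles for different U.S. climates is what exposes the performance differences between module types that the methodology aims to capture.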

  8. PLANE-INTEGRATED OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROMETRY METHODOLOGY FOR ANAEROBIC SWINE LAGOON EMISSION MEASUREMENTS

    EPA Science Inventory

    Emissions of ammonia and methane from an anaerobic lagoon at a swine animal feeding operation were evaluated five times over a period of two years. The plane-integrated (PI) open-path Fourier transform infrared spectrometry (OP-FTIR) methodology was used to transect the plume at ...

  9. Optimization of hydrolysis conditions for the production of glucomanno-oligosaccharides from konjac using β-mannanase by response surface methodology.

    PubMed

    Chen, Junfan; Liu, Desheng; Shi, Bo; Wang, Hai; Cheng, Yongqiang; Zhang, Wenjing

    2013-03-01

    Glucomanno-oligosaccharides (GMO), usually produced by hydrolysis of konjac tubers with a high content of glucomannan, have a positive effect on Bifidobacterium as well as a variety of other physiological activities. Response surface methodology (RSM) was employed to optimize the hydrolysis time, hydrolysis temperature, pH and enzyme-to-substrate ratio (E/S) to obtain a high GMO yield from konjac tubers. From the single-factor experiments, it was concluded that the change in direct reducing sugar (DRS) is consistent with that in total reducing sugar (TRS) but opposite to that in the degree of polymerization (DP). DRS was used as an indicator of the GMO content in the RSM study. The optimum RSM operating conditions were a reaction time of 3.4 h, a reaction temperature of 41.0°C, a pH of 7.1 and an E/S of 0.49. The results suggested that the enzymatic hydrolysis was enhanced by temperature, pH and incubation time. Model validation showed good agreement between experimental results and the predicted responses.
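The core RSM step, fitting a second-order polynomial to the measured responses and solving for the stationary point, can be sketched as follows. The two-factor synthetic data below are invented for illustration; the study's actual factors were time, temperature, pH and E/S.

```python
import numpy as np

# RSM sketch: fit y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# to (synthetic) experimental data, then solve grad(y) = 0 for the optimum.
rng = np.random.default_rng(0)
x1, x2 = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 30)
y = 10 - 2 * (x1 - 0.4) ** 2 - 3 * (x2 + 0.2) ** 2  # true optimum at (0.4, -0.2)

# Design matrix for the six polynomial terms, solved by least squares.
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: 2x2 linear system from dy/dx1 = dy/dx2 = 0.
H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
opt = np.linalg.solve(H, -b[1:3])
print(np.round(opt, 3))  # recovers (0.4, -0.2)
```

In practice the fitted surface also has to be checked for lack of fit and for whether the stationary point is a maximum, which is what the paper's model validation addresses.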

  10. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises collection of drill core data, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations were performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution, whereas the length of the carbonatite does not follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, was developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code with the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
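The two samplers named in the abstract can be sketched generically. The exponential mean and the stand-in carbonatite density below are illustrative placeholders, not the Chengmenshan borehole statistics.

```python
import math
import random

random.seed(42)

def karst_cave_length(mean=2.0):
    """Inverse transform: if U ~ Uniform(0,1), -mean*ln(1-U) ~ Exp(mean)."""
    return -mean * math.log(1.0 - random.random())

def carbonatite_length(pdf, x_max, f_max):
    """Acceptance-rejection against a uniform envelope on [0, x_max]."""
    while True:
        x = random.uniform(0.0, x_max)
        if random.uniform(0.0, f_max) <= pdf(x):
            return x

# A triangular density on [0, 10] standing in for the non-standard
# carbonatite length distribution (peak 0.2 at x = 5).
tri = lambda x: 0.2 * (1 - abs(x - 5) / 5)

samples = [karst_cave_length() for _ in range(100000)]
print(round(sum(samples) / len(samples), 2))  # sample mean close to 2.0
```

Acceptance-rejection is the natural fallback here precisely because the carbonatite lengths fit no standard distribution: any empirical density estimate can serve as `pdf`.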

  11. Quantification of furanic compounds in coated deep-fried products simulating normal preparation and consumption: optimisation of HS-SPME analytical conditions by response surface methodology.

    PubMed

    Pérez-Palacios, T; Petisca, C; Melo, A; Ferreira, I M P L V O

    2012-12-01

    The validation of a method for the simultaneous quantification of furanic compounds in coated deep-fried samples, processed and handled as usually consumed, is presented. The deep-fried food was ground using a device that simulates mastication and immediately analysed by headspace solid-phase microextraction coupled to gas chromatography-mass spectrometry. Parameters affecting the efficiency of the HS-SPME procedure were selected by response surface methodology, using a 2³ full-factorial central composite design. Optimal conditions were achieved using 2 g of sample, 3 g of NaCl and 40 min of absorption time at 37°C. Consistency between predicted and experimental values was observed, and quality parameters of the method were established. As a result, furan, 2-furfural, furfuryl alcohol and 2-pentylfuran were, for the first time, simultaneously detected and quantified (5.59, 0.27, 10.48 and 1.77 μg g⁻¹ sample, respectively) in coated deep-fried fish, contributing to a better understanding of the amounts of these compounds in food.

  12. Evaluating of scale-up methodologies of gas-solid spouted beds for coating TRISO nuclear fuel particles using advanced measurement techniques

    NASA Astrophysics Data System (ADS)

    Ali, Neven Y.

    This work implements, for the first time, advanced non-invasive measurement techniques to evaluate two scale-up methodologies of gas-solid spouted beds for hydrodynamic similarity: the approach reported in the literature, based on matching dimensionless groups, and the new mechanistic methodology developed in our laboratory, based on matching the radial profile of gas holdup, since the gas dynamics dictate the hydrodynamics of gas-solid spouted beds. The techniques are gamma-ray computed tomography (CT), to measure the cross-sectional distribution of the phase holdups and their radial profiles along the bed height, and radioactive particle tracking (RPT), to measure the three-dimensional (3D) solids velocity and its turbulence parameters. The measured local parameters and the analysis of the results validate our new scale-up methodology by demonstrating similarity of the phase holdups and of the dimensionless solids velocities and turbulence parameters, non-dimensionalized by the minimum spouting superficial gas velocity. In contrast, the scale-up methodology based on matching dimensionless groups was not validated for hydrodynamic similarity with respect to local parameters such as phase holdups and dimensionless solids velocities and turbulence parameters; in the literature this method had been validated only by measuring global parameters. Thus, this work confirms that validation of scale-up methods for gas-solid spouted beds should rest on measuring and analyzing the local hydrodynamic parameters.

  13. Effects of acetic acid injection and operating conditions on NO emission in a vortexing fluidized bed combustor using response surface methodology

    SciTech Connect

    Fuping Qian; Chiensong Chyang; Weishen Yen

    2009-07-15

    The effects of acetic acid injection and operating conditions on NO emission were investigated in a pilot-scale vortexing fluidized bed combustor (VFBC), which integrates a circular freeboard with a rectangular combustion chamber. Operating conditions, such as the stoichiometric oxygen in the combustion chamber, the bed temperature and the injection location of the acetic acid, were examined by means of response surface methodology (RSM), which enables the study of parameters with a moderate number of experiments. In the RSM, the NO emission concentration after acetic acid injection and the NO removal percentage at the exit of the VFBC were used as the objective functions. The results show that the bed temperature has a more important effect on NO emission than the injection location of the acetic acid and the stoichiometric oxygen in the combustion chamber, whereas the injection location and the stoichiometric oxygen have a more important effect on the NO removal percentage than the bed temperature. NO emission can be decreased by injecting acetic acid into the combustion chamber, and it decreases with the height of the injection location above the distributor. The NO removal percentage increases with the height of the injection location, while NO emission increases with the stoichiometric oxygen in the combustion chamber and with the bed temperature. The NO removal percentage increases with the stoichiometric oxygen, and first increases, then decreases, with the bed temperature; a higher NO removal percentage could be obtained at 850°C. 26 refs., 12 figs., 8 tabs.

  14. Methodology for Using 3-Dimensional Sonography to Measure Fetal Adrenal Gland Volumes in Pregnant Women With and Without Early Life Stress.

    PubMed

    Kim, Deborah; Epperson, C Neill; Ewing, Grace; Appleby, Dina; Sammel, Mary D; Wang, Eileen

    2016-09-01

    Fetal adrenal gland volumes on 3-dimensional sonography have been studied as potential predictors of preterm birth. However, no consistent methodology has been published. This article describes the methodology used in a study that is evaluating the effects of maternal early life stress on fetal adrenal growth to allow other researchers to compare methodologies across studies. Fetal volumetric data were obtained in 36 women at 20 to 22 and 28 to 30 weeks' gestation. Two independent examiners measured multiple images of a single fetal adrenal gland from each sonogram. Intra- and inter-rater consistency was examined. In addition, fetal adrenal volumes between male and female fetuses were reported. The intra- and inter-rater reliability was satisfactory when the mean of 3 measurements from each rater was used. At 20 weeks' gestation, male fetuses had larger average adjusted adrenal volumes than female fetuses (mean, 0.897 versus 0.638; P = .004). At 28 weeks' gestation, the fetal weight was more influential in determining values for adjusted fetal adrenal volume (0.672 for male fetuses versus 0.526 for female fetuses; P = .034). This article presents a methodology for assessing fetal adrenal volume using 3-dimensional sonography that can be used by other researchers to provide more consistency across studies. PMID:27562975

  15. Methodology for Using 3-Dimensional Sonography to Measure Fetal Adrenal Gland Volumes in Pregnant Women With and Without Early Life Stress.

    PubMed

    Kim, Deborah; Epperson, C Neill; Ewing, Grace; Appleby, Dina; Sammel, Mary D; Wang, Eileen

    2016-09-01

    Fetal adrenal gland volumes on 3-dimensional sonography have been studied as potential predictors of preterm birth. However, no consistent methodology has been published. This article describes the methodology used in a study that is evaluating the effects of maternal early life stress on fetal adrenal growth to allow other researchers to compare methodologies across studies. Fetal volumetric data were obtained in 36 women at 20 to 22 and 28 to 30 weeks' gestation. Two independent examiners measured multiple images of a single fetal adrenal gland from each sonogram. Intra- and inter-rater consistency was examined. In addition, fetal adrenal volumes between male and female fetuses were reported. The intra- and inter-rater reliability was satisfactory when the mean of 3 measurements from each rater was used. At 20 weeks' gestation, male fetuses had larger average adjusted adrenal volumes than female fetuses (mean, 0.897 versus 0.638; P = .004). At 28 weeks' gestation, the fetal weight was more influential in determining values for adjusted fetal adrenal volume (0.672 for male fetuses versus 0.526 for female fetuses; P = .034). This article presents a methodology for assessing fetal adrenal volume using 3-dimensional sonography that can be used by other researchers to provide more consistency across studies.
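The intra- and inter-rater consistency check described above is commonly quantified with an intraclass correlation coefficient. The sketch below implements a two-way ICC(2,1) (absolute agreement, single measurement); the ratings matrix is invented (rows = fetuses, columns = the two examiners' mean-of-3 volumes), not the study's data.

```python
import numpy as np

def icc2_1(ratings):
    """Two-way random-effects ICC(2,1) for an n-subjects x k-raters matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between subjects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((ratings - row_means[:, None] - col_means[None, :] + grand) ** 2).sum()
    mse = sse / ((n - 1) * (k - 1))                       # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical adjusted adrenal volumes from two examiners.
ratings = np.array([[0.64, 0.66], [0.90, 0.88], [0.75, 0.77],
                    [0.52, 0.55], [0.83, 0.80]])
print(round(icc2_1(ratings), 3))  # high agreement
```

Averaging three measurements per rater before computing the ICC, as the study does, reduces within-rater noise and is what makes the reported reliability "satisfactory".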

  16. Outcome measures for vulval skin conditions: a systematic review of randomized controlled trials.

    PubMed

    Simpson, R C; Thomas, K S; Murphy, R

    2013-09-01

    Symptoms and signs of vulval skin disorders are common. These conditions can have a considerable impact on quality of life, restricting physical activities and causing difficulty in everyday activities and may also affect social, psychosexual and psychological well-being. There are no standardized measures routinely used to assess the impact of vulval disease on daily life. To report outcome measures used in clinically based randomized controlled trials (RCTs) investigating therapeutic interventions in vulval disease. The Medline, EMBASE and CENTRAL databases were searched to identify RCTs of vulval skin conditions written in English. Studies with laboratory tests or survival rates as the primary outcome, or those investigating menopausal symptoms or infections were excluded. Twenty-eight published RCTs were included. The vulval conditions represented were vulvodynia (n = 14), lichen sclerosus (n = 9), vulval intraepithelial neoplasia (n = 2), vulval pruritus (n = 2) and lichen planus (n = 1). The 28 RCTs measured 25 different outcomes, using 49 different scales. The method of outcome assessment was lacking on nine occasions. Only 21% (six of 28) of included trials had a clearly stated primary outcome. Patient-reported outcomes were more commonly reported than clinician-related outcome measures. The most commonly reported patient-rated outcome measure was a reduction in pain (measured 15 times) and an overall improvement in symptoms using a patient global assessment (measured 11 times). The most commonly reported clinician-rated outcome was an overall assessment of the appearance of affected sites (measured 13 times). There were no agreed standard scales used for the global assessments. Only nine of the recorded outcome measure tools were designed to assess vulval disease or sexual functioning, the remainder were general measures. There is heterogeneity in the outcome measures used when reporting therapeutic interventions in vulval disease. This field of

  17. Methodological Adaptations for Reliable Measurement of Radium and Radon Isotopes in Hydrothermal Fluids of Extreme Chemical Diversity in Yellowstone National Park, Wyoming, USA

    NASA Astrophysics Data System (ADS)

    Role, A.; Sims, K. W. W.; Scott, S. R.; Lane-Smith, D. R.

    2015-12-01

    To quantitatively model fluid residence times, water-rock-gas interactions, and fluid flow rates in the Yellowstone (YS) hydrothermal system, we are measuring short-lived isotopes of Ra (228Ra, 226Ra, 224Ra, 223Ra) and Rn (222Rn and 220Rn) in hydrothermal fluids and gases. While these isotopes have been used successfully in investigations of water residence times, mixing, and groundwater discharge in oceanic, coastal, and estuarine environments, the well-established techniques for measuring Ra and Rn isotopes were developed for seawater and dilute groundwaters, which have near-neutral pH, moderate temperatures, and a limited range of chemical composition. Unfortunately, these techniques, as originally developed, are not suitable for the extreme range of compositions found in YS waters, which have pH from <1 to 10, Eh from −0.208 to 0.700 V, water temperatures from ambient to 93 °C, and high dissolved CO2 concentrations. Here we report on our refinements of these Ra and Rn methods for the extreme conditions found in YS. Our methodologies now enable us to quantitatively isolate Ra from fluids covering a large range of chemical compositions and to conduct in-situ Rn isotope measurements that accommodate variable temperatures and high CO2 (Lane-Smith and Sims, 2013, Acta Geophys. 61). These Ra and Rn measurements now allow us to apply simple models to quantify hot-spring water residence times and aquifer-to-surface travel times. (224Ra/223Ra) calculations give travel times from the water-rock reaction zone to the discharge zone of 4 to 14 days for Yellowstone hot springs, and (224Ra/228Ra) gives shallow-aquifer-to-surface travel times from 2 to 7 days. Further development of more sophisticated models that take into account water-rock-gas reactions and water mixing (shallow groundwater, surface run-off, etc.) will allow us to estimate the timescales of these processes more accurately and thus provide a heretofore-unknown time component to the YS hydrothermal system.
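The radium-pair "decay clock" behind such travel-time estimates can be sketched simply: an activity ratio of a short-lived to a longer-lived isotope decays as R(t) = R0·exp(−(λ_short − λ_long)·t), so t = ln(R0/R)/(λ_short − λ_long). The initial and observed ratios below are illustrative values, not the Yellowstone measurements.

```python
import math

# Decay constants (1/day) from the accepted half-lives.
L224 = math.log(2) / 3.66    # 224Ra, T1/2 = 3.66 d
L223 = math.log(2) / 11.43   # 223Ra, T1/2 = 11.43 d

def travel_time_days(r0, r):
    """Days for the (224Ra/223Ra) activity ratio to fall from r0 to r."""
    return math.log(r0 / r) / (L224 - L223)

# Hypothetical reaction-zone ratio r0 and spring-discharge ratio r.
print(round(travel_time_days(10.0, 2.5), 1))  # ~10.8 days
```

This assumes a closed system with no Ra addition along the flow path; the more sophisticated models mentioned in the abstract relax exactly that assumption.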

  18. Online estimation of assimilable nitrogen by electrical conductivity measurement during alcoholic fermentation in enological conditions.

    PubMed

    Colombié, Sophie; Latrille, Eric; Sablayrolles, Jean-Marie

    2007-03-01

    The monitoring of alcoholic fermentation under enological conditions is currently poor owing to the lack of sensors for online measurements; such monitoring is limited to measuring CO2 production or changes in density. In this study, we assessed the potential value of measuring electrical conductivity. We showed that this measurement is related to the assimilation of nitrogen, which is typically the limiting nutrient, and is directly correlated with ammoniacal nitrogen assimilation at any proportion of ammoniacal nitrogen in the medium. We also used electrical conductivity for very precise monitoring of the kinetics of nitrogen assimilation after the addition of a pulse of diammonium hydrogen phosphate (DAP) during fermentation. The impact of initial conditions (e.g., must composition, grape variety, pH) remains unclear, but the robustness, precision, and low price of the sensor justify further studies of the potential value of measuring electrical conductivity at the pilot and industrial scales. PMID:17434425
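The calibration idea implied by the reported correlation can be sketched as a simple linear fit: pair offline nitrogen measurements with the observed conductivity drop, fit a slope, then estimate assimilated nitrogen online from conductivity alone. All numbers below are invented for illustration; the paper does not publish this calibration.

```python
# Hypothetical paired data: (conductivity drop in dS/m, assimilated N in mg/L).
pairs = [(0.00, 0), (0.12, 40), (0.25, 85), (0.37, 120), (0.50, 165)]

# Ordinary least-squares slope and intercept, computed from the normal equations.
n = len(pairs)
sx = sum(x for x, _ in pairs)
sy = sum(y for _, y in pairs)
sxx = sum(x * x for x, _ in pairs)
sxy = sum(x * y for x, y in pairs)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def nitrogen_assimilated(cond_drop):
    """Estimate assimilated ammoniacal N (mg/L) from the conductivity drop."""
    return slope * cond_drop + intercept

print(round(nitrogen_assimilated(0.30), 1))  # ~99.1 mg N/L for a 0.30 dS/m drop
```

The attraction of the approach is that conductivity is cheap and robust to measure online, whereas the reference nitrogen assays are offline and slow.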

  19. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    PubMed Central

    Egert, Amanda M.; Klotz, James L.; McLeod, Kyle R.; Harmon, David L.

    2014-01-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P < 0.01) between hours and thirds (8-h periods) of the day. There were no differences between days for most variables. The variance of the second 8-h period of the day was less than (P < 0.01) the first for area and less than the third for amplitude, frequency, duration, and area (P < 0.05). These data demonstrated that the second 8-h period of the day was the least variable for many measures of motility and would provide the best opportunity for testing differences in motility due to treatments. In Experiment 2, the steers (n = 8) were pair-fed the basal diet of Experiment 1 and dosed with endophyte-free (E−) or endophyte-infected (E+; 0 or 10 μg ergovaline + ergovalinine/kg BW; respectively) tall fescue seed before feeding for 15 d. Rumen motility was measured for 8 h beginning 8 h after feeding for the first 14 d of seed dosing. Blood samples were taken on d 1, 7, and 15, and rumen content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8 h period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of

  20. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    NASA Astrophysics Data System (ADS)

    Egert, Amanda; Klotz, James; McLeod, Kyle; Harmon, David

    2014-10-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P < 0.01) between hours and thirds (8-h periods) of the day. There were no differences between days for most variables. The variance of the second 8-h period of the day was less than (P < 0.01) the first for area and less than the third for amplitude, frequency, duration, and area (P < 0.05). These data demonstrated that the second 8-h period of the day was the least variable for many measures of motility and would provide the best opportunity for testing differences in motility due to treatments. In Exp. 2, the steers (n = 8) were pair-fed the basal diet of Exp. 1 and dosed with endophyte-free (E-) or endophyte-infected (E+; 0 or 10 μg ergovaline + ergovalinine / kg BW; respectively) tall fescue seed before feeding for 15 d. Rumen motility was measured for 8 h beginning 8 h after feeding for the first 14 d of seed dosing. Blood samples were taken on d 1, 7, and 15, and rumen content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8 hour period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of rumen

  1. Methodological study investigating long term laser Doppler measured cerebral blood flow changes in a permanently occluded rat stroke model.

    PubMed

    Eve, David J; Musso, James; Park, Dong-Hyuk; Oliveira, Cathy; Pollock, Kenny; Hope, Andrew; Baradez, Marc-Olivier; Sinden, John D; Sanberg, Paul R

    2009-05-30

    Cerebral blood flow is impaired during middle cerebral artery occlusion in the rat model of stroke. However, the long-term effects on cerebral blood flow following occlusion have received little attention. We examined cerebral blood flow in both sides at multiple time points following middle cerebral artery occlusion in the rat. Bilateral cerebral blood flow in young male Sprague Dawley rats was measured at the time of occlusion, as well as 4, 10, and 16 weeks after occlusion. Under the present experimental conditions, the difference between the left and right sides' cerebral blood flow appeared to oscillate in direction over time in the sham-treated group, whereas the occluded animals consistently showed left-side dominance. One group of rats received an intraparenchymal transplant of a human neural stem cell line (CTX0E03 cells) known to be beneficial in stroke models. Cerebral blood flow in the lesioned side of the cell-treated group was improved compared with the untreated rats and showed an oscillatory pattern similar to that observed in sham-treated animals. These findings suggest that repeated bilateral monitoring of cerebral blood flow over time can reveal the effects of stem cell transplantation as efficiently as functional tests in an animal stroke model.

  2. Extended necessary condition for local operations and classical communication: Tight bound for all measurements

    NASA Astrophysics Data System (ADS)

    Cohen, Scott M.

    2015-06-01

    We give a necessary condition that a separable measurement can be implemented by local quantum operations and classical communication (LOCC) in any finite number of rounds of communication, generalizing and strengthening a result obtained previously. That earlier result involved a bound that is tight when the number of measurement operators defining the measurement is relatively small. The present results generalize that bound to one that is tight for any finite number of measurement operators, and we also provide an extension which holds when that number is infinite. We apply these results to the famous example on a 3 × 3 system known as "domino states," which were the first demonstration of nonlocality without entanglement. Our extended necessary condition provides another way of showing that these states cannot be perfectly distinguished by (finite-round) LOCC. It directly shows that this conclusion also holds for their related rotated domino states. We also introduce a class of problems involving the unambiguous discrimination of quantum states, each of which is an example where the states can be optimally discriminated by a separable measurement, but according to our condition, cannot be optimally discriminated by LOCC. These examples nicely illustrate the usefulness of the present results, since our earlier necessary condition, which the present result generalizes, is not strong enough to reach a conclusion in any of these cases.

  3. Effect of Complex Working Conditions on Nurses Who Exert Coercive Measures in Forensic Psychiatric Care.

    PubMed

    Gustafsson, Niclas; Salzmann-Erikson, Martin

    2016-09-01

    Nurses who exert coercive measures on patients within psychiatric care are emotionally affected. However, research on their working conditions and environment is limited. The purpose of the current study was to describe nurses' experiences and thoughts concerning the exertion of coercive measures in forensic psychiatric care. The investigation was a qualitative interview study using unstructured interviews; data were analyzed with inductive content analysis. Results described participants' thoughts and experiences of coercive measures from four main categories: (a) acting against the patients' will, (b) reasoning about ethical justifications, (c) feelings of compassion, and (d) the need for debriefing. The current study illuminates the working conditions of nurses who exert coercive measures in clinical practice with patients who have a long-term relationship with severe symptomatology. The findings are important to further discuss how nurses and leaders can promote a healthier working environment. [Journal of Psychosocial Nursing and Mental Health Services, 54(9), 37-43.]. PMID:27576227

  4. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many Lean Six Sigma (L6S) tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  5. Statistical Measurements of Contact Conditions of 478 Transport-airplane Landings During Routine Daytime Operations

    NASA Technical Reports Server (NTRS)

    Silsby, Norman S

    1955-01-01

    Statistical measurements of contact conditions have been obtained, by means of a special photographic technique, for 478 landings of present-day transport airplanes made during routine daylight operations in clear air at the Washington National Airport. From the measurements, sinking speeds, rolling velocities, bank angles, and horizontal speeds at the instant before contact have been evaluated, and a limited statistical analysis of the results is reported herein.

  6. Optimization of Microwave-Assisted Extraction Conditions for Five Major Bioactive Compounds from Flos Sophorae Immaturus (Cultivars of Sophora japonica L.) Using Response Surface Methodology.

    PubMed

    Liu, Jin-Liang; Li, Long-Yun; He, Guang-Hua

    2016-03-02

    Microwave-assisted extraction was applied to extract rutin, quercetin, genistein, kaempferol, and isorhamnetin from Flos Sophorae Immaturus. Six independent variables, namely solvent type, particle size, extraction frequency, liquid-to-solid ratio, microwave power, and extraction time, were examined. Response surface methodology using a central composite design was employed to optimize the experimental conditions (liquid-to-solid ratio, microwave power, and extraction time), based on the results of single-factor tests, to extract the five major components of Flos Sophorae Immaturus. Experimental data were fitted to a second-order polynomial equation using multiple regression analysis and were also analyzed using appropriate statistical methods. Optimal extraction conditions were as follows: extraction solvent, 100% methanol; particle size, 100 mesh; extraction frequency, 1; liquid-to-solid ratio, 50:1; microwave power, 287 W; and extraction time, 80 s. A rapid and sensitive ultra-high performance liquid chromatography method coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry (ESI-Q-TOF MS/MS) was developed and validated for the simultaneous determination of rutin, quercetin, genistein, kaempferol, and isorhamnetin in Flos Sophorae Immaturus. Chromatographic separation was accomplished on a Kinetex C18 column (100 mm × 2.1 mm, 2.6 μm) at 40 °C within 5 min. The mobile phase consisted of 0.1% aqueous formic acid and acetonitrile (71:29, v/v). Isocratic elution was carried out at a flow rate of 0.35 mL/min. The constituents of Flos Sophorae Immaturus were simultaneously identified by ESI-Q-TOF MS/MS in multiple reaction monitoring mode. In the quantitative analysis, all of the calibration curves showed good linear relationships (R² > 0.999) within the tested ranges, and mean recoveries ranged from 96.0216% to 101.0601%. The precision determined through intra- and inter-day studies showed an RSD of <2.833%. These results

  7. Optimization of Microwave-Assisted Extraction Conditions for Five Major Bioactive Compounds from Flos Sophorae Immaturus (Cultivars of Sophora japonica L.) Using Response Surface Methodology.

    PubMed

    Liu, Jin-Liang; Li, Long-Yun; He, Guang-Hua

    2016-01-01

    Microwave-assisted extraction was applied to extract rutin, quercetin, genistein, kaempferol, and isorhamnetin from Flos Sophorae Immaturus. Six independent variables, namely solvent type, particle size, extraction frequency, liquid-to-solid ratio, microwave power, and extraction time, were examined. Response surface methodology using a central composite design was employed to optimize the experimental conditions (liquid-to-solid ratio, microwave power, and extraction time), based on the results of single-factor tests, to extract the five major components of Flos Sophorae Immaturus. Experimental data were fitted to a second-order polynomial equation using multiple regression analysis and were also analyzed using appropriate statistical methods. Optimal extraction conditions were as follows: extraction solvent, 100% methanol; particle size, 100 mesh; extraction frequency, 1; liquid-to-solid ratio, 50:1; microwave power, 287 W; and extraction time, 80 s. A rapid and sensitive ultra-high performance liquid chromatography method coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry (ESI-Q-TOF MS/MS) was developed and validated for the simultaneous determination of rutin, quercetin, genistein, kaempferol, and isorhamnetin in Flos Sophorae Immaturus. Chromatographic separation was accomplished on a Kinetex C18 column (100 mm × 2.1 mm, 2.6 μm) at 40 °C within 5 min. The mobile phase consisted of 0.1% aqueous formic acid and acetonitrile (71:29, v/v). Isocratic elution was carried out at a flow rate of 0.35 mL/min. The constituents of Flos Sophorae Immaturus were simultaneously identified by ESI-Q-TOF MS/MS in multiple reaction monitoring mode. In the quantitative analysis, all of the calibration curves showed good linear relationships (R² > 0.999) within the tested ranges, and mean recoveries ranged from 96.0216% to 101.0601%. The precision determined through intra- and inter-day studies showed an RSD of <2.833%. These results

  8. Methodological Issues in Mobile Computer-Supported Collaborative Learning (mCSCL): What Methods, What to Measure and When to Measure?

    ERIC Educational Resources Information Center

    Song, Yanjie

    2014-01-01

    This study aims to investigate (1) methods utilized in mobile computer-supported collaborative learning (mCSCL) research which focuses on studying, learning and collaboration mediated by mobile devices; (2) whether these methods have examined mCSCL effectively; (3) when the methods are administered; and (4) what methodological issues exist in…

  9. Methodological Gravitism

    ERIC Educational Resources Information Center

    Zaman, Muhammad

    2011-01-01

    In this paper the author presents the case of the exchange marriage system to delineate a model of methodological gravitism. Such a model is not a deviation from or an alteration to existing qualitative research approaches. The author adopted a culturally specific methodology to investigate spouse selection in line with the Grounded Theory Method. This…

  10. Measuring Bathymetry, Runup, and Beach Volume Change during Storms: New Methodology Quantifies Substantial Changes in Cross-Shore Sediment Flux

    NASA Astrophysics Data System (ADS)

    Brodie, K. L.; McNinch, J. E.

    2009-12-01

    Accurate predictions of beach change during storms are contingent upon a correct understanding of wave-driven sediment exchange between the beach and nearshore during high energy conditions. Conventional storm data sets use “pre” (often weeks to months prior) and “post” (often many days after the storm in calm conditions) collections of beach topography and nearshore bathymetry to characterize the effects of the storm. These data have led to a common theory for wave-driven event response of the nearshore system, wherein bars and shorelines are smoothed and straightened by strong alongshore currents into two-dimensional, linear forms. Post-storm, the shoreline accretes, bars migrate onshore, and three-dimensional shapes begin to build as low-energy swell returns. Unfortunately, these approaches have left us with a knowledge gap of the extent and timing of erosion and accretion during storms, arguably the most important information both for scientists trying to model storm damage or inundation, and homeowners trying to manage their properties. This work presents the first spatially extensive (10 km alongshore) and temporally high-resolution (dt = 12 hours) quantitative data set of beach volume and nearshore bathymetry evolution during a Nor’easter on North Carolina’s Outer Banks. During the Nor’easter, significant wave height peaked at 3.4 m, and was greater than 2 m for 37 hours, as measured by the Duck FRF 8 m array. Data were collected using CLARIS: Coastal Lidar and Radar Imaging System, a mobile system that couples simultaneous observations of beach topography from a Riegl laser scanner and nearshore bathymetry (out to ~1 km offshore) from X-Band radar-derived celerity measurements (BASIR). The merging of foreshore lidar elevations with 6-min averages of radar-derived swash runup also enables mapping of maximum-runup elevations alongshore during the surveys. Results show that during the storm, neither the shoreline nor nearshore bathymetry returned

  11. Use of acoustic velocity methodology and remote sensing techniques to measure unsteady flow on the lower Yazoo River in Mississippi

    USGS Publications Warehouse

    Turnipseed, D. Phil; Cooper, Lance M.; Davis, Angela A.

    1998-01-01

    Methodologies have been developed for computing continuous discharge during varied, non-uniform low and medium flows on the Yazoo River at the U.S. Geological Survey streamgage below Steele Bayou near Long Lake, Mississippi, using acoustic signal processing and conventional streamgaging techniques. Procedures were also developed to compute locations of discharges during future high flow events when the stream reach is subject to bi-directional and reverse flow caused by rising stages on the Mississippi River using a combination of acoustic equipment and remote sensing technology. A description of the study area is presented. Selected results of these methods are presented for the period from March through September 1997.
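    The discharge computation behind such acoustic methods is commonly an index-velocity approach; a minimal sketch follows, with rating coefficients that are assumptions for illustration, not the actual Yazoo River calibration.

```python
# Index-velocity method sketch: an acoustic meter measures a line velocity
# ("index velocity") that is calibrated to the cross-section mean velocity.
def mean_velocity(v_index):
    # Linear index-velocity rating V_mean = a + b * V_index (assumed a, b).
    return 0.05 + 0.95 * v_index

def channel_area(stage_ft):
    # Assumed linear stage-area rating for the cross section (ft^2).
    return 120.0 * stage_ft

def discharge(stage_ft, v_index):
    # Q = A * V; a negative index velocity yields a negative discharge,
    # which is how reverse (backwater-driven) flow is represented.
    return channel_area(stage_ft) * mean_velocity(v_index)
```

    Calling `discharge(10.0, -0.5)` returns a negative value, capturing the reverse flow that occurs when Mississippi River stages rise above the Yazoo.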

  12. Making implicit measures of associations with snack foods more negative through evaluative conditioning.

    PubMed

    Lebens, Herbert; Roefs, Anne; Martijn, Carolien; Houben, Katrijn; Nederkoorn, Chantal; Jansen, Anita

    2011-12-01

    The present study examined whether implicit measures of associations with snack foods and food consumer behaviour could be changed through a picture-picture evaluative conditioning procedure. In the experimental condition (n=41), female participants completed a conditioning procedure in which pictures of snack foods were paired with images of negatively valenced female body shapes, and pictures of fruits were paired with images of positively valenced body shapes. In a control condition (n=44), snack and fruit stimuli were randomly paired with positively and negatively valenced body shapes. Implicit measures of associations with high-fat snack foods were obtained by using a positive and a negative unipolar single category Implicit Association Test (sc-IAT). A virtual supermarket task was used to assess food consumer behaviour. Results indicated that participants in the experimental condition held a less positive association with high-fat foods on the positive sc-IAT and a more negative association with these foods on the negative sc-IAT as compared to control participants. Opposed to our hypothesis, no behavioural differences were found between the groups. These results imply that this form of associative learning can produce shifts in implicit measures of food evaluations, though behavioural effects were absent.

  13. Measurement of dielectric properties of pumpable food materials under static and continuous flow conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous flow microwave sterilization is an emerging technology which has the potential to replace the conventional heating processes for viscous and pumpable food products. Dielectric properties of pumpable food products were measured by a new approach (under continuous flow conditions) at a temp...

  14. Native biofilm cultured under controllable condition and used in mediated method for BOD measurement.

    PubMed

    Liu, Ling; Deng, Liu; Yong, Daming; Dong, Shaojun

    2011-05-15

    In this article, we developed a native biofilm (NBF) bioreactor for use in the biochemical oxygen demand mediated method (BOD(Med)). Two innovations distinguish it from previous BOD(Med) assays. First, immobilization of microorganisms was adopted in the BOD(Med). Second, the NBF was introduced for BOD measurement. The NBF bioreactor was characterized by optical microscopy. A culture condition for the NBF of 24 h, 35 °C, and pH 7 was optimized. Furthermore, a measuring condition of 35 °C, pH 7, and 55 mM ferricyanide with a 1 h incubation was optimized. Under the optimized conditions, real wastewater samples from a local sewage treatment plant were measured. Performance of the NBFs cultured under different conditions was recorded for 110 d, and the results indicated good long-term storage stability. With the proposed method, an uncontaminated native microbial source solution can be obtained from a wastewater treatment plant, ensuring that the microbial species in the NBF are the same as those in the target to be measured.

  15. Measurement of dielectric properties of pumpable food materials under static and continuous flow conditions.

    PubMed

    Kumar, P; Coronel, P; Simunovic, J; Truong, V D; Sandeep, K P

    2007-05-01

    Continuous flow microwave sterilization is an emerging technology that has the potential to replace the conventional heating processes for viscous and pumpable food products. Dielectric properties of pumpable food products were measured by a new approach (under continuous flow conditions) at a temperature range of 20 to 130 degrees C and compared with those measured by the conventional approach (under static conditions). The food products chosen for this study were skim milk, green pea puree, carrot puree, and salsa con queso. Second-order polynomial correlations for the dependence of dielectric properties at 915 MHz of the food products on temperature were developed. Dielectric properties measured under static and continuous flow conditions were similar for homogeneous food products such as skim milk and vegetable puree, but they were significantly different for salsa con queso, which is a multiphase food product. The results from this study suggest that, for a multiphase product, dielectric properties measured under continuous flow conditions should be used for designing a continuous flow microwave heating system.
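    The second-order polynomial temperature correlations described above can be evaluated as in the sketch below; the coefficients are assumed round numbers, not the correlations reported in the study.

```python
import numpy as np

# Assumed second-order polynomial model for the dielectric constant of a
# food product versus temperature: eps'(T) = c2*T^2 + c1*T + c0.
coeffs = [-1.0e-4, -0.05, 70.0]   # [c2, c1, c0], illustrative values

eps_20 = np.polyval(coeffs, 20.0)     # low end of the 20-130 °C range
eps_130 = np.polyval(coeffs, 130.0)   # high end of the range
```

    With these assumed coefficients the dielectric constant decreases with temperature, the qualitative behavior such correlations are built to capture.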

  16. Measuring various sizes of H-reflex while monitoring the stimulus condition.

    PubMed

    Hiraoka, Koichi

    2002-11-01

    The purpose of this study was to assess the usefulness of a new technique that measured various sizes of the soleus H-reflex, while monitoring the stimulus condition. Eight healthy volunteers participated in this experiment. In the new technique, an above-motor-threshold conditioning stimulus was given to the tibial nerve 10-12 ms after a below-motor-threshold test stimulus. The conditioning stimulus evoked a direct M-wave, which was followed by a test-stimulus-evoked H-reflex. This reflex was followed by a conditioning stimulus-evoked H-reflex. The amount of the voluntary-contraction-induced facilitation of the H-reflex was similar for both the new technique and conventional technique, in which an above-motor-threshold test stimulus was given without a conditioning stimulus. Using the new technique, we found that the amount of facilitation increased linearly with the size of the test H-reflex. This technique allows us to evoke various sizes of H-reflex while monitoring a stimulus condition, and is useful for measuring H-reflexes during voluntary movement.

  17. Testing methodologies

    SciTech Connect

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational, or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes on the DNA level, while presently less well developed, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs.
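    The "signal to noise" limitation mentioned above follows from Poisson counting statistics; the sketch below uses an assumed background frequency and induced excess purely for illustration.

```python
import math

# Poisson signal-to-noise sketch for aberration dosimetry.
background = 0.001   # spontaneous aberrations per cell (assumed)
induced = 0.0005     # additional aberrations per cell from a low dose (assumed)
cells = 5000         # number of lymphocytes scored

expected_background = background * cells   # 5 expected background aberrations
signal = induced * cells                   # 2.5 excess aberrations
noise = math.sqrt(expected_background)     # Poisson counting noise on background

snr = signal / noise
```

    Here the signal-to-noise ratio comes out near 1, well below a conventional two-to-three sigma detection threshold, which is why very low doses are hard to resolve without scoring far more cells.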

  18. Simultaneous retrieval of atmospheric CO2 and light path modification from space-based spectroscopic observations of greenhouse gases: methodology and application to GOSAT measurements over TCCON sites.

    PubMed

    Oshchepkov, Sergey; Bril, Andrey; Yokota, Tatsuya; Yoshida, Yukio; Blumenstock, Thomas; Deutscher, Nicholas M; Dohe, Susanne; Macatangay, Ronald; Morino, Isamu; Notholt, Justus; Rettinger, Markus; Petri, Christof; Schneider, Matthias; Sussman, Ralf; Uchino, Osamu; Velazco, Voltaire; Wunch, Debra; Belikov, Dmitry

    2013-02-20

    This paper presents an improved photon path length probability density function method that permits simultaneous retrievals of column-average greenhouse gas mole fractions and light path modifications through the atmosphere when processing high-resolution radiance spectra acquired from space. We primarily describe the methodology and retrieval setup and then apply them to the processing of spectra measured by the Greenhouse gases Observing SATellite (GOSAT). We have demonstrated substantial improvements of the data processing with simultaneous carbon dioxide and light path retrievals and reasonable agreement of the satellite-based retrievals against ground-based Fourier transform spectrometer measurements provided by the Total Carbon Column Observing Network (TCCON).


  20. Investigation of Seepage Meter Measurements in Steady Flow and Wave Conditions.

    PubMed

    Russoniello, Christopher J; Michael, Holly A

    2015-01-01

    Water exchange between surface water and groundwater can modulate or generate ecologically important fluxes of solutes across the sediment-water interface. Seepage meters can directly measure fluid flux, but mechanical resistance and surface water dynamics may lead to inaccurate measurements. Tank experiments were conducted to determine effects of mechanical resistance on measurement efficiency and occurrence of directional asymmetry that could lead to erroneous net flux measurements. Seepage meter efficiency was high (average of 93%) and consistent for inflow and outflow under steady flow conditions. Wave effects on seepage meter measurements were investigated in a wave flume. Seepage meter net flux measurements averaged 0.08 cm/h, greater than the expected net-zero flux but significantly less than theoretical wave-driven unidirectional discharge or recharge. Calculations of unidirectional flux from pressure measurements (Darcy flux) and theory matched well for a ratio of wavelength to water depth less than 5, but not when this ratio was greater. Both were higher than seepage meter measurements of unidirectional flux made with one-way valves. Discharge averaged 23% greater than recharge in both seepage meter measurements and Darcy calculations of unidirectional flux. Removal of the collection bag reduced this net discharge. The presence of a seepage meter reduced the amplitude of pressure signals at the bed and resulted in a nearly uniform pressure distribution beneath the seepage meter. These results show that seepage meters may provide accurate measurements of both discharge and recharge under steady flow conditions and illustrate the potential measurement errors associated with dynamic wave environments.
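    The Darcy-flux calculation referenced above can be sketched minimally; the hydraulic conductivity and head values are assumptions for illustration, not the flume data.

```python
# Darcy flux from paired head measurements across a sediment layer.
def darcy_flux(h_upper, h_lower, dz, conductivity):
    # q = -K * dh/dz with z positive upward; positive q is discharge
    # (upward flow across the sediment-water interface).
    return -conductivity * (h_upper - h_lower) / dz

K = 10.0   # hydraulic conductivity, m/day (assumed sandy sediment)
# Heads measured 0.1 m apart vertically, in metres of water (assumed values):
q = darcy_flux(1.000, 1.002, 0.1, K)
```

    A slightly higher head at depth drives an upward (discharge) flux; reversing the head gradient flips the sign, which is the asymmetry the seepage meter comparison is probing.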

  1. Quality-of-Life Measures in Children With Neurological Conditions: Pediatric Neuro-QOL

    PubMed Central

    Lai, Jin-Shei; Nowinski, Cindy; Victorson, David; Bode, Rita; Podrabsky, Tracy; McKinney, Natalie; Straube, Don; Holmes, Gregory L.; McDonald, Craig M.; Henricson, Erik; Abresch, R. Ted; Moy, Claudia S.; Cella, David

    2013-01-01

    Background A comprehensive, reliable, and valid measurement system is needed to monitor changes in children with neurological conditions who experience lifelong functional limitations. Objective This article describes the development and psychometric properties of the pediatric version of the Quality of Life in Neurological Disorders (Neuro-QOL) measurement system. Methods The pediatric Neuro-QOL consists of generic and targeted measures. Literature review, focus groups, individual interviews, cognitive interviews of children and consensus meetings were used to identify and finalize relevant domains and item content. Testing was conducted on 1018 children aged 10 to 17 years drawn from the US general population for generic measures and 171 similarly aged children with muscular dystrophy or epilepsy for targeted measures. Dimensionality was evaluated using factor analytic methods. For unidimensional domains, item parameters were estimated using item response theory models. Measures with acceptable fit indices were calibrated as item banks; those without acceptable fit indices were treated as summary scales. Results Ten measures were developed: 8 generic or targeted banks (anxiety, depression, anger, interaction with peers, fatigue, pain, applied cognition, and stigma) and 2 generic scales (upper and lower extremity function). The banks reliably (r > 0.90) measured 63.2% to 100% of the children tested. Conclusions The pediatric Neuro-QOL is a comprehensive measurement system with acceptable psychometric properties that could be used in computerized adaptive testing. The next step is to validate these measures in various clinical populations. PMID:21788436

  2. Direct and High-Resolution Measurements of Retardation and Transport in Whole Rock Samples under Unsaturated Conditions

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Wang, J.

    2001-12-01

    Evaluation of chemical sorption and transport is very important in the investigations of contaminant remediation, risk assessment, and waste disposal (e.g., the potential high-level nuclear waste repository at Yucca Mountain, Nevada). Characterization of transport parameters for whole rock samples has typically been performed in batch systems with arbitrary grain sizes and a high water/rock ratio. Measurement of these parameters under conditions more representative of fractured rocks in situ provides a better understanding of the processes occurring there. The effective Kd approach has been commonly employed to quantify the extent of contaminant-medium-fluid interactions. Unrepresentative Kd values will lead to unrealistic assessments of contaminant transport. Experimentally determined Kd values are predominantly obtained from batch experiments under saturated and well-mixed conditions. Batch-sorption experiments can be problematic because: (1) saturated conditions with large water/rock ratios are not representative of the in situ vadose condition; (2) crushed rock samples are used, with the sample size (in the range of microns to sub-millimeters) chosen more or less arbitrarily and mainly for experimental convenience; and (3) for weakly sorbing contaminants, a batch-sorption approach can yield variable and even negative Kd values, because of the inherent methodology of calculating the Kd values by subtracting two large numbers (i.e., initial and final aqueous concentrations). In this work, we use an unsaturated transport-sorption approach to quantify the sorption behavior of contaminants and evaluate the applicability of the conventional batch-sorption approach in unsaturated rock. Transient experiments are designed to investigate water imbibition and chemical transport into the rock sample (with size in the centimeter range) by contacting one end of a sample with water containing chemical tracers. Capillary-driven imbibition transports chemicals farther away
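    The negative-Kd problem for weak sorbers can be shown with a few lines; the concentrations below are made-up numbers chosen to expose the subtraction of two nearly equal values.

```python
# Batch-sorption distribution coefficient: Kd = (C0 - Cf) * V / (Cf * m).
# For weakly sorbing tracers, C0 and Cf are nearly equal, so small
# measurement errors can flip the sign of the computed Kd.
def batch_kd(c_initial, c_final, volume_ml, mass_g):
    return (c_initial - c_final) * volume_ml / (c_final * mass_g)

kd_nominal = batch_kd(100.0, 99.0, 30.0, 1.0)   # true 1% uptake: small positive Kd
kd_noisy = batch_kd(100.0, 100.5, 30.0, 1.0)    # 0.5% measurement error: negative Kd
```

    A half-percent analytical error, entirely plausible in practice, is enough to produce the physically meaningless negative Kd the abstract describes.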

  3. Measurements of Electrical and Thermal Conductivity of Iron Under Earth's Core Conditions

    NASA Astrophysics Data System (ADS)

    Ohta, K.; Kuwayama, Y.; Shimizu, K.; Yagi, T.; Hirose, K.; Ohishi, Y.

    2014-12-01

    Secular cooling of the Earth's core induces the convection of the conductive liquid outer core, which generates the geomagnetic field, and the growth of the solid inner core. Since iron is the primary component of the Earth's core, the electrical and thermal conductivity of iron in both solid and liquid states are key pieces of information for estimating the transport properties of the core. We performed electrical and thermal conductivity measurements on iron under core conditions in a laser-heated diamond anvil cell. Our electrical conductivity measurements on iron clearly show resistivity saturation phenomena in iron under high pressure and high temperature conditions as predicted in a recent laboratory-based model for the core conductivity (Gomi et al., 2013). Direct measurements of the thermal diffusivity of iron have also been performed at high pressures using the pulsed light heating thermoreflectance technique, which enables us to confirm the validity of the Wiedemann-Franz law for transition metals under high pressure.
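    The Wiedemann-Franz law invoked above converts electrical resistivity into a thermal conductivity estimate; the resistivity and temperature below are assumed round numbers, not the measured core values.

```python
# Wiedemann-Franz estimate: k = L0 * T / rho.
L0 = 2.44e-8   # Sommerfeld value of the Lorenz number, W·Ω/K²

def thermal_conductivity(resistivity_ohm_m, temperature_k):
    # Thermal conductivity implied by the Wiedemann-Franz law.
    return L0 * temperature_k / resistivity_ohm_m

# Assumed saturated resistivity of 1 µΩ·m at an assumed 5000 K:
k_iron = thermal_conductivity(1.0e-6, 5000.0)
```

    Resistivity saturation matters here precisely because a capped resistivity at high temperature translates, through this law, into a higher thermal conductivity for the core.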

  4. Performance degradation of a typical twin engine commuter type aircraft in measured natural icing conditions

    NASA Technical Reports Server (NTRS)

    Ranaudo, R. J.; Mikkelsen, K. L.; Mcknight, R. C.; Perkins, P. J., Jr.

    1984-01-01

    The performance of an aircraft in various measured icing conditions was investigated. Icing parameters such as liquid water content, temperature, cloud droplet sizes and distributions were measured continuously while in icing. Flight data were reduced to provide plots of the aircraft drag polars and lift curves (CL vs. alpha) for the measured "iced" condition as referenced to the uniced aircraft. These data were also reduced to provide plots of thrust horsepower required vs. single engine power available to show how icing affects engine out capability. It is found that performance degradation is primarily influenced by the amount and shape of the accumulated ice. Glaze icing caused the greatest aerodynamic performance penalties in terms of increased drag and reduction in lift while aerodynamic penalties due to rime icing were significantly lower.

  5. Methodology for Assessing the Probability of Corrosion in Concrete Structures on the Basis of Half-Cell Potential and Concrete Resistivity Measurements

    PubMed Central

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick 750 mm  ×  750 mm reinforced concrete slab specimens were investigated. Potential Ecorr and concrete resistivity ρ in each point of the applied grid were measured. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures. PMID:23766706

  6. Methodology for assessing the probability of corrosion in concrete structures on the basis of half-cell potential and concrete resistivity measurements.

    PubMed

    Sadowski, Lukasz

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick 750 mm × 750 mm reinforced concrete slab specimens were investigated. Potential Ecorr and concrete resistivity ρ in each point of the applied grid were measured. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.
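    The half-cell potential side of such an assessment is conventionally interpreted against the threshold values cited from ASTM C876 (for a Cu/CuSO4 reference electrode); a minimal classifier is sketched below. The paper's contribution is to combine this with resistivity, which is not modeled here.

```python
# Corrosion-probability classification from half-cell potential readings,
# using the commonly cited ASTM C876 thresholds (Cu/CuSO4 electrode, mV).
def corrosion_probability(potential_mv):
    if potential_mv > -200:
        return "low (<10% probability)"
    if potential_mv >= -350:
        return "uncertain"
    return "high (>90% probability)"
```

    A grid of potential readings over a slab can then be mapped point by point, which is essentially how the half-cell survey in the study is gridded before the resistivity data refine the uncertain zone.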

  7. Measurement of distributed strain and temperature based on higher order and higher mode Bragg conditions

    NASA Technical Reports Server (NTRS)

    Sirkis, James S. (Inventor); Sivanesan, Ponniah (Inventor); Venkat, Venki S. (Inventor)

    2001-01-01

    A Bragg grating sensor for measuring distributed strain and temperature at the same time comprises an optical fiber having a single mode operating wavelength region and, below a cutoff wavelength of the fiber, a multimode operating wavelength region. A saturated, higher order Bragg grating having first and second order Bragg conditions is fabricated in the optical fiber. The first order Bragg resonance wavelength of the grating is within the single mode operating wavelength region of the optical fiber and the second order Bragg resonance wavelength is below the cutoff wavelength of the fiber within the multimode operating wavelength region. The reflectivities of the saturated Bragg grating at the first and second order Bragg conditions are within two orders of magnitude of one another. In use, the first and second order Bragg conditions are simultaneously created in the sensor at the respective wavelengths and a signal from the sensor is demodulated with respect to each of the wavelengths corresponding to the first and second order Bragg conditions. The two Bragg conditions have different responsivities to strain and temperature, thus allowing two equations for axial strain and temperature to be found in terms of the measured shifts in the first and second order Bragg wavelengths. This system of equations can be solved for strain and temperature.
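    The two-equation system described above is a 2×2 linear solve; the sensitivity coefficients and wavelength shifts below are assumed values for illustration, not the patent's calibration.

```python
import numpy as np

# Two Bragg conditions with different strain/temperature responsivities:
#   [dλ1]   [K1ε  K1T] [strain]
#   [dλ2] = [K2ε  K2T] [temperature]
K = np.array([[1.2e-3, 10.0e-3],   # [dλ1/dε, dλ1/dT] (assumed sensitivities)
              [0.9e-3, 6.0e-3]])   # [dλ2/dε, dλ2/dT]
shifts = np.array([0.10, 0.07])    # measured shifts of the two peaks, nm (assumed)

# The system is solvable because the rows are not proportional,
# i.e. the two Bragg conditions respond differently.
strain, temperature = np.linalg.solve(K, shifts)
```

    The decisive design requirement is visible in the matrix: if the two rows were proportional (identical responsivities), the determinant would vanish and strain and temperature could not be separated.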

  8. Development of a measurement system for the online inspection of microstructured surfaces in harsh industrial conditions

    NASA Astrophysics Data System (ADS)

    Mueller, Thomas; Langmann, Benjamin; Reithmeier, Eduard

    2014-05-01

    Microscopic imaging techniques are usually applied for the inspection of microstructured surfaces. These techniques require clean measurement conditions. Soiling such as dust or splashing liquids can disturb the measurement process or even damage instruments. Since such soiling occurs in the majority of manufacturing processes, microscopic inspection usually must be carried out in a separate laboratory. We present a measurement system which allows for a microscopic inspection and a 3D reconstruction of microstructured surfaces in harsh industrial conditions. The measurement system also enables precise positioning, e.g. of a grinding wheel, with an accuracy of 5 μm. The main component of the measurement system is a CCD camera with a high-magnification telecentric lens. By means of this camera it is even possible to measure structures with dimensions in the range of 30 to 50 μm. The camera and the lens are integrated into a waterproof and dustproof enclosure. The inspection window of the enclosure has an air curtain which serves as a splash guard. The workpiece illumination is crucial in order to obtain good measurement results. The measuring system includes high-power LEDs which are integrated in a waterproof enclosure. The measurement system also includes a laser with a specially designed lens system to form an extremely narrow light section on the workpiece surface. It is possible to obtain a line width of 25 μm. This line and the camera with the high-magnification telecentric lens are used to perform a laser triangulation of the microstructured surface. This paper describes the system as well as the development and evaluation of the software for the automatic positioning of the workpiece and the automatic three-dimensional surface analysis.
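    The laser triangulation step mentioned above recovers height from the lateral displacement of the projected line in the image; the pixel scale and triangulation angle below are assumptions, not the system's actual parameters.

```python
import math

# Laser-triangulation height recovery: a surface height step shifts the
# projected laser line sideways in the camera image.
def height_from_shift(pixel_shift, um_per_pixel, angle_deg):
    # h = s / tan(theta), where s is the lateral shift of the laser line
    # and theta is the triangulation angle between laser and viewing axis.
    shift_um = pixel_shift * um_per_pixel
    return shift_um / math.tan(math.radians(angle_deg))

# A 4-pixel line shift at an assumed 5 µm/pixel and a 45° angle:
step_um = height_from_shift(4, 5.0, 45.0)
```

    With these assumed values a 4-pixel shift corresponds to a 20 µm height step, comfortably within the 30-50 µm structure sizes the system targets.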

  9. Analysis of practical sound transmission class (STC) measurements performed under a variety of field conditions

    NASA Astrophysics Data System (ADS)

    Conroy, James

    2003-10-01

    Sound transmission class (STC) measurements were made using a simplified version of the procedure presented in ASTM E336-96 and the results compared to STC values found in the literature. STC is used to provide a single number speech privacy rating for a building partition. It is calculated by measuring the transmission loss of a partition and applying a curve fitting procedure. While the standards provide a highly detailed data collection and processing procedure, they give limited insight into the type of response and deviation that might be expected from measurements made along a partition or when comparing results from similar partitions under different field conditions. Likewise, while there is discussion of STC in the literature, with some attempt to relate values to human perception and to provide ratings for common partitions, discussion of data collected under real conditions is generally limited. The partitions in three different office rooms were tested and the results described. In particular, the effect of speaker type and location, source type, room geometry, microphone placement, the spatial variation and repeatability of measurements on a particular partition, and a comparison of results from similar partitions under different field conditions were analyzed and discussed in detail.
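    The curve fitting procedure behind an STC rating can be sketched as below, using the reference contour and deficiency rules commonly cited from ASTM E413; the transmission-loss data are invented.

```python
# STC reference contour relative to the 500 Hz band, for the sixteen
# third-octave bands from 125 Hz to 4000 Hz.
CONTOUR = [-16, -13, -10, -7, -4, -1, 0, 1, 2, 3, 4, 4, 4, 4, 4, 4]

def stc(tl_values):
    # Slide the contour up one dB at a time; stop when either rule is
    # broken: total deficiency > 32 dB, or any single band deficiency > 8 dB.
    rating = 0
    while True:
        deficiencies = [max(0, (rating + 1 + c) - tl)
                        for c, tl in zip(CONTOUR, tl_values)]
        if sum(deficiencies) > 32 or max(deficiencies) > 8:
            return rating
        rating += 1

# An idealized partition with a flat 30 dB transmission loss in every band:
flat_wall = stc([30] * 16)
```

    For the flat 30 dB partition the fit lands at STC 30: the total-deficiency rule binds first, which is typical for real partitions with a coincidence dip as well.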

  10. Conditional asymmetric linkage disequilibrium (ALD): extending the biallelic r2 measure.

    PubMed

    Thomson, Glenys; Single, Richard M

    2014-09-01

    For multiallelic loci, standard measures of linkage disequilibrium provide an incomplete description of the correlation of variation at two loci, especially when there are different numbers of alleles at the two loci. We have developed a complementary pair of conditional asymmetric linkage disequilibrium (ALD) measures. Since these measures do not assume symmetry, they more accurately describe the correlation between two loci and can identify heterogeneity in genetic variation not captured by other symmetric measures. For biallelic loci the ALD are symmetric and equivalent to the correlation coefficient r. The ALD measures are particularly relevant for disease-association studies to identify cases in which an analysis can be stratified by one or more loci. A stratified analysis can aid in detecting primary disease-predisposing genes and additional disease genes in a genetic region. The ALD measures are also informative for detecting selection acting independently on loci in high linkage disequilibrium or on specific amino acids within genes. For SNP data, the ALD statistics provide a measure of linkage disequilibrium on the same scale for comparisons among SNPs, among SNPs and more polymorphic loci, among haplotype blocks of SNPs, and for fine mapping of disease genes. The ALD measures, combined with haplotype-specific homozygosity, will be increasingly useful as next-generation sequencing methods identify additional allelic variation throughout the genome.
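    The biallelic special case the ALD measures reduce to is the ordinary correlation r; a minimal sketch computing it from haplotype and allele frequencies follows (the frequencies are illustrative).

```python
import math

# Correlation r between two biallelic loci from the haplotype frequency
# p_AB and the allele frequencies p_A, p_B.
def biallelic_r(p_ab, p_a, p_b):
    # r = D / sqrt(pA(1-pA) * pB(1-pB)), with D = p_AB - p_A * p_B
    d = p_ab - p_a * p_b
    return d / math.sqrt(p_a * (1 - p_a) * p_b * (1 - p_b))

# Perfect coupling of two alleles, each at frequency 0.3, gives r = 1:
r_coupled = biallelic_r(0.3, 0.3, 0.3)
```

    With different allele counts at the two loci this single symmetric number is exactly what becomes inadequate, motivating the asymmetric ALD pair.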

  11. Relationship between ultrasound measurements of body fat reserves and body condition score in female donkeys.

    PubMed

    Quaresma, M; Payan-Carreira, R; Silva, S R

    2013-08-01

    Several methods have been developed to monitor body fat reserves of farm animals, and body condition scoring (BCS) is generally assumed to be the most practical. Objective methods, such as real time ultrasonography (RTU), are accepted methods for measuring fat reserves in several farm species, but there is no published information about the use of RTU to monitor body fat reserves in donkeys. The aim of the present study was to determine the relationship between RTU measurements and BCS in female donkeys (jennies) (n=16) with a BCS of 3-7 on a 9-point scale. Ultrasound images were captured using an Aloka 500-V scanner equipped with a 7.5 MHz probe, and subcutaneous fat (SF, range: 1.0-14.0 mm) and thoracic wall tissue (TD, range: 5.6-21.4 mm) depth measurements were determined. A significant correlation was found between BCS and all RTU measurements (r > 0.65). Regression analyses were performed between RTU measurements and BCS and between log-transformed RTU measurements and log-transformed BCS. All equations with variables transformed into a logarithmic scale gave better coefficients of determination (R2 > 0.42). It was concluded that RTU measurements have a logarithmic relationship with BCS and that RTU combined with image analysis permits accurate fat and tissue depth measurements to monitor fat reserves in jennies.
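    The log-log fit described above amounts to ordinary least squares on log-transformed variables. The sketch below uses synthetic data (not values from the study) to show that a power-law relation BCS = a * SF**b is recovered exactly by this procedure when the data are noise-free.

```python
# Fit y = a * x**b by ordinary least squares on log-transformed data,
# as in the log-log regressions the abstract alludes to. Data below are
# synthetic illustrations, not measurements from the study.
from math import exp, log

def fit_power_law(x, y):
    lx, ly = [log(v) for v in x], [log(v) for v in y]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    a = exp(my - b * mx)
    return a, b

sf = [1.0, 2.0, 4.0, 8.0, 14.0]        # subcutaneous fat depth, mm
bcs = [3.0 * d ** 0.4 for d in sf]     # synthetic power-law relation
a, b = fit_power_law(sf, bcs)
print(a, b)                            # recovers a = 3.0, b = 0.4
```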

  12. Reference place conditioning procedure with cocaine: increased sensitivity for measuring associatively motivated choice behavior in rats.

    PubMed

    Reichel, Carmela M; Wilkinson, Jamie L; Bevins, Rick A

    2010-07-01

    Place conditioning is widely used to study the conditioned rewarding effects of drugs. In the standard version, one reward (cocaine) is compared with no reward (saline). A modified variant of this task, the 'reference-conditioning' procedure, compares two potentially rewarding stimuli (high vs. low cocaine dose). There has been little research on the utility of this procedure. Experiment 1 used the standard protocol, with saline administered before confinement to the reference compartment of a place conditioning chamber. On alternate days, saline, 2.5, 5, 7.5, 10, or 20 mg/kg cocaine was administered before confinement to the opposite compartment. In experiments 2 and 3, reference-compartment saline was replaced with 5 and 7.5 mg/kg cocaine, respectively. Relative to saline, 7.5-20 mg/kg cocaine had comparable conditioned rewarding effects (i.e. similar increase in time in the paired compartment). When cocaine replaced saline, there was competition at doses lower than 7.5 mg/kg. Rats that received 7.5 versus 2.5 mg/kg spent similar time in each compartment, indicating competition. Competition was not seen with 5 versus 20 mg/kg; preference was for the 20 mg/kg compartment. Experiment 4 showed that the competition at 2.5 mg/kg was not due to reward sensitization. The reference-conditioning procedure thus increases the sensitivity for measuring associatively motivated choice behavior.

  13. Measurement of a tree growth condition by the hetero-core optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Uchida, Hoshito; Akita, Shohei; Nishiyama, Michiko; Kumekawa, Norikazu; Watanabe, Kazuhiro

    2011-04-01

    The condition and growth of trees are considered important in monitoring the global circulation of heat and water, and tree growth is also affected by CO2 and air pollutants. Moreover, since plant growth is affected by the surrounding climate, real-time monitoring of crop plant growth is expected to enable quantitative agricultural management. This study proposed methods for measuring tree growth using hetero-core optical fiber sensors, which are suitable for long-term, remote, real-time monitoring over wide areas owing to their insensitivity to temperature fluctuation and weather conditions, in addition to the general advantages of optical fiber. Two types of sensors were used for that purpose: a dendrometer, which measured radial changes of a tree stem, and an elastic sensor, intended to measure the growth of smaller trees such as crop plants. In our experiments, the dendrometer was shown to be capable of distinguishing seasonal differences in growth trend, with growth of 2.08 mm between spring and summer and 0.21 mm between autumn and winter, respectively. Additionally, this study proposed a method of measuring crop plant growth with the elastic sensor because of its compact, lightweight design and the monotonic change in its optical loss with expansion and contraction.

  14. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; He, Yunbo; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, having benefits of a wide measurement range, high accuracy, fast speed, and low price. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectron elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment that was performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.
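    For readers unfamiliar with the Scheimpflug condition: when the object plane is tilted, the object, lens, and image planes must intersect in a common line. A relation commonly quoted for the thin-lens case links the two tilts through the magnification as tan(theta') = m * tan(theta); the sketch below uses that relation as a generic assumption, not as this paper's calibration model.

```python
# Scheimpflug tilt sketch (thin-lens approximation, not the paper's model):
# given the object-plane tilt and the magnification, compute the sensor
# (image-plane) tilt that keeps the whole tilted object plane in focus.
from math import atan, degrees, radians, tan

def sensor_tilt_deg(object_tilt_deg, magnification):
    """Image-plane tilt satisfying tan(theta') = m * tan(theta)."""
    return degrees(atan(magnification * tan(radians(object_tilt_deg))))

# At unit magnification the sensor tilt equals the object tilt:
print(sensor_tilt_deg(30.0, 1.0))   # ~30.0, up to float rounding
# A telecentric lens with m = 0.5 needs a shallower sensor tilt:
print(sensor_tilt_deg(30.0, 0.5))
```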

  15. Laboratory measurements of microwave absorption from gaseous atmospheric constituents under conditions for the outer planets

    NASA Technical Reports Server (NTRS)

    Steffes, Paul G.

    1986-01-01

    Quite often the interpretive work on the microwave and millimeter-wave absorption profiles, which are inferred from radio occultation measurements or radio astronomical observations of the outer planets, employs theoretically-derived absorption coefficients to account for contributions to the observed opacity from gaseous constituents. Variations of the actual absorption coefficients from those which are theoretically derived, especially under the environmental conditions characteristic of the outer planets, can result in significant errors in the inferred abundances of the absorbing constituents. The recognition of the need to make laboratory measurements of the absorptivity of gases such as NH3, CH4, and H2O in a predominantly H2 atmosphere, under temperature and pressure conditions simulating the outer planets' atmospheres, and at wavelengths corresponding to both radio occultation and radio astronomical observations, has led to the development of a facility capable of making such measurements at Georgia Tech. The laboratory measurement system, the measurement techniques, and the proposed experimental regimen for Winter 1985 are described.

  16. Method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions

    DOEpatents

    McGrail, Bernard P.; Martin, Paul F.; Lindenmeier, Clark W.

    1999-01-01

    The present invention is a method and apparatus for measuring coupled flow, transport and reaction processes under liquid unsaturated flow conditions. The method and apparatus of the present invention permit distinguishing individual precipitation events and their effect on dissolution behavior isolated to the specific event. The present invention is especially useful for dynamically measuring hydraulic parameters when a chemical reaction occurs between a particulate material and either liquid or gas (e.g. air) or both, causing precipitation that changes the pore structure of the test material.

  17. Electrode size and boundary condition independent measurement of the effective piezoelectric coefficient of thin films

    SciTech Connect

    Stewart, M.; Lepadatu, S.; McCartney, L. N.; Cain, M. G.; Wright, L.; Crain, J.; Newns, D. M.; Martyna, G. J.

    2015-02-01

    The determination of the piezoelectric coefficient of thin films using interferometry is hindered by bending contributions. Using finite element analysis (FEA) simulations, we show that the Lefki and Dormans approximations using either single or double-beam measurements cannot be used with finite top electrode sizes. We introduce a novel method for characterising piezoelectric thin films which uses a differential measurement over the discontinuity at the electrode edge as an internal reference, thereby eliminating bending contributions. This step height is shown to be electrode size and boundary condition independent. An analytical expression is derived which gives good agreement with FEA predictions of the step height.

  18. Note: Electrical resolution during conductive atomic force microscopy measurements under different environmental conditions and contact forces

    SciTech Connect

    Lanza, M.; Porti, M.; Nafria, M.; Aymerich, X.; Whittaker, E.; Hamilton, B.

    2010-10-15

    Conductive atomic force microscopy experiments on gate dielectrics in air, nitrogen, and UHV have been compared to evaluate the impact of the environment on topography and electrical measurements. In current images, an increase of the lateral resolution and a reduction of the conductivity were observed in N2 and, especially, in UHV (where the current also depends on the contact force). Both effects were related to the reduction/elimination of the water layer between the tip and the sample in N2/UHV. Therefore, since current measurements are very sensitive to environmental conditions, these factors must be taken into consideration when comparisons between several experiments are performed.

  19. Limits and conditions of applicability of the experimental measurement methods of liquid flow rates through pipes

    NASA Astrophysics Data System (ADS)

    Botezatu, N. P.

    1980-04-01

    Various flow measurement methods and their applications are reviewed in order to establish the best methods for rationalization and optimization of water consumption in a large industrial system. The methods discussed include volumetric and gravimetric techniques, Pitot and Venturi tubes, electromagnetic and ultrasonic flowmeters, counters, rotameters, and anemometers, as well as the use of classical and radioactive tracers. A comparative analysis of various methods and experimental results indicate that the method of radioactive tracers is the only universal method of measurement of fluid flows through pipes and channels for fluids of any physicochemical properties under any conditions.

  20. A methodology to determine margins by EPID measurements of patient setup variation and motion as applied to immobilization devices

    SciTech Connect

    Prisciandaro, Joann I.; Frechette, Christina M.; Herman, Michael G.; Brown, Paul D.; Garces, Yolanda I.; Foote, Robert L.

    2004-11-01

    Assessment of clinic- and site-specific margins is essential for the effective use of three-dimensional and intensity modulated radiation therapy. An electronic portal imaging device (EPID) based methodology is introduced which allows individual and population based CTV-to-PTV margins to be determined and compared with traditional margins prescribed during treatment. This method was applied to a patient cohort receiving external beam head and neck radiotherapy under an IRB approved protocol. Although the full study involved the use of an EPID-based method to assess the impact of (1) simulation technique, (2) immobilization, and (3) surgical intervention on inter- and intrafraction variations of individual and population-based CTV-to-PTV margins, the focus of the paper is on the technique. As an illustration, the methodology is utilized to examine the influence of two immobilization devices, the UON™ thermoplastic mask and the Type-S™ head/neck shoulder immobilization system, on margins. Daily through-port images were acquired for selected fields for each patient with an EPID. To analyze these images, simulation films or digitally reconstructed radiographs (DRRs) were imported into the EPID software. Up to five anatomical landmarks were identified and outlined by the clinician, and up to three of these structures were matched for each reference image. Once the individual based errors were quantified, the patient results were grouped into populations by matched anatomical structures and immobilization device. The variation within each subgroup was quantified by calculating the systematic and random errors (Σsub and σsub). Individual patient margins were approximated as 1.65 times the individual-based random error and ranged from 1.1 to 6.3 mm (A-P) and 1.1 to 12.3 mm (S-I) for fields matched on skull and cervical structures, and 1.7 to 10.2 mm (L-R) and 2.0 to 13.8 mm (S-I) for supraclavicular fields. Population-based margins
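    The error decomposition used in studies of this kind can be sketched as follows. The 1.65 * sigma individual margin is the rule quoted in the abstract; the 2.5 * Sigma + 0.7 * sigma population recipe is the widely cited van Herk formula, included here only as one plausible population-margin choice, not as this paper's method.

```python
# Setup-error decomposition sketch: per-patient mean displacement gives
# the systematic component, per-patient spread the random component.
from statistics import mean, pstdev

def margins(displacements_mm):
    """displacements_mm: {patient_id: [daily setup displacement, mm]} (one axis)."""
    means = {p: mean(d) for p, d in displacements_mm.items()}
    sds = {p: pstdev(d) for p, d in displacements_mm.items()}
    individual = {p: 1.65 * s for p, s in sds.items()}   # rule from the abstract
    Sigma = pstdev(list(means.values()))                 # population systematic error
    sigma = mean(s * s for s in sds.values()) ** 0.5     # population random error (RMS)
    population = 2.5 * Sigma + 0.7 * sigma               # van Herk-style recipe
    return individual, Sigma, sigma, population

# Synthetic two-patient example: individual margins of 1.65 mm each,
# Sigma = 2 mm, sigma = 1 mm, population margin = 5.7 mm.
data = {'p1': [1.0, 3.0], 'p2': [-1.0, -3.0]}
print(margins(data))
```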

  1. Curriculum-Based Measurement of Math Problem Solving: A Methodology and Rationale for Establishing Equivalence of Scores

    ERIC Educational Resources Information Center

    Montague, Marjorie; Penfield, Randall D.; Enders, Craig; Huang, Jia

    2010-01-01

    The purpose of this article is to discuss curriculum-based measurement (CBM) as it is currently utilized in research and practice and to propose a new approach for developing measures to monitor the academic progress of students longitudinally. To accomplish this, we first describe CBM and provide several exemplars of CBM in reading and…

  2. Methodological Issues in the Validation of Implicit Measures: Comment on De Houwer, Teige-Mocigemba, Spruyt, and Moors (2009)

    ERIC Educational Resources Information Center

    Gawronski, Bertram; LeBel, Etienne P.; Peters, Kurt R.; Banse, Rainer

    2009-01-01

    J. De Houwer, S. Teige-Mocigemba, A. Spruyt, and A. Moors's normative analysis of implicit measures provides an excellent clarification of several conceptual ambiguities surrounding the validation and use of implicit measures. The current comment discusses an important, yet unacknowledged, implication of J. De Houwer et al.'s analysis, namely,…

  3. Measurement of exhaust emissions from two J-58 engines at simulated supersonic cruise flight conditions

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1976-01-01

    Emissions of total oxides of nitrogen, unburned hydrocarbons, carbon monoxide, and carbon dioxide from two J-58 afterburning turbojet engines at simulated high-altitude flight conditions are reported. Test conditions included flight speeds from Mach 2 to 3 at altitudes from 16 to 23 km. For each flight condition, exhaust measurements were made for four or five power levels from maximum power without afterburning through maximum afterburning. The data show that exhaust emissions vary with flight speed, altitude, power level, and radial position across the exhaust. Oxides of nitrogen (NOX) emissions decreased with increasing altitude, and increased with increasing flight speed. NOX emission indices with afterburning were less than half the value without afterburning. Carbon monoxide and hydrocarbon emissions increased with increasing altitude, and decreased with increasing flight speed. Emissions of these species were substantially higher with afterburning than without.
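    Emission indices of the kind reported in this entry are conventionally computed from measured exhaust mole fractions by a carbon balance, in the spirit of SAE ARP1533; the formula and the kerosene-type H/C ratio below are generic textbook assumptions, not values taken from this test.

```python
# Carbon-balance emission index sketch: grams of species per kg of fuel,
# from mole fractions measured in the exhaust. Assumes all fuel carbon
# appears in the exhaust as CO2, CO, or unburned hydrocarbons (as C1).
M_C, M_H = 12.011, 1.008

def emission_index(x_species, m_species, x_co2, x_co, x_thc_c1,
                   h_to_c=1.92):              # H/C ~ 1.92 for kerosene-type fuel
    """EI in g species per kg fuel; x_* are mole fractions in any common unit."""
    carbon = x_co2 + x_co + x_thc_c1          # total carbon-bearing moles
    fuel_mass_per_mol_c = M_C + h_to_c * M_H  # g of fuel per mol of fuel carbon
    return 1000.0 * (x_species / carbon) * (m_species / fuel_mass_per_mol_c)

# Self-check: if every carbon atom leaves as CO2, EI(CO2) should come out
# near the well-known ~3160 g/kg for kerosene fuels.
print(emission_index(1.0, 44.01, 1.0, 0.0, 0.0))
```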

  5. Addressing the “It Is Just Placebo” Pitfall in CAM: Methodology of a Project to Develop Patient-Reported Measures of Nonspecific Factors in Healing

    PubMed Central

    Greco, Carol M.; Glick, Ronald M.; Morone, Natalia E.; Schneider, Michael J.

    2013-01-01

    CAM therapies are often dismissed as “no better than placebo;” however, this belief may be overcome through careful analysis of nonspecific factors in healing. To improve trial methodology, we propose that CAM (and conventional) RCTs should evaluate and adjust for the effects of intrapersonal, interpersonal, and environmental factors on outcomes. However, measurement of these is challenging, and there are no brief, precise instruments that are suitable for widespread use in trials and clinical settings. This paper describes the methodology of a project to develop a set of patient-reported instruments that will quantify the nonspecific or “placebo” effects that are in fact specific and active ingredients in healing. The project uses the rigorous instrument-development methodology of the NIH-PROMIS initiative. The methods include (1) integration of patients' and clinicians' opinions with existing literature; (2) development of relevant items; (3) calibration of items on large samples; (4) classical test theory and modern psychometric methods to select the most useful items; (5) development of computerized adaptive tests (CATs) that maximize information while minimizing patient burden; and (6) initial validation studies. The instruments will have the potential to revolutionize clinical trials in both CAM and conventional medicine through quantifying contextual factors that contribute to healing. PMID:24454501
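    The "maximize information while minimizing patient burden" step of a CAT has a standard concrete form under item response theory: administer the unused item with the greatest Fisher information at the current ability estimate. The 2PL model and the item bank below are generic IRT illustrations, not the project's actual instruments.

```python
# CAT item selection sketch under a two-parameter logistic (2PL) model:
# item information at ability theta is a^2 * P(theta) * (1 - P(theta)).
from math import exp

def p_correct(theta, a, b):
    """2PL response probability (a = discrimination, b = difficulty)."""
    return 1.0 / (1.0 + exp(-a * (theta - b)))

def item_information(theta, a, b):
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta, bank):
    """Pick the item with maximum Fisher information at theta."""
    return max(bank, key=lambda item: item_information(theta, *item))

# Hypothetical bank of (discrimination a, difficulty b) pairs:
bank = [(1.0, -1.0), (2.0, 0.1), (1.2, 2.0)]
print(next_item(0.0, bank))   # -> (2.0, 0.1): discriminating item near theta
```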

  6. Laser spectroscopic real time measurements of methanogenic activity under simulated Martian subsurface analog conditions

    NASA Astrophysics Data System (ADS)

    Schirmack, Janosch; Böhm, Michael; Brauer, Chris; Löhmannsröben, Hans-Gerd; de Vera, Jean-Pierre; Möhlmann, Diedrich; Wagner, Dirk

    2014-08-01

    On Earth, chemolithoautotrophic and anaerobic microorganisms such as methanogenic archaea are regarded as model organisms for possible subsurface life on Mars. For this reason, the methanogenic strain Methanosarcina soligelidi (formerly called Methanosarcina spec. SMA-21), isolated from permafrost-affected soil in northeast Siberia, has been tested under Martian thermo-physical conditions. In previous studies under simulated Martian conditions, high survival rates of these microorganisms were observed. In our study we present a method to measure methane production as a first attempt to study the metabolic activity of methanogenic archaea under simulated conditions approaching those of Mars-like environments. Determining methanogenic activity requires a measurement technique capable of recording the produced methane concentration with high precision and high temporal resolution. Although there are several methods to detect methane, only a few fulfill all of the requirements for working within simulated extraterrestrial environments. We have chosen laser spectroscopy, a non-destructive technique that measures the methane concentration without sample taking and can also be run continuously. In our simulation, we detected methane production at temperatures down to -5 °C, which would be found on Mars either temporarily in the shallow subsurface or continually in the deep subsurface. The pressure of 50 kPa used in our experiments corresponds to the expected pressure in the Martian near subsurface. Our new device proved to be fully functional, and the results indicate that the possible existence of methanogenic archaea in Martian subsurface habitats cannot be ruled out.
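    In the simplest single-line picture, a laser-absorption measurement of methane concentration reduces to the Beer-Lambert law combined with the ideal gas law. The absorption cross-section and path length below are placeholder instrument parameters, not values from this study; only the default temperature and pressure mirror the abstract's -5 °C and 50 kPa conditions.

```python
# Beer-Lambert retrieval sketch: methane number density from measured
# transmittance, normalized to a mixing ratio via the ideal gas law.
from math import exp, log

K_BOLTZMANN = 1.380649e-23   # J/K

def ch4_ppm(transmittance, sigma_cm2, path_cm, temp_k=268.15, press_pa=50e3):
    """Methane mixing ratio (ppm); sigma_cm2 and path_cm are placeholders."""
    n_ch4 = -log(transmittance) / (sigma_cm2 * path_cm)   # molecules / cm^3
    n_total = press_pa / (K_BOLTZMANN * temp_k) * 1e-6    # molecules / cm^3
    return 1e6 * n_ch4 / n_total

# Round trip: a 10 ppm mixing ratio at the default conditions.
n_total = 50e3 / (K_BOLTZMANN * 268.15) * 1e-6
t_meas = exp(-(10e-6 * n_total) * 1e-20 * 1000.0)
print(ch4_ppm(t_meas, 1e-20, 1000.0))   # ~10.0
```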

  7. Scheme for approximate conditional teleportation of an unknown atomic state without the Bell-state measurement

    SciTech Connect

    Zheng Shibiao

    2004-06-01

    We propose a scheme for approximately and conditionally teleporting an unknown atomic state in cavity QED. Our scheme does not involve the Bell-state measurement and thus an additional atom is unnecessary. Only two atoms and one single-mode cavity are required. The scheme may be used to teleport the state of a cavity mode to another mode using a single atom. The idea may also be used to teleport the state of a trapped ion.

  8. Particle extinction measured at ambient conditions with differential optical absorption spectroscopy. 2. Closure study.

    PubMed

    Müller, Thomas; Müller, Detlef; Dubois, René

    2006-04-01

    Spectral particle extinction coefficients of atmospheric aerosols were measured with, to the best of our knowledge, a newly designed differential optical absorption spectroscopy (DOAS) instrument. A closure study was carried out on the basis of optical and microphysical aerosol properties obtained from nephelometer, particle soot/absorption photometer, hygroscopic tandem differential mobility analyzer, twin differential mobility particle sizer, aerodynamic particle sizer, and Berner impactors. The data were collected at the urban site of Leipzig during a period of 10 days in March 2000. The performance test also includes a comparison of the optical properties measured with DOAS to particle optical properties calculated with a Mie-scattering code. The computations take into account dry and ambient particle conditions. Under dry particle conditions the linear regression and the correlation coefficient for particle extinction are 0.95 and 0.90, respectively. At ambient conditions these parameters are 0.89 and 0.97, respectively. An inversion algorithm was used to retrieve microphysical particle properties from the extinction coefficients measured with DOAS. We found excellent agreement within the retrieval uncertainties.

  9. Highly resolved measurements of defect evolution under heated-and-shocked conditions

    SciTech Connect

    Lanier, N. E.; Workman, J.; Holmes, R. L.; Graham, P.; Moore, A.

    2007-05-15

    One of the principal advantages of a double-shell capsule design is the potential for ignition without requiring cryogenic implosions. These designs compress deuterium fuel by transferring kinetic energy from a laser-ablated outer shell to an inner shell by means of a nearly elastic symmetric collision. However, prior to this collision the inner shell experiences varying levels of preheat such that any nonuniformities can evolve significantly. It is the condition of these perturbations at the time the collision-induced shock compresses the inner shell that ultimately dictates capsule performance. With this in mind, a series of experiments have been performed on the OMEGA laser facility [R. T. Boehly et al., Opt. Comm. 133, 495 (1997)] that produce highly resolved measurements of defect evolution under heated-and-shocked conditions. Tin L-shell radiation is used to heat a layered package of epoxy and foam. The epoxy can be engineered with a variety of surface perturbations or defects. As the system evolves, a strong shock can be introduced with the subsequent hydrodynamic behavior imaged on calibrated film via x-ray radiography. This technique allows density variations of the evolving system to be quantitatively measured. This paper summarizes the hydrodynamic behavior of rectangular gaps under heated conditions with detailed experimental measurements of their residual density perturbations. Moreover, the impact of these residual density perturbations on shock deformation and material flow is discussed.

  10. A new mediator method for BOD measurement under non-deaerated condition.

    PubMed

    Liu, Ling; Shang, Li; Liu, Chang; Liu, Changyu; Zhang, Bailin; Dong, Shaojun

    2010-06-15

    Monitoring biochemical oxygen demand (BOD) by the mediator method (BOD(Med)) has been developed in recent years, and a deaerated condition was generally adopted to avoid the effect of oxygen, but deaeration is unfavorable in practical applications. Herein, we first propose a non-deaerated BOD(Med) (called NDA-BOD(Med)) method utilizing ferricyanide, which is reduced by Escherichia coli upon catalyzing the organic substrate to produce ferrocyanide. We explain the feasibility of NDA-BOD(Med) from two aspects. First, the biodegradation efficiencies of the bacteria obtained under the deaerated and non-deaerated conditions were similar, and the concentration of O2 (0.25 mM at 8 mg/L O2) is one to two orders of magnitude lower than that of the mediator commonly used (55 mM ferricyanide), so the effect of O2 on the measurements can be neglected. Second, the relationship between the artificial and the natural electron acceptor was investigated, and it was found that the oxygen consumption in the NDA-BOD(Med) measurement was mainly attributable to endogenous consumption. Furthermore, the performance of the present NDA-BOD(Med) is reported, and the method was optimized for measuring low-concentration samples, synthetic wastewater, and real polluted wastewater. The NDA-BOD(Med) provides a simple and efficient way to perform rapid BOD determinations, and is especially advantageous for in situ monitoring of water systems.
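    The bookkeeping that links the mediator to a BOD value is plain redox stoichiometry: ferricyanide accepts one electron while O2 accepts four, so four moles of ferrocyanide produced are equivalent to one mole of oxygen that would have been consumed. The conversion below illustrates that equivalence; it is not the paper's calibration.

```python
# Oxygen-equivalent BOD from the ferrocyanide produced in a mediator assay.
M_O2 = 32.00   # g/mol

def bod_equivalent_mg_per_l(ferrocyanide_mm):
    """BOD (mg O2/L) from ferrocyanide produced (mM).

    Ferricyanide accepts 1 electron; O2 accepts 4. So 4 mol of mediator
    reduced correspond to 1 mol of O2 that would have been consumed.
    """
    mmol_o2_per_l = ferrocyanide_mm / 4.0
    return mmol_o2_per_l * M_O2

print(bod_equivalent_mg_per_l(1.0))   # -> 8.0 mg O2 per litre
```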

  11. Ground-truthing evoked potential measurements against behavioral conditioning in the goldfish, Carassius auratus

    NASA Astrophysics Data System (ADS)

    Hill, Randy J.; Mann, David A.

    2005-04-01

    Auditory evoked potentials (AEPs) have become commonly used to measure hearing thresholds in fish. However, it is uncertain how well AEP thresholds match behavioral hearing thresholds and what effect variability in electrode placement has on AEPs. In the first experiment, the effect of electrode placement on AEPs was determined by simultaneously recording AEPs from four locations on each of 12 goldfish, Carassius auratus. In the second experiment, the hearing sensitivity of 12 goldfish was measured using both classical conditioning and AEPs in the same setup. For behavioral conditioning, the fish were trained to reduce their respiration rate in response to a 5 s sound presentation paired with a brief shock. A modified staircase method was used in which 20 reversals were completed for each frequency, and threshold levels were determined by averaging the last 12 reversals. Once the behavioral audiogram was completed, the AEP measurements were made without moving the fish. The recording electrode was located subdermally over the medulla and was inserted prior to classical conditioning to minimize handling of the animal. The same sound stimuli (pulsed tones) were presented and the resultant evoked potentials were recorded for 1000-6000 averages. AEP input-output functions were then compared to the behavioral audiogram to compare techniques for estimating behavioral thresholds from AEP data.
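    The threshold rule described above (20 reversals per frequency, threshold = mean of the last 12) is easy to state in code. The reversal detection below assumes the staircase log records the stimulus level on every trial; that data layout is an assumption for illustration.

```python
# Staircase threshold sketch: find the levels where the track changed
# direction, then average the last 12 reversal levels.
def reversal_levels(levels):
    """Stimulus levels at which the staircase changed direction."""
    reversals, direction = [], 0
    for prev, cur in zip(levels, levels[1:]):
        if cur == prev:
            continue
        step = 1 if cur > prev else -1
        if direction and step != direction:
            reversals.append(prev)        # the turnaround level
        direction = step
    return reversals

def staircase_threshold(levels, n_last=12):
    """Mean of the last n_last reversal levels (12 of 20 in the abstract)."""
    last = reversal_levels(levels)[-n_last:]
    return sum(last) / len(last)

# Synthetic track: descend to 45 dB, then oscillate between 45 and 55;
# reversals alternate 45/55, so the threshold estimate is 50 dB.
track = [60, 55, 50, 45] + [50, 55, 50, 45] * 7
print(staircase_threshold(track))   # -> 50.0
```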

  13. Measuring performance in off-patent drug markets: a methodological framework and empirical evidence from twelve EU Member States.

    PubMed

    Kanavos, Panos

    2014-11-01

    This paper develops a methodological framework to help evaluate the performance of generic pharmaceutical policies post-patent expiry or after loss of exclusivity in non-tendering settings, comprising five indicators (generic availability, time delay to and speed of generic entry, number of generic competitors, price developments, and generic volume share evolution) and proposes a series of metrics to evaluate performance. The paper subsequently tests this framework across twelve EU Member States (MS) by using IMS data on 101 patent expired molecules over the 1998-2010 period. Results indicate that significant variation exists in generic market entry, price competition and generic penetration across the study countries. Size of a geographical market is not a predictor of generic market entry intensity or price decline. Regardless of geographic or product market size, many off patent molecules lack generic competitors two years after loss of exclusivity. The ranges in each of the five proposed indicators suggest, first, that there are numerous factors--including institutional ones--contributing to the success of generic entry, price decline and market penetration and, second, MS should seek a combination of supply and demand-side policies in order to maximise cost-savings from generics. Overall, there seems to be considerable potential for faster generic entry, uptake and greater generic competition, particularly for molecules at the lower end of the market. PMID:25201433

  14. Service User- and Carer-Reported Measures of Involvement in Mental Health Care Planning: Methodological Quality and Acceptability to Users

    PubMed Central

    Gibbons, Chris J.; Bee, Penny E.; Walker, Lauren; Price, Owen; Lovell, Karina

    2014-01-01

    Background: Increasing service user and carer involvement in mental health care planning is a key healthcare priority but one that is difficult to achieve in practice. To better understand and measure user and carer involvement, it is crucial to have measurement questionnaires that are both psychometrically robust and acceptable to the end user. Methods: We conducted a systematic review using the terms “care plan$,” “mental health,” “user perspective$,” and “user participation” and their linguistic variants as search terms. Databases were searched from inception to November 2012, with an update search at the end of September 2014. We included any articles that described the development, validation or use of a user and/or carer-reported outcome measures of involvement in mental health care planning. We assessed the psychometric quality of each instrument using the “Evaluating the Measurement of Patient-Reported Outcomes” (EMPRO) criteria. Acceptability of each instrument was assessed using novel criteria developed in consultation with a mental health service user and carer consultation group. Results: We identified eleven papers describing the use, development, and/or validation of nine user/carer-reported outcome measures. Psychometric properties were sparsely reported and the questionnaires met few service user/carer-nominated attributes for acceptability. Where reported, basic psychometric statistics were of good quality, indicating that some measures may perform well if subjected to more rigorous psychometric tests. The majority were deemed to be too long for use in practice. Discussion: Multiple instruments are available to measure user/carer involvement in mental health care planning but are either of poor quality or poorly described. Existing measures cannot be considered psychometrically robust by modern standards, and cannot currently be recommended for use. Our review has identified an important knowledge gap, and an urgent need to

  15. The uncertainty of mass discharge measurements using pumping methods under simplified conditions.

    PubMed

    Chen, Xiaosong; Brooks, Michael C; Wood, A Lynn

    2014-01-01

    Mass discharge measurements at contaminated sites have been used to assist with site management decisions, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Pumping methods can be further sub-divided, based on the pumping procedures used, into sequential, concurrent, and tandem circulating well categories. Recent work has investigated the uncertainty of point measurement methods, and to a lesser extent, pumping methods. However, the focus of this study was a direct comparison of uncertainty between the various pumping method approaches that have been used, as well as a comparison of uncertainty between pumping and point measurement methods. Mass discharge measurement error was investigated using a Monte Carlo modeling analysis as a function of the contaminant plume position and width, and as a function of the pumping conditions used in the different pumping tests. Results indicated that for the conditions investigated, uncertainty in mass discharge estimates based on pumping methods was 1.3 to 16 times less than point measurement method uncertainty, and that a sequential pumping approach resulted in 5 to 12 times less uncertainty than the concurrent pumping or tandem circulating well approaches. Uncertainty was also investigated as a function of the plume width relative to well spacing. For a given well spacing, uncertainty decreased for all methods as the plume width increased, and comparable levels of uncertainty between point measurement and pumping methods were obtained when three wells were distributed across the plume. A hybrid pumping technique in which alternate wells were pumped concurrently in two separate campaigns yielded similar uncertainty to the sequential pumping approach. This suggests that the hybrid approach can be used to capitalize on the advantages of sequential pumping yet minimize the overall test duration.
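
The Monte Carlo error analysis described above can be illustrated with a simplified 1-D sketch (a hypothetical Gaussian plume and well layout, not the authors' model): plume position and width are drawn at random, mass discharge is estimated from a few point samples along a transect, and the mean relative error is compared across well counts.

```python
import math
import random

def plume_flux(x, center, width):
    """Local mass flux of a hypothetical Gaussian plume along a 1-D transect."""
    return math.exp(-((x - center) / width) ** 2)

def true_discharge(center, width, length=100.0, n=500):
    """Fine numerical integration serves as the 'true' mass discharge."""
    dx = length / n
    return sum(plume_flux((i + 0.5) * dx, center, width) * dx for i in range(n))

def point_estimate(center, width, wells, length=100.0):
    """Each well's point sample is taken as representative of its segment."""
    seg = length / wells
    return sum(plume_flux((i + 0.5) * seg, center, width) * seg for i in range(wells))

def mc_relative_error(wells, trials=400, seed=1):
    """Mean relative error of the point-sampling estimate over random plumes."""
    rng = random.Random(seed)
    errs = []
    for _ in range(trials):
        center = rng.uniform(20.0, 80.0)  # random plume position
        width = rng.uniform(5.0, 15.0)    # random plume width
        md_true = true_discharge(center, width)
        errs.append(abs(point_estimate(center, width, wells) - md_true) / md_true)
    return sum(errs) / len(errs)
```

As in the paper's finding that uncertainty drops when more wells span the plume, `mc_relative_error(10)` comes out far smaller than `mc_relative_error(1)` for this toy geometry.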

  16. Brief inhalation method to measure cerebral oxygen extraction fraction with PET: Accuracy determination under pathologic conditions

    SciTech Connect

    Altman, D.I.; Lich, L.L.; Powers, W.J. )

    1991-09-01

    The initial validation of the brief inhalation method to measure cerebral oxygen extraction fraction (OEF) with positron emission tomography (PET) was performed in non-human primates with predominantly normal cerebral oxygen metabolism (CMRO2). Sensitivity analysis by computer simulation, however, indicated that this method may be subject to increasing error as CMRO2 decreases. Accuracy of the method under pathologic conditions of reduced CMRO2 has not been determined. Since reduced CMRO2 values are observed frequently in newborn infants and in regions of ischemia and infarction in adults, we determined the accuracy of the brief inhalation method in non-human primates by comparing OEF measured with PET to OEF measured by arteriovenous oxygen difference (A-VO2) under pathologic conditions of reduced CMRO2 (0.27-2.68 ml 100g-1 min-1). A regression equation of OEF (PET) = 1.07 × OEF (A-VO2) + 0.017 (r = 0.99, n = 12) was obtained. The absolute error in oxygen extraction measured with PET was small (mean 0.03 ± 0.04, range -0.03 to 0.12) and was independent of cerebral blood flow, cerebral blood volume, CMRO2, or OEF. The percent error was higher (19 ± 37), particularly when OEF is below 0.15. These data indicate that the brief inhalation method can be used for measurement of cerebral oxygen extraction and cerebral oxygen metabolism under pathologic conditions of reduced cerebral oxygen metabolism, with these limitations borne in mind.
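
The reported regression, OEF(PET) = 1.07 × OEF(A-VO2) + 0.017, can be used to see how a small absolute bias becomes a large percent error at low extraction fractions; a sketch, where the input OEF values are illustrative rather than the study's data:

```python
def oef_pet_predicted(oef_avo2):
    """Regression from the abstract: OEF(PET) = 1.07 * OEF(A-VO2) + 0.017."""
    return 1.07 * oef_avo2 + 0.017

def errors(oef_avo2):
    """Absolute and percent error of the PET estimate relative to A-VO2."""
    pet = oef_pet_predicted(oef_avo2)
    abs_err = pet - oef_avo2
    return abs_err, 100.0 * abs_err / oef_avo2

abs_typical, pct_typical = errors(0.40)  # absolute bias of ~0.045
abs_low, pct_low = errors(0.10)          # similar absolute bias, larger percent error
```

This mirrors the abstract's caveat that percent error grows when OEF falls toward 0.15 and below, even though the absolute error stays small.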

  17. Experimental Methodology for Determining Turbomachinery Blade Damping Using Magnetic Bearing Excitation and Non-Contacting Optical Measurements

    NASA Technical Reports Server (NTRS)

    Provenza, Andrew J.; Duffy, Kirsten P.

    2010-01-01

    Experiments to determine the effects of turbomachinery fan blade damping concepts, such as passively shunted piezoelectric materials, on blade response are ongoing at the NASA Glenn Research Center. A vertical rotor is suspended and excited with active magnetic bearings (AMBs), usually in a vacuum chamber to eliminate aerodynamic forces. Electromagnetic rotor excitation is superimposed onto the PD-controlled rotor support and can be fixed to either a stationary or rotating frame of reference. The rotor speed is controlled with an air turbine system. Blade vibrations are measured using optical probes as part of a Non-Contacting Stress Measurement System (NSMS). Damping is calculated from these measurements. It can be difficult to get accurate damping measurements using this experimental setup, and some of the details of how to obtain quality results are seemingly nontrivial. The intent of this paper is to present those details.

  18. Investigation of Rayleigh-Taylor turbulence and mixing using direct numerical simulation with experimentally-measured initial conditions. I. Comparison to experimental data

    SciTech Connect

    Mueschke, N; Schilling, O

    2008-07-23

    A 1152 x 760 x 1280 direct numerical simulation (DNS) using initial conditions, geometry, and physical parameters chosen to approximate those of a transitional, small Atwood number Rayleigh-Taylor mixing experiment [Mueschke, Andrews and Schilling, J. Fluid Mech. 567, 27 (2006)] is presented. The density and velocity fluctuations measured just off of the splitter plate in this buoyantly unstable water channel experiment were parameterized to provide physically-realistic, anisotropic initial conditions for the DNS. The methodology for parameterizing the measured data and numerically implementing the resulting perturbation spectra in the simulation is discussed in detail. The DNS model of the experiment is then validated by comparing quantities from the simulation to experimental measurements. In particular, large-scale quantities (such as the bubble front penetration hb and the mixing layer growth parameter αb), higher-order statistics (such as velocity variances and the molecular mixing parameter θ), and vertical velocity and density variance spectra from the DNS are shown to be in favorable agreement with the experimental data. Differences between the quantities obtained from the DNS and from experimental measurements are related to limitations in the dynamic range of scales resolved in the simulation and other idealizations of the simulation model. This work demonstrates that a parameterization of experimentally-measured initial conditions can yield simulation data that quantitatively agrees well with experimentally-measured low- and higher-order statistics in a Rayleigh-Taylor mixing layer. This study also provides resolution and initial conditions implementation requirements needed to simulate a physical Rayleigh-Taylor mixing experiment. In Part II [Mueschke and Schilling, Phys. Fluids (2008)], other quantities not measured in the experiment are obtained from the DNS and discussed, such as the integral- and Taylor-scale Reynolds numbers

  19. System for measuring oxygen consumption rates of mammalian cells in static culture under hypoxic conditions.

    PubMed

    Kagawa, Yuki; Miyahara, Hirotaka; Ota, Yuri; Tsuneda, Satoshi

    2016-01-01

    Estimating the oxygen consumption rates (OCRs) of mammalian cells in hypoxic environments is essential for designing and developing a three-dimensional (3-D) cell culture system. However, OCR measurements under hypoxic conditions are infrequently reported in the literature. Here, we developed a system for measuring OCRs at low oxygen levels. The system injects nitrogen gas into the environment and measures the oxygen concentration by an optical oxygen microsensor that consumes no oxygen. The developed system was applied to HepG2 cells in static culture. Specifically, we measured the spatial profiles of the local dissolved oxygen concentration in the medium, then estimated the OCRs of the cells. The OCRs, and also the pericellular oxygen concentrations, decreased nonlinearly as the oxygen partial pressure in the environment decreased from 19% to 1%. The OCRs also depended on the culture period and the matrix used for coating the dish surface. Using this system, we can precisely estimate the OCRs of various cell types under environments that mimic 3-D culture conditions, contributing crucial data for an efficient 3-D culture system design. PMID:26558344

  20. Time and Space Resolved Heat Transfer Measurements Under Nucleate Bubbles with Constant Heat Flux Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Hussey, Sam W.; Yee, Glenda F.; Kim, Jungho

    2003-01-01

    Investigations into single bubble pool boiling phenomena are often complicated by the difficulties in obtaining time and space resolved information in the bubble region. This usually occurs because the heaters and diagnostics used to measure heat transfer data are often on the order of, or larger than, the bubble characteristic length or region of influence. This has contributed to the development of many different and sometimes contradictory models of pool boiling phenomena and dominant heat transfer mechanisms. Recent investigations by Yaddanapyddi and Kim and Demiray and Kim have obtained time and space resolved heat transfer information at the bubble/heater interface under constant temperature conditions using a novel micro-heater array (10x10 array, each heater 100 microns on a side) that is semi-transparent and doubles as a measurement sensor. By using active feedback to maintain a state of constant temperature at the heater surface, they showed that the area of influence of bubbles generated in FC-72 was much smaller than predicted by standard models and that micro-conduction/micro-convection due to re-wetting dominated heat transfer effects. This study seeks to expand on the previous work by making time and space resolved measurements under bubbles nucleating on a micro-heater array operated under constant heat flux conditions. In the planned investigation, wall temperature measurements made under a single bubble nucleation site will be synchronized with high-speed video to allow analysis of the bubble energy removal from the wall.

  1. Measurement of interaction forces between lignin and cellulose as a function of aqueous electrolyte solution conditions.

    PubMed

    Notley, Shannon M; Norgren, Magnus

    2006-12-19

    The interaction between a lignin film and a cellulose sphere has been measured using the colloidal probe force technique as a function of aqueous electrolyte solution conditions. The lignin film was first studied for its roughness and stability using atomic force microscopy imaging and quartz crystal microbalance measurements, respectively. The film was found to be smooth and stable in the pH range of 3.5-9 and in ionic strengths up to and including 0.01 M. This range of ionic strength and pH was hence used to measure the surface force profiles between lignin and cellulose. Under these solution conditions, the measured forces behaved according to DLVO theory. The force-distance curves could be fitted between the limits of constant charge and constant potential, and the surface potential of the lignin films was determined as a function of pH. At a pH greater than 9.5, a short range steric repulsion was observed, indicating that the film was swelling to a large extent but did not dissolve. Thus, lignin films prepared in this manner are suitable for a range of surface force studies.

  2. Measurement of total risk of spontaneous abortion: the virtue of conditional risk estimation.

    PubMed

    Modvig, J; Schmidt, L; Damsgaard, M T

    1990-12-01

    The concepts, methods, and problems of measuring spontaneous abortion risk are reviewed. The problems touched on include the process of pregnancy verification, the changes in risk by gestational age and maternal age, and the presence of induced abortions. Methods used in studies of spontaneous abortion risk include biochemical assays as well as life table technique, although the latter appears in two different forms. The consequences of using either of these are discussed. It is concluded that no study design so far is appropriate for measuring the total risk of spontaneous abortion from early conception to the end of the 27th week. It is proposed that pregnancy may be considered to consist of two or three specific periods and that different study designs should concentrate on measuring the conditional risk within each period. A careful estimate using this principle leads to an estimate of total risk of spontaneous abortion of 0.33.
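
The conditional-risk principle proposed above can be made concrete: if pregnancy is split into periods with per-period loss risks p_i, the total risk is 1 − Π(1 − p_i). A sketch with hypothetical per-period risks (the abstract does not give the actual period estimates behind the 0.33 figure):

```python
def total_risk(period_risks):
    """Combine conditional per-period risks of loss into a total risk."""
    survival = 1.0
    for p in period_risks:
        survival *= 1.0 - p  # probability of surviving this period
    return 1.0 - survival

# Hypothetical: high early loss followed by lower risks in later periods.
combined = total_risk([0.25, 0.08, 0.03])  # ≈ 0.331
```

Designs that estimate each period's conditional risk separately can be combined this way even when no single study spans conception to the 27th week.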

  3. Electromigration and potentiometry measurements of single-crystalline Ag nanowires under UHV conditions.

    PubMed

    Kaspers, M R; Bernhart, A M; Meyer Zu Heringdorf, F-J; Dumpich, G; Möller, R

    2009-07-01

    We report on in situ electromigration and potentiometry measurements on single-crystalline Ag nanowires under ultra-high vacuum (UHV) conditions, using a four-probe scanning tunnelling microscope (STM). The Ag nanowires are grown in place by self-organization on a 4° vicinal Si(001) surface. Two of the four available STM tips are used to contact the nanowire. The positioning of the tips is controlled by a scanning electron microscope (SEM). Potentiometry measurements on an Ag nanowire were carried out using a third tip to determine the resistance per length. During electromigration measurements current densities of up to 1 × 10^8 A cm^-2 could be achieved. We use artificially created notches in the wire to initiate electromigration and to control the location of the electromigration process. At the position of the notch, electromigration sets in and is observed quasi-continuously by the SEM.

  4. Measurement of 2D vector magnetic properties under the distorted flux density conditions

    NASA Astrophysics Data System (ADS)

    Urata, Shinya; Todaka, Takashi; Enokizono, Masato; Maeda, Yoshitaka; Shimoji, Hiroyasu

    2006-09-01

    Under distorted flux density conditions, it is very difficult to evaluate the field intensity, because there is no criterion for the measurement. In the linear approximation, the measured field intensity waveform (MFI) is compared with the linear synthesis of the field intensity waveform (LSFI) at each frequency, and it is shown that they are not in good agreement at higher induction. In this paper, we examined the 2D vector magnetic properties excited by a distorted flux density consisting of the 1st (fundamental frequency: 50 Hz), 3rd, and 5th harmonics. An improved linear synthesis of the field intensity waveform (ILSFI) is proposed as a new estimation method for the field intensity, in place of the conventional linear synthesis (LSFI). The usefulness of the proposed ILSFI is demonstrated by comparison with the measured results.

  5. Measurement and modeling of CO2 diffusion coefficient in Saline Aquifer at reservoir conditions

    NASA Astrophysics Data System (ADS)

    Azin, Reza; Mahmoudy, Mohamad; Raad, Seyed Mostafa Jafari; Osfouri, Shahriar

    2013-12-01

    Storage of CO2 in deep saline aquifers is a promising technique to mitigate global warming and reduce greenhouse gases (GHG). Correct measurement of diffusivity is essential for predicting the rate of transfer and the cumulative amount of trapped gas. Little information is available on diffusion of GHG in saline aquifers. In this study, the diffusivity of CO2 into a saline aquifer sample taken from an oil field was measured and modeled. The equilibrium concentration of CO2 at the gas-liquid interface was determined using Henry's law. Experimental measurements were reported at temperature and pressure ranges of 32-50°C and 5900-6900 kPa, respectively. Results show that the diffusivity of CO2 varies between 3.52-5.98×10^-9 m^2/s for 5900 kPa and 5.33-6.16×10^-9 m^2/s for 6900 kPa initial pressure. It was also found that both pressure and temperature have a positive impact on the measured diffusion coefficient. Liquid swelling due to gas dissolution and variations in gas compressibility factor as a result of pressure decay were found negligible. Measured diffusivities were used in a physical model to develop the concentration profile of dissolved gas in the liquid phase. Results of this study provide unique measurements of the CO2 diffusion coefficient in a saline aquifer at high pressure and temperature conditions, which can be applied in full-field studies of carbon capture and sequestration projects.
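
A concentration profile of dissolved gas like the one described above is commonly built from the 1-D semi-infinite diffusion solution, C(x,t) = C_eq · erfc(x / (2√(Dt))), with the interface held at the Henry's-law equilibrium concentration. A sketch using a diffusivity within the range reported above (the specific values are illustrative, not the paper's fit):

```python
import math

def concentration(x, t, diffusivity, c_eq):
    """Dissolved-gas concentration at depth x (m) and time t (s) for a
    semi-infinite liquid column with constant interface concentration c_eq."""
    return c_eq * math.erfc(x / (2.0 * math.sqrt(diffusivity * t)))

D = 5.0e-9      # m^2/s, within the 3.52-6.16e-9 range reported above
c_eq = 1.0      # normalized Henry's-law equilibrium concentration
one_day = 86400.0

shallow = concentration(0.001, one_day, D, c_eq)  # 1 mm below the interface
deep = concentration(0.02, one_day, D, c_eq)      # 2 cm below the interface
# The liquid saturates near the interface long before gas reaches deeper layers.
```

In pressure-decay experiments, D is typically fitted so that the gas uptake implied by this profile matches the observed pressure history.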

  6. Particle extinction measured at ambient conditions with differential optical absorption spectroscopy. 1. system setup and characterization.

    PubMed

    Müller, Thomas; Müller, Detlef; Dubois, René

    2005-03-20

    We describe an instrument for measuring the particle extinction coefficient at ambient conditions in the spectral range from 270 to 1000 nm. It is based on a differential optical absorption spectroscopy (DOAS) system, which was originally used for measuring trace-gas concentrations of atmospheric absorbers in the ultraviolet-visible wavelength range. One obtains the particle extinction spectrum by measuring the total atmospheric extinction and subtracting trace-gas absorption and Rayleigh scattering. The instrument consists of two nested Newton-type telescopes, which are simultaneously used for emitting and detecting light, and two arrays of retroreflectors at the ends of the two light paths. The design of this new instrument solves crucial problems usually encountered in the design of such instruments. The telescope is actively repositioned during the measurement cycle. Particle extinction is simultaneously measured at several wavelengths by the use of two grating spectrometers. Optical turbulence causes lateral movement of the spot of light in the receiver telescope. Monitoring of the return signals with a diode permits correction for this effect. Phase-sensitive detection efficiently suppresses background signals from the atmosphere as well as from the instrument itself. The performance of the instrument was tested during a measurement period of 3 months from January to March 2000. The instrument ran without significant interruption during that period. A mean accuracy of 0.032 km^-1 was found for the extinction coefficient for an 11-day period in March. PMID:15813269

  7. Emerging technologies to measure neighborhood conditions in public health: implications for interventions and next steps.

    PubMed

    Schootman, M; Nelson, E J; Werner, K; Shacham, E; Elliott, M; Ratnapradipa, K; Lian, M; McVay, A

    2016-06-23

    Adverse neighborhood conditions play an important role beyond individual characteristics. There is increasing interest in identifying specific characteristics of the social and built environments adversely affecting health outcomes. Most research has assessed aspects of such exposures via self-reported instruments or census data. Potential threats in the local environment may be subject to short-term changes that can only be measured with more nimble technology. The advent of new technologies may offer new opportunities to obtain geospatial data about neighborhoods that may circumvent the limitations of traditional data sources. This overview describes the utility, validity and reliability of selected emerging technologies to measure neighborhood conditions for public health applications. It also describes next steps for future research and opportunities for interventions. The paper presents an overview of the literature on measurement of the built and social environment in public health (Google Street View, webcams, crowdsourcing, remote sensing, social media, unmanned aerial vehicles, and lifespace) and location-based interventions. Emerging technologies such as Google Street View, social media, drones, webcams, and crowdsourcing may serve as effective and inexpensive tools to measure the ever-changing environment. Georeferenced social media responses may help identify where to target intervention activities, but also to passively evaluate their effectiveness. Future studies should measure exposure across key time points during the life-course as part of the exposome paradigm and integrate various types of data sources to measure environmental contexts. By harnessing these technologies, public health research can not only monitor populations and the environment, but intervene using novel strategies to improve the public health.

  8. Emerging technologies to measure neighborhood conditions in public health: implications for interventions and next steps.

    PubMed

    Schootman, M; Nelson, E J; Werner, K; Shacham, E; Elliott, M; Ratnapradipa, K; Lian, M; McVay, A

    2016-01-01

    Adverse neighborhood conditions play an important role beyond individual characteristics. There is increasing interest in identifying specific characteristics of the social and built environments adversely affecting health outcomes. Most research has assessed aspects of such exposures via self-reported instruments or census data. Potential threats in the local environment may be subject to short-term changes that can only be measured with more nimble technology. The advent of new technologies may offer new opportunities to obtain geospatial data about neighborhoods that may circumvent the limitations of traditional data sources. This overview describes the utility, validity and reliability of selected emerging technologies to measure neighborhood conditions for public health applications. It also describes next steps for future research and opportunities for interventions. The paper presents an overview of the literature on measurement of the built and social environment in public health (Google Street View, webcams, crowdsourcing, remote sensing, social media, unmanned aerial vehicles, and lifespace) and location-based interventions. Emerging technologies such as Google Street View, social media, drones, webcams, and crowdsourcing may serve as effective and inexpensive tools to measure the ever-changing environment. Georeferenced social media responses may help identify where to target intervention activities, but also to passively evaluate their effectiveness. Future studies should measure exposure across key time points during the life-course as part of the exposome paradigm and integrate various types of data sources to measure environmental contexts. By harnessing these technologies, public health research can not only monitor populations and the environment, but intervene using novel strategies to improve the public health. PMID:27339260

  9. Examination of how neighborhood definition influences measurements of youths' access to tobacco retailers: a methodological note on spatial misclassification.

    PubMed

    Duncan, Dustin T; Kawachi, Ichiro; Subramanian, S V; Aldstadt, Jared; Melly, Steven J; Williams, David R

    2014-02-01

    Measurements of neighborhood exposures likely vary depending on the definition of "neighborhood" selected. This study examined the extent to which neighborhood definition influences findings regarding spatial accessibility to tobacco retailers among youth. We defined spatial accessibility to tobacco retailers (i.e., tobacco retail density, closest tobacco retailer, and average distance to the closest 5 tobacco retailers) on the basis of circular and network buffers of 400 m and 800 m, census block groups, and census tracts by using residential addresses from the 2008 Boston Youth Survey Geospatial Dataset (n = 1,292). Friedman tests (to compare overall differences in neighborhood definitions) were applied. There were differences in measurements of youths' access to tobacco retailers according to the selected neighborhood definitions, and these were marked for the 2 spatial proximity measures (both P < 0.01 for all differences). For example, the median average distance to the closest 5 tobacco retailers was 381.50 m when using specific home addresses, 414.00 m when using census block groups, and 482.50 m when using census tracts, illustrating how neighborhood definition influences the measurement of spatial accessibility to tobacco retailers. These analyses suggest that, whenever possible, egocentric neighborhood definitions should be used. The use of larger administrative neighborhood definitions can bias exposure estimates for proximity measures.
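
The sensitivity to neighborhood definition can be illustrated with a toy planar example (synthetic coordinates and Euclidean distances, unlike the study's network buffers): the same access measures are computed from the actual home point and from a coarser areal centroid.

```python
import math

def dist(a, b):
    """Planar Euclidean distance in meters."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def retail_density(origin, retailers, radius):
    """Number of retailers within a circular buffer around the origin."""
    return sum(1 for r in retailers if dist(origin, r) <= radius)

def closest_distance(origin, retailers):
    """Distance from the origin to the nearest retailer."""
    return min(dist(origin, r) for r in retailers)

# Synthetic coordinates in meters.
retailers = [(120, 40), (300, 500), (650, 200), (820, 790)]
home = (100, 100)             # egocentric definition: actual residence
tract_centroid = (500, 500)   # areal definition: centroid of a larger unit

d_home = closest_distance(home, retailers)             # ~63 m
d_tract = closest_distance(tract_centroid, retailers)  # 200 m
# The coarser areal definition misstates proximity: this is the spatial
# misclassification the study describes.
```

Swapping the origin from the residence to the tract centroid changes both proximity measures, which is why the authors recommend egocentric definitions where possible.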

  10. Method for simultaneous 90Sr and 137Cs in-vivo measurements of small animals and other environmental media developed for the conditions of the Chernobyl exclusion zone.

    PubMed

    Bondarkov, Mikhail D; Maksimenko, Andrey M; Gaschak, Sergey P; Zheltonozhsky, Viktor A; Jannik, G Timothy; Farfán, Eduardo B

    2011-10-01

    To perform in vivo simultaneous measurements of the 90Sr and 137Cs content in the bodies of animals living in the Chernobyl Exclusion Zone (ChEZ), an appropriate method and equipment were developed and installed in a mobile gamma beta spectrometry laboratory. This technique was designed for animals of relatively small sizes (up to 50 g). The 90Sr content is measured by a beta spectrometer with a 0.1-mm-thick scintillation plastic detector. The spectrum processing takes into account the fact that the measured object is "thick-layered" and contains a comparable quantity of 137Cs, which is a characteristic condition of the ChEZ. The 137Cs content is measured by a NaI scintillation detector that is part of the combined gamma beta spectrometry system. For environmental research performed in the ChEZ, the advantages of this method and equipment (rapid measurements, capability to measure live animals directly in their habitat, and the capability of simultaneous 90Sr and 137Cs measurements) far outweigh the existing limitations (considerations must be made for background radiation and the animal size, skeletal shape, and body mass). The accuracy of these in vivo measurements is shown to be consistent with standard spectrometric and radiochemical methods. Apart from the in vivo measurements, the proposed methodology, after a very simple upgrade that is also described in this paper, works even more accurately with samples of other media, such as soil and plants.

  11. Method for simultaneous 90Sr and 137Cs in-vivo measurements of small animals and other environmental media developed for the conditions of the Chernobyl exclusion zone.

    PubMed

    Bondarkov, Mikhail D; Maksimenko, Andrey M; Gaschak, Sergey P; Zheltonozhsky, Viktor A; Jannik, G Timothy; Farfán, Eduardo B

    2011-10-01

    To perform in vivo simultaneous measurements of the 90Sr and 137Cs content in the bodies of animals living in the Chernobyl Exclusion Zone (ChEZ), an appropriate method and equipment were developed and installed in a mobile gamma beta spectrometry laboratory. This technique was designed for animals of relatively small sizes (up to 50 g). The 90Sr content is measured by a beta spectrometer with a 0.1-mm-thick scintillation plastic detector. The spectrum processing takes into account the fact that the measured object is "thick-layered" and contains a comparable quantity of 137Cs, which is a characteristic condition of the ChEZ. The 137Cs content is measured by a NaI scintillation detector that is part of the combined gamma beta spectrometry system. For environmental research performed in the ChEZ, the advantages of this method and equipment (rapid measurements, capability to measure live animals directly in their habitat, and the capability of simultaneous 90Sr and 137Cs measurements) far outweigh the existing limitations (considerations must be made for background radiation and the animal size, skeletal shape, and body mass). The accuracy of these in vivo measurements is shown to be consistent with standard spectrometric and radiochemical methods. Apart from the in vivo measurements, the proposed methodology, after a very simple upgrade that is also described in this paper, works even more accurately with samples of other media, such as soil and plants. PMID:21878764

  12. METHOD FOR SIMULTANEOUS 90SR AND 137CS IN-VIVO MEASUREMENTS OF SMALL ANIMALS AND OTHER ENVIRONMENTAL MEDIA DEVELOPED FOR THE CONDITIONS OF THE CHERNOBYL EXCLUSION ZONE

    SciTech Connect

    Farfan, E.; Jannik, T.

    2011-10-01

    To perform in vivo simultaneous measurements of the 90Sr and 137Cs content in the bodies of animals living in the Chernobyl Exclusion Zone (ChEZ), an appropriate method and equipment were developed and installed in a mobile gamma beta spectrometry laboratory. This technique was designed for animals of relatively small sizes (up to 50 g). The 90Sr content is measured by a beta spectrometer with a 0.1-mm-thick scintillation plastic detector. The spectrum processing takes into account the fact that the measured object is "thick-layered" and contains a comparable quantity of 137Cs, which is a characteristic condition of the ChEZ. The 137Cs content is measured by a NaI scintillation detector that is part of the combined gamma beta spectrometry system. For environmental research performed in the ChEZ, the advantages of this method and equipment (rapid measurements, capability to measure live animals directly in their habitat, and the capability of simultaneous 90Sr and 137Cs measurements) far outweigh the existing limitations (considerations must be made for background radiation and the animal size, skeletal shape, and body mass). The accuracy of these in vivo measurements is shown to be consistent with standard spectrometric and radiochemical methods. Apart from the in vivo measurements, the proposed methodology, after a very simple upgrade that is also described in the article, works even more accurately with samples of other media, such as soil and plants.

  13. Roundhouse (RND) Mountain Top Research Site: Measurements and Uncertainties for Winter Alpine Weather Conditions

    NASA Astrophysics Data System (ADS)

    Gultepe, I.; Isaac, G. A.; Joe, P.; Kucera, P. A.; Theriault, J. M.; Fisico, T.

    2014-01-01

    The objective of this work is to better understand and summarize the mountain meteorological observations collected during the Science of Nowcasting Winter Weather for the Vancouver 2010 Olympics and Paralympics (SNOW-V10) project, which was supported by the Fog Remote Sensing and Modeling (FRAM) project. The Roundhouse (RND) meteorological station was located 1,856 m above sea level and was subject to extreme winter weather conditions. Below this site, there were three additional observation sites at 1,640, 1,320, and 774 m. These four stations provided some or all of the following measurements at 1 min resolution: precipitation rate (PR) and amount, cloud/fog microphysics, 3D wind speed (horizontal wind speed, U h; vertical air velocity, w a), visibility (Vis), infrared (IR) and shortwave (SW) radiative fluxes, temperature (T) and relative humidity with respect to water (RHw), and aerosol observations. In this work, comparisons are made to assess the uncertainties and variability in the measurements of Vis, RHw, T, PR, and wind for various winter weather conditions. Ground-based cloud imaging probe (GCIP) measurements of snow particles, together with profiling microwave radiometer (PMWR) data, have also been used to assess icing conditions. Overall, the conclusions suggest that uncertainties in the measurements of Vis, PR, T, and RH can be as large as 50, >60, 50, and >20 %, respectively, and these numbers may increase depending on the magnitudes of U h, T, Vis, and PR. Variability of observations along the Whistler Mountain slope (~500 m) suggested that to verify the models, model space resolution should be better than 100 m and time scales better than 1 min. It is also concluded that differences between observed and model-based parameters are strongly related to a model's capability to accurately predict liquid water content (LWC), PR, and RHw over complex topography.

  14. The AmeriFlux Network of Long-Term CO{sub 2} Flux Measurement Stations: Methodology and Intercomparability

    SciTech Connect

    Hollinger, D. Y.; Evans, R. S.

    2003-05-20

    A portable flux measurement system has been used within the AmeriFlux network of CO{sub 2} flux measurement stations to enhance the comparability of data collected across the network. No systematic biases were observed in a comparison between portable system and site H, LE, or CO{sub 2} flux values although there were biases observed between the portable system and site measurement of air temperature and PPFD. Analysis suggests that if values from two stations differ by greater than 26% for H, 35% for LE, and 32% for CO{sub 2} flux they are likely to be significant. Methods for improving the intercomparability of the network are also discussed.

  15. Results of attenuation measurements for optical wireless channels under dense fog conditions regarding different wavelengths

    NASA Astrophysics Data System (ADS)

    Flecker, B.; Gebhart, M.; Leitgeb, E.; Sheikh Muhammad, S.; Chlestil, C.

    2006-08-01

    Free Space Optics (FSO) has gained considerable importance in this decade of demand for high bandwidth transmission capabilities. FSO can provide the last mile solution, but the availability and reliability issues concerned with it can not be ignored, and requires thorough investigations. In this work, we present our results about light attenuation at 950 and 850 nm wavelengths in continental city fog conditions with peak values up to 130 dB/km and compare them with attenuation under dense maritime conditions with peak values up to 480 dB/km. Dense fog is the most severe limiting factor in terrestrial optical wireless applications and light propagation in fog has properties in the spatial, spectral and the time domain, which are of importance to free-space optic data communication. In 2004 (within a short term scientific mission of COST 270) measurements of very dense maritime fog and low clouds were made in the mountains of La Turbie, close to the coast of southern France. Using the same equipment, the measurements were continued for the conditions of the continental city of Graz, Austria. This campaign was done in the winter months from 2004 to 2005 and 2005 to 2006 and allows us to compare fog properties for different environments, and the impact of snow fall. We provide detail analysis of a fog and a snow event for better understanding of their attenuation behavior.

  16. Measuring and modeling maize evapotranspiration under plastic film-mulching condition

    NASA Astrophysics Data System (ADS)

    Li, Sien; Kang, Shaozhong; Zhang, Lu; Ortega-Farias, Samuel; Li, Fusheng; Du, Taisheng; Tong, Ling; Wang, Sufen; Ingman, Mark; Guo, Weihua

    2013-10-01

    Plastic film-mulching techniques have been widely used over a variety of agricultural crops for saving water and improving yield. Accurate estimation of crop evapotranspiration (ET) under the film-mulching condition is critical for optimizing crop water management. After taking the mulching effect on soil evaporation (Es) into account, our study adjusted the original Shuttleworth-Wallace model (MSW) in estimating maize ET and Es under the film-mulching condition. Maize ET and Es respectively measured by eddy covariance and micro-lysimeter methods during 2007 and 2008 were used to validate the performance of the Penman-Monteith (PM), the original Shuttleworth-Wallace (SW) and the MSW models in arid northwest China. Results indicate that all three models significantly overestimated ET during the initial crop stage in the both years, which may be due to the underestimation of canopy resistance induced by the Jarvis model for the drought stress in the stage. For the entire experimental period, the SW model overestimated half-hourly maize ET by 17% compared with the eddy covariance method (ETEC) and overestimated daily Es by 241% compared with the micro-lysimeter measurements (EL), while the PM model only underestimated daily maize ET by 6%, and the MSW model only underestimated half-hourly maize ET by 2% and Es by 7% during the whole period. Thus the PM and MSW models significantly improved the accuracy against the original SW model and can be used to estimate ET and Es under the film-mulching condition.

  17. Measurement of Survival Time in Brachionus Rotifers: Synchronization of Maternal Conditions.

    PubMed

    Kaneko, Gen; Yoshinaga, Tatsuki; Gribble, Kristin E; Welch, David M; Ushio, Hideki

    2016-01-01

    Rotifers are microscopic cosmopolitan zooplankton used as models in ecotoxicological and aging studies due to their several advantages such as short lifespan, ease of culture, and parthenogenesis that enables clonal culture. However, caution is required when measuring their survival time as it is affected by maternal age and maternal feeding conditions. Here we provide a protocol for powerful and reproducible measurement of the survival time in Brachionus rotifers following a careful synchronization of culture conditions over several generations. Empirically, poor synchronization results in early mortality and a gradual decrease in survival rate, thus resulting in weak statistical power. Indeed, under such conditions, calorie restriction (CR) failed to significantly extend the lifespan of B. plicatilis although CR-induced longevity has been demonstrated with well-synchronized rotifer samples in past and present studies. This protocol is probably useful for other invertebrate models, including the fruitfly Drosophila melanogaster and the nematode Caenorhabditis elegans, because maternal age effects have also been reported in these species. PMID:27500471

  18. Measurement of thermal regain in duct systems located in partially conditioned buffer spaces. Informal report

    SciTech Connect

    Andrews, J.W.

    1994-07-01

    Thermal losses from duct systems have been shown to be a significant fraction of the heat or cooling energy delivered by the space-conditioning equipment. However, when the ducts are located in a partially conditioned buffer space such as a basement, a portion of these losses are effectively regained through system interactions with the building. This paper presents two methods of measuring this regain effect. One is based on the relative thermal resistances between the conditioned space and the buffer space, on the one hand, and between the buffer space and the outside, on the other. The second method is based on a measured drop in the buffer-space temperature when steps are taken to reduce the duct losses. The second method is compared with results of an extensive research project that are published in a major professional society handbook. The thermal regain fraction using the drop in basement temperature was found to be 0.68, while that obtained from an analysis of the system performance data, without using the basement temperature, was 0.59.

  19. Characterization of Process Conditions in Industrial Stainless Steelmaking Electric Arc Furnace Using Optical Emission Spectrum Measurements

    NASA Astrophysics Data System (ADS)

    Aula, Matti; Leppänen, Ahti; Roininen, Juha; Heikkinen, Eetu-Pekka; Vallo, Kimmo; Fabritius, Timo; Huttula, Marko

    2014-06-01

    Emission spectroscopy is a potential method for gaining information on electric arc furnace (EAF) process conditions. Previous studies published in literature on industrial EAF emission spectra have focused on a smaller scales and DC arc furnaces. In this study emission spectrum measurements were conducted for 140t AC stainless steelmaking EAF at Outokumpu Stainless Oy, Tornio Works, Finland. Four basic types of emission spectra were obtained during the EAF process cycle. The first one is obscured by scrap steel, the second is dominated by thermal radiation of the slag, the third is dominated by alkali peaks and sodium D-lines and the fourth is characterized by multiple atomic emission peaks. The atomic emission peaks were identified by comparing them to the NIST database for atomic emission lines and previous laboratory measurements on EAF slag emission spectra. The comparison shows that the optic emission of an arc is dominated by slag components. Plasma conditions were analyzed by deriving plasma temperature from optical emissions of Ca I lines. The analysis suggests that accurate information on plasma conditions can be gained from outer plasma having a plasma temperature below 7000 K (6727 °C).

  20. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally...

  1. Stability and intra-tester reliability of an in vivo measurement of thoracic axial rotation using an innovative methodology.

    PubMed

    Heneghan, Nicola R; Hall, Alison; Hollands, Mark; Balanos, George M

    2009-08-01

    The aim of this study was to evaluate the stability and intra-tester reliability of an innovative approach to measure active thoracic spine axial rotation. Ultrasound imaging of a thoracic vertebra in conjunction with Polhemus motion analysis of the transducer was used to measure axial thoracic spine rotation in a functional position. The range of motion in a convenience sample of asymptomatic subjects (n=24) was calculated across ten repetitions of a single trial to evaluate stability. The protocol was repeated the same day and 7-10 days later to provide data for within and between day intra-tester reliability. Mean total range of axial rotation was 85.15 degrees across a single trial with SD=14.8, CV=17.4, SEM=3.04. SEM ranged 0.63-3.37 for individual subjects and 2.60-3.64 across repetitions. Stability of performance occurred at repetitions 2-4. Intra-tester reliability (ICC(2,1)) was excellent within day (0.89-0.98) and good/excellent between days (0.720.94). Bland-Altman plots however suggest that agreement may range from 0 to 10% for within day measures and from 0 to 15% for between day measures. Whether this combined approach has sufficient precision and accuracy as a clinical research tool has yet to be fully evaluated. PMID:19046655

  2. Measuring social inequality with quantitative methodology: Analytical estimates and empirical data analysis by Gini and k indices

    NASA Astrophysics Data System (ADS)

    Inoue, Jun-ichi; Ghosh, Asim; Chatterjee, Arnab; Chakrabarti, Bikas K.

    2015-07-01

    Social inequality manifested across different strata of human existence can be quantified in several ways. Here we compute non-entropic measures of inequality such as Lorenz curve, Gini index and the recently introduced k index analytically from known distribution functions. We characterize the distribution functions of different quantities such as votes, journal citations, city size, etc. with suitable fits, compute their inequality measures and compare with the analytical results. A single analytic function is often not sufficient to fit the entire range of the probability distribution of the empirical data, and fit better to two distinct functions with a single crossover point. Here we provide general formulas to calculate these inequality measures for the above cases. We attempt to specify the crossover point by minimizing the gap between empirical and analytical evaluations of measures. Regarding the k index as an 'extra dimension', both the lower and upper bounds of the Gini index are obtained as a function of the k index. This type of inequality relations among inequality indices might help us to check the validity of empirical and analytical evaluations of those indices.

  3. Goal Oriented Teaching Exercise (G.O.T.E.): Methodology for Measuring the Effects of Teaching Strategies.

    ERIC Educational Resources Information Center

    Hill, Russell A.; And Others

    To test the effectiveness of the Goal Oriented Teaching Exercise (GOTE), a six-day unit for measuring the effects of teaching strategies, four junior high school teachers received a teachers manual, information on instructional goals and subject content, and sample test questions all keyed to a content grid (formed by six content topics and two…

  4. Evaluation of alternative approaches for measuring n-octanol/water partition coefficients for methodologically challenging chemicals (MCCs)

    EPA Science Inventory

    Measurements of n-octanol/water partition coefficients (KOW) for highly hydrophobic chemicals, i.e., greater than 108, are extremely difficult and are rarely made, in part because the vanishingly small concentrations in the water phase require extraordinary analytical sensitivity...

  5. Wave-current interactions in deep water conditions: field measurements and analyses

    NASA Astrophysics Data System (ADS)

    Rougier, Gilles; Rey, Vincent; Molcard, Anne

    2015-04-01

    The study of wave - current interaction has drawn interest in oceanography, ocean engineering, maritime navigation and for tides or waves power device design. In the context of the hydrodynamics study along the French Mediterranean coast, a current profiler was deployed near Toulon at the south of the "Port Cros" island. This coastal zone is characterized by a steep slope, the water depth varying from tens meters to several thousand meters over few kilometers from the coast. An ambient current, the "Northern Current", coming from the Ligurian sea (area of Genoa, Italy) and following the coast up to Toulon, is present all over the year. Its mean surface velocity is of about 0.30 m/s, its flow rate of about 1.5 Sv. The region is exposed to two dominating winds: the Mistral, coming from North-West, and Eastern winds. Both generate swell and/or wind waves in either following or opposing current conditions with respect to the Northern Current. A current profiler equipped with a wave tracking system (ACPD workhorse from RDI) was deployed from July to October 2014 in deep water conditions (depth of about 500m). The mooring system allowed the ADCP to measure the current profile from the sea surface down to 25m depth, which corresponds more or less to the depth of influence of waves of periods up to 10s. The collected data include energetic wave conditions in either following or opposing current conditions. The current intensity and its vertical profiles have shown a significant temporal variability according to the meteorological conditions. Effects of the wave conditions on the current properties are discussed. ACKNOWLEDGEMENTS This work was supported by the program BOMBYX and the ANR grant No ANR-13-ASTR-0007.

  6. Multisensor System for Isotemporal Measurements to Assess Indoor Climatic Conditions in Poultry Farms

    PubMed Central

    Bustamante, Eliseo; Guijarro, Enrique; García-Diego, Fernando-Juan; Balasch, Sebastián; Hospitaler, Antonio; Torres, Antonio G.

    2012-01-01

    The rearing of poultry for meat production (broilers) is an agricultural food industry with high relevance to the economy and development of some countries. Periodic episodes of extreme climatic conditions during the summer season can cause high mortality among birds, resulting in economic losses. In this context, ventilation systems within poultry houses play a critical role to ensure appropriate indoor climatic conditions. The objective of this study was to develop a multisensor system to evaluate the design of the ventilation system in broiler houses. A measurement system equipped with three types of sensors: air velocity, temperature and differential pressure was designed and built. The system consisted in a laptop, a data acquisition card, a multiplexor module and a set of 24 air temperature, 24 air velocity and two differential pressure sensors. The system was able to acquire up to a maximum of 128 signals simultaneously at 5 second intervals. The multisensor system was calibrated under laboratory conditions and it was then tested in field tests. Field tests were conducted in a commercial broiler farm under four different pressure and ventilation scenarios in two sections within the building. The calibration curves obtained under laboratory conditions showed similar regression coefficients among temperature, air velocity and pressure sensors and a high goodness fit (R2 = 0.99) with the reference. Under field test conditions, the multisensor system showed a high number of input signals from different locations with minimum internal delay in acquiring signals. The variation among air velocity sensors was not significant. The developed multisensor system was able to integrate calibrated sensors of temperature, air velocity and differential pressure and operated succesfully under different conditions in a mechanically-ventilated broiler farm. 
This system can be used to obtain quasi-instantaneous fields of the air velocity and temperature, as well as differential

  7. Field testing and adaptation of a methodology to measure "in-stream" values in the Tongue River, northern Great Plains (NGP) region

    USGS Publications Warehouse

    Bovee, Ken D.; Gore, James A.; Silverman, Arnold J.

    1978-01-01

    A comprehensive, multi-component in-stream flow methodology was developed and field tested in the Tongue River in southeastern Montana. The methodology incorporates a sensitivity for the flow requirements of a wide variety of in-stream uses, and the flexibility to adjust flows to accommodate seasonal and sub-seasonal changes in the flow requirements for different areas. In addition, the methodology provides the means to accurately determine the magnitude of the water requirement for each in-stream use. The methodology can be a powerful water management tool in that it provides the flexibility and accuracy necessary in water use negotiations and evaluation of trade-offs. In contrast to most traditional methodologies, in-stream flow requirements were determined by additive independent methodologies developed for: 1) fisheries, including spawning, rearing, and food production; 2) sediment transport; 3) the mitigation of adverse impacts of ice; and 4) evapotranspiration losses. Since each flow requirement varied in important throughout the year, the consideration of a single in-stream use as a basis for a flow recommendation is inadequate. The study shows that the base flow requirement for spawning shovelnose sturgeon was 13.0 m3/sec. During the same period of the year, the flow required to initiate the scour of sediment from pools is 18.0 m3/sec, with increased scour efficiency occurring at flows between 20.0 and 25.0 m3/sec. An over-winter flow of 2.83 m3/sec. would result in the loss of approximately 80% of the riffle areas to encroachment by surface ice. At the base flow for insect production, approximately 60% of the riffle area is lost to ice. Serious damage to the channel could be incurred from ice jams during the spring break-up period. A flow of 12.0 m3/sec. is recommended to alleviate this problem. Extensive ice jams would be expected at the base rearing and food production levels. 
The base rearing flow may be profoundly influenced by the loss of streamflow

  8. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also eliminating the existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished based on the analytical expressions of Zernike-polynomials and a power spectral density model, such re-sampling does not introduce any aliasing and interpolation errors as is done by the conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering method. Also, this new method automatically eliminates the measurement noise and other measurement errors such as artificial discontinuity. The developmental cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specs on the optical quality of individual optics before they are fabricated through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match it with the gridded data format of a model validation tool, and (2) eliminating the surface measurement noise or measurement errors such that the resulted surface height map is continuous or smoothly-varying. So far, the preferred method used for re-sampling a surface map is two-dimensional interpolation. The main problem of this method is that the same pixel can take different values when the method of interpolation is changed among the different methods such as the "nearest," "linear," "cubic," and "spline" fitting in Matlab. 
The conventional, FFT-based spatial filtering method used to

  9. The Deflection Plate Analyzer: A Technique for Space Plasma Measurements Under Highly Disturbed Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H., Jr.; Dutton, Ken; Martinez, Nelson; Smith, Dennis; Stone, Nobie H.

    2004-01-01

    A technique has been developed to measure the characteristics of space plasmas under highly disturbed conditions; e.g., non-Maxwellian plasmas with strong drifting populations and plasmas contaminated by spacecraft outgassing. The present method is an extension of the capabilities of the Differential Ion Flux Probe (DIFP) to include a mass measurement that does not include either high voltage or contamination sensitive devices such as channeltron electron multipliers or microchannel plates. This reduces the complexity and expense of instrument fabrication, testing, and integration of flight hardware as compared to classical mass analyzers. The new instrument design is called the Deflection Plate Analyzer (DPA) and can deconvolve multiple ion streams and analyze each stream for ion flux intensity (density), velocity (including direction of motion), mass, and temperature (or energy distribution). The basic functionality of the DPA is discussed. The performance characteristics of a flight instrument as built for an electrodynamic tether mission, the Propulsive Small Expendable Deployer System (ProSEDS), and the instrument s role in measuring key experimental conditions are also discussed.

  10. Discriminative fear conditioning to context expressed by multiple measures of fear in the rat.

    PubMed

    Antoniadis, E A; McDonald, R J

    1999-05-01

    There has been a renewed interest in the neural basis of fear conditioning to context. These current approaches are accompanied by some limitations including the use of short testing windows, non-discriminative paradigms, and unitary fear response assessment. In an attempt to circumvent these limitations, a discriminative context procedure assessing multiple response measures of fear was used in the present study. Conditioning consisted of three training sessions and each session consisted of 2 days. On day one, the animals were placed in the paired context and received three foot shocks. On the other day, they were placed in the unpaired chamber in the absence of any aversive event. Animals were tested after each training session and the response measures of fear recorded included: preference, freezing, heart rate, ultrasonic vocalizations, defecation, body temperature, urination and locomotion. The results suggest that behavioral, as well as physiological changes evoked by fearful stimuli become associated with the context in which the aversive event occurred. In general these findings also suggest that there are different learning parameters for the measures of fear examined in this paradigm.

  11. A method to measure winter precipitation and sublimation under global warming conditions

    NASA Astrophysics Data System (ADS)

    Herndl, Markus; Slawitsch, Veronika; von Unold, Georg

    2016-04-01

    Winter precipitation and snow sublimation are fundamental components of the alpine moisture budget. Much work has been done in the study of these processes and its important contribution to the annual water balance. Due to the above-average sensitivity of the alpine region to climate change, a change in the importance and magnitude of these water balance parameters can be expected. To determine these effects, a lysimeter-facility enclosed in an open-field climate manipulation experiment was established in 2015 at AREC Raumberg-Gumpenstein which is able to measure winter precipitation and sublimation under global warming conditions. In this facility, six monolithic lysimeters are equipped with a snow cover monitoring system, which separates the snow cover above the lysimeter automatically from the surrounding snow cover. Three of those lysimeters were exposed to a +3°C scenario and three lysimeters to ambient conditions. Weight data are recorded every minute and therefore it is possible to get high-resolution information about the water balance parameter in winter. First results over two snow event periods showed that the system can measure very accurately winter precipitation and sublimation especially in comparison with other measurement systems and usually used models. Also first trends confirm that higher winter temperatures may affect snow water equivalent and snow cover duration. With more data during the next years using this method, it is possible to quantify the influence of global warming on water balance parameters during the winter periods.

  12. Measurements in Transitional Boundary Layers Under High Free-Stream Turbulence and Strong Acceleration Conditions

    NASA Technical Reports Server (NTRS)

    Volino, Ralph J.; Simon, Terrence W.

    1995-01-01

    Measurements from transitional, heated boundary layers along a concave-curved test wall are presented and discussed. A boundary layer subject to low free-stream turbulence intensity (FSTI), which contains stationary streamwise (Gortler) vortices, is documented. The low FSTI measurements are followed by measurements in boundary layers subject to high (initially 8%) free-stream turbulence intensity and moderate to strong streamwise acceleration. Conditions were chosen to simulate those present on the downstream half of the pressure side of a gas turbine airfoil. Mean flow characteristics as well as turbulence statistics, including the turbulent shear stress, turbulent heat flux, and turbulent Prandtl number, are documented. A technique called "octant analysis" is introduced and applied to several cases from the literature as well as to data from the present study. Spectral analysis was applied to describe the effects of turbulence scales of different sizes during transition. To the authors'knowledge, this is the first detailed documentation of boundary layer transition under such high free-stream turbulence conditions.

  13. Embedded optical probes for simultaneous pressure and temperature measurement of materials in extreme conditions

    NASA Astrophysics Data System (ADS)

    Sandberg, R. L.; Rodriguez, G.; Gibson, L. L.; Dattelbaum, D. M.; Stevens, G. D.; Grover, M.; Lalone, B. M.; Udd, E.

    2014-05-01

    We present recent efforts at Los Alamos National Laboratory (LANL) to develop sensors for simultaneous, in situ pressure and temperature measurements under dynamic conditions by using an all-optical fiber-based approach. While similar tests have been done previously in deflagration-to-detonation tests (DDT), where pressure and temperature were measured to 82 kbar and 400°C simultaneously, here we demonstrate the use of embedded fiber grating sensors to obtain high temporal resolution, in situ pressure measurements in inert materials. We present two experimental demonstrations of pressure measurements: (1) under precise shock loading from a gas-gun driven plate impact and (2) under high explosive driven shock in a water filled vessel. The system capitalizes on existing telecom components and fast transient digitizing recording technology. It operates as a relatively inexpensive embedded probe (single-mode 1550 nm fiber-based Bragg grating) that provides a continuous fast pressure record during shock and/or detonation. By applying well-controlled shock wave pressure profiles to these inert materials, we study the dynamic pressure response of embedded fiber Bragg gratings to extract pressure amplitude of the shock wave and compare our results with particle velocity wave profiles measured simultaneously.

  14. Comparative Analysis of Power Quality Instruments in Measuring Power under Distorted Conditions

    NASA Astrophysics Data System (ADS)

    Belchior, Fernando; Galvão, Thiago Moura; Ribeiro, Paulo Fernando; Silveira, Paulo Márcio

    2015-10-01

    This paper aims to evaluate the performance of commercial power quality (PQ) instruments. By using a high precision programmable voltage and current source, five meters from different manufacturers are analyzed and compared. At first, three-phase voltage signals are applied to those PQ instruments, considering harmonic distortion, voltage dip, voltage swell, as well as unbalanced voltages. These events are measured, compared and evaluated considering the different instruments. In addition, voltage and current signals are applied under different conditions, with and without harmonic and unbalances, in order to obtain the measurements of electrical power. After analyzing the data generated in all these tests, efforts are focused on establishing a relationship between the accuracy of the measurements. Considering the percentages of errors, a score is assigned to each instrument. The tests have shown "excellent" and "good" results regarding measuring active and apparent power, as well as power factor. However, for non-active power, almost all instruments had lower performance. This work is relevant considering that Brazil is deploying standardization in terms of power quality measurements aiming towards regulatory procedures and index limits.

  15. On eddy accumulation with limited conditional sampling to measure air-surface exchange

    SciTech Connect

    Wesely, M.L.; Hart, R.L.

    1994-01-01

    An analysis of turbulence data collected at a height of 12.3 m above grasslands was carried out to illustrate some of the limitations and possible improvements in methods to compute vertical fluxes of trace substances by the eddy accumulation technique with conditional sampling. The empirical coefficient used in the technique has a slight dependence on atmospheric stability, which can be minimized by using a threshold vertical velocity equal to approximately 0.75{sigma}{sub w}, below which chemical sampling is suspended. This protocol results in a smaller chemical sample but increases the differences in concentrations by approximately 70%. For effective conditional sampling when mass is being accumulated in a trap or reservoir, the time of sampling during updrafts versus downdrafts should be measured and used to adjust estimates of the mean concentrations.

  16. Measurement of Richtmyer–Meshkov mode coupling under steady shock conditions and at high energy density

    SciTech Connect

    Di Stefano, Carlos A.; Malamud, G.; Kuranz, C. C.; Klein, S. R.; Drake, R. P.

    2015-10-19

    Here, we present experiments observing Richtmyer–Meshkov mode coupling and bubble competition in a system arising from well-characterized initial conditions and driven by a strong (Mach ~ 8) shock. These measurements and the analysis method developed to interpret them provide an important step toward the possibility of observing self-similarity under such conditions, as well as a general platform for performing and analyzing hydrodynamic instability experiments. A key feature of these experiments is that the shock is sustained sufficiently long that this nonlinear behavior occurs without decay of the shock velocity or other hydrodynamic properties of the system, which facilitates analysis and allows the results to be used in the study of analytic models.

  17. Measurement of Richtmyer–Meshkov mode coupling under steady shock conditions and at high energy density

    DOE PAGES

    Di Stefano, Carlos A.; Malamud, G.; Kuranz, C. C.; Klein, S. R.; Drake, R. P.

    2015-10-19

    Here, we present experiments observing Richtmyer–Meshkov mode coupling and bubble competition in a system arising from well-characterized initial conditions and driven by a strong (Mach ~ 8) shock. These measurements and the analysis method developed to interpret them provide an important step toward the possibility of observing self-similarity under such conditions, as well as a general platform for performing and analyzing hydrodynamic instability experiments. A key feature of these experiments is that the shock is sustained sufficiently long that this nonlinear behavior occurs without decay of the shock velocity or other hydrodynamic properties of the system, which facilitates analysis and allows the results to be used in the study of analytic models.

  18. Functional and anatomical measures for outflow boundary conditions in atherosclerotic coronary bifurcations.

    PubMed

    Schrauwen, Jelle T C; Coenen, Adriaan; Kurata, Akira; Wentzel, Jolanda J; van der Steen, Antonius F W; Nieman, Koen; Gijsen, Frank J H

    2016-07-26

    The aim of this research was to determine the influence of anatomy-based and function-based outflow boundary conditions for computational fluid dynamics (CFD) on fractional flow reserve (FFR) and wall shear stress (WSS) in mildly diseased coronary bifurcations. For 10 patient-specific bifurcations, three simulations were set up with different outflow conditions, while the inflow was kept constant. First, the outflow conditions were based on the diameters of the outlets. Second, they were based on estimates of the volume of the myocardium that depended on the outlets. Third, they were based on a myocardial flow measure derived from computed tomography perfusion imaging (CTP). The difference in outflow ratio between the perfusion-based and the diameter-based approach was -7 p.p. [-14 p.p.:7 p.p.] (median percentage points and interquartile range), and between the perfusion-based and the volume-based approach it was -2 p.p. [-2 p.p.:1 p.p.]. Despite these differences, the computed FFRs matched very well. A quantitative analysis of the WSS results showed very high correlations between the methods, with r² ranging from 0.90 to 1.00. Despite the high correlations, however, the diameter-based and volume-based approaches generally underestimated the WSS compared with the perfusion-based approach. These differences disappeared after normalization. We demonstrated the potential of CTP for setting patient-specific boundary conditions for atherosclerotic coronary bifurcations. FFR and normalized WSS were unaffected by the variations in outflow ratios. In order to compute absolute WSS, a functional measure to set the outflow ratio might be of added value in this type of vessel.
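
    Diameter-based outflow boundary conditions of the kind described above are commonly implemented by splitting the total outflow among the outlets in proportion to a power of outlet diameter. A minimal sketch, assuming a Murray's-law exponent; the paper's exact rule may differ:

    ```python
    def outflow_split(diameters_mm, exponent=3.0):
        """Fraction of total outflow assigned to each outlet, proportional
        to diameter**exponent. exponent=3 corresponds to Murray's law;
        exponent=2 gives an area-based split. Illustrative only - not
        necessarily the paper's diameter-based scheme.
        """
        weights = [d ** exponent for d in diameters_mm]
        total = sum(weights)
        return [w / total for w in weights]
    ```

    For example, equal outlet diameters yield a 50/50 split regardless of the exponent, while a larger daughter branch receives the larger share.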

  19. A methodology for combining multiple commercial data sources to improve measurement of the food and alcohol environment: applications of geographical information systems.

    PubMed

    Mendez, Dara D; Duell, Jessica; Reiser, Sarah; Martin, Deborah; Gradeck, Robert; Fabio, Anthony

    2014-11-01

    Commercial data sources have been increasingly used to measure and locate community resources. We describe a methodology for combining commercial data on the food and alcohol environment and comparing the differences between sources. We used two commercial databases (InfoUSA and Dun & Bradstreet) for 2003 and 2009 to obtain information on food and alcohol establishments and developed a matching process combining computer algorithms and manual review, applying ArcGIS to geocode addresses and using the Standard Industrial Classification and North American Industry Classification taxonomies for establishment type, together with establishment names. We constructed population- and area-based density measures (e.g. grocery stores), assessed differences across data sources and used ArcGIS to map the densities. The matching process resulted in 8,705 and 7,078 unique establishments for 2003 and 2009, respectively. More establishments were captured in the combined dataset than by relying on one data source alone; the additional establishments captured ranged from 1,255 to 2,752 in 2009. The correlations for the density measures between the two data sources were highest for alcohol outlets (r = 0.75 and 0.79 for per capita and area, respectively) and lowest for grocery stores/supermarkets (r = 0.32 for both). This process of applying geographical information systems to combine multiple commercial data sources and develop measures of the food and alcohol environment captured more establishments than relying on one data source alone. This replicable methodology was found to be useful for understanding the food and alcohol environment when local or public data are limited.
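
    The matching step described above (geocoded address, industry classification and establishment name) can be sketched as a simple record-linkage heuristic. The field names and threshold below are hypothetical, not taken from the paper:

    ```python
    import re
    from difflib import SequenceMatcher

    def normalize(name):
        """Lower-case and strip punctuation before comparing names."""
        return re.sub(r"[^a-z0-9 ]", "", name.lower()).strip()

    def name_similarity(a, b):
        return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

    def match_establishments(rec_a, rec_b, name_thresh=0.85):
        """Heuristic match between two database records: same geocoded
        address, same 4-digit industry-class prefix, similar names.
        Field names ('address', 'naics', 'name') are hypothetical."""
        return (rec_a["address"] == rec_b["address"]
                and rec_a["naics"][:4] == rec_b["naics"][:4]
                and name_similarity(rec_a["name"], rec_b["name"]) >= name_thresh)
    ```

    In practice, candidate pairs that fall near the similarity threshold would go to the manual-review stage the authors describe.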

  20. ACCF/AHA methodology for the development of quality measures for cardiovascular technology: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Performance Measures.

    PubMed

    Bonow, Robert O; Douglas, Pamela S; Buxton, Alfred E; Cohen, David J; Curtis, Jeptha P; Delong, Elizabeth; Drozda, Joseph P; Ferguson, T Bruce; Heidenreich, Paul A; Hendel, Robert C; Masoudi, Frederick A; Peterson, Eric D; Taylor, Allen J

    2011-09-27

    Consistent with the growing national focus on healthcare quality, the American College of Cardiology Foundation (ACCF) and the American Heart Association (AHA) have taken a leadership role over the past decade in developing measures of the quality of cardiovascular care by convening a joint ACCF/AHA Task Force on Performance Measures. The Task Force is charged with identifying the clinical topics appropriate for the development of performance measures and with assembling writing committees composed of clinical and methodological experts in collaboration with appropriate subspecialty societies. The Task Force has also created methodology documents that offer guidance in the development of process, outcome, composite, and efficiency measures. Cardiovascular performance measures using existing ACCF/AHA methodology are based on Class I or Class III guidelines recommendations, usually with Level A evidence. These performance measures, based on evidence-based ACCF/AHA guidelines, remain the most rigorous quality measures for both internal quality improvement and public reporting. However, many of the tools for diagnosis and treatment of cardiovascular disease involve advanced technologies, such as cardiac imaging, for which there are often no underlying guideline documents. Because these technologies affect the quality of cardiovascular care and also have the potential to contribute to cardiovascular health expenditures, there is a need for more critical assessment of the use of technology, including the development of quality and performance measures in areas in which guideline recommendations are absent. The evaluation of quality in the use of cardiovascular technologies requires consideration of multiple parameters that differ from other healthcare processes. The present document describes methodology for development of 2 new classes of quality measures in these situations, appropriate use measures and structure/safety measures. 
Appropriate use measures are based on

  1. Ice Accretion Measurements on an Airfoil and Wedge in Mixed-Phase Conditions

    NASA Technical Reports Server (NTRS)

    Struk, Peter; Bartkus, Tadas; Tsao, Jen-Ching; Currie, Tom; Fuleki, Dan

    2015-01-01

    This paper describes ice accretion measurements from experiments conducted at the National Research Council (NRC) of Canada's Research Altitude Test Facility during 2012. Due to numerous engine power loss events associated with high altitude convective weather, potential ice accretion within an engine due to ice crystal ingestion is being investigated collaboratively by NASA and NRC. These investigations examine the physical mechanisms of ice accretion on surfaces exposed to ice crystal and mixed phase conditions, similar to those believed to exist in core compressor regions of jet engines. A further objective of these tests is to examine scaling effects since altitude appears to play a key role in this icing process.

  2. Measurements of heat fluxes and soil moisture patterns in the field conditions

    NASA Astrophysics Data System (ADS)

    Sanda, M.; Snehota, M.; Haase, T.; Wild, J.

    2011-12-01

    A new combined thermal and soil moisture unit, coded TMS2, is presented. It is a prototype that builds on good experience with TMS1. The device combines three thermometers positioned at approximately -10, 0 and +15 cm relative to the soil surface when installed vertically. Soil moisture is measured by the time domain transmission (TDT) principle over the full range of soil moisture. The new version incorporates a lifetime power supply for approximately 5 years of operation and lifetime permanent data storage (0.5 million logs) when values are acquired every 10 minutes. A lifetime operation log accompanies the data storage, with lockable data blocks. Data are retrieved by a contact portable pocket collector. Both vertical/surface and buried/subsurface installation are possible thanks to an additional communication interface available on demand. The original model, TMS1, proved durable in a harsh outdoor environment, functioning well in wet conditions and withstanding mechanical destruction. Extended testing in the sandstone area of the Bohemian Switzerland National Park, Czech Republic, has been performed since 2009 by the Institute of Botany of the ASCR. Results of long-term measurements at hundreds of localities have been successfully used for i) evaluation of species-specific environmental requirements (for different species of plants, bryophytes and fungi) and ii) extrapolation of microclimatic conditions over large areas of rugged sandstone relief with the assistance of an accurate, LiDAR-based digital terrain model. TMS1 units are also applied to continuous measurement of the temperature and moisture of coarse woody debris, which serves as an important substrate for the establishment and growth of seedlings and is thus crucial for the natural regeneration of many forest ecosystems. The TMS1 sensors have been tested and calibrated in the soil laboratories of the Czech Technical University in Prague for three soil materials: arenic cambisol, podzol and quartz sand, showing good linearity and minor influence of the

  3. Use Conditions and Efficiency Measurements of DC Power Optimizers for Photovoltaic Systems: Preprint

    SciTech Connect

    Deline, C.; MacAlpine, S.

    2013-10-01

    No consensus standard exists for estimating the annual conversion efficiency of DC-DC converters or power optimizers in photovoltaic (PV) applications. The performance benefits of PV power electronics, including per-panel DC-DC converters, depend in large part on the operating conditions of the PV system, along with the performance characteristics of the power optimizer itself. This work presents a case study of three system configurations that take advantage of the capabilities of DC power optimizers. Measured conversion efficiencies of DC-DC converters are applied to these scenarios to determine the annual weighted operating efficiency. A simplified general method of reporting weighted efficiency is given, based on the California Energy Commission's CEC efficiency rating and several input/output voltage ratios. Efficiency measurements of commercial power optimizer products are presented using the new performance metric, along with a description of the limitations of the approach.
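
    The CEC rating referenced above is a weighted average of measured efficiency at six fractional load points. A minimal sketch using the commonly cited CEC weighting coefficients (quoted from memory, so treat them as an assumption):

    ```python
    # Commonly cited CEC weighting coefficients: fraction of rated load -> weight
    CEC_WEIGHTS = {0.10: 0.04, 0.20: 0.05, 0.30: 0.12,
                   0.50: 0.21, 0.75: 0.53, 1.00: 0.05}

    def cec_weighted_efficiency(eff_at_load):
        """Weighted efficiency from measured efficiencies at the six CEC
        load points (eff_at_load maps fractional load -> efficiency)."""
        return sum(weight * eff_at_load[load]
                   for load, weight in CEC_WEIGHTS.items())
    ```

    Because the weights sum to one, a converter with flat efficiency across all load points gets that same value as its weighted rating.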

  4. Specific absorption rate determination of magnetic nanoparticles through hyperthermia measurements in non-adiabatic conditions

    NASA Astrophysics Data System (ADS)

    Coïsson, M.; Barrera, G.; Celegato, F.; Martino, L.; Vinai, F.; Martino, P.; Ferraro, G.; Tiberto, P.

    2016-10-01

    An experimental setup for magnetic hyperthermia operating in non-adiabatic conditions is described. A thermodynamic model that takes into account the heat exchanged by the sample with the surrounding environment is developed. A suitable calibration procedure is proposed that allows the experimental validation of the model. Specific absorption rate can then be accurately determined just from the measurement of the sample temperature at the equilibrium steady state. The setup and the measurement procedure represent a simplification with respect to other systems requiring calorimeters or crucial corrections for heat flow. Two families of magnetic nanoparticles, one superparamagnetic and one characterised by larger sizes and static hysteresis, have been characterised as a function of field intensity, and specific absorption rate and intrinsic loss power have been obtained.
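
    The steady-state idea in the abstract (heat loss to the environment balancing absorbed power at equilibrium) reduces to a one-line energy balance. A sketch, assuming a lumped model; the loss coefficient comes from the calibration the authors describe, and the numbers are illustrative:

    ```python
    def sar_from_steady_state(T_eq, T_env, k_loss, m_np):
        """Specific absorption rate from a non-adiabatic steady state.

        Energy balance C*dT/dt = P - k_loss*(T - T_env) gives, at
        equilibrium (dT/dt = 0), P = k_loss*(T_eq - T_env).
        k_loss (W/K) comes from a calibration; m_np is nanoparticle mass (g).
        """
        absorbed_power = k_loss * (T_eq - T_env)   # watts
        return absorbed_power / m_np               # W/g
    ```

    This is what lets SAR be read off "just from the measurement of the sample temperature at the equilibrium steady state" once the setup is calibrated.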

  5. Measuring coupling forces woodcutters exert on saws in real working conditions.

    PubMed

    Malinowska-Borowska, Jolanta; Harazin, Barbara; Zieliński, Grzegorz

    2012-01-01

    Prolonged exposure to hand-arm vibration (HAV) generated by chainsaws can cause HAV syndrome, i.e., disorders in the upper extremities of forestry workers. Progress of HAV syndrome depends on the intensity of mechanical vibration transmitted throughout the body, which is directly proportional to coupling forces applied by the woodcutter to a vibrating tool. This study aimed to establish a method of measuring coupling forces exerted by chainsaw workers in real working conditions. Coupling forces exerted by workers with their right and left hands were measured with a hydro-electronic force meter. Wood hardness, the type of chainsaw and the kind of forest operation, i.e., felling, cross-cutting or limbing, were considered.

  6. Mass changes in NSTX Surface Layers with Li Conditioning as Measured by Quartz Microbalances

    SciTech Connect

    C.H. Skinner, H.W. Kugel, A. L. Roquemore, PS. Krstic and A. Beste

    2008-06-09

    Dynamic retention, lithium deposition, and the stability of thick deposited layers were measured by three quartz crystal microbalances (QMB) deployed in plasma-shadowed areas at the upper and lower divertor and the outboard midplane in the National Spherical Torus Experiment (NSTX). Deposition of 185 μg/cm² over 3 months in 2007 was measured by a QMB at the lower divertor, while a QMB on the upper divertor, which was shadowed from the evaporator, received an order of magnitude less deposition. During helium glow discharge conditioning, both neutral gas collisions and the ionization and subsequent drift of Li+ interrupted the lithium deposition on the lower divertor. We present calculations of the relevant mean free paths. Occasionally, strong variations in the QMB frequency of thick lithium films were observed, suggesting relaxation of mechanical stress and/or flaking or peeling of the deposited layers.

  7. Corneal Biomechanical Properties in Different Ocular Conditions and New Measurement Techniques

    PubMed Central

    Garcia-Porta, Nery; Salgado-Borges, Jose; Parafita-Mato, Manuel; González-Méijome, Jose Manuel

    2014-01-01

    Several refractive and therapeutic treatments as well as several ocular or systemic diseases might induce changes in the mechanical resistance of the cornea. Furthermore, intraocular pressure measurement, one of the most used clinical tools, is also highly dependent on this characteristic. Corneal biomechanical properties can be measured now in the clinical setting with different instruments. In the present work, we review the potential role of the biomechanical properties of the cornea in different fields of ophthalmology and visual science in light of the definitions of the fundamental properties of matter and the results obtained from the different instruments available. The body of literature published so far provides an insight into how the corneal mechanical properties change in different sight-threatening ocular conditions and after different surgical procedures. The future in this field is very promising with several new technologies being applied to the analysis of the corneal biomechanical properties. PMID:24729900

  8. Measurement of the Critical Distance Parameter Against Icing Conditions on a NACA 0012 Swept Wing Tip

    NASA Technical Reports Server (NTRS)

    Vargas, Mario; Kreeger, Richard E.

    2011-01-01

    This work presents the results of three experiments, one conducted in the Icing Research Tunnel (IRT) at NASA Glenn Research Center and two in the Goodrich Icing Wind Tunnel (IWT). The experiments were designed to measure the critical distance parameter on a NACA 0012 swept wing tip at sweep angles of 45°, 30°, and 15°. A time sequence imaging technique (TSIT) was used to obtain real-time close-up imaging data during the first 2 min of ice accretion formation. The time sequence photographic data were used to measure the critical distance at each icing condition and to study how it develops in real time. The effects on the critical distance of liquid water content, drop size, total temperature, and velocity were studied. The results were interpreted using a simple energy balance on a roughness element.

  9. The 'CommTech' Methodology: A Demand-Driven Approach to Efficient, Productive and Measurable Technology Transfer

    NASA Technical Reports Server (NTRS)

    Horsham, Gray A. P.

    1998-01-01

    Market research sources were used to gather primary data on technological problems and needs from non-aerospace companies in targeted industry sectors. The company-supplied information served as input data to activate, or start up, an internal phased match-making process. This process was based on technical-level relationship exploration, followed by business-level agreement negotiations, and culminated in project management and execution. Space Act Agreements represented near-term outputs. Company product or process commercialization derived from Lewis support, together with measurable economic effects, represented far-term outputs.

  10. The influence of exogenous conditions on mobile measured gamma-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Dierke, C.; Werban, U.; Dietrich, P.

    2012-12-01

    In the past, gamma-ray measurements have been used for geological surveys and exploration using airborne and borehole logging systems. For these applications, the relationships between the measured physical parameter - the concentration of the natural gamma emitters 40K, 238U and 232Th - and geological origin or sedimentary development are well described. Based on these applications and this knowledge, in combination with adjusted sensor systems, gamma-ray measurements are used to derive soil parameters for creating detailed soil maps, e.g., in digital soil mapping (DSM) and in the monitoring of soils. For this, not only qualitative but also quantitative comparability is necessary. Grain size distribution, type of clay minerals and organic matter content are soil parameters that directly influence the gamma-emitter concentration. Additionally, the measured concentration is influenced by exogenous processes such as soil moisture variation due to rain events, foggy weather conditions, or erosion and deposition of material. A time series of gamma-ray measurements was used to observe changes in gamma-ray concentration in a floodplain area in Central Germany. The study area is characterised by high variation in grain size distribution and the occurrence of flooding events. For the survey, we used a 4 l NaI(Tl) detector with GPS connection mounted on a sledge, which was towed across the field sites by a four-wheel vehicle. The comparison of data from different time steps shows similar structures, with minor variation in the data ranges and the shape of structures. However, data measured at different soil moisture contents differ in absolute value. An average increase in soil moisture of 36% leads to a decrease of Th (by 20%), K (by 29%), and U (by 41%). These differences can be explained by the higher attenuation of radiation at higher soil moisture content. The different changes in nuclide concentration will also lead to varying ratios. 
We will present our experiences concerning
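
    Using only the numbers reported above, a first-order moisture sensitivity per nuclide can be computed; this linear approximation is an illustration, not the authors' correction method:

    ```python
    # Observed fractional decreases for an average soil moisture increase
    # of 36% (numbers taken from the study summarized above)
    MOISTURE_INCREASE = 0.36
    DECREASE = {"Th": 0.20, "K": 0.29, "U": 0.41}

    def moisture_sensitivity(frac_decrease, frac_moisture=MOISTURE_INCREASE):
        """First-order sensitivity: fractional signal loss per unit of
        fractional soil-moisture increase (a linear approximation)."""
        return frac_decrease / frac_moisture
    ```

    The differing sensitivities (U strongest, Th weakest) are what drive the varying nuclide ratios the abstract mentions.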

  11. Glycaemic index methodology.

    PubMed

    Brouns, F; Bjorck, I; Frayn, K N; Gibbs, A L; Lang, V; Slama, G; Wolever, T M S

    2005-06-01

    The glycaemic index (GI) concept was originally introduced to classify different sources of carbohydrate (CHO)-rich foods, usually having an energy content of >80% from CHO, according to their effect on post-meal glycaemia. It was assumed to apply to foods that primarily deliver available CHO, causing hyperglycaemia. Low-GI foods were classified as being digested and absorbed slowly and high-GI foods as being rapidly digested and absorbed, resulting in different glycaemic responses. Low-GI foods were found to confer benefits on certain risk factors for CVD and diabetes. Accordingly, it has been proposed that GI classification of foods and drinks could be useful in helping consumers make 'healthy food choices' within specific food groups. Classification of foods according to their impact on blood glucose responses requires a standardised way of measuring such responses. The present review discusses the most relevant methodological considerations and highlights specific recommendations regarding number of subjects, sex, subject status, inclusion and exclusion criteria, pre-test conditions, CHO test dose, blood sampling procedures, sampling times, test randomisation and calculation of the glycaemic response area under the curve. Together, these technical recommendations will help to implement or reinforce measurement of GI in laboratories and help to ensure the quality of results. Since there is current international interest in alternative ways of expressing glycaemic responses to foods, some of these methods are also discussed.
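
    The area-under-the-curve calculation mentioned above is conventionally an incremental AUC: area above the fasting baseline only. A simplified sketch (the standard protocol treats segments that cross the baseline geometrically, which is omitted here):

    ```python
    def incremental_auc(times_min, glucose):
        """Incremental area under the glucose curve above the fasting
        baseline, trapezoidal rule, area below baseline ignored.
        A simplified form of the standard GI calculation."""
        baseline = glucose[0]
        inc = [max(g - baseline, 0.0) for g in glucose]
        auc = 0.0
        for i in range(1, len(times_min)):
            auc += 0.5 * (inc[i] + inc[i - 1]) * (times_min[i] - times_min[i - 1])
        return auc

    def glycaemic_index(auc_test, auc_reference):
        """GI: test-food AUC as a percentage of the reference-food AUC."""
        return 100.0 * auc_test / auc_reference
    ```

    For example, a test food producing half the incremental AUC of the reference carbohydrate would score a GI of 50.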

  12. Transient Measurements Under Simulated Mantle Conditions - Simultaneous DTF-Ultrasonic Interferometry, X-Radiography, XRD

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.; Schilling, F. R.; Lathe, C.; Wunder, B.

    2004-12-01

    The interpretation of seismic data from the Earth's deep interior requires measurements of the physical properties of Earth materials under experimentally simulated mantle conditions. Elastic wave velocity measurement by ultrasonic interferometry is an important tool for determining elastic properties in multi-anvil devices. Whereas the classical sweep method is very time-consuming, the ultrasonic data transfer function (DTF) technique, which simultaneously generates all the frequencies used in the experiment and was first described by Li et al. (2002), requires just a few seconds to record the response of the system. The success of the technique depends substantially on the excitation function and the resolution used for saving the DTF (Mueller et al., 2004a). Background discussion as well as high-pressure/high-temperature results demonstrate how to optimize the technique. Ultrasonic interferometry allows highly precise travel-time measurement on a sample enclosed in a high-pressure multi-anvil device. Under high-pressure conditions, however, the influence of sample deformation on the frequencies of destructive and constructive interference used for the evaluation of the elastic properties might be stronger than that of the shift of the elastic moduli. Consequently, ultrasonic interferometry requires exact sample length measurement under in situ conditions. X-ray imaging using brilliant synchrotron radiation, called X-radiography, produces grey-scale images of the sample under in situ conditions by converting the X-ray image to an optical one with a Ce:YAG crystal. Saving the optical image with a CCD camera, after redirection by a mirror, also requires a few seconds. To derive the sample length, the different brightness of sample, buffer rod and reflector in the electronic image is evaluated (Mueller et al., 2004b). Contrary to XRD measurements, imaging the sample by X-rays requires a beam diameter larger than the sample length. Therefore the fixed

  13. Fast sorption measurements of volatile organic compounds on building materials: Part 1 – Methodology developed for field applications

    PubMed Central

    Rizk, M.; Verriele, M.; Dusanter, S.; Schoemaecker, C.; Le Calve, S.; Locoge, N.

    2016-01-01

    A Proton Transfer Reaction-Mass Spectrometer (PTR-MS) has been coupled to the outlet of a Field and Laboratory Emission Cell (FLEC) to measure volatile organic compound (VOC) concentrations during sorption experiments (Rizk et al., this issue) [1]. The limits of detection of the PTR-MS for three VOCs are presented for different time resolutions (2, 10 and 20 s). The mass transfer coefficient was calculated in the FLEC cavity for the different flow rates. The concentration profiles obtained from sorption experiments performed on a gypsum board and a vinyl flooring are also presented, in comparison with the profile obtained for a Pyrex glass used as a material that does not present any sorption behavior (no sink). Finally, the correlation between the concentration of VOCs adsorbed on the surface of the gypsum board at equilibrium (Cse) and the concentration of VOCs (Ce) measured in the gas phase at equilibrium is presented for benzene, C8 aromatics and toluene. PMID:26937475

  14. Fast sorption measurements of volatile organic compounds on building materials: Part 1 - Methodology developed for field applications.

    PubMed

    Rizk, M; Verriele, M; Dusanter, S; Schoemaecker, C; Le Calve, S; Locoge, N

    2016-03-01

    A Proton Transfer Reaction-Mass Spectrometer (PTR-MS) has been coupled to the outlet of a Field and Laboratory Emission Cell (FLEC) to measure volatile organic compound (VOC) concentrations during sorption experiments (Rizk et al., this issue) [1]. The limits of detection of the PTR-MS for three VOCs are presented for different time resolutions (2, 10 and 20 s). The mass transfer coefficient was calculated in the FLEC cavity for the different flow rates. The concentration profiles obtained from sorption experiments performed on a gypsum board and a vinyl flooring are also presented, in comparison with the profile obtained for a Pyrex glass used as a material that does not present any sorption behavior (no sink). Finally, the correlation between the concentration of VOCs adsorbed on the surface of the gypsum board at equilibrium (Cse) and the concentration of VOCs (Ce) measured in the gas phase at equilibrium is presented for benzene, C8 aromatics and toluene. PMID:26937475

  15. Embedded optical probes for simultaneous pressure and temperature measurement of materials in extreme conditions

    NASA Astrophysics Data System (ADS)

    Sandberg, Richard; Rodriguez, George; Gibson, Lee; Dattelbaum, Dana; Udd, Eric

    2013-06-01

    We present a new technique for simultaneous, in situ pressure and temperature measurements under dynamic conditions by using an all-optical fiber-based approach. While similar tests have been done previously in deflagration-to-detonation tests (DDT), where pressure and temperature were measured to 82 kbar and 400 °C simultaneously, here we demonstrate the use of embedded fiber grating sensors to obtain high temporal resolution in situ pressure measurements in inert materials under precise shock loading from a gas-gun driven plate impact. The system capitalizes on existing telecom components and fast transient digitizing recording technology. It operates as a relatively inexpensive embedded probe (single-mode 1550 nm fiber-based Bragg grating - FBG) that provides a continuous fast pressure record during shock and/or detonation. Fiber Bragg grating sensors have predictable thermal and mechanical response properties with pressure spectrally shifting the reflectance peak at λ = 1550 nm to the blue and temperature shifting the peak to the red. By applying well-controlled steady shock wave pressure profiles to soft materials such as PMMA, we study the dynamic pressure response of embedded fiber Bragg gratings to extract pressure amplitude of the shock wave and compare our results with in situ particle velocity wave profiles measured simultaneously.
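
    When each grating responds linearly to both pressure and temperature, as the abstract describes (pressure shifting the reflectance peak to the blue, temperature to the red), two gratings with different sensitivities allow the two effects to be separated. A sketch under that linearity assumption; the sensitivity coefficients are hypothetical calibration values:

    ```python
    def solve_p_and_t(dl1, dl2, kp1, kt1, kp2, kt2):
        """Recover pressure P and temperature change dT from the
        wavelength shifts of two FBGs, assuming linear responses
            dl1 = kp1*P + kt1*dT
            dl2 = kp2*P + kt2*dT
        by solving the 2x2 linear system."""
        det = kp1 * kt2 - kp2 * kt1
        if det == 0.0:
            raise ValueError("sensitivity coefficients are not independent")
        P = (dl1 * kt2 - dl2 * kt1) / det
        dT = (kp1 * dl2 - kp2 * dl1) / det
        return P, dT
    ```

    The separation only works if the two gratings' pressure-to-temperature sensitivity ratios differ, which is why the determinant check matters.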

  16. Direct measurement of thermal conductivity in solid iron at planetary core conditions.

    PubMed

    Konôpková, Zuzana; McWilliams, R Stewart; Gómez-Pérez, Natalia; Goncharov, Alexander F

    2016-06-01

    The conduction of heat through minerals and melts at extreme pressures and temperatures is of central importance to the evolution and dynamics of planets. In the cooling Earth's core, the thermal conductivity of iron alloys defines the adiabatic heat flux and therefore the thermal and compositional energy available to support the production of Earth's magnetic field via dynamo action. Attempts to describe thermal transport in Earth's core have been problematic, with predictions of high thermal conductivity at odds with traditional geophysical models and direct evidence for a primordial magnetic field in the rock record. Measurements of core heat transport are needed to resolve this difference. Here we present direct measurements of the thermal conductivity of solid iron at pressure and temperature conditions relevant to the cores of Mercury-sized to Earth-sized planets, using a dynamically laser-heated diamond-anvil cell. Our measurements place the thermal conductivity of Earth's core near the low end of previous estimates, at 18-44 watts per metre per kelvin. The result is in agreement with palaeomagnetic measurements indicating that Earth's geodynamo has persisted since the beginning of Earth's history, and allows for a solid inner core as old as the dynamo. PMID:27251283

  17. New portable instrument for the measurement of thermal conductivity in gas process conditions

    NASA Astrophysics Data System (ADS)

    Queirós, C. S. G. P.; Lourenço, M. J. V.; Vieira, S. I.; Serra, J. M.; Nieto de Castro, C. A.

    2016-06-01

    The development of high-temperature gas sensors for monitoring and determining the thermophysical properties of complex process mixtures at high temperatures faces several problems, related to materials compatibility, the sensitivity of the active sensing parts, and lifetime. Ceramic/thin-metal-film based sensors, previously developed for determining the thermal conductivity of molten materials up to 1200 °C, were redesigned, constructed, and applied as thermal conductivity sensors. Platinum resistance thermometers for the temperature measurement were also developed, constructed and tested using the same technology. A new data acquisition system for the thermal conductivity sensors, based on a linearization of the transient hot-strip model and including a portable electronic bridge for the measurement of thermal conductivity in gas process conditions, was also developed. The equipment is capable of measuring the thermal conductivity of gaseous phases with an accuracy of 2%-5% up to 840 °C (95% confidence level). The development of sensors up to 1200 °C, at the core of combustion chambers, will be done in the near future.
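
    The "linearization of the transient hot-strip model" mentioned above typically exploits the long-time log-linear asymptote of the line-source solution. A sketch under that assumption, not necessarily the instrument's exact algorithm:

    ```python
    import math

    def k_from_hot_strip(times_s, delta_T, q_per_length):
        """Thermal conductivity from the long-time asymptote of a
        transient hot-strip (line-source) measurement:
            dT(t) ~ (q / (4*pi*k)) * ln(t) + const
        Fit the slope of dT versus ln(t) by least squares, then
        k = q / (4*pi*slope). q_per_length is the heating power per
        unit strip length (W/m)."""
        x = [math.log(t) for t in times_s]
        n = len(x)
        mx = sum(x) / n
        my = sum(delta_T) / n
        slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, delta_T))
                 / sum((xi - mx) ** 2 for xi in x))
        return q_per_length / (4.0 * math.pi * slope)
    ```

    Fitting the slope in ln(t) rather than the full solution is what makes the data acquisition fast and simple enough for a portable bridge.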

  18. Direct measurement of thermal conductivity in solid iron at planetary core conditions

    NASA Astrophysics Data System (ADS)

    Konôpková, Zuzana; McWilliams, R. Stewart; Gómez-Pérez, Natalia; Goncharov, Alexander F.

    2016-06-01

    The conduction of heat through minerals and melts at extreme pressures and temperatures is of central importance to the evolution and dynamics of planets. In the cooling Earth’s core, the thermal conductivity of iron alloys defines the adiabatic heat flux and therefore the thermal and compositional energy available to support the production of Earth’s magnetic field via dynamo action. Attempts to describe thermal transport in Earth’s core have been problematic, with predictions of high thermal conductivity at odds with traditional geophysical models and direct evidence for a primordial magnetic field in the rock record. Measurements of core heat transport are needed to resolve this difference. Here we present direct measurements of the thermal conductivity of solid iron at pressure and temperature conditions relevant to the cores of Mercury-sized to Earth-sized planets, using a dynamically laser-heated diamond-anvil cell. Our measurements place the thermal conductivity of Earth’s core near the low end of previous estimates, at 18-44 watts per metre per kelvin. The result is in agreement with palaeomagnetic measurements indicating that Earth’s geodynamo has persisted since the beginning of Earth’s history, and allows for a solid inner core as old as the dynamo.
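    The reported range of 18-44 W m⁻¹ K⁻¹ fixes the conductive heat flow down the core adiabat via Fourier's law integrated over the core-mantle boundary, Q = k (dT/dr) 4πr². A rough back-of-the-envelope sketch, assuming a typical adiabatic gradient (~0.9 K/km) and CMB radius (3480 km) that are literature values and not taken from this record:

    ```python
    import math

    def adiabatic_cmb_heat_flow_TW(k_W_mK, dT_dr_K_per_km=0.9, r_cmb_km=3480.0):
        """Conductive heat flow down the core adiabat at the core-mantle boundary.

        Fourier's law over the spherical CMB surface:
            Q = k * (dT/dr)_adiabat * 4*pi*r^2
        The adiabatic gradient and CMB radius defaults are assumed typical
        values for illustration only.
        """
        dT_dr = dT_dr_K_per_km / 1000.0                 # K/m
        area = 4.0 * math.pi * (r_cmb_km * 1e3) ** 2    # m^2
        return k_W_mK * dT_dr * area / 1e12             # terawatts

    # The measured range k = 18-44 W/m/K brackets the adiabatic heat flow:
    low = adiabatic_cmb_heat_flow_TW(18.0)    # roughly 2-3 TW
    high = adiabatic_cmb_heat_flow_TW(44.0)   # roughly 6 TW
    ```

    A low conductive heat flow of this order leaves more of the core's heat budget available to drive convection, which is why the measurement supports a long-lived geodynamo.
    
    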

  19. New portable instrument for the measurement of thermal conductivity in gas process conditions.

    PubMed

    Queirós, C S G P; Lourenço, M J V; Vieira, S I; Serra, J M; Nieto de Castro, C A

    2016-06-01

    The development of high-temperature gas sensors for monitoring and determining the thermophysical properties of complex process mixtures at high temperatures faces several problems related to materials compatibility, the sensitivity of the active sensing parts, and lifetime. Ceramic/thin-metal-film sensors, previously developed for determining the thermal conductivity of molten materials up to 1200 °C, were redesigned, constructed, and applied as thermal conductivity sensors. Platinum resistance thermometers for temperature measurement were also developed, constructed, and tested using the same technology. A new data acquisition system for the thermal conductivity sensors, based on a linearization of the transient hot-strip model and including a portable electronic bridge for measuring thermal conductivity in gas process conditions, was also developed. The equipment is capable of measuring the thermal conductivity of gaseous phases with an accuracy of 2%-5% up to 840 °C (95% confidence level). Sensors operating up to 1200 °C, at the core of combustion chambers, will be developed in the near future.