Science.gov

Sample records for conditions measurement methodology

  1. Methodological Quality of National Guidelines for Pediatric Inpatient Conditions

    PubMed Central

    Hester, Gabrielle; Nelson, Katherine; Mahant, Sanjay; Eresuma, Emily; Keren, Ron; Srivastava, Rajendu

    2014-01-01

    Background Guidelines help inform standardization of care for quality improvement (QI). The Pediatric Research in Inpatient Settings (PRIS) network published a prioritization list of inpatient conditions with high prevalence, cost, and variation in resource utilization across children’s hospitals. The methodological quality of guidelines for priority conditions is unknown. Objective To rate the methodological quality of national guidelines for 20 priority pediatric inpatient conditions. Design We searched sources including PubMed for national guidelines published 2002–2012. Guidelines specific to one organism, test or treatment, or institution were excluded. Guidelines were rated by two raters using a validated tool (AGREE II) with an overall rating on a 7-point scale (7 = highest). Inter-rater reliability was measured with a weighted kappa coefficient. Results 17 guidelines met inclusion criteria for 13 conditions; 7 conditions yielded no relevant national guidelines. The highest methodological quality guidelines were for asthma, tonsillectomy, and bronchiolitis (mean overall ratings 7, 6.5, and 6.5, respectively); the lowest were for sickle cell disease (2 guidelines) and dental caries (mean overall ratings 4, 3.5, and 3, respectively). The overall weighted kappa was 0.83 (95% confidence interval 0.78–0.87). Conclusions We identified a group of moderate to high methodological quality national guidelines for priority pediatric inpatient conditions. Hospitals should consider these guidelines to inform QI initiatives. PMID:24677729
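
    The inter-rater reliability statistic used here can be made concrete. Below is a minimal numpy sketch of linearly weighted Cohen's kappa on a 7-point ordinal scale; the rating values are invented for illustration and are not the study's data.

```python
import numpy as np

def weighted_kappa(r1, r2, n_categories, weights="linear"):
    """Cohen's kappa with linear or quadratic disagreement weights."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    # Observed agreement matrix (normalized contingency table)
    O = np.zeros((n_categories, n_categories))
    for a, b in zip(r1, r2):
        O[a, b] += 1
    O /= O.sum()
    # Expected matrix from the two raters' marginal distributions
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # Disagreement weights: 0 on the diagonal, growing with distance
    i, j = np.indices((n_categories, n_categories))
    W = np.abs(i - j) if weights == "linear" else (i - j) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

# Hypothetical overall ratings from two raters (0-6 encodes the 1-7 scale)
rater1 = [6, 5, 5, 3, 2, 1, 3, 4, 0, 2]
rater2 = [6, 5, 6, 3, 2, 2, 3, 4, 1, 2]
print(round(weighted_kappa(rater1, rater2, 7), 3))
```

    Perfect agreement yields kappa = 1; chance-level agreement yields kappa near 0.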

  2. DEVELOPMENT OF MEASUREMENT METHODOLOGY FOR EVALUATING FUGITIVE PARTICULATE EMISSIONS

    EPA Science Inventory

    A measurement methodology to evaluate fugitive particulate emissions was developed and demonstrated. The project focused on the application of the lidar (laser radar) technique under field conditions, but in circumstances that simplified and controlled the variables of the genera...

  3. Structural health monitoring methodology for aircraft condition-based maintenance

    NASA Astrophysics Data System (ADS)

    Saniger, Jordi; Reithler, Livier; Guedra-Degeorges, Didier; Takeda, Nobuo; Dupuis, Jean Pierre

    2001-06-01

    Reducing maintenance costs while keeping a constant level of safety is a major issue for air forces and airlines. The long-term perspective is to implement condition-based maintenance to guarantee a constant safety level while decreasing maintenance costs. To this end, the development of a generalized Structural Health Monitoring System (SHMS) is needed. The objective of such a system is to localize damage and assess its severity, with enough accuracy to allow low-cost corrective actions. The present paper describes an SHMS based on acoustic emission technology. This choice was driven by its reliability and wide use in the aerospace industry. The described SHMS uses a new learning methodology which relies on the generation of artificial acoustic emission events on the structure and an acoustic emission sensor network. The calibrated acoustic emission events picked up by the sensors constitute the knowledge set that the system relies on. With this methodology, the anisotropy of composite structures is taken into account, thus avoiding the major cause of error in classical localization methods. Moreover, the system adapts to different structures, as it does not rely on any particular model but on measured data. The acquired data are processed, and each event's location and corrected amplitude are computed. The methodology has been demonstrated, and experimental tests on elementary samples showed an accuracy of 1 cm.
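
    The learning methodology described above can be reduced, in its simplest form, to a nearest-neighbour lookup against the calibrated knowledge set. The sketch below assumes four sensors and invented arrival-time signatures, since the actual network layout and matching metric are not given in the abstract.

```python
import numpy as np

# Knowledge set: arrival-time differences (arbitrary units) at 4 sensors for
# calibrated artificial AE events generated at known (x, y) positions.
# All values are illustrative, not measured data.
known_positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1]])
known_signatures = np.array([
    [0.0, 1.2, 1.1, 1.6],
    [1.2, 0.0, 1.6, 1.1],
    [1.1, 1.6, 0.0, 1.2],
    [1.6, 1.1, 1.2, 0.0],
])

def locate(signature):
    """Return the calibrated position whose signature is closest (L2 norm)."""
    dists = np.linalg.norm(known_signatures - signature, axis=1)
    return known_positions[np.argmin(dists)]

measured = np.array([0.1, 1.1, 1.2, 1.5])   # resembles the first stored event
print(locate(measured))
```

    Because the knowledge set is built from events measured on the real structure, wave-speed anisotropy is absorbed into the stored signatures rather than modeled explicitly.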

  4. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
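
    The feature-extraction step can be sketched without a wavelet library: a numpy-only Haar decomposition (the simplest base wavelet; the paper's actual base wavelet and level are tuning choices) turns an AE segment into relative per-band energies suitable for clustering. The signal here is synthetic.

```python
import numpy as np

def haar_step(x):
    """One level of the Haar DWT: approximation and detail coefficients."""
    x = x[: len(x) // 2 * 2]                 # drop an odd trailing sample
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    return a, d

def wavelet_energy_features(signal, levels=3):
    """Relative energy in each detail band plus the final approximation."""
    energies = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_step(a)
        energies.append(np.sum(d ** 2))
    energies.append(np.sum(a ** 2))
    total = sum(energies)
    return [e / total for e in energies]

# A noisy tone as a crude stand-in for one AE segment
rng = np.random.default_rng(0)
segment = np.sin(np.linspace(0, 40 * np.pi, 1024)) + 0.5 * rng.standard_normal(1024)
features = wavelet_energy_features(segment, levels=3)
print([round(f, 3) for f in features])
```

    Feature vectors like this one, extracted from 'sharp' and 'dull' segments, are what the clustering step would then separate.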

  5. PIMM: A Performance Improvement Measurement Methodology

    SciTech Connect

    Not Available

    1994-05-15

    This report presents a Performance Improvement Measurement Methodology (PIMM) for measuring and reporting the mission performance for organizational elements of the U.S. Department of Energy to comply with the Chief Financial Officer's Act (CFOA) of 1990 and the Government Performance and Results Act (GPRA) of 1993. The PIMM is illustrated by application to the Morgantown Energy Technology Center (METC), a Research, Development and Demonstration (RD&D) field center of the Office of Fossil Energy, along with limited applications to the Strategic Petroleum Reserve Office and the Office of Fossil Energy. METC is now implementing the first year of a pilot project under GPRA using the PIMM. The PIMM process is applicable to all elements of the Department; organizations may customize measurements to their specific missions. The PIMM has four aspects: (1) an achievement measurement that applies to any organizational element, (2) key indicators that apply to institutional elements, (3) a risk reduction measurement that applies to all RD&D elements and to elements with long-term activities leading to risk-associated outcomes, and (4) a cost performance evaluation. Key Indicators show how close the institution is to attaining long range goals. Risk reduction analysis is especially relevant to RD&D. Product risk is defined as the chance that the product of new technology will not meet the requirements of the customer. RD&D is conducted to reduce technology risks to acceptable levels. The PIMM provides a profile to track risk reduction as RD&D proceeds. Cost performance evaluations provide a measurement of the expected costs of outcomes relative to their actual costs.

  6. Measuring Program Outcomes: Using Retrospective Pretest Methodology.

    ERIC Educational Resources Information Center

    Pratt, Clara C.; McGuigan, William M.; Katsev, Aphra R.

    2000-01-01

    Used longitudinal data from 307 mothers of firstborn infants participating in a home-visitation, child abuse prevention program in a retrospective pretest methodology. Results show that when response shift bias was present, the retrospective pretest methodology produced a more legitimate assessment of program outcomes than did the traditional…
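
    Response shift bias can be made concrete with invented ratings: if participants overrate themselves before a program and, once they understand the construct, re-rate that initial level lower afterwards, the traditional pre/post gain understates the outcome relative to the retrospective pretest gain.

```python
import numpy as np

# Hypothetical 1-5 self-ratings of parenting knowledge for 8 participants.
pre_traditional = np.array([4, 4, 3, 5, 4, 3, 4, 4])    # asked before the program
pre_retrospective = np.array([2, 3, 2, 3, 2, 2, 3, 2])  # "then" rating, asked after
post = np.array([4, 5, 4, 5, 4, 4, 4, 4])

gain_traditional = (post - pre_traditional).mean()
gain_retrospective = (post - pre_retrospective).mean()
print(gain_traditional, gain_retrospective)
```

    With a response shift, the retrospective design recovers the larger, arguably more legitimate, program effect.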

  7. Relative Hazard and Risk Measure Calculation Methodology

    SciTech Connect

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.

    2004-03-20

    The relative hazard (RH) and risk measure (RM) methodology and computer code is a health risk-based tool designed to allow managers and environmental decision makers the opportunity to readily consider human health risks (i.e., public and worker risks) in their screening-level analysis of alternative cleanup strategies. Environmental management decisions involve consideration of costs, schedules, regulatory requirements, health hazards, and risks. The RH-RM tool is a risk-based environmental management decision tool that allows managers the ability to predict and track health hazards and risks over time as they change in relation to mitigation and cleanup actions. Analysis of the hazards and risks associated with planned mitigation and cleanup actions provides a baseline against which alternative strategies can be compared. This new tool allows managers to explore “what if scenarios,” to better understand the impact of alternative mitigation and cleanup actions (i.e., alternatives to the planned actions) on health hazards and risks. This new tool allows managers to screen alternatives on the basis of human health risk and compare the results with cost and other factors pertinent to the decision. Once an alternative or a narrow set of alternatives are selected, it will then be more cost-effective to perform the detailed risk analysis necessary for programmatic and regulatory acceptance of the selected alternative. The RH-RM code has been integrated into the PNNL developed Framework for Risk Analysis In Multimedia Environmental Systems (FRAMES) to allow the input and output data of the RH-RM code to be readily shared with the more comprehensive risk analysis models, such as the PNNL developed Multimedia Environmental Pollutant Assessment System (MEPAS) model.

  8. Methodology and Process for Condition Assessment at Existing Hydropower Plants

    SciTech Connect

    Zhang, Qin Fen; Smith, Brennan T; Cones, Marvin; March, Patrick; Dham, Rajesh; Spray, Michael

    2012-01-01

    The Hydropower Advancement Project (HAP) was initiated by the U.S. Department of Energy Office of Energy Efficiency and Renewable Energy to develop and implement a systematic process with a standard methodology to identify opportunities for performance improvement at existing hydropower facilities and to predict and trend the overall condition and improvement opportunity within the U.S. hydropower fleet. The concept of performance for the HAP focuses on water use efficiency: how well a plant or individual unit converts potential energy to electrical energy over a long-term averaging period of a year or more. Performance improvement involves not only optimization of plant dispatch and scheduling but also enhancement of efficiency and availability through advanced technology and asset upgrades, and thus requires inspection and condition assessment of equipment, control systems, and other generating assets. This paper discusses the standard methodology and process for condition assessment of approximately 50 nationwide facilities, including sampling techniques to ensure valid expansion of the 50 assessment results to the entire hydropower fleet. The application and refining process and the results from three demonstration assessments are also presented in this paper.

  9. Quantum Measurement and Initial Conditions

    NASA Astrophysics Data System (ADS)

    Stoica, Ovidiu Cristinel

    2016-03-01

    Quantum measurement finds the observed system in a collapsed state, rather than in the state predicted by the Schrödinger equation. Yet there is a relatively widespread opinion that the wavefunction collapse can be explained by unitary evolution (for instance in the decoherence approach, if we take into account the environment). In this article, a mathematical result is proven which severely restricts the initial conditions for which measurements have definite outcomes, if pure unitary evolution is assumed. This no-go theorem remains true even if we take the environment into account. The result does not forbid a unitary description of the measurement process; it only shows that such a description is possible only for very restricted initial conditions. The existence of such restrictions on the initial conditions can be understood in the four-dimensional block universe perspective, as a requirement of global self-consistency of the solutions of the Schrödinger equation.

  10. Methodological Challenges in Measuring Child Maltreatment

    ERIC Educational Resources Information Center

    Fallon, Barbara; Trocme, Nico; Fluke, John; MacLaurin, Bruce; Tonmyr, Lil; Yuan, Ying-Ying

    2010-01-01

    Objective: This article reviewed the different surveillance systems used to monitor the extent of reported child maltreatment in North America. Methods: Key measurement and definitional differences between the surveillance systems are detailed, along with their potential impact on measured victimization rates. The infrastructure…

  11. Wastewater Sampling Methodologies and Flow Measurement Techniques.

    ERIC Educational Resources Information Center

    Harris, Daniel J.; Keffer, William J.

    This document provides a ready source of information about water/wastewater sampling activities using various commercial sampling and flow measurement devices. The report consolidates the findings and summarizes the activities, experiences, sampling methods, and field measurement techniques conducted by the Environmental Protection Agency (EPA),…

  12. A methodological review of resilience measurement scales

    PubMed Central

    2011-01-01

    Background The evaluation of interventions and policies designed to promote resilience, and research to understand the determinants and associations, require reliable and valid measures to ensure data quality. This paper systematically reviews the psychometric rigour of resilience measurement scales developed for use in general and clinical populations. Methods Eight electronic abstract databases and the internet were searched and reference lists of all identified papers were hand searched. The focus was to identify peer reviewed journal articles where resilience was a key focus and/or is assessed. Two authors independently extracted data and performed a quality assessment of the scale psychometric properties. Results Nineteen resilience measures were reviewed; four of these were refinements of the original measure. All the measures had some missing information regarding the psychometric properties. Overall, the Connor-Davidson Resilience Scale, the Resilience Scale for Adults and the Brief Resilience Scale received the best psychometric ratings. The conceptual and theoretical adequacy of a number of the scales was questionable. Conclusion We found no current 'gold standard' amongst 15 measures of resilience. A number of the scales are in the early stages of development, and all require further validation work. Given increasing interest in resilience from major international funders, key policy makers and practice, researchers are urged to report relevant validation statistics when using the measures. PMID:21294858

  13. Experimental comparison of exchange bias measurement methodologies

    SciTech Connect

    Hovorka, Ondrej; Berger, Andreas; Friedman, Gary

    2007-05-01

    Measurements performed on all-ferromagnetic bilayer systems and supported by model calculation results are used to compare different exchange bias characterization methods. We demonstrate that the accuracy of the conventional two-point technique based on measuring the sum of the coercive fields depends on the symmetry properties of hysteresis loops. On the other hand, the recently proposed center of mass method yields results independent of the hysteresis loop type and coincides with the two-point measurement only if the loops are symmetric. Our experimental and simulation results clearly demonstrate a strong correlation between loop asymmetry and the difference between these methods.
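
    The two characterization methods being compared can be sketched on a synthetic loop. Assuming symmetric tanh-shaped branches (the case where, per the abstract, the two methods should coincide), the two-point estimate from the coercive fields and a center-of-mass estimate from the switching-field distribution dM/dH are:

```python
import numpy as np

H = np.linspace(-5, 5, 2001)
w, H1, H2 = 0.5, -0.8, -1.6            # symmetric branches, biased to H < 0
M_asc = np.tanh((H - H1) / w)           # branch recorded while sweeping up
M_desc = np.tanh((H - H2) / w)          # branch recorded while sweeping down

def coercive_field(Hs, Ms):
    """Field at which a branch (stored as a function of H) crosses M = 0."""
    i = np.argmax(Ms >= 0)              # first index with M >= 0
    return np.interp(0.0, Ms[i - 1:i + 1], Hs[i - 1:i + 1])

def center_of_mass(Hs, Ms):
    """First moment of the switching-field distribution dM/dH."""
    dMdH = np.gradient(Ms, Hs)
    return np.sum(Hs * dMdH) / np.sum(dMdH)

two_point = 0.5 * (coercive_field(H, M_asc) + coercive_field(H, M_desc))
com = 0.5 * (center_of_mass(H, M_asc) + center_of_mass(H, M_desc))
print(round(two_point, 3), round(com, 3))
```

    For these symmetric branches both estimates land at the midpoint of the two coercive fields; introducing an asymmetric branch shape would pull the two estimates apart, which is the discrepancy the paper quantifies.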

  14. New seeding methodology for gas concentration measurements.

    PubMed

    Chan, Qing N; Medwell, Paul R; Dally, Bassam B; Alwahabi, Zeyad T; Nathan, Graham J

    2012-07-01

    This paper presents the first demonstration of the pulsed laser ablation technique to seed a laminar non-reacting gaseous jet at atmospheric pressure. The focused second harmonic from a pulsed Nd:YAG laser is used to ablate a neutral indium rod at atmospheric pressure and temperature. The ablation products generated with the new seeding method are used to seed the jet, as a marker of the scalar field. The neutral indium atoms so generated are found to be stable and survive a convection time of the order of tens of seconds before entering the interrogation region. Planar laser-induced fluorescence (PLIF) measurements with indium and laser nephelometry measurements with the ablation products are both reported. The resulting average and root mean square (RMS) of the measurements are found to agree reasonably well, although some differences are found. The results show that the pulsed laser ablation method has potential to provide scalar measurements for mixing studies. PMID:22710315

  15. Diffusions conditioned on occupation measures

    NASA Astrophysics Data System (ADS)

    Angeletti, Florian; Touchette, Hugo

    2016-02-01

    A Markov process fluctuating away from its typical behavior can be represented in the long-time limit by another Markov process, called the effective or driven process, having the same stationary states as the original process conditioned on the fluctuation observed. We construct here this driven process for diffusions spending an atypical fraction of their evolution in some region of state space, corresponding mathematically to stochastic differential equations conditioned on occupation measures. As an illustration, we consider the Langevin equation conditioned on staying for a fraction of time in different intervals of the real line, including the positive half-line which leads to a generalization of the Brownian meander problem. Other applications related to quasi-stationary distributions, metastable states, noisy chemical reactions, queues, and random walks are discussed.
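
    Although the paper constructs the driven process analytically, the conditioning observable itself is easy to simulate: an Euler-Maruyama sketch of an overdamped Langevin equation recording the fraction of time spent on the positive half-line.

```python
import numpy as np

# Overdamped Langevin dynamics dX = -X dt + sqrt(2D) dW (quadratic potential),
# integrated with Euler-Maruyama; we record the empirical occupation fraction
# of the positive half-line, the kind of observable conditioned on above.
rng = np.random.default_rng(3)
D, dt, n_steps = 1.0, 1e-3, 200_000

kicks = np.sqrt(2 * D * dt) * rng.standard_normal(n_steps)
x, time_positive = 0.0, 0
for k in range(n_steps):
    x += -x * dt + kicks[k]
    time_positive += x > 0

occupation_fraction = time_positive / n_steps
print(round(occupation_fraction, 3))   # near 0.5 by symmetry
```

    Conditioning on an atypical fraction (say, 0.9 of the time positive) is exponentially rare in such simulations, which is why the effective driven process, realizing the conditioning as typical behavior, is valuable.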

  16. Toward a new methodology for measuring the threshold Shields number

    NASA Astrophysics Data System (ADS)

    Rousseau, Gauthier; Dhont, Blaise; Ancey, Christophe

    2016-04-01

    A number of bedload transport equations involve the threshold Shields number (corresponding to the threshold of incipient motion for particles resting on the streambed). Different methods have been developed for determining this threshold Shields number; they usually assume that the initial streambed is plane prior to sediment transport. Yet, there are many instances in real-world scenarios, in which the initial streambed is not free of bed forms. We are interested in developing a new methodology for determining the threshold of incipient motion in gravel-bed streams in which smooth bed forms (e.g., anti-dunes) develop. Experiments were conducted in a 10-cm wide, 2.5-m long flume, whose initial inclination was 3%. Flows were supercritical and fully turbulent. The flume was supplied with water and sediment at fixed rates. As bed forms developed and migrated, and sediment transport rates exhibited wide fluctuations, measurements had to be taken over long times (typically 10 hr). Using a high-speed camera, we recorded the instantaneous bed load transport rate at the outlet of the flume by taking top-view images. In parallel, we measured the evolution of the bed slope, water depth, and shear stress by filming through a lateral window of the flume. These measurements allowed for the estimation of the space and time-averaged slope, from which we deduced the space and time-averaged Shields number under incipient bed load transport conditions. In our experiments, the threshold Shields number was strongly dependent on streambed morphology. Experiments are under way to determine whether taking the space and time average of incipient motion experiments leads to a more robust definition of the threshold Shields number. If so, this new methodology will perform better than existing approaches at measuring the threshold Shields number.
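
    The space- and time-averaged Shields number itself follows from bulk quantities. Below is a minimal sketch with illustrative values loosely inspired by the flume configuration above; the depth-slope product for bed shear stress assumes a wide-channel approximation, which is a simplification for a 10-cm flume.

```python
# Space- and time-averaged Shields number from bulk flow quantities.
# All numbers are illustrative; tau = rho*g*h*S assumes a wide channel.
rho = 1000.0      # water density, kg/m^3
rho_s = 2650.0    # sediment density, kg/m^3
g = 9.81          # gravitational acceleration, m/s^2
h = 0.02          # time-averaged flow depth, m
S = 0.03          # time-averaged bed slope (3%)
d50 = 0.006       # median grain diameter, m

tau = rho * g * h * S                        # bed shear stress, Pa
shields = tau / ((rho_s - rho) * g * d50)    # dimensionless Shields number
print(round(tau, 3), round(shields, 4))
```

    In the methodology above, h and S are the space- and time-averaged measurements taken through the flume window, so the resulting Shields number is itself a space-time average.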

  17. Evaluation of UT Wall Thickness Measurements and Measurement Methodology

    SciTech Connect

    Weier, Dennis R.; Pardini, Allan F.

    2007-10-01

    CH2M HILL has requested that PNNL examine the ultrasonic methodology utilized in the inspection of the Hanford double shell waste tanks. Specifically, PNNL is to evaluate the UT process variability and capability to detect changes in wall thickness and to document the UT operator's techniques and methodology in the determination of the reported minimum and average UT data and how it compares to the raw (unanalyzed) UT data.

  18. Methodological aspects of EEG and body dynamics measurements during motion

    PubMed Central

    Reis, Pedro M. R.; Hebenstreit, Felix; Gabsteiger, Florian; von Tscharner, Vinzenz; Lochmann, Matthias

    2014-01-01

    EEG involves the recording, analysis, and interpretation of voltages recorded on the human scalp which originate from brain gray matter. EEG is one of the most popular methods of studying and understanding the processes that underlie behavior. This is because EEG is relatively cheap, easy to wear, lightweight, and has high temporal resolution. In terms of behavior, this encompasses actions, such as movements, that are performed in response to the environment. However, there are methodological difficulties which can occur when recording EEG during movement, such as movement artifacts. Thus, most studies of the human brain have examined activations during static conditions. This article attempts to compile and describe relevant methodological solutions that emerged in order to measure body and brain dynamics during motion. These descriptions cover suggestions on how to avoid and reduce motion artifacts; hardware, software, and techniques for synchronously recording EEG, EMG, kinematics, kinetics, and eye movements during motion. Additionally, we present various recording systems, EEG electrodes, caps, and methods for determining real/custom electrode positions. We conclude that it is possible to record and analyze synchronized brain and body dynamics related to movement or exercise tasks. PMID:24715858

  20. DEVELOPMENT OF PETROLEUM RESIDUA SOLUBILITY MEASUREMENT METHODOLOGY

    SciTech Connect

    Per Redelius

    2006-03-01

    In the present study an existing spectrophotometry system was upgraded to provide high-resolution ultraviolet (UV), visible (Vis), and near infrared (NIR) analyses of test solutions to measure the relative solubilities of petroleum residua dissolved in eighteen test solvents. Test solutions were prepared by dissolving ten percent petroleum residue in a given test solvent, agitating the mixture, followed by filtration and/or centrifugation to remove insoluble materials. These solutions were finally diluted with a good solvent resulting in a supernatant solution that was analyzed by spectrophotometry to quantify the degree of dissolution of a particular residue in the suite of test solvents that were selected. Results obtained from this approach were compared with spot-test data (to be discussed) obtained from the cosponsor.
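
    The quantification step rests on the Beer-Lambert law, A = εlc: at a fixed wavelength and path length, absorbance scales with dissolved concentration. Below is a hypothetical sketch of turning supernatant absorbances into relative solubilities; the solvent names and values are invented.

```python
# Relative degree of dissolution from UV-Vis absorbance. By the Beer-Lambert
# law A = epsilon * l * c, so at a fixed wavelength and path length the
# absorbance scales with dissolved concentration. All values are invented.
A_reference = 0.82   # residue fully dissolved in a good solvent
test_solvents = {"toluene": 0.79, "n-heptane": 0.08, "THF": 0.80}

for solvent, A in test_solvents.items():
    print(f"{solvent}: {100 * A / A_reference:.0f}% dissolved")
```

    Ranking the test solvents by this ratio is what produces a relative solubility profile for a given residue.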

  1. The Piston Compressor: The Methodology of the Real-Time Condition Monitoring

    NASA Astrophysics Data System (ADS)

    Naumenko, A. P.; Kostyukov, V. N.

    2012-05-01

    The methodology of diagnostic signal processing and a function chart of the monitoring system are considered in the article. The methodology of monitoring and diagnosing is based on measurement of the parameters of indirect processes (vibroacoustic oscillations), so no more than five sensors are installed on each cylinder; measurement of direct structural and thermodynamic parameters is envisioned as well. The structure and operating principle of the decision-making expert system are given. The algorithm of the automatic expert system includes calculating diagnostic attribute values, comparing them against their normative values, forming sets of diagnostic attributes that correspond to individual malfunction classes, and generating expert system messages. The scheme of a real-time condition monitoring system for piston compressors is considered. The system has a serial-parallel structure of information-measuring equipment, which allows the vibroacoustic signal to be measured for condition monitoring of reciprocating compressors and their operating modes. In addition, the system can measure parameters of other physical processes and use them for monitoring and diagnosis, for example the pressure in clearance volumes (the indicator diagram), the inlet and discharge pressure of each cylinder, inlet and delivery gas temperatures, valve temperatures, rod position, leakage through the compression packing, and others.
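
    The expert-system step described (attribute values compared against normative values, with attribute sets mapped to malfunction classes) can be sketched as a simple lookup; all attribute names, limits, and classes below are illustrative assumptions, not the system's actual rule base.

```python
# Minimal sketch of a rule-based diagnostic step: vibroacoustic attributes
# exceeding normative limits select a malfunction class. Names and limits
# are invented for illustration.
NORMATIVE_LIMITS = {"rms": 2.0, "kurtosis": 4.0, "crest_factor": 5.0}
MALFUNCTION_CLASSES = {
    frozenset(["kurtosis"]): "valve knock suspected",
    frozenset(["rms", "crest_factor"]): "compression packing leakage suspected",
    frozenset(): "normal operation",
}

def diagnose(attributes):
    """Map the set of exceeded attributes to a malfunction-class message."""
    exceeded = frozenset(k for k, v in attributes.items()
                         if v > NORMATIVE_LIMITS[k])
    return MALFUNCTION_CLASSES.get(exceeded, "unclassified condition")

print(diagnose({"rms": 1.1, "kurtosis": 6.2, "crest_factor": 4.0}))
```

    A production system would derive the attribute values from the vibroacoustic signal itself and carry many more rules, but the classify-by-exceeded-attributes pattern is the same.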

  2. Measuring the Quality of Publications: New Methodology and Case Study.

    ERIC Educational Resources Information Center

    Kleijnen, Jack P. C.; Van Groenendaal, Willem

    2000-01-01

    Proposes a methodology using citations to measure the quality of journals, proceedings, and book publishers. Explains the use of statistical sampling, bootstrapping, and classification that results in ranked lists of journals, proceedings, and publishers, as evidenced in a case study of the information systems field. (Author/LRW)
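
    The bootstrapping step can be illustrated on a toy citation sample: resampled means give a confidence interval for a journal's mean citations per article. All counts below are invented.

```python
import numpy as np

# Citations per article for a hypothetical journal sample
citations = np.array([0, 1, 1, 2, 3, 3, 4, 5, 8, 12, 15, 30])

rng = np.random.default_rng(42)
boot_means = np.array([
    rng.choice(citations, size=citations.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean {citations.mean():.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

    Because citation counts are heavily skewed, the bootstrap interval is a safer basis for ranking outlets than a normal-theory interval around the mean.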

  3. Methodological considerations for measuring spontaneous physical activity in rodents

    PubMed Central

    Perez-Leighton, Claudio E.; Billington, Charles J.; Kotz, Catherine M.

    2014-01-01

    When exploring biological determinants of spontaneous physical activity (SPA), it is critical to consider whether methodological factors differentially affect rodents and the measured SPA. We determined whether acclimation time, sensory stimulation, vendor, or chamber size affected measures in rodents with varying propensity for SPA. We used principal component analysis to determine which SPA components (ambulatory and vertical counts, time in SPA, and distance traveled) best described the variability in SPA measurements. We compared radiotelemetry and infrared photobeams used to measure SPA and exploratory activity. Acclimation time, sensory stimulation, vendor, and chamber size independently influenced SPA, and the effect was moderated by the propensity for SPA. A 24-h acclimation period prior to SPA measurement was sufficient for habituation. Principal component analysis showed that ambulatory and vertical measurements of SPA describe different dimensions of the rodent's SPA behavior. Smaller testing chambers and a sensory attenuation cubicle around the chamber reduced SPA. SPA varies between rodents purchased from different vendors. Radiotelemetry and infrared photobeams differ in their sensitivity to detect phenotypic differences in SPA and exploratory activity. These data highlight methodological considerations in rodent SPA measurement and a need to standardize SPA methodology. PMID:24598463
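
    The principal component analysis step can be sketched with numpy alone: standardize the SPA components, then read variance explained off the singular values. The data below are synthetic, built so that ambulatory counts, time in SPA, and distance traveled are correlated while vertical counts vary independently.

```python
import numpy as np

# Rows: animals; columns: SPA components (ambulatory counts, vertical
# counts, time in SPA, distance traveled). Values are illustrative.
rng = np.random.default_rng(1)
ambulatory = rng.normal(1000, 200, 30)
X = np.column_stack([
    ambulatory,
    rng.normal(150, 60, 30),                      # vertical counts (independent)
    ambulatory * 0.01 + rng.normal(0, 0.5, 30),   # time in SPA (correlated)
    ambulatory * 0.8 + rng.normal(0, 50, 30),     # distance traveled (correlated)
])

Z = (X - X.mean(axis=0)) / X.std(axis=0)          # standardize each component
_, s, _ = np.linalg.svd(Z, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)               # variance explained per PC
print(np.round(explained, 3))
```

    The horizontal components load together on the first principal component, while the vertical component carries its own dimension, mirroring the paper's finding that ambulatory and vertical measurements describe different dimensions of SPA behavior.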

  4. Holdup measurements under realistic conditions

    SciTech Connect

    Sprinkel, J.K. Jr.; Marshall, R.; Russo, P.A.; Siebelist, R.

    1997-11-01

    This paper reviews the documentation of the precision and bias of holdup (residual nuclear material remaining in processing equipment) measurements and presents previously unreported results. Precision and bias results for holdup measurements are reported from training seminars with simulated holdup, which represent the best possible results, and compared to actual plutonium processing facility measurements. Holdup measurements for plutonium and uranium processing plants are also compared to reference values. Recommendations for measuring holdup are provided for highly enriched uranium facilities and for low enriched uranium facilities. The random error component of holdup measurements is smaller than the systematic error component. The most likely source of measurement error is incorrect assumptions about the measurement, such as background, measurement geometry, or signal attenuation. Measurement precision on the order of 10% can be achieved with some difficulty. The bias of poor-quality holdup measurements can also be reduced. However, for most facilities, holdup measurement errors have no significant impact on inventory difference, sigma, or safety (criticality, radiation, or environmental); therefore, it is difficult to justify the allocation of more resources to improving holdup measurements. 25 refs., 10 tabs.

  5. National working conditions surveys in Latin America: comparison of methodological characteristics

    PubMed Central

    Merino-Salazar, Pamela; Artazcoz, Lucía; Campos-Serna, Javier; Gimeno, David; Benavides, Fernando G.

    2015-01-01

    Background: High-quality and comparable data to monitor working conditions and health in Latin America are not currently available. In 2007, multiple Latin American countries started implementing national working conditions surveys. However, little is known about their methodological characteristics. Objective: To identify commonalities and differences in the methodologies of working conditions surveys (WCSs) conducted in Latin America through 2013. Methods: The study critically examined WCSs in Latin America between 2007 and 2013. Sampling design, data collection, and questionnaire content were compared. Results: Two types of surveys were identified: (1) surveys covering the entire working population and administered at the respondent's home and (2) surveys administered at the workplace. There was considerable overlap in the topics covered by the dimensions of employment and working conditions measured, but less overlap in terms of health outcomes, prevention resources, and activities. Conclusions: Although WCSs from Latin America are similar, there was heterogeneity across surveyed populations and location of the interview. Reducing differences in surveys between countries will increase comparability and allow for a more comprehensive understanding of occupational health in the region. PMID:26079314

  6. From lab to field conditions: a pilot study on EEG methodology in applied sports sciences.

    PubMed

    Reinecke, Kirsten; Cordes, Marjolijn; Lerch, Christiane; Koutsandréou, Flora; Schubert, Michael; Weiss, Michael; Baumeister, Jochen

    2011-12-01

    Although neurophysiological aspects have become more important in sports and exercise sciences in recent years, it has not been possible to measure cortical activity during performance outside a laboratory, due in particular to equipment limits and movement artifacts. With this pilot study we investigate whether electroencephalography (EEG) data obtained during a laboratory golf putting task differ from those obtained during a comparable putting task under field conditions. To this end, parameters of working memory (frontal Theta and parietal Alpha 2 power) were recorded under the two conditions. Statistical calculations demonstrated a significant difference only for Theta power at F4 between the two putting conditions, "field" and "laboratory". These findings support the idea that brain activity patterns obtained under laboratory conditions are comparable, but not equivalent, to those obtained under field conditions. Additionally, we were able to show that EEG seems to be a reliable tool for observing brain activity under field conditions in a golf putting task. However, considering the still existing problems of movement artifacts during EEG measurements, eligible sports and exercises are limited to those that are relatively motionless during execution. Further studies are needed to confirm these pilot results. PMID:21800184
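
    The working-memory parameters mentioned (frontal Theta and parietal Alpha 2 power) are band powers of the EEG spectrum. Below is a minimal numpy periodogram sketch on a synthetic channel; the sampling rate, band edges, and signal content are assumptions for illustration, not the study's settings.

```python
import numpy as np

fs = 250                        # EEG sampling rate in Hz (assumed)
t = np.arange(0, 30, 1 / fs)    # 30 s of signal
rng = np.random.default_rng(0)
# Synthetic channel: a 6 Hz theta component, an 11 Hz alpha component, noise
eeg = (4 * np.sin(2 * np.pi * 6 * t)
       + 2 * np.sin(2 * np.pi * 11 * t)
       + rng.standard_normal(t.size))

def band_power(x, fs, lo, hi):
    """Integrate a simple one-sided periodogram between lo and hi Hz."""
    f = np.fft.rfftfreq(x.size, d=1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * x.size)
    mask = (f >= lo) & (f <= hi)
    return np.sum(psd[mask]) * (f[1] - f[0])

theta = band_power(eeg, fs, 4, 8)      # frontal Theta band
alpha2 = band_power(eeg, fs, 10, 12)   # parietal Alpha 2 band
print(round(theta, 2), round(alpha2, 2))
```

    Comparing such band powers at a given electrode (e.g., F4) between the field and laboratory recordings is the kind of contrast the statistical tests above evaluate.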

  7. Methodology of high-resolution photography for mural condition database

    NASA Astrophysics Data System (ADS)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

    Digital documentation is one of the most useful techniques for recording the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because they can show both general views of mural paintings and detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been impediments for researchers and conservators. However, recent developments in graphic software have made the process simpler and less expensive. In this paper, we suggest a new approach to making digital heritage inventories without special instruments, based on our recent research project in the Üzümlü church in Cappadocia, Turkey. This method enables a high-resolution image database to be built at low cost, in a short time, and with limited human resources.

  8. Research Methodology: Endocrinologic Measurements in Exercise Science and Sports Medicine

    PubMed Central

    Hackney, Anthony C; Viru, Atko

    2008-01-01

    Objective: To provide background information on methodologic factors that influence and add variance to endocrine outcome measurements. Our intent is to aid and improve the quality of exercise science and sports medicine research endeavors of investigators inexperienced in endocrinology. Background: Numerous methodologic factors influence human endocrine (hormonal) measurements and, consequently, can dramatically compromise the accuracy and validity of exercise and sports medicine research. These factors can be categorized into those that are biologic and those that are procedural-analytic in nature. Recommendations: Researchers should design their studies to monitor, control, and adjust for the biologic and procedural-analytic factors discussed within this paper. By doing so, they will find less variance in their hormonal outcomes and thereby will increase the validity of their physiologic data. These actions can assist the researcher in the interpretation and understanding of endocrine data and, in turn, make their research more scientifically sound. PMID:19030142

  9. Physiological outflow boundary conditions methodology for small arteries with multiple outlets: a patient-specific hepatic artery haemodynamics case study.

    PubMed

    Aramburu, Jorge; Antón, Raúl; Bernal, Nebai; Rivas, Alejandro; Ramos, Juan Carlos; Sangro, Bruno; Bilbao, José Ignacio

    2015-04-01

    Physiological outflow boundary conditions are necessary to carry out computational fluid dynamics simulations that reliably represent the blood flow through arteries. When dealing with complex three-dimensional trees of small arteries, and therefore with multiple outlets, the robustness and speed of convergence are also important. This study derives physiological outflow boundary conditions for cases in which the physiological values at those outlets are not known (neither in vivo measurements nor literature-based values are available) and in which the tree exhibits symmetry to some extent. The inputs of the methodology are the three-dimensional domain and the flow rate waveform and the systolic and diastolic pressures at the inlet. The derived physiological outflow boundary conditions, which are a physiological pressure waveform for each outlet, are based on the results of a zero-dimensional model simulation. The methodology assumes symmetrical branching and is able to tackle the flow distribution problem when the domain outlets are at branches with a different number of upstream bifurcations. The methodology is applied to a group of patient-specific arteries in the liver. The methodology is considered to be valid because the pulsatile computational fluid dynamics simulation with the inflow flow rate waveform (input of the methodology) and the derived outflow boundary conditions lead to physiological results, that is, the resulting systolic and diastolic pressures at the inlet match the inputs of the methodology, and the flow split is also physiological. PMID:25934258
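    The flow-split assumption described above (symmetrical branching, with outlets at different numbers of upstream bifurcations) can be sketched in a few lines. In this hypothetical illustration an outlet downstream of k bifurcations receives Q_in / 2^k, and each outlet is assigned a lumped resistance consistent with the mean inlet pressure; the inflow, pressure, and branching depths are invented numbers, not the patient-specific values of the study.

```python
def outlet_flows(q_inlet, bifurcations_per_outlet):
    """Split the inlet flow assuming each bifurcation halves the flow."""
    return [q_inlet / 2 ** k for k in bifurcations_per_outlet]

def outlet_resistances(p_mean, flows):
    """Lumped outlet resistance R = P_mean / Q_outlet for each outlet."""
    return [p_mean / q for q in flows]

q_in = 8.0                   # inlet flow (illustrative units)
p_mean = 93.0                # mean inlet pressure (illustrative)
k_list = [2, 2, 3, 3, 3, 3]  # bifurcations upstream of each outlet

flows = outlet_flows(q_in, k_list)
res = outlet_resistances(p_mean, flows)
```

    With these depths the split conserves the inlet flow exactly (two outlets at 1/4 plus four at 1/8), and outlets at deeper branches receive proportionally higher resistances.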

  10. Novel CD-SEM measurement methodology for complex OPCed patterns

    NASA Astrophysics Data System (ADS)

    Lee, Hyung-Joo; Park, Won Joo; Choi, Seuk Hwan; Chung, Dong Hoon; Shin, Inkyun; Kim, Byung-Gook; Jeon, Chan-Uk; Fukaya, Hiroshi; Ogiso, Yoshiaki; Shida, Soichi; Nakamura, Takayuki

    2014-07-01

    As lithography design rules shrink, accuracy and precision of critical dimension (CD) measurements and controllability of heavily OPCed patterns are required in semiconductor production. Critical dimension scanning electron microscopes (CD-SEMs) are essential tools for confirming mask quality metrics such as CD control, CD uniformity, and CD mean to target (MTT). Repeatability and reproducibility (R and R) performance depends on the length of the region of interest (ROI), so the measured CD can easily fluctuate in extremely narrow regions of OPCed patterns. This makes it very difficult to define the MTT and uniformity of complex OPCed masks using the conventional SEM measurement approach. To overcome these difficulties, we evaluated design based metrology (DBM) using the large field of view (LFOV) of a CD-SEM. DBM can standardize measurement points and positions within the LFOV based on the inflections/jogs of OPCed patterns, and has thereby realized multi-ROI measurements of several thousand sites with an averaged CD. This new measurement technique removes local CD errors and improves the statistical treatment of the entire mask, enhancing the representativeness of global CD uniformity. With this study we confirmed the new technique to be a more reliable methodology for complex OPCed patterns than conventional technology. This paper summarizes DBM experiments with LFOV on various pattern types and compares them with current CD-SEM methods.
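    The statistical benefit of averaging several thousand ROIs, as DBM does, follows from the 1/sqrt(N) scaling of the standard error of a mean. The small simulation below illustrates that scaling only; the CD target and single-ROI noise figures are invented, not instrument data.

```python
import numpy as np

rng = np.random.default_rng(0)
true_cd = 60.0   # nm, illustrative pattern CD
sigma = 1.5      # nm, assumed single-ROI measurement noise

# 2000 repeats of a single-ROI measurement vs. a 400-ROI average.
single_roi = rng.normal(true_cd, sigma, size=2000)
multi_roi = rng.normal(true_cd, sigma, size=(2000, 400)).mean(axis=1)

# The 400-ROI average should scatter ~sqrt(400) = 20x less than one ROI.
```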

  11. Methodology for measurement in schools and kindergartens: experiences.

    PubMed

    Fojtíková, I; Navrátilová Rovenská, K

    2015-06-01

    In more than 1500 schools and preschool facilities, long-term radon measurement was carried out in the last 3 y. The negative effect of thermal retrofitting on the resulting long-term radon averages is evident. In some of the facilities, low ventilation rates and correspondingly high radon levels were found, so it was recommended to change ventilation habits. However, some of the facilities had high radon levels due to its ingress from soil gas. Technical measures should be undertaken to reduce radon exposure in this case. The paper presents the long-term experiences with the two-stage measurement methodology for investigation of radon levels in school and preschool facilities and its possible improvements. PMID:25979748

  12. Methodological Research Priorities in Palliative Care and Hospice Quality Measurement.

    PubMed

    Dy, Sydney Morss; Herr, Keela; Bernacki, Rachelle E; Kamal, Arif H; Walling, Anne M; Ersek, Mary; Norton, Sally A

    2016-02-01

    Quality measurement is a critical tool for improving palliative care and hospice, but significant research is needed to improve the application of quality indicators. We defined methodological priorities for advancing the science of quality measurement in this field based on discussions of the Technical Advisory Panel of the Measuring What Matters consensus project of the American Academy of Hospice and Palliative Medicine and Hospice and Palliative Nurses Association and a subsequent strategy meeting to better clarify research challenges, priorities, and quality measurement implementation strategies. In this article, we describe three key priorities: 1) defining the denominator(s) (or the population of interest) for palliative care quality indicators, 2) developing methods to measure quality from different data sources, and 3) conducting research to advance the development of patient/family-reported indicators. We then apply these concepts to the key quality domain of advance care planning and address relevance to implementation of indicators in improving care. Developing the science of quality measurement in these key areas of palliative care and hospice will facilitate improved quality measurement across all populations with serious illness and care for patients and families. PMID:26596877

  13. A Methodology for Measuring Strain in Power Semiconductors

    NASA Astrophysics Data System (ADS)

    Avery, Seth M.

    The objective of this work is to develop a strain measurement methodology for use in power electronics during electrical operation, so that strain models can be developed and used as the basis of an active strain controller, improving the reliability of power electronics modules. This research develops electronic speckle pattern interferometry (ESPI) into a technology capable of measuring thermal-mechanical strain in electrically active power semiconductors. ESPI is a non-contact optical technique capable of high-resolution (approx. 10 nm) surface displacement measurements. This work has developed a 3-D ESPI test stand in which simultaneously measured in- and out-of-plane components are combined to accurately determine full-field surface displacement. Two cameras are used to capture both local (interconnect-level) displacements and strains and global (device-level) displacements. Methods have been developed to enable strain measurements under larger loads while avoiding speckle decorrelation, which limits ESPI measurement of large deformations. A method of extracting strain estimates directly from unfiltered and wrapped phase maps has been developed, simplifying data analysis. Experimental noise measurements are made and used to develop optimal filtering based on model-based tracking and the determined strain noise characteristics. The experimental results of this work are strain measurements made on the surface of the leadframe of an electrically active IGBT. A model-based tracking technique has been developed to allow the optimal strain solution to be extracted from noisy displacement results, and an experimentally validated thermal-mechanical finite-element strain model has been developed. The results of this work demonstrate that in situ strain measurements in power devices are feasible. Using the procedures developed in this work, strain measurements can be completed at relevant power levels at the critical locations of strain that limit device reliability.
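    As a sketch of the ESPI data path described above, the snippet below converts an unwrapped phase map to out-of-plane displacement using the common normal-illumination sensitivity of 4*pi/lambda, and estimates strain as the numerical displacement gradient. The wavelength, pixel pitch, and synthetic uniform-strain field are assumptions for illustration, not the test-stand parameters.

```python
import numpy as np

LAMBDA = 532e-9  # m, assumed laser wavelength

def phase_to_displacement(phi):
    """Out-of-plane displacement for an unwrapped ESPI phase map."""
    return LAMBDA * phi / (4 * np.pi)

def surface_strain(u, dx):
    """Normal strain estimated as the displacement gradient du/dx."""
    return np.gradient(u, dx)

# Synthetic uniform-strain field: u(x) = eps * x over a 256-pixel line.
dx = 1e-5                 # m per pixel (assumed)
x = np.arange(256) * dx
eps_true = 5e-4           # 500 microstrain
strain = surface_strain(eps_true * x, dx)
```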

  14. Select Methodology for Validating Advanced Satellite Measurement Systems

    NASA Technical Reports Server (NTRS)

    Larar, Allen M.; Zhou, Daniel K.; Liu, Xi; Smith, William L.

    2008-01-01

    Advanced satellite sensors are tasked with improving global measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring capability, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns including satellite under-flights with well calibrated FTS sensors aboard high-altitude aircraft are an essential part of the validation task. This presentation focuses on an overview of validation methodology developed for assessment of high spectral resolution infrared systems, and includes results of preliminary studies performed to investigate the performance of the Infrared Atmospheric Sounding Interferometer (IASI) instrument aboard the MetOp-A satellite.

  15. Conditional Standard Error of Measurement in Prediction.

    ERIC Educational Resources Information Center

    Woodruff, David

    1990-01-01

    A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.…
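    The idea of conditioning on a fixed observed parallel measurement can be illustrated with synthetic data: under classical assumptions the errors of two parallel forms are independent with equal variance, so half the mean squared difference between forms, taken among examinees near a fixed score on one form, approximates the conditional error variance there. This is a hypothetical sketch of that idea, not Woodruff's estimator itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic parallel forms: identical true scores, independent errors whose
# spread shrinks with ability (a heteroscedastic SEM, invented here).
n = 100_000
true = rng.uniform(20, 80, n)
err_sd = 8 - 0.05 * true
x1 = true + rng.normal(0, err_sd)
x2 = true + rng.normal(0, err_sd)

def conditional_sem(x_fixed, x_other, level, width=1.0):
    """SEM near `level`: sqrt of half the mean squared parallel-form
    difference among examinees scoring near `level` on the fixed form."""
    near = np.abs(x_fixed - level) < width
    return float(np.sqrt(np.mean((x_other[near] - x_fixed[near]) ** 2) / 2))

sem_low = conditional_sem(x1, x2, 30.0)   # larger errors at low scores
sem_high = conditional_sem(x1, x2, 70.0)  # smaller errors at high scores
```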

  16. Measuring user experience in digital gaming: theoretical and methodological issues

    NASA Astrophysics Data System (ADS)

    Takatalo, Jari; Häkkinen, Jukka; Kaistinen, Jyrki; Nyman, Göte

    2007-01-01

    There are innumerable concepts, terms and definitions for user experience, and few of them have a solid empirical foundation. In trying to understand user experience in interactive technologies such as computer games and virtual environments, reliable and valid concepts are needed for measuring relevant user reactions and experiences. Here we present our approach to creating both theoretically and methodologically sound methods for quantifying the rich user experience in different digital environments. Our approach is based on the idea that the experience received from content presented with a specific technology is always the result of a complex psychological interpretation process, whose components should be understood. The main aim of our approach is to grasp the complex and multivariate nature of the experience and make it measurable. We present our two basic measurement frameworks, which have been developed and tested on a large data set (n=2182). The 15 measurement scales extracted from these models are applied to digital gaming with a head-mounted display and a table-top display. The results show how it is possible to map between experience, technology variables, and the background of the user (e.g., gender). This approach can help to optimize, for example, content for specific viewing devices or viewing situations.

  17. Telemetric measurement system of beehive environment conditions

    NASA Astrophysics Data System (ADS)

    Walendziuk, Wojciech; Sawicki, Aleksander

    2014-11-01

    This work presents a measurement system for beehive environmental conditions. The purpose of the device is to measure parameters such as ambient temperature, atmospheric pressure, internal temperature, humidity, and sound level. The measured values are transferred over GPRS to a MySQL database located on an external server, and a website presents the measurement data in the form of tables and graphs. The study also shows exemplary results of environmental conditions recorded in the beehive on an hourly cycle.
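    The logging pipeline described above (sensor readings pushed into a MySQL table and served to a website) can be sketched with the standard library. The schema below, and the use of sqlite3 in place of the authors' remote MySQL server, are stand-ins chosen so the example is self-contained; the field names are assumptions, not the authors' schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the remote MySQL database
conn.execute("""
    CREATE TABLE hive_log (
        ts TEXT,
        ambient_temp_c REAL,
        internal_temp_c REAL,
        humidity_pct REAL,
        pressure_hpa REAL,
        sound_level_db REAL
    )
""")

# One hourly reading, as the station would transmit over GPRS.
reading = ("2014-07-01T12:00:00", 24.3, 34.8, 61.0, 1013.2, 52.7)
conn.execute("INSERT INTO hive_log VALUES (?, ?, ?, ?, ?, ?)", reading)

rows = conn.execute("SELECT internal_temp_c FROM hive_log").fetchall()
```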

  18. Methodological considerations for measuring glucocorticoid metabolites in feathers

    PubMed Central

    Berk, Sara A.; McGettrick, Julie R.; Hansen, Warren K.; Breuner, Creagh W.

    2016-01-01

    In recent years, researchers have begun to use corticosteroid metabolites in feathers (fCORT) as a metric of stress physiology in birds. However, there remain substantial questions about how to measure fCORT most accurately. Notably, small samples contain artificially high amounts of fCORT per millimetre of feather (the small sample artefact). Furthermore, it appears that fCORT is correlated with circulating plasma corticosterone only when levels are artificially elevated by the use of corticosterone implants. Here, we used several approaches to address current methodological issues with the measurement of fCORT. First, we verified that the small sample artefact exists across species and feather types. Second, we attempted to correct for this effect by increasing the amount of methanol relative to the amount of feather during extraction. We consistently detected more fCORT per millimetre or per milligram of feather in small samples than in large samples even when we adjusted methanol:feather concentrations. We also used high-performance liquid chromatography to identify hormone metabolites present in feathers and measured the reactivity of these metabolites against the most commonly used antibody for measuring fCORT. We verified that our antibody is mainly identifying corticosterone (CORT) in feathers, but other metabolites have significant cross-reactivity. Lastly, we measured faecal glucocorticoid metabolites in house sparrows and correlated these measurements with corticosteroid metabolites deposited in concurrently grown feathers; we found no correlation between faecal glucocorticoid metabolites and fCORT. We suggest that researchers should be cautious in their interpretation of fCORT in wild birds and should seek alternative validation methods to examine species-specific relationships between environmental challenges and fCORT. PMID:27335650

  20. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  1. Innovative methodologies and technologies for thermal energy release measurement.

    NASA Astrophysics Data System (ADS)

    Marotta, Enrica; Peluso, Rosario; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Chiodini, Giovanni; Mangiacapra, Annarita; Petrillo, Zaccaria; Sansivero, Fabio; Vilardo, Giuseppe; Marfe, Barbara

    2016-04-01

    Volcanoes exchange heat, gases and other fluids between the interior of the Earth and its atmosphere, influencing processes both at the surface and above it. This work is devoted to improving knowledge of the parameters that control the anomalies in heat flux and chemical species emissions associated with the diffuse degassing processes of volcanic and hydrothermal zones. We are studying and developing innovative medium-range remote sensing technologies to measure the variation through time of heat flux and chemical emissions, in order to sharpen the definition of the activity state of a volcano and allow a better assessment of the related hazard and risk mitigation. The methodologies currently used to measure heat flux (i.e. CO2 flux or temperature gradient) are neither sufficiently efficient nor effective, and are unable to detect short- to medium-term (days to months) variation trends in the heat flux. Remote sensing of these parameters will allow measurements faster than the already accredited methods; it will therefore be both more effective and more efficient in an emergency, and can be used for quick routine monitoring. We are currently developing a method based on drone-borne IR cameras to measure the ground surface temperature which, in a purely conductive regime, is directly correlated with the shallow temperature gradient. The use of flying drones will allow rapid mapping of areas with thermal anomalies and measurement of their temperature at distances on the order of hundreds of metres. Further development of remote sensing will be pursued through the use, on flying drones, of multispectral and/or hyperspectral sensors and UV scanners, in order to detect the amount of chemical species released into the atmosphere.
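    In the purely conductive regime mentioned above, surface heat flux follows Fourier's law, q = -k dT/dz, so a surface temperature map plus a shallow temperature gradient yields a flux estimate. The thermal conductivity, depth, and temperatures below are illustrative assumptions, not survey data.

```python
def conductive_heat_flux(t_surface_c, t_depth_c, depth_m, k_w_mk):
    """Fourier's law, q = k * (T_depth - T_surface) / depth:
    heat flux in W/m^2, positive upward when depth is hotter."""
    return k_w_mk * (t_depth_c - t_surface_c) / depth_m

# Illustrative hydrothermal-ground numbers (assumed):
q = conductive_heat_flux(t_surface_c=45.0, t_depth_c=60.0,
                         depth_m=0.3, k_w_mk=1.2)
```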

  2. Optimization of Preparation Conditions for Lysozyme Nanoliposomes Using Response Surface Methodology and Evaluation of Their Stability.

    PubMed

    Wu, Zhipan; Guan, Rongfa; Lyu, Fei; Liu, Mingqi; Gao, Jianguo; Cao, Guozou

    2016-01-01

    The main purpose of this study was to optimize the preparation of lysozyme nanoliposomes using response surface methodology and to measure their stability. The stability of lysozyme nanoliposomes in simulated gastric fluid (SGF) and simulated intestinal fluid (SIF), and under varying pH, temperature, and sonication treatment times, was evaluated. The reverse-phase evaporation method is a simple, rapid, and effective approach for preparing and optimizing nanoliposomes. The optimal preparative conditions were as follows: phosphatidylcholine-to-cholesterol ratio of 3.86, lysozyme concentration of 1.96 mg/mL, magnetic stirring time of 40.61 min, and ultrasound time of 14.15 min. At the optimal point, encapsulation efficiency and particle size were found to be 75.36% ± 3.20% and 245.6 nm ± 5.2 nm, respectively. The lysozyme nanoliposomes showed adequate stability in SGF and SIF at 37 °C for 4 h, and short sonication times were sufficient to obtain nano-scaled liposomes. Under conditions of high temperature, acidity, or alkalinity, however, lysozyme nanoliposomes are unstable. PMID:27338315
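    Response surface methodology fits a low-order polynomial to measured responses and reads the optimum off the fitted surface. Below is a one-factor sketch of that step, with invented encapsulation-efficiency data peaking near the reported optimal phosphatidylcholine-to-cholesterol ratio; the study itself optimized four factors simultaneously.

```python
import numpy as np

# Invented response data: encapsulation efficiency (%) at five ratios.
x = np.array([2.0, 3.0, 3.86, 4.5, 5.5])       # PC:cholesterol ratio
y = np.array([60.1, 71.9, 75.4, 73.0, 64.2])   # response (illustrative)

c2, c1, c0 = np.polyfit(x, y, 2)   # fit y ~ c2*x^2 + c1*x + c0
x_opt = -c1 / (2 * c2)             # stationary point of the fitted parabola
```

    The same vertex computation generalizes to the stationary point of a multi-factor quadratic surface.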

  3. Measurement of steroid concentrations in brain tissue: methodological considerations.

    PubMed

    Taves, Matthew D; Ma, Chunqi; Heimovics, Sarah A; Saldanha, Colin J; Soma, Kiran K

    2011-01-01

    It is well recognized that steroids are synthesized de novo in the brain (neurosteroids). In addition, steroids circulating in the blood enter the brain. Steroids play numerous roles in the brain, such as influencing neural development, adult neuroplasticity, behavior, neuroinflammation, and neurodegenerative diseases such as Alzheimer's disease. In order to understand the regulation and functions of steroids in the brain, it is important to directly measure steroid concentrations in brain tissue. In this brief review, we discuss methods for the detection and quantification of steroids in the brain. We concisely present the major advantages and disadvantages of different technical approaches at various experimental stages: euthanasia, tissue collection, steroid extraction, steroid separation, and steroid measurement. We discuss, among other topics, the potential effects of anesthesia and saline perfusion prior to tissue collection; microdissection via Palkovits punch; solid phase extraction; chromatographic separation of steroids; and immunoassays and mass spectrometry for steroid quantification, particularly the use of mass spectrometry for "steroid profiling." Finally, we discuss the interpretation of local steroid concentrations, such as comparing steroid levels in brain tissue with those in the circulation (plasma vs. whole blood samples; total vs. free steroid levels). We also present reference values for a variety of steroids in different brain regions of adult rats. This brief review highlights some of the major methodological considerations at multiple experimental stages and provides a broad framework for designing studies that examine local steroid levels in the brain as well as other steroidogenic tissues, such as thymus, breast, and prostate. PMID:22654806

  4. Measuring persistence: A literature review focusing on methodological issues

    SciTech Connect

    Wolfe, A.K.; Brown, M.A.; Trumble, D.

    1995-03-01

    This literature review was conducted as part of a larger project to produce a handbook on the measurement of persistence. The past decade has marked the development of the concept of persistence and a growing recognition that the long-term impacts of demand-side management (DSM) programs warrant careful assessment. Although increasing attention has been paid to the topic of persistence, no clear consensus has emerged either about its definition or about the methods most appropriate for its measurement and analysis. This project strives to fill that gap by reviewing the goals, terminology, and methods of past persistence studies. It was conducted from the perspective of a utility that seeks to acquire demand-side resources and is interested in their long-term durability; it was not conducted from the perspective of the individual consumer. Over 30 persistence studies, articles, and protocols were examined for this report. The review begins by discussing the underpinnings of persistence studies: namely, the definitions of persistence and the purposes of persistence studies. Then it describes issues relevant to both the collection and analysis of data on the persistence of energy and demand savings. Findings from persistence studies also are summarized. Throughout the review, four studies are used repeatedly to illustrate different methodological and analytical approaches to persistence so that readers can track the data collection, data analysis, and findings of a set of comprehensive studies that represent alternative approaches.

  5. Measurement of testosterone in human sexuality research: methodological considerations.

    PubMed

    van Anders, Sari M; Goldey, Katherine L; Bell, Sarah N

    2014-02-01

    Testosterone (T) and other androgens are incorporated into an increasingly wide array of human sexuality research, but there are a number of issues that can affect or confound research outcomes. This review addresses various methodological issues relevant to research design in human studies with T; unaddressed, these issues may introduce unwanted noise, error, or conceptual barriers to interpreting results. Topics covered are (1) social and demographic factors (gender and sex; sexual orientations and sexual diversity; social/familial connections and processes; social location variables), (2) biological rhythms (diurnal variation; seasonality; menstrual cycles; aging and menopause), (3) sample collection, handling, and storage (saliva vs. blood; sialogogues, saliva, and tubes; sampling frequency, timing, and context; shipping samples), (4) health, medical issues, and the body (hormonal contraceptives; medications and nicotine; health conditions and stress; body composition, weight, and exercise), and (5) incorporating multiple hormones. Detailing a comprehensive set of important issues and relevant empirical evidence, this review provides a starting point for best practices in human sexuality research with T and other androgens that may be especially useful for those new to hormone research. PMID:23807216

  6. Adaptability of laser diffraction measurement technique in soil physics methodology

    NASA Astrophysics Data System (ADS)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

    There are intentions around the world to harmonize soil particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), the PSDs of the sedimentation methods (which follow different standards) are themselves dissimilar and can hardly be harmonized with each other. A need therefore arose to build a database containing PSD values measured both by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of a statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM data, the clay fraction was strongly underestimated and the silt fraction overestimated compared with the pipette method, so soil texture classes determined from the LDM measurements differed significantly from the pipette results. Based on previous surveys, and in order to optimize the agreement between the two data sets, the clay/silt boundary for LDM was changed. With the modified size limits, the clay and silt fractions of the two methods showed higher similarity. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and correspondingly shifting the lower size limit of the silt fraction, makes the pipette method and LDM more easily comparable. With the modified limit, higher correlations were also found between clay content and water vapour adsorption and specific surface area, and texture classes were found to be less dissimilar. The difference between the results of the two kinds of PSD measurement methods could be further reduced by taking other routinely analysed soil parameters into account.
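    The effect of moving the clay/silt boundary from 0.002 to 0.0066 mm can be sketched by interpolating a cumulative PSD at the two limits. The cumulative curve below is invented for illustration; only the two boundary values come from the text.

```python
import numpy as np

# Invented cumulative LDM PSD: percent finer than each particle size (mm).
size_mm = np.array([0.0002, 0.002, 0.0066, 0.02, 0.063, 0.2, 2.0])
pct_finer = np.array([4.0, 18.0, 31.0, 52.0, 70.0, 88.0, 100.0])

def clay_fraction(boundary_mm):
    """Percent finer than the clay/silt boundary, interpolated in log size."""
    return float(np.interp(np.log10(boundary_mm),
                           np.log10(size_mm), pct_finer))

clay_standard = clay_fraction(0.002)   # conventional 2 um limit
clay_modified = clay_fraction(0.0066)  # modified 6.6 um limit
```

    Raising the limit necessarily raises the apparent clay content, which is the direction of adjustment needed when LDM underestimates clay relative to the pipette method.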

  8. [Measuring the intracoronary pressure gradient--value and methodologic limitations].

    PubMed

    Sievert, H; Kaltenbach, M

    1987-06-01

    Measurements of pressure gradients were performed in a fluid-filled model. The hydrostatically regulated perfusion pressure, as well as the diameter of the tube segments and the regulation of the flow by peripheral resistance, were comparable to conditions in human coronary arteries. Pressure gradients above 20 mm Hg were only measured with a reduction in cross-sectional area of more than 90%. Even after increasing the flow four-fold, which corresponds to the human coronary flow reserve, and after probing the stenosis with different catheters (2F-5F), gradients greater than 20 mm Hg were only recorded with high-grade stenoses (more than 80% reduction in cross-sectional area). The findings in this model demonstrate that measurement of pressure gradients allows only a qualitative differentiation between high-grade (greater than 80%) and low-grade (less than 80%) stenoses. The catheter itself can contribute substantially to the gradient by obstructing the vessel, depending on the diameters of the catheter and of the coronary vessel. A quantitative assessment of the stenosis therefore requires knowledge of the pre- and post-stenotic vessel diameter as well as of the catheter diameter. Pressure measurements during transluminal coronary angioplasty should nevertheless not be abandoned: they can aid catheter positioning and help estimate dilatation efficacy. Moreover, measurement of coronary capillary wedge pressure during balloon expansion provides valuable information about the extent of collateralisation. PMID:2957862

  9. Apparatus for measuring rat body volume: a methodological proposition.

    PubMed

    Hohl, Rodrigo; de Oliveira, Renato Buscariolli; Vaz de Macedo, Denise; Brenzikofer, René

    2007-03-01

    We propose a communicating-vessels system to measure body volume in live rats through water level detection by hydrostatic weighing. The reproducibility, accuracy, linearity, and reliability of this apparatus were evaluated in two tests using previously weighed water or six aluminum cylinders of known volume after proper system calibration. The applicability of this apparatus to measurement of live animals (Wistar rats) was tested in a transversal experiment with five rats, anesthetized and nonanesthetized. We took 18 measurements of the volume under each condition (anesthetized and nonanesthetized), totaling 90 measurements. The addition of water volumes (50-700 ml) produced a regression equation with a slope of 1.0006 +/- 0.0017, intercept of 0.75 +/- 0.81 (R(2) = 0.99999, standard error of estimate = 0.58 ml), and bias of approximately 1 ml. The differences between cylinders of known volumes and volumes calculated by the system were <0.4 ml. Mean volume errors were 0.01-0.07%. Among the live models, the difference between the volumes obtained for anesthetized and nonanesthetized rats was 0.31 +/- 2.34 (SD) ml (n = 90). These data showed that animal movement does not interfere with the volume measured by the proposed apparatus, and neither anesthesia nor fur shaving is needed for this procedure. Nevertheless, some effort should be taken to eliminate air bubbles trapped in the apparatus or the fur. The proposed apparatus for measuring rat body volume is inexpensive and may be useful for a range of scientific purposes. PMID:17082370
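    The calibration step described above (regressing measured volume against known added water volumes) can be sketched in a few lines. The data points below are invented for illustration, not the study's measurements:

```python
# Least-squares calibration of a volume-measurement system, in the spirit of
# the water-addition test described above. Illustrative data only.

def linear_fit(x, y):
    """Ordinary least-squares fit y = a*x + b; returns (a, b)."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

added = [50, 150, 300, 450, 600, 700]                  # ml of water added (known)
measured = [50.9, 151.0, 300.7, 451.1, 600.5, 701.2]   # ml read from the system

slope, intercept = linear_fit(added, measured)
bias = sum(m - a for a, m in zip(added, measured)) / len(added)
print(f"slope={slope:.4f} intercept={intercept:.2f} ml, mean bias={bias:.2f} ml")
```

    A slope near 1 with a small positive intercept and bias, as reported in the abstract, indicates the apparatus tracks true volume with only a constant offset.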

  10. Responsive Surface Methodology Optimizes Extraction Conditions of Industrial by-products, Camellia japonica Seed Cake

    PubMed Central

    Kim, Jae Kyeom; Lim, Ho-Jeong; Kim, Mi-So; Choi, Soo Jung; Kim, Mi-Jeong; Kim, Cho Rong; Shin, Dong-Hoon; Shin, Eui-Cheol

    2016-01-01

    Background: The central nervous system is easily damaged by oxidative stress due to its high oxygen consumption and poor defensive capacity. Hence, multiple studies have demonstrated that inhibiting oxidative stress-induced damage through an antioxidant-rich diet might be a reasonable approach to preventing neurodegenerative disease. Objective: In the present study, response surface methodology was utilized to optimize the extraction of neuro-protective constituents of Camellia japonica byproducts. Materials and Methods: Rat pheochromocytoma cells were used to evaluate the protective potential of Camellia japonica byproducts. Results: Optimum conditions were 33.84 min (extraction time), 75.24% (ethanol concentration), and 75.82°C (temperature). Further, we demonstrated that major organic acid contents were significantly impacted by the extraction conditions, which may explain the varying magnitude of protective potential between fractions. Conclusions: Given the paucity of information regarding defatted C. japonica seed cake and its health-promoting potential, our results provide interesting preliminary data for utilization of this byproduct of oil processing in both academic and industrial applications. SUMMARY The neuro-protective potential of C. japonica seed cake on cell viability was affected by extraction conditions. Extraction conditions effectively influenced the active constituents of C. japonica seed cake. The biological activity of C. japonica seed cake was optimized by response surface methodology. Abbreviations used: GC-MS: Gas chromatography-mass spectrometry, MTT: 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide, PC12 cells: Pheochromocytoma cells, RSM: Response surface methodology. PMID:27601847

  11. Aircraft emission measurements by remote sensing methodologies at airports

    NASA Astrophysics Data System (ADS)

    Schäfer, Klaus; Jahn, Carsten; Sturm, Peter; Lechner, Bernhard; Bacher, Michael

    Emission indices of aircraft engine exhaust measured under operating conditions, which are needed to calculate airport emission inventories precisely, have not been available until now. To determine these data, measurement campaigns were performed on idling aircraft at major European airports using non-intrusive spectroscopic methods such as Fourier transform infrared spectrometry and differential optical absorption spectroscopy. Emission indices for CO and NOx were calculated and compared to the values given in the International Civil Aviation Organisation (ICAO) database. Emission indices were determined for CO (36 different aircraft engine types) and for NOx (24 different engine types). It was shown that for idling aircraft, CO emissions are underestimated by the ICAO database, while the emission indices for NOx determined in this work are lower than those given in the ICAO database. In addition, a high variance of emission indices was found within each aircraft family and from engine to engine of the same engine type. During the same measurement campaigns, the emission indices for CO and NO of eight different types of auxiliary power units were investigated.
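    For context, emission indices from remote-sensing column measurements are commonly derived by a carbon-balance argument: the measured excess of a species relative to excess CO2 in the plume is scaled by the fuel's CO2 emission index. The sketch below shows the generic approach, not the paper's exact retrieval; the plume ratios are invented, and NOx is conventionally reported as NO2:

```python
# Carbon-balance estimate of an emission index (g species per kg fuel) from
# plume column ratios. Generic sketch with invented idle-thrust ratios.

M_CO, M_NO2, M_CO2 = 28.01, 46.01, 44.01   # molar masses, g/mol
EI_CO2 = 3160.0  # g CO2 per kg kerosene burned (typical carbon-balance value)

def emission_index(d_species_over_d_co2, molar_mass):
    """EI in g/kg fuel from the plume mole ratio Delta(species)/Delta(CO2)."""
    return d_species_over_d_co2 * (molar_mass / M_CO2) * EI_CO2

ei_co = emission_index(0.025, M_CO)     # CO is elevated at idle thrust
ei_nox = emission_index(0.0015, M_NO2)  # NOx reported as NO2 by convention
print(f"EI(CO) = {ei_co:.1f} g/kg, EI(NOx) = {ei_nox:.1f} g/kg")
```

    The attraction of the ratio method is that absolute plume dilution cancels out, which is what makes open-path FTIR/DOAS measurements of uncontrolled exhaust plumes usable at all.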

  12. 39 CFR 3055.5 - Changes to measurement systems, service standards, service goals, or reporting methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ..., service goals, or reporting methodologies. 3055.5 Section 3055.5 Postal Service POSTAL REGULATORY... reporting methodologies. The Postal Service shall file notice with the Commission describing all changes to measurement systems, service standards, service goals or reporting methodologies, including the use of...

  13. The Role of Measuring in the Learning of Environmental Occupations and Some Aspects of Environmental Methodology

    ERIC Educational Resources Information Center

    István, Lüko

    2016-01-01

    Methodology is a neglected area of pedagogy, and within it the methodology of environmental specialist teaching is the least cultivated part. In this article I attempt to present environmental methodology through one part of the University of West Environmental Education workshop, based on the measurement of experiential learning as an…

  14. Advanced quantitative measurement methodology in physics education research

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    parts. The first part involves a comparison between item response theory (IRT) and classical test theory (CTT). Both theories provide test item statistics for educational inferences and decisions, and both are applied here to Force Concept Inventory data obtained from students enrolled at The Ohio State University. Effort was made to examine the similarities and differences between the two theories, and possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory, and that the IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation concerns measures of association for binary data. In quantitative assessment, binary data are often encountered because of their simplicity. The current popular measures of association fail under some extremely unbalanced conditions, yet such conditions are not rare in educational data. Two popular association measures, Pearson's correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have their advantages and disadvantages, and special attention to the test and data conditions is necessary. The last part of the dissertation is focused on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed, and typical misunderstandings and misuses of EFA are explored. The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment of scientific reasoning skills. The
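    The CTT side of the comparison rests on two standard item statistics: difficulty (proportion correct) and discrimination (point-biserial correlation of the item with the total score). A minimal sketch on a toy score matrix, not the actual FCI data; a refinement would exclude the item itself from the total score:

```python
# Classical test theory (CTT) item statistics on a toy binary score matrix
# (rows = students, columns = items). IRT would instead fit a latent-trait
# model to the same matrix.

def ctt_stats(responses):
    """Per-item (difficulty, point-biserial discrimination)."""
    n = len(responses)
    totals = [sum(row) for row in responses]
    mean_t = sum(totals) / n
    sd_t = (sum((t - mean_t) ** 2 for t in totals) / n) ** 0.5
    out = []
    for j in range(len(responses[0])):
        col = [row[j] for row in responses]
        p = sum(col) / n                    # difficulty: proportion correct
        mean_ct = sum(c * t for c, t in zip(col, totals)) / n
        # Point-biserial correlation of item score with total score
        # (assumes 0 < p < 1; a refinement excludes item j from the total).
        r = (mean_ct - p * mean_t) / (sd_t * (p * (1 - p)) ** 0.5)
        out.append((p, r))
    return out

responses = [
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
    [0, 1, 0],
]
stats = ctt_stats(responses)
for j, (p, r) in enumerate(stats):
    print(f"item {j}: difficulty={p:.2f}, discrimination={r:.2f}")
```

    The dissertation's point is that these sample statistics depend on the tested population, whereas IRT item parameters aim to be population-invariant.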

  15. Weak measurement and Bohmian conditional wave functions

    SciTech Connect

    Norsen, Travis; Struyve, Ward

    2014-11-15

    It was recently pointed out and demonstrated experimentally by Lundeen et al. that the wave function of a particle (more precisely, the wave function possessed by each member of an ensemble of identically-prepared particles) can be “directly measured” using weak measurement. Here it is shown that if this same technique is applied, with appropriate post-selection, to one particle from a perhaps entangled multi-particle system, the result is precisely the so-called “conditional wave function” of Bohmian mechanics. Thus, a plausibly operationalist method for defining the wave function of a quantum mechanical sub-system corresponds to the natural definition of a sub-system wave function which Bohmian mechanics uniquely makes possible. Similarly, a weak-measurement-based procedure for directly measuring a sub-system's density matrix should yield, under appropriate circumstances, the Bohmian “conditional density matrix” as opposed to the standard reduced density matrix. Experimental arrangements to demonstrate this behavior (and also thereby reveal the non-local dependence of sub-system state functions on distant interventions) are suggested and discussed. - Highlights: • We study a “direct measurement” protocol for wave functions and density matrices. • Weakly measured states of entangled particles correspond to Bohmian conditional states. • A novel method of observing quantum non-locality is proposed.
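    For reference, the conditional wave function invoked above has a compact standard definition in the Bohmian literature (this is the textbook form for a two-particle system, not notation taken from the paper itself):

```latex
% Conditional wave function of particle 1, given the actual Bohmian
% trajectory Y(t) of particle 2, for a joint wave function \Psi(x, y, t):
\psi_1(x, t) = \Psi\bigl(x, Y(t), t\bigr)
% \psi_1 obeys its own Schroedinger evolution only when the interaction
% with, and entanglement to, particle 2 is negligible; in general it
% evolves non-unitarily as Y(t) moves through the entangled \Psi.
```

    The paper's claim is that the Lundeen-style weak-measurement protocol, applied to one particle of the pair with suitable post-selection, reconstructs exactly this object.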

  16. Measurement conditions in the space experiment MONICA

    NASA Astrophysics Data System (ADS)

    Bakaldin, A. V.; Karelin, A. V.; Koldashov, S. V.; Voronov, S. A.

    2016-02-01

    The present contribution investigates the background conditions for cosmic ray ion ionization state measurements in the MONICA experiment [1]. The future MONICA experiment aims to study cosmic ray ion fluxes from H to Ni in the energy range 10-300 MeV/n with a multilayer semiconductor telescope-spectrometer installed onboard a satellite. The satellite orbit parameters (circular, polar, altitude about 600 km) were chosen to enable a unique method of measuring the charge state of ions with energies >10 MeV/n, using the Earth's magnetic field as a separator of ion charge. An analysis of the background particle fluxes is presented, taking into account recent data from satellite experiments and the known AE-8 and AP-8 models, and recommendations are given to improve the background conditions for the MONICA experiment.

  17. Methodology for determining time-dependent mechanical properties of tuff subjected to near-field repository conditions

    SciTech Connect

    Blacic, J.D.; Andersen, R.

    1983-01-01

    We have established a methodology to determine the time dependence of strength and transport properties of tuff under conditions appropriate to a nuclear waste repository. Exploratory tests to determine the approximate magnitudes of thermomechanical property changes are nearly complete. In this report we describe the capabilities of an apparatus designed to precisely measure the time-dependent deformation and permeability of tuff at simulated repository conditions. Preliminary tests with this new apparatus indicate that microclastic creep failure of tuff occurs over a narrow strain range, with little precursory tertiary creep. In one test, deformation under slowly decreasing effective pressure resulted in failure even though some strain indicators showed a decreasing strain rate.

  18. Response Surface Methodology: An Extensive Potential to Optimize in vivo Photodynamic Therapy Conditions

    SciTech Connect

    Tirand, Loraine; Bastogne, Thierry; Bechet, Denise M.Sc.; Linder, Michel; Thomas, Noemie; Frochot, Celine; Guillemin, Francois; Barberi-Heyob, Muriel

    2009-09-01

    Purpose: Photodynamic therapy (PDT) is based on the interaction of a photosensitizing (PS) agent, light, and oxygen. Few new PS agents reach the in vivo stage of development, partly because of the difficulty of finding the right treatment conditions. Response surface methodology, an empirical modeling approach based on data resulting from a set of designed experiments, was suggested as a rational solution with which to select in vivo PDT conditions, using a new peptide-conjugated PS agent targeting neuropilin-1. Methods and Materials: A Doehlert experimental design was selected to model effects and interactions of the PS dose, fluence, and fluence rate on the growth of U87 human malignant glioma cell xenografts in nude mice, using a fixed drug-light interval. All experimental results were computed by Nemrod-W software and Matlab. Results: Intrinsic diameter growth rate, a tumor growth parameter independent of the initial volume of the tumor, was selected as the response variable and was compared to tumor growth delay and relative tumor volumes. With only 13 experimental conditions tested, an optimal PDT condition was selected (PS agent dose, 2.80 mg/kg; fluence, 120 J/cm²; fluence rate, 85 mW/cm²). Treatment of glioma-bearing mice with the peptide-conjugated PS agent, followed by PDT under the optimized condition, showed a statistically significant improvement in delaying tumor growth compared with animals that received PDT with the non-conjugated PS agent. Conclusions: Response surface methodology appears to be a useful experimental approach for rapid testing of different treatment conditions and determination of optimal values of PDT factors for any PS agent.
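    The response-surface idea can be illustrated in miniature with a single factor: fit a quadratic model by least squares and take its stationary point as the candidate optimum. A real Doehlert design models all three PDT factors and their interactions jointly; the data below are invented for illustration:

```python
# One-factor sketch of response surface methodology: quadratic fit via the
# normal equations, then the vertex as the candidate optimum. Invented data.

def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        piv = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[piv] = M[piv], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def fit_quadratic(xs, ys):
    """Least-squares y = c0 + c1*x + c2*x^2 via the normal equations."""
    X = [[1.0, x, x * x] for x in xs]
    A = [[sum(row[i] * row[j] for row in X) for j in range(3)] for i in range(3)]
    b = [sum(row[i] * y for row, y in zip(X, ys)) for i in range(3)]
    return solve(A, b)

fluence = [60, 90, 120, 150, 180]        # J/cm^2
growth = [0.80, 0.55, 0.42, 0.50, 0.72]  # tumor growth rate (lower is better)
c0, c1, c2 = fit_quadratic(fluence, growth)
optimum = -c1 / (2 * c2)                 # vertex of the fitted parabola
print(f"candidate optimal fluence ~ {optimum:.0f} J/cm^2")
```

    The appeal of the designed-experiment approach in the abstract is exactly this: 13 runs suffice to fit the full three-factor quadratic surface instead of varying one factor at a time.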

  19. Calibration methodology for proportional counters applied to yield measurements of a neutron burst

    SciTech Connect

    Tarifeño-Saldivia, Ariel E-mail: atarisal@gmail.com; Pavez, Cristian; Soto, Leopoldo; Mayer, Roberto E.

    2014-01-15

    This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. This methodology is to be applied when single neutron events cannot be resolved in time by nuclear standard electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. For the measurement of fast neutron yields generated from plasma focus experiments using a moderated proportional counter, the implementation of the methodology is herein discussed. An experimental verification of the accuracy of the methodology is presented. An improvement of more than one order of magnitude in the accuracy of the detection system is obtained by using this methodology with respect to previous calibration methods.
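    The core of the charge-based yield estimate can be sketched as follows, assuming pulse-mode calibration has already yielded the mean and spread of the single-event charge. The numbers and the simplified error propagation are illustrative, not the paper's full statistical model:

```python
# Estimate the number of detected neutrons in a burst from the accumulated
# charge Q, given pulse-mode calibration of the single-event charge
# distribution (mean q_mean, spread q_std). Invented numbers.

def events_from_charge(Q, q_mean, q_std):
    """Detected-event count and a simplified statistical uncertainty."""
    n = Q / q_mean
    # Two variance terms: spread of charge per event, and Poisson counting
    # statistics on the number of events (simplified propagation).
    sigma = (n * (q_std / q_mean) ** 2 + n) ** 0.5
    return n, sigma

Q = 2.4e-9        # C, charge integrated over the burst
q_mean = 1.2e-12  # C per detected neutron, from pulse-mode calibration
q_std = 0.4e-12   # C, spread of the single-event charge
n, sigma = events_from_charge(Q, q_mean, q_std)
print(f"detected events = {n:.0f} +/- {sigma:.0f}")
```

    The detected-event count is then converted to a total neutron yield through the detector efficiency and geometry, which the calibration also provides.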

  20. A methodology for hard/soft information fusion in the condition monitoring of aircraft

    NASA Astrophysics Data System (ADS)

    Bernardo, Joseph T.

    2013-05-01

    Condition-based maintenance (CBM) refers to the philosophy of performing maintenance when the need arises, based upon indicators of deterioration in the condition of the machinery. Traditionally, CBM involves equipping machinery with electronic sensors that continuously monitor components and collect data for analysis. Adding the multisensory capability of human cognitive functions (i.e., sensemaking, problem detection, planning, adaptation, coordination, naturalistic decision making) to traditional CBM may create a fuller picture of machinery condition. Cognitive systems engineering techniques provide an opportunity to utilize a dynamic resource: people acting as soft sensors. The literature on techniques to fuse data from electronic sensors is extensive, but little work exists on fusing data from humans with that from electronic sensors (i.e., hard/soft fusion). The purpose of my research is to explore, observe, investigate, analyze, and evaluate the fusion of pilot and maintainer knowledge, experiences, and sensory perceptions with digital maintenance resources. Hard/soft information fusion has the potential to increase problem detection capability, improve flight safety, and increase mission readiness. This proposed project consists of the creation of a methodology based upon the Living Laboratories framework, a research methodology built upon cognitive engineering principles. This study performs a critical assessment of the concept, which will support development of activities to demonstrate hard/soft information fusion in operationally relevant scenarios of aircraft maintenance. It consists of fieldwork and knowledge elicitation to inform a simulation and a prototype.

  1. A methodological framework for linking bioreactor function to microbial communities and environmental conditions.

    PubMed

    de los Reyes, Francis L; Weaver, Joseph E; Wang, Ling

    2015-06-01

    In the continuing quest to relate microbial communities in bioreactors to function and environmental and operational conditions, engineers and biotechnologists have adopted the latest molecular and 'omic methods. Despite the large amounts of data generated, gaining mechanistic insights and using the data for predictive and practical purposes is still a huge challenge. We present a methodological framework that can guide experimental design, and discuss specific issues that can affect how researchers generate and use data to elucidate the relationships. We also identify, in general terms, bioreactor research opportunities that appear promising. PMID:25710123

  2. Measurement-based auralization methodology for the assessment of noise mitigation measures

    NASA Astrophysics Data System (ADS)

    Thomas, Pieter; Wei, Weigang; Van Renterghem, Timothy; Botteldooren, Dick

    2016-09-01

    The effect of noise mitigation measures is generally expressed by noise levels only, neglecting the listener's perception. In this study, an auralization methodology is proposed that enables an auditory preview of noise abatement measures for road traffic noise, based on the direction-dependent attenuation of a priori recordings made with a dedicated 32-channel spherical microphone array. This measurement-based auralization has the advantage that all non-road-traffic sounds that create the listening context are present. The potential of this auralization methodology is evaluated through the assessment of the effect of an L-shaped mound. The angular insertion loss of the mound is estimated by using the ISO 9613-2 propagation model, the Pierce barrier diffraction model, and the Harmonoise point-to-point model. The realism of the auralization technique is evaluated by listening tests, which indicate that listeners had great difficulty differentiating between a posteriori recordings and auralized samples, supporting the validity of the approach.

  3. Optimization of hydrolysis conditions for bovine plasma protein using response surface methodology.

    PubMed

    Seo, Hyun-Woo; Jung, Eun-Young; Go, Gwang-Woong; Kim, Gap-Don; Joo, Seon-Tea; Yang, Han-Sul

    2015-10-15

    The purpose of this study was to establish optimal conditions for the hydrolysis of bovine plasma protein. Response surface methodology was used to model and optimize the responses: degree of hydrolysis (DH), 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical-scavenging activity, and Fe(2+)-chelating activity. Hydrolysis temperature (46.6-63.4 °C), hydrolysis time (98-502 min), and hydrolysis pH (6.32-9.68) were selected as the main processing conditions. Optimal conditions for maximum DH (%), DPPH radical-scavenging activity (%), and Fe(2+)-chelating activity (%) of the hydrolyzed bovine plasma protein were each established. We identified the following conditions for optimal hydrolysis of bovine plasma protein: pH 7.82-8.32, temperature 54.1 °C, and time 338.4-398.4 min. We then hydrolyzed bovine plasma protein under these conditions and confirmed the desirable properties of the optimized hydrolysate. PMID:25952847
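    When several responses (here DH, DPPH scavenging, and Fe2+ chelation) must be optimized at once, RSM studies often combine them into a single overall desirability score in the style of Derringer and Suich. The abstract does not state that this particular scheme was used, so the sketch below is a generic illustration with invented response values and bounds:

```python
# Overall desirability for multi-response optimization: scale each response
# to [0, 1] and take the geometric mean. Invented values, generic scheme.

def desirability_larger_is_better(y, lo, hi):
    """0 below lo, 1 above hi, linear in between."""
    if y <= lo:
        return 0.0
    if y >= hi:
        return 1.0
    return (y - lo) / (hi - lo)

def overall_desirability(responses, bounds):
    ds = [desirability_larger_is_better(y, lo, hi)
          for y, (lo, hi) in zip(responses, bounds)]
    if min(ds) == 0.0:
        return 0.0          # any unacceptable response vetoes the condition
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1.0 / len(ds))

# DH (%), DPPH scavenging (%), Fe2+ chelation (%) at one candidate condition
responses = [22.0, 65.0, 48.0]
bounds = [(10, 30), (40, 80), (30, 60)]
D = overall_desirability(responses, bounds)
print(f"overall desirability D = {D:.3f}")
```

    The condition maximizing D over the fitted response surfaces is then taken as the compromise optimum, which is one way to reconcile the per-response optima reported in such studies.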

  4. Alcohol Measurement Methodology in Epidemiology: Recent Advances and Opportunities

    PubMed Central

    Greenfield, Thomas K.; Kerr, William C.

    2009-01-01

    Aim To review and discuss measurement issues in survey assessment of alcohol consumption for epidemiological studies. Methods The following areas are considered: implications of cognitive studies of question answering like self-referenced schemata of drinking, reference period and retrospective recall, as well as the assets and liabilities of types of current (e.g., food frequency, quantity frequency, graduated frequencies, and heavy drinking indicators) and lifetime drinking measures. Finally we consider units of measurement and improving measurement by detailing the ethanol content of drinks in natural settings. Results and conclusions Cognitive studies suggest inherent limitations in the measurement enterprise, yet diary studies show promise of broadly validating methods that assess a range of drinking amounts per occasion; improvements in survey measures of drinking in the life course are indicated; attending in detail to on and off-premise drink pour sizes and ethanol concentrations of various beverages shows promise of narrowing the coverage gap plaguing survey alcohol measurement. PMID:18422826

  5. Methodology of the Westinghouse dynamic rod worth measurement technique

    SciTech Connect

    Chao, Y.A.; Chapman, D.M.; Easter, M.E.; Hill, D.J.; Hoerner, J.A. ); Kurtz, P.N. )

    1992-01-01

    During zero-power physics testing, plant operations personnel use one of various techniques to measure the reactivity worth of the control rods to confirm shutdown margin. A simple and fast procedure for measuring rod worths called dynamic rod worth measurement (DRWM) has been developed at Westinghouse. This procedure was tested at the recent startups of Point Beach Nuclear Power Plant Unit 1 cycle 20 and Unit 2 cycle 18. The results of these tests show that DRWM measures rod worths with accuracy comparable to that of both boron dilution and rod bank exchange measurements. The DRWM procedure is a fast process of measuring the reactivity worth of individual banks by inserting and withdrawing the bank continuously at the maximum stepping speed without changing the boron concentration and recording the signals of the ex-core detectors.

  6. Causality analysis in business performance measurement system using system dynamics methodology

    NASA Astrophysics Data System (ADS)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map, with its unidirectional causality feature. Despite its apparent popularity, this causality assumption has been rigorously criticized by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using an econometric analysis, the Granger causality test, on 45 data points. However, the data revealed the insufficiency of well-established causality models, as only 40% of the causal linkages were supported. Expert knowledge has been suggested for situations where historical data are insufficient. The Delphi method was therefore selected and conducted, using 3 rounds of questionnaires, to obtain consensus on the existence of causality among 15 selected experts. The study revealed that only 20% of the propositions were not supported. Both methods found bidirectional causality, demonstrating significant dynamic environmental complexity through interaction among measures. With that, a computer model and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity, and extreme-condition tests were conducted on the developed SD model to ensure its ability to mimic reality and its robustness and validity as a platform for causality analysis. This study applied a theoretical service management model within the BSC domain to a practical situation using SD methodology, where very limited work has been done.

  7. Measurement-based reliability prediction methodology. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Linn, Linda Shen

    1991-01-01

    In the past, analytical and measurement based models were developed to characterize computer system behavior. An open issue is how these models can be used, if at all, for system design improvement. The issue is addressed here. A combined statistical/analytical approach to use measurements from one environment to model the system failure behavior in a new environment is proposed. A comparison of the predicted results with the actual data from the new environment shows a close correspondence.

  8. A Methodology for Robust Multiproxy Paleoclimate Reconstructions and Modeling of Temperature Conditional Quantiles

    PubMed Central

    Janson, Lucas; Rajaratnam, Bala

    2014-01-01

    Great strides have been made in the field of reconstructing past temperatures based on models relating temperature to temperature-sensitive paleoclimate proxies. One of the goals of such reconstructions is to assess if current climate is anomalous in a millennial context. These regression based approaches model the conditional mean of the temperature distribution as a function of paleoclimate proxies (or vice versa). Some of the recent focus in the area has considered methods which help reduce the uncertainty inherent in such statistical paleoclimate reconstructions, with the ultimate goal of improving the confidence that can be attached to such endeavors. A second important scientific focus in the subject area is the area of forward models for proxies, the goal of which is to understand the way paleoclimate proxies are driven by temperature and other environmental variables. One of the primary contributions of this paper is novel statistical methodology for (1) quantile regression with autoregressive residual structure, (2) estimation of corresponding model parameters, (3) development of a rigorous framework for specifying uncertainty estimates of quantities of interest, yielding (4) statistical byproducts that address the two scientific foci discussed above. We show that by using the above statistical methodology we can demonstrably produce a more robust reconstruction than is possible by using conditional-mean-fitting methods. Our reconstruction shares some of the common features of past reconstructions, but we also gain useful insights. More importantly, we are able to demonstrate a significantly smaller uncertainty than that from previous regression methods. In addition, the quantile regression component allows us to model, in a more complete and flexible way than least squares, the conditional distribution of temperature given proxies. This relationship can be used to inform forward models relating how proxies are driven by temperature. PMID:25587203
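    The quantile-regression component rests on the pinball (check) loss: the tau-quantile of a distribution minimizes its expectation. A tiny constant-only demonstration; the paper's model adds covariates (the proxies) and autoregressive residual structure on top of this loss:

```python
# The pinball loss and its minimizer. For tau = 0.5 the optimal constant is
# the median; other tau values recover other conditional quantiles.

def pinball(residual, tau):
    """Check loss: tau * r for r >= 0, (tau - 1) * r for r < 0."""
    return tau * residual if residual >= 0 else (tau - 1) * residual

def avg_loss(data, q, tau):
    return sum(pinball(y - q, tau) for y in data) / len(data)

data = [1.0, 2.0, 3.0, 4.0, 10.0]
tau = 0.5
# Scan the data points as candidate constants; an empirical quantile always
# sits at one of them.
best = min((avg_loss(data, q, tau), q) for q in data)
print(f"pinball-optimal constant for tau=0.5: {best[1]}")  # the median, 3.0
```

    Fitting this loss for many values of tau, with proxies as covariates, yields the full conditional distribution of temperature given proxies, which is what lets the authors go beyond conditional-mean reconstructions.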

  9. Optimizing ultrasonic ellagic acid extraction conditions from infructescence of Platycarya strobilacea using response surface methodology.

    PubMed

    Zhang, Liang-Liang; Xu, Man; Wang, Yong-Mei; Wu, Dong-Mei; Chen, Jia-Hong

    2010-11-01

    The infructescence of Platycarya strobilacea is a rich source of ellagic acid (EA), which has shown antioxidant, anticancer, and antimutagenic properties. Response surface methodology (RSM) was used to optimize the conditions for ultrasonic extraction of EA from the infructescence of P. strobilacea. A central composite design (CCD) was used for experimental design and analysis of the results to obtain the optimal processing parameters. The content of EA in the extracts was determined by HPLC with UV detection. Three independent variables were investigated: ultrasonic extraction temperature (°C), liquid:solid ratio (mL/g), and ultrasonic extraction time (min). The experimental data obtained were fitted to a quadratic equation using multiple regression analysis and analyzed by appropriate statistical methods. The 3-D response surfaces and contour plots derived from the mathematical models were used to determine the optimal conditions. The optimum ultrasonic extraction conditions were as follows: extraction temperature 70 °C, liquid:solid ratio 22.5 mL/g, and extraction time 40 min. Under these conditions, the experimental EA yield was 1.961%, in close agreement with the value predicted by the model. PMID:21060299
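    The central composite design mentioned above has a standard construction in coded units: 2^k factorial corners, 2k axial points at distance alpha chosen for rotatability, and replicated center points. A generic generator, not the paper's exact design table:

```python
# Points of a rotatable central composite design (CCD) for k factors in
# coded units. Generic construction; the number of center replicates is a
# design choice (6 is common for k = 3).

from itertools import product

def ccd_points(k, n_center=6):
    alpha = (2 ** k) ** 0.25            # rotatability: alpha = F**(1/4)
    corners = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            axial.append(pt)
    center = [[0.0] * k for _ in range(n_center)]
    return corners + axial + center

design = ccd_points(3)
print(f"{len(design)} runs; alpha = {(2 ** 3) ** 0.25:.3f}")
```

    Each coded point is then mapped to physical levels of temperature, liquid:solid ratio, and time, and the measured yields are fitted to the quadratic model described in the abstract.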

  10. Optimization of hull-less pumpkin seed roasting conditions using response surface methodology.

    PubMed

    Vujasinović, Vesna; Radočaj, Olga; Dimić, Etelka

    2012-05-01

    Response surface methodology (RSM) was applied to optimize hull-less pumpkin seed roasting conditions before seed pressing to maximize the biochemical composition and antioxidant capacity of the virgin pumpkin oils obtained using a hydraulic press. Hull-less pumpkin seeds were roasted for various lengths of time (30 to 70 min) at various roasting temperatures (90 to 130 °C), resulting in 9 different oil samples, while the responses were phospholipids content, total phenols content, α- and γ-tocopherols, and antioxidative activity [by 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical assay]. Mathematical models have shown that roasting conditions influenced all dependent variables at P < 0.05. The higher roasting temperatures had a significant effect (P < 0.05) on phospholipids, phenols, and α-tocopherols contents, while longer roasting time had a significant effect (P < 0.05) on γ-tocopherol content and antioxidant capacity, among the samples prepared under different roasting conditions. The optimum conditions for roasting the hull-less pumpkin seeds were 120 °C for duration of 49 min, which resulted in these oil concentrations: phospholipids 0.29%, total phenols 23.06 mg/kg, α-tocopherol 5.74 mg/100 g, γ-tocopherol 24.41 mg/100 g, and an antioxidative activity (EC(50)) of 27.18 mg oil/mg DPPH. PMID:23163936

  11. Aqueduct: a methodology to measure and communicate global water risks

    NASA Astrophysics Data System (ADS)

    Gassert, Francis; Reig, Paul

    2013-04-01

    , helping non-expert audiences better understand and evaluate risks facing water users. This presentation will discuss the methodology used to combine the indicator values into aggregated risk scores and lessons learned from working with diverse audiences in academia, development institutions, and the public and private sectors.

  12. Measuring Individual Differences in Decision Biases: Methodological Considerations

    PubMed Central

    Aczel, Balazs; Bago, Bence; Szollosi, Aba; Foldes, Andrei; Lukacs, Bence

    2015-01-01

    Individual differences in people's susceptibility to heuristics and biases (HB) are often measured by multiple-bias questionnaires consisting of one or a few items for each bias. This research approach relies on the assumptions that (1) different versions of a decision bias task measure are interchangeable as they measure the same cognitive failure; and (2) that some combination of these tasks measures the same underlying construct. Based on these assumptions, in Study 1 we developed two versions of a new decision bias survey for which we modified 13 HB tasks to increase their comparability, construct validity, and the participants' motivation. The analysis of the responses (N = 1279) showed weak internal consistency within the surveys and a great level of discrepancy between the extracted patterns of the underlying factors. To explore these inconsistencies, in Study 2 we used three original examples of HB tasks for each of seven biases. We created three decision bias surveys by allocating one version of each HB task to each survey. The participants' responses (N = 527) showed a similar pattern as in Study 1, questioning the assumption that the different examples of the HB tasks are interchangeable and that they measure the same underlying construct. These results emphasize the need to understand the domain-specificity of cognitive biases as well as the effect of the wording of the cover story and the response mode on bias susceptibility before employing them in multiple-bias questionnaires. PMID:26635677

  14. Multi-Attribute Decision Theory methodology for pollution control measure analysis

    SciTech Connect

    Barrera Roldan, A.S.; Corona Juarez, A.; Hardie, R.W.; Thayer, G.R.

    1992-12-31

    A methodology based on Multi-Attribute Decision Theory was developed to prioritize air pollution control measures and strategies (sets of measures) for the Mexico City Metropolitan Area (MCMA). We have developed a framework that takes into account economic, technical-feasibility, environmental, social, political, and institutional factors to evaluate pollution mitigation measures and strategies through a decision analysis process. In a series of meetings with a panel of air pollution experts from different offices of the Mexican government, we developed general and specific criteria for a decision analysis tree. With these tools, measures or strategies can be graded and a figure of merit assigned to each, so that they can be ranked. Two pollution mitigation measures were analyzed to test the methodology, and the results are presented. The methodology was developed specifically for Mexico City, though the experience gained in this work can be used to develop similar methodologies for other metropolitan areas throughout the world.
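
    The figure-of-merit step in such a Multi-Attribute Decision Theory analysis reduces, in its simplest additive form, to a weighted sum over criteria. A minimal sketch with hypothetical weights and scores (the study's actual criteria tree and values are not reproduced here):

```python
# Hypothetical criterion weights (summing to 1) and per-measure scores
# on a 0-10 scale; names are illustrative, not from the study.
weights = {"economic": 0.25, "technical": 0.20, "environmental": 0.30,
           "social": 0.15, "institutional": 0.10}

measures = {
    "catalytic_converters": {"economic": 6, "technical": 8,
                             "environmental": 9, "social": 7,
                             "institutional": 5},
    "fuel_reformulation":   {"economic": 7, "technical": 6,
                             "environmental": 7, "social": 8,
                             "institutional": 6},
}

def figure_of_merit(scores, weights):
    """Additive multi-attribute value: weighted sum of criterion scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

ranking = sorted(measures,
                 key=lambda m: figure_of_merit(measures[m], weights),
                 reverse=True)
print(ranking)
```

    In practice the weights come from the expert panel, and each leaf of the decision analysis tree contributes a sub-score that rolls up into the criterion score used here.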

  16. [Methodological challenges in measurements of functional ability in gerontological research].

    PubMed

    Avlund, E K

    1997-10-20

    This article addresses the advantages and disadvantages of different methods in measuring functional ability with its main focus on frame of reference, operationalization, practical procedure, validity, discriminatory power, and responsiveness. When measuring functional ability it is recommended: 1) Always to consider the theoretical frame of reference as part of the validation process. 2) Always to assess the content validity of items before they are combined into an index and before performing tests for construct validity. 3) Not to combine mobility, PADL and IADL in the same index/scale. 4) Not to use IADL as a health-related functional ability measure or, if used, to ask whether problems with IADL or non-performance of IADL are caused by health-related factors. 5) Always to analyse functional ability separately for men and women. 6) To exclude the dead in analyses of change in functional ability if the focus is on predictors of deterioration in functional ability. PMID:9411957

  17. Conceptual and methodological advances in child-reported outcomes measurement

    PubMed Central

    Bevans, Katherine B; Riley, Anne W; Moon, JeanHee; Forrest, Christopher B

    2011-01-01

    Increasingly, clinical, pharmaceutical and translational research studies use patient-reported outcomes as primary and secondary end points. Obtaining this type of information from children themselves is now possible, but effective assessment requires developmentally sensitive conceptual models of child health and an appreciation for the rapid change in children’s cognitive capacities. To overcome these barriers, outcomes researchers have capitalized on innovations in modern measurement theory, qualitative methods for instrument development and new computerized technologies to create reliable and valid methods for obtaining self-reported health data among 8–17-year-old children. This article provides a developmentally focused framework for selecting child-report health assessment instruments. Several generic health-related quality of life instruments and the assessment tools developed by the NIH-sponsored Patient-Reported Outcome Measurement Information System network are discussed to exemplify advances in the measurement of children’s self-reported health, illness, wellbeing and quality of life. PMID:20715916

  18. Effective speed and agility conditioning methodology for random intermittent dynamic type sports.

    PubMed

    Bloomfield, Jonathan; Polman, Remco; O'Donoghue, Peter; McNaughton, Lars

    2007-11-01

    Different coaching methods are often used to improve performance. This study compared the effectiveness of 2 methodologies for speed and agility conditioning for random, intermittent, and dynamic activity sports (e.g., soccer, tennis, hockey, basketball, rugby, and netball) and the necessity for specialized coaching equipment. Two groups were delivered either a programmed method (PC) or a random method (RC) of conditioning with a third group receiving no conditioning (NC). PC participants used the speed, agility, quickness (SAQ) conditioning method, and RC participants played supervised small-sided soccer games. PC was also subdivided into 2 groups where participants either used specialized SAQ equipment or no equipment. A total of 46 (25 males and 21 females) untrained participants received (mean +/- SD) 12.2 +/- 2.1 hours of physical conditioning over 6 weeks between a battery of speed and agility parameter field tests. Two-way analysis of variance results indicated that both conditioning groups showed a significant decrease in body mass and body mass index, although PC achieved significantly greater improvements on acceleration, deceleration, leg power, dynamic balance, and the overall summation of % increases when compared to RC and NC (p < 0.05). PC in the form of SAQ exercises appears to be a superior method for improving speed and agility parameters; however, this study found that specialized SAQ equipment was not a requirement to observe significant improvements. Further research is required to establish whether these benefits transfer to sport-specific tasks as well as to the underlying mechanisms resulting in improved performance. PMID:18076227

  19. Methodology for reliability based condition assessment. Application to concrete structures in nuclear plants

    SciTech Connect

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period.
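
    The core of such a time-dependent reliability assessment can be conveyed with a crude (non-adaptive) Monte Carlo sketch: sample an initial strength, degrade it over the service period, and count the realizations in which any annual extreme load exceeds the current strength. All distributions and parameter values below are hypothetical, not the report's models.

```python
import numpy as np

rng = np.random.default_rng(42)

def failure_probability(n_sim=20_000, years=40,
                        r0_mean=100.0, r0_cov=0.10, degradation=0.5,
                        load_mean=50.0, load_cov=0.25):
    """Crude Monte Carlo estimate of the probability that a component
    with linearly degrading strength fails under annual extreme loads.
    Illustrative distributions only, not the report's adaptive scheme."""
    r0 = rng.normal(r0_mean, r0_cov * r0_mean, n_sim)     # initial strength
    t = np.arange(1, years + 1)
    strength = r0[:, None] - degradation * t[None, :]     # degraded strength
    loads = rng.normal(load_mean, load_cov * load_mean, (n_sim, years))
    # A realization fails if the load exceeds strength in any year
    return (loads > strength).any(axis=1).mean()

pf = failure_probability()
print(pf)   # lifetime failure probability over the 40-year service period
```

    The adaptive procedure in the report refines the sampling; the brute-force version here only conveys the structure of the calculation, and it shows directly why the result is sensitive to the load and degradation models.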

  20. Conceptual and Methodological Issues in Treatment Integrity Measurement

    ERIC Educational Resources Information Center

    McLeod, Bryce D.; Southam-Gerow, Michael A.; Weisz, John R.

    2009-01-01

    This special series focused on treatment integrity in the child mental health and education field is timely. The articles do a laudable job of reviewing (a) the current status of treatment integrity research and measurement, (b) existing conceptual models of treatment integrity, and (c) the limitations of prior research. Overall, this thoughtful…

  1. Measure of Landscape Heterogeneity by Agent-Based Methodology

    NASA Astrophysics Data System (ADS)

    Wirth, E.; Szabó, Gy.; Czinkóczky, A.

    2016-06-01

    With the rapid increase of the world's population, efficient food production is one of the key factors in human survival. Since biodiversity and heterogeneity are the basis of sustainable agriculture, the authors set out to measure the heterogeneity of a chosen landscape. The EU farming and subsidy policies (EEA, 2014) support landscape heterogeneity and diversity; nevertheless, exact measurements and calculations, apart from simple statistical parameters (standard deviation, mean), do not really exist. In the present paper the authors' goal is to find an objective, dynamic method for measuring landscape heterogeneity. This is achieved with so-called agent-based modelling, in which randomly dispatched dynamic scouts record the observed land-cover parameters and sum up the features of each new type of land encountered. During the simulation the agents accumulate a Monte Carlo integral as a diversity landscape potential, which can be considered the unit of the `greening' measure. As a final product of the ABM method, a landscape potential map is obtained that can serve as a tool for objective decision making to support agricultural diversity.
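
    The scout idea can be illustrated with a short sketch: agents land at random cells of a land-cover raster, and the class frequencies they record give a Monte Carlo estimate of a diversity (Shannon entropy) score. The raster and class proportions below are hypothetical, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical land-cover raster: 0 = arable, 1 = grassland, 2 = forest
landscape = rng.choice([0, 1, 2], size=(100, 100), p=[0.5, 0.3, 0.2])

def scout_diversity(landscape, n_scouts=5_000):
    """Monte Carlo estimate of Shannon diversity: random 'scout' agents
    record the cover class at their landing cell, and the class
    frequencies are turned into an entropy score."""
    rows = rng.integers(0, landscape.shape[0], n_scouts)
    cols = rng.integers(0, landscape.shape[1], n_scouts)
    observed = landscape[rows, cols]
    p = np.bincount(observed) / n_scouts
    p = p[p > 0]
    return -(p * np.log(p)).sum()

h = scout_diversity(landscape)
print(h)   # approaches the entropy of the true cover proportions
```

    Running the same estimator over a moving window, rather than the whole raster, would produce the kind of landscape potential map the abstract describes.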

  2. MEASUREMENT OF CARDIOPULMONARY FUNCTION BY REBREATHING METHODOLOGY IN PIGLETS

    EPA Science Inventory

    The use of a multiple gas rebreathing method for the measurement of cardiopulmonary function in mechanically ventilated neonates was evaluated. The following indices of cardiopulmonary function were assessed in 20 piglets (mean weight, 2.3 kg): (1) pulmonary capillary blood flow ...

  3. Methodologies for measuring residual stress distributions in epitaxial thin films

    NASA Astrophysics Data System (ADS)

    Liu, M.; Ruan, H. H.; Zhang, L. C.

    2013-01-01

    Residual stresses in a thin film deposited on a dissimilar substrate can bring about various interface or subsurface damages, such as delamination, dislocation, twinning and cracking. In high-performance integrated circuits and MEMS, an excessively high residual stress can significantly alter electronic properties. Proper residual stress characterization requires the full stress tensor and its variation with film thickness. The difficulty is that film thickness must be measured by separate means, and direct measurement techniques that fulfill both tasks are not straightforward. This paper provides a simple method using X-ray diffraction (XRD) and Raman scattering for measuring residual stresses and their thickness dependence. Using an epitaxial silicon film on a sapphire substrate as an example, the paper demonstrates that the improved XRD technique can exploit multiple diffraction peaks to yield a highly accurate stress tensor. The co-existence of silicon and sapphire peaks in a Raman spectrum then allows simultaneous measurement of film thickness, from the peak intensity ratio, and residual stress, from the peak shift. The paper also derives the relation between film thickness and residual stress.

  4. Sensible organizations: technology and methodology for automatically measuring organizational behavior.

    PubMed

    Olguin Olguin, Daniel; Waber, Benjamin N; Kim, Taemie; Mohan, Akshay; Ara, Koji; Pentland, Alex

    2009-02-01

    We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks. PMID:19150759

  5. Optimization of culture conditions for flexirubin production by Chryseobacterium artocarpi CECT 8497 using response surface methodology.

    PubMed

    Venil, Chidambaram Kulandaisamy; Zakaria, Zainul Akmar; Ahmad, Wan Azlina

    2015-01-01

    Flexirubins are a unique type of bacterial pigment produced by bacteria of the genus Chryseobacterium; they are used in the treatment of chronic skin disease and eczema, among other conditions, and may serve as a chemotaxonomic marker. Chryseobacterium artocarpi CECT 8497, a yellowish-orange pigment-producing strain, was investigated for maximum pigment production by optimizing the medium composition using response surface methodology (RSM). Culture conditions affecting pigment production were optimized statistically in shake flask experiments. Lactose, l-tryptophan and KH2PO4 were the most significant variables affecting pigment production. A Box-Behnken design (BBD) and RSM analysis were adopted to investigate the interactions between variables and determine the optimal values for maximum pigment production. The experimental results indicated that the optimum conditions for maximum production of pigment (521.64 mg/L) in a 50 L bioreactor were lactose 11.25 g/L, l-tryptophan 6 g/L and KH2PO4 650 ppm. Production under optimized conditions increased 7.23-fold compared to production prior to optimization. The results of this study show that statistical optimization of the medium composition and its interaction effects enabled shortlisting of the significant factors influencing maximum pigment production from Chryseobacterium artocarpi CECT 8497. In addition, this is the first report optimizing the process parameters for flexirubin-type pigment production from Chryseobacterium artocarpi CECT 8497. PMID:25979288
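
    A Box-Behnken design for three factors, as used above, places runs at the midpoints of the factor cube's edges plus centre replicates. A generic generator in coded units (the decoding back to lactose, l-tryptophan and KH2PO4 levels is omitted):

```python
from itertools import combinations
import numpy as np

def box_behnken(k, n_center=3):
    """Box-Behnken design in coded units: +/-1 factorial points on each
    pair of factors with the remaining factors held at 0, plus centre runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k] * n_center
    return np.array(runs)

design = box_behnken(3)
print(design.shape)   # (15, 3): 12 edge midpoints + 3 centre points
```

    Unlike a central composite design, no run sits at a cube corner, so every factor combination stays inside the tested range, which is often why BBD is chosen for fermentation media.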

  6. Post-retrieval extinction as reconsolidation interference: methodological issues or boundary conditions?

    PubMed Central

    Auber, Alessia; Tedesco, Vincenzo; Jones, Carolyn E.; Monfils, Marie-H.; Chiamulera, Christian

    2013-01-01

    Memories that are emotionally arousing generally promote the survival of species; however, the systems that modulate emotional learning can go awry, resulting in pathological conditions such as post-traumatic stress disorder, phobias, and addiction. Understanding the conditions under which emotional memories can be targeted is a major research focus, as the potential to translate these methods into clinical populations carries important implications. It has been demonstrated that both fear and drug-related memories can be destabilised at retrieval and require reconsolidation to be maintained. Memory reconsolidation therefore offers a potential target period during which the aberrant memories underlying psychiatric disorders can be disrupted. Monfils et al. (2009) showed for the first time that safe information provided through an extinction session after retrieval (during the reconsolidation window) may update the original memory trace and prevent the return of fear in rats. In recent years, several authors have tested the effect of post-retrieval extinction on reconsolidation of either fear or drug-related memories in both laboratory animals and humans. In this article we review the literature on post-reactivation extinction, discuss the methodological differences across studies, and review the potential boundary conditions that may explain existing discrepancies and limit the potential application of post-reactivation extinction approaches. PMID:23404065

  7. A methodological approach of estimating resistance to flow under unsteady flow conditions

    NASA Astrophysics Data System (ADS)

    Mrokowska, M. M.; Rowiński, P. M.; Kalinowska, M. B.

    2015-10-01

    This paper presents an evaluation and analysis of resistance parameters in unsteady flow: friction slope, friction velocity and the Manning coefficient. A methodology is proposed to enhance the evaluation of resistance through relations derived from the flow equations. Its main points are (1) to choose a resistance relation with regard to the shape of the channel and (2) the type of wave, (3) to choose an appropriate method for evaluating the slope of the water depth, and (4) to assess the uncertainty of the result. In addition to a critical analysis of existing methods, new approaches are presented: formulae for resistance parameters in a trapezoidal channel, and a translation method in place of Jones' formula for evaluating the gradient of flow depth. Measurements obtained from artificial dam-break flood waves in a small lowland watercourse made it possible to apply the method and to analyse the extent to which resistance parameters vary in unsteady flow. The study demonstrates that friction slope and friction velocity are more sensitive to the use of simplified formulae than the Manning coefficient (n). n is adequate as a flood-routing parameter but may be misleading when the trend of resistance with flow rate is of interest; friction slope or friction velocity then seems a better choice.
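
    These parameters can be computed directly once the water-surface and velocity gradients are measured. The sketch below uses the dynamic-wave friction-slope expression from the 1-D momentum equation and back-calculates the friction velocity and Manning's n; the numerical readings are hypothetical, not data from the study.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def friction_slope(s0, dh_dx, v, dv_dx, dv_dt):
    """Friction slope from the 1-D Saint-Venant momentum balance
    (dynamic-wave form; simpler wave models drop the trailing terms)."""
    return s0 - dh_dx - (v / G) * dv_dx - (1.0 / G) * dv_dt

def friction_velocity(hydraulic_radius, sf):
    """Shear (friction) velocity u* = sqrt(g * R * S_f), SI units."""
    return math.sqrt(G * hydraulic_radius * sf)

def manning_n(v, hydraulic_radius, sf):
    """Manning coefficient back-calculated from mean velocity,
    hydraulic radius and friction slope (SI units)."""
    return hydraulic_radius ** (2.0 / 3.0) * math.sqrt(sf) / v

# Hypothetical flood-wave readings at one cross-section
sf = friction_slope(s0=0.001, dh_dx=-0.0002, v=0.8, dv_dx=1e-4, dv_dt=5e-5)
print(friction_velocity(0.6, sf), manning_n(0.8, 0.6, sf))
```

    The example makes the paper's point tangible: the acceleration terms shift S_f (and hence u*) noticeably, while n, which depends on the square root of S_f, moves much less.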

  8. [Measuring nursing care times--methodologic and documentation problems].

    PubMed

    Bartholomeyczik, S; Hunstein, D

    2001-08-01

    The time needed for nursing care is an important measurement and a basis for financing care. In Germany, the Long Term Care Insurance (LTCI) reimburses nursing care depending on the time family caregivers need to complete selected activities. As a basis for assessment, the LTCI recommends certain time ranges for these activities, which are wholly compensatory. The purpose is to enhance the fairness and comparability of assessments. Using the example of a German research project that investigated the duration of these activities and the reasons for differences, questions are raised about problems of definition and interpretation. Definition problems arise because caring activities, especially in private households, are almost never performed as clearly defined modules; moreover, different activities are often performed simultaneously. The most important question, however, is what exactly time figures can say about the essentials of nursing care. PMID:12385262

  9. Optimization of conditions for anthocyanin and phenolic content extraction from purple sweet potato using response surface methodology.

    PubMed

    Ahmed, Maruf; Akter, Mst Sorifa; Eun, Jong-Bang

    2011-02-01

    Purple sweet potato flour can be used to provide bioactive components, such as phenolic compounds and anthocyanins, that might serve as nutraceutical ingredients for formulated foods. Optimization of the anthocyanin and phenolic contents of purple sweet potato was investigated using response surface methodology. A face-centered cube design was used to investigate the effects of three independent variables: drying temperature (55-65 °C), citric acid concentration (1-3% w/v) and soaking time (1-3 min). The optimal conditions for anthocyanin and phenolic contents were 62.91 °C, 1.38%, 2.53 min and 60.94 °C, 1.04%, 2.24 min, respectively, although the optimum for anthocyanin content was not sharply defined. The experimental value of anthocyanin content was 19.78 mg/100 g and total phenolic content was 61.55 mg/g. These experimental responses were reasonably close to the predicted responses. The results therefore show that the treated flours could be used to enhance the antioxidant activities of functional foods. PMID:20858156
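
    A face-centered central composite design like the one above is easy to generate in coded units; with alpha = 1 the axial (star) points sit on the faces of the factor cube. A generic sketch (the decoding to temperature, acid concentration and soaking time is omitted):

```python
from itertools import product
import numpy as np

def central_composite(k, alpha=1.0, n_center=6):
    """Central composite design in coded units: 2^k factorial corners,
    2k axial (star) points at +/-alpha, and centre runs. alpha = 1
    gives the face-centred variant used in designs like the one above."""
    corners = np.array(list(product([-1.0, 1.0], repeat=k)))
    axial = np.zeros((2 * k, k))
    for i in range(k):
        axial[2 * i, i] = -alpha
        axial[2 * i + 1, i] = alpha
    centre = np.zeros((n_center, k))
    return np.vstack([corners, axial, centre])

design = central_composite(3)   # 8 corners + 6 stars + 6 centres
print(design.shape)             # (20, 3)
```

    Each coded row maps back to real levels by centre + half-range * coded value; the centre replicates supply the pure-error estimate for the lack-of-fit test.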

  10. Optimization of microwave-assisted hot air drying conditions of okra using response surface methodology.

    PubMed

    Kumar, Deepak; Prasad, Suresh; Murthy, Ganti S

    2014-02-01

    Okra (Abelmoschus esculentus) was dried to a moisture level of 0.1 g water/g dry matter using a microwave-assisted hot air dryer. Response surface methodology was used to optimize the drying conditions based on specific energy consumption and the quality of the dried okra. The drying experiments were performed using a central composite rotatable design for three variables: air temperature (40-70 °C), air velocity (1-2 m/s) and microwave power level (0.5-2.5 W/g). The quality of the dried okra was determined in terms of color change, rehydration ratio and hardness of texture. A second-order polynomial model fitted all responses well, with high R(2) values (>0.8) in all cases. The color change of dried okra was higher at high microwave power and air temperatures. Rehydration properties were better for okra samples dried at higher microwave power levels. Specific energy consumption decreased with increasing microwave power owing to the shorter drying time. Drying conditions of 1.51 m/s air velocity, 52.09 °C air temperature and 2.41 W/g microwave power were found to be optimum for product quality and minimum energy consumption in microwave-convective drying of okra. PMID:24493879

  11. Optimization of osmotic dehydration conditions of peach slices in sucrose solution using response surface methodology.

    PubMed

    Yadav, Baljeet Singh; Yadav, Ritika B; Jatain, Monika

    2012-10-01

    Osmotic dehydration (OD) conditions of peach slices were optimized using response surface methodology (RSM) with respect to sucrose concentration (50-70°B), immersion time (2-4 h) and process temperature (35-55 °C) for maximum water loss (WL), minimum solute gain (SG) and maximum rehydration ratio (RR) as response variables. A central composite rotatable design (CCRD) was used as experimental design. The models developed for all responses were significant. All model terms were significant in WL except the quadratic levels of sucrose concentration and temperature whereas in SG, linear terms of time and linear and quadratic terms of temperature were significant. All the terms except linear term of time and interaction term of time and sucrose concentration, were significant in RR. The optimized conditions were sucrose concentration = 69.9°B, time = 3.97 h and temperature = 37.63 °C in order to obtain WL of 28.42 (g/100 g of fresh weight), SG of 8.39 (g/100 g of fresh weight) and RR of 3.38. PMID:24082265
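
    The response variables WL and SG follow from simple mass balances on water and dry solids. A sketch using one common set of definitions (per 100 g of fresh sample; the weights and moisture fractions below are invented for illustration, not the study's data):

```python
def water_loss(m0, x0, mt, xt):
    """Water loss per 100 g of fresh sample: initial water mass minus
    water mass after osmosis, relative to the fresh weight."""
    return 100.0 * (m0 * x0 - mt * xt) / m0

def solute_gain(m0, x0, mt, xt):
    """Solute (dry matter) gain per 100 g of fresh sample."""
    s0, st = 1.0 - x0, 1.0 - xt       # dry-solids fractions
    return 100.0 * (mt * st - m0 * s0) / m0

# Hypothetical weights (g) and moisture fractions before/after osmosis
m0, x0 = 100.0, 0.88      # fresh peach slice
mt, xt = 78.0, 0.72       # after immersion in sucrose syrup
print(water_loss(m0, x0, mt, xt), solute_gain(m0, x0, mt, xt))
```

    Note the built-in consistency check: the mass lost by the sample (m0 - mt) equals water loss minus solute gain, which is a quick way to validate weighed data before fitting the RSM model.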

  12. A combined linear optimisation methodology for water resources allocation in Alfeios River Basin (Greece) under uncertain and vague system conditions

    NASA Astrophysics Data System (ADS)

    Bekri, Eleni; Yannopoulos, Panayotis; Disse, Markus

    2013-04-01

    In the present study, a combined linear programming methodology, based on Li et al. (2010) and Bekri et al. (2012), is employed for optimizing water allocation under uncertain system conditions in the Alfeios River Basin, Greece. The Alfeios River is a water resources system of great natural, ecological, social and economic importance for Western Greece, since it is the longest watercourse with the highest flow rate in the Peloponnisos region. Moreover, in recent decades the river basin has been exposed to a plethora of environmental stresses (e.g. hydrogeological alterations, intensively irrigated agriculture, surface water and groundwater overexploitation, and infrastructure developments), resulting in the degradation of its quantitative and qualitative characteristics. As in most Mediterranean countries, water resources management in the Alfeios River Basin has so far been focused on an essentially supply-driven approach. It is still characterized by a lack of effective operational strategies. Authority responsibility relationships are fragmented, and law enforcement and policy implementation are weak. The present regulated water allocation puzzle entails a mixture of hydropower generation, irrigation, drinking water supply and recreational activities. Under these conditions, water resources management is characterised by high uncertainty and by vague and imprecise data. The considered methodology has been developed to deal with uncertainties expressed as probability distributions or fuzzy boundary intervals derived from associated α-cut levels. In this framework a set of deterministic submodels is studied through linear programming. The ad hoc water resources management and alternative management patterns in an Alfeios subbasin are analyzed and evaluated under various scenarios using the above-mentioned methodology, aiming to promote sustainable and equitable water management. Li, Y.P., Huang, G.H. and Nie, S.L. (2010), Planning water resources
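
    Each deterministic submodel in such a framework is an ordinary linear program. A toy allocation sketch using `scipy.optimize.linprog`, where the users, benefit coefficients and demand bounds are hypothetical, not the Alfeios system:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical water allocation: three users share a limited supply.
# Benefit per unit of water (maximize => minimize the negative).
benefit = np.array([3.0, 2.0, 1.5])    # e.g. hydropower, irrigation, supply
c = -benefit

A_ub = [[1.0, 1.0, 1.0]]               # total allocation <= available supply
b_ub = [100.0]
bounds = [(10.0, 60.0),                # per-user minimum and maximum demand
          (20.0, 70.0),
          (15.0, 40.0)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x)   # optimal allocation: fill the highest-benefit user first
```

    In the combined methodology, a family of such LPs is solved, one per probability scenario and α-cut level, and the interval of optimal allocations characterizes the uncertainty in the plan.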

  13. Optimization extraction conditions for improving phenolic content and antioxidant activity in Berberis asiatica fruits using response surface methodology (RSM).

    PubMed

    Belwal, Tarun; Dhyani, Praveen; Bhatt, Indra D; Rawal, Ranbeer Singh; Pande, Veena

    2016-09-15

    This study was designed, for the first time, to optimize the extraction of phenolic compounds and the antioxidant potential of Berberis asiatica fruits using response surface methodology (RSM). Solvent selection was based on preliminary experiments and a five-factor, three-level Central Composite Design (CCD). Extraction temperature (X1), sample-to-solvent ratio (X3) and solvent concentration (X5) significantly affected the response variables. The quadratic model fitted well for all responses. Under optimal extraction conditions, the dried fruit sample was mixed with 80% methanol at pH 3.0 in a ratio of 1:50 and the mixture was heated at 80 °C for 30 min; the measured parameters were in accordance with the predicted values. High-performance liquid chromatography (HPLC) analysis under the optimized conditions revealed 6 phenolic compounds. The results suggest that optimization of the extraction conditions is critical for accurate quantification of phenolics and antioxidants in Berberis asiatica fruits and may further be utilized in industrial extraction procedures. PMID:27080887

  14. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost converters, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurement of input-output impedance, and global stability. A simulation example is given.

  15. Tracking MOV operability under degraded voltage condition by periodic test measurements

    SciTech Connect

    Hussain, B.; Behera, A.K.; Alsammarae, A.J.

    1996-12-31

    The purpose of this paper is to develop a methodology for evaluating the operability of an Alternating Current (AC) Motor Operated Valve (MOV) under degraded voltage conditions, based on the seating parameters measured during surveillance/testing. This approach will help resolve the Nuclear Regulatory Commission's (NRC's) concern about verifying the AC MOV's design basis capability through periodic testing.

  16. Measuring and Targeting Internal Conditions for School Effectiveness in the Free State of South Africa

    ERIC Educational Resources Information Center

    Kgaile, Abraham; Morrison, Keith

    2006-01-01

    A questionnaire-based methodology for constructing an overall index of school effectiveness is reported, focusing on within-school conditions of schools, which is currently being used in the Free State of South Africa. The article reports the construction and use of a straightforward instrument for measuring the effectiveness of key aspects of the…

  17. Optimization of Electrochemical Treatment Process Conditions for Distillery Effluent Using Response Surface Methodology

    PubMed Central

    Arulmathi, P.; Elangovan, G.; Begum, A. Farjana

    2015-01-01

    The distillery industry is recognized as one of the most polluting industries in India, with a large amount of annual effluent production. In the present study, the optimization of electrochemical treatment process variables is reported for treating the color and COD of distillery spent wash using Ti/Pt as an anode in batch mode. Process variables such as pH, current density, electrolysis time, and electrolyte dose were selected as operating variables, and chemical oxygen demand (COD) and color removal efficiency were considered as response variables for optimization using response surface methodology. Indirect electrochemical-oxidation process variables were optimized using a Box-Behnken response surface design (BBD). The results showed that electrochemical treatment effectively removed the COD (89.5%) and color (95.1%) of the distillery spent wash under the optimum conditions: pH of 4.12, current density of 25.02 mA/cm², electrolysis time of 103.27 min, and electrolyte (NaCl) concentration of 1.67 g/L. PMID:26491716

  18. An Updated Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    SciTech Connect

    Ramuhalli, Pradeep; Hirt, Evelyn H.; Coles, Garill A.; Bonebrake, Christopher A.; Ivans, William J.; Wootan, David W.; Mitchell, Mark R.

    2014-07-18

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs and are based on modularization of advanced reactor concepts. Enhancing the affordability of AdvSMRs will be critical to ensuring wider deployment: AdvSMRs suffer from the loss of economies of scale inherent in small reactors when compared to large (greater than ~600 MWe output) reactors, and the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of the system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors are unable to support these capability requirements, as they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide the required capabilities and meet the goals of controlling O&M costs. The report describes research results on augmenting an initial methodology for enhanced risk monitors that integrates real-time information about equipment condition and probability of failure (POF) into risk monitors. Methods to propagate uncertainty through the enhanced risk monitor are evaluated. Available data to quantify the level of uncertainty and the POF of key components are examined for their relevance, and a status update of this data evaluation is described. Finally, potential targets are described for developing new risk metrics that may be useful for studying trade-offs for economic…
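    The core idea of a condition-informed risk monitor can be illustrated with a toy calculation: a point-in-time system failure probability is computed from minimal cut sets, and each component's probability of failure (POF) is scaled by a condition factor supplied by equipment condition assessment. The cut sets, POF values, and condition factors below are hypothetical; this is a sketch of the concept, not the report's methodology.

    ```python
    # Toy risk monitor: system risk from minimal cut sets, with per-component
    # POF scaled by a condition-assessment factor. All numbers hypothetical.
    from math import prod

    baseline_pof = {"pump_A": 1e-3, "pump_B": 1e-3, "valve_C": 5e-4}
    condition_factor = {"pump_A": 1.0, "pump_B": 4.0, "valve_C": 1.0}  # degradation raises POF

    cut_sets = [{"pump_A", "pump_B"}, {"valve_C"}]  # system fails if any cut set fails

    def system_risk(pof, factor):
        p = {c: min(1.0, pof[c] * factor[c]) for c in pof}
        # Rare-event approximation: sum over cut sets of the product of POFs
        return sum(prod(p[c] for c in cs) for cs in cut_sets)

    nominal = system_risk(baseline_pof, {c: 1.0 for c in baseline_pof})
    current = system_risk(baseline_pof, condition_factor)
    print(nominal, current)  # risk rises as pump_B degrades
    ```
    
    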

  19. Technical Report on Preliminary Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    SciTech Connect

    Ramuhalli, Pradeep; Coles, Garill A.; Coble, Jamie B.; Hirt, Evelyn H.

    2013-09-17

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs and are based on modularization of advanced reactor concepts. AdvSMRs may provide a longer-term alternative to traditional light-water reactors (LWRs) and SMRs based on integral pressurized water reactor concepts currently being considered. Enhancing the affordability of AdvSMRs will be critical to ensuring wider deployment. AdvSMRs suffer from the loss of economies of scale inherent in small reactors when compared to large (greater than ~600 MWe output) reactors. Some of this loss can be recovered through reduced capital costs due to smaller size, fewer components, modular fabrication processes, and the opportunity for modular construction. However, the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of the system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors are unable to support these capability requirements, as they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments that are a step towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide the required capabilities and meet the goals of controlling O&M costs. The report describes research results from an initial methodology for enhanced risk monitors that integrates real-time information about equipment condition and probability of failure (POF) into risk monitors.

  20. Methodology of Blade Unsteady Pressure Measurement in the NASA Transonic Flutter Cascade

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; McFarland, E. R.; Capece, V. R.; Jett, T. A.; Senyitko, R. G.

    2002-01-01

    This report describes the methodology adopted to measure unsteady pressures on blade surfaces in the NASA Transonic Flutter Cascade under conditions of simulated blade flutter. Previous work in this cascade reported that the oscillating cascade produced waves which, for some interblade phase angles, reflected off the wind tunnel walls back into the cascade, interfered with the cascade unsteady aerodynamics, and contaminated the acquired data. To alleviate the data contamination caused by back-wall interference, a method of influence coefficients was selected for future unsteady work in this cascade. In this approach, only one blade in the cascade is oscillated at a time. The majority of the report is concerned with the experimental technique used and the experimental data generated in the facility. The report presents a list of all test conditions for the small amplitude of blade oscillations and shows examples of some of the results achieved. The report does not discuss data analysis procedures such as ensemble averaging, frequency analysis, and unsteady blade loading diagrams reconstructed using the influence coefficient method. Finally, the report presents the lessons learned from this phase of the experimental effort and suggests improvements and directions for the tests to be carried out at large oscillation amplitudes.
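    The influence-coefficient idea can be sketched numerically: with only blade n oscillating at a time, the measured complex pressure influence coefficients c_n can be superposed, each phase-shifted by n times the interblade phase angle, to reconstruct the response of a uniformly oscillating cascade. The coefficients below are hypothetical placeholders, not data from the facility.

    ```python
    # Sketch of the influence-coefficient superposition: the cascade response
    # at interblade phase angle sigma is the phase-weighted sum of single-blade
    # influence coefficients. Coefficient values are hypothetical.
    import cmath

    c = {-1: 0.10 + 0.05j, 0: 1.00 - 0.20j, 1: 0.08 - 0.03j}  # measured per-blade coefficients

    def cascade_response(sigma):
        """Complex unsteady pressure coefficient at interblade phase angle sigma (rad)."""
        return sum(cn * cmath.exp(1j * n * sigma) for n, cn in c.items())

    p0 = cascade_response(0.0)  # all blades oscillating in phase
    print(abs(p0))
    ```
    
    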

  1. Measuring Soil Nitrogen Mineralization under Field Conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Land application of animal manure is known to alter rates of nitrogen (N) mineralization in soils, but quantitative information concerning intensity and duration of these effects has been difficult to obtain under field conditions. We estimated net effects of manure on N mineralization in soils unde...

  2. The Methodology of Doppler-Derived Central Blood Flow Measurements in Newborn Infants

    PubMed Central

    de Waal, Koert A.

    2012-01-01

    Central blood flow (CBF) measurements are flow measurements in and around the heart. They incorporate cardiac output, but also measurements of cardiac input and assessment of intra- and extracardiac shunts. CBF can be measured in the central circulation as right or left ventricular output (RVO or LVO) and/or as cardiac input measured at the superior vena cava (SVC flow). Assessment of shunts incorporates evaluation of the ductus arteriosus and the foramen ovale. This paper describes the methodology of CBF measurements in newborn infants. It provides a brief overview of the evolution of Doppler ultrasound blood flow measurements, the basic principles of Doppler ultrasound, and an overview of the methodologies used in the literature. A general guide for interpretation and normal values with suggested cutoffs for CBF are provided for clinical use. PMID:22291718
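    The standard Doppler output calculation underlying such measurements is output = velocity-time integral (VTI) × outflow-tract cross-sectional area × heart rate, usually indexed to body weight. A minimal sketch follows; all numbers are illustrative placeholders, not normative neonatal values.

    ```python
    # Doppler ventricular output sketch: VTI x area x heart rate, per kg.
    # All input values are hypothetical, for illustration only.
    import math

    vti_cm = 10.0        # velocity-time integral of the Doppler envelope (cm)
    diameter_cm = 0.8    # outflow-tract diameter from 2D imaging (cm)
    heart_rate = 150     # beats per minute
    weight_kg = 3.0      # body weight

    area_cm2 = math.pi * (diameter_cm / 2) ** 2          # cross-sectional area (cm^2)
    output_ml_min = vti_cm * area_cm2 * heart_rate       # 1 cm^3 = 1 mL
    output_ml_kg_min = output_ml_min / weight_kg
    print(round(output_ml_kg_min))  # mL/kg/min
    ```
    
    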

  3. Methodological Conditions of Congruent Factors: A Comparison of EEG Frequency Structure Between Hemispheres.

    PubMed

    Andresen, B; Stemmler, G; Thom, E; Irrgang, E

    1984-01-01

    A study was conducted to examine congruence relations of EEG frequency factors between hemispheres. The central questions were aimed at procedural details of the factoring process which have the greatest "destructive" influence on congruence of factor patterns. On the basis of serially-rotated oblique solutions and evaluative simple structure criteria, two methodological results are reported: (a) the original principal components of two lead placements show a marked departure from an ideal congruence pattern, but can be forced into high congruence beyond 0.90 for the first twelve components by orthogonal procrustes rotation; (b) after this matching of principal components the expected optimum of congruence is only achieved in rotated solutions showing an optimal simple structure in terms of the Index of Factorial Simplicity (Kaiser, 1974). Additionally, the study leads to two substantive electroencephalographic results: (c) the factorial frequency structure of spontaneous EEG from two homologous positions (C3, C4) of the hemispheres is highly congruent. Thus, an identical measurement rationale for EEG frequency bands can be justified in asymmetry research; (d) the spectral dimensions of the alpha region show good correspondence to a recently published synoptic model of factor analytically defined EEG bands (Andresen, Thom, Irrgang, & Stemmler, 1982). The weight system of the three alpha components in this model can be recommended for psychophysiologically oriented EEG research. PMID:26776065

  4. A measurement methodology for dynamic angle of sight errors in hardware-in-the-loop simulation

    NASA Astrophysics Data System (ADS)

    Zhang, Wen-pan; Wu, Jun-hui; Gan, Lin; Zhao, Hong-peng; Liang, Wei-wei

    2015-10-01

    In order to precisely measure the dynamic angle of sight for hardware-in-the-loop simulation, a dynamic measurement methodology was established and a measurement system was built. The errors and drifts, such as synchronization delay, CCD measurement error and drift, laser spot error on the diffuse reflection plane, and optical-axis drift of the laser, were measured and analyzed. First, by analyzing and measuring the synchronization time between the laser and the control data, an error control method was devised that lowered the synchronization delay to 21 μs. Then, the relationship between the CCD device and the laser spot position was calibrated precisely and fitted by two-dimensional surface fitting; CCD measurement error and drift were controlled below 0.26 mrad. Next, the angular resolution was calculated, and the laser spot error on the diffuse reflection plane was estimated to be 0.065 mrad. Finally, the optical-axis drift of the laser was analyzed and measured, and did not exceed 0.06 mrad. The measurement results indicate that the combined errors and drifts of the measurement methodology total less than 0.275 mrad. The methodology can support measurement of the dynamic angle of sight at higher precision and larger scale.
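    The quoted total of 0.275 mrad is consistent with combining the three angular error sources in quadrature (root-sum-square), a common way to build an error budget for independent sources; this is an inference from the numbers, not a statement of the authors' method. (The 21 μs synchronization delay is a timing term and is not combined here.)

    ```python
    # Root-sum-square error budget over the independent angular error sources
    # listed in the abstract.
    import math

    errors_mrad = {
        "CCD measurement error and drift": 0.260,
        "laser spot error on diffuse reflection plane": 0.065,
        "laser optical-axis drift": 0.060,
    }
    total = math.sqrt(sum(e * e for e in errors_mrad.values()))
    print(round(total, 3))  # → 0.275
    ```
    
    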

  5. Validation of a CFD Methodology for Variable Speed Power Turbine Relevant Conditions

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Giel, Paul W.; McVetta, Ashlie B.

    2013-01-01

    Analysis tools are needed to investigate aerodynamic performance of Variable-Speed Power Turbines (VSPT) for rotorcraft applications. The VSPT operates at low Reynolds numbers (transitional flow) and over a wide range of incidence. Previously, the capability of a published three-equation turbulence model to predict accurately the transition location for three-dimensional heat transfer problems was assessed. In this paper, the results of a post-diction exercise using a three-dimensional flow in a transonic linear cascade comprising VSPT blading are presented. The measured blade pressure distributions and exit total pressure and flow angles for two incidence angles corresponding to cruise (i = 5.8 deg) and takeoff (i = -36.7 deg) were used for this study. For the higher loading condition of cruise and the negative incidence condition of takeoff, overall agreement with data may be considered satisfactory, but areas of needed improvement are also indicated.

  6. A coupled Bayesian and fault tree methodology to assess future groundwater conditions in light of climate change

    NASA Astrophysics Data System (ADS)

    Huang, J. J.; Du, M.; McBean, E. A.; Wang, H.; Wang, J.

    2014-08-01

    Maintaining acceptable groundwater levels while protecting ecosystems, particularly in arid areas, is a key measure against desertification. Due to complicated hydrological processes and their inherent uncertainties, investigations of groundwater recharge conditions are challenging, particularly in arid areas under changing climate conditions. To assist planning to protect against desertification, a fault tree methodology, in conjunction with fuzzy logic and Bayesian data mining, is applied to Minqin Oasis, a highly vulnerable regime in northern China. A set of risk factors is employed within the fault tree framework, with fuzzy logic translating qualitative risk data into probabilities. Bayesian data mining is used to quantify the contribution of each risk factor to the final aggregated risk. The implications of both historical and future climate trends are examined for temperature, precipitation and potential evapotranspiration (PET) to assess water table changes under various future scenarios. The findings indicate that water table levels will continue to drop at a rate of 0.6 m yr⁻¹ in the future when climatic effects alone are considered, if agricultural and industrial production capacity remain at 2004 levels.
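    The aggregation step can be sketched as follows: qualitative ratings are mapped to probabilities (a crude stand-in for the fuzzy membership functions), and each factor's data-derived weight scales its contribution to the top fault-tree event through an OR gate. All ratings, mappings, and weights below are hypothetical; this illustrates the structure of the approach, not the paper's actual model.

    ```python
    # Toy weighted fault-tree aggregation: qualitative ratings -> probabilities,
    # weighted per-factor contributions combined through an OR gate.
    # All numbers are hypothetical.

    rating_to_prob = {"low": 0.1, "medium": 0.4, "high": 0.7}

    risk_factors = {
        "declining precipitation": ("high", 0.35),    # (rating, weight)
        "rising temperature/PET": ("medium", 0.25),
        "agricultural abstraction": ("high", 0.40),
    }

    def aggregated_risk(factors):
        """OR-gate aggregation of weighted, independent factor probabilities."""
        survive = 1.0
        for rating, weight in factors.values():
            survive *= 1.0 - weight * rating_to_prob[rating]
        return 1.0 - survive

    print(round(aggregated_risk(risk_factors), 3))
    ```
    
    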

  7. Operant place conditioning measures examined using two nondrug reinforcers.

    PubMed

    Crowder, W F; Hutto, C W

    1992-04-01

    Detection of water and social play reinforcers by a place conditioning method based on instrumental conditioning was investigated in rats and compared to detection by conditioned place preference, a method currently used primarily to measure drug reinforcement. Operant place conditioning measures of reinforcement were choices between the reward and nonreward chambers during an apparatus exploration test and during a discrete-trials choice test and also, in some experiments, choices and latencies of chamber entry during training. Three of these four measures showed larger reinforcement effects than did the conditioned place preference measure of relative time spent in the reward chamber. By all reinforcement measures, conditioned place preference training was effective with water reinforcement but was ineffective with social reinforcement. Operant place conditioning was effective with both reinforcers by all measures. PMID:1594650

  8. Analytical Methodology Used To Assess/Refine Observatory Thermal Vacuum Test Conditions For the Landsat 8 Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Fantano, Louis

    2015-01-01

    Presented at the Thermal and Fluids Analysis Workshop, Silver Spring, MD (NCTS 21070-15). The Landsat 8 Data Continuity Mission, which is part of the United States Geological Survey (USGS), launched on February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine proposed thermal vacuum test conditions and the issues encountered in attempting to satisfy this requirement.

  9. Measuring College Students' Alcohol Consumption in Natural Drinking Environments: Field Methodologies for Bars and Parties

    ERIC Educational Resources Information Center

    Clapp, John D.; Holmes, Megan R.; Reed, Mark B.; Shillington, Audrey M.; Freisthler, Bridget; Lange, James E.

    2007-01-01

    In recent years researchers have paid substantial attention to the issue of college students' alcohol use. One limitation of the current literature is an overreliance on retrospective, self-report survey data. This article presents field methodologies for measuring college students' alcohol consumption in natural drinking environments.…

  10. Extended Axiomatic Conjoint Measurement: A Solution to a Methodological Problem in Studying Fertility-Related Behaviors.

    ERIC Educational Resources Information Center

    Nickerson, Carol A.; McClelland, Gary H.

    1988-01-01

    A methodology is developed based on axiomatic conjoint measurement to accompany a fertility decision-making model. The usefulness of the model is then demonstrated via an application to a study of contraceptive choice (N=100 male and female family-planning clinic clients). Finally, the validity of the model is evaluated. (TJH)

  11. Computer Science and Technology: Measurement of Interactive Computing: Methodology and Application.

    ERIC Educational Resources Information Center

    Cotton, Ira W.

    This dissertation reports the development and application of a new methodology for the measurement and evaluation of interactive computing, applied to either the users of an interactive computing system or to the system itself, including the service computer and any communications network through which the service is delivered. The focus is on the…

  12. A BOUNDARY LAYER SAMPLING METHODOLOGY FOR MEASURING GASEOUS EMISSIONS FROM CAFO'S

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Various methodologies have been employed to measure CAFO emissions (e.g. flux chambers, lasers, and stationary towers), but these methods are usually limited in their ability to fully characterize the emission plume from a heterogeneous farm, and thus are limited in their ability to quantify total e...

  13. 39 CFR 3055.5 - Changes to measurement systems, service standards, service goals, or reporting methodologies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 39 Postal Service: Changes to measurement systems, service standards, service goals, or reporting methodologies. 3055.5 Section 3055.5 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL SERVICE PERFORMANCE AND CUSTOMER SATISFACTION REPORTING Annual Reporting of Service Performance Achievements § 3055.5 Changes...

  14. 39 CFR 3055.5 - Changes to measurement systems, service standards, service goals, or reporting methodologies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 39 Postal Service: Changes to measurement systems, service standards, service goals, or reporting methodologies. 3055.5 Section 3055.5 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL SERVICE PERFORMANCE AND CUSTOMER SATISFACTION REPORTING Annual Reporting of...

  15. 39 CFR 3055.5 - Changes to measurement systems, service standards, service goals, or reporting methodologies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 39 Postal Service: Changes to measurement systems, service standards, service goals, or reporting methodologies. 3055.5 Section 3055.5 Postal Service POSTAL REGULATORY COMMISSION PERSONNEL SERVICE PERFORMANCE AND CUSTOMER SATISFACTION REPORTING Annual Reporting of...

  16. Translating Oral Health-Related Quality of Life Measures: Are There Alternative Methodologies?

    ERIC Educational Resources Information Center

    Brondani, Mario; He, Sarah

    2013-01-01

    Translating existing sociodental indicators to another language involves a rigorous methodology, which can be costly. Free-of-charge online translator tools are available, but have not been evaluated in the context of research involving quality of life measures. To explore the value of using online translator tools to develop oral health-related…

  17. Integrating Multi-Tiered Measurement Outcomes for Special Education Eligibility with Sequential Decision-Making Methodology

    ERIC Educational Resources Information Center

    Decker, Scott L.; Englund, Julia; Albritton, Kizzy

    2012-01-01

    Changes to federal guidelines for the identification of children with disabilities have supported the use of multi-tiered models of service delivery. This study investigated the impact of measurement methodology as used across numerous tiers in determining special education eligibility. Four studies were completed using a sample of inner-city…

  18. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SUPPLIERS Requirements for Certification and Designation and Conditions for Coverage: Organ Procurement Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a... of organs transplanted per standard criteria donor, including pancreata used for islet...

  19. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SUPPLIERS Requirements for Certification and Designation and Conditions for Coverage: Organ Procurement Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a... of organs transplanted per standard criteria donor, including pancreata used for islet...

  20. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SUPPLIERS Requirements for Certification and Designation and Conditions for Coverage: Organ Procurement Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a... transplantation; (ii) The number of organs transplanted per expanded criteria donor, including pancreata used...

  1. 42 CFR 486.318 - Condition: Outcome measures.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SUPPLIERS Requirements for Certification and Designation and Conditions for Coverage: Organ Procurement Organizations Organ Procurement Organization Outcome Requirements § 486.318 Condition: Outcome measures. (a... transplantation; (ii) The number of organs transplanted per expanded criteria donor, including pancreata used...

  2. The development and application of an automatic boundary segmentation methodology to evaluate the vaporizing characteristics of diesel spray under engine-like conditions

    NASA Astrophysics Data System (ADS)

    Ma, Y. J.; Huang, R. H.; Deng, P.; Huang, S.

    2015-04-01

    Studying the vaporizing characteristics of diesel spray could greatly help to reduce engine emissions and improve performance. The high-speed schlieren imaging method is an important optical technique for investigating the macroscopic vaporizing morphological evolution of liquid fuel, and pre-combustion constant volume combustion bombs are often used to simulate the high-pressure and high-temperature conditions occurring in diesel engines. Complicated background schlieren noise makes it difficult to segment the spray region in schlieren spray images. To tackle this problem, this paper develops a vaporizing spray boundary segmentation methodology based on an automatic threshold determination algorithm. The methodology was also used to quantify the macroscopic characteristics of vaporizing sprays, including tip penetration, near-field and far-field angles, and projected spray area and spray volume. The spray boundary segmentation methodology was realized in a MATLAB-based program. Comparisons were made between the spray characteristics obtained using the program and those acquired using a manual method and the Hiroyasu prediction model. It is demonstrated that the methodology can segment and measure vaporizing sprays precisely and efficiently. Furthermore, the experimental results show that the spray angles were only slightly affected by the injection pressure at high temperature and high pressure under inert conditions. A higher injection pressure leads to longer spray tip penetration and a larger projected area and volume, while elevating the ambient temperature can significantly promote the evaporation of cold fuel.
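    An automatic threshold determination step of the kind this abstract describes can be sketched with Otsu's between-class variance criterion (a common choice; the paper's MATLAB algorithm is not reproduced here and may differ). The "image" below is a tiny synthetic grayscale array with a dark spray region on a brighter schlieren background.

    ```python
    # Automatic threshold selection (Otsu's method) followed by a simple
    # spray-region mask. The image data are synthetic.

    def otsu_threshold(pixels, levels=256):
        """Return the gray level that maximizes between-class variance."""
        hist = [0] * levels
        for p in pixels:
            hist[p] += 1
        total = len(pixels)
        total_sum = sum(i * h for i, h in enumerate(hist))
        best_t, best_var = 0, -1.0
        w0, sum0 = 0, 0.0
        for t in range(levels):
            w0 += hist[t]                 # pixels at or below t
            if w0 == 0:
                continue
            w1 = total - w0               # pixels above t
            if w1 == 0:
                break
            sum0 += t * hist[t]
            m0 = sum0 / w0                # mean of the dark class
            m1 = (total_sum - sum0) / w1  # mean of the bright class
            var_between = w0 * w1 * (m0 - m1) ** 2
            if var_between > best_var:
                best_var, best_t = var_between, t
        return best_t

    # Synthetic schlieren frame: dark spray pixels (~30) on a bright background (~200)
    image = [30, 32, 35, 31, 200, 210, 205, 198, 33, 29, 207, 202]
    t = otsu_threshold(image)
    spray_mask = [p <= t for p in image]  # dark pixels taken as the spray region
    print(t, sum(spray_mask))
    ```

    From the binary mask, quantities like projected spray area follow directly by counting pixels and scaling by the camera calibration.
    
    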

  3. ASSESSING THE CONDITION OF SOUTH CAROLINA'S ESTUARIES: A NEW APPROACH INVOLVING INTEGRATED MEASURES OF CONDITION

    EPA Science Inventory

    The South Carolina Estuarine and Coastal Assessment Program (SCECAP) was initiated in 1999 to assess the condition of the state's coastal habitats using multiple measures of water quality, sediment quality, and biological condition. Sampling has subsequently been expanded to incl...

  4. A system and methodology for measuring volatile organic compounds produced by hydroponic lettuce in a controlled environment

    NASA Technical Reports Server (NTRS)

    Charron, C. S.; Cantliffe, D. J.; Wheeler, R. M.; Manukian, A.; Heath, R. R.

    1996-01-01

    A system and methodology were developed for the nondestructive qualitative and quantitative analysis of volatile emissions from hydroponically grown 'Waldmann's Green' leaf lettuce (Lactuca sativa L.). Photosynthetic photon flux (PPF), photoperiod, and temperature were automatically controlled and monitored in a growth chamber modified for the collection of plant volatiles. The lipoxygenase pathway products (Z)-3-hexenal, (Z)-3-hexenol, and (Z)-3-hexenyl acetate were emitted by lettuce plants after the transition from the light period to the dark period. The volatile collection system developed in this study enabled measurements of volatiles emitted by intact plants, from planting to harvest, under controlled environmental conditions.

  5. An inverse problem for a class of conditional probability measure-dependent evolution equations

    NASA Astrophysics Data System (ADS)

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-09-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by partial differential equation models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach.

  6. Inferential Conditions in the Statistical Detection of Measurement Bias.

    ERIC Educational Resources Information Center

    Millsap, Roger E.; Meredith, William

    1992-01-01

    Inferential conditions in the statistical detection of measurement bias are discussed in the contexts of differential item functioning and predictive bias in educational and employment settings. It is concluded that bias measures that rely strictly on observed measures are not generally diagnostic of measurement bias or lack of bias. (SLD)

  7. Major methodological constraints to the assessment of environmental status based on the condition of benthic communities

    NASA Astrophysics Data System (ADS)

    Medeiros, João Paulo; Pinto, Vanessa; Sá, Erica; Silva, Gilda; Azeda, Carla; Pereira, Tadeu; Quintella, Bernardo; Raposo de Almeida, Pedro; Lino Costa, José; José Costa, Maria; Chainho, Paula

    2014-05-01

    The Marine Strategy Framework Directive (MSFD) was published in 2008 and requires Member States to take the necessary measures to achieve or maintain good environmental status in aquatic ecosystems by 2020. The MSFD indicates 11 qualitative descriptors for environmental status assessment, including seafloor integrity, using the condition of the benthic community as an assessment indicator. Member States will have to define monitoring programs for each of the MSFD descriptors based on those indicators in order to understand which areas are in good environmental status and what measures need to be implemented to improve the status of areas that fail to achieve that major objective. Coastal and offshore marine waters are not frequently monitored in Portugal, and assessment tools have only been developed very recently with the implementation of the Water Framework Directive (WFD). The lack of historical data and knowledge on the constraints of benthic indicators in coastal areas requires the development of specific studies addressing this issue. The major objective of the current study was to develop and test an experimental design to assess the impacts of offshore projects. The experimental design consisted of the seasonal and interannual assessment of benthic invertebrate communities in the area of future implementation of the structures (impact) and two potential control areas 2 km from the impact area. Seasonal benthic samples were collected at nine random locations within the impact and control areas in two consecutive years. Metrics included in the Portuguese benthic assessment tool (P-BAT) were calculated, since this multimetric tool was proposed for the assessment of ecological status in Portuguese coastal areas under the WFD. Results indicated a high taxonomic richness in this coastal area, and no significant differences were found between impact and control areas, indicating the feasibility of establishing adequate control areas in marine…

  8. Methodology for prediction of ecological impacts under real conditions in Mexico

    NASA Astrophysics Data System (ADS)

    Bojórquez-Tapia, Luis Antonio

    1989-09-01

    Most environmental impact assessments in Mexico are short-duration studies that tend to be descriptive rather than analytical and predictive. Therefore, a general methodology has been developed for the prediction and communication of ecological impacts in situations where time, funds, and information are major limitations. The approach involves two stages: (1) environmental characterization and (2) prediction and analysis of impacts. In stage 1, the assessment team is divided into autonomous groups; the groups work on the same land systems, which are characterized based on literature, aerial photography, and short field observations. In stage 2, the groups meet to evaluate impacts by means of interaction matrices and nonnumerical simulation models. This methodology has proved valuable in performing environmental impact statements for large engineering projects in Mexico.

  9. A Practical Methodology to Measure Unbiased Gas Chromatographic Retention Factor vs. Temperature Relationships

    PubMed Central

    Peng, Baijie; Kuo, Mei-Yi; Yang, Panhia; Hewitt, Joshua T.; Boswell, Paul G.

    2014-01-01

    Compound identification continues to be a major challenge. Gas chromatography-mass spectrometry (GC-MS) is a primary tool used for this purpose, but the GC retention information it provides is underutilized because existing retention databases are experimentally restrictive and unreliable. A methodology called “retention projection” has the potential to overcome these limitations, but it requires the retention factor (k) vs. T relationship of a compound to calculate its retention time. Direct methods of measuring k vs. T relationships from a series of isothermal runs are tedious and time-consuming. Instead, a series of temperature programs can be used to quickly measure the k vs. T relationships, but they are generally not as accurate when measured this way because they are strongly biased by non-ideal behavior of the GC system in each of the runs. In this work, we overcome that problem by using the retention times of 25 n-alkanes to back-calculate the effective temperature profile and hold-up time vs. T profiles produced in each of six temperature programs. When the profiles were measured this way and taken into account, the k vs. T relationships measured from each of two different GC-MS instruments were nearly as accurate as the ones measured isothermally, showing less than 2-fold more error. Furthermore, temperature-programmed retention times calculated in five other labs from the new k vs. T relationships had the same distribution of error as when they were calculated from k vs. T relationships measured isothermally. Free software was developed to make the methodology easy to use. The new methodology potentially provides a relatively fast and easy way to measure unbiased k vs. T relationships. PMID:25496658
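
    The isothermal baseline the authors compare against can be sketched in a few lines: compute the retention factor k from retention and hold-up times, then fit the (approximately linear) ln k vs. 1/T relationship. All numbers below are hypothetical, and the simple two-parameter fit merely stands in for whatever functional form the retention-projection software actually uses.

```python
import numpy as np

def retention_factor(t_r, t_0):
    """Isothermal retention factor k = (tR - t0) / t0."""
    return (t_r - t_0) / t_0

# Hypothetical isothermal runs: column temperature (K) and retention times (min)
T = np.array([313.15, 333.15, 353.15, 373.15])
t0 = 1.0                          # hold-up time, assumed constant here
tR = np.array([9.0, 5.0, 3.0, 2.0])

k = retention_factor(tR, t0)
# ln k is roughly linear in 1/T; fit ln k = a + b / T
b, a = np.polyfit(1.0 / T, np.log(k), 1)

# Interpolate k at an intermediate temperature
k_343 = np.exp(a + b / 343.15)
```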

  10. Validation of a Monte Carlo Based Depletion Methodology Using HFIR Post-Irradiation Measurements

    SciTech Connect

    Chandler, David; Maldonado, G Ivan; Primm, Trent

    2009-11-01

    Post-irradiation uranium isotopic atomic densities within the core of the High Flux Isotope Reactor (HFIR) were calculated and compared to uranium mass spectrographic data measured in the late 1960s and early 70s [1]. This study was performed in order to validate a Monte Carlo based depletion methodology for calculating the burn-up dependent nuclide inventory, specifically the post-irradiation uranium

  11. Methodologies for measuring travelers' risk perception of infectious diseases: A systematic review.

    PubMed

    Sridhar, Shruti; Régner, Isabelle; Brouqui, Philippe; Gautret, Philippe

    2016-01-01

Numerous studies in the past have stressed the importance of travelers' psychology and perception in the implementation of preventive measures. The aim of this systematic review was to identify the methodologies used in studies reporting on travelers' risk perception of infectious diseases. A systematic search for relevant literature was conducted according to Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines. There were 39 studies identified. In 35 of 39 studies, the methodology used was that of a knowledge, attitude and practice (KAP) survey based on questionnaires. One study used a combination of questionnaires and a visual psychometric measuring instrument called the 'pictorial representation of illness and self-measurement' (PRISM). One study used a self-representation model (SRM) method. Two studies measured psychosocial factors. Valuable information was obtained from KAP surveys showing an overall lack of knowledge among travelers about the most frequent travel-associated infections and associated preventive measures. This methodological approach, however, is mainly descriptive, addressing knowledge, attitudes, and practices separately and lacking an examination of the interrelationships between these three components. Another limitation of the KAP method is underestimating psychosocial variables that have proved influential in health-related behaviors, including perceived benefits and costs of preventive measures, perceived social pressure, perceived personal control, unrealistic optimism and risk propensity. Future risk perception studies in travel medicine should consider psychosocial variables with inferential and multivariate statistical analyses. The use of implicit measurements of attitudes could also provide new insights in the field of travelers' risk perception of travel-associated infectious diseases. PMID:27238906

  12. A methodology to determine boundary conditions from forced convection experiments using liquid crystal thermography

    NASA Astrophysics Data System (ADS)

    Jakkareddy, Pradeep S.; Balaji, C.

    2016-05-01

This paper reports the results of an experimental study to estimate the heat flux and convective heat transfer coefficient using liquid crystal thermography and Bayesian inference in a heat generating sphere, enclosed in a cubical Teflon block. The geometry considered for the experiments comprises a heater inserted in a hollow hemispherical aluminium ball, resulting in a volumetric heat generation source that is placed at the center of the Teflon block. Calibrated thermochromic liquid crystal sheets are used to capture the temperature distribution at the front face of the Teflon block. The forward model is the three-dimensional conduction equation, which is solved within the Teflon block using COMSOL to obtain steady-state temperatures. Match-up experiments are carried out for various velocities by minimizing the residual between TLC and simulated temperatures for every assumed loss coefficient, to obtain a correlation of average Nusselt number against Reynolds number. This is used for prescribing the boundary condition for the solution of the forward model. A surrogate model, obtained by an artificial neural network built upon the data from COMSOL simulations, is used to drive a Markov chain Monte Carlo based Metropolis-Hastings algorithm to generate the samples. Bayesian inference is adopted to solve the inverse problem of determining the heat flux and heat transfer coefficient from the measured temperature field. Point estimates of the posterior, such as the mean, maximum a posteriori and standard deviation of the retrieved heat flux and convective heat transfer coefficient, are reported. Additionally, the effect of the number of samples on the performance of the estimation process has been investigated.
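
    As an illustration of the inference step only (not the authors' COMSOL/neural-network pipeline), here is a minimal random-walk Metropolis-Hastings sampler for a heat flux, using a deliberately simple linear surrogate forward model and synthetic data; the model form, parameter values, and noise level are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate forward model: steady surface temperature for a
# given heat flux q (W/m^2), with an assumed heat transfer coefficient h
h_true, q_true, T_inf = 25.0, 500.0, 300.0
def forward(q, h=h_true):
    return T_inf + q / h

# Synthetic "measured" temperatures with sensor noise
sigma = 0.5
data = forward(q_true) + rng.normal(0.0, sigma, size=20)

def log_post(q):
    # Flat prior on q > 0, Gaussian likelihood
    if q <= 0:
        return -np.inf
    return -0.5 * np.sum((data - forward(q)) ** 2) / sigma**2

# Random-walk Metropolis-Hastings
q, samples = 400.0, []
for _ in range(5000):
    prop = q + rng.normal(0.0, 5.0)
    if np.log(rng.uniform()) < log_post(prop) - log_post(q):
        q = prop
    samples.append(q)

q_mean = np.mean(samples[1000:])  # posterior mean after burn-in
```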

  13. Testing VOC emission measurement techniques in wood-coating industrial processes and developing a cost-effective measurement methodology.

    PubMed

    Ojala, S; Lassi, U; Keiski, R L

    2006-01-01

The availability of reliable emission measurements of concentrated volatile organic compounds (VOCs) bears great significance in facilitating the selection of a feasible emission abatement technique. There are numerous methods that can be used to measure VOC emissions; however, there is no single method that would allow sampling of the whole range of volatile organics. In addition, research efforts are usually directed to the development of methods for measuring VOCs in diluted concentrations. Therefore, there is a need for a novel measurement method that can give reliable results while entailing simple operations and low costs. This paper presents a development effort aimed at finding a reliable measurement procedure. A methodology is proposed and used to measure solvent emissions from coating processes. PMID:15893795

  14. A methodology to condition distorted acoustic emission signals to identify fracture timing from human cadaver spine impact tests.

    PubMed

    Arun, Mike W J; Yoganandan, Narayan; Stemper, Brian D; Pintar, Frank A

    2014-12-01

While studies have used acoustic sensors to determine fracture initiation time in biomechanical studies, a systematic procedure has not been established to process acoustic signals. The objective of the study was to develop a methodology to condition distorted acoustic emission data using signal processing techniques to identify fracture initiation time. The methodology was developed from testing a human cadaver lumbar spine column. Acoustic sensors were glued to all vertebrae, high-rate impact loading was applied, load-time histories were recorded (load cell), and fracture was documented using CT. Compression fracture occurred at L1 while the other vertebrae were intact. FFTs of the raw voltage-time traces were used to determine an optimum frequency range associated with high decibel levels. Signals were bandpass filtered in this range. A bursting pattern was found in the fractured vertebra, while signals from the other vertebrae were silent. The bursting time was associated with the time of fracture initiation. The force at fracture was determined using this time and the force-time data. The methodology is independent of selecting parameters a priori, such as fixing voltage level(s) or bandpass frequency and/or using the force-time signal, and allows determination of force based on the time identified during signal processing. The methodology can be used for different body regions in cadaver experiments. PMID:25241279
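
    The three processing steps described (FFT to locate the high-energy band, bandpass filtering, burst-onset detection) can be sketched on a synthetic trace; the sampling rate, burst frequency, and threshold below are illustrative choices, not values from the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 1_000_000  # Hz, hypothetical acquisition rate
t = np.arange(0, 0.1, 1 / fs)
rng = np.random.default_rng(1)

# Synthetic sensor trace: low-level noise plus a 30 kHz burst starting at
# 50 ms, standing in for the fractured vertebra's emission
sig = 0.01 * rng.normal(size=t.size)
mask = t >= 0.05
sig[mask] += np.sin(2 * np.pi * 30_000 * t[mask])

# Step 1: FFT of the raw trace to locate the high-energy band
spec = np.abs(np.fft.rfft(sig))
freqs = np.fft.rfftfreq(sig.size, 1 / fs)
f_peak = freqs[np.argmax(spec[1:]) + 1]   # skip the DC bin

# Step 2: zero-phase bandpass around the identified band
sos = butter(4, [f_peak - 5e3, f_peak + 5e3], btype="bandpass",
             fs=fs, output="sos")
filt = sosfiltfilt(sos, sig)

# Step 3: first sample whose amplitude exceeds a threshold marks the
# burst onset, i.e. the fracture initiation time
idx = np.argmax(np.abs(filt) > 0.5)
t_fracture = t[idx]
```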

  15. A novel methodology for interpreting air quality measurements from urban streets using CFD modelling

    NASA Astrophysics Data System (ADS)

    Solazzo, Efisio; Vardoulakis, Sotiris; Cai, Xiaoming

    2011-09-01

In this study, a novel computational fluid dynamics (CFD) based methodology has been developed to interpret long-term averaged measurements of pollutant concentrations collected at roadside locations. The methodology is applied to the analysis of pollutant dispersion in Stratford Road (SR), a busy street canyon in Birmingham (UK), where a one-year sampling campaign was carried out between August 2005 and July 2006. Firstly, a number of dispersion scenarios are defined by combining sets of synoptic wind velocity and direction. Assuming neutral atmospheric stability, CFD simulations are conducted for all the scenarios, applying the standard k-ɛ turbulence model, with the aim of creating a database of normalised pollutant concentrations at specific locations within the street. Modelled concentrations for all wind scenarios were compared with hourly observed NOx data. In order to compare with long-term averaged measurements, a weighted average of the CFD-calculated concentration fields was derived, with the weighting coefficients being proportional to the frequency of each scenario observed during the examined period (either monthly or annually). In summary, the methodology consists of (i) identifying the main dispersion scenarios for the street based on wind speed and direction data, (ii) creating a database of CFD-calculated concentration fields for the identified dispersion scenarios, and (iii) combining the CFD results based on the frequency of occurrence of each dispersion scenario during the examined period. The methodology has been applied to calculate monthly and annually averaged benzene concentrations at several locations within the street canyon so that a direct comparison with observations could be made. The results of this study indicate that, within the simplifying assumption of non-buoyant flow, CFD modelling can aid understanding of long-term air quality measurements, and help assess the representativeness of monitoring locations for population
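
    Step (iii), combining the CFD fields by scenario frequency, reduces to a simple weighted average; a toy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical database of normalised concentration fields:
# rows = receptor locations in the street, columns = wind scenarios
c_star = np.array([
    [1.2, 0.4, 0.8],
    [0.6, 1.5, 0.9],
])

# Observed frequency of each wind scenario during the averaging
# period (must sum to 1)
freq = np.array([0.5, 0.3, 0.2])

# Long-term mean concentration at each receptor:
# frequency-weighted combination of the scenario fields
c_mean = c_star @ freq
```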

  16. Optimisation of Ultrasound-Assisted Extraction Conditions for Phenolic Content and Antioxidant Capacity from Euphorbia tirucalli Using Response Surface Methodology

    PubMed Central

    Vuong, Quan V.; Goldsmith, Chloe D.; Dang, Trung Thanh; Nguyen, Van Tang; Bhuyan, Deep Jyoti; Sadeqzadeh, Elham; Scarlett, Christopher J.; Bowyer, Michael C.

    2014-01-01

Euphorbia tirucalli (E. tirucalli) is now widely distributed around the world and is well known as a source of traditional medicine in many countries. This study aimed to utilise response surface methodology (RSM) to optimise ultrasonic-assisted extraction (UAE) conditions for total phenolic compounds (TPC) and antioxidant capacity from E. tirucalli leaf. The results showed that ultrasonic temperature, time and power affected TPC and antioxidant capacity; however, the effects varied. Ultrasonic power had the strongest influence on TPC, whereas ultrasonic temperature had the greatest impact on antioxidant capacity. Ultrasonic time had the least impact on both TPC and antioxidant capacity. The optimum UAE conditions were determined to be 50 °C, 90 min and 200 W. Under these conditions, the E. tirucalli leaf extract yielded 2.93 mg GAE/g FW of TPC and exhibited potent antioxidant capacity. These conditions can be utilised for further isolation and purification of phenolic compounds from E. tirucalli leaf. PMID:26785074
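
    The core RSM computation, fitting a second-order polynomial to measured responses and locating its stationary point, can be sketched for a hypothetical two-factor case (the study used three factors; the data below are synthetic):

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed "true" response for generating synthetic data:
# a peak near T = 50 (temperature) and P = 200 (power)
def true_response(T, P):
    return 10 - 0.02 * (T - 50) ** 2 - 0.001 * (P - 200) ** 2

T = rng.uniform(30, 70, 30)
P = rng.uniform(100, 300, 30)
y = true_response(T, P) + rng.normal(0, 0.05, 30)

# Design matrix for the full quadratic (second-order) model
X = np.column_stack([np.ones_like(T), T, P, T * P, T**2, P**2])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: solve grad = 0 for the fitted quadratic
H = np.array([[2 * b[4], b[3]], [b[3], 2 * b[5]]])
g = -np.array([b[1], b[2]])
T_opt, P_opt = np.linalg.solve(H, g)
```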

  17. A Methodology to Measure Synergy Among Energy-Efficiency Programs at the Program Participant Level

    SciTech Connect

    Tonn, B.E.

    2003-11-14

    This paper presents a methodology designed to measure synergy among energy-efficiency programs at the program participant level (e.g., households, firms). Three different definitions of synergy are provided: strong, moderate, and weak. Data to measure synergy can be collected through simple survey questions. Straightforward mathematical techniques can be used to estimate the three types of synergy and explore relative synergistic impacts of different subsets of programs. Empirical research is needed to test the concepts and methods and to establish quantitative expectations about synergistic relationships among programs. The market for new energy-efficient motors is the context used to illustrate all the concepts and methods in this paper.

  18. Sensitive Measures of Condition Change in EEG Data

    SciTech Connect

    Hively, L.M.; Gailey, P.C.; Protopopescu, V.

    1999-03-10

We present a new, robust, model-independent technique for measuring condition change in nonlinear data. We define indicators of condition change by comparing distribution functions (DF) defined on the attractor for time-windowed data sets via the L₁-distance and χ² statistics. The new measures are applied to EEG data with the objective of detecting the transition between non-seizure and epileptic brain activity in an accurate and timely manner. We find a clear superiority of the new metrics in comparison to traditional nonlinear measures as discriminators of condition change.
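
    A minimal sketch of the two indicators: histogram two time windows on shared bins and compare the resulting distribution functions. The attractor reconstruction used in the paper is omitted here; raw samples stand in for points on the attractor, and all data are synthetic.

```python
import numpy as np

def condition_change(base, test, bins=20):
    """L1 distance and chi-squared statistic between the binned
    distribution functions of two time windows (illustrative sketch)."""
    lo = min(base.min(), test.min())
    hi = max(base.max(), test.max())
    P, _ = np.histogram(base, bins=bins, range=(lo, hi))
    Q, _ = np.histogram(test, bins=bins, range=(lo, hi))
    P = P / P.sum()
    Q = Q / Q.sum()
    l1 = np.abs(P - Q).sum()
    nz = (P + Q) > 0
    chi2 = (((P - Q) ** 2)[nz] / (P + Q)[nz]).sum()
    return l1, chi2

rng = np.random.default_rng(2)
baseline = rng.normal(0.0, 1.0, 4096)   # "non-seizure" window
same = rng.normal(0.0, 1.0, 4096)       # another baseline window
changed = rng.normal(1.5, 2.0, 4096)    # window with altered dynamics

l1_same, _ = condition_change(baseline, same)
l1_changed, _ = condition_change(baseline, changed)
```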

  19. Structural acoustic control of plates with variable boundary conditions: design methodology.

    PubMed

    Sprofera, Joseph D; Cabell, Randolph H; Gibbs, Gary P; Clark, Robert L

    2007-07-01

    A method for optimizing a structural acoustic control system subject to variations in plate boundary conditions is provided. The assumed modes method is used to build a plate model with varying levels of rotational boundary stiffness to simulate the dynamics of a plate with uncertain edge conditions. A transducer placement scoring process, involving Hankel singular values, is combined with a genetic optimization routine to find spatial locations robust to boundary condition variation. Predicted frequency response characteristics are examined, and theoretically optimized results are discussed in relation to the range of boundary conditions investigated. Modeled results indicate that it is possible to minimize the impact of uncertain boundary conditions in active structural acoustic control by optimizing the placement of transducers with respect to those uncertainties. PMID:17614487

  20. An in-situ soil structure characterization methodology for measuring soil compaction

    NASA Astrophysics Data System (ADS)

    Dobos, Endre; Kriston, András; Juhász, András; Sulyok, Dénes

    2016-04-01

The agricultural cultivation has several direct and indirect effects on soil properties, among which soil structure degradation is the best known and most detectable one. Soil structure degradation leads to several water and nutrient management problems, which reduce the efficiency of agricultural production. There are several innovative technological approaches aiming to reduce these negative impacts on the soil structure. The testing, validation and optimization of these methods require an adequate technology to measure the impacts on the complex soil system. This study aims to develop an in-situ soil structure and root development testing methodology, which can be used in field experiments and which allows one to follow real-time changes in the soil structure (evolution/degradation) and their quantitative characterization. The method is adapted from remote sensing image processing technology. A specially adapted A4-size scanner is placed in the soil at a depth that cannot be reached by the agrotechnical treatments. Only the scanner's USB cable comes to the surface, allowing image acquisition without any soil disturbance. Several images of the same place can be taken throughout the vegetation season to follow the soil consolidation and structure development after the last tillage treatment for seedbed preparation. The scanned image of the soil profile is classified using supervised image classification, namely the maximum likelihood classification algorithm. The resulting image has two principal classes, soil matrix and pore space, plus complementary classes to cover the other occurring thematic classes, such as roots and stones. The calculated data are calibrated with field-sampled porosity data. As the scanner is buried under the soil with no changes in light conditions, the image processing can be automated for better temporal comparison. Besides the total porosity, each pore size fraction and its distribution can be calculated for
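
    Once the scan is classified, the porosity estimate is a pixel-count ratio; a toy sketch (the class labels and the tiny image are hypothetical):

```python
import numpy as np

# Hypothetical classified scan: 0 = soil matrix, 1 = pore space,
# 2 = root, 3 = stone
classified = np.array([
    [0, 0, 1, 1],
    [0, 1, 1, 2],
    [0, 0, 3, 1],
    [0, 0, 0, 1],
])

matrix_px = np.count_nonzero(classified == 0)
pore_px = np.count_nonzero(classified == 1)

# Porosity over the soil fraction only (roots/stones excluded);
# in practice this would be calibrated against field-sampled porosity
porosity = pore_px / (pore_px + matrix_px)
```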

  1. [Methodological aspects of measuring injuries from traffic accidents at the site of occurrence].

    PubMed

    Híjar-Medina, M C; López-López, M V; Flores-Aldana, M; Anaya, R

    1997-02-01

Traffic accidents are a well-known public health problem worldwide. In Mexico, research into risk factors for accidents involving motor vehicles and their consequences has only recently been undertaken. The relevant literature does not normally describe the methodological aspects involved in the collection of primary data, since most studies have used secondary data whose good quality and validity are assumed. This paper seeks to discuss and share with researchers in this field some of the methodological aspects to be considered in the attempt to recreate the scene of the accident and obtain information approximating reality. The in situ measurement of such traffic accident variables as injury, seat belt use, speed and alcohol intake is discussed. PMID:9430931

  2. Optimization of meat level and processing conditions for development of chicken meat noodles using response surface methodology.

    PubMed

    Khare, Anshul Kumar; Biswas, Asim Kumar; Balasubramanium, S; Chatli, Manish Kumar; Sahoo, Jhari

    2015-06-01

Response surface methodology (RSM) is a mathematical and statistical technique for testing multiple process variables and their interactive, linear and quadratic effects, and is useful in solving multivariable equations obtained from experiments simultaneously. In the present study, the optimum meat level and processing conditions for the development of shelf-stable chicken meat noodles were determined using a central composite design of RSM. The effects of meat level (110-130 g) and processing conditions such as steaming time (12-18 min) and drying time (7-9 h) on the water activity, yield, water absorption index, water solubility index, hardness, overall acceptability and total colour change of chicken noodles were investigated. The coefficients of determination, R(2), of all the response variables were higher than 0.8. Based on the response surface and superimposed plots, the optimum conditions of 60 % meat level, 12 min steaming time and 9 h drying time for the development of chicken noodles with the desired sensory quality were obtained. PMID:26028756

  3. Methodology for the assessment of measuring uncertainties of articulated arm coordinate measuring machines

    NASA Astrophysics Data System (ADS)

    Romdhani, Fekria; Hennebelle, François; Ge, Min; Juillion, Patrick; Coquet, Richard; François Fontaine, Jean

    2014-12-01

Articulated Arm Coordinate Measuring Machines (AACMMs) have gradually evolved and are increasingly used in the mechanical industry. At present, measurement uncertainties relating to the use of these devices are not yet well quantified. The work carried out consists of determining the measurement uncertainty of a mechanical part measured by an AACMM. The studies aiming to develop a model of measurement uncertainty are based on the Monte Carlo method developed in Supplement 1 of the Guide to the Expression of Uncertainty in Measurement [1], but also on identifying and characterizing the main sources of uncertainty. A multi-level Monte Carlo approach has been developed which allows for characterizing the possible evolution of the AACMM during the measurement and, at a second level, quantifying the uncertainty of the considered measurand. The first Monte Carlo level is the most complex and is thus divided into three sub-levels, namely characterization of the positioning error of a point, estimation of calibration errors and evaluation of fluctuations of the ‘localization point’. The global method is presented and the results of the first sub-level are particularly developed. The main sources of uncertainty, including AACMM deformations, are discussed.
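
    The GUM Supplement 1 style of Monte Carlo propagation can be sketched on a toy two-segment planar arm; the geometry and input uncertainties below are illustrative assumptions, not the AACMM model developed in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 100_000

# Nominal two-segment planar arm (a toy stand-in for a real AACMM)
L1, L2 = 0.6, 0.4                     # segment lengths (m)
th1, th2 = np.radians(30), np.radians(45)

# Assumed input uncertainties: encoder noise (rad), length calibration (m)
u_th, u_L = 1e-4, 5e-6

t1 = th1 + rng.normal(0, u_th, N)
t2 = th2 + rng.normal(0, u_th, N)
l1 = L1 + rng.normal(0, u_L, N)
l2 = L2 + rng.normal(0, u_L, N)

# Forward kinematics of the probed point, per Monte Carlo draw
x = l1 * np.cos(t1) + l2 * np.cos(t1 + t2)
y = l1 * np.sin(t1) + l2 * np.sin(t1 + t2)

# Monte Carlo standard uncertainty of the measured point position
u_x, u_y = x.std(), y.std()
```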

  4. Methodology for the calibration of and data acquisition with a six-degree-of-freedom acceleration measurement device

    NASA Astrophysics Data System (ADS)

    Lee, Harvey; Plank, Gordon; Weinstock, Herbert; Coltman, Michael

    1989-06-01

    Described here is a methodology for calibrating and gathering data with a six-degree-of-freedom acceleration measurement device that is intended to measure head acceleration of anthropomorphic dummies and human volunteers in automotive crash testing and head impact trauma studies. Error models (system equations) were developed for systems using six accelerometers in a coplanar (3-2-1) configuration, nine accelerometers in a coplanar (3-3-3) configuration and nine accelerometers in a non-coplanar (3-2-2-2) configuration and the accuracy and stability of these systems were compared. The model was verified under various input and computational conditions. Results of parametric sensitivity analyses which included parameters such as system geometry, coordinate system location, data sample rate and accelerometer cross axis sensitivities are presented. Recommendations to optimize data collection and reduction are given. Complete source listings of all of the software developed are presented.

  5. Assessing the Practical Equivalence of Conversions when Measurement Conditions Change

    ERIC Educational Resources Information Center

    Liu, Jinghua; Dorans, Neil J.

    2012-01-01

    At times, the same set of test questions is administered under different measurement conditions that might affect the psychometric properties of the test scores enough to warrant different score conversions for the different conditions. We propose a procedure for assessing the practical equivalence of conversions developed for the same set of test…

  6. An innovative methodology for measurement of stress distribution of inflatable membrane structures

    NASA Astrophysics Data System (ADS)

    Zhao, Bing; Chen, Wujun; Hu, Jianhui; Chen, Jianwen; Qiu, Zhenyu; Zhou, Jinyu; Gao, Chengjun

    2016-02-01

    The inflatable membrane structure has been widely used in the fields of civil building, industrial building, airship, super pressure balloon and spacecraft. It is important to measure the stress distribution of the inflatable membrane structure because it influences the safety of the structural design. This paper presents an innovative methodology for the measurement and determination of the stress distribution of the inflatable membrane structure under different internal pressures, combining photogrammetry and the force-finding method. The shape of the inflatable membrane structure is maintained by the use of pressurized air, and the internal pressure is controlled and measured by means of an automatic pressure control system. The 3D coordinates of the marking points pasted on the membrane surface are acquired by three photographs captured from three cameras based on photogrammetry. After digitizing the markings on the photographs, the 3D curved surfaces are rebuilt. The continuous membrane surfaces are discretized into quadrilateral mesh and simulated by membrane links to calculate the stress distributions using the force-finding method. The internal pressure is simplified to the external node forces in the normal direction according to the contributory area of the node. Once the geometry x, the external force r and the topology C are obtained, the unknown force densities q in each link can be determined. Therefore, the stress distributions of the inflatable membrane structure can be calculated, combining the linear adjustment theory and the force density method based on the force equilibrium of inflated internal pressure and membrane internal force without considering the mechanical properties of the constitutive material. As the use of the inflatable membrane structure is attractive in the field of civil building, an ethylene-tetrafluoroethylene (ETFE) cushion is used with the measurement model to validate the proposed methodology. 
The comparisons between the
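
    The force-density core of the method can be sketched on a toy mesh: one free node tied to four fixed anchors, with the internal pressure lumped as an external node force in the surface-normal direction. Given the geometry x, external force r and topology (here a single free node), the force densities q follow from node equilibrium; all numbers are illustrative.

```python
import numpy as np

# Toy mesh: one free node (a photogrammetry-measured membrane point)
# linked to four fixed anchor nodes
free = np.array([0.0, 0.0, 0.5])
fixed = np.array([[ 1.0,  1.0, 0.0], [ 1.0, -1.0, 0.0],
                  [-1.0,  1.0, 0.0], [-1.0, -1.0, 0.0]])

# External node force from internal pressure, lumped in the
# surface-normal direction (+z here) over the node's contributory area
p = np.array([0.0, 0.0, 8.0])    # N

# Node equilibrium: sum_j q_j * (x_j - x_free) + p = 0  ->  A q = -p
A = (fixed - free).T             # 3 x 4 matrix of link vectors
q, *_ = np.linalg.lstsq(A, -p, rcond=None)   # minimum-norm force densities

# Link force = force density x link length; membrane stress would follow
# by dividing by each link's effective cross-sectional area
lengths = np.linalg.norm(fixed - free, axis=1)
forces = q * lengths
```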

  7. The Epidemiology of Lead Toxicity in Adults: Measuring Dose and Consideration of Other Methodologic Issues

    PubMed Central

    Hu, Howard; Shih, Regina; Rothenberg, Stephen; Schwartz, Brian S.

    2007-01-01

    We review several issues of broad relevance to the interpretation of epidemiologic evidence concerning the toxicity of lead in adults, particularly regarding cognitive function and the cardiovascular system, which are the subjects of two systematic reviews that are also part of this mini-monograph. Chief among the recent developments in methodologic advances has been the refinement of concepts and methods for measuring individual lead dose in terms of appreciating distinctions between recent versus cumulative doses and the use of biological markers to measure these parameters in epidemiologic studies of chronic disease. Attention is focused particularly on bone lead levels measured by K-shell X-ray fluorescence as a relatively new biological marker of cumulative dose that has been used in many recent epidemiologic studies to generate insights into lead’s impact on cognition and risk of hypertension, as well as the alternative method of estimating cumulative dose using available repeated measures of blood lead to calculate an individual’s cumulative blood lead index. We review the relevance and interpretation of these lead biomarkers in the context of the toxico-kinetics of lead. In addition, we also discuss methodologic challenges that arise in studies of occupationally and environmentally exposed subjects and those concerning race/ethnicity and socioeconomic status and other important covariates. PMID:17431499

  8. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions.

    PubMed

    Kazys, Rymantas; Sliteris, Reimondas; Rekuviene, Regina; Zukauskas, Egidijus; Mazeika, Liudas

    2015-01-01

An ultrasonic technique, invariant to temperature changes, for a density measurement of different liquids under in situ extreme conditions is presented. The influence of geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave, reflected from the interface of the solid/liquid medium under investigation. In order to enhance sensitivity, the use of a quarter wavelength acoustic matching layer is proposed. Therefore, the sensitivity of the measurement system increases significantly. Density measurements quite often must be performed in extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic density measurement technique is suitable for density measurement in different materials, including liquids and polymer melts in extreme conditions. A new calibration algorithm was proposed. The metrological evaluation of the measurement method was performed. The expanded measurement uncertainty Uρ = 7.4 × 10⁻³ g/cm³ (1%). PMID:26262619
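
    The underlying physics of the reflection method can be sketched directly: the measured reflection coefficient at the waveguide/liquid interface fixes the liquid's acoustic impedance, and density follows after dividing by the speed of sound. The impedance and sound-speed values below are generic textbook-style numbers; the authors' matching-layer enhancement and calibration algorithm are not modeled.

```python
Z_w = 17.3e6     # acoustic impedance of a steel-like waveguide, Pa*s/m (assumed)
c_liq = 1480.0   # speed of sound in the liquid, m/s (water-like, assumed)

def density_from_reflection(R, Z_w, c_liq):
    """Liquid density from the pressure reflection coefficient R at the
    solid/liquid interface: Z_liq = Z_w * (1 + R) / (1 - R), rho = Z_liq / c."""
    Z_liq = Z_w * (1.0 + R) / (1.0 - R)
    return Z_liq / c_liq

# Water against steel gives R = (Z_liq - Z_w) / (Z_liq + Z_w) ~ -0.84;
# inverting that measured amplitude ratio recovers the density
rho = density_from_reflection(-0.8424, Z_w, c_liq)
```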

  10. The methodologies and instruments of vehicle particulate emission measurement for current and future legislative regulations

    NASA Astrophysics Data System (ADS)

    Otsuki, Yoshinori; Nakamura, Hiroshi; Arai, Masataka; Xu, Min

    2015-09-01

Since the health risks associated with fine particles whose aerodynamic diameters are smaller than 2.5 μm were first proven, regulations restricting particulate matter (PM) mass emissions from internal combustion engines have become increasingly severe. Accordingly, the gravimetric method of PM mass measurement is facing its lower limit of detection as the emissions from vehicles are further reduced. For example, the variation in the adsorption of gaseous components such as hydrocarbons from unburned fuel and lubricant oil, and the presence of agglomerated particles, which are not generated directly in engine combustion but are re-entrained from the walls of sampling pipes, can cause uncertainty in measurement. The PM mass measurement systems and methodologies have been continuously refined in order to improve measurement accuracy. As an alternative metric, the particle measurement programme (PMP) within the United Nations Economic Commission for Europe (UNECE) developed a solid particle number measurement method in order to improve the sensitivity of particulate emission measurement from vehicles. Consequently, particle number (PN) limits were implemented into the regulations in Europe from 2011. Recently, portable emission measurement systems (PEMS) for in-use vehicle emission measurements are also attracting attention in North America and Europe, and real-time PM mass and PN instruments are under evaluation.

  11. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared-memory multiprocessors by accurately predicting the effects of source-code-level modifications. Measurements on a single processor are initially used to identify parts of the code where cache utilization improvements may significantly impact overall performance. Cache simulation based on trace-driven techniques can be carried out without gathering detailed address traces. The minimal runtime information needed to model the cache performance of a selected code block includes the base virtual addresses of arrays, the virtual addresses of variables, and the loop bounds for that block. The rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for exploring various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply it to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
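The kind of lightweight, trace-free cache simulation the abstract describes can be illustrated with a minimal direct-mapped cache model; the class name and parameters are illustrative assumptions, not the authors' tool.

```python
class DirectMappedCache:
    """Minimal direct-mapped cache model counting hits and misses."""

    def __init__(self, num_lines, line_size):
        self.num_lines = num_lines    # number of cache lines
        self.line_size = line_size    # bytes per line
        self.tags = [None] * num_lines
        self.hits = 0
        self.misses = 0

    def access(self, addr):
        # Map a byte address to a cache line and compare tags.
        block = addr // self.line_size
        index = block % self.num_lines
        tag = block // self.num_lines
        if self.tags[index] == tag:
            self.hits += 1
        else:
            self.tags[index] = tag
            self.misses += 1

# Sequential traversal of a 64-byte array: one compulsory miss per line.
cache = DirectMappedCache(num_lines=4, line_size=16)
for addr in range(64):
    cache.access(addr)
```

Feeding such a model with synthetic address streams derived from array base addresses and loop bounds, rather than full traces, is the essence of the measurement-plus-simulation approach described above.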

  12. Hyper-X Mach 10 Engine Flowpath Development: Fifth Entry Test Conditions and Methodology

    NASA Technical Reports Server (NTRS)

    Bakos, R. J.; Tsai, C.-Y.; Rogers, R. C.; Shih, A. T.

    2001-01-01

    A series of Hyper-X Mach 10 flowpath ground tests is underway to obtain engine performance and operation data and to confirm and refine the flowpath design methods. The model used is a full-scale-height, partial-width replica of the Hyper-X Research Vehicle propulsive flowpath with truncated forebody and aftbody. This is the fifth test entry for this model in the NASA-HYPULSE facility at GASL. For this entry, the facility nozzle and model forebody were modified to better simulate the engine inflow conditions at the target flight conditions: the forebody was changed to a wide flat plate with no flow fences, the facility nozzle Mach number was increased, and the model was positioned for testing in a semi-direct-connect arrangement. This paper presents a review of the test conditions, the model calibrations, and a description of the steady-flow confirmation. The test series included runs using hydrogen fuel and a silane-in-hydrogen fuel mixture. Other test parameters included the model mounting angle (relative to the tunnel flow) and the test gas oxygen fraction, the latter accounting for the presence of NO in the test gas at the Mach 10 conditions.

  13. Measurement of multijunction cells under close-match conditions

    SciTech Connect

    Wilkinson, V.A.; Goodbody, C.; Williams, W.G.

    1997-12-31

    This paper presents details of a new close-match solar simulator developed for DERA's Space Power Laboratory for the accurate characterization of multijunction solar cells. The authors present data on simulator measurements of dual- and triple-junction cells, and compare them with measurements made under less ideal spectral conditions.

  14. Measurement of Lubricating Condition between Swashplate and Shoe in Swashplate Compressors under Practical Operating Conditions

    NASA Astrophysics Data System (ADS)

    Suzuki, Hisashi; Fukuta, Mitsuhiro; Yanagisawa, Tadashi

    In this paper, the lubricating conditions between the swashplate and the shoe in a swashplate compressor for automotive air conditioners are investigated experimentally. The conditions are measured with an electric resistance method that uses the swashplate and the shoe as electrodes. The instrumented compressor is connected to an experimental cycle with R134a and operated under various pressures and rotational speeds. An improved measurement technique, together with applying a solid contact ratio to the measurement results, makes it possible to measure the lubricating condition at high rotational speed (more than 8000 rpm) and to predict the occurrence of scuffing between the swashplate and the shoe, thereby enabling a detailed study of the lubricating characteristics. The measurements show that the voltage of the contact signal decreases, indicating a better lubricating condition, as the compression pressure decreases and as the rotational speed increases from 1000 rpm to 5000 rpm. The lubricating condition tends to worsen above 5000 rpm. Furthermore, it is confirmed that the lubricating condition under transient operation is markedly worse than under steady-state operation.
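A solid contact ratio of the kind mentioned above can be sketched as the fraction of samples in which the contact voltage drops below a threshold, suggesting metal-to-metal contact rather than a full oil film. This is a hedged illustration under assumed conventions (voltage measured across the contact in a divider circuit), not the paper's actual signal processing; the threshold value is hypothetical.

```python
def solid_contact_ratio(voltages, threshold):
    # Fraction of sampled contact voltages below a chosen threshold,
    # taken here as an indicator of solid (metal-to-metal) contact.
    contacts = sum(1 for v in voltages if v < threshold)
    return contacts / len(voltages)

# Hypothetical sampled signal: two low-voltage (contact) samples out of four.
ratio = solid_contact_ratio([0.1, 0.9, 0.2, 0.8], threshold=0.5)
```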

  15. A method of measuring dynamic strain under electromagnetic forming conditions.

    PubMed

    Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng

    2016-04-01

    Dynamic strain measurement is important for characterizing mechanical behavior in the electromagnetic forming process, but it has been hindered for years by the high strain rates and serious electromagnetic interference involved. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (~5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10⁵ s⁻¹, depending on the discharge conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading (~10⁴ s⁻¹). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why formability is much greater under electromagnetic forming conditions than in conventional forging processes. PMID:27131683

  16. Absolute Radiation Measurements in Earth and Mars Entry Conditions

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.

    2014-01-01

    This paper reports on the measurement of radiative heating for shock-heated flows which simulate conditions for Mars and Earth entries. Radiation measurements are made in NASA Ames' Electric Arc Shock Tube at velocities of 3-15 km/s in mixtures of N2/O2 and CO2/N2/Ar. The technique and limitations of the measurement are summarized in some detail. The absolute measurements are discussed with regard to spectral features, radiative magnitude and spatiotemporal trends. Analysis of the spectra makes it possible to extract properties such as electron density and rotational, vibrational and electronic temperatures. Relaxation behind the shock is analyzed to determine how these properties relax to equilibrium, and the results are used to validate and refine kinetic models. It is found that, for some conditions, some of these values diverge from equilibrium, indicating a lack of similarity between the shock tube and free-flight conditions. Possible reasons for this are discussed.

  17. A method of measuring dynamic strain under electromagnetic forming conditions

    NASA Astrophysics Data System (ADS)

    Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng

    2016-04-01

    Dynamic strain measurement is important for characterizing mechanical behavior in the electromagnetic forming process, but it has been hindered for years by the high strain rates and serious electromagnetic interference involved. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (~5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10⁵ s⁻¹, depending on the discharge conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading (~10⁴ s⁻¹). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why formability is much greater under electromagnetic forming conditions than in conventional forging processes.

  18. Dielectric Barrier Discharge (DBD) Plasma Actuators Thrust-Measurement Methodology Incorporating New Anti-Thrust Hypothesis

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2014-01-01

    We discuss thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuator devices used for aerodynamic active flow control. After a review of our experience with conventional thrust measurement and the significant non-repeatability of its results, we devised a suspended-actuator test setup and now present a methodology of thrust measurement with decreased uncertainty. The methodology consists of frequency scans at constant voltage: the frequency is increased step-wise from several Hz to the maximum frequency of several kHz, then decreased back down to the starting frequency. This sequence is performed first at the highest voltage of interest, then repeated at lower voltages. The data in the descending frequency direction are more consistent and were selected for reporting. Sample results show a strong dependence of thrust on humidity, which also affects the consistency and fluctuations of the measurements. We also observed negative values of thrust, or "anti-thrust", at low frequencies between 4 Hz and 64 Hz. The anti-thrust is proportional to the mean-squared voltage and is frequency-independent; departures from the parabolic anti-thrust curve are correlated with the appearance of visible plasma discharges. We propose the anti-thrust hypothesis: the measured thrust is the sum of plasma thrust and anti-thrust, where the anti-thrust is assumed to exist at all frequencies and voltages and depends on the actuator geometry and materials and on the test installation. The hypothesis enables the separation of the plasma thrust from the measured total thrust, allowing more meaningful comparisons between actuators at different installations and laboratories. The dependence on test installation was validated by surrounding the actuator with a large-diameter, grounded metal sleeve.
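The hypothesis stated above (measured thrust = plasma thrust + anti-thrust, with anti-thrust proportional to the mean-squared voltage and frequency-independent) suggests a simple correction: fit the proportionality constant on low-frequency data where plasma thrust is assumed negligible, then subtract the parabola everywhere. This is a sketch under those stated assumptions, with hypothetical function names and numbers, not the authors' reduction code.

```python
def fit_anti_thrust_coefficient(v_low, t_low):
    # Least-squares slope a for T_anti = a * V^2, fitted on low-frequency
    # data where plasma thrust is assumed absent (a < 0 expected).
    num = sum(t * v**2 for v, t in zip(v_low, t_low))
    den = sum(v**4 for v in v_low)
    return num / den

def plasma_thrust(v, t_measured, a):
    # Measured thrust = plasma thrust + anti-thrust; subtract the latter.
    return t_measured - a * v**2

# Hypothetical low-frequency calibration points following T_anti = -0.002 V^2.
a = fit_anti_thrust_coefficient([1.0, 2.0, 3.0],
                                [-0.002, -0.008, -0.018])
corrected = plasma_thrust(5.0, 9.95, a)   # 9.95 = 10 (plasma) - 0.05 (anti)
```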

  19. A smartphone-driven methodology for estimating physical activities and energy expenditure in free living conditions.

    PubMed

    Guidoux, Romain; Duclos, Martine; Fleury, Gérard; Lacomme, Philippe; Lamaudière, Nicolas; Manenq, Pierre-Henri; Paris, Ludivine; Ren, Libo; Rousset, Sylvie

    2014-12-01

    This paper introduces a function dedicated to the estimation of the total energy expenditure (TEE) of daily activities, based on data from accelerometers integrated into smartphones. The use of mass-market sensors such as accelerometers offers a promising solution for the general public due to the growth of the smartphone market over the last decade. The quality of the TEE estimation function was evaluated using data from intensive numerical experiments based, first, on 12 volunteers equipped with a smartphone and two research sensors (Armband and Actiheart) in controlled conditions (CC) and, then, on 30 other volunteers in free-living conditions (FLC). The TEE given by these two sensors in both conditions, and the TEE estimated from metabolic equivalent tasks (MET) in CC, served as references during the creation and evaluation of the function. The mean absolute gap between the TEE estimated by the function and the three references was 7.0%, 16.4% and 2.7% in CC, and 17.0% and 23.7% relative to Armband and Actiheart, respectively, in FLC. This is a first step in the definition of a new feedback mechanism that promotes self-management and daily efficiency evaluation of physical activity, as part of an information system dedicated to the prevention of chronic diseases. PMID:25048352
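The "mean absolute gap" figures quoted above correspond to a mean absolute percentage error between estimated and reference TEE values; a minimal sketch of that metric (function name and numbers are illustrative, not from the paper):

```python
def mean_absolute_percentage_gap(estimates, references):
    # Mean absolute gap (%) between estimated and reference TEE values.
    gaps = [abs(e - r) / r for e, r in zip(estimates, references)]
    return 100 * sum(gaps) / len(gaps)

# Hypothetical example: one 10% over-estimate and one 10% under-estimate
# average to a 10% mean absolute gap.
gap = mean_absolute_percentage_gap([110.0, 90.0], [100.0, 100.0])
```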

  20. Optimization of fermentation conditions for 1,3-propanediol production by marine Klebsiella pneumoniae HSL4 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Li, Lili; Zhou, Sheng; Ji, Huasong; Gao, Ren; Qin, Qiwei

    2014-09-01

    The industrially important organic compound 1,3-propanediol (1,3-PDO) is mainly used as a building block for the production of various polymers. In the present study, a response surface methodology protocol was followed to determine and optimize the fermentation conditions for maximum production of 1,3-PDO using marine-derived Klebsiella pneumoniae HSL4. Four nutritional supplements together with three independent culture conditions were optimized as follows: 29.3 g/L glycerol, 8.0 g/L K2HPO4, 7.6 g/L (NH4)2SO4, 3.0 g/L KH2PO4, pH 7.1, and cultivation at 35°C for 12 h. Under the optimal conditions, a maximum 1,3-PDO concentration of 14.5 g/L, a productivity of 1.21 g/(L·h) and a glycerol conversion of 0.49 g/g were obtained. In comparison with the control conditions, fermentation under the optimized conditions achieved increases of 38.8% in 1,3-PDO concentration, 39.0% in productivity and 25.7% in glycerol conversion in flask culture. This enhancement was further confirmed when the fermentation was conducted in a 5-L fermentor. The optimized fermentation conditions could be an important basis for developing low-cost, large-scale methods for the industrial production of 1,3-PDO in the future.
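The core of response surface methodology is fitting a low-order polynomial model to experimental responses and locating its stationary point. A single-factor sketch with synthetic data (the numbers and function names are illustrative, not the study's design):

```python
import numpy as np

def fit_quadratic(x, y):
    # Least-squares fit of y = b0 + b1*x + b2*x^2 (single-factor RSM model).
    X = np.vstack([np.ones_like(x), x, x**2]).T
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def stationary_point(coeffs):
    # dy/dx = b1 + 2*b2*x = 0  =>  x* = -b1 / (2*b2)
    return -coeffs[1] / (2 * coeffs[2])

# Synthetic response peaking at x = 3 (e.g., a mock glycerol level vs yield).
x = np.linspace(0.0, 6.0, 7)
y = -(x - 3.0) ** 2 + 5.0
b = fit_quadratic(x, y)
optimum = stationary_point(b)
```

Real RSM designs (e.g., central composite) fit a multi-factor quadratic with interaction terms, but the stationary-point logic is the same.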

  1. A Uniform Methodology for Measuring Parental Ability to Pay: Implications for the College Scholarship Service in 1975-76.

    ERIC Educational Resources Information Center

    Bowman, James L.

    The movement toward a uniform methodology of determining parental ability to pay to be used over time by all institutions and agencies awarding financial aid funds is consistent with the goals and objectives of the College Scholarship Service (CSS). This paper describes a proposed system for a uniform methodology for measuring parental ability to…

  2. Technical Guide for "Measuring Up 2006": Documenting Methodology, Indicators, and Data Sources. National Center #06-6

    ERIC Educational Resources Information Center

    National Center for Public Policy and Higher Education, 2006

    2006-01-01

    This "Technical Guide" describes the methodology and concepts used to measure and grade the performance of the 50 states in the higher education arena. Part I presents the methodology for grading states and provides information on data collection and reporting. Part II explains the indicators that comprise each of the graded categories.…

  3. Refinement of current monitoring methodology for electroosmotic flow assessment under low ionic strength conditions.

    PubMed

    Saucedo-Espinosa, Mario A; Lapizco-Encinas, Blanca H

    2016-05-01

    Current monitoring is a well-established technique for the characterization of electroosmotic (EO) flow in microfluidic devices. The method relies on monitoring the time response of the electric current as a test buffer solution is displaced by an auxiliary solution using EO flow. In this scheme, each solution has a different ionic concentration (and electric conductivity). The difference in ionic concentration between the two solutions defines the dynamic time response of the electric current and, hence, the signal to be measured: larger concentration differences result in larger measurable signals. A small concentration difference is needed, however, to avoid dispersion at the interface between the two solutions, which can result in undesired pressure-driven flow that conflicts with the EO flow. Additional challenges arise as the conductivity of the test solution decreases: the reduced electric current signal may be masked by noise during the measurement, making an accurate estimation of the EO mobility difficult. This contribution presents a new scheme for current monitoring that employs multiple channels arranged in parallel, increasing the signal-to-noise ratio of the measured current and improving the estimation accuracy. This parallel approach is particularly useful for estimating the EO mobility in systems that require low-conductivity media, such as insulator-based dielectrophoresis devices. PMID:27375813
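In the standard current-monitoring analysis, the time for the current to reach its new plateau is the time for the auxiliary solution to sweep the channel length, from which the EO mobility follows. A minimal sketch under that standard model (hypothetical numbers, not the paper's data):

```python
def eo_mobility(channel_length_m, applied_voltage_v, transition_time_s):
    # Plug velocity u_eo = L / t; field E = V / L;
    # mobility mu_eo = u_eo / E = L^2 / (V * t).
    return channel_length_m**2 / (applied_voltage_v * transition_time_s)

# Hypothetical example: 1 cm channel, 100 V applied, 25 s current transition
# gives mu_eo = 4e-8 m^2/(V*s).
mu = eo_mobility(0.01, 100.0, 25.0)
```

The parallel-channel scheme described above improves the signal-to-noise ratio of the current trace from which the transition time is read; the mobility formula itself is unchanged.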

  4. Reducing ultraviolet radiation exposure to prevent skin cancer methodology and measurement.

    PubMed

    Glanz, Karen; Mayer, Joni A

    2005-08-01

    Skin cancer is the most common type of cancer, and also one of the most preventable. This paper builds on an evidence review of skin cancer prevention interventions conducted for the Guide to Community Preventive Services (n=85 studies), and summarizes the state of knowledge about research methodology and measurement in studies of the effectiveness of interventions to reduce ultraviolet radiation (UVR) exposure. As this field advances, researchers should strive to minimize threats to validity in their study designs and to consider the balance between internal and external validity. There is a need for longer-duration interventions and for follow-up periods that permit conclusions about the potential of these interventions to affect intermediate markers of skin cancer, or at least sustained behavior change. More work is also needed to minimize attrition and to characterize nonresponders and study dropouts. Verbal report measures are the most widely used measures of solar protection behavior; given their limitations, investigators should routinely collect data on the reliability and validity of those measures. They should also increase efforts to complement verbal data with objective measures, including observations, skin reflectance, personal dosimetry, skin swabbing, and inspection of moles. Measures of environments and policies should incorporate observations, documentation, and direct measures of ambient UVR and shade. This article places the data derived from the evidence review in the context of needs and recommendations for future research in skin cancer prevention. PMID:16005810

  5. Gas concentration measurement by optical similitude absorption spectroscopy: methodology and experimental demonstration.

    PubMed

    Anselmo, Christophe; Welschinger, Jean-Yves; Cariou, Jean-Pierre; Miffre, Alain; Rairoux, Patrick

    2016-06-13

    We propose a new methodology for measuring gas concentration by light-absorption spectroscopy when the light source spectrum is broader than the spectral width of one or several absorption lines of the target gas. We name it optical similitude absorption spectroscopy (OSAS), as the gas concentration is derived from a similitude between the light source and target gas spectra. The main novelty of OSAS lies in the development of a robust inversion methodology, based on the Newton-Raphson algorithm, which retrieves the target gas concentration from spectrally integrated differential light-absorption measurements. As a proof of concept, OSAS is applied in the laboratory to the 2ν3 methane absorption band at 1.66 µm, with uncertainties evaluated by the Allan variance. OSAS has also been applied to non-dispersive infrared and optical correlation spectroscopy arrangements. This all-optical gas concentration retrieval does not require a gas calibration cell and opens new avenues for monitoring atmospheric gas pollution and greenhouse gas sources. PMID:27410280
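A Newton-Raphson inversion of a spectrally integrated absorption signal, of the general kind the abstract describes, can be sketched as follows. This is an illustrative toy under a simple Beer-Lambert model with a discretized source spectrum and made-up cross sections, not the OSAS algorithm itself.

```python
import numpy as np

def transmitted_fraction(conc, sigma, i0, path):
    # Broadband transmission: Beer-Lambert summed over the source spectrum.
    # sigma: absorption cross sections per spectral bin; i0: source intensities.
    return np.sum(i0 * np.exp(-sigma * conc * path)) / np.sum(i0)

def retrieve_concentration(s_meas, sigma, i0, path, c0=1.0, tol=1e-10):
    # Newton-Raphson: solve transmitted_fraction(c) = s_meas for c.
    c = c0
    for _ in range(100):
        f = transmitted_fraction(c, sigma, i0, path) - s_meas
        # Analytic derivative of the transmitted fraction w.r.t. concentration.
        df = np.sum(-sigma * path * i0 * np.exp(-sigma * c * path)) / np.sum(i0)
        step = f / df
        c -= step
        if abs(step) < tol:
            break
    return c

# Forward-model a signal at a known concentration, then invert it.
sigma = np.array([0.1, 0.5, 1.0])   # hypothetical cross sections
i0 = np.ones(3)                     # flat source spectrum
s = transmitted_fraction(2.0, sigma, i0, path=1.0)
c_retrieved = retrieve_concentration(s, sigma, i0, path=1.0)
```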

  6. Deception detection with behavioral, autonomic, and neural measures: Conceptual and methodological considerations that warrant modesty.

    PubMed

    Meijer, Ewout H; Verschuere, Bruno; Gamer, Matthias; Merckelbach, Harald; Ben-Shakhar, Gershon

    2016-05-01

    The detection of deception has attracted increased attention among psychological researchers, legal scholars, and ethicists during the last decade. Much of this has been driven by the possibility of using neuroimaging techniques for lie detection. Yet, neuroimaging studies addressing deception detection are clouded by a lack of conceptual clarity and a host of methodological problems that are not unique to neuroimaging. We review the various research paradigms and the dependent measures that have been adopted to study deception and its detection. In doing so, we differentiate between basic research designed to shed light on the neurocognitive mechanisms underlying deceptive behavior and applied research aimed at detecting lies. We also stress the distinction between paradigms attempting to detect deception directly and those attempting to establish involvement by detecting crime-related knowledge, and discuss the methodological difficulties and threats to validity associated with each paradigm. Our conclusion is that the main challenge of future research is to find paradigms that can isolate cognitive factors associated with deception, rather than the discovery of a unique (brain) correlate of lying. We argue that the Comparison Question Test currently applied in many countries has weak scientific validity, which cannot be remedied by using neuroimaging measures. Other paradigms are promising, but the absence of data from ecologically valid studies poses a challenge for legal admissibility of their outcomes. PMID:26787599

  7. Advanced haptic sensor for measuring human skin conditions

    NASA Astrophysics Data System (ADS)

    Tsuchimi, Daisuke; Okuyama, Takeshi; Tanaka, Mami

    2009-12-01

    This paper concerns the development of a tactile sensor that uses PVDF (polyvinylidene fluoride) film as its sensory receptor to evaluate the softness, smoothness, and stickiness of human skin. Tactile sense is, along with eyesight, among the most important senses of the human body, and we can examine skin condition quickly using it. However, its subjectivity and ambiguity make it difficult to quantify skin conditions, so a measurement device that can evaluate skin conditions easily and objectively is in demand among dermatologists, the cosmetics industry, and others. In this paper, an advanced haptic sensor system that can measure multiple aspects of skin condition on various parts of the human body is developed. The application of the sensor system to evaluating the softness, smoothness, and stickiness of skin is investigated through two experiments.

  8. Advanced haptic sensor for measuring human skin conditions

    NASA Astrophysics Data System (ADS)

    Tsuchimi, Daisuke; Okuyama, Takeshi; Tanaka, Mami

    2010-01-01

    This paper concerns the development of a tactile sensor that uses PVDF (polyvinylidene fluoride) film as its sensory receptor to evaluate the softness, smoothness, and stickiness of human skin. Tactile sense is, along with eyesight, among the most important senses of the human body, and we can examine skin condition quickly using it. However, its subjectivity and ambiguity make it difficult to quantify skin conditions, so a measurement device that can evaluate skin conditions easily and objectively is in demand among dermatologists, the cosmetics industry, and others. In this paper, an advanced haptic sensor system that can measure multiple aspects of skin condition on various parts of the human body is developed. The application of the sensor system to evaluating the softness, smoothness, and stickiness of skin is investigated through two experiments.

  9. Methodology for quality control when conditioning radioactive waste regarding the flow chart procedure

    SciTech Connect

    Ham, U.; Kunz, W.

    1995-12-31

    Radioactive waste that is to be placed in an interim and/or final storage facility has to fulfill several quality requirements, which result from the safety analysis for the individual storage facility. Fulfillment of these requirements has to be proven before the waste package is stored in the designated facility. In co-operation with the responsible authorities and experts, the Gesellschaft fuer Nuklear-Service mbH (GNS), Essen, Germany, has developed the flow chart procedure for low- and medium-active waste for proving and documenting the quality of the product when conditioning radioactive waste. This flow chart procedure is now part of the "Technical Acceptance Criteria" for interim storage facilities and of the "Requirements for Final Storage" for final storage facilities for low- and medium-active waste in Germany. The procedure can also be used for high-active vitrified waste from the reprocessing of irradiated fuel.

  10. Intercomparison of magnetic field measurements near MV/LV transformer substations: methodological planning and results.

    PubMed

    Violanti, S; Fraschetta, M; Adda, S; Caputo, E

    2009-12-01

    Within the framework of the Environmental Agencies system's activities, coordinated by ISPRA (Superior Institute for Environmental Protection and Research), an intercomparison of measurements was designed and carried out in order to examine measurement problems in depth and to evaluate magnetic fields at power frequencies. The measurements were taken near medium-voltage/low-voltage transformer substations. The project was developed with the contribution of several experts from different Regional Agencies. In three of these regions, substations with specific international standard characteristics were chosen, and a measurement and data-analysis protocol was then arranged. Data analysis showed a good level of coherence among the results obtained by different laboratories. However, a range of problems emerged, both during the preparation of the protocol and the definition of the data-analysis procedure, and during the execution of the measurements and data reprocessing, because of the spatial and temporal variability of the magnetic field. These problems are of particular interest for establishing a correct measurement methodology whose purpose is comparison with exposure limits, attention values and quality targets. PMID:19843547

  11. Self-calibration methodology by normalized intensity for wavelength modulation spectroscopy measurement

    NASA Astrophysics Data System (ADS)

    Shao, Jie; Guo, Jie; Wang, Liming; Ying, Chaofu; Zhou, Zhen

    2015-02-01

    A self-calibration methodology for concentration measurement based on wavelength modulation absorption spectroscopy has been developed using normalized fixed intensity. Experimental results show that this simple self-calibration method not only effectively improves the calibration accuracy but also greatly improves stability and reliability compared with the popular 1f-normalized self-calibration method. In addition, the proposed system needs no additional equipment compared with a traditional wavelength modulation absorption spectrometer. The standard deviation of concentration measurements with different optical intensities incident on the detector was found to be below 1.0%. The dependence of the concentration assessment on laser intensity fluctuation has also been investigated, showing that this self-calibration method could be applied in the field, especially where dust is easily splattered on the windows of the detector.

  12. Methodology for application of field rainfall simulator to revise c-factor database for conditions of the Czech Republic

    NASA Astrophysics Data System (ADS)

    Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan

    2016-04-01

    The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) for the Universal Soil Loss Equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the central European region, while also taking into account different agrotechnical methods. Using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to do given the variability and unpredictability of natural rainfall. Because of the number of measurements needed, two identical simulators will be used, operated by two independent teams with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Because of the wide range of possible crops and soils, it is not feasible to execute the measurements for all combinations; we therefore decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of surface runoff and the amount of sediment will be measured in their temporal distribution, together with several other important parameters. The key values of the 3D matrix of combinations of crop, agrotechnique and soil will be determined experimentally; the remaining values will be determined by interpolation or by model analogy. Several methods are used for C-factor calculation from measured experimental data, some of which are not suitable for the type of data gathered here. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used.
The problems concerning the selection of a relevant measurement method as well as the final

  13. A new metric for measuring condition in large predatory sharks.

    PubMed

    Irschick, D J; Hammerschlag, N

    2014-09-01

    A simple metric (span condition analysis; SCA) is presented for quantifying the condition of sharks based on four measurements of body girth relative to body length. Data on 104 live sharks from four species that vary in body form, behaviour and habitat use (Carcharhinus leucas, Carcharhinus limbatus, Ginglymostoma cirratum and Galeocerdo cuvier) are given. Condition shows similar levels of variability among individuals within each species. Carcharhinus leucas showed a positive relationship between condition and body size, whereas the other three species showed no relationship. There was little evidence for strong differences in condition between males and females, although more male sharks are needed for some species (e.g. G. cuvier) to verify this finding. SCA is potentially viable for other large marine or terrestrial animals that are captured live and then released. PMID:25130454

  14. Mass measuring instrument for use under microgravity conditions

    SciTech Connect

    Fujii, Yusaku; Yokota, Masayuki; Hashimoto, Seiji; Sugita, Yoichi; Ito, Hitomi; Shimada, Kazuhito

    2008-05-15

    A prototype instrument for measuring astronaut body mass under microgravity conditions has been developed and its performance evaluated in parabolic flight tests. The instrument, called the space scale, is used as follows: the subject astronaut is connected to the space scale with a rubber cord; a force transducer measures the force acting on the subject, and an optical interferometer measures the subject's velocity. The subject's mass is calculated as the impulse divided by the velocity change, i.e., M = ∫F dt / Δv. Parabolic flight in a jet aircraft produces a zero-gravity condition lasting approximately 20 s. The performance of the prototype space scale was evaluated during such a flight by measuring the mass of a sample object.
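The impulse-momentum relation M = ∫F dt / Δv used above reduces, for sampled force data, to a numerical integration divided by the measured velocity change. A minimal sketch (sample values are hypothetical, not the flight data):

```python
def mass_from_impulse(times, forces, delta_v):
    # Trapezoidal integration of the sampled force over time,
    # divided by the velocity change: M = integral(F dt) / delta_v.
    impulse = sum(0.5 * (forces[i] + forces[i + 1]) * (times[i + 1] - times[i])
                  for i in range(len(times) - 1))
    return impulse / delta_v

# Hypothetical example: a constant 10 N pull for 2 s (impulse 20 N*s)
# producing a 0.25 m/s velocity change implies an 80 kg mass.
m = mass_from_impulse([0.0, 1.0, 2.0], [10.0, 10.0, 10.0], 0.25)
```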

  15. Study on optical measurement conditions for noninvasive blood glucose sensing

    NASA Astrophysics Data System (ADS)

    Xu, Kexin; Chen, Wenliang; Jiang, Jingying; Qiu, Qingjun

    2004-05-01

Utilizing near-infrared (NIR) spectroscopy for non-invasive glucose concentration sensing has been a major topic in biomedical optics. In this paper, measurement conditions for spectroscopy on the human body are studied and a series of glucose concentration sensing experiments are conducted. First, the Monte Carlo method is applied to simulate photons' penetration depth within skin tissue at 1600 nm. The simulation results indicate that, with our designed optical probe, the detected photons can penetrate the epidermis of the palm and meet the glucose sensing requirements within the dermis. Second, we analyze how variations in the measured position, and the contact pressure between the optical fiber probe and that position, influence the spectrum measured on a human body; a measurement-conditions reproduction system is introduced to enhance measurement repeatability. Furthermore, a series of transmittance experiments on glucose aqueous solutions, from simple to complex, showed that although NIR spectroscopy can capture absorption variations due to glucose, under the same measuring conditions and with the same modeling method the ability to resolve the measured component declines as the mixture becomes more complex, which reduces prediction accuracy. Finally, OGTT experiments were performed and a PLS (partial least squares) model was built for a single experiment, yielding an RMSEP (root mean square error of prediction) of 0.5-0.8 mmol/dl. The model's extended application and reliability, however, need further investigation.

  16. Determination of backscattering cross section of individual particles from cytometric measurements: a new methodology.

    PubMed

    Duforêt-Gaurier, Lucile; Moutier, William; Guiselin, Natacha; Thyssen, Mélilotus; Dubelaar, George; Mériaux, Xavier; Courcot, Lucie; Dessailly, David; Loisel, Hubert

    2015-11-30

    A methodology is developed to derive the backscattering cross section of individual particles as measured with the CytoSense (CytoBuoy b.v., NL). This in situ flow cytometer detects light scatter in forward and sideward directions and fluorescence in various spectral bands for a wide range of particles. First, the weighting functions are determined for the forward and sideward detectors to take into account their instrumental response as a function of the scattering angle. The CytoSense values are converted into forward and sideward scattering cross sections. The CytoSense estimates of uniform polystyrene microspheres from 1 to 90 μm are compared with Mie computations. The mean absolute relative differences ΔE are around 33.7% and 23.9% for forward and sideward scattering, respectively. Then, a theoretical relationship is developed to convert sideward scattering into backscattering cross section, from a synthetic database of 495,900 simulations including homogeneous and multi-layered spheres. The relationship follows a power law with a coefficient of determination of 0.95. To test the methodology, a laboratory experiment is carried out on a suspension of silica beads to compare backscattering cross section as measured by the WET Labs ECO-BB9 and derived from CytoSense. Relative differences are between 35% and 60%. They are of the same order of magnitude as the instrumental variability. Differences can be partly explained by the fact that the two instruments do not measure exactly the same parameter: the cross section of individual particles for the CytoSense and the bulk cross section for the ECO-BB9. PMID:26698775
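The sideward-to-backscattering conversion above rests on a power law fitted over a simulation database; the fitting step can be sketched as follows, with invented data, coefficients, and noise rather than the paper's 495,900 simulations:

```python
import numpy as np

# Synthetic stand-in for the simulation database: sideward-scattering (s) and
# backscattering (b) cross sections assumed to follow b = a * s**k, with
# invented coefficients and log-normal noise.
rng = np.random.default_rng(0)
s = np.logspace(-2, 2, 500)
a_true, k_true = 0.12, 0.85
b = a_true * s**k_true * np.exp(rng.normal(0.0, 0.05, s.size))

# A power law is linear in log-log space: log b = log a + k * log s
k_fit, log_a_fit = np.polyfit(np.log(s), np.log(b), 1)
a_fit = np.exp(log_a_fit)

# Coefficient of determination of the log-log fit
resid = np.log(b) - (log_a_fit + k_fit * np.log(s))
r2 = 1.0 - resid.var() / np.log(b).var()
```

Fitting in log-log space is the standard way to estimate power-law coefficients by ordinary least squares; the coefficient of determination plays the role of the 0.95 reported for the paper's relationship.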

  17. Methodology for Intraoperative Laser Doppler Vibrometry Measurements of Ossicular Chain Reconstruction

    PubMed Central

    Sokołowski, Jacek; Lachowska, Magdalena; Bartoszewicz, Robert; Niemczyk, Kazimierz

    2016-01-01

Objectives Despite the growing number of studies concerning applications of laser Doppler vibrometry (LDV) in medicine, its usefulness is still under discussion. The aim of this study is to present a methodology developed in our Department for intraoperative LDV assessment of ossicular chain reconstruction. Methods Ten patients who underwent “second look” tympanoplasty were involved in the study. The measurements of the acoustic conductivity of the middle ear were performed using the LDV system. Tone bursts with carrier frequencies of 500, 1,000, 2,000, and 4,000 Hz set the ossicular chain in motion. The study was divided into four experiments that examined the intra- and interindividual reproducibility, the utility of the posterior tympanotomy, the impact of changes in the laser beam angle, and the influence of reflective tape presence on measurements. Results There were no statistically significant differences between the two measurements performed in the same patient. However, interindividual differences were significant. In all cases, posterior tympanotomy proved to be useful for LDV measurements of the ossicular prosthesis vibrations. In most cases, changing the laser beam angle decreased signal amplitude by about 1.5% (a nonsignificant change). The reflective tape was necessary to achieve adequate reflection of the laser beam. Conclusion LDV proved to be a valuable noncontact intraoperative tool for measuring the mobility of the middle ear conductive system, with very good intraindividual repeatability. Neither a small change in the angle of the laser beam nor performing the measurements through the posterior tympanotomy showed a significant influence on the results. Reflective tape was necessary to obtain good quality responses in LDV measurements. PMID:27090282

  18. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    SciTech Connect

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
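A single linear MCP step, of the kind the hybrid method combines across algorithms and reference stations, can be sketched as follows; the station data, the true site relation, and the campaign length are all hypothetical:

```python
import numpy as np

# Hypothetical records: a long-term reference-station wind-speed series and a
# short concurrent measurement campaign at the target site.
rng = np.random.default_rng(2)
ref = np.abs(rng.normal(7.0, 2.0, 8760))    # hourly reference speeds, one year [m/s]
target_true = 1.15 * ref + 0.5              # assumed true site-to-reference relation
campaign = slice(0, 1000)                   # concurrent measurement period
target_meas = target_true[campaign] + rng.normal(0.0, 0.3, 1000)

# Correlate: linear regression over the concurrent period only
slope, intercept = np.polyfit(ref[campaign], target_meas, 1)

# Predict: apply the fitted relation to the full long-term reference record
target_long = slope * ref + intercept
```

A hybrid MCP method would weight several such predictions, from different regression algorithms and reference stations, and (as the paper stresses) would also condition on wind direction sectors rather than fitting one global relation.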

  19. The Harmonizing Outcome Measures for Eczema (HOME) roadmap: a methodological framework to develop core sets of outcome measurements in dermatology.

    PubMed

    Schmitt, Jochen; Apfelbacher, Christian; Spuls, Phyllis I; Thomas, Kim S; Simpson, Eric L; Furue, Masutaka; Chalmers, Joanne; Williams, Hywel C

    2015-01-01

    Core outcome sets (COSs) are consensus-derived minimum sets of outcomes to be assessed in a specific situation. COSs are being increasingly developed to limit outcome-reporting bias, allow comparisons across trials, and strengthen clinical decision making. Despite the increasing interest in outcomes research, methods to develop COSs have not yet been standardized. The aim of this paper is to present the Harmonizing Outcomes Measures for Eczema (HOME) roadmap for the development and implementation of COSs, which was developed on the basis of our experience in the standardization of outcome measurements for atopic eczema. Following the establishment of a panel representing all relevant stakeholders and a research team experienced in outcomes research, the scope and setting of the core set should be defined. The next steps are the definition of a core set of outcome domains such as symptoms or quality of life, followed by the identification or development and validation of appropriate outcome measurement instruments to measure these core domains. Finally, the consented COS needs to be disseminated, implemented, and reviewed. We believe that the HOME roadmap is a useful methodological framework to develop COSs in dermatology, with the ultimate goal of better decision making and promoting patient-centered health care. PMID:25186228

  20. Optimization of Fermentation Conditions for Recombinant Human Interferon Beta Production by Escherichia coli Using the Response Surface Methodology

    PubMed Central

    Morowvat, Mohammad Hossein; Babaeipour, Valiollah; Rajabi Memari, Hamid; Vahidi, Hossein

    2015-01-01

Background: The periplasmic overexpression of recombinant human interferon beta (rhIFN-β)-1b using a synthetic gene in Escherichia coli BL21 (DE3) was optimized in shake flasks using Response Surface Methodology (RSM) based on the Box-Behnken Design (BBD). Objectives: This study aimed to predict and develop the optimal fermentation conditions for periplasmic expression of rhIFN-β-1b in shake flasks while keeping acetate excretion to a minimum, and to apply the best conditions to rhIFN-β production in a bench-top bioreactor. Materials and Methods: The process variables studied were the concentration of glucose as carbon source, cell density prior to induction (OD 600 nm) and induction temperature. A three-factor, three-level BBD was employed for the optimization, with rhIFN-β production and acetate excretion as the evaluated responses. Results: The proposed optimum fermentation condition consisted of 7.81 g L-1 glucose, OD 600 nm prior to induction of 1.66 and induction temperature of 30.27°C. The model predicted 0.267 g L-1 of rhIFN-β and 0.961 g L-1 of acetate at the optimum conditions; the experimental values were 0.255 g L-1 and 0.981 g L-1, respectively. This agreement between the predicted and observed values confirmed the precision of the applied method. Conclusions: It can be concluded that RSM is an effective method for the optimization of recombinant protein expression using synthetic genes in E. coli. PMID:26034535

  1. Quadratic Measurement and Conditional State Preparation in an Optomechanical System

    NASA Astrophysics Data System (ADS)

    Brawley, George; Vanner, Michael; Bowen, Warwick; Schmid, Silvan; Boisen, Anja

    2014-03-01

An important requirement in the study of quantum systems is the ability to measure non-linear observables at the level of quantum fluctuations. Such measurements enable the conditional preparation of highly non-classical states. Nonlinear measurement, although achieved in a variety of quantum systems including microwave cavity modes and optical fields, remains an outstanding problem in both electromechanical and optomechanical systems. To the best of our knowledge, previous experimental efforts to achieve nonlinear measurement of mechanical motion have not yielded strong coupling, nor the observation of quadratic mechanical motion. Here, using a new technique reliant on the intrinsic nonlinearity of the optomechanical interaction, we experimentally observe for the first time a position-squared (x²) measurement of the room-temperature Brownian motion of a nanomechanical oscillator. We utilize this measurement to conditionally prepare non-Gaussian bimodal states, which are the high-temperature classical analogue of quantum macroscopic superposition states, or cat states. In the future, with the aid of cryogenics and state-of-the-art optical cavities, our approach will provide a viable method of generating quantum superposition states of mechanical oscillators. This research was funded by the ARC Center of Excellence for Engineered Quantum Systems.

  2. Application of nitric oxide measurements in clinical conditions beyond asthma

    PubMed Central

    Malinovschi, Andrei; Ludviksdottir, Dora; Tufvesson, Ellen; Rolla, Giovanni; Bjermer, Leif; Alving, Kjell; Diamant, Zuzana

    2015-01-01

    Fractional exhaled nitric oxide (FeNO) is a convenient, non-invasive method for the assessment of active, mainly Th2-driven, airway inflammation, which is sensitive to treatment with standard anti-inflammatory therapy. Consequently, FeNO serves as a valued tool to aid diagnosis and monitoring in several asthma phenotypes. More recently, FeNO has been evaluated in several other respiratory, infectious, and/or immunological conditions. In this short review, we provide an overview of several clinical studies and discuss the status of potential applications of NO measurements in clinical conditions beyond asthma. PMID:26672962

  3. Moving to Capture Children's Attention: Developing a Methodology for Measuring Visuomotor Attention.

    PubMed

    Hill, Liam J B; Coats, Rachel O; Mushtaq, Faisal; Williams, Justin H G; Aucott, Lorna S; Mon-Williams, Mark

    2016-01-01

    Attention underpins many activities integral to a child's development. However, methodological limitations currently make large-scale assessment of children's attentional skill impractical, costly and lacking in ecological validity. Consequently we developed a measure of 'Visual Motor Attention' (VMA)-a construct defined as the ability to sustain and adapt visuomotor behaviour in response to task-relevant visual information. In a series of experiments, we evaluated the capability of our method to measure attentional processes and their contributions in guiding visuomotor behaviour. Experiment 1 established the method's core features (ability to track stimuli moving on a tablet-computer screen with a hand-held stylus) and demonstrated its sensitivity to principled manipulations in adults' attentional load. Experiment 2 standardised a format suitable for use with children and showed construct validity by capturing developmental changes in executive attention processes. Experiment 3 tested the hypothesis that children with and without coordination difficulties would show qualitatively different response patterns, finding an interaction between the cognitive and motor factors underpinning responses. Experiment 4 identified associations between VMA performance and existing standardised attention assessments and thereby confirmed convergent validity. These results establish a novel approach to measuring childhood attention that can produce meaningful functional assessments that capture how attention operates in an ecologically valid context (i.e. attention's specific contribution to visuomanual action). PMID:27434198

  4. Moving to Capture Children’s Attention: Developing a Methodology for Measuring Visuomotor Attention

    PubMed Central

    Coats, Rachel O.; Mushtaq, Faisal; Williams, Justin H. G.; Aucott, Lorna S.; Mon-Williams, Mark

    2016-01-01

    Attention underpins many activities integral to a child’s development. However, methodological limitations currently make large-scale assessment of children’s attentional skill impractical, costly and lacking in ecological validity. Consequently we developed a measure of ‘Visual Motor Attention’ (VMA)—a construct defined as the ability to sustain and adapt visuomotor behaviour in response to task-relevant visual information. In a series of experiments, we evaluated the capability of our method to measure attentional processes and their contributions in guiding visuomotor behaviour. Experiment 1 established the method’s core features (ability to track stimuli moving on a tablet-computer screen with a hand-held stylus) and demonstrated its sensitivity to principled manipulations in adults’ attentional load. Experiment 2 standardised a format suitable for use with children and showed construct validity by capturing developmental changes in executive attention processes. Experiment 3 tested the hypothesis that children with and without coordination difficulties would show qualitatively different response patterns, finding an interaction between the cognitive and motor factors underpinning responses. Experiment 4 identified associations between VMA performance and existing standardised attention assessments and thereby confirmed convergent validity. These results establish a novel approach to measuring childhood attention that can produce meaningful functional assessments that capture how attention operates in an ecologically valid context (i.e. attention's specific contribution to visuomanual action). PMID:27434198

  5. A new methodology for measurement of sludge residence time distribution in a paddle dryer using X-ray fluorescence analysis.

    PubMed

    Charlou, Christophe; Milhé, Mathieu; Sauceau, Martial; Arlabosse, Patricia

    2015-02-01

Drying is a necessary step before energetic valorization of sewage sludge, and paddle dryers can handle such a complex material. However, little is known about sludge flow in this kind of process. This study sets up an original methodology for measuring the sludge residence time distribution (RTD) in a continuous paddle dryer, based on the detection of mineral tracers by X-ray fluorescence. This accurate analytical technique offers a linear response to tracer concentration in dry sludge, and the protocol gives good repeatability of RTD measurements. Its equivalence to RTD measurement by NaCl conductivity in sludge leachates is demonstrated. Moreover, it is shown that tracer solubility has no influence on the RTD: the liquid and solid phases have the same flow pattern. Applying this technique to sludge with different storage durations at 4 °C highlights the influence of this parameter on sludge RTD, and thus on paddle dryer performance: for identical operating conditions, the mean residence time in a paddle dryer almost doubled between 24 and 48 h of storage. PMID:25463926
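From the measured tracer response at the dryer outlet, the RTD and its first moment (the mean residence time) follow from standard moment analysis; a sketch with an invented outlet-concentration curve in place of the X-ray fluorescence data:

```python
import numpy as np

# Invented outlet tracer concentration C(t) after a pulse injection at t = 0
# (gamma-like response; in practice C comes from X-ray fluorescence counts).
tau = 40.0                            # assumed characteristic time [min]
t = np.arange(0.0, 400.0, 1.0)        # sampling times [min]
dt = t[1] - t[0]
C = (t / tau**2) * np.exp(-t / tau)   # example response curve

E = C / (C.sum() * dt)                # normalised RTD: ∫ E dt = 1
t_mean = (t * E).sum() * dt           # first moment = mean residence time
```

For this example curve the mean residence time comes out near 2·tau; comparing such moments across storage durations is how an effect like the reported doubling would show up.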

  6. Phoretic and Radiometric Force Measurements on Microparticles in Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Davis, E. James

    1996-01-01

Thermophoretic, diffusiophoretic and radiometric forces on microparticles are being measured over a wide range of gas phase and particle conditions using electrodynamic levitation of single particles to simulate microgravity conditions. The thermophoretic force, which arises when a particle exists in a gas having a temperature gradient, is measured by levitating an electrically charged particle between heated and cooled plates mounted in a vacuum chamber. The diffusiophoretic force arising from a concentration gradient in the gas phase is measured in a similar manner except that the heat exchangers are coated with liquids to establish a vapor concentration gradient. These phoretic forces and the radiation pressure force acting on a particle are measured directly in terms of the change in the dc field required to levitate the particle with and without the force applied. The apparatus developed for the research and the experimental techniques are discussed, and results obtained by thermophoresis experiments are presented. The determination of the momentum and energy accommodation coefficients associated with molecular collisions between gas molecules and particles and the measurement of the interaction between electromagnetic radiation and small particles are of particular interest.

  7. Wall Conditioning and Impurity Measurements in the PEGASUS Experiment

    NASA Astrophysics Data System (ADS)

    Ono, M.; Fonck, R.; Toonen, R.; Thorson, T.; Tritz, K.; Winz, G.

    1999-11-01

    Wall conditioning and impurity effects on plasma evolution are increasingly relevant to the PEGASUS program. Surface conditioning consists of hydrogen glow discharge cleaning (GDC) to remove water and oxides, followed by He GDC to reduce the hydrogen inventory. Isotope exchange measurements indicate that periodic He GDC almost eliminates uncontrolled fueling from gas desorbed from the limiting surfaces. Additional wall conditioning will include Ti gettering and/or boronization. Impurity monitoring is provided by the recent installation of a SPRED multichannel VUV spectrometer (wavelength range = 10-110 nm; 1 msec time resolution), several interference filter (IF) monochromators, and a multichannel Ross-filter SXR diode assembly (for CV, CVI, OVII, and OVIII). The IF monitors indicate increased C radiation upon contact of the plasma with the upper and lower limiters for highly elongated plasmas. This radiation appears correlated with a subsequent rollover in the plasma current, and motivates an upgrade to the poloidal limiters to provide better plasma-wall interaction control.

  8. A novel methodology for online measurement of thoron using Lucas scintillation cell

    NASA Astrophysics Data System (ADS)

    Eappen, K. P.; Sapra, B. K.; Mayya, Y. S.

    2007-03-01

The use of the Lucas scintillation cell (LSC) technique for thoron estimation requires a modified methodology as opposed to radon estimation. While in the latter the α counting is performed after a delay period varying between a few hours and a few days, for thoron the α counting has to be carried out immediately after sampling owing to the short half-life of thoron (55 s). This is best achieved with an on-line LSC sampling and counting system. However, because the half-life of the thoron decay product 212Pb is 10.6 h, background accumulates in the LSC during online measurements, and subsequent use of the LSC is erroneous until the normal background level is restored in the cell. This problem can be circumvented by correcting for the average background counts accumulated during the counting period, which may be estimated theoretically. In this study, a methodology has been developed to estimate the true counts due to thoron: a linear regression between the experimentally obtained counts and the fractional decay in regular time intervals yields the actual thoron concentration. The novelty of this approach is that the background of the cell is obtained automatically as the intercept of the regression line. The results obtained by this technique compare well with the two-filter method and with the thoron concentration produced by a standard thoron source. However, the LSC as such cannot be used for environmental samples, because its minimum detection level is comparable to the thoron concentrations prevailing in the normal atmosphere.
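The regression step can be sketched as follows; the decay constant follows from the 55 s half-life, while the counting interval, initial activity, and background level are invented for illustration:

```python
import numpy as np

# Idealised (noise-free) counting data: counts in successive equal intervals
# are proportional to the fraction of thoron decaying in each interval, plus a
# constant background from accumulated 212Pb. All magnitudes are invented.
lam = np.log(2.0) / 55.0                  # thoron decay constant [1/s]
dt = 30.0                                 # counting-interval length [s]
k = np.arange(6)                          # interval index
frac = np.exp(-lam * k * dt) - np.exp(-lam * (k + 1) * dt)  # fractional decay
N0, bg = 5000.0, 40.0                     # assumed initial thoron counts and background
counts = N0 * frac + bg

# Linear regression of counts against fractional decay: the slope gives the
# true thoron counts and the intercept recovers the background automatically.
slope, intercept = np.polyfit(frac, counts, 1)
```

With real counting statistics the fit would carry Poisson noise, but the structure is the same: the intercept is the background estimate, with no separate background run required.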

  9. [Measuring quality in the German Guideline Programme in Oncology (GGPO)—methodology and implementation].

    PubMed

    Nothacker, Monika; Muche-Borowski, Cathleen; Kopp, Ina B

    2014-01-01

    The German Guideline Programme in Oncology (GGPO) is a joint initiative between the German Cancer Society, the Association of the Scientific Medical Societies in Germany and German Cancer Aid. In accordance with the aims of the German National Cancer Plan, the GGPO supports the systematic development of high-quality guidelines. To enhance implementation and evaluation, the suggestion of performance measures (PMs) derived from guideline recommendations following a standardised methodology is obligatory within the GGPO. For this purpose, PM teams are convened representing the multidisciplinary guideline development groups including clinical experts, methodologists and patient representatives as well as those organisations that take an active part in and share responsibility for documentation and quality improvement, i.e., clinical cancer registries, certified cancer centres and, if appropriate, the institution responsible for external quality assurance according to the German Social Code (SGB). The primary selection criteria for PMs include strength of the underlying recommendation (strong, grade A), existing potential for improvement of care and measurability. The premises of data economy and standardised documentation are taken into account. Between May 2008 and July 2014, 12 guidelines with suggestions for 100 PMs have been published. The majority of the suggested performance measures is captured by the specific documentation requirements of the clinical cancer registries and certified cancer centres. This creates a solid basis for an active quality management and re-evaluation of the suggested PMs. In addition, the suspension of measures should be considered if improvement has been achieved on a broad scale and for a longer period in order to concentrate on a quality-oriented, economic documentation. PMID:25523845

  10. Extremely low frequency electromagnetic field measurements at the Hylaty station and methodology of signal analysis

    NASA Astrophysics Data System (ADS)

    Kulak, Andrzej; Kubisz, Jerzy; Klucjasz, Slawomir; Michalec, Adam; Mlynarczyk, Janusz; Nieckarz, Zenon; Ostrowski, Michal; Zieba, Stanislaw

    2014-06-01

    We present the Hylaty geophysical station, a high-sensitivity and low-noise facility for extremely low frequency (ELF, 0.03-300 Hz) electromagnetic field measurements, which enables a variety of geophysical and climatological research related to atmospheric, ionospheric, magnetospheric, and space weather physics. The first systematic observations of ELF electromagnetic fields at the Jagiellonian University were undertaken in 1994. At the beginning the measurements were carried out sporadically, during expeditions to sparsely populated areas of the Bieszczady Mountains in the southeast of Poland. In 2004, an automatic Hylaty ELF station was built there, in a very low electromagnetic noise environment, which enabled continuous recording of the magnetic field components of the ELF electromagnetic field in the frequency range below 60 Hz. In 2013, after 8 years of successful operation, the station was upgraded by extending its frequency range up to 300 Hz. In this paper we show the station's technical setup, and how it has changed over the years. We discuss the design of ELF equipment, including antennas, receivers, the time control circuit, and power supply, as well as antenna and receiver calibration. We also discuss the methodology we developed for observations of the Schumann resonance and wideband observations of ELF field pulses. We provide examples of various kinds of signals recorded at the station.

  11. Analysis and methodology for measuring oxygen concentration in liquid sodium with a plugging meter

    SciTech Connect

    Nollet, B. K.; Hvasta, M.; Anderson, M.

    2012-07-01

Oxygen concentration in liquid sodium is a critical measurement in assessing the potential for corrosion damage in sodium-cooled fast reactors (SFRs). There has been little recent work on sodium reactors and oxygen detection, so the technical expertise in oxygen measurement within sodium is no longer readily available in the U.S. Two methods of oxygen detection that have been investigated are the plugging meter and the galvanic cell. One of the overall goals of the Univ. of Wisconsin's sodium research program is to develop an affordable, reliable galvanic cell oxygen sensor. Accordingly, attention must first be dedicated to the well-known standard, the plugging meter. A sodium loop has therefore been constructed on campus in an effort to develop the plugging meter technique and gain experience working with liquid metal. The loop contains both a galvanic cell test section and a plugging meter test section. Consistent plugging results have been achieved below 20 [wppm], and a detailed process for achieving effective plugging has been developed. This paper will focus both on an accurate methodology for obtaining oxygen concentrations from a plugging meter, and on how to easily control the oxygen concentration of sodium in a test loop. Details of the design, materials, manufacturing, and operation will be presented. Data interpretation will also be discussed, since a modern discussion of plugging data interpretation does not currently exist. (authors)

  12. Statistical optimization of ultraviolet irradiate conditions for vitamin D₂ synthesis in oyster mushrooms (Pleurotus ostreatus) using response surface methodology.

    PubMed

    Wu, Wei-Jie; Ahn, Byung-Yong

    2014-01-01

Response surface methodology (RSM) was used to determine the optimum vitamin D2 synthesis conditions in oyster mushrooms (Pleurotus ostreatus). Ultraviolet B (UV-B) was selected as the most efficient irradiation source in a preliminary experiment, which also established the levels of the three independent variables: ambient temperature (25-45°C), exposure time (40-120 min), and irradiation intensity (0.6-1.2 W/m2). The statistical analysis indicated that, over the range studied, irradiation intensity was the most critical factor affecting vitamin D2 synthesis in oyster mushrooms. Under optimal conditions (ambient temperature of 28.16°C, UV-B intensity of 1.14 W/m2, and exposure time of 94.28 min), the experimental vitamin D2 content of 239.67 µg/g (dry weight) was in very good agreement with the predicted value of 245.49 µg/g, which verified the practicability of this strategy. Compared to fresh mushrooms, lyophilized mushroom powder can synthesize a remarkably higher level of vitamin D2 (498.10 µg/g) within a much shorter UV-B exposure time (10 min), and thus should receive attention from the food processing industry. PMID:24736742

  13. Statistical Optimization of Ultraviolet Irradiate Conditions for Vitamin D2 Synthesis in Oyster Mushrooms (Pleurotus ostreatus) Using Response Surface Methodology

    PubMed Central

    Wu, Wei-Jie; Ahn, Byung-Yong

    2014-01-01

Response surface methodology (RSM) was used to determine the optimum vitamin D2 synthesis conditions in oyster mushrooms (Pleurotus ostreatus). Ultraviolet B (UV-B) was selected as the most efficient irradiation source in a preliminary experiment, which also established the levels of the three independent variables: ambient temperature (25–45°C), exposure time (40–120 min), and irradiation intensity (0.6–1.2 W/m2). The statistical analysis indicated that, over the range studied, irradiation intensity was the most critical factor affecting vitamin D2 synthesis in oyster mushrooms. Under optimal conditions (ambient temperature of 28.16°C, UV-B intensity of 1.14 W/m2, and exposure time of 94.28 min), the experimental vitamin D2 content of 239.67 µg/g (dry weight) was in very good agreement with the predicted value of 245.49 µg/g, which verified the practicability of this strategy. Compared to fresh mushrooms, lyophilized mushroom powder can synthesize a remarkably higher level of vitamin D2 (498.10 µg/g) within a much shorter UV-B exposure time (10 min), and thus should receive attention from the food processing industry. PMID:24736742
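The second-order (quadratic) model underlying RSM, and the location of its stationary point, can be sketched as follows; the design points, response surface, and noise are hypothetical, not the paper's data:

```python
import numpy as np

# Hypothetical coded variables x1 (temperature) and x2 (UV-B intensity) and an
# invented response surface; fit the second-order model
#   y = b0 + b1*x1 + b2*x2 + b11*x1**2 + b22*x2**2 + b12*x1*x2
# by least squares, then locate its stationary point.
rng = np.random.default_rng(1)
x1 = rng.uniform(-1.0, 1.0, 30)
x2 = rng.uniform(-1.0, 1.0, 30)
y = 240.0 - 30.0*(x1 - 0.2)**2 - 50.0*(x2 - 0.5)**2 + rng.normal(0.0, 1.0, 30)

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point: solve [[2*b11, b12], [b12, 2*b22]] @ x = -[b1, b2]
A = np.array([[2.0*b[3], b[5]], [b[5], 2.0*b[4]]])
x_opt = np.linalg.solve(A, -b[1:3])
```

A Box-Behnken design would place the runs at edge midpoints and center points rather than at random, but the fitted model and the stationary-point calculation are the same.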

  14. Optimization of conditions for probiotic curd formulation by Enterococcus faecium MTCC 5695 with probiotic properties using response surface methodology.

    PubMed

    Ramakrishnan, Vrinda; Goveas, Louella Concepta; Prakash, Maya; Halami, Prakash M; Narayan, Bhaskar

    2014-11-01

Enterococcus faecium MTCC 5695, possessing potential probiotic properties as well as enterocin-producing ability, was used as the starter culture. The effects of time (12-24 h) and inoculum level (3-7% v/v) on cell growth, bacteriocin production, antioxidant property, titratable acidity and pH of curd were studied by response surface methodology (RSM). The optimized conditions were 26.48 h and 2.17% v/v inoculum, and the second-order model was validated. Co-cultivation studies revealed that the formulated product can prevent the growth of foodborne pathogens that affect its keeping quality during storage. The results indicated that application of E. faecium MTCC 5695 under the optimized conditions yielded a highly consistent, well-set curd with bioactive and bioprotective properties. The formulated curd with potential probiotic attributes can be used as a therapeutic agent for foodborne diseases such as traveler's diarrhea and gastroenteritis, thereby helping to improve bowel health. PMID:26396297

  15. H/L ratio as a measurement of stress in laying hens - methodology and reliability.

    PubMed

    Lentfer, T L; Pendl, H; Gebhardt-Henrich, S G; Fröhlich, E K F; Von Borell, E

    2015-04-01

    Measuring the ratio of heterophils to lymphocytes (H/L) in response to different stressors is a standard tool for assessing long-term stress in laying hens, but detailed information on the reliability of measurements, measurement techniques and methods, and absolute cell counts is often lacking. Laying hens offered nest boxes at different sites were compared at different ages in a two-treatment crossover experiment to provide detailed information on the measurement procedure and on the difficulties of interpreting H/L ratios under commercial conditions. H/L ratios were pen-specific and depended on age and aviary system. There was no effect of nest position. Heterophil and lymphocyte counts were not correlated within individuals. Absolute cell counts differed in the numbers of heterophils and lymphocytes and in H/L ratios, whereas absolute leucocyte counts were similar between individuals. The reliability of the method using relative cell counts was good, yielding a correlation coefficient between double counts of r > 0.9. It was concluded that population-based reference values may not be sensitive enough to detect individual stress reactions, that the H/L ratio may not be useful as an indicator of stress under commercial conditions because of confounding factors, and that other, non-invasive measurements should be adopted. PMID:25622692
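The double-count reliability check reported above (a correlation coefficient between repeated differential counts of the same smears) can be reproduced in a few lines. The counts below are invented for illustration.

```python
import numpy as np

# Hypothetical paired differential counts (per 100 leucocytes) from two
# independent counts of the same blood smears -- illustrative values only.
het_1 = np.array([28, 35, 22, 40, 31, 26, 38, 30.0])
lym_1 = np.array([60, 52, 68, 45, 55, 62, 48, 58.0])
het_2 = np.array([30, 34, 23, 41, 30, 27, 37, 29.0])
lym_2 = np.array([58, 53, 67, 44, 56, 61, 49, 59.0])

hl_1 = het_1 / lym_1   # H/L ratio, first count
hl_2 = het_2 / lym_2   # H/L ratio, second count

# Inter-count reliability: Pearson correlation between the double counts.
r = np.corrcoef(hl_1, hl_2)[0, 1]
print(f"mean H/L = {hl_1.mean():.2f}, double-count reliability r = {r:.3f}")
```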

  16. Measurement of laminar burning speeds and Markstein lengths using a novel methodology

    SciTech Connect

    Tahtouh, Toni; Halter, Fabien; Mounaim-Rousselle, Christine

    2009-09-15

    Three different methodologies used for the extraction of laminar information are compared and discussed. Starting from an asymptotic analysis assuming a linear relation between the propagation speed and the stretch acting on the flame front, temporal radius evolutions of spherically expanding laminar flames are postprocessed to obtain laminar burning velocities and Markstein lengths. The first methodology fits the temporal radius evolution with a polynomial function, while the new methodology proposed uses the exact solution of the linear relation linking the flame speed and the stretch as a fit. The last methodology consists of an analytical resolution of the problem. To test the different methodologies, experiments were carried out in a stainless steel combustion chamber with methane/air mixtures at atmospheric pressure and ambient temperature. The equivalence ratio was varied from 0.55 to 1.3. The classical shadowgraph technique was used to detect the reaction zone. The new methodology has proven to be the most robust and provides the most accurate results, while the polynomial methodology induces some errors due to the differentiation process. As original radii are used directly in the analytical methodology, it is more affected by errors in the experimental radius determination. Finally, laminar burning velocity and Markstein length values determined with the new methodology are compared with results reported in the literature. (author)
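The "new methodology" in this record fits the exact solution of the linear stretch relation rather than a polynomial. For the linear model dR/dt = Vs0*R/(R + 2*Lb), the exact implicit solution is t(R) = (R - R0 + 2*Lb*ln(R/R0))/Vs0, which can be fitted directly to the radius history without numerical differentiation. A minimal sketch with a synthetic radius history (all numbers are illustrative, not the paper's data):

```python
import numpy as np
from scipy.optimize import curve_fit

R0 = 0.006  # initial flame radius (m), assumed known here

def t_of_R(R, Vs0, Lb):
    """Exact implicit solution of dR/dt = Vs0 * R / (R + 2*Lb)."""
    return (R - R0 + 2.0 * Lb * np.log(R / R0)) / Vs0

# Synthetic radius history: Vs0 = 2.2 m/s, Lb = 1.0 mm, light timing noise.
rng = np.random.default_rng(0)
R = np.linspace(R0, 0.025, 40)
t = t_of_R(R, 2.2, 0.001) + rng.normal(0.0, 2e-5, R.size)

# Fit the exact solution; no differentiation of noisy data is required.
popt, _ = curve_fit(t_of_R, R, t, p0=[2.0, 5e-4])
Vs0_fit, Lb_fit = popt
print(f"flame speed = {Vs0_fit:.3f} m/s, Markstein length = {Lb_fit*1e3:.2f} mm")
```

Avoiding the differentiation step is precisely what makes this approach more robust than polynomial fitting of R(t).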

  17. [Methodological Approaches to the Organization of Countermeasures Taking into Account Landscape Features of Radioactively Contaminated Territories].

    PubMed

    Kuznetsov, V K; Sanzharova, N I

    2016-01-01

    Methodological approaches to the organization of countermeasures are considered, taking into account the landscape features of radioactively contaminated territories. The current status of and new requirements for the organization of countermeasures in contaminated agricultural areas are analyzed. The basic principles, objectives, and problems of designing countermeasures with regard to the landscape characteristics of a territory are presented, and the organization and optimization of countermeasures in radioactively contaminated agricultural landscapes are substantiated. PMID:27245009

  18. Extension of laboratory-measured soil spectra to field conditions

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; Baumgardner, M. F.; Weismiller, R. A.; Biehl, L. L.; Robinson, B. F.

    1982-01-01

    Spectral responses of two glaciated soils, Chalmers silty clay loam and Fincastle silt loam, formed under prairie grass and forest vegetation, respectively, were measured under controlled moisture equilibria using an Exotech Model 20C spectroradiometer in the laboratory under artificial illumination. The same spectroradiometer was used outdoors under solar illumination to obtain spectral responses from dry and moistened field plots, with and without corn residue cover, representing the two soils. Results indicate that laboratory-measured spectra of a moist soil are directly proportional to the spectral response of the same moist bare soil measured in the field over the 0.52 to 1.75 micrometer wavelength range. The differences in spectral response between identically treated Chalmers and Fincastle soils are greatest in the 0.6 to 0.8 micrometer transition region between the visible and near infrared, regardless of the field condition or laboratory preparation studied.

  19. Thermophysical Properties Measurement of High-Temperature Liquids Under Microgravity Conditions in Controlled Atmospheric Conditions

    NASA Technical Reports Server (NTRS)

    Watanabe, Masahito; Ozawa, Shumpei; Mizuno, Akotoshi; Hibiya, Taketoshi; Kawauchi, Hiroya; Murai, Kentaro; Takahashi, Suguru

    2012-01-01

    Microgravity conditions offer advantages for measuring the surface tension and viscosity of metallic liquids by the oscillating drop method with an electromagnetic levitation (EML) device. We are therefore preparing thermophysical property measurement experiments using the Materials Science Laboratory ElectroMagnetic Levitator (MSL-EML) facilities on the International Space Station (ISS). Recently, it has been recognized that the dependence of surface tension on oxygen partial pressure (Po2) must be considered for industrial application of surface tension values. The effect of Po2 on surface tension would also apparently change the viscosity derived from the damping oscillation model. Therefore, surface tension and viscosity must be measured simultaneously under the same atmospheric conditions. Moreover, the effect of the electromagnetic force (EMF) on the surface oscillations must be clarified to obtain the ideal surface oscillation, because the EMF acts as an external force on the oscillating liquid droplets, and an excessive EMF apparently inflates the viscosity values. Using the parabolic flight levitation experimental facilities (PFLEX), our group systematically investigated the effects of Po2 and external EMF on the surface oscillation of levitated liquid droplets for precise measurements of the surface tension and viscosity of high-temperature liquids in preparation for future ISS experiments. We observed the surface oscillations of levitated liquid alloys using PFLEX in onboard flight experiments with a Gulfstream II (G-II) airplane operated by DAS. These observations were performed under controlled Po2 and suitable EMF conditions. In these experiments, we obtained density, viscosity, and surface tension values for liquid Cu. We discuss these results in comparison with reported data, as well as the differences in surface oscillations observed as the EMF conditions were changed.

  20. Assessment of hygienic conditions of ground pepper (Piper nigrum L.) on the market in Sao Paulo City, by means of two methodologies for detecting the light filth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Pepper should be collected, processed, and packed under optimum conditions to avoid the presence of foreign matter. The hygienic conditions of ground pepper marketed in São Paulo city were assessed by determining the presence of foreign matter using two extraction methodologies. This study...

  1. Detection of Chamber Conditioning Through Optical Emission and Impedance Measurements

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.; Rao, M. V. V. S.; Sharma, Surendra P.; Meyyappan, Meyya

    2001-01-01

    During oxide etch processes, buildup of fluorocarbon residues on reactor sidewalls can cause run-to-run drift and will necessitate some time for conditioning and seasoning of the reactor. Though diagnostics can be applied to study and understand these phenomena, many of them are not practical for use in an industrial reactor. For instance, measurements of ion fluxes and energy by mass spectrometry show that the buildup of insulating fluorocarbon films on the reactor surface will cause a shift in both ion energy and current in an argon plasma. However, such a device cannot be easily integrated into a processing system. The shift in ion energy and flux will be accompanied by an increase in the capacitance of the plasma sheath. The shift in sheath capacitance can be easily measured by a common commercially available impedance probe placed on the inductive coil. A buildup of film on the chamber wall is expected to affect the production of fluorocarbon radicals, and thus the presence of such species in the optical emission spectrum of the plasma can be monitored as well. These two techniques are employed on a GEC (Gaseous Electronics Conference) Reference Cell to assess the validity of optical emission and impedance monitoring as a metric of chamber conditioning. These techniques are applied to experimental runs with CHF3 and CHF3/O2/Ar plasmas, with intermediate monitoring of pure argon plasmas as a reference case for chamber conditions.

  2. Classification of heart valve condition using acoustic measurements

    SciTech Connect

    Clark, G.

    1994-11-15

    Prosthetic heart valves and the many great strides in valve design have been responsible for extending the life spans of many people with serious heart conditions. Even though prosthetic valves are extremely reliable, they are eventually susceptible to the long-term fatigue and structural failure effects expected of mechanical devices operating over long periods of time. The purpose of our work is to classify the condition of in vivo Bjork-Shiley Convexo-Concave (BSCC) heart valves by processing acoustic measurements of heart valve sounds. The structural failure of interest for BSCC valves is called single leg separation (SLS). SLS can occur if the outlet strut cracks and separates from the main structure of the valve. We measure acoustic opening and closing sounds (waveforms) using high-sensitivity contact microphones on the patient's thorax. For our analysis, we focus our processing and classification efforts on the opening sounds because they yield direct information about outlet strut condition with minimal distortion caused by energy radiated from the valve disc.

  3. What about temperature? Measuring permeability at magmatic conditions.

    NASA Astrophysics Data System (ADS)

    Kushnir, Alexandra R. L.; Martel, Caroline; Champallier, Rémi; Reuschlé, Thierry

    2015-04-01

    The explosive potential of volcanoes is intimately linked to permeability, which is governed by the connectivity of the porous structure of the magma and surrounding edifice. As magma ascends, volatiles exsolve from the melt and expand, creating a gas phase within the conduit. In the absence of a permeable structure capable of dissipating these gases, the propulsive force of an explosive eruption arises from the gas expansion and the build-up of subsurface overpressures. Thus, characterizing the permeability of volcanic rocks under in-situ conditions (high temperature and pressure) allows us to better understand the outgassing potential and explosivity of volcanic systems. Current studies of the permeabilities of volcanic rocks generally measure permeability at room temperature using gas permeameters or model permeability using analytic imaging. Our goal is to perform and assess permeability measurements made at high temperature and high pressure in the interest of approaching the permeability of the samples at magmatic conditions. We measure the permeability of andesitic samples expelled during the 2010 Mt. Merapi eruption. We employ and compare two protocols for measuring permeability at high temperature and under high pressure using argon gas in an internally heated Paterson apparatus with an isolated pore fluid system. We first use the pulse decay method to measure the permeability of our samples, then compare these values to permeability measurements performed under steady state flow. We consider the steady state flow method the more rigorous of the two protocols, as it better accounts for the temperature gradient within the entire pore fluid system. At temperatures in excess of 700°C and pressures of 100 MPa, permeability values plummet by several orders of magnitude. These values are significantly lower than those commonly reported for room temperature permeameter measurements. The reduction in permeability at high temperature is a

  4. Safety Assessment for a Surface Repository in the Chernobyl Exclusion Zone - Methodology for Assessing Disposal under Intervention Conditions - 13476

    SciTech Connect

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-07-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, RWDF Buryakovka is still being operated, but its maximum capacity is nearly reached. Plans for enlargement of the facility have existed for more than 10 years but have not yet been implemented. In the framework of a European Commission project, DBE Technology GmbH prepared a safety analysis report on the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) based on the planned enlargement. Due to its history, RWDF Buryakovka does not fully comply with today's best international practices or the latest Ukrainian regulations in this area. The most critical aspects are its inventory of long-lived radionuclides and its lack of a multi-barrier waste confinement system. A significant part of the project was therefore dedicated to developing a methodology for the safety assessment that takes the facility's special situation into consideration, and to reaching an agreement with all stakeholders involved in the later review and approval procedure for the safety analysis reports. The main aspect of the agreed methodology was to analyze safety not strictly on the basis of regulatory requirements but on an assessment of the actual situation of the facility, including its location within the Exclusion Zone. For both safety analysis reports, SAR and PSAR, the assessment of long-term safety led to results that were either within regulatory limits or within limits allowing a specific situational evaluation by the regulator. (authors)

  5. The application of conditioning paradigms in the measurement of pain

    PubMed Central

    Li, Jun-Xu

    2013-01-01

    Pain is a private experience that involves both sensory and emotional components. Pain in animals can only be inferred from their responses, and measurements of reflexive responses have therefore dominated the pain literature for nearly a century. It has been argued that although reflexive responses are important for unveiling the sensory nature of pain in organisms, pain affect is equally important but largely ignored in pain studies, primarily due to the lack of validated animal models. One strategy for beginning to understand pain affect is to use conditioning principles to indirectly reveal the affective component of pain. This review critically analyzes several procedures that are thought to measure the affective learning of pain. Current knowledge of these procedures, their applications, and their advantages and disadvantages in pain research are discussed. It is proposed that these procedures should be combined with traditional reflex-based pain measurements in future studies of pain, which could greatly benefit both the understanding of the neural underpinnings of pain and the preclinical assessment of novel analgesics. PMID:23500202

  6. Anonymous indexing of health conditions for a similarity measure.

    PubMed

    Song, Insu; Marsh, Nigel V

    2012-07-01

    A health social network is an online information service which facilitates information sharing between closely related members of a community with the same or a similar health condition. Over the years, many automated recommender systems have been developed for social networking in order to help users find their communities of interest. For health social networking, the ideal source of information for measuring similarities between patients is the medical information of the patients. However, it is not desirable that such sensitive and private information be shared over the Internet. This is also true for many other security-sensitive domains. A new information-sharing scheme is developed in which each patient is represented by a small number of (possibly disjoint) d-words (discriminant words), and the d-words are used to measure similarities between patients without revealing sensitive personal information. The d-words are simple words like "food", and thus do not contain identifiable personal information. This makes our method an effective one-way hashing of patient assessments for a similarity measure. The d-words can be easily shared on the Internet to find peers who might have similar health conditions. PMID:22531815
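The scheme compares patients through their shared d-words. Any set-overlap metric illustrates the principle; the sketch below uses Jaccard similarity and invented d-words, which are illustrative assumptions rather than the paper's exact metric or vocabulary.

```python
# Each patient is represented by a small set of non-identifying
# discriminant words (d-words); similarity is computed on the sets alone,
# so no raw medical record is ever shared.
def jaccard(a, b):
    """Jaccard similarity: |intersection| / |union| of two d-word sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

patient_1 = {"food", "sleep", "fatigue"}
patient_2 = {"food", "fatigue", "mobility"}
print(jaccard(patient_1, patient_2))  # 2 shared of 4 total -> 0.5
```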

  7. Current psychometric and methodological issues in the measurement of overgeneral autobiographical memory.

    PubMed

    Griffith, James W; Sumner, Jennifer A; Raes, Filip; Barnhofer, Thorsten; Debeer, Elise; Hermans, Dirk

    2012-12-01

    Autobiographical memory is a multifaceted construct that is related to psychopathology and other difficulties in functioning. Across many studies, a variety of methods have been used to study autobiographical memory. The relationship between overgeneral autobiographical memory (OGM) and psychopathology has been of particular interest, and many studies of this cognitive phenomenon rely on the Autobiographical Memory Test (AMT) to assess it. In this paper, we examine several methodological approaches to studying autobiographical memory, and focus primarily on methodological and psychometric considerations in OGM research. We pay particular attention to what is known about the reliability, validity, and methodological variations of the AMT. The AMT has adequate psychometric properties, but there is great variability in methodology across studies that use it. Methodological recommendations and suggestions for future studies are presented. PMID:23200427

  8. High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.

    1997-01-01

    To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons explored in this paper, the Anderson current loop is the preferred method of signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel, as detailed here.

  9. Development and validation of a continuous measure of patient condition using the Electronic Medical Record.

    PubMed

    Rothman, Michael J; Rothman, Steven I; Beals, Joseph

    2013-10-01

    Patient condition is a key element in communication between clinicians. However, there is no generally accepted definition of patient condition that is independent of diagnosis and that spans acuity levels. We report the development and validation of a continuous measure of general patient condition that is independent of diagnosis and that can be used for medical-surgical as well as critical care patients. A survey of Electronic Medical Record data identified common, frequently collected, non-static candidate variables as the basis for a general, continuously updated patient condition score. We used a new methodology to estimate the in-hospital risk associated with each of these variables. A risk function for each candidate input was computed by comparing the final pre-discharge measurements with 1-year post-discharge mortality. Step-wise logistic regression of the variables against 1-year mortality was used to determine the importance of each variable. The final set of selected variables consisted of 26 clinical measurements from four categories: nursing assessments, vital signs, laboratory results, and cardiac rhythms. We then constructed a heuristic model quantifying patient condition (overall risk) by summing the single-variable risks. The model's validity was assessed against outcomes from 170,000 medical-surgical and critical care patients, using data from three US hospitals. Outcome validation across hospitals yields an area under the receiver operating characteristic curve (AUC) of ≥0.92 when separating hospice/deceased from all other discharge categories, an AUC of ≥0.93 when predicting 24-h mortality, and an AUC of 0.62 when predicting 30-day readmissions. Correspondence with outcomes reflective of patient condition across the acuity spectrum indicates utility in both medical-surgical units and critical care units. The model output, which we call the Rothman Index, may provide clinicians with a longitudinal view of patient condition to help address known
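The heuristic structure described (sum per-variable excess risks into one condition score, then validate against outcomes with an AUC) can be sketched generically. The risk curves, variables, and patients below are invented for illustration; they are not the published Rothman Index functions.

```python
import numpy as np

# Hypothetical single-variable excess-risk functions (illustrative shapes).
def heart_rate_risk(hr):
    # U-shaped excess risk, lowest near 70 bpm.
    return 0.001 * (hr - 70.0) ** 2

def sodium_risk(na):
    # Excess risk grows with deviation from 140 mmol/L.
    return 0.004 * np.abs(na - 140.0) ** 1.5

def condition_score(hr, na):
    # Overall risk is the sum of the single-variable risks.
    return heart_rate_risk(hr) + sodium_risk(na)

def auc(scores, labels):
    """Rank-based AUC: probability a positive case outranks a negative one."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    grid = pos[:, None] - neg[None, :]
    return (grid > 0).mean() + 0.5 * (grid == 0).mean()

# Hypothetical patients: label 1 = adverse outcome.
hr = np.array([72, 68, 75, 130, 140, 55, 71, 125])
na = np.array([140, 139, 141, 128, 155, 132, 138, 150])
labels = np.array([0, 0, 0, 1, 1, 1, 0, 1])
scores = condition_score(hr, na)
print("AUC =", auc(scores, labels))
```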

  10. Innovative Methodologies for thermal Energy Release Measurement: case of La Solfatara volcano (Italy)

    NASA Astrophysics Data System (ADS)

    Marfe`, Barbara; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Marotta, Enrica; Peluso, Rosario

    2015-04-01

    This work is devoted to improving knowledge of the parameters that control the heat flux anomalies associated with the diffuse degassing processes of volcanic and hydrothermal areas. The methodologies currently used to measure heat flux (i.e., CO2 flux or temperature gradient) are either inefficient or of limited effectiveness, and are unable to detect short- to medium-term (days to months) trends in the heat flux. A new method, based on the use of thermal imaging cameras, has been applied to estimate the heat flux and its time variations. This approach allows faster heat flux measurement than the already accredited methods, thereby improving the definition of the activity state of a volcano and allowing a better assessment of the related hazard and risk mitigation. The idea is to extrapolate the heat flux from the ground surface temperature, which in a purely conductive regime is directly correlated with the shallow temperature gradient. We use thermal imaging cameras at short distances (meters to hundreds of meters) to quickly map areas with thermal anomalies and measure their temperature. Preliminary studies have been carried out throughout the whole La Solfatara crater to investigate a possible correlation between the surface temperature and the shallow thermal gradient. We used a FLIR SC640 thermal camera and K-type thermocouples to make the two measurements at the same time. Results suggest a good correlation between the shallow temperature gradient ΔTs and the background-corrected surface temperature Ts, and although the campaigns took place over a period of a few years, this correlation appears to be stable over time. This is an extremely motivating result for the further development of a measurement method based only on the use of a small-range thermal imaging camera. Surveys with thermal cameras may be done manually, using a tripod to take thermal images of small contiguous areas and then joining

  11. Thermal-Diffusivity Measurements of Mexican Citrus Essential Oils Using Photoacoustic Methodology in the Transmission Configuration

    NASA Astrophysics Data System (ADS)

    Muñoz, G. A. López; González, R. F. López; López, J. A. Balderas; Martínez-Pérez, L.

    2011-05-01

    Photoacoustic methodology in the transmission configuration (PMTC) was used to study the thermophysical properties of Mexican citrus essential oils and their relation to composition, demonstrating the viability of using photothermal techniques for quality control and for the authentication of oils and detection of their adulteration. Linear relations for the amplitude (on a semi-log scale) and phase, as functions of sample thickness, were obtained for the PMTC through a theoretical model fit to the experimental data, yielding thermal-diffusivity measurements in Mexican orange, pink grapefruit, mandarin, and lime type A centrifuged essential oils, and in Mexican distilled lime essential oil. Gas chromatography of the distilled lime essential oil and the centrifuged lime essential oil type A is reported to complement the study. Experimental results showed similar thermal-diffusivity values among the Mexican citrus essential oils obtained by centrifugation, but a significant difference between this physical property for distilled lime oil and the corresponding value obtained by centrifugation, which is due to the different chemical compositions resulting from the two extraction processes.
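In the thermally thick regime of the transmission configuration, the photoacoustic phase varies linearly with sample thickness l, with slope -sqrt(pi*f/alpha), so the thermal diffusivity alpha follows from a straight-line fit. The sketch below uses invented data; the modulation frequency, diffusivity value, and noise level are illustrative assumptions.

```python
import numpy as np

# phase = const - l * sqrt(pi * f / alpha)  (thermally thick regime),
# so alpha = pi * f / slope**2 from a linear fit of phase vs thickness.
f = 1.0              # modulation frequency, Hz (illustrative)
alpha_true = 9.0e-8  # m^2/s, illustrative diffusivity

rng = np.random.default_rng(1)
l = np.linspace(50e-6, 300e-6, 12)  # sample thicknesses (m)
phase = 1.2 - l * np.sqrt(np.pi * f / alpha_true) + rng.normal(0, 0.005, l.size)

slope, intercept = np.polyfit(l, phase, 1)
alpha_fit = np.pi * f / slope**2
print(f"fitted thermal diffusivity = {alpha_fit:.2e} m^2/s")
```

The semi-log amplitude gives an independent estimate of the same slope, which is a useful internal consistency check.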

  12. Three-dimensional force measurements on oral implants: a methodological study.

    PubMed

    Duyck, J; Van Oosterwyck, H; De Cooman, M; Puers, R; Vander Sloten, J; Naert, I

    2000-09-01

    This paper describes a methodology that allows in vitro and in vivo quantification and qualification of forces on oral implants. Strain gauges are attached to the outer surface of 5.5 and 7 mm standard abutments (Brånemark System, Nobel Biocare, Sweden). The readings of the strain gauges are transformed into a numerical representation of the normal force and the bending moments around the X- and Y-axes. The hardware and software of the 3D measuring device, based on strain gauge technology, are explained and its accuracy and reliability tested. The accuracy levels for axial forces and bending moments are 9.72 N and 2.5 N x cm, respectively, based on the current techniques for strain-gauged abutments. As an example, an in vivo force analysis was performed in a patient with a full fixed prosthesis in the mandible. Since axial loads of 450 N and bending moments of 70 N x cm were recorded, it was concluded that the accuracy of the device falls well within the scope of our needs. Nevertheless, more in vivo research is needed before well-defined conclusions can be drawn and strategies developed to improve the biomechanics of oral implants. PMID:11012848
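The core transformation (bridge strains into an axial force and two bending moments) is a linear calibration, which can be sketched as follows. The calibration matrix and loads are hypothetical, not the paper's values.

```python
import numpy as np

# Three strain-gauge bridges on an abutment respond linearly to the axial
# force Fz (N) and bending moments Mx, My (N*cm):  strains = C @ loads,
# with C determined beforehand from known calibration loads.
C = np.array([[2.0e-6,  1.5e-5, -0.2e-5],
              [2.0e-6, -0.8e-5,  1.3e-5],
              [2.0e-6, -0.7e-5, -1.1e-5]])   # strain per unit load (hypothetical)

loads_true = np.array([450.0, 30.0, -20.0])  # Fz, Mx, My
strains = C @ loads_true                     # simulated bridge readings

# Recover the loads from measured strains by inverting the calibration.
loads_est = np.linalg.solve(C, strains)
print("Fz, Mx, My =", loads_est)
```

With noisy readings or more than three gauges, `np.linalg.lstsq` would replace the exact solve.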

  13. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data

    PubMed Central

    Holsclaw, Tracy; Hallgren, Kevin A.; Steyvers, Mark; Smyth, Padhraic; Atkins, David C.

    2015-01-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased type-I and type-II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in supplementary materials. PMID:26098126
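A weighted negative binomial regression of the kind recommended here can be fit directly by maximum likelihood. This sketch simulates overdispersed count data and recovers the parameters; the variable names and the choice of observation weights are illustrative assumptions (the paper itself provides SPSS and R syntax).

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

def nb_negloglik(params, X, y, w):
    """Weighted NB2 negative log-likelihood with a log link."""
    beta, log_alpha = params[:-1], params[-1]
    alpha = np.exp(np.clip(log_alpha, -10, 10))   # overdispersion parameter
    mu = np.exp(np.clip(X @ beta, -20, 20))       # conditional mean
    r = 1.0 / alpha
    ll = (gammaln(y + r) - gammaln(r) - gammaln(y + 1)
          + r * np.log(r / (r + mu)) + y * np.log(mu / (r + mu)))
    return -np.sum(w * ll)   # w: per-observation weights, e.g. for session length

# Simulated data: intercept 0.5, slope 0.8, alpha 0.5 (r = 2).
rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
mu = np.exp(0.5 + 0.8 * X[:, 1])
y = rng.negative_binomial(n=2.0, p=2.0 / (2.0 + mu))
w = rng.uniform(0.5, 1.5, n)   # hypothetical weights

res = minimize(nb_negloglik, x0=np.zeros(3), args=(X, y, w), method="BFGS")
b0, b1, alpha_hat = res.x[0], res.x[1], np.exp(res.x[2])
print(f"intercept={b0:.2f}, slope={b1:.2f}, alpha={alpha_hat:.2f}")
```

In practice the same model is available through `statsmodels` or R's `MASS::glm.nb`; the explicit likelihood above just makes the weighting transparent.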

  14. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates the nature in which these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials. PMID:26098126

  15. An analysis of the pilot point methodology for automated calibration of an ensemble of conditionally simulated transmissivity fields

    USGS Publications Warehouse

    Cooley, R.L.

    2000-01-01

    An analysis of the pilot point method for automated calibration of an ensemble of conditionally simulated transmissivity fields was conducted on the basis of the simplifying assumption that the flow model is a linear function of log transmissivity. The analysis shows that the pilot point and conditional simulation method of model calibration and uncertainty analysis can produce accurate uncertainty measures if it can be assumed that errors of unknown origin in the differences between observed and model-computed water pressures are small. When this assumption is not met, the method could yield significant errors from overparameterization and the neglect of potential sources of model inaccuracy. The conditional simulation part of the method is also shown to be a variant of the percentile bootstrap method, so that when applied to a nonlinear model, the method is subject to bootstrap errors. These sources of error must be considered when using the method.
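The abstract notes that the conditional-simulation step is a variant of the percentile bootstrap: an uncertainty interval is read off the empirical distribution of an ensemble statistic. The generic recipe looks like this; the lognormal "transmissivity" sample is purely illustrative, not the groundwater model itself.

```python
import numpy as np

rng = np.random.default_rng(3)
# Illustrative stand-in for a transmissivity data set (lognormal is a
# common assumption for transmissivity fields).
sample = rng.lognormal(mean=0.0, sigma=1.0, size=200)

# Percentile bootstrap: resample with replacement, recompute the statistic,
# and take percentiles of the resulting ensemble as the uncertainty interval.
boot_means = np.array([rng.choice(sample, sample.size, replace=True).mean()
                       for _ in range(2000)])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"95% percentile-bootstrap CI for the mean: [{lo:.2f}, {hi:.2f}]")
```

The bootstrap errors the abstract warns about arise exactly here: when the model is nonlinear, percentiles of the resampled ensemble need not bracket the true parameter at the nominal rate.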

  16. A METHODOLOGY TO INTEGRATE MAGNETIC RESONANCE AND ACOUSTIC MEASUREMENTS FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Jorge O. Parra; Chris L. Hackert; Lorna L. Wilson

    2002-09-20

    The work reported herein represents the third year of development efforts on a methodology to interpret magnetic resonance and acoustic measurements for reservoir characterization. In this last phase of the project we characterize a vuggy carbonate aquifer in the Hillsboro Basin, Palm Beach County, South Florida, using two data sets--the first generated by velocity tomography and the second generated by reflection tomography. First, we integrate optical macroscopic (OM), scanning electron microscope (SEM) and x-ray computed tomography (CT) images, as well as petrography, as a first step in characterizing the aquifer pore system. This pore scale integration provides information with which to evaluate nuclear magnetic resonance (NMR) well log signatures for NMR well log calibration, interpret ultrasonic data, and characterize flow units at the field scale between two wells in the aquifer. Saturated and desaturated NMR core measurements estimate the irreducible water in the rock and the variable T{sub 2} cut-offs for the NMR well log calibration. These measurements establish empirical equations to extract permeability from NMR well logs. Velocity and NMR-derived permeability and porosity relationships integrated with velocity tomography (based on crosswell seismic measurements recorded between two wells 100 m apart) capture two flow units that are supported with pore scale integration results. Next, we establish a more detailed picture of the complex aquifer pore structures and the critical role they play in water movement, which aids in our ability to characterize not only carbonate aquifers, but reservoirs in general. We analyze petrography and cores to reveal relationships between the rock physical properties that control the compressional and shear wave velocities of the formation. 
A digital thin section analysis provides the pore size distributions of the rock matrix, which allows us to relate pore structure to permeability and to characterize flow units at the
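
    The calibrated permeability equations themselves are not given in the abstract; a common generic form for NMR-derived permeability is the SDR model, sketched below with illustrative coefficients that are not the paper's calibration.

```python
import numpy as np

# SDR-style empirical model: k = C * phi**m * T2lm**n. The coefficient and
# exponents here are illustrative defaults, not values fitted in the study.
def sdr_permeability(phi, t2lm_ms, c=4.0, m=4.0, n=2.0):
    """Permeability (mD) from porosity (fraction) and log-mean T2 (ms)."""
    return c * phi**m * t2lm_ms**n

phi = np.array([0.15, 0.25, 0.35])      # porosity fraction
t2lm = np.array([50.0, 120.0, 300.0])   # log-mean T2 in ms
print(sdr_permeability(phi, t2lm))
```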

  17. A Novel Fiber Bragg Grating Based Sensing Methodology for Direct Measurement of Surface Strain on Body Muscles during Physical Exercises

    NASA Astrophysics Data System (ADS)

    Prasad Arudi Subbarao, Guru; Subbaramajois Narasipur, Omkar; Kalegowda, Anand; Asokan, Sundarrajan

    2012-07-01

    The present work proposes a new sensing methodology, which uses Fiber Bragg Gratings (FBGs) to measure in vivo the surface strain and strain rate on calf muscles while performing certain exercises. Two simple exercises, namely ankle dorsi-flexion and ankle plantar-flexion, have been considered and the strain induced on the medial head of the gastrocnemius muscle while performing these exercises has been monitored. The real time strain generated has been recorded and the results are compared with those obtained using a commercial Color Doppler Ultrasound (CDU) system. It is found that the proposed sensing methodology is promising for surface strain measurements in biomechanical applications.
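
    Strain retrieval from an FBG rests on the photo-elastic relation Δλ_B/λ_B = (1 − p_e)·ε; a sketch with typical silica-fiber values follows (illustrative numbers, not the study's calibration).

```python
# Strain from an FBG wavelength shift. p_e ~ 0.22 is a typical effective
# photo-elastic coefficient for silica fiber; values below are illustrative.
def fbg_strain(lambda_b_nm, delta_lambda_nm, p_e=0.22):
    """Return axial strain (dimensionless) from the Bragg wavelength shift."""
    return delta_lambda_nm / (lambda_b_nm * (1.0 - p_e))

strain = fbg_strain(lambda_b_nm=1550.0, delta_lambda_nm=0.012)
print(f"{strain * 1e6:.0f} microstrain")
```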

  18. Methodological issues in ethnic and racial identity research with ethnic minority populations: theoretical precision, measurement issues, and research designs.

    PubMed

    Schwartz, Seth J; Syed, Moin; Yip, Tiffany; Knight, George P; Umaña-Taylor, Adriana J; Rivas-Drake, Deborah; Lee, Richard M

    2014-01-01

    This article takes stock of research methods employed in the study of racial and ethnic identity with ethnic minority populations. The article is presented in three parts. The first section reviews theories, conceptualizations, and measurement of ethnic and racial identity (ERI) development. The second section reviews theories, conceptualizations, and measurement of ERI content. The final section reviews key methodological and analytic principles that are important to consider for both ERI development and content. The article concludes with suggestions for future research addressing key methodological limitations when studying ERI. PMID:24490892

  19. A simple methodology to calibrate local meltrate function from ground observation in absence of density measurements

    NASA Astrophysics Data System (ADS)

    Mariani, Davide; Guyennon, Nicolas; Maggioni, Margherita; Salerno, Franco; Romano, Emanuele

    2015-04-01

    Snowpack melting can represent a major contribution to the seasonal variability of the surface and ground water budget at the basin scale: the snowpack, acting as a natural reservoir, stores water during the winter season and releases it during spring and summer. The radiative budget driving the melting process depends on numerous variables that may be affected by ongoing climatic changes. As a result, a shift in time of the spring and summer discharge may significantly affect surface water management at the basin scale. For this reason, a reliable model able to quantitatively describe the snowmelt processes is also very important for management purposes. The estimation of the snowmelt rate requires a full energy (mass) balance assessment of the snowpack. The limited availability of the necessary data often does not allow implementing a radiative (mass) balance model. As an alternative, we propose here a simple methodology to reconstruct the daily snowmelt and the associated melt rate function based only on solid precipitation, mean air temperature and snowpack depth measurements, since snow density observations are often missing. The model differentiates between the melting and compaction processes based on a daily mean temperature threshold (i.e., above or below the freezing point) and the snowpack state. The snowpack is described by a two-layer model, each layer with its own depth and density. The first layer is a fresh snow surface layer whose density is a constant parameter; it is modulated by the daily snowfall/melting budget, or it can be compacted and embedded within the second layer. The second layer is the ripe snow, whose density is a depth-weighted average of the antecedent snowpack and any first-layer contribution. The two snow layers allow melting to start at the top of the snowpack, where the density is lower, while most of the water is released when the ripe snow starts melting during the late melting season. Finally, we estimate the associated degree-day and
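
    The daily melt step of such a two-layer degree-day scheme can be sketched as follows, with invented parameter values (the paper's calibrated melt rate function is not given in the abstract):

```python
# Two-layer degree-day sketch (all parameter values assumed for illustration):
# fresh snow at a fixed density sits on top of ripe snow; melt is taken as
# DDF * max(T - T0, 0) and consumed from the top layer first.
DDF = 3.5        # degree-day factor, mm w.e. per degC per day (assumed)
T0 = 0.0         # melt threshold temperature, degC
RHO_FRESH = 100  # fresh-snow density, kg/m3 (assumed constant parameter)

def step(fresh_swe, ripe_swe, snowfall_swe, t_mean):
    """Advance one day: accumulate on the fresh layer, then melt top-down.
    Water equivalents are in mm."""
    fresh_swe += snowfall_swe
    melt = DDF * max(t_mean - T0, 0.0)
    from_fresh = min(melt, fresh_swe)            # fusion starts at the top layer
    from_ripe = min(melt - from_fresh, ripe_swe)
    return fresh_swe - from_fresh, ripe_swe - from_ripe, from_fresh + from_ripe

fresh, ripe = 20.0, 80.0   # mm water equivalent in each layer
fresh, ripe, melt = step(fresh, ripe, snowfall_swe=0.0, t_mean=5.0)
print(fresh, ripe, melt)   # melt = 3.5 * 5 = 17.5 mm, all from the fresh layer
```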

  20. Reverse micellar extraction of lectin from black turtle bean (Phaseolus vulgaris): optimisation of extraction conditions by response surface methodology.

    PubMed

    He, Shudong; Shi, John; Walid, Elfalleh; Zhang, Hongwei; Ma, Ying; Xue, Sophia Jun

    2015-01-01

    Lectin from black turtle bean (Phaseolus vulgaris) was extracted and purified by the reverse micellar extraction (RME) method. Response surface methodology (RSM) was used to optimise the processing parameters for both the forward and backward extractions. Hemagglutinating activity analysis, SDS-PAGE, RP-HPLC and FTIR techniques were used to characterise the lectin. The optimum extraction conditions were determined as 77.59 mM NaCl, pH 5.65 and 127.44 mM sodium bis(2-ethylhexyl) sulfosuccinate (AOT) for the forward extraction, and 592.97 mM KCl and pH 8.01 for the backward extraction. The yield was 63.21 ± 2.35 mg protein/g bean meal with a purification factor of 8.81 ± 0.17. The efficiency of RME was confirmed by SDS-PAGE and RP-HPLC. FTIR analysis indicated that there were no significant changes in the secondary protein structure. Comparison with a conventional chromatographic method confirmed that RME can be used for the purification of lectin from the crude extract. PMID:25053033

  1. Non-invasive ultrasound based temperature measurements at reciprocating screw plastication units: Methodology and applications

    NASA Astrophysics Data System (ADS)

    Straka, Klaus; Praher, Bernhard; Steinbichler, Georg

    2015-05-01

    Previous attempts to accurately measure the real polymer melt temperature in the screw chamber as well as in the screw channels have failed on account of the challenging metrological boundary conditions (high pressure, high temperature, rotational and axial screw movement). We developed a novel ultrasound system, based on reflection measurements, for the online determination of these important process parameters. Using available pressure-volume-temperature (pvT) data for a polymer, it is possible to estimate the density and adiabatic compressibility of the material and therefore the pressure- and temperature-dependent longitudinal ultrasound velocity. From the measured ultrasonic reflection times from the screw root and barrel wall, together with the pressure, it is possible to calculate the mean temperature in the screw channel or in the chamber in front of the screw (in contrast to flush-mounted infrared or thermocouple probes). By means of the system described above, we are able to measure axial profiles of the mean temperature in the screw chamber. The data gathered by the measurement system can be used to develop control strategies for the plastication process that reduce temperature gradients within the screw chamber, or as input data for injection moulding simulation.
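
    The inversion step can be sketched as follows: from the echo delay across a known channel depth, obtain the mean sound speed, then invert a velocity model c(T, p). The linear model below is invented for illustration; a real application would derive c(T, p) from the polymer's pvT data.

```python
from scipy.optimize import brentq

# Hypothetical linear velocity model for a polymer melt (assumed, not measured):
# sound speed falls with temperature and rises slightly with pressure.
def c_model(T, p):
    """Longitudinal sound speed (m/s) vs melt temperature (degC) and pressure (bar)."""
    return 2000.0 - 2.5 * (T - 200.0) + 0.1 * p

def mean_temperature(echo_delay_s, path_m, p_bar):
    """Invert the measured two-way echo delay to a path-averaged temperature."""
    c_meas = 2.0 * path_m / echo_delay_s          # two-way travel over the channel
    return brentq(lambda T: c_model(T, p_bar) - c_meas, 100.0, 400.0)

# Example: 10 mm channel depth, 200 bar, 10.4 us two-way delay.
T = mean_temperature(10.4e-6, 0.010, 200.0)
print(round(T, 1))
```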

  2. Practical Experience of Discharge Measurement in Flood Conditions with ADP

    NASA Astrophysics Data System (ADS)

    Vidmar, A.; Brilly, M.; Rusjan, S.

    2009-04-01

    Accurate discharge estimation is important for efficient river basin management and especially for flood forecasting. The traditional way of estimating discharge in hydrological practice is to measure the water stage and to convert the recorded water stage values into discharge by using a single-valued rating curve. The stage-discharge relationship of the rating curve for extreme events is usually extrapolated by different mathematical methods rather than measured directly. Our practice shows that by using an Acoustic Doppler Profiler (ADP) instrument we can successfully record the actual relation between the water stage and the flow velocity during flood waves. Measurement in flood conditions is not an easy task because of high water surface velocities, large amounts of sediment in the water, and floating objects on the surface such as branches, bushes, trees and piles, which can easily damage the ADP instrument. We made several measurements during such extreme events on the Sava River downstream of the Krško nuclear power plant, where we have installed a fixed cableway. During several measurements with the traditional "moving-boat" technique a moving-bed phenomenon was clearly seen. To measure flow accurately with an ADP using the "moving-boat" technique, the system needs a static reference against which to relate water velocities. This reference is the river bed, which must not move. During flood events we had difficulty finding a static bed surface to which to relate water velocities. This is caused by motion of the surface layer of bed material, or by sediment suspended very densely in the water near the bed. As a result, the traditional "moving-boat" measurement techniques that we normally use completely fail. Using the stationary measurement method, making individual velocity profile measurements with the Acoustic Doppler Profiler (ADP) at fixed locations across the width of the stream gave
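
    With stationary profiles at fixed verticals, discharge is conventionally assembled with the mid-section method; a sketch with invented numbers (not data from the Sava):

```python
import numpy as np

# Mid-section discharge: each vertical contributes v_i * d_i * w_i, where the
# width w_i spans half the distance to each neighbouring vertical. Edge
# stations carry half-widths. All numbers below are illustrative.
def midsection_discharge(stations, depths, velocities):
    stations, depths, velocities = map(np.asarray, (stations, depths, velocities))
    q = 0.0
    for i in range(len(stations)):
        left = stations[0] if i == 0 else 0.5 * (stations[i - 1] + stations[i])
        right = stations[-1] if i == len(stations) - 1 else 0.5 * (stations[i] + stations[i + 1])
        q += velocities[i] * depths[i] * (right - left)
    return q

x = [0.0, 5.0, 10.0, 15.0, 20.0]     # distance across the stream, m
d = [0.0, 2.0, 3.0, 2.0, 0.0]        # depth at each vertical, m
v = [0.0, 0.8, 1.2, 0.9, 0.0]        # mean velocity at each vertical, m/s
print(midsection_discharge(x, d, v))  # discharge in m3/s
```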

  3. Development of a system to measure local measurement conditions around textile electrodes.

    PubMed

    Kim, Saim; Oliveira, Joana; Roethlingshoefer, Lisa; Leonhard, Steffen

    2010-01-01

    The three main factors influencing the interface between a textile electrode and the skin are temperature, contact pressure and relative humidity. This paper presents first results from a prototype that measures these local measurement conditions around textile electrodes. The wearable prototype is a microcontroller-based data acquisition system with a flexible sensor sleeve. Validation measurements included variation of the ambient temperature, contact pressure and sleeve material. The results show good agreement with data found in the literature. PMID:21096676

  4. Lab measurements to support modeling terahertz propagation in brownout conditions

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Grice, Phillip M.; Krizo, Matthew J.; Bartell, Richard J.; Haiducek, John D.; Cusumano, Salvatore J.

    2010-04-01

    Brownout, the loss of visibility caused by dust and debris introduced into the atmosphere by the downwash of a helicopter, currently represents a serious challenge to U.S. military operations in Iraq and Afghanistan, where it has been cited as a factor in the majority of helicopter accidents. Brownout not only reduces visibility, but can also create visual illusions for the pilot and difficult conditions for crews beneath the aircraft. Terahertz imaging may provide one solution to this problem. Terahertz-frequency radiation readily propagates through the dirt aerosols present in brownout and can therefore provide an imaging capability that improves effective visibility for pilots, helping prevent the associated accidents. To properly model the performance of such systems, it is necessary to determine the optical properties of these obscurants in the terahertz regime. This research attempts to determine empirically, through laboratory measurements, the full complex-index-of-refraction optical properties of dirt aerosols representative of brownout conditions. These properties are incorporated into the AFIT/CDE Laser Environmental Effects Definition and Reference (LEEDR) software, allowing this program to assess the propagation of terahertz radiation under brownout conditions more accurately than was previously possible with estimated optical properties.

  5. Inference of human affective states from psychophysiological measurements extracted under ecologically valid conditions

    PubMed Central

    Betella, Alberto; Zucca, Riccardo; Cetnarski, Ryszard; Greco, Alberto; Lanatà, Antonio; Mazzei, Daniele; Tognetti, Alessandro; Arsiwalla, Xerxes D.; Omedas, Pedro; De Rossi, Danilo; Verschure, Paul F. M. J.

    2014-01-01

    Compared to standard laboratory protocols, the measurement of psychophysiological signals in real world experiments poses technical and methodological challenges due to external factors that cannot be directly controlled. To address this problem, we propose a hybrid approach based on an immersive and human accessible space called the eXperience Induction Machine (XIM), that incorporates the advantages of a laboratory within a life-like setting. The XIM integrates unobtrusive wearable sensors for the acquisition of psychophysiological signals suitable for ambulatory emotion research. In this paper, we present results from two different studies conducted to validate the XIM as a general-purpose sensing infrastructure for the study of human affective states under ecologically valid conditions. In the first investigation, we recorded and classified signals from subjects exposed to pictorial stimuli corresponding to a range of arousal levels, while they were free to walk and gesticulate. In the second study, we designed an experiment that follows the classical conditioning paradigm, a well-known procedure in the behavioral sciences, with the additional feature that participants were free to move in the physical space, as opposed to similar studies measuring physiological signals in constrained laboratory settings. Our results indicate that, by using our sensing infrastructure, it is indeed possible to infer human event-elicited affective states through measurements of psychophysiological signals under ecological conditions. PMID:25309310

  6. A METHODOLOGY FOR ESTIMATING ARMY TRAINING AND TESTING AREA CARRYING CAPACITY (ATTACC) VEHICLE SEVERITY FACTORS AND LOCAL CONDITION FACTORS

    EPA Science Inventory

    The Army Training and Testing Area Carrying Capacity (ATTACC) program is a methodology for estimating training and testing land carrying capacity. The methodology is used to determine land rehabilitation and maintenance costs associated with land-based training. ATTACC is part of...

  7. "MARK I" MEASUREMENT METHODOLOGY FOR POLLUTION PREVENTION PROGRESS OCCURRING AS A RESULT OF PRODUCT DECISIONS

    EPA Science Inventory

    A methodology for assessing progress in pollution prevention resulting from product redesign, reformulation or replacement is described. The method compares the pollution generated by the original product with that from the modified or replacement product, taking into account, if...

  8. Suspended matter concentrations in coastal waters: Methodological improvements to quantify individual measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Röttgers, Rüdiger; Heymann, Kerstin; Krasemann, Hajo

    2014-12-01

    Measurements of total suspended matter (TSM) concentration and the discrimination of the particulate inorganic (PIM) and organic matter fractions by the loss-on-ignition method are susceptible to significant and contradictory bias errors from: (a) retention of sea salt in the filter (despite washing with deionized water), and (b) loss of filter material during the washing and combustion procedures. Several methodological procedures have been described to avoid or correct the errors associated with these biases, but no analysis of the final uncertainty of the overall mass concentration determination has yet been performed. Typically, the exact values of these errors are unknown and can only be estimated. Measurements were performed in coastal and estuarine waters of the German Bight that allowed the individual error for each sample to be determined with respect to a systematic mass offset. This was achieved by using different volumes of the sample and analyzing the mass-over-volume relationship by linear regression. The results showed that the variation in the mass offset is much larger than expected (mean mass offset: 0.85 ± 0.84 mg, range: -2.4 to 7.5 mg) and that it often leads to rather large relative errors even when TSM concentrations are high. Similarly large variations were found in the mass offset for PIM measurements. Correction with a mean offset determined from procedural control filters reduced the maximum error to <60%. The determination error for the TSM concentration was <40% when three different volumes were used, and for the majority of the samples the error was <10%. When six different volumes were used and outliers removed, the error was always <25%, and errors of only a few percent were often obtained. The approach proposed here can determine the individual determination error for each sample, is independent of bias errors, can be used for TSM and PIM determination, and allows individual quality control for samples from coastal and estuarine waters. It should
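
    The volume-series regression at the heart of the method can be sketched with synthetic data: filtering the same water at several volumes and regressing retrieved mass on volume separates the concentration (slope) from the volume-independent mass offset (intercept).

```python
import numpy as np

# Synthetic volume series for one sample: mass = TSM * volume + offset + noise.
# The slope recovers the TSM concentration; the intercept is the per-sample
# mass offset (salt retention minus filter material loss).
volume_l = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])   # filtered volumes, L
true_tsm = 12.0          # mg/L (assumed for the simulation)
offset = 0.9             # mg, bias independent of volume
rng = np.random.default_rng(3)
mass_mg = true_tsm * volume_l + offset + rng.normal(0.0, 0.2, volume_l.size)

slope, intercept = np.polyfit(volume_l, mass_mg, 1)
print(round(slope, 1), round(intercept, 1))  # ~concentration (mg/L), ~offset (mg)
```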

  9. Optimization of Extraction Conditions for Maximal Phenolic, Flavonoid and Antioxidant Activity from Melaleuca bracteata Leaves Using the Response Surface Methodology.

    PubMed

    Hou, Wencheng; Zhang, Wei; Chen, Guode; Luo, Yanping

    2016-01-01

    Melaleuca bracteata is a yellow-leaved tree belonging to the genus Melaleuca. Species from this genus are known to be good sources of natural antioxidants; for example, the "tea tree oil" derived from M. alternifolia is used in food processing to extend the shelf life of products. In order to determine whether M. bracteata contains novel natural antioxidants, the components of M. bracteata ethanol extracts were analyzed by gas chromatography-mass spectrometry. Total phenolic and flavonoid contents were extracted and the antioxidant activities of the extracts evaluated. Single-factor experiments, a central composite rotatable design (CCRD) and response surface methodology (RSM) were used to optimize the extraction conditions for total phenolic content (TPC) and total flavonoid content (TFC). Ferric reducing power (FRP) and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·) scavenging capacity were used as the evaluation indices of antioxidant activity. The results showed that the main components of M. bracteata ethanol extracts are methyl eugenol (86.86%) and trans-cinnamic acid methyl ester (6.41%). The single-factor experiments revealed that the ethanol concentration is the key factor determining the TPC, TFC, FRP and DPPH· scavenging capacity. RSM results indicated that the optimum for all four evaluation indices was achieved by extracting for 3.65 days at 53.26°C in 34.81% ethanol. Under these conditions, the TPC, TFC, FRP and DPPH· scavenging capacity reached values of 88.6 ± 1.3 mg GAE/g DW, 19.4 ± 0.2 mg RE/g DW, 2.37 ± 0.01 mM Fe2+/g DW and 86.0 ± 0.3%, respectively, which were higher than those of the positive control, methyl eugenol (FRP 0.97 ± 0.02 mM, DPPH· scavenging capacity 58.6 ± 0.7%), at comparable concentrations. Therefore, the extracts of M. bracteata leaves have high antioxidant activity that is not attributable solely to methyl eugenol. Further research could lead to the development of a potent new natural antioxidant. PMID
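
    After the designed experiment, RSM reduces to fitting a low-order polynomial to the responses and locating its stationary point. A one-factor sketch with invented data follows (the study fitted three factors jointly):

```python
import numpy as np

# Fit a second-order response surface in one factor (ethanol concentration)
# to synthetic yield data and locate the stationary point, as RSM does after
# a central composite design. Factor levels and responses are invented.
x = np.array([15.0, 25.0, 35.0, 45.0, 55.0])      # ethanol, % v/v
y = np.array([55.0, 76.0, 88.0, 82.0, 60.0])      # e.g., TPC, mg GAE/g DW

c2, c1, c0 = np.polyfit(x, y, 2)                   # y ~ c2*x**2 + c1*x + c0
x_opt = -c1 / (2.0 * c2)                           # vertex of the fitted parabola
print(round(x_opt, 1))                             # optimum near the middle level
```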

  10. Nuclear Structure Measurements of Fermium-254 and Advances in Target Production Methodologies

    NASA Astrophysics Data System (ADS)

    Gothe, Oliver Ralf

    The Berkeley Gas-filled Separator (BGS) has been upgraded with a new gas control system that allows accurate control of hydrogen and helium gas mixtures. This greatly increases the capabilities of the separator by reducing background signals in the focal plane detector for asymmetric nuclear reactions. It has also been shown that gas mixtures can be used to focus the desired reaction products into a smaller area, thereby increasing the experimental efficiency. A new electrodeposition cell has been developed to produce metal oxide targets for experiments at the BGS. The new cell has been characterized and was used to produce americium targets for the production of element 115 in the reaction 243Am(48Ca,3n)288115. Additionally, a new method of producing targets for nuclear reactions was explored. A procedure for producing targets via Polymer Assisted Deposition (PAD) was developed, and targets produced via this method were tested using the nuclear reaction 208Pb(40Ar,4n)244Fm to determine their in-beam performance. It was determined that the silicon nitride backings used in this procedure are not feasible due to their crystal structures, and alternative backing materials have been tested and proposed. A previously unknown level in 254Fm has been identified at 985.7 keV utilizing a newly developed low-background coincidence apparatus. 254No was produced in the reaction 208Pb(48Ca,n)254No, and the reaction products were guided to the two-clover low-background detector setup via a recoil transfer chamber. The new level has been assigned a spin of 2- and has tentatively been identified as the octupole vibration in 254Fm. Transporting evaporation residues to a two-clover, low-background detector setup can effectively be used to perform gamma-spectroscopy measurements of nuclei that are not accessible by current common methodologies. This technique provides an excellent addition to previously available tools such as in-beam spectroscopy and gamma-ray tracking arrays.

  11. Measurement in Learning Games Evolution: Review of Methodologies Used in Determining Effectiveness of "Math Snacks" Games and Animations

    ERIC Educational Resources Information Center

    Trujillo, Karen; Chamberlin, Barbara; Wiburg, Karin; Armstrong, Amanda

    2016-01-01

    This article captures the evolution of research goals and methodologies used to assess the effectiveness and impact of a set of mathematical educational games and animations for middle-school aged students. The researchers initially proposed using a mixed model research design of formative and summative measures, such as user-testing,…

  12. Use of Response Surface Methodology to Optimize Culture Conditions for Hydrogen Production by an Anaerobic Bacterial Strain from Soluble Starch

    NASA Astrophysics Data System (ADS)

    Kieu, Hoa Thi Quynh; Nguyen, Yen Thi; Dang, Yen Thi; Nguyen, Binh Thanh

    2016-05-01

    Biohydrogen is a clean source of energy that produces no harmful byproducts during combustion, being a potential sustainable energy carrier for the future. Therefore, biohydrogen produced by anaerobic bacteria via dark fermentation has attracted attention worldwide as a renewable energy source. However, the hydrogen production capability of these bacteria depends on major factors such as substrate, iron-containing hydrogenase, reduction agent, pH, and temperature. In this study, the response surface methodology (RSM) with central composite design (CCD) was employed to improve the hydrogen production by an anaerobic bacterial strain isolated from animal waste in Phu Linh, Soc Son, Vietnam (PL strain). The hydrogen production process was investigated as a function of three critical factors: soluble starch concentration (8 g L-1 to 12 g L-1), ferrous iron concentration (100 mg L-1 to 200 mg L-1), and l-cysteine concentration (300 mg L-1 to 500 mg L-1). RSM analysis showed that all three factors significantly influenced hydrogen production. Among them, the ferrous iron concentration presented the greatest influence. The optimum hydrogen concentration of 1030 mL L-1 medium was obtained with 10 g L-1 soluble starch, 150 mg L-1 ferrous iron, and 400 mg L-1 l-cysteine after 48 h of anaerobic fermentation. The hydrogen concentration produced by the PL strain was doubled after using RSM. The obtained results indicate that RSM with CCD can be used as a technique to optimize culture conditions for enhancement of hydrogen production by the selected anaerobic bacterial strain. Hydrogen production from low-cost organic substrates such as soluble starch using anaerobic fermentation methods may be one of the most promising approaches.

  13. Feasibility of density and viscosity measurements under ammonothermal conditions

    NASA Astrophysics Data System (ADS)

    Steigerwald, Thomas G.; Alt, Nicolas S. A.; Hertweck, Benjamin; Schluecker, Eberhard

    2014-10-01

    With an eye on numerical simulations of ammonothermal growth of group III-V bulk single crystals, precise data for viscosity and density are strongly needed. In this work, changes in viscosity with temperature and pressure are traced with the ball viscometer developed here, in which the falling time is detected by acquiring the acoustic signal of the ball using a high-temperature structure-borne-noise acceleration sensor. The results for the viscosity of pure ammonia at ammonothermal conditions already show good accuracy. The apparatus is designed to measure density in addition to viscosity, by substitution of the rolling-ball material in later experiments. This is important because the density of the flowing fluid is not constant, owing to the change in GaN solubility in ammonia caused by the mineralizers that are obligatory in the ammonothermal process.

  14. Theoretical prediction of lung nodule measurement accuracy under different acquisition and reconstruction conditions

    NASA Astrophysics Data System (ADS)

    Hsieh, Jiang; Karau, Kelly

    2004-04-01

    Utilization of computed tomography (CT) for lung cancer screening has attracted significant research interest in recent years. Images reconstructed from CT studies are used for lung nodule characterization and three-dimensional lung lesion sizing. Methodologies have been developed to automatically identify and characterize lung nodules. In this paper, we analyze the impact of acquisition and reconstruction parameters on the accuracy of quantitative lung nodule characterization. The two major data acquisition parameters that affect the accuracy of lung nodule measurement are the acquisition mode and the slice aperture; acquisition modes include both axial and helical scans. The investigated reconstruction parameters are the reconstruction filter and the field of view. We first develop theoretical models that predict the system response under various acquisition and reconstruction conditions. These models allow clinicians to compare results obtained under different conditions and to make appropriate acquisition and reconstruction decisions. To validate our models, extensive phantom experiments were conducted, which demonstrated that the analytical models accurately predict the performance parameters under various conditions. Our study indicates that acquisition and reconstruction parameters can significantly affect the accuracy of nodule volume measurement. Consequently, when conducting quantitative analysis of lung nodules, especially in sequential growth studies, it is important to make appropriate adjustments and corrections to maintain the desired accuracy and to ensure effective patient management.
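
    A toy voxel-counting model illustrates how the slice aperture alone can bias volume estimates (illustrative geometry, not the paper's system-response model, which also accounts for the reconstruction filter and field of view):

```python
import numpy as np

# Estimate a spherical nodule's volume by counting voxels after sampling at a
# given slice thickness; coarser z-sampling degrades the volume estimate.
def voxel_volume(radius_mm, slice_mm, pix_mm=0.5):
    zs = np.arange(-radius_mm - slice_mm, radius_mm + slice_mm, slice_mm)
    xs = np.arange(-radius_mm - 1.0, radius_mm + 1.0, pix_mm)
    xx, yy, zz = np.meshgrid(xs, xs, zs, indexing="ij")
    inside = xx**2 + yy**2 + zz**2 <= radius_mm**2
    return inside.sum() * pix_mm * pix_mm * slice_mm

true_vol = 4.0 / 3.0 * np.pi * 5.0**3      # analytic volume of a 5 mm radius nodule
for t in (1.25, 5.0):                       # thin vs thick slices, mm
    err = (voxel_volume(5.0, t) - true_vol) / true_vol
    print(f"{t} mm slices: {100 * err:+.1f}% volume error")
```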

  15. A novel methodology to measure methane bubble sizes in the water column

    NASA Astrophysics Data System (ADS)

    Hemond, H.; Delwiche, K.; Senft-Grupp, S.; Manganello, T.

    2014-12-01

    The fate of methane ebullition from lake sediments depends on the initial bubble size. Rising bubbles are subject to dissolution, reducing the fraction of methane that ultimately enters the atmosphere while increasing concentrations of aqueous methane. Smaller bubbles not only rise more slowly, but also dissolve more rapidly than larger bubbles. Thus, understanding methane bubble size distributions in the water column is critical to predicting atmospheric methane emissions from ebullition. However, current methods of measuring methane bubble sizes in situ are resource-intensive, typically requiring divers, video equipment, sonar, or hydroacoustic instruments. The complexity and cost of these techniques point to the strong need for a simple, autonomous device that can measure bubble size distributions and be deployed unattended over long periods of time. We describe a bubble sizing device that can be moored in the subsurface and can intercept and measure the size of bubbles as they rise. The instrument uses a novel optical measurement technique with infrared LEDs and IR-sensitive photodetectors combined with a custom-designed printed circuit board. An on-board microcomputer handles the raw optical signals and stores the information needed to calculate bubble volume. The electronics are housed within a pressure case fabricated from standard PVC fittings and are powered by size C alkaline batteries. The bill-of-materials cost is less than $200, allowing us to deploy multiple sensors at various locations within Upper Mystic Lake, MA. This novel device will provide information on how methane bubble sizes may vary both spatially and temporally. We present data from tests under controlled laboratory conditions and from deployments in Upper Mystic Lake.

  16. Measurements of optical underwater turbulence under controlled conditions

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.; Gladysz, S.; Almeida de Sá Barros, R.; Matt, S.; Nootz, G. A.; Josset, D. B.; Hou, W.

    2016-05-01

    Laser beam propagation underwater is becoming an important research topic because of the high demand for its potential applications. Namely, the ability to image underwater at long distances is highly desired for scientific and military purposes, including submarine awareness, diver visibility, and mine detection. Optical communication in the ocean can provide covert data transmission at much higher rates than is available with acoustic techniques, and it is now desired for certain military and scientific applications that involve sending large quantities of data. Unfortunately, the underwater environment presents serious challenges for the propagation of laser beams. Even in clean ocean water, extinction due to absorption and scattering theoretically limits the useful range to a few attenuation lengths. However, extending laser light propagation to this theoretical limit leads to significant beam distortions due to optical underwater turbulence. Experiments show that the magnitude of the distortions caused by water temperature and salinity fluctuations can significantly exceed the magnitude of beam distortions due to atmospheric turbulence, even for relatively short propagation distances. We present direct measurements of optical underwater turbulence under the controlled conditions of a laboratory water tank, using two separate techniques involving a wavefront sensor and an LED array. These independent approaches will enable the development of an underwater turbulence power spectrum model based directly on spatial-domain measurements and will lead to accurate predictions of underwater beam propagation.

  17. The cost of a hospital ward in Europe: is there a methodology available to accurately measure the costs?

    PubMed

    Negrini, D; Kettle, A; Sheppard, L; Mills, G H; Edbrooke, D L

    2004-01-01

    Costing health care services has become a major requirement due to increasing demand for health care and technological advances. Several studies have been published describing the computation of the costs of hospital wards. The objective of this article is to examine the methodologies utilised and to describe the basic components of a standardised method which could be applied throughout Europe. Cost measurement, however, is a complex matter, and a lack of clarity exists in the terminology and the cost concepts utilised. The methods discussed in this review make it evident that there is a lack of standardised methodologies for the determination of accurate costs of hospital wards. A standardised costing methodology would facilitate comparisons, encourage economic evaluation within the ward and hence assist the decision-making process with regard to the efficient allocation of resources. PMID:15366283
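One basic component most ward-costing methods share is combining direct ward costs with an allocated share of hospital overhead. The sketch below is purely illustrative of that top-down idea; it is not a method endorsed by the review, and all figures are invented.

```python
def cost_per_patient_day(ward_direct, hospital_overhead, share, patient_days):
    """Top-down ward costing sketch: ward direct costs plus an
    allocated share of hospital overhead, divided by occupied
    patient-days (all inputs in the same currency unit)."""
    return (ward_direct + hospital_overhead * share) / patient_days

# Invented example: 900k direct, 5% of 2M overhead, 4000 patient-days.
cpd = cost_per_patient_day(900_000.0, 2_000_000.0, 0.05, 4_000)  # 250.0
```

The choice of the allocation key (`share` here) is exactly the kind of methodological decision the review argues needs standardisation.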

  18. A methodological evaluation of volumetric measurement techniques including three-dimensional imaging in breast surgery.

    PubMed

    Hoeffelin, H; Jacquemin, D; Defaweux, V; Nizet, J L

    2014-01-01

    Breast surgery currently remains very subjective and each intervention depends on the ability and experience of the operator. To date, no objective measurement of this anatomical region can codify surgery. In this light, we wanted to compare and validate a new technique for 3D scanning (LifeViz 3D) and its clinical application. We tested the use of the 3D LifeViz system (Quantificare) to perform volumetric calculations in various settings (in situ in cadaveric dissection, of control prostheses, and in clinical patients) and we compared this system to other techniques (CT scanning and Archimedes' principle) under the same conditions. We were able to identify the benefits (feasibility, safety, portability, and low patient stress) and limitations (underestimation of the in situ volume, subjectivity of contouring, and patient selection) of the LifeViz 3D system, concluding that the results are comparable with other measurement techniques. The prospects of this technology seem promising in numerous applications in clinical practice to limit the subjectivity of breast surgery. PMID:24511536

  19. Defining and Measuring Engagement and Learning in Science: Conceptual, Theoretical, Methodological, and Analytical Issues

    ERIC Educational Resources Information Center

    Azevedo, Roger

    2015-01-01

    Engagement is one of the most widely misused and overgeneralized constructs found in the educational, learning, instructional, and psychological sciences. The articles in this special issue represent a wide range of traditions and highlight several key conceptual, theoretical, methodological, and analytical issues related to defining and measuring…

  20. What Do We Measure? Methodological versus Institutional Validity in Student Surveys

    ERIC Educational Resources Information Center

    Johnson, Jeffrey Alan

    2011-01-01

    This paper examines the tension in the process of designing student surveys between the methodological requirements of good survey design and the institutional needs for survey data. Building on the commonly used argumentative approach to construct validity, I build an interpretive argument for student opinion surveys that allows assessment of the…

  1. The Continued Salience of Methodological Issues for Measuring Psychiatric Disorders in International Surveys

    ERIC Educational Resources Information Center

    Tausig, Mark; Subedi, Janardan; Broughton, Christopher; Pokimica, Jelena; Huang, Yinmei; Santangelo, Susan L.

    2011-01-01

    We investigated the extent to which methodological concerns explicitly addressed by the designers of the World Mental Health Surveys persist in the results that were obtained using the WMH-CIDI instrument. We compared rates of endorsement of mental illness symptoms in the United States (very high) and Nepal (very low) as they were affected by…

  2. Measuring Sense of Community: A Methodological Interpretation of the Factor Structure Debate

    ERIC Educational Resources Information Center

    Peterson, N. Andrew; Speer, Paul W.; Hughey, Joseph

    2006-01-01

    Instability in the factor structure of the Sense of Community Index (SCI) was tested as a methodological artifact. Confirmatory factor analyses, tested with two data sets, supported neither the proposed one-factor nor the four-factor (needs fulfillment, group membership, influence, and emotional connection) SCI. Results demonstrated that the SCI…

  3. The Measure of a Nation: The USDA and the Rise of Survey Methodology

    ERIC Educational Resources Information Center

    Mahoney, Kevin T.; Baker, David B.

    2007-01-01

    Survey research has played a major role in American social science. An outgrowth of efforts by the United States Department of Agriculture in the 1930s, the Division of Program Surveys (DPS) played an important role in the development of survey methodology. The DPS was headed by the ambitious and entrepreneurial Rensis Likert, populated by young…

  4. Measuring Team Shared Understanding Using the Analysis-Constructed Shared Mental Model Methodology

    ERIC Educational Resources Information Center

    Johnson, Tristan E.; O'Connor, Debra L.

    2008-01-01

    Teams are an essential part of successful performance in learning and work environments. Analysis-constructed shared mental model (ACSMM) methodology is a set of techniques where individual mental models are elicited and sharedness is determined not by the individuals who provided their mental models but by an analytical procedure. This method…

  5. Measuring the Regional Economic Importance of Early Care and Education: The Cornell Methodology Guide

    ERIC Educational Resources Information Center

    Ribeiro, Rosaria; Warner, Mildred

    2004-01-01

    This methodology guide is designed to help study teams answer basic questions about how to conduct a regional economic analysis of the child care sector. Specific examples are drawn from a local study, Tompkins County, NY, and two state studies, Kansas and New York, which the Cornell team conducted. Other state and local studies are also…

  6. Water depression storage under different tillage conditions: measuring and modelling

    NASA Astrophysics Data System (ADS)

    Giménez, R.; Campo, M. A.; González-Audicana, M.; Álvarez-Mozos, J.; Casalí, J.

    2012-04-01

    Water storage in surface depressions (DS) is an important process which affects infiltration, runoff and erosion. Since DS is driven by micro-relief, in agricultural soils it is strongly affected by tillage and by the direction of tillage rows relative to the main slope. Direct, accurate measurement of DS requires making the soil surface waterproof (soil is very permeable, especially under tillage) while preserving all details of soil roughness, including aggregates on the soil surface (micro-roughness), which makes it a laborious and time-consuming task. That is why hydrological and erosion models normally estimate DS using either empirical relationships based on a roughness index or numerical approaches. The aim of this work was (i) to measure DS directly in the field under different tillage conditions and (ii) to assess the performance of existing empirical 2D models and of a numerical 2D algorithm for DS estimation. Three tillage classes (mouldboard+roller, roller-compacted and chisel) in two tillage directions (parallel and perpendicular to the main slope) were assessed on an experimental hillslope (10% slope), defining six treatments. Experiments were carried out in twelve 1-m2 micro-plots delimited by metal sheets, i.e. two replicates per treatment. In each plot, the soil surface was gently impregnated with a waterproof white paint without altering micro-roughness. A known amount of water (stained with a blue dye) was poured over the surface with a measuring cup, and the excess water was captured in a gutter and measured. Soon after the experiment, pictures of the surface were taken in order to analyze the water storage pattern (from the stained water) by image processing. In addition, longitudinal height profiles were measured using a laser profilometer. Finally, infiltration rate was measured near each plot using a double-ring infiltrometer. For all the treatments, DS ranged from 2 mm to 17 mm. For the
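Among the empirical roughness-index relationships mentioned above, a commonly cited 2-D example is Onstad's (1984) model relating maximum depression storage to random roughness RR (cm) and slope S (%). It is used below purely as an illustration of the model class; it is not necessarily one of the models evaluated in this study.

```python
def depression_storage_mm(rr_cm, slope_pct):
    """Empirical maximum depression storage after Onstad (1984), as
    commonly quoted: MDS = 0.112*RR + 0.031*RR^2 - 0.012*RR*S,
    with RR (random roughness) in cm, S (slope) in percent, MDS in cm.
    Returned in mm; clamped at zero for steep, smooth surfaces."""
    mds_cm = 0.112 * rr_cm + 0.031 * rr_cm**2 - 0.012 * rr_cm * slope_pct
    return max(0.0, mds_cm * 10.0)

# On the 10% experimental slope, a rougher surface stores more water
# (RR values here are assumed, not the measured plot roughness):
smooth = depression_storage_mm(rr_cm=1.0, slope_pct=10.0)  # roller-like
rough = depression_storage_mm(rr_cm=4.0, slope_pct=10.0)   # chisel-like
```

Note the slope term: for a given roughness, DS decreases as the main slope steepens, consistent with the tillage-direction effects studied here.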

  7. A supervised vibration-based statistical methodology for damage detection under varying environmental conditions & its laboratory assessment with a scale wind turbine blade

    NASA Astrophysics Data System (ADS)

    Gómez González, A.; Fassois, S. D.

    2016-03-01

    The problem of vibration-based damage detection under varying environmental conditions and uncertainty is considered, and a novel, supervised, PCA-type statistical methodology is postulated. The methodology employs vibration data records from the healthy and damaged states of a structure under various environmental conditions. Unlike standard PCA-type methods in which a feature vector corresponding to the least important eigenvalues is formed in a single step, the postulated methodology uses supervised learning in which damaged-state data records are employed to sequentially form a feature vector by appending a transformed scalar element at a time under the condition that it optimally, among all remaining elements, improves damage detectability. This leads to the formulation of feature vectors with optimized sensitivity to damage, and thus high damage detectability. Within this methodology three particular methods, two non-parametric and one parametric, are formulated. These are validated and comparatively assessed via a laboratory case study focusing on damage detection on a scale wind turbine blade under varying temperature and the potential presence of sprayed water. Damage detection performance is shown to be excellent based on a single vibration response sensor and a limited frequency bandwidth.
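The supervised idea described above can be caricatured in a few lines: project vibration features onto the healthy-data PCA basis, then use damaged-state records to pick the transformed components that best separate the two states. The sketch below ranks components by a simple separation index in one batch step; the paper's actual sequential appending procedure and test statistics differ.

```python
import numpy as np

def damage_sensitive_features(healthy, damaged, n_keep=3):
    """Simplified sketch (not the authors' exact algorithm): project
    onto the healthy-data PCA basis, then rank each component score
    by a damage-separation index and keep the most sensitive ones."""
    mu = healthy.mean(axis=0)
    _, _, vt = np.linalg.svd(healthy - mu, full_matrices=False)
    h = (healthy - mu) @ vt.T           # healthy-state scores
    d = (damaged - mu) @ vt.T           # damaged-state scores
    # Separation index per component: |mean shift| / pooled std.
    sep = np.abs(h.mean(0) - d.mean(0)) / (
        0.5 * (h.std(0) + d.std(0)) + 1e-12)
    order = np.argsort(sep)[::-1]       # most damage-sensitive first
    return order[:n_keep], sep

rng = np.random.default_rng(1)
healthy = rng.normal(size=(200, 6))
damaged = rng.normal(size=(200, 6))
damaged[:, 4] += 2.0                    # "damage" shifts one direction
chosen, sep = damage_sensitive_features(healthy, damaged)
```

The contrast with standard PCA-type methods is visible here: the retained components are chosen for damage sensitivity, not for how little healthy-state variance they carry.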

  8. A new methodology of determining directional albedo models from Nimbus 7 ERB scanning radiometer measurements

    NASA Technical Reports Server (NTRS)

    House, Frederick B.

    1986-01-01

    The Nimbus 7 Earth Radiation Budget (ERB) data set is reviewed to examine its strong and weak points. In view of the timing of this report relative to the processing schedule of Nimbus 7 ERB observations, emphasis is placed on the methodology of interpreting the scanning radiometer data to develop directional albedo models. These findings enhance the value of the Nimbus 7 ERB data set and can be applied to the interpretation of both the scanning and nonscanning radiometric observations.

  9. A methodological frame for assessing benzene induced leukemia risk mitigation due to policy measures.

    PubMed

    Karakitsios, Spyros P; Sarigiannis, Dimosthenis Α; Gotti, Alberto; Kassomenos, Pavlos A; Pilidis, Georgios A

    2013-01-15

    The study relies on the development of a methodology for assessing the determinants that comprise the overall leukemia risk due to benzene exposure and how these are affected by outdoor and indoor air quality regulation. An integrated modeling environment was constructed, comprising traffic emissions, dispersion models, human exposure models and a coupled internal dose/biology-based dose-response risk assessment model, in order to assess benzene-imposed leukemia risk as well as the impact of traffic fleet renewal and smoking bans on these levels. Regarding traffic fleet renewal, several "what if" scenarios were tested. The detailed full-chain methodology was applied in a South-Eastern European urban setting in Greece, and a limited version of the methodology in Helsinki. The non-smoking population runs an average risk equal to 4.1·10(-5), compared to 23.4·10(-5) for smokers. The estimated lifetime risk for the examined occupational groups was 10-20% higher than the one estimated for the general public. Active smoking constitutes a dominant parameter for benzene-attributable leukemia risk, much stronger than any related activity, occupational or not. From the assessment of mitigation policies it was found that the associated leukemia risk in the optimum traffic fleet scenario could be reduced by up to 85% for non-smokers and up to 8% for smokers. By contrast, smoking bans provided smaller gains (7% for non-smokers, 1% for smokers), while for Helsinki smoking policies were found to be more efficient than traffic fleet renewal. The proposed methodology provides a general framework for assessing aggregated exposure and the consequent leukemia risk from benzene (incorporating mechanistic data), capturing exposure and internal dosimetry dynamics and translating changes in exposure determinants into actual changes in population risk, providing a valuable tool for risk management evaluation and consequently for policy support. PMID:23220388
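The headline mitigation numbers combine as simple multiplicative reductions; the following arithmetic sanity check just applies the reported maximum reductions to the reported baseline risks (no modeling of the study's full chain is attempted).

```python
# Baseline lifetime leukemia risks as reported in the abstract above.
non_smoker_risk = 4.1e-5
smoker_risk = 23.4e-5

# Optimum traffic-fleet scenario: up to 85% (non-smokers) and 8%
# (smokers) risk reduction, applied multiplicatively.
non_smoker_after = non_smoker_risk * (1.0 - 0.85)   # 6.15e-6
smoker_after = smoker_risk * (1.0 - 0.08)
```

Even after an 85% reduction, the residual non-smoker risk remains dominated by the smoker baseline, which is the abstract's central point about active smoking.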

  10. Martian dust threshold measurements: Simulations under heated surface conditions

    NASA Technical Reports Server (NTRS)

    White, Bruce R.; Greeley, Ronald; Leach, Rodman N.

    1991-01-01

    Diurnal changes in solar radiation on Mars set up a cycle of cooling and heating of the planetary boundary layer; this effect strongly influences the wind field. The stratification of the air layer is stable in early morning, since the ground is cooler than the air above it. When the ground is heated and becomes warmer than the air, its heat is transferred to the air above it. The heated parcels of air near the surface will, in effect, increase the near-surface wind speed, i.e. increase the aeolian stress the wind exerts upon the surface compared to an unheated or cooled surface. This means that for the same wind speed at a fixed height above the surface, ground-level shear stress will be greater over a heated surface than over an unheated one. Thus, it is possible to reach saltation threshold conditions at lower mean wind speeds when the surface is heated; even though the mean wind speed is less, the surface shear stress required to initiate particle movement remains the same in both cases. To investigate this phenomenon, low-density surface dust aeolian threshold measurements have been made in the MARSWIT wind tunnel located at NASA Ames Research Center, Moffett Field, California. The first series of tests examined threshold values of the 100 micron sand material. At 13 mb surface pressure, the unheated surface had a threshold friction speed of 2.93 m/s (corresponding approximately to a velocity of 41.4 m/s at a height of 1 meter), while the heated surface, at an equivalent bulk Richardson number of -0.02, yielded a threshold friction speed of 2.67 m/s (corresponding approximately to a velocity of 38.0 m/s at a height of 1 meter). This change represents an 8.8 percent decrease in threshold conditions for the heated case. These velocities are well within the threshold range observed by Arvidson et al., 1983. As the surface was heated, the threshold decreased. At a bulk Richardson number of -0.02 the threshold
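The quoted decrease of roughly 8.8 percent follows directly from the two reported threshold friction speeds:

```python
def pct_decrease(before, after):
    """Percent decrease from `before` to `after`."""
    return 100.0 * (before - after) / before

# Threshold friction speeds reported above (100-micron material, 13 mb):
unheated = 2.93   # m/s
heated = 2.67     # m/s, bulk Richardson number ~ -0.02
drop = pct_decrease(unheated, heated)   # ~8.9%, quoted as 8.8% above
```

The small difference from the quoted value is rounding; the friction speeds themselves are given to three significant figures.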

  11. A new perspective on binaural integration using response time methodology: super capacity revealed in conditions of binaural masking release

    PubMed Central

    Lentz, Jennifer J.; He, Yuan; Townsend, James T.

    2014-01-01

    This study applied reaction-time based methods to assess the workload capacity of binaural integration by comparing reaction time (RT) distributions for monaural and binaural tone-in-noise detection tasks. In the diotic contexts, an identical tone + noise stimulus was presented to each ear. In the dichotic contexts, an identical noise was presented to each ear, but the tone was presented to one of the ears 180° out of phase with respect to the other ear. Accuracy-based measurements have demonstrated a much lower signal detection threshold for the dichotic vs. the diotic conditions, but accuracy-based techniques do not allow for assessment of system dynamics or resource allocation across time. Further, RTs allow comparisons between these conditions at the same signal-to-noise ratio. Here, we apply a reaction-time based capacity coefficient, which provides an index of workload efficiency and quantifies the resource allocations for single ear vs. two ear presentations. We demonstrate that the release from masking generated by the addition of an identical stimulus to one ear is limited-to-unlimited capacity (efficiency typically less than 1), consistent with less gain than would be expected by probability summation. However, the dichotic presentation leads to a significant increase in workload capacity (increased efficiency)—most specifically at lower signal-to-noise ratios. These experimental results provide further evidence that configural processing plays a critical role in binaural masking release, and that these mechanisms may operate more strongly when the signal stimulus is difficult to detect, albeit still with nearly 100% accuracy. PMID:25202254
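The reaction-time-based capacity coefficient referred to above is, in Townsend's framework, a ratio of cumulative hazard functions: C(t) = H_AB(t) / (H_A(t) + H_B(t)) with H(t) = -log S(t), where S is the RT survivor function. The sketch below estimates it from empirical survivor functions; the simulated gamma RTs are placeholders, not study data, and the real analysis compares monaural and binaural conditions across the RT range.

```python
import numpy as np

def cumulative_hazard(rts, t):
    """H(t) = -log S(t) from the empirical RT survivor function."""
    surv = (np.asarray(rts, dtype=float) > t).mean()
    return -np.log(max(surv, 1e-12))    # guard against S(t) = 0

def capacity_coefficient(rt_both, rt_a, rt_b, t):
    """Townsend-style workload capacity C(t) = H_AB / (H_A + H_B).
    C > 1 indicates super capacity, C < 1 limited capacity."""
    return cumulative_hazard(rt_both, t) / (
        cumulative_hazard(rt_a, t) + cumulative_hazard(rt_b, t))

rng = np.random.default_rng(2)
rt_left = rng.gamma(5.0, 60.0, 500)     # illustrative one-ear RTs (ms)
rt_right = rng.gamma(5.0, 60.0, 500)
rt_both = rng.gamma(5.0, 45.0, 500)     # faster two-ear RTs
c = capacity_coefficient(rt_both, rt_left, rt_right, t=250.0)
```

Evaluating C(t) over a grid of t values, rather than a single point, is what lets this approach reveal the dynamics that accuracy-based thresholds miss.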

  13. Understanding Physical Conditions in High-redshift Galaxies Through C I Fine Structure Lines: Data and Methodology

    NASA Astrophysics Data System (ADS)

    Jorgenson, Regina A.; Wolfe, Arthur M.; Prochaska, J. Xavier

    2010-10-01

    We probe the physical conditions in high-redshift galaxies, specifically, the damped Lyα systems (DLAs), using neutral carbon (C I) fine structure lines and molecular hydrogen (H2). We report five new detections of C I and analyze the C I in an additional two DLAs with previously published data. We also present one new detection of H2 in a DLA. We present a new method of analysis that simultaneously constrains both the volume density and the temperature of the gas, as opposed to previous studies that a priori assumed a gas temperature. We use only the column density of C I measured in the fine structure states and the assumption of ionization equilibrium in order to constrain the physical conditions in the gas. We present a sample of 11 C I velocity components in six DLAs and compare their properties to those derived by the global C II* technique. The resulting median values for this sample are ⟨n(H I)⟩ = 69 cm^-3, ⟨T⟩ = 50 K, and ⟨log(P/k)⟩ = 3.86 cm^-3 K, with standard deviations σ_n(H I) = 134 cm^-3, σ_T = 52 K, and σ_log(P/k) = 3.68 cm^-3 K. This can be compared with the integrated median values for the same DLAs: ⟨n(H I)⟩ = 2.8 cm^-3, ⟨T⟩ = 139 K, and ⟨log(P/k)⟩ = 2.57 cm^-3 K, with standard deviations σ_n(H I) = 3.0 cm^-3, σ_T = 43 K, and σ_log(P/k) = 0.22 cm^-3 K. Interestingly, the pressures measured in these high-redshift C I clouds are similar to those found in the Milky Way. We conclude that the C I gas is tracing a higher-density, higher-pressure region, possibly indicative of post-shock gas or a photodissociation region on the edge of a molecular cloud. We speculate that these clouds may be direct probes of the precursor sites of star formation in normal galaxies at high redshift. Some of the data presented herein were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space
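The pressure quantity quoted above is the ideal-gas combination P/k = nT (in cm^-3 K). As a quick check, the integrated medians reproduce the quoted value closely; note that medians of separately ranked per-component quantities need not satisfy the identity exactly, which is why the per-component set (69 cm^-3, 50 K, 3.86) is not expected to multiply out the same way.

```python
import math

def log10_pressure_over_k(n_cm3, temp_k):
    """log10(P/k) with P/k = n * T (ideal gas), n in cm^-3 and T in K,
    giving P/k in cm^-3 K as quoted in the abstract."""
    return math.log10(n_cm3 * temp_k)

# Integrated medians quoted above: n(H I) = 2.8 cm^-3, T = 139 K.
lp = log10_pressure_over_k(2.8, 139.0)   # ~2.59, close to the quoted 2.57
```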

  15. Methodological challenges in measuring vaccine effectiveness using population cohorts in low resource settings.

    PubMed

    King, C; Beard, J; Crampin, A C; Costello, A; Mwansambo, C; Cunliffe, N A; Heyderman, R S; French, N; Bar-Zeev, N

    2015-09-11

    Post-licensure real world evaluation of vaccine implementation is important for establishing evidence of vaccine effectiveness (VE) and programme impact, including indirect effects. Large cohort studies offer an important epidemiological approach for evaluating VE, but have inherent methodological challenges. Since March 2012, we have conducted an open prospective cohort study in two sites in rural Malawi to evaluate the post-introduction effectiveness of 13-valent pneumococcal conjugate vaccine (PCV13) against all-cause post-neonatal infant mortality and monovalent rotavirus vaccine (RV1) against diarrhoea-related post-neonatal infant mortality. Our study sites cover a population of 500,000, with a baseline post-neonatal infant mortality of 25 per 1000 live births. We conducted a methodological review of cohort studies for vaccine effectiveness in a developing country setting, applied to our study context. Based on published literature, we outline key considerations when defining the denominator (study population), exposure (vaccination status) and outcome ascertainment (mortality and cause of death) of such studies. We assess various definitions in these three domains in terms of their impact on power, effect size and potential biases and their direction, using our cohort study for illustration. Based on this iterative process, we discuss the pros and cons of our final per-protocol analysis plan. Since no single set of definitions or analytical approach accounts for all possible biases, we propose sensitivity analyses to interrogate our assumptions and methodological decisions. In the poorest regions of the world, where routine vital birth and death surveillance are frequently unavailable and the burden of disease and death is greatest, we conclude that, provided the balance between definitions and their overall assumed impact on estimated VE is acknowledged, such large scale real-world cohort studies can provide crucial information to policymakers by providing
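The estimand such a cohort design targets is VE = 1 - rate ratio, with incidence rates computed from cases and person-time in the vaccinated and unvaccinated groups. The figures below are invented for illustration, not study data; the abstract's point is that the denominator, exposure and outcome definitions feeding this formula each carry bias.

```python
def vaccine_effectiveness(cases_vax, py_vax, cases_unvax, py_unvax):
    """VE = 1 - rate ratio from a cohort: incidence rate among the
    vaccinated divided by incidence rate among the unvaccinated."""
    rate_vax = cases_vax / py_vax
    rate_unvax = cases_unvax / py_unvax
    return 1.0 - rate_vax / rate_unvax

# Invented example: 30 deaths over 20,000 child-years vaccinated
# vs 50 deaths over 10,000 child-years unvaccinated -> VE = 0.70.
ve = vaccine_effectiveness(30, 20_000, 50, 10_000)
```

Sensitivity analyses of the kind the authors propose amount to recomputing this estimator under alternative definitions of each input.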

  17. Neurolinguistic measures of typological effects in multilingual transfer: introducing an ERP methodology

    PubMed Central

    Rothman, Jason; Alemán Bañón, José; González Alonso, Jorge

    2015-01-01

    This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial stages transfer of morphosyntax, one goal of this program is to show how initial stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for and demonstrate how this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments. PMID:26300800
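Event-related potentials are obtained by averaging many time-locked EEG epochs so that random noise cancels while the evoked response survives; condition contrasts are then assessed on difference waves. A minimal sketch with synthetic data (not the L3-transfer paradigm itself; all signal parameters are invented):

```python
import numpy as np

def erp(epochs):
    """Average time-locked epochs (trials x samples) into an ERP."""
    return np.asarray(epochs, dtype=float).mean(axis=0)

rng = np.random.default_rng(3)
n_trials, n_samples = 200, 100
evoked = np.zeros(n_samples)
evoked[40:60] = 1.0                                   # toy evoked deflection
cond_a = evoked + rng.normal(0.0, 5.0, (n_trials, n_samples))
cond_b = rng.normal(0.0, 5.0, (n_trials, n_samples))  # no-effect condition
difference_wave = erp(cond_a) - erp(cond_b)
```

Averaging N trials shrinks the noise by a factor of sqrt(N), which is why single-trial EEG is unusable for effects this small but the difference wave recovers the deflection.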

  19. Grapevine Yield and Leaf Area Estimation Using Supervised Classification Methodology on RGB Images Taken under Field Conditions

    PubMed Central

    Diago, Maria-Paz; Correa, Christian; Millán, Borja; Barreiro, Pilar; Valero, Constantino; Tardaguila, Javier

    2012-01-01

    The aim of this research was to implement a methodology through the generation of a supervised classifier based on the Mahalanobis distance to characterize the grapevine canopy and assess leaf area and yield using RGB images. The method automatically processes sets of images, and calculates the areas (number of pixels) corresponding to seven different classes (Grapes, Wood, Background, and four classes of Leaf, of increasing leaf age). Each one is initialized by the user, who selects a set of representative pixels for every class in order to induce the clustering around them. The proposed methodology was evaluated with 70 grapevine (V. vinifera L. cv. Tempranillo) images, acquired in a commercial vineyard located in La Rioja (Spain), after several defoliation and de-fruiting events on 10 vines, with a conventional RGB camera and no artificial illumination. The segmentation results showed a performance of 92% for leaves and 98% for clusters, and allowed to assess the grapevine’s leaf area and yield with R2 values of 0.81 (p < 0.001) and 0.73 (p = 0.002), respectively. This methodology, which operates with a simple image acquisition setup and guarantees the right number and kind of pixel classes, has shown to be suitable and robust enough to provide valuable information for vineyard management. PMID:23235443
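The classifier described above assigns each pixel to the class with the smallest Mahalanobis distance, with class statistics induced from the user-selected representative pixels. The sketch below shows that core step for two of the seven classes; the RGB class means and spreads are invented stand-ins for real vineyard pixels.

```python
import numpy as np

def fit_classes(train):
    """Per-class (mean, inverse covariance) from user-selected RGB
    pixels; `train` maps class name -> (n, 3) pixel array."""
    stats = {}
    for name, px in train.items():
        px = np.asarray(px, dtype=float)
        mu = px.mean(axis=0)
        cov = np.cov(px, rowvar=False) + 1e-6 * np.eye(3)  # regularized
        stats[name] = (mu, np.linalg.inv(cov))
    return stats

def classify(pixels, stats):
    """Assign each (n, 3) pixel to the class with the smallest
    squared Mahalanobis distance."""
    pixels = np.asarray(pixels, dtype=float)
    names = list(stats)
    d = np.stack([np.einsum('ij,jk,ik->i', pixels - mu, icov, pixels - mu)
                  for mu, icov in (stats[n] for n in names)], axis=1)
    return [names[i] for i in d.argmin(axis=1)]

rng = np.random.default_rng(4)
train = {"Grapes": rng.normal([60, 40, 90], 5, (50, 3)),
         "Leaf":   rng.normal([60, 160, 60], 5, (50, 3))}
stats = fit_classes(train)
labels = classify([[58, 42, 88], [62, 158, 63]], stats)
```

Unlike a plain Euclidean nearest-mean rule, the Mahalanobis distance accounts for each class's own color covariance, which matters under the uncontrolled field illumination described above.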

  1. Shape measurement by a multi-view methodology based on the remote tracking of a 3D optical scanner

    NASA Astrophysics Data System (ADS)

    Barone, Sandro; Paoli, Alessandro; Razionale, Armando Viviano

    2012-03-01

    Full field optical techniques can be reliably used for 3D measurements of complex shapes by multi-view processes, which require the computation of the transformation parameters relating different views to a common reference system. Although several multi-view approaches have been proposed, the alignment process is still the crucial step of a shape reconstruction. In this paper, a methodology to automatically align 3D views has been developed by integrating a stereo vision system and a full field optical scanner. In particular, the stereo vision system is used to remotely track the optical scanner within a working volume. The tracking system uses stereo images to detect the 3D coordinates of retro-reflective infrared markers rigidly connected to the scanner. Stereo correspondences are established by a robust methodology combining the epipolar geometry with an image spatial transformation constraint. The proposed methodology has been validated by experimental tests regarding both the evaluation of the measurement accuracy and the 3D reconstruction of an industrial shape.
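The view-alignment step — estimating the rigid transform that maps tracked marker coordinates into a common reference frame — can be sketched with the standard SVD-based (Kabsch) solution for corresponding 3D points. This is a generic illustration, not the authors' implementation:

```python
# Kabsch/Procrustes sketch: best-fit rotation R and translation t
# such that dst ≈ R @ src + t, from corresponding 3D marker points.
import numpy as np

def rigid_transform(src, dst):
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    h = (src - c_src).T @ (dst - c_dst)       # 3x3 cross-covariance
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))    # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c_dst - r @ c_src
    return r, t

# Synthetic check: recover a known rotation/translation from 4 markers
rng = np.random.default_rng(3)
markers = rng.uniform(-1, 1, (4, 3))
theta = 0.3
r_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
t_true = np.array([0.5, -0.2, 1.0])
moved = markers @ r_true.T + t_true
r_est, t_est = rigid_transform(markers, moved)
```

With three or more non-collinear markers the rotation is uniquely determined, which is why tracked scanners typically carry a small rigid constellation of reflective targets.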

  2. Measurements methodology for evaluation of Digital TV operation in VHF high-band

    NASA Astrophysics Data System (ADS)

    Pudwell Chaves de Almeida, M.; Vladimir Gonzalez Castellanos, P.; Alfredo Cal Braz, J.; Pereira David, R.; Saboia Lima de Souza, R.; Pereira da Soledade, A.; Rodrigues Nascimento Junior, J.; Ferreira Lima, F.

    2016-07-01

    This paper describes the experimental setup of field measurements carried out to evaluate the operation of ISDB-TB (Integrated Services Digital Broadcasting, Terrestrial, Brazilian version) standard digital TV in the VHF high band. Measurements were performed in urban and suburban areas of a medium-sized Brazilian city. Besides direct measurements of received power and environmental noise, a measurement procedure involving the injection of additive Gaussian noise was employed to reach the signal-to-noise ratio threshold at each measurement site. The analysis includes results of static reception measurements evaluating the received field strength and the signal-to-noise ratio thresholds for correct signal decoding.
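The noise-injection procedure can be read as solving a simple power budget for the added noise that brings the signal-to-noise ratio down to a target threshold. The formula below is that generic budget, not a documented detail of the campaign, and the example figures are invented:

```python
# Hedged sketch: how much Gaussian noise power (in dBm) must be injected so
# that environmental noise plus injected noise yields a target SNR.
import math

def added_noise_for_target_snr(signal_dbm, env_noise_dbm, target_snr_db):
    s_mw = 10 ** (signal_dbm / 10)
    n_env_mw = 10 ** (env_noise_dbm / 10)
    n_total_mw = s_mw / 10 ** (target_snr_db / 10)   # total noise at target SNR
    n_add_mw = n_total_mw - n_env_mw
    if n_add_mw <= 0:
        raise ValueError("environmental noise already exceeds the target")
    return 10 * math.log10(n_add_mw)

# e.g. a -50 dBm signal over -95 dBm environmental noise, 20 dB target SNR
n_add = added_noise_for_target_snr(-50.0, -95.0, 20.0)
```

Sweeping the injected power downward until decoding fails locates the decoding threshold described in the abstract.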

  3. Experimental Measurements And Evaluation Of Indoor Microclimate Conditions

    NASA Astrophysics Data System (ADS)

    Kraliková, Ružena; Sokolová, Hana

    2015-07-01

    The paper deals with the monitoring of a workplace where technological equipment produces heat during hot summer days. The thermo-hygric microclimate measurement took place during a daily work shift and was carried out at five chosen measuring points. Since radiant heat was present in the workplace and workers worked at different places, the thermal environment was classified as heterogeneous and non-stationary. The measurement, result processing and interpretation were carried out according to the valid legislation of the Slovak Republic.

  4. A Novel Methodology for Measurements of an LED's Heat Dissipation Factor

    NASA Astrophysics Data System (ADS)

    Jou, R.-Y.; Haung, J.-H.

    2015-12-01

    Heat generation is an inevitable byproduct with high-power light-emitting diode (LED) lighting. The increase in junction temperature that accompanies the heat generation sharply degrades the optical output of the LED and has a significant negative influence on the reliability and durability of the LED. For these reasons, the heat dissipation factor, Kh, is an important factor in modeling and thermal design of LED installations. In this study, a methodology is proposed and experiments are conducted to determine LED heat dissipation factors. Experiments are conducted for two different brands of LED. The average heat dissipation factor of the Edixeon LED is 0.69, and is 0.60 for the OSRAM LED. By using the developed test method and comparing the results to the calculated luminous fluxes using theoretical equations, the interdependence of optical, electrical, and thermal powers can be predicted with a reasonable accuracy. The difference between the theoretical and experimental values is less than 9 %.
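The abstract does not define Kh explicitly. If it is read as the fraction of electrical input power dissipated as heat rather than emitted as light — an assumption on our part, not the authors' stated definition — it follows from paired electrical and optical power measurements:

```python
# Hedged sketch (assumed definition): Kh = heat power / electrical input power,
# with heat power taken as electrical input minus emitted optical power.
def heat_dissipation_factor(p_electrical_w, p_optical_w):
    if not 0 <= p_optical_w <= p_electrical_w:
        raise ValueError("optical power must lie between 0 and the input power")
    return (p_electrical_w - p_optical_w) / p_electrical_w

# e.g. a hypothetical 3 W LED emitting 0.93 W of radiant power
kh = heat_dissipation_factor(3.0, 0.93)
```

Under this reading, the reported averages (0.69 and 0.60) would mean roughly 60-70% of input power ends up as heat to be managed by the thermal design.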

  5. Creating ‘obesogenic realities’; do our methodological choices make a difference when measuring the food environment?

    PubMed Central

    2013-01-01

    Background The use of Geographical Information Systems (GIS) to objectively measure ‘obesogenic’ food environment (foodscape) exposure has become common-place. This increase in usage has coincided with the development of a methodologically heterogeneous evidence-base, with subsequent perceived difficulties for inter-study comparability. However, when used together in previous work, different types of food environment metric have often demonstrated some degree of covariance. Differences and similarities between density and proximity metrics, and within methodologically different conceptions of density and proximity metrics need to be better understood. Methods Frequently used measures of food access were calculated for North East England, UK. Using food outlet data from local councils, densities of food outlets per 1000 population and per km2 were calculated for small administrative areas. Densities (counts) were also calculated based on population-weighted centroids of administrative areas buffered at 400/800/1000m street network and Euclidean distances. Proximity (street network and Euclidean distances) from these centroids to the nearest food outlet were also calculated. Metrics were compared using Spearman’s rank correlations. Results Measures of foodscape density and proximity were highly correlated. Densities per km2 and per 1000 population were highly correlated (rs = 0.831). Euclidean and street network based measures of proximity (rs = 0.865) and density (rs = 0.667-0.764, depending on neighbourhood size) were also highly correlated. Density metrics based on administrative areas and buffered centroids of administrative areas were less strongly correlated (rs = 0.299-0.658). Conclusions Density and proximity metrics were largely comparable, with some exceptions. Whilst results suggested a substantial degree of comparability across existing studies, future comparability could be ensured by moving towards a more standardised set of
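The density metrics and rank correlations described in this abstract can be illustrated with a small sketch on synthetic data. The values below are invented, not from the study:

```python
# Illustrative sketch (hypothetical data): two of the foodscape density
# metrics described above, compared with Spearman's rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
areas = 20
outlet_counts = rng.integers(0, 50, areas)        # food outlets per area
populations = rng.integers(1_000, 20_000, areas)  # residents per area
area_km2 = rng.uniform(0.5, 10.0, areas)          # area size in km^2

density_per_1000 = outlet_counts / populations * 1000
density_per_km2 = outlet_counts / area_km2

rho, p = spearmanr(density_per_1000, density_per_km2)
```

The study's comparison of network-buffer versus Euclidean-buffer counts follows the same pattern, just with distance-based neighbourhood definitions feeding the counts.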

  6. Conditioning of sexual proceptivity in female quail: Measures of conditioned place preference

    PubMed Central

    Gutiérrez, Germán; Domjan, Michael

    2011-01-01

    The present experiments were conducted to explore the nature of conditioned sexual proceptivity in female quail. Females exposed to males subsequently approached the area where the males were previously housed (Experiment 1). This increased preference for the male’s area reflected an increase in female sexual proceptivity and not an increase in non-directed locomotor activity (Experiment 2). These findings provide the first evidence that female quail show conditioned responses that may be considered to be proceptive responses toward male conspecifics. The proceptive responses are expressed as tonic changes in preference for areas where males have been observed in the past rather than as specific phasic conditioned responses. PMID:21664442

  7. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Jorge O.; Hackert, Chris L.; Collier, Hughbert A.; Bennett, Michael

    2002-01-29

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate NMR techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This was accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging were linked with a balanced petrographical analysis of the core and theoretical modeling.

  8. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, J.O.

    2001-01-26

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate magnetic resonance (MR) techniques and acoustic measurements to improve predictability of the pay zone in two hydrocarbon reservoirs. This was accomplished by extracting the fluid property parameters using MR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurements were compared with petrographic analysis results to determine the relative roles of petrographic elements such as porosity type, mineralogy, texture, and distribution of clay and cement in creating permeability heterogeneity.

  9. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Ph.D., Jorge O.

    2002-06-10

    The objective of the project was to develop an advanced imaging method, including pore scale imaging, to integrate nuclear magnetic resonance (NMR) techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This will be accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging were linked with a balanced petrographical analysis of cores and theoretical modeling.

  10. A methodology for investigating interdependencies between measured throughfall, meteorological variables and canopy structure on a small catchment.

    NASA Astrophysics Data System (ADS)

    Maurer, Thomas; Gustavos Trujillo Siliézar, Carlos; Oeser, Anne; Pohle, Ina; Hinz, Christoph

    2016-04-01

    In evolving initial landscapes, vegetation development depends on a variety of feedback effects. One of the less understood feedback loops is the interaction between throughfall and plant canopy development. The amount of throughfall is governed by the characteristics of the vegetation canopy, whereas vegetation pattern evolution may in turn depend on the spatio-temporal distribution of throughfall. Meteorological factors that may influence throughfall, while at the same time interacting with the canopy, are e.g. wind speed, wind direction and rainfall intensity. Our objective is to investigate how throughfall, vegetation canopy and meteorological variables interact in an exemplary eco-hydrological system in its initial development phase, in which the canopy is very heterogeneous and rapidly changing. For that purpose, we developed a methodological approach combining field methods, raster image analysis and multivariate statistics. The research area for this study is the Hühnerwasser ('Chicken Creek') catchment in Lower Lusatia, Brandenburg, Germany, where after eight years of succession, the spatial distribution of plant species is highly heterogeneous, leading to increasingly differentiated throughfall patterns. The constructed 6-ha catchment offers ideal conditions for our study due to the rapidly changing vegetation structure and the availability of complementary monitoring data. Throughfall data were obtained by 50 tipping bucket rain gauges arranged in two transects and connected via a wireless sensor network that cover the predominant vegetation types on the catchment (locust copses, dense sallow thorn bushes and reeds, base herbaceous and medium-rise small-reed vegetation, and open areas covered by moss and lichens). The spatial configuration of the vegetation canopy for each measurement site was described via digital image analysis of hemispheric photographs of the canopy using the ArcGIS Spatial Analyst, GapLight and ImageJ software. 
Meteorological data

  11. Measuring Science Teachers' Stress Level Triggered by Multiple Stressful Conditions

    ERIC Educational Resources Information Center

    Halim, Lilia; Samsudin, Mohd Ali; Meerah, T. Subahan M.; Osman, Kamisah

    2006-01-01

    The complexity of science teaching requires science teachers to encounter a range of tasks. Some tasks are perceived as stressful while others are not. This study aims to investigate the extent to which different teaching situations lead to different stress levels. It also aims to identify the easiest and most difficult conditions to be regarded…

  12. Measurement of Phonated Intervals during Four Fluency-Inducing Conditions

    ERIC Educational Resources Information Center

    Davidow, Jason H.; Bothe, Anne K.; Andreatta, Richard D.; Ye, Jun

    2009-01-01

    Purpose: Previous investigations of persons who stutter have demonstrated changes in vocalization variables during fluency-inducing conditions (FICs). A series of studies has also shown that a reduction in short intervals of phonation, those from 30 to 200 ms, is associated with decreased stuttering. The purpose of this study, therefore, was to…

  13. From theory to 'measurement' in complex interventions: Methodological lessons from the development of an e-health normalisation instrument

    PubMed Central

    2012-01-01

    Background Although empirical and theoretical understanding of processes of implementation in health care is advancing, translation of theory into structured measures that capture the complex interplay between interventions, individuals and context remains limited. This paper aimed to (1) describe the process and outcome of a project to develop a theory-based instrument for measuring implementation processes relating to e-health interventions; and (2) identify key issues and methodological challenges for advancing work in this field. Methods A 30-item instrument (Technology Adoption Readiness Scale (TARS)) for measuring normalisation processes in the context of e-health service interventions was developed on the basis of Normalization Process Theory (NPT). NPT focuses on how new practices become routinely embedded within social contexts. The instrument was pre-tested in two health care settings in which e-health (electronic facilitation of healthcare decision-making and practice) was used by health care professionals. Results The developed instrument was pre-tested in two professional samples (N = 46; N = 231). Ratings of items representing normalisation ‘processes’ were significantly related to staff members’ perceptions of whether or not e-health had become ‘routine’. Key methodological challenges are discussed in relation to: translating multi-component theoretical constructs into simple questions; developing and choosing appropriate outcome measures; conducting multiple-stakeholder assessments; instrument and question framing; and more general issues for instrument development in practice contexts. Conclusions To develop theory-derived measures of implementation process for progressing research in this field, four key recommendations are made relating to (1) greater attention to underlying theoretical assumptions and extent of translation work required; (2) the need for appropriate but flexible approaches to outcomes measurement; (3

  14. Negotiating Measurement: Methodological and Interpersonal Considerations in the Choice and Interpretation of Instruments

    ERIC Educational Resources Information Center

    Braverman, Marc T.

    2013-01-01

    Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…

  15. Measurements of aerosol chemical composition in boreal forest summer conditions

    NASA Astrophysics Data System (ADS)

    Äijälä, M.; Junninen, H.; Ehn, M.; Petäjä, T.; Vogel, A.; Hoffmann, T.; Corrigan, A.; Russell, L.; Makkonen, U.; Virkkula, A.; Mäntykenttä, J.; Kulmala, M.; Worsnop, D.

    2012-04-01

    Boreal forests are an important biome, covering vast areas of the northern hemisphere and affecting global climate change via various feedbacks [1]. Despite having relatively few anthropogenic primary aerosol sources, they always contain a non-negligible aerosol population [2]. This study describes aerosol chemical composition measurements using an Aerodyne Aerosol Mass Spectrometer (C-ToF AMS, [3]), carried out at a boreal forest area in Hyytiälä, Southern Finland. The site, the Helsinki University SMEAR II measurement station [4], is situated at a homogeneous Scots pine (Pinus sylvestris) forest stand. In addition to the station's permanent aerosol, gas phase and meteorological instruments, during the HUMPPA (Hyytiälä United Measurements of Photochemistry and Particles in Air) campaign in July 2010 a very comprehensive set of atmospheric chemistry measurement instrumentation was provided by the Max Planck Institute for Chemistry, Johannes Gutenberg University, the University of California and the Finnish Meteorological Institute. In this study, aerosol chemical composition measurements from the campaign are presented. The dominant aerosol chemical species during the campaign were the organics, although periods with elevated amounts of particulate sulfates were also seen. The overall AMS-measured particle mass concentrations varied from near zero to 27 μg/m³, observed during a forest fire smoke episode. The AMS-measured aerosol mass loadings were found to agree well with DMPS-derived mass concentrations (r2=0.998). The AMS data were also compared with three other aerosol instruments. The Marga instrument [5] was used to provide a quantitative semi-online measurement of inorganic chemical compounds in the particle phase. Fourier Transform Infrared Spectroscopy (FTIR) analysis was performed on daily filter samples, enabling the identification and quantification of organic aerosol subspecies. Finally, an Atmospheric Pressure Chemical Ionization Ion Trap Mass Spectrometer (APCI

  16. Use of Balanced Scorecard Methodology for Performance Measurement of the Health Extension Program in Ethiopia.

    PubMed

    Teklehaimanot, Hailay D; Teklehaimanot, Awash; Tedella, Aregawi A; Abdella, Mustofa

    2016-05-01

    In 2004, Ethiopia introduced a community-based Health Extension Program to deliver basic and essential health services. We developed a comprehensive performance scoring methodology to assess the performance of the program. A balanced scorecard with six domains and 32 indicators was developed. Data collected from 1,014 service providers, 433 health facilities, and 10,068 community members sampled from 298 villages were used to generate weighted national, regional, and agroecological zone scores for each indicator. The national median indicator scores ranged from 37% to 98% with poor performance in commodity availability, workforce motivation, referral linkage, infection prevention, and quality of care. Indicator scores showed significant difference by region (P < 0.001). Regional performance varied across indicators suggesting that each region had specific areas of strength and deficiency, with Tigray and the Southern Nations, Nationalities and Peoples Region being the best performers while the mainly pastoral regions of Gambela, Afar, and Benishangul-Gumuz were the worst. The findings of this study suggest the need for strategies aimed at improving specific elements of the program and its performance in specific regions to achieve quality and equitable health services. PMID:26928842
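A balanced-scorecard aggregation of the kind described can be sketched as a weighted roll-up of indicator scores into domain scores and an overall score. The domain names, indicator names and weights below are invented for illustration; the study's actual weighting scheme is not given in the abstract:

```python
# Hypothetical sketch of a balanced-scorecard roll-up: indicator scores
# (0-100%) are averaged within each domain, then domains are combined
# with assumed weights into an overall score.
def scorecard(domains):
    """domains: dict name -> (weight, {indicator: score_pct})."""
    total_w = sum(w for w, _ in domains.values())
    domain_scores = {}
    overall = 0.0
    for name, (w, indicators) in domains.items():
        d = sum(indicators.values()) / len(indicators)  # mean within domain
        domain_scores[name] = d
        overall += (w / total_w) * d
    return domain_scores, overall

domain_scores, overall = scorecard({
    "Service delivery": (2, {"quality of care": 55, "referral linkage": 40}),
    "Community": (1, {"satisfaction": 80}),
})
```

Reporting per-domain medians across regions, as the study does, highlights which elements drag a region's performance down rather than hiding them in a single average.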

  17. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    NASA Astrophysics Data System (ADS)

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a prototype portable, hand-held hazardous materials acoustic inspection device that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
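The pulse-compression TOF idea mentioned here can be sketched as a matched filter: cross-correlating the received waveform with the transmitted chirp concentrates the chirp's energy into a narrow peak whose lag gives the time of flight. The sample rate, chirp band, and delay below are illustrative values, not those of the PNNL prototype:

```python
# Sketch of pulse-compression time-of-flight estimation via matched filtering.
import numpy as np

fs = 10e6                                # sample rate [Hz]
n_chirp = 1000                           # 100 us transmit chirp
t = np.arange(n_chirp) / fs
f0, bw, dur = 0.5e6, 1.5e6, n_chirp / fs
chirp = np.sin(2 * np.pi * (f0 * t + (bw / dur) / 2 * t**2))  # 0.5-2 MHz sweep

true_delay = 733                         # simulated propagation delay [samples]
rx = np.zeros(4096)
rx[true_delay:true_delay + n_chirp] += 0.2 * chirp              # attenuated echo
rx += 0.05 * np.random.default_rng(2).standard_normal(rx.size)  # receiver noise

# Matched filter: the cross-correlation peak marks the arrival time
corr = np.correlate(rx, chirp, mode="valid")
tof_samples = int(np.argmax(np.abs(corr)))
tof_seconds = tof_samples / fs
```

The compression gain (roughly the time-bandwidth product of the chirp) is what lets such systems keep usable SNR through highly attenuative containers.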

  18. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    SciTech Connect

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-05-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a prototype portable, hand-held hazardous materials acoustic inspection device that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High-bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  19. Advanced Ultrasonic Measurement Methodology for Non-Invasive Interrogation and Identification of Fluids in Sealed Containers

    SciTech Connect

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-16

    The Hazardous Materials Response Unit (HMRU) and the Counterterrorism and Forensic Science Research Unit (CTFSRU), Laboratory Division, Federal Bureau of Investigation (FBI) have been mandated to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held hazardous materials acoustic inspection device (HAZAID) that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The HAZAID prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the HAZAID prototype. High-bandwidth ultrasonic transducers combined with the advanced pulse compression technique allowed researchers to 1) impart large amounts of energy, 2) obtain high signal-to-noise ratios, and 3) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of this feasibility study demonstrated that the HAZAID experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  20. The Sensor Fish: Measuring Fish Passage in Severe Hydraulic Conditions

    SciTech Connect

    Carlson, Thomas J. ); Duncan, Joanne P. ); Gilbride, Theresa L. )

    2003-05-28

    This article describes PNNL's efforts to develop the Sensor Fish, a waterproof sensor package that travels through the turbines or spillways of hydroelectric dams to collect pressure and acceleration data on the conditions experienced by live salmon smolts during dam passage. Sensor Fish development is sponsored by the DOE Advanced Hydropower Turbine Survival Program. The article also gives two recent examples of Sensor Fish use: turbine passage at a McNary Kaplan turbine and spill passage in top spill at Rock Island Dam.

  1. A Novel Instrument and Methodology for the In-Situ Measurement of the Stress in Thin Films

    NASA Technical Reports Server (NTRS)

    Broadway, David M.; Omokanwaye, Mayowa O.; Ramsey, Brian D.

    2014-01-01

    We introduce a novel methodology for the in-situ measurement of mechanical stress during thin film growth utilizing a highly sensitive non-contact variation of the classic spherometer. By exploiting the known spherical deformation of the substrate, the value of the stress-induced curvature is inferred from the measurement of only one point on the substrate's surface: the sagitta. From the known curvature the stress can be calculated using the well-known Stoney equation. Based on this methodology, a stress sensor has been designed which is simple, highly sensitive, compact, and low cost. As a result of its compact nature, the sensor can be mounted in any orientation to accommodate a given deposition geometry without the need for extensive modification to an already existing deposition system. The technique employs the use of a double side polished substrate that offers good specular reflectivity and is isotropic in its mechanical properties, such as <111> oriented crystalline silicon or amorphous soda lime glass, for example. The measurement of the displacement of the uncoated side during deposition is performed with a high resolution (i.e., 5 nm), commercially available, inexpensive, fiber optic sensor which can be used in both high vacuum and high temperature environments (i.e., 10⁻⁷ Torr and 480 °C, respectively). A key attribute of this instrument lies in its potential to achieve sensitivity that rivals other measurement techniques such as the micro cantilever method but, due to the comparatively larger substrate area, offers a more robust and practical alternative for subsequent measurement of additional characteristics of the film that might be correlated with film stress. We present measurement results of nickel films deposited by magnetron sputtering which show good qualitative agreement with the known behavior of polycrystalline films previously reported by Hoffman.
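The curvature-to-stress step can be sketched from the two relations the abstract names: the spherical-cap approximation linking the sagitta to the radius of curvature, and the Stoney equation. The substrate and film values below are illustrative assumptions, not the paper's data:

```python
# Hedged sketch of sagitta -> curvature -> Stoney stress.
def stoney_stress(sag_m, radius_m, t_sub_m, t_film_m,
                  e_sub_pa=130e9, nu_sub=0.28):
    # Spherical cap: R ≈ a^2 / (2*delta) for a small sagitta delta
    # measured over lateral radius a
    r_curv = radius_m**2 / (2 * sag_m)
    # Stoney equation: sigma = E_s * t_s^2 / (6 * (1 - nu_s) * t_f * R)
    return e_sub_pa * t_sub_m**2 / (6 * (1 - nu_sub) * t_film_m * r_curv)

# e.g. a 50 nm sagitta over a 25 mm lateral radius,
# 500 um Si substrate, 200 nm film (illustrative values)
sigma = stoney_stress(50e-9, 25e-3, 500e-6, 200e-9)
```

The example yields a stress of a few MPa, which shows why nanometre-level displacement resolution is needed for thin, lightly stressed films.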

  2. A New Methodology For In-Situ Residual Stress Measurement In MEMS Structures

    NASA Astrophysics Data System (ADS)

    Sebastiani, M.; Bemporad, E.; Melone, G.; Rizzi, L.; Korsunsky, A. M.

    2010-11-01

    In this paper, a new approach is presented for local residual stress measurement in MEMS structures. The newly proposed approach involves incremental focused ion beam (FIB) milling of annular trenches at material surface, combined with high resolution SEM imaging and Digital Image Correlation (DIC) analysis for the measurement of the strain relief over the surface of the remaining central pillar. The proposed technique allows investigating the average residual stress on suspended micro-structures, with a spatial resolution lower than 1 μm. Results are presented for residual stress measurement on double clamped micro-beams, whose layers are obtained by DC-sputtering (PVD) deposition. Residual stresses were also independently measured by the conventional curvature method (Stoney's equation) on a similar homogeneous coating obtained by the same deposition parameters and a comparison and discussion of obtained results is performed.

  3. A New Methodology For In-Situ Residual Stress Measurement In MEMS Structures

    SciTech Connect

    Sebastiani, M.; Bemporad, E.; Melone, G.; Rizzi, L.; Korsunsky, A. M.

    2010-11-24

    In this paper, a new approach is presented for local residual stress measurement in MEMS structures. The newly proposed approach involves incremental focused ion beam (FIB) milling of annular trenches at material surface, combined with high resolution SEM imaging and Digital Image Correlation (DIC) analysis for the measurement of the strain relief over the surface of the remaining central pillar. The proposed technique allows investigating the average residual stress on suspended micro-structures, with a spatial resolution lower than 1 μm. Results are presented for residual stress measurement on double clamped micro-beams, whose layers are obtained by DC-sputtering (PVD) deposition. Residual stresses were also independently measured by the conventional curvature method (Stoney's equation) on a similar homogeneous coating obtained by the same deposition parameters and a comparison and discussion of obtained results is performed.

  4. Development of Methodologies For The Analysis of The Efficiency of Flood Reduction Measures In The Rhine Basin On The Basis of Reference Floods (deflood)

    NASA Astrophysics Data System (ADS)

    Krahe, P.; Herpertz, D.; Buiteveld, H.; Busch, N.; Engel, H.; Helbig, A.; Naef, F.; Wilke, K.

    After some years of extreme flooding in the 1990s, extended efforts were made to improve flood protection by means of integrated river basin management. Part of this strategy is the implementation of decentralised flood reduction measures (FRM). With this in mind, the CHR/IRMA-SPONGE Project DEFLOOD was initiated. By establishing a set of methodological tools, this project aims at making a step further towards a quantitative hydrological evaluation of the effects of local FRM on flood generation in large river basins. The basin of the River Mosel and, in particular, the basin of its tributary Saar served as case study area for testing the methodological approach. A framework for an integrated river basin modelling approach (FIRM - Flood Reduction) based on the generation of hydrometeorological reference conditions, precipitation-runoff modelling and flood routing procedures was set up. In this approach, interfaces to incorporate the results of scenario calculations by meso-scale hydrological modelling are defined in order to study the downstream propagation of the effect of decentralised flood reduction measures, including the potential retention along minor rivers, in large rivers. Examples for scenario calculations are given. Based on the experience gained, the strategy for the use of the methodological framework within the context of river basin management practice is identified. The application of the methodology requires a set of actions which has to be installed in the Rhine/Meuse basins. The recommendations suggest that, besides progress in hydrological modelling, a base of knowledge needs to be built up and administered which encompasses hydrologically relevant information on the actual state and prospected developments in the River Rhine basin. Furthermore, problem-oriented hydrological process studies in selected small-scale river basins ought to be carried out. 
Based on these studies conceptual meso-scale modelling approaches can be improved and

  5. New methodology for thermal parameter measurements in solids using photothermal radiometry

    SciTech Connect

    Depriester, M.; Hus, P.; Delenclos, S.; Sahraoui, A. Hadj

    2005-07-15

    The photothermal radiometry (PTR) signal is analyzed in order to simultaneously obtain the thermal diffusivity and effusivity of solid materials. Analytical procedures that allow the determination of the thermal parameters via a frequency scan of the amplitude or the phase of the PTR signal are presented. The measurement procedures do not involve a multiparameter-fit optimization algorithm. The methods have been used for the measurement of thermophysical properties of vitreous carbon and lead-titanate-zirconate ceramic samples.

  6. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    NASA Technical Reports Server (NTRS)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  7. Doctoral Training in Statistics, Measurement, and Methodology in Psychology: Replication and Extension of Aiken, West, Sechrest, and Reno's (1990) Survey of PhD Programs in North America

    ERIC Educational Resources Information Center

    Aiken, Leona S.; West, Stephen G.; Millsap, Roger E.

    2008-01-01

    In a survey of all PhD programs in psychology in the United States and Canada, the authors documented the quantitative methodology curriculum (statistics, measurement, and research design) to examine the extent to which innovations in quantitative methodology have diffused into the training of PhDs in psychology. In all, 201 psychology PhD…

  8. Technical Guide Documenting Methodology, Indicators, and Data Sources for "Measuring Up 2004: The National and State Report Card on Higher Education." National Center #04-6

    ERIC Educational Resources Information Center

    Ryu, Mikyung

    2004-01-01

    This "Technical Guide" describes the methodology and concepts used to measure and grade the performance of the 50 states in the higher education arena. Part I presents the methodology for grading states and provides information on data collection and reporting. Part II explains the indicators that comprise each of the graded categories.…

  9. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…

  10. Measuring ICT Use and Contributing Conditions in Primary Schools

    ERIC Educational Resources Information Center

    Vanderlinde, Ruben; Aesaert, Koen; van Braak, Johan

    2015-01-01

    Information and communication technology (ICT) use became of major importance for primary schools across the world as ICT has the potential to foster teaching and learning processes. ICT use is therefore a central measurement concept (dependent variable) in many ICT integration studies. This data paper presents two datasets (2008 and 2011) that…

  11. Muscle stiffness measured under conditions simulating natural sound production.

    PubMed

    Dobrunz, L E; Pelletier, D G; McMahon, T A

    1990-08-01

    Isolated whole frog gastrocnemius muscles were electrically stimulated to peak twitch tension while held isometrically in a bath at 4 °C. A quartz hydrophone detected vibrations of the muscle by measuring the pressure fluctuations caused by muscle movement. A small steel collar was slipped over the belly of the muscle. Transient forces, including plucks and steady sinusoidal driving, were applied to the collar by causing currents to flow in a coil held near the collar. The instantaneous resonant frequencies measured by the pluck and driving techniques were the same at various times during a twitch contraction cycle. The strain produced by the plucking technique in the outermost fibers was less than 1.6 × 10⁻⁴ %, a strain three orders of magnitude less than that required to drop the tension to zero in quick-length-change experiments. Because the pressure transients recorded by the hydrophone during plucks and naturally occurring sounds were of comparable amplitude, strains in the muscle due to naturally occurring sound must also be of the order of 10⁻³ %. A simple model assuming that the muscle is an elastic bar under tension was used to calculate the instantaneous elastic modulus E as a function of time during a twitch, given the tension and resonant frequency. The result for Emax, the peak value of E during a twitch, was typically 2.8 × 10⁶ N/m². The methods used here for measuring muscle stiffness are unusual in that the apparatus used for measuring stiffness is separate from the apparatus controlling and measuring force and length. PMID:2207252
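The "elastic bar under tension" calculation described above can be sketched numerically. The formula below is for a simply supported uniform bar with axial tension, a simplified stand-in for the authors' model; all geometry and tension values are invented for illustration.

```python
import numpy as np

# First transverse resonance of a simply supported bar under axial tension T:
#   (2*pi*f1)^2 = (pi/L)^4 * E*I/mu + (pi/L)^2 * T/mu
# so the elastic modulus E can be recovered from the measured f1 and T.
def modulus_from_resonance(f1, T, L, radius, density):
    mu = density * np.pi * radius ** 2        # mass per unit length
    I = np.pi * radius ** 4 / 4.0             # second moment of area
    k = np.pi / L
    return ((2 * np.pi * f1) ** 2 * mu - k ** 2 * T) / (k ** 4 * I)

# Round-trip check with invented, muscle-like numbers
L, r, rho, T = 0.03, 0.004, 1060.0, 0.5       # m, m, kg/m^3, N
E_true = 2.8e6                                # N/m^2, the peak value cited above
mu = rho * np.pi * r ** 2
I = np.pi * r ** 4 / 4.0
f1 = np.sqrt(((np.pi / L) ** 4 * E_true * I + (np.pi / L) ** 2 * T) / mu) / (2 * np.pi)
print(modulus_from_resonance(f1, T, L, r, rho))   # recovers E_true
```

Inverting the resonance relation this way only requires the measured frequency, the tension, and the bar geometry, which is why the stiffness measurement can be decoupled from the force/length apparatus.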

  12. Conditionally-Sampled Turbulent and Non-turbulent Measurements of Entropy Generation Rate in the Transition Region of Boundary Layers

    SciTech Connect

    Edmond J. Walsh; Kevin P. Nolan; Donald M. McEligot; Ralph J. Volino; Adrian Bejan

    2007-05-01

    Conditionally-sampled boundary layer data for an accelerating transitional boundary layer have been analyzed to calculate the entropy generation rate in the transition region. By weighting the nondimensional dissipation coefficients for the laminar-conditioned data and turbulent-conditioned data with the intermittency factor, the average entropy generation rate in the transition region can be determined and hence compared to the time-averaged data and correlations for steady laminar and turbulent flows. It is demonstrated that this method provides, for the first time, an accurate and detailed picture of the entropy generation rate during transition. The data used in this paper have been taken from detailed boundary layer measurements available in the literature. This paper provides, using an intermittency-weighted approach, a methodology for predicting entropy generation in a transitional boundary layer.
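The intermittency-weighted average described above amounts to a convex combination of the laminar- and turbulent-conditioned dissipation coefficients. A minimal sketch (the coefficient values are invented):

```python
# Intermittency-weighted dissipation coefficient: gamma is the
# intermittency factor (0 = fully laminar, 1 = fully turbulent).
def weighted_dissipation(cd_lam, cd_turb, gamma):
    if not 0.0 <= gamma <= 1.0:
        raise ValueError("intermittency must lie in [0, 1]")
    return (1.0 - gamma) * cd_lam + gamma * cd_turb

# Mid-transition example with invented coefficient values
print(weighted_dissipation(0.002, 0.004, 0.5))
```

At gamma = 0 or 1 the expression reduces to the steady laminar or turbulent correlation, so the weighting interpolates smoothly across the transition region.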

  13. Comparative study of methodologies to measure in situ the intertidal benthic community metabolism during immersion

    NASA Astrophysics Data System (ADS)

    Ouisse, Vincent; Migné, Aline; Davoult, Dominique

    2014-01-01

    Methods used to estimate community primary production and respiration in intertidal environments are still a subject of controversy. Underwater community respiration (CR) and net community production (NCP) were calculated from simultaneous in situ measurements of changes in oxygen (O2), dissolved inorganic carbon (DIC) and carbon dioxide (CO2) concentrations in benthic chambers performed in February, April, July and November on a Zostera noltii bed. The CRQ (CR_DIC/CR_O2) and CPQ (NCP_O2/NCP_DIC) varied between 0.15 and 3.07 and between 0.03 and 6.83, respectively. Carbon fluxes calculated from CO2 measurements were greatly underestimated, representing only 0.4-5.9% of the fluxes estimated from DIC measurements. Indeed, CO2 or HCO3- input or uptake by the seagrass community affects the proportions of all the chemical components of dissolved inorganic carbon (DIC, the sum of free dissolved CO2, carbonic acid H2CO3, and the bicarbonate and carbonate ions HCO3- and CO3^2-). Thus, the CO2 method is not reliable. The O2 measurement does not take anaerobic respiration through chemical oxidation into account, and simultaneous O2 and DIC measurements should therefore be favored to calculate CRQ and CPQ at the community scale in marine environments.
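The chamber fluxes behind CR and NCP are computed from the slope of the concentration time series, scaled by the chamber volume over the enclosed sediment area. A minimal sketch with invented numbers:

```python
import numpy as np

# Minimal sketch of the benthic-chamber flux calculation behind CR and NCP
# (all numbers invented): a flux is the slope of concentration vs. time,
# scaled by chamber volume over enclosed sediment area.
t_min = np.array([0.0, 10.0, 20.0, 30.0, 40.0])          # time, min
o2 = np.array([250.0, 246.1, 242.2, 238.0, 234.1])       # umol/L, dark chamber
V_l, A_m2 = 10.0, 0.071                                  # chamber volume (L), area (m^2)

slope_per_h = np.polyfit(t_min, o2, 1)[0] * 60.0         # umol L^-1 h^-1
cr_o2 = -slope_per_h * V_l / A_m2                        # umol O2 m^-2 h^-1
print(f"CR_O2 = {cr_o2:.0f} umol O2 m^-2 h^-1")
```

A quotient such as CRQ is then simply the ratio of two such fluxes (DIC-based over O2-based) measured in the same incubation.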

  14. Methodological challenges in measurements of functional ability in gerontological research. A review.

    PubMed

    Avlund, K

    1997-06-01

    This article addresses two important challenges in the measurement of functional ability in gerontological research: the first challenge is to connect measurements to a theoretical frame of reference which enhances our understanding and interpretation of the collected data; the second relates to validity at all stages of the research, from operationalization to meaningful follow-up measurements in longitudinal studies. Advantages and disadvantages of the different methods used to measure functional ability are described, with a main focus on frame of reference, operationalization, practical procedure, validity, discriminatory power, and responsiveness. For measures of functional ability it is recommended: 1) always to consider the theoretical frame of reference as part of the validation process (e.g., the theory of "The Disablement Process"); 2) always to assess whether the included activities and categories are meaningful to all people in the study population before they are combined into an index and before tests for construct validity; 3) not to combine mobility, PADL and IADL in the same index/scale; 4) not to use IADL as a health-related functional ability measure or, if used, to ask whether problems with IADL or non-performance of IADL are caused by health-related factors; 5) always to analyze functional ability for men and women separately, as patterns of functional ability and patterns of associations between other variables and functional ability often vary between men and women; and 6) to exclude the dead from analyses of change in functional ability if the focus is on predictors of deterioration in functional ability. PMID:9258374

  15. A methodology suitable for TEM local measurements of carbon concentration in retained austenite

    SciTech Connect

    Kammouni, A.; Saikaly, W.; Dumont, M.; Marteau, C.; Bano, X.; Charai, A.

    2008-09-15

    The carbon concentration in retained austenite grains is of great importance in determining the mechanical properties of hot-rolled TRansformation Induced Plasticity steels. Among the different techniques available to measure such concentrations, Kikuchi lines obtained in Transmission Electron Microscopy provide a relatively easy and accurate method. The major problem, however, is to be able to locate an austenitic grain in the observed Transmission Electron Microscopy thin foil. Focused Ion Beam in combination with Scanning Electron Microscopy was used to successfully prepare a thin foil for Transmission Electron Microscopy and carbon concentration measurements from a 700 nm retained austenite grain.

  16. A METHOD TO MEASURE PROTECTIVE CLOTHING PERMEATION UNDER INTERMITTENT CHEMICAL CONTACT CONDITIONS

    EPA Science Inventory

    A preliminary method was developed to measure chemical permeation under intermittent chemical contact conditions. Protective clothing permeation is presently measured using ASTM Method F739-85. Because this test measures permeation when the clothing material is in continuous cont...

  17. ELECTRON MICROSCOPE MEASUREMENT OF AIRBORNE ASBESTOS CONCENTRATIONS - A PROVISIONAL METHODOLOGY MANUAL

    EPA Science Inventory

    This manual describes a provisional optimum electron microscope (EM) procedure for measuring the concentration of asbestos in air samples. The main features of the method include depositing an air sample on a polycarbonate membrane filter, examining an EM grid specimen in a trans...

  19. METHODOLOGY FOR MEASURING PM 2.5 SEPARATOR CHARACTERISTICS USING AN AEROSIZER

    EPA Science Inventory

    A method is presented that enables the measurement of the particle size separation characteristics of an inertial separator in a rapid fashion. Overall penetration is determined for discrete particle sizes using an Aerosizer (Model LD, TSI, Incorporated, Particle Instruments/Am...

  20. Measuring the Impact of University Technology Transfer: A Guide to Methodologies, Data Needs, and Sources

    ERIC Educational Resources Information Center

    Lowe, Robert A.; Quick, Suzanne K.

    2005-01-01

    This paper discusses measures that capture the impact of university technology transfer activities on a university's local and regional economies (economic impact). Such assessments are of increasing interest to policy makers, researchers and technology transfer professionals, yet there have been few published discussions of the merits of various…

  1. Rigorous Measures of Implementation: A Methodological Framework for Evaluating Innovative STEM Programs

    ERIC Educational Resources Information Center

    Cassata-Widera, Amy; Century, Jeanne; Kim, Dae Y.

    2011-01-01

    The practical need for multidimensional measures of fidelity of implementation (FOI) of reform-based science, technology, engineering, and mathematics (STEM) instructional materials, combined with a theoretical need in the field for a shared conceptual framework that could support accumulating knowledge on specific enacted program elements across…

  2. METHODOLOGICAL APPROACH FOR MEASURING PRIORITY DBPS IN REVERSE OSMOSIS CONCENTRATED DRINKING WATER

    EPA Science Inventory

    Many disinfection by-products (DBPs) are formed when drinking water is chlorinated, but only a few are routinely measured or regulated. Various studies have revealed a plethora of DBPs for which sensitive and quantitative analytical methods have always been a major limiting facto...

  3. The Sidewalk Survey: A Field Methodology to Measure Late-Night College Drinking

    ERIC Educational Resources Information Center

    Johnson, Mark B.; Lange, James E.; Voas, Robert B.; Clapp, John D.; Lauer, Elizabeth; Snowden, Cecelia B.

    2006-01-01

    Alcohol use is highly prevalent among U.S. college students, and alcohol-related problems are often considered the most serious public health threat on American college campuses. Although empirical examinations of college drinking have relied primarily on self-report measures, several investigators have implemented field studies to obtain…

  4. A novel, non-invasive transdermal fluid sampling methodology: IGF-I measurement following exercise

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This study tested the hypothesis that transdermal fluid (TDF) provides a more sensitive and accurate measure of exercise-induced increases in insulin-like growth factor-I (IGF-I) than serum, and that these increases are detectable proximal, but not distal, to the exercising muscle. A novel, noninvas...

  5. Production of a new non-specific nuclease from Yersinia enterocolitica subsp. palearctica: optimization of induction conditions using response surface methodology

    PubMed Central

    Fang, Xiu-Juan; Tang, Zhen-Xing; Li, Zhen-Hua; Zhang, Zhi-Liang; Shi, Lu-E

    2014-01-01

    A new non-specific nuclease from Yersinia enterocolitica subsp. palearctica (Y. NSN) was expressed in Escherichia coli (E. coli) BL21 Star (DE3) pLysS. Induction conditions, including isopropyl-β-D-thiogalactoside (IPTG) concentration, cell density (OD600), induction time and induction temperature, were optimized using response surface methodology. Statistical analysis of the results revealed that the induction temperature and all the quadratic terms of the variables had significant effects on the enzyme activity of Y. NSN. The optimal induction conditions were as follows: 1.5 mmol/L IPTG, OD600 of 0.80, induction time of 20.5 h, and induction temperature of 32 °C. Under the optimized conditions, the highest enzyme activity could be obtained. PMID:26019543
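The core of a response-surface optimization is fitting a second-order model to designed experiments and locating its stationary point. A one-factor sketch (the data points are invented; the real study fitted a multi-factor second-order model):

```python
import numpy as np

# Illustrative sketch only: fitting a one-factor quadratic response surface
# (enzyme activity vs. induction temperature) and locating its stationary
# point. The data points are invented for demonstration.
temp = np.array([24.0, 28.0, 32.0, 36.0, 40.0])       # induction temperature, C
activity = np.array([55.0, 80.0, 92.0, 81.0, 54.0])   # relative activity, synthetic

b2, b1, b0 = np.polyfit(temp, activity, 2)            # activity = b0 + b1*T + b2*T^2

# Stationary point of the quadratic: dA/dT = 0  =>  T* = -b1 / (2*b2)
t_opt = -b1 / (2.0 * b2)
print(f"predicted optimum near {t_opt:.1f} C")
```

With four factors the same idea applies: the fitted quadratic surface is differentiated with respect to each factor to find the combination predicting maximum activity.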

  6. Measuring functional connectivity using MEG: Methodology and comparison with fcMRI

    PubMed Central

    Brookes, Matthew J.; Hale, Joanne R.; Zumer, Johanna M.; Stevenson, Claire M.; Francis, Susan T.; Barnes, Gareth R.; Owen, Julia P.; Morris, Peter G.; Nagarajan, Srikantan S.

    2011-01-01

    Functional connectivity (FC) between brain regions is thought to be central to the way in which the brain processes information. Abnormal connectivity is thought to be implicated in a number of diseases. The ability to study FC is therefore a key goal for neuroimaging. Functional connectivity (fc) MRI has become a popular tool for making connectivity measurements, but the technique is limited by its indirect nature. A multimodal approach is therefore an attractive means to investigate the electrodynamic mechanisms underlying hemodynamic connectivity. In this paper, we investigate resting state FC using fcMRI and magnetoencephalography (MEG). In fcMRI, we exploit the advantages afforded by ultra-high magnetic field. In MEG, we apply envelope correlation and coherence techniques to source-space-projected MEG signals. We show that beamforming provides an excellent means to measure FC in source space using MEG data. However, care must be taken when interpreting these measurements, since cross-talk between voxels in source space can potentially lead to spurious connectivity, and this must be taken into account in all studies of this type. We show good spatial agreement between FC measured independently using MEG and fcMRI; FC between sensorimotor cortices was observed using both modalities, with the best spatial agreement when MEG data are filtered into the β band. This finding helps to reduce the potential confounds associated with each modality alone: while it helps reduce the uncertainties in spatial patterns generated by MEG (brought about by the ill-posed inverse problem), the addition of an electrodynamic metric confirms the neural basis of fcMRI measurements. Finally, we show that multiple MEG-based FC metrics offer the potential to move beyond what is possible using fcMRI and to investigate the nature of electrodynamic connectivity. Our results extend those from previous studies and add weight to the argument that neural oscillations are intimately related to functional
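The envelope-correlation metric mentioned above can be sketched in a few lines: band-pass two source-space time series into the β band, take Hilbert amplitude envelopes, and correlate the envelopes. This is a hedged illustration of the general technique, not the authors' exact pipeline; the toy signals are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

# Band-limited envelope correlation between two signals.
def envelope_correlation(x, y, fs, band=(13.0, 30.0)):
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    ex = np.abs(hilbert(filtfilt(b, a, x)))    # Hilbert amplitude envelopes
    ey = np.abs(hilbert(filtfilt(b, a, y)))
    return np.corrcoef(ex, ey)[0, 1]

# Toy signals sharing a common slow amplitude modulation of beta-band carriers
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 10, 1 / fs)
mod = 1.0 + 0.5 * np.sin(2 * np.pi * 0.3 * t)     # shared slow envelope
x = mod * np.sin(2 * np.pi * 20 * t) + 0.1 * rng.standard_normal(t.size)
y = mod * np.sin(2 * np.pi * 22 * t) + 0.1 * rng.standard_normal(t.size)
print(envelope_correlation(x, y, fs))             # high positive correlation
```

In real MEG analyses the inputs would be beamformer-projected voxel time courses, and source-space cross-talk (noted in the abstract) must be controlled before interpreting the correlation.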

  7. Radio Weak Lensing Shear Measurement in the Visibility Domain - I. Methodology

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Miller, L.; Makhathini, S.; Abdalla, F. B.

    2016-08-01

    The high sensitivity of the new generation of radio telescopes such as the Square Kilometre Array (SKA) will allow cosmological weak lensing measurements at radio wavelengths that are competitive with optical surveys. We present an adaptation to radio data of lensfit, a method for galaxy shape measurement originally developed and used for optical weak lensing surveys. This likelihood method uses an analytical galaxy model and makes a Bayesian marginalisation of the likelihood over uninteresting parameters. It has the feature of working directly in the visibility domain, which is the natural approach to adopt with radio interferometer data, avoiding systematics introduced by the imaging process. As a proof of concept, we provide results for visibility simulations of individual galaxies with flux density S ≥ 10 μJy at the phase centre of the proposed SKA1-MID baseline configuration, adopting 12 frequency channels in the band 950-1190 MHz. Weak lensing shear measurements from a population of galaxies with realistic flux and scalelength distributions are obtained after natural gridding of the raw visibilities. Shear measurements are expected to be affected by 'noise bias': we estimate the bias in the method as a function of signal-to-noise ratio (SNR). We obtain additive and multiplicative bias values that are comparable to SKA1 requirements for SNR > 18 and SNR > 30, respectively. The multiplicative bias for SNR > 10 is comparable to that found in ground-based optical surveys such as CFHTLenS, and we anticipate that similar shear measurement calibration strategies to those used for optical surveys may be used to good effect in the analysis of SKA radio interferometer data.
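The additive and multiplicative biases quoted above are conventionally defined through a linear regression of measured shear on true input shear, g_meas = (1 + m) g_true + c. A sketch with synthetic "measurements" (the bias values and noise level are invented):

```python
import numpy as np

# Estimate multiplicative (m) and additive (c) shear bias by regressing
# measured shear on the true input shear of the simulations.
rng = np.random.default_rng(2)
g_true = rng.uniform(-0.05, 0.05, 1000)
g_meas = (1.0 - 0.004) * g_true + 2e-4 + 1e-3 * rng.standard_normal(g_true.size)

slope, intercept = np.polyfit(g_true, g_meas, 1)
m, c = slope - 1.0, intercept
print(f"multiplicative bias m = {m:.4f}, additive bias c = {c:.1e}")
```

Survey requirements are then stated as bounds on |m| and |c|, which is how the SNR thresholds in the abstract are derived.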

  8. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which capably solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. Using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which capably measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 × 100 × 100 mm³) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314
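The cross-coupling compensation idea can be illustrated with a deliberately simplified stand-in: instead of the paper's neural network (which also captures nonlinear errors), fit a linear calibration matrix by least squares to undo coupling between axes. All numbers below are synthetic.

```python
import numpy as np

# Linear least-squares stand-in for learned cross-coupling compensation.
rng = np.random.default_rng(0)
true = rng.uniform(-1.0, 1.0, (2000, 3))          # "true" 3-axis inputs
C = np.array([[1.00, 0.08, -0.05],
              [0.06, 0.97, 0.07],
              [-0.04, 0.05, 1.02]])               # invented coupling matrix
raw = true @ C.T + 0.005 * rng.standard_normal(true.shape)

# Solve raw @ M ~= true for the compensation matrix M
M, *_ = np.linalg.lstsq(raw, true, rcond=None)
comp = raw @ M

rms_before = np.sqrt(((raw - true) ** 2).mean())
rms_after = np.sqrt(((comp - true) ** 2).mean())
print(rms_before, rms_after)   # compensation shrinks the error markedly
```

A neural network generalizes this by replacing the matrix M with a learned nonlinear mapping, which is what allows the paper to handle the thermal-gas sensor's nonlinear coupling terms as well.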

  9. Conditions necessary for low-level measurements of reactive oxidants

    SciTech Connect

    Nakareseisoon, S.

    1988-01-01

    Chlorine dioxide and ozone are considered to be the alternatives to chlorine for the disinfection of drinking water supplies and also for the treatment of wastewaters prior to discharge. Chlorine dioxide, under normal circumstances, is reduced to chlorite ion, which is toxic. The recommended seven-day suggested no-adverse-response level (SNARL) for chlorite ion is 0.007 mg/l (7 ppb). Chlorite ion at these low levels cannot be satisfactorily determined by existing methods, and so it became necessary to develop an analytical method for determining ppb levels of chlorite ion. Such a method can be developed using differential pulse polarography (DPP). The electrochemical reduction of chlorite ion has been studied between pH 3.7-14 and in an ionic strength range of 0.05-3.0 M. The optimum conditions are pH 4.1-4.4 and an ionic strength of 0.45 M. The current under these conditions is a linear function of chlorite ion concentration ranging from 2.77 × 10⁻⁷ to 2.80 × 10⁻⁴ M (19 ppb to 19 ppm). The imprecision is better than ±1.0% and ±3.4% at concentrations of 2.87 × 10⁻⁵ M and 1.74 × 10⁻⁶ M, respectively, with a detection limit of 1 × 10⁻⁷ M (7 ppb). The rate of ozone decomposition has been studied in highly basic solutions (8-15 NaOH), where ozone becomes stable. A mechanism of ozone regeneration was proposed to explain the observed kinetics and to clarify the contradiction concerning the very slow observed rate of ozone decomposition in basic solution.
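The linear current-concentration relationship reported above is what makes the DPP method quantitative: a calibration line is fitted to standards and inverted for unknowns. A sketch in which the current values are invented (the abstract gives only the linear range):

```python
import numpy as np

# Illustrative DPP calibration sketch: peak current is linear in chlorite
# concentration over the reported range. The current values are synthetic.
conc = np.array([5e-7, 1e-6, 5e-6, 1e-5, 5e-5, 1e-4])       # mol/L standards
current = np.array([0.41, 0.82, 4.1, 8.2, 41.0, 82.0])      # nA, synthetic

slope, intercept = np.polyfit(conc, current, 1)

def chlorite_conc(i_peak_nA):
    """Invert the calibration line to estimate concentration (mol/L)."""
    return (i_peak_nA - intercept) / slope

print(chlorite_conc(20.5))   # ~2.5e-5 mol/L on this synthetic line
```

The detection limit quoted in the abstract (1 × 10⁻⁷ M) corresponds to the smallest peak current reliably distinguishable from the blank on such a calibration.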

  10. Evaluating the RiskMetrics methodology in measuring volatility and Value-at-Risk in financial markets

    NASA Astrophysics Data System (ADS)

    Pafka, Szilárd; Kondor, Imre

    2001-10-01

    We analyze the performance of RiskMetrics, a widely used methodology for measuring market risk. Based on the assumption of normally distributed returns, the RiskMetrics model completely ignores the presence of fat tails in the distribution function, an important feature of financial data. Nevertheless, it has commonly been found that RiskMetrics performs satisfactorily, and the technique has therefore become widely used in the financial industry. We find, however, that the success of RiskMetrics is an artifact of the choice of the risk measure. First, the outstanding performance of volatility estimates is basically due to the choice of a very short (one-period-ahead) forecasting horizon. Second, the satisfactory performance in obtaining Value-at-Risk by simply multiplying volatility by a constant factor is mainly due to the choice of the particular significance level.
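The two ingredients criticized above, the one-step-ahead exponentially weighted volatility forecast and the constant-multiplier VaR, can be sketched directly. λ = 0.94 is the standard RiskMetrics daily decay factor and 1.65 the normal-distribution multiplier for the 95% level; the return series is synthetic.

```python
import numpy as np

# RiskMetrics-style EWMA volatility forecast:
#   sigma^2_t = lam * sigma^2_{t-1} + (1 - lam) * r^2_t
def ewma_volatility(returns, lam=0.94):
    var = returns[0] ** 2                 # seed the recursion
    for r in returns[1:]:
        var = lam * var + (1.0 - lam) * r ** 2
    return np.sqrt(var)

def value_at_risk(returns, mult=1.65, lam=0.94):
    """One-day VaR as a constant multiple of forecast volatility."""
    return mult * ewma_volatility(returns, lam)

rng = np.random.default_rng(1)
rets = 0.01 * rng.standard_normal(500)    # synthetic daily returns, sigma = 1%
print(value_at_risk(rets))                # roughly 1.65 x the ~1% volatility
```

The paper's point is that this construction looks accurate largely because the horizon is one day and the 95% quantile sits close to where the normal and fat-tailed distributions agree.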

  11. Auditory evoked potential measurement methodology for odontocetes and a comparison of measured thresholds with those obtained using psychophysical techniques

    NASA Astrophysics Data System (ADS)

    Nachtigall, Paul E.; Yuen, Michelle; Mooney, T. Aran; Taylor, Kristen

    2005-04-01

    Most measurements of the hearing capabilities of toothed whales and dolphins have been taken using traditional psychophysical procedures, in which the animals have been maintained in laboratory environments and trained to behaviorally report the sensation or difference of acoustic stimuli. Because of the advantages of rapid data collection, increased opportunities, and new methods, Auditory Evoked Potentials (AEPs) have become increasingly used to measure audition. The use of this new procedure calls into question the comparability of the established literature and the new results collected with AEPs. The results of behavioral and AEP methods have been directly compared with basic audiogram measurements and have been shown to produce similar (but not exactly the same) values when the envelope following response procedure is used and the length of the stimulus is taken into account. The AEP methods allow possible audiometric opportunities beyond those available with conventional psychophysics, including: (1) the measurement of stranded dolphins and whales that may never be kept in laboratories, (2) the testing of stranded animals for hearing deficits perhaps caused by overexposure to noise, and (3) passive testing of hearing mechanisms while animals actively echolocate. [Work supported by the Office of Naval Research and NOAA-NMFS.]

  12. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    NASA Astrophysics Data System (ADS)

    Renbaum-Wolff, L.; Grayson, J. W.; Bertram, A. K.

    2013-01-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10⁻³ and 10³ Pascal seconds (Pa s) in samples that cover a range of chemical properties, with real-time relative humidity and temperature control; hence, the technique should be well suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.
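The calibration-curve step described above, relating bead circulation rate to known viscosities and then inverting for an unknown sample, can be sketched as a power-law fit. All numbers here are invented for illustration; the paper's actual calibration is empirical.

```python
import numpy as np

# Power-law calibration: circulation rate vs. viscosity of standards.
visc = np.array([1e-2, 1e-1, 1e0, 1e1, 1e2])    # Pa s, calibration standards
rate = 5.0 / visc                                # circulations/s, synthetic

# Fit log10(rate) = a + b * log10(viscosity)
b, a = np.polyfit(np.log10(visc), np.log10(rate), 1)

def viscosity_from_rate(r):
    """Invert the fitted calibration curve (Pa s)."""
    return 10.0 ** ((np.log10(r) - a) / b)

print(viscosity_from_rate(0.5))   # ~10 Pa s on this synthetic curve
```

Fitting in log-log space is natural here because the technique spans six decades of viscosity, so a linear fit in raw units would be dominated by the largest values.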

  14. New Measurements of Activation Volume in Olivine Under Anhydrous Conditions

    SciTech Connect

    Durham, W.; Mei, S; Kohlstedt, D; Wang, L; Dixon, N

    2009-01-01

    A new cell assembly for the deformation-DIA (D-DIA) shows promise for limiting the water content of samples and providing a more mechanically stable environment for deformation. The 6-mm cubic cell consists of a 6-mm diameter mullite sphere cradled in a web of unfired pyrophyllite. The pyrophyllite flows during initial compression of the D-DIA to form gaskets between the six anvils, while the mullite flows to become a nearly cubic-shaped pressure medium. Measurements on olivine indicate more than one order of magnitude drop in water content, to < 40 ppm H/Si, compared with the boron-epoxy medium. Improved mechanical stability is achieved by eliminating the thermocouple from the assembly and determining temperature from calibration curves of furnace power vs. temperature. Three samples of polycrystalline orthopyroxene-buffered San Carlos olivine have been deformed in high-temperature creep in the new cell, at pressures of 2.7-4.9 GPa and temperatures near 1473 K. Strength is consistent with that measured in the gas apparatus at lower pressures. Over the pressure range investigated we resolve an activation volume for creep of dry olivine of V* = (9.5 ± 7) × 10⁻⁶ m³/mol.
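The activation volume resolved above comes from the pressure dependence of creep: with strain rate proportional to exp(-(E* + PV*)/(RT)), ln(strain rate) at fixed temperature and stress falls linearly with pressure, with slope -V*/(RT). A worked round-trip with synthetic numbers:

```python
import numpy as np

# Extracting an activation volume from the pressure dependence of creep rate.
R, T = 8.314, 1473.0                          # J/(mol K), K
Vstar_true = 9.5e-6                           # m^3/mol, the value reported above
P_GPa = np.array([2.7, 3.4, 4.1, 4.9])        # pressures in GPa (the range above)
ln_rate = -Vstar_true * (P_GPa * 1e9) / (R * T) - 10.0   # synthetic data

slope_per_Pa = np.polyfit(P_GPa, ln_rate, 1)[0] / 1e9    # fit in GPa, rescale to Pa
Vstar = -slope_per_Pa * R * T
print(f"recovered V* = {Vstar * 1e6:.1f} x 10^-6 m^3/mol")
```

The large quoted uncertainty (± 7 × 10⁻⁶ m³/mol) reflects how shallow this slope is over a 2.2 GPa pressure window, which is why extending the pressure range matters.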

  15. [New methodology for heavy metals measurement in water samples by PGNAA-XRF].

    PubMed

    Jia, Wen-Bao; Zhang, Yan; Hei, Da-Qian; Ling, Yong-Sheng; Shan, Qing; Cheng, Can

    2014-11-01

    In the present paper, a new combined detection method using prompt gamma neutron activation analysis (PGNAA) and characteristic X-ray fluorescence (XRF) is proposed to improve the accuracy of heavy-metal measurement in the in-situ analysis of environmental wastewater by PGNAA technology. Notably, the characteristic X-ray fluorescence of the heavy metals is induced directly by the prompt gamma rays rather than by a traditional excitation source. A combined measurement facility with a 241Am-Be neutron source, a BGO detector and a NaI-Be detector was accordingly developed to analyze pollutants in water, with the two detectors recording the prompt gamma rays and the characteristic X-ray fluorescence of the heavy metals, respectively. The prompt gamma-ray intensity (I(γ)) and the characteristic X-ray fluorescence intensity (I(x)) were determined by MCNP calculations for different concentrations (c(i)) of chromium (Cr), cadmium (Cd), mercury (Hg) and lead (Pb). The simulation results showed a good linear relationship between I(γ) and c(i) and between I(x) and c(i). An empirical formula for the combined detection method was derived from these calculations. Comparison of I(γ) and I(x) showed that the combined detection method is more sensitive for measuring high-atomic-number heavy metals such as Hg and Pb than for low-atomic-number metals such as Cr and Cd. The limits of detection for Hg and Pb with the combined measurement instrument were 17.4 and 24.2 mg x kg(-1), respectively. PMID:25752071
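Given the linear intensity-concentration relationships reported above, a limit of detection can be estimated with the common 3σ criterion: three times the blank's standard deviation divided by the calibration slope. This sketch assumes a simple linear calibration, not the paper's exact MCNP-derived empirical formula:

```python
def detection_limit(concs, intensities, blank_sigma):
    """LOD = 3 * sigma_blank / slope of the intensity-vs-concentration
    calibration line (ordinary least squares)."""
    n = len(concs)
    xbar = sum(concs) / n
    ybar = sum(intensities) / n
    slope = sum((x - xbar) * (y - ybar)
                for x, y in zip(concs, intensities)) / \
            sum((x - xbar) ** 2 for x in concs)
    return 3.0 * blank_sigma / slope
```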

  16. Analysis methodology of movable emittance-meter measurements for low energy electron beams.

    PubMed

    Mostacci, A; Bacci, A; Boscolo, M; Chiadroni, E; Cianchi, A; Filippetto, D; Migliorati, M; Musumeci, P; Ronsivalle, C; Rossi, A R

    2008-01-01

    The design of photoinjectors for modern free-electron laser linacs relies heavily on the particular beam behavior in the first few meters after the gun. To characterize it experimentally, a movable emittance meter based on the beam-slicing technique was proposed and built [L. Catani et al., Rev. Sci. Instrum. 77, 093301 (2006)]. This paper addresses all aspects of the analysis of data acquired with the emittance meter that are common to any slit-based emittance measurement for low-energy beams. PMID:18248027
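Any slit-based emittance analysis ultimately reduces to second moments of the reconstructed trace-space distribution: ε_rms = sqrt(<x²><x'²> - <xx'>²) with centroids removed. A minimal sketch of that final step (sample-based, not the authors' full analysis chain):

```python
import math

def rms_emittance(x, xp):
    """Geometric rms emittance from particle samples (x, x'):
    sqrt(<x^2><x'^2> - <x x'>^2), with centroids subtracted."""
    n = len(x)
    mx = sum(x) / n
    mxp = sum(xp) / n
    x2 = sum((a - mx) ** 2 for a in x) / n
    xp2 = sum((b - mxp) ** 2 for b in xp) / n
    xxp = sum((a - mx) * (b - mxp) for a, b in zip(x, xp)) / n
    return math.sqrt(x2 * xp2 - xxp ** 2)
```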

  17. [Nitrous oxide emissions from municipal solid waste landfills and its measuring methodology: a review].

    PubMed

    Jia, Ming-Sheng; Wang, Xiao-Jun; Chen, Shao-Hua

    2014-06-01

    Nitrous oxide (N2O) is one of the three major greenhouse gases and the dominant ozone-depleting substance. Landfilling is the major approach to the treatment and disposal of municipal solid waste (MSW), and MSW landfills can be an important anthropogenic source of N2O emissions. Measurements at lab-scale and full-scale landfills have demonstrated that N2O can be emitted from MSW landfills in substantial amounts; however, reported emission values vary widely. The mechanisms of N2O production and emission in landfills, and their contribution to global warming, still lack sufficient study, and obtaining reliable N2O flux data from landfills with existing in-situ measurement techniques remains difficult. This paper summarizes the relevant literature and analyzes the potential production and emission mechanisms of N2O in traditional anaerobic sanitary landfills, treating the buried MSW and the cover soil separately. The corresponding mechanisms in nitrogen-removal bioreactor landfills are also analyzed. Finally, the applicability of existing in-situ approaches to measuring N2O fluxes in landfills, such as chamber and micrometeorological methods, is discussed, and areas in which further research on landfill N2O emissions is urgently required are identified. PMID:25223043

  18. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: Selecting the correct statistical test and data mining method depends strongly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are examined through several medical examples. We present two clustering examples on ordinal variables, a more challenging variable type for analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed with two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: The sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively; their specificity was comparable. Conclusion: Choosing a clustering algorithm appropriate to the measurement scale of the variables in a study yields high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
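Assessing cluster assignments against gold-standard malignant/benign labels, as done above, reduces to a confusion matrix. A minimal sketch (the labels and data here are illustrative, not the WBCD):

```python
def binary_rates(truth, pred, positive="malignant"):
    """Sensitivity, specificity and accuracy of predicted labels
    against a gold standard, for a chosen positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(truth, pred))
    tn = sum(t != positive and p != positive for t, p in zip(truth, pred))
    fp = sum(t != positive and p == positive for t, p in zip(truth, pred))
    fn = sum(t == positive and p != positive for t, p in zip(truth, pred))
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(truth)
    return sensitivity, specificity, accuracy
```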

  19. Vertical profiles of aerosol volume from high-spectral-resolution infrared transmission measurements. I. Methodology.

    PubMed

    Eldering, A; Irion, F W; Chang, A Y; Gunson, M R; Mills, F P; Steele, H M

    2001-06-20

    The wavelength-dependent aerosol extinction in the 800-1250-cm(-1) region has been derived from ATMOS (atmospheric trace molecule spectroscopy) high-spectral-resolution IR transmission measurements. Using models of aerosol and cloud extinction, we have performed weighted nonlinear least-squares fitting to determine the aerosol-volume columns and vertical profiles of stratospheric sulfate aerosol and cirrus cloud volume. Modeled extinction by use of cold-temperature aerosol optical constants for a 70-80% sulfuric-acid-water solution shows good agreement with the measurements, and the derived aerosol volumes for a 1992 occultation are consistent with data from other experiments after the eruption of Mt. Pinatubo. The retrieved sulfuric acid aerosol-volume profiles are insensitive to the aerosol-size distribution and somewhat sensitive to the set of optical constants used. Data from the nonspherical cirrus extinction model agree well with a 1994 mid-latitude measurement indicating the presence of cirrus clouds at the tropopause. PMID:18357329
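Once the component extinction spectra (sulfate aerosol, cirrus) are fixed, the observed extinction is linear in the two volume columns, so the weighted fit can be illustrated with a two-component weighted linear least-squares solve. The paper's actual fit is nonlinear; this is a simplified sketch with illustrative numbers:

```python
def weighted_two_component_fit(obs, basis1, basis2, weights):
    """Solve min sum_i w_i * (obs_i - v1*b1_i - v2*b2_i)^2 for the two
    component amplitudes (v1, v2) via the 2x2 normal equations."""
    a11 = sum(w * b * b for w, b in zip(weights, basis1))
    a22 = sum(w * b * b for w, b in zip(weights, basis2))
    a12 = sum(w * b1 * b2 for w, b1, b2 in zip(weights, basis1, basis2))
    r1 = sum(w * y * b for w, y, b in zip(weights, obs, basis1))
    r2 = sum(w * y * b for w, y, b in zip(weights, obs, basis2))
    det = a11 * a22 - a12 * a12
    return (a22 * r1 - a12 * r2) / det, (a11 * r2 - a12 * r1) / det
```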

  20. Consistent Methodologies for Determining, Relating and Disseminating Light Stable Isotopic Measurement Results: The Carbon Dioxide Example

    NASA Astrophysics Data System (ADS)

    Klinedinst, D. B.; Verkouteren, R. M.

    2001-05-01

    In conjunction with the International Atomic Energy Agency (IAEA), the Atmospheric Chemistry Group (ACG) of the National Institute of Standards and Technology (NIST) has coordinated an international CO2 isotope ratio mass spectrometry (IRMS) intercomparison exercise. The results of this exercise, specifically designed to overcome inherent deficiencies revealed by previous intercomparisons, achieved a two- to threefold improvement in the reproducibility of reported results across laboratories. Concurrently, the ACG developed and deployed an interactive Web-based data processing interface [http://www.nist.gov/widps-co2]. The interface has an open architecture and transparent, downloadable source code. This data processing system leverages the results of the intercomparison exercise and provides a consistent means by which raw CO2 measurement results are related to the internationally accepted Vienna Pee Dee Belemnite (VPDB) scale. Prominent features of the CO2 intercomparison exercise included: mandatory chemical and operational procedures, reporting of discretionary factors, direct determination of the cross contamination effect within the ion source, the reporting of raw measurement results and centralized data processing. The data reduction interface uses IAEA-defined standard procedures for stable isotope measurements and data processing. It incorporates currently defined reference values for selected IAEA and NIST CO2 Reference Materials (RMs). On a routine basis, users can also determine and use assigned values for secondary laboratory standards as input. One- or two-point (i.e., normalized) realization of the VPDB scale is provided, as are optional inputs for the oxygen isotope fractionation factor (α). We attribute the success of the CO2 intercomparison exercise primarily to the centralized data processing using raw measurements rather than customary result-based data. The centralized processing, in essence, eliminates inconsistencies between integrated
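Two-point (normalized) realization of the VPDB scale is a linear map through two reference-material anchor points: the line fixed by the raw and accepted delta values of the two RMs is applied to each sample. A minimal sketch with illustrative values (not actual RM assignments):

```python
def two_point_normalize(delta_raw, rm1_raw, rm1_true, rm2_raw, rm2_true):
    """Map a raw delta value onto the reference scale via the straight
    line through two reference-material anchor points."""
    b = (rm2_true - rm1_true) / (rm2_raw - rm1_raw)  # scale stretch
    a = rm1_true - b * rm1_raw                       # offset
    return a + b * delta_raw
```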

  1. High frequency measurement of P- and S-wave velocities on crystalline rock massif surface - methodology of measurement

    NASA Astrophysics Data System (ADS)

    Vilhelm, Jan; Slavík, Lubomír

    2014-05-01

    For the purpose of non-destructive monitoring of rock properties in an underground excavation it is possible to perform repeated high-accuracy P- and S-wave velocity measurements. This contribution deals with preliminary results gained during the preparation of a micro-seismic long-term monitoring system. The field velocity measurements were made by the pulse-transmission technique directly on a rock outcrop (granite) in the Bedrichov gallery (northern Bohemia). The gallery at the experimental site was excavated using a TBM (Tunnel Boring Machine) and is used for drinking water supply, which is conveyed in a pipe. The requirements of a stable measuring system and automatic operation led to the use of piezoceramic transducers both as seismic sources and as receivers. The length of the measuring base at the gallery wall was 0.5 to 3 meters. Different transducer coupling possibilities were tested, particularly with regard to the repeatability of velocity determination. With the measuring system arranged on the surface of the rock massif, the S-transducers proved more sensitive than the P-transducers for P-wave measurement; similarly, P-transducers were found more suitable than S-transducers for S-wave velocity determination. The frequency-dependent attenuation of the fresh rock massif limits the frequency content of the registered seismic signals. It was found that, at source-receiver distances of 0.5 m and more, frequency components above 40 kHz are significantly attenuated; therefore, 100 kHz transducers are most suitable for exciting the seismic wave. The limited frequency range should also be taken into account in the shape of the electric impulse used to excite the piezoceramic transducer. A spike pulse generates a broad-band seismic signal, short in the time domain; however, its energy after low-pass filtration in the rock is significantly lower than that of a seismic signal generated by a square-wave pulse. Acknowledgments: This work was partially
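Pulse-transmission velocity measurement reduces to picking the first arrival on the received trace and dividing the base length by the travel time. A minimal sketch with a simple threshold picker (the threshold, sampling interval and signal below are illustrative; real first-break picking must also account for system delay):

```python
def first_break(signal, dt, threshold):
    """Time of the first sample whose absolute amplitude exceeds the
    threshold, or None if no sample does."""
    for i, s in enumerate(signal):
        if abs(s) >= threshold:
            return i * dt
    return None

def wave_velocity(base_m, travel_time_s):
    """Velocity from measuring-base length and picked travel time."""
    return base_m / travel_time_s
```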

  2. Thermal conductivity measurements of particulate materials under Martian conditions

    NASA Technical Reports Server (NTRS)

    Presley, M. A.; Christensen, P. R.

    1993-01-01

    The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurements. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems for gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas molecules are then more likely to collide with the solid particles than with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes, with little information about the distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes, and the formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs. particle size, a linear heat source apparatus, similar to that of Cremers, was assembled to
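A linear (transient line) heat source apparatus yields conductivity from the slope of temperature versus the logarithm of time: in the idealized model T(t) = (q/4πk)·ln t + C, so k = q/(4π·slope). A minimal sketch under that idealized model (values illustrative):

```python
import math

def line_source_conductivity(times_s, temps_k, q_w_per_m):
    """Transient line-source method: fit T vs ln(t) by least squares and
    return k = q / (4 * pi * slope)."""
    xs = [math.log(t) for t in times_s]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(temps_k) / n
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, temps_k)) / \
            sum((x - xbar) ** 2 for x in xs)
    return q_w_per_m / (4 * math.pi * slope)
```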

  3. Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology

    NASA Astrophysics Data System (ADS)

    Levinson, Zachary; Verduijn, Erik; Wood, Obert R.; Mangat, Pawitter; Goldberg, Kenneth A.; Benk, Markus P.; Wojdyla, Antoine; Smith, Bruce W.

    2016-04-01

    An approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled by previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. Of the three, the Zernike polynomials were found to describe the pupil amplitude variation most effectively.
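Expanding an extracted pupil function in a polynomial basis is a linear least-squares problem: stack the sampled basis functions into a design matrix and solve for the coefficients. A minimal sketch using a piston term and a defocus-like radial Zernike term on a 1-D radial grid (illustrative; the actual fit is over the full 2-D pupil):

```python
import numpy as np

def fit_basis(pupil, basis_stack):
    """Least-squares coefficients c_k for pupil(x) ~= sum_k c_k * basis_k(x).
    basis_stack: (K, N) array of K basis functions sampled at N pupil points."""
    A = basis_stack.T  # (N, K) design matrix
    coeffs, *_ = np.linalg.lstsq(A, pupil, rcond=None)
    return coeffs
```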

  4. Hydrogen isotope measurement of bird feather keratin, one laboratory's response to evolving methodologies.

    PubMed

    Fan, Majie; Dettman, David L

    2015-01-01

    Hydrogen in organic tissue resides in a complex mixture of molecular contexts. Some hydrogen, called non-exchangeable (H(non)), is strongly bound, and its isotopic ratio is fixed when the tissue is synthesized. Other pools of hydrogen, called exchangeable hydrogen (H(ex)), constantly exchange with ambient water vapor. The measurement of δ(2)H(non) in organic tissues such as hair or feather therefore requires an analytical process that accounts for exchangeable hydrogen. In this study, swan feather and sheep wool keratin were used to test the effects of sample drying and capsule closure on the measurement of δ(2)H(non) values, and the rate of back-reaction with ambient water vapor. Homogeneous feather or wool keratins were also calibrated at room temperature for use as control standards to correct for the effects of exchangeable hydrogen on feathers. Total δ(2)H values of both feather and wool samples showed large changes throughout the first ∼6 h of drying. Desiccant plus low vacuum seems to be more effective than room-temperature vacuum pumping for drying samples. The degree of capsule closure affects exchangeable hydrogen equilibration and drying, with closed capsules responding more slowly. Using one control keratin standard to correct for the δ(2)H(ex) value for a batch of samples leads to internally consistent δ(2)H(non) values for other calibrated keratins run as unknowns. When placed in the context of other recent improvements in the measurement of keratin δ(2)H(non) values, we make recommendations for sample handling, data calibration and the reporting of results. PMID:25358407
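The control-standard correction can be sketched with the standard two-pool mixing model, δ(2)H(total) = f(ex)·δ(2)H(ex) + (1 - f(ex))·δ(2)H(non): a control keratin of known non-exchangeable value run alongside the samples fixes the exchangeable-pool delta, which then corrects the unknowns. The exchangeable fraction and all numbers below are illustrative assumptions:

```python
def nonexchangeable_d2h(sample_total, std_total, std_non, f_ex):
    """Comparative-equilibration sketch: infer the exchangeable-pool delta
    from a control standard of known non-exchangeable value, then invert
    the mixing model delta_total = f_ex*delta_ex + (1-f_ex)*delta_non."""
    delta_ex = (std_total - (1.0 - f_ex) * std_non) / f_ex
    return (sample_total - f_ex * delta_ex) / (1.0 - f_ex)
```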

  5. Collocating satellite-based radar and radiometer measurements - methodology and usage examples.

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-05-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard CloudSat. First, a simple method is presented to obtain those collocations. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relationship between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations are available for public use.
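Finding collocations reduces to pairing footprints that fall within a chosen distance and time window. A brute-force sketch (thresholds illustrative; a production version over millions of footprints would use along-track sorting or a spatial index rather than an O(N·M) scan):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + \
        math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def collocate(primary, secondary, max_km, max_s):
    """Pair each primary footprint with every secondary pixel within
    max_km and max_s. Observations are (lat_deg, lon_deg, time_s) tuples."""
    pairs = []
    for i, (la1, lo1, t1) in enumerate(primary):
        for j, (la2, lo2, t2) in enumerate(secondary):
            if abs(t1 - t2) <= max_s and \
                    haversine_km(la1, lo1, la2, lo2) <= max_km:
                pairs.append((i, j))
    return pairs
```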

  6. Collocating satellite-based radar and radiometer measurements - methodology and usage examples

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-02-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) onboard NOAA-18 and the Cloud Profiling Radar (CPR) onboard CloudSat. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.

  7. Collocating satellite-based radar and radiometer measurements - methodology and usage examples

    NASA Astrophysics Data System (ADS)

    Holl, G.; Buehler, S. A.; Rydberg, B.; Jiménez, C.

    2010-06-01

    Collocations between two satellite sensors are occasions where both sensors observe the same place at roughly the same time. We study collocations between the Microwave Humidity Sounder (MHS) on-board NOAA-18 and the Cloud Profiling Radar (CPR) on-board CloudSat. First, a simple method is presented to obtain those collocations and this method is compared with a more complicated approach found in literature. We present the statistical properties of the collocations, with particular attention to the effects of the differences in footprint size. For 2007, we find approximately two and a half million MHS measurements with CPR pixels close to their centrepoints. Most of those collocations contain at least ten CloudSat pixels and image relatively homogeneous scenes. In the second part, we present three possible applications for the collocations. Firstly, we use the collocations to validate an operational Ice Water Path (IWP) product from MHS measurements, produced by the National Environment Satellite, Data and Information System (NESDIS) in the Microwave Surface and Precipitation Products System (MSPPS). IWP values from the CloudSat CPR are found to be significantly larger than those from the MSPPS. Secondly, we compare the relation between IWP and MHS channel 5 (190.311 GHz) brightness temperature for two datasets: the collocated dataset, and an artificial dataset. We find a larger variability in the collocated dataset. Finally, we use the collocations to train an Artificial Neural Network and describe how we can use it to develop a new MHS-based IWP product. We also study the effect of adding measurements from the High Resolution Infrared Radiation Sounder (HIRS), channels 8 (11.11 μm) and 11 (8.33 μm). This shows a small improvement in the retrieval quality. The collocations described in the article are available for public use.

  8. Conceptual and methodological challenges to measuring political commitment to respond to HIV

    PubMed Central

    2011-01-01

    Background Researchers have long recognized the importance of a central government’s political “commitment” in order to mount an effective response to HIV. The concept of political commitment remains ill-defined, however, and little guidance has been given on how to measure this construct and its relationship with HIV-related outcomes. Several countries have experienced declines in HIV infection rates, but conceptual difficulties arise in linking these declines to political commitment as opposed to underlying social and behavioural factors. Methods This paper first presents a critical review of the literature on existing efforts to conceptualize and measure political commitment to respond to HIV and the linkages between political commitment and HIV-related outcomes. Based on the elements identified in this review, the paper then develops and presents a framework to assist researchers in making choices about how to assess a government's level of political commitment to respond to HIV and how to link political commitment to HIV-related outcomes. Results The review of existing studies identifies three components of commitment (expressed, institutional and budgetary commitment) as different dimensions along which commitment can be measured. The review also identifies normative and ideological aspects of commitment and a set of variables that mediate and moderate political commitment that need to be accounted for in order to draw valid inferences about the relationship between political commitment and HIV-related outcomes. The framework summarizes a set of steps that researchers can follow in order to assess a government's level of commitment to respond to HIV and suggests ways to apply the framework to country cases. Conclusions Whereas existing studies have adopted a limited and often ambiguous conception of political commitment, we argue that conceiving of political commitment along a greater number of dimensions will allow researchers to draw a more complete

  9. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  10. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  11. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  12. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  13. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  14. An evaluation of advantages and cost measurement methodology for leasing in the health care industry.

    PubMed

    Henry, J B; Roenfeldt, R L

    1977-01-01

    Lease financing in hospitals is growing rapidly. Many articles published on the topic of lease financing point only to the benefits that may be derived. Very few actually analyze the pros and cons of leasing from a financial cost-measurement point of view that includes real-world parameters. This article critically evaluates two articles published in this issue that lead the reader to believe leasing is, for the most part, a bargain compared with debt financing. The authors discuss some misconceptions in these articles and point out some facts as viewed from a financial analyst's position. PMID:840066

  15. Practical appraisal of sustainable development-Methodologies for sustainability measurement at settlement level

    SciTech Connect

    Moles, Richard; Foley, Walter; Morrissey, John; O'Regan, Bernadette

    2008-02-15

    This paper investigates the relationships between settlement size, functionality, geographic location and sustainable development. Analysis was carried out on a sample of 79 Irish settlements, located in three regional clusters. Two methods were selected to model the level of sustainability achieved in settlements, namely, Metabolism Accounting and Modelling of Material and Energy Flows (MA) and Sustainable Development Index Modelling. MA is a systematic assessment of the flows and stocks of material within a system defined in space and time. The metabolism of most settlements is essentially linear, with resources flowing through the urban system. The objective of this research on material and energy flows was to provide information that might aid in the development of a more circular pattern of urban metabolism, vital to sustainable development. In addition to MA, a set of forty indicators were identified and developed. These target important aspects of sustainable development: transport, environmental quality, equity and quality of life issues. Sustainability indices were derived through aggregation of indicators to measure dimensions of sustainable development. Similar relationships between settlement attributes and sustainability were found following both methods, and these were subsequently integrated to provide a single measure. Analysis identified those attributes of settlements preventing, impeding or promoting progress towards sustainability.

  16. Phoretic Force Measurement for Microparticles Under Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Davis, E. J.; Zheng, R.

    1999-01-01

    This theoretical and experimental investigation of the collisional interactions between gas molecules and solid and liquid surfaces of microparticles involves fundamental studies of the transfer of energy, mass and momentum between gas molecules and surfaces. The numerous applications include particle deposition on semiconductor surfaces and on surfaces in combustion processes, containerless processing, the production of nanophase materials, pigments and ceramic precursors, and pollution abatement technologies such as desulfurization of gaseous effluents from combustion processes. Of particular emphasis are the forces exerted on microparticles present in a nonuniform gas, that is, in gaseous surroundings involving temperature and concentration gradients. These so-called phoretic forces become the dominant forces when the gravitational force is diminished, and they are strongly dependent on the momentum transfer between gas molecules and the surface. The momentum transfer, in turn, depends on the gas and particle properties and the mean free path and kinetic energy of the gas molecules. The experimental program involves the particle levitation system shown. A micrometer size particle is held between two heat exchangers enclosed in a vacuum chamber by means of ac and dc electric fields. The ac field keeps the particle centered on the vertical axis of the chamber, and the dc field balances the gravitational force and the thermophoretic force. Some measurements of the thermophoretic force are presented in this paper.
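The dc balance condition described above gives the thermophoretic force directly: the electrostatic force on the charged particle balances gravity plus the thermophoretic force, so F(th) = qE - mg. A minimal sketch (the charge, field and mass values in the example are illustrative):

```python
G = 9.81  # gravitational acceleration, m/s^2

def thermophoretic_force(charge_c, e_field_v_per_m, mass_kg):
    """dc levitation balance: q*E = m*g + F_th, so F_th = q*E - m*g
    (positive F_th points downward, along gravity, in this convention)."""
    return charge_c * e_field_v_per_m - mass_kg * G
```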

  17. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Jorge O.; Hackert, Chris L.; Ni, Qingwen; Collier, Hughbert A.

    2000-09-22

    This report contains eight sections. Some individual subsections contain lists of references as well as figures and conclusions when appropriate. The first section includes the introduction and summary of the first-year project efforts. The next section describes the results of the project tasks: (1) implementation of theoretical relations between effect dispersion and the stochastic medium, (2) imaging analyses using core and well log data, (3) construction of dispersion and attenuation models at the core and borehole scales in poroelastic media, (4) petrophysics and a catalog of core and well log data from Siberia Ridge field, (5) acoustic/geotechnical measurements and CT imaging of core samples from Florida carbonates, and (6) development of an algorithm to predict pore size distribution from NMR core data. The last section includes a summary of accomplishments, technology transfer activities and follow-on work for Phase II.

  18. New contactless eddy current non-destructive methodology for electric conductivity measurement

    NASA Astrophysics Data System (ADS)

    Bouchala, T.; Abdelhadi, B.; Benoudjit, A.

    2015-01-01

    In this paper, a new method of contactless electric conductivity measurement is developed. This method is essentially based on the association of the coupled electric field forward model, which we have recently developed, with a simple and efficient search algorithm. The proposed method is very fast: 1.3 s is sufficient to calculate the electric conductivity, on a 2 GHz CPU with 3 GB of RAM, for a starting search interval of 1.72-17.2 %IACS and a tolerance of 1.72 × 10⁻⁵ %IACS. The study of the calculation time as a function of mesh density and starting interval width has shown that an optimal choice must be made in order to improve speed while preserving precision. Considering its rapidity and its simplicity of implementation, this method is better adapted to automated applications than direct current techniques using the Van der Pauw geometry.
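The interval-halving search described above can be sketched as follows (illustrative Python; the stand-in forward model and the monotonicity assumption are mine, not the authors'):

```python
def bisect_conductivity(forward_model, measured, lo=1.72, hi=17.2, tol=1.72e-5):
    # Shrink the search interval until it is narrower than the tolerance,
    # assuming the model response is monotonic in conductivity (%IACS).
    f_lo = forward_model(lo) - measured
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        f_mid = forward_model(mid) - measured
        if f_lo * f_mid <= 0:
            hi = mid
        else:
            lo, f_lo = mid, f_mid
    return 0.5 * (lo + hi)

# Hypothetical monotonic response standing in for the coupled electric
# field solver described in the abstract.
model = lambda sigma: 1.0 / (1.0 + sigma)
estimate = bisect_conductivity(model, model(5.0))
```

Each iteration halves the interval, so the 1.72-17.2 %IACS starting range reaches the stated 1.72 × 10⁻⁵ tolerance in about 20 forward-model evaluations.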

  19. Measurements of alpha particle energy using nuclear tracks in solids methodology.

    PubMed

    Espinosa, G; Amero, C; Gammage, R B

    2002-01-01

    In this paper we present a method for the measurement of alpha particle energy using polycarbonate materials as nuclear track detectors (NTDs). This method is based on the interaction of the radiation with the solid-state materials, using the relationship between the energy deposited in the material by the ionising particle and the track developed after an established chemical process. The determination of the geometrical parameters of the formed track, such as major axis, minor axis and overall track length, permit determination of the energy of the alpha particle. The track analysis is performed automatically using a digital image system, and the data are processed in a PC with commercial software. In this experiment 148Gd, 238U, 230Th, 239Pu and 244Cm alpha particle emitters were used. The values for alpha particle energy resolution, the linear response to energy, the confidence in the results and the automatisation of the procedure make this method a promising analysis system. PMID:12382812

  20. Workload capacity spaces: a unified methodology for response time measures of efficiency as workload is varied.

    PubMed

    Townsend, James T; Eidels, Ami

    2011-08-01

    Increasing the number of available sources of information may impair or facilitate performance, depending on the capacity of the processing system. Tests performed on response time distributions are proving to be useful tools in determining the workload capacity (as well as other properties) of cognitive systems. In this article, we develop a framework and relevant mathematical formulae that represent different capacity assays (Miller's race model bound, Grice's bound, and Townsend's capacity coefficient) in the same space. The new space allows a direct comparison between the distinct bounds and the capacity coefficient values and helps explicate the relationships among the different measures. An analogous common space is proposed for the AND paradigm, relating the capacity index to the Colonius-Vorberg bounds. We illustrate the effectiveness of the unified spaces by presenting data from two simulated models (standard parallel, coactive) and a prototypical visual detection experiment. A conversion table for the unified spaces is provided. PMID:21607804
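Townsend's capacity coefficient named in the abstract has a standard OR-paradigm form that can be estimated from empirical response-time survivor functions (a minimal sketch; variable names are assumptions):

```python
import numpy as np

def survivor(rts, t):
    # Empirical survivor function S(t) = P(RT > t).
    return float(np.mean(np.asarray(rts, dtype=float) > t))

def capacity_or(rt_redundant, rt_single_a, rt_single_b, t):
    # Townsend's OR-paradigm capacity coefficient:
    #   C(t) = log S_AB(t) / (log S_A(t) + log S_B(t)).
    # C(t) = 1 for an unlimited-capacity, independent, parallel model;
    # C(t) > 1 indicates super-capacity, C(t) < 1 limited capacity.
    return np.log(survivor(rt_redundant, t)) / (
        np.log(survivor(rt_single_a, t)) + np.log(survivor(rt_single_b, t)))
```

When the redundant-target survivor function factors as S_AB(t) = S_A(t)·S_B(t), the coefficient equals 1, which is the independent-race baseline the bounds are drawn around.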

  1. Experimental verification of a theory of the influence of measurement conditions on temperature measurement accuracy with IR systems

    NASA Astrophysics Data System (ADS)

    Chrzanowski, Krzysztof

    1996-07-01

    A theory of the influence of measurement conditions on temperature measurement accuracy with infrared systems has recently been presented. A comparison study of shortwave (3-5 μm) and longwave (8-12 μm) measuring IR cameras was conducted on the basis of this theory. The results of the simulations show that shortwave systems in typical measurement conditions generally offer better temperature measurement accuracy than longwave systems. Some experiments using a commercially available IR camera were carried out to verify the theory. The results of these experiments and a discussion of the theory's limitations are presented.

  2. The Ocean Colour Climate Change Initiative: I. A Methodology for Assessing Atmospheric Correction Processors Based on In-Situ Measurements

    NASA Technical Reports Server (NTRS)

    Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic; Platt, Trevor; Regner, Peter; Sathyendranath, Shubha; Steinmetz, Francois; Swinton, John

    2015-01-01

    The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparing in-situ and satellite-derived water-leaving reflectance spectra is extended by a ranking system. Statistical parameters such as root mean square error and bias, together with measures of goodness of fit, are transformed into relative scores that rank the quality of the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focuses on the scope of the methodology rather than the properties of the individual processors.
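The score-then-bootstrap idea can be illustrated with a toy version in which each processor's RMSE against the shared in-situ match-up database is recomputed over bootstrap resamples (a sketch of the general approach, not the paper's exact scoring rules):

```python
import math
import random

def rmse(pairs):
    # Root mean square error between in-situ and satellite-derived values.
    return math.sqrt(sum((sat - insitu) ** 2 for insitu, sat in pairs) / len(pairs))

def bootstrap_win_fraction(pairs_by_processor, n_boot=500, seed=0):
    # Resample the common match-up database with replacement and count how
    # often each processor attains the lowest RMSE, exposing how sensitive
    # the ranking is to the selected in-situ data.
    rng = random.Random(seed)
    processors = list(pairs_by_processor)
    n = len(pairs_by_processor[processors[0]])
    wins = dict.fromkeys(processors, 0)
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        scores = {p: rmse([pairs_by_processor[p][i] for i in idx])
                  for p in processors}
        wins[min(scores, key=scores.get)] += 1
    return {p: wins[p] / n_boot for p in processors}
```

A processor that wins in nearly every resample is robustly preferred; win fractions near 50% signal that the ranking depends on the particular match-up selection.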

  3. Methodology for evaluating lateral boundary conditions in the regional chemical transport model MATCH (v5.5.0) using combined satellite and ground-based observations

    NASA Astrophysics Data System (ADS)

    Andersson, E.; Kahnert, M.; Devasthale, A.

    2015-11-01

    Hemispheric transport of air pollutants can have a significant impact on regional air quality, as well as on the effect of air pollutants on regional climate. An accurate representation of hemispheric transport in regional chemical transport models (CTMs) depends on the specification of the lateral boundary conditions (LBCs). This study focuses on the methodology for evaluating LBCs of two moderately long-lived trace gases, carbon monoxide (CO) and ozone (O3), for the European model domain and over a 7-year period, 2006-2012. The method is based on combining the use of satellite observations at the lateral boundary with the use of both satellite and in situ ground observations within the model domain. The LBCs are generated by the global European Monitoring and Evaluation Programme Meteorological Synthesizing Centre - West (EMEP MSC-W) model; they are evaluated at the lateral boundaries by comparison with satellite observations of the Terra-MOPITT (Measurements Of Pollution In The Troposphere) sensor (CO) and the Aura-OMI (Ozone Monitoring Instrument) sensor (O3). The LBCs from the global model lie well within the satellite uncertainties for both CO and O3. The biases increase below 700 hPa for both species. However, the satellite retrievals below this height are strongly influenced by the a priori data; hence, they are less reliable than at, e.g. 500 hPa. CO is, on average, underestimated by the global model, while O3 tends to be overestimated during winter, and underestimated during summer. A regional CTM is run with (a) the validated monthly climatological LBCs from the global model; (b) dynamical LBCs from the global model; and (c) constant LBCs based on in situ ground observations near the domain boundary. The results are validated against independent satellite retrievals from the Aqua-AIRS (Atmospheric InfraRed Sounder) sensor at 500 hPa, and against in situ ground observations from the Global Atmospheric Watch (GAW) network. 
It is found that (i) the use of

  4. Development of a cognitive bias methodology for measuring low mood in chimpanzees

    PubMed Central

    Nettle, Daniel

    2015-01-01

    There is an ethical and scientific need for objective, well-validated measures of low mood in captive chimpanzees. We describe the development of a novel cognitive task designed to measure ‘pessimistic’ bias in judgments of expectation of reward, a cognitive marker of low mood previously validated in a wide range of species, and report training and test data from three common chimpanzees (Pan troglodytes). The chimpanzees were trained on an arbitrary visual discrimination in which lifting a pale grey paper cone was associated with reinforcement with a peanut, whereas lifting a dark grey cone was associated with no reward. The discrimination was trained by sequentially presenting the two cone types until significant differences in latency to touch the cone types emerged, and was confirmed by simultaneously presenting both cone types in choice trials. Subjects were subsequently tested on their latency to touch unrewarded cones of three intermediate shades of grey not previously seen. Pessimism was indicated by the similarity between the latency to touch intermediate cones and the latency to touch the trained, unreinforced, dark grey cones. Three subjects completed training and testing, two adult males and one adult female. All subjects learnt the discrimination (107–240 trials), and retained it during five sessions of testing. There was no evidence that latencies to lift intermediate cones increased over testing, as would have occurred if subjects learnt that these were never rewarded, suggesting that the task could be used for repeated testing of individual animals. There was a significant difference between subjects in their relative latencies to touch intermediate cones (pessimism index) that emerged following the second test session, and was not changed by the addition of further data. The most dominant male subject was least pessimistic, and the female most pessimistic. We argue that the task has the potential to be used to assess longitudinal changes in

  5. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. 
For several

  6. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2014-01-01

    Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-inch chord, 2-D straight wing with NACA 23012 airfoil section. For six ice accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-inch chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For four of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3% with corresponding differences in stall angle of approximately one degree or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several of the ice

  7. Methodology to apportion ambient air measurements to investigate potential effects on air quality near waste incinerators

    SciTech Connect

    Mukerjee, S.; Fox, D.L.; Stevens, R.K.; Shy, C.M.; Vescio, N.

    1993-01-01

    Ambient air samples at four sites located near two incinerators (a biomedical waste and a municipal incinerator) in the vicinity of Charlotte, North Carolina were acquired as part of a health effects study that is examining potential, short-term, lung dysfunctions associated with incinerator and other source emissions. Ambient monitoring was performed for one month intervals at a treatment and control community site for each of the two incinerator locations. Twelve-hour ambient samples were acquired by means of a Versatile Air Pollution Sampler (VAPS) which enabled sampling for fine (< 2.5 micrometers) and coarse (2.5 - 10 micrometers) particulate matter, acid-gases by diffusion sampling and fine carbon sampling on quartz filters. X-ray Fluorescence Spectroscopy (XRF) was used on the coarse and fine particulate filters to measure metals while Ion Chromatography (IC) analyzed acid gases. The Chemical Mass Balance Receptor Model (CMB) was then used on the average ambient data from each wind vector to apportion the contribution of ambient pollutants which were attributable to the sources of interest from a given wind direction.

  8. Binding interaction between sorafenib and calf thymus DNA: spectroscopic methodology, viscosity measurement and molecular docking.

    PubMed

    Shi, Jie-Hua; Chen, Jun; Wang, Jing; Zhu, Ying-Yao

    2015-02-01

    The binding interaction of sorafenib with calf thymus DNA (ct-DNA) was studied using UV-vis absorption spectroscopy, fluorescence emission spectroscopy, circular dichroism (CD), viscosity measurement and molecular docking methods. The experimental results revealed that there was obvious binding interaction between sorafenib and ct-DNA. The binding constant (Kb) of sorafenib with ct-DNA was 5.6 × 10³ M⁻¹ at 298 K. The enthalpy and entropy changes (ΔH⁰ and ΔS⁰) in the binding process were -27.66 kJ mol⁻¹ and -21.02 J mol⁻¹ K⁻¹, respectively, indicating that the main binding forces were van der Waals forces and hydrogen bonding. The docking results suggested that sorafenib preferred to bind to the minor groove of A-T-rich DNA and that the binding site was 4 base pairs long. A clear conformational change of sorafenib in the sorafenib-DNA complex was observed, closely related to the structure of the DNA, implying that the flexibility of the sorafenib molecule played an important role in the formation of the stable sorafenib-ct-DNA complex. PMID:25311519
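The reported thermodynamic quantities can be cross-checked: ΔG from the Gibbs relation ΔG = ΔH − TΔS should agree with ΔG = −RT ln Kb (values taken from the abstract; the consistency check itself is mine):

```python
import math

R = 8.314      # gas constant, J mol^-1 K^-1
T = 298.0      # K
Kb = 5.6e3     # binding constant, M^-1 (from the abstract)
dH = -27.66e3  # J mol^-1
dS = -21.02    # J mol^-1 K^-1

dG_from_HS = dH - T * dS             # Gibbs relation
dG_from_Kb = -R * T * math.log(Kb)   # Delta G = -RT ln Kb
# Both routes give roughly -21.4 kJ mol^-1, so the reported Kb, dH and dS
# are mutually consistent.
```

The agreement to within about 0.02 kJ mol⁻¹ is what one would expect if ΔH and ΔS were derived from a van't Hoff analysis of Kb at several temperatures.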

  9. Biochemical quantification of sympathetic nervous activity in humans using radiotracer methodology: fallibility of plasma noradrenaline measurements

    SciTech Connect

    Esler, M.; Leonard, P.; O'Dea, K.; Jackman, G.; Jennings, G.; Korner, P.

    1982-01-01

    We have developed radiotracer techniques for studying noradrenaline kinetics, to better assess sympathetic nervous system function in humans. Tritiated l-noradrenaline was infused intravenously (0.35 microCi/m2/min) to plateau plasma concentration. Noradrenaline plasma clearance was calculated from plasma tritiated noradrenaline concentration at steady state, and the rate of spillover of noradrenaline to plasma derived from plasma noradrenaline specific radioactivity. Mean noradrenaline spillover at rest in 34 normal subjects was 0.33 micrograms/m2/min (range 0.17-0.61 micrograms/m2/min). Predictably, noradrenaline spillover was reduced in patients with subnormal sympathetic nervous system activity, 0.16 +/- 0.09 micrograms/m2/min in eight patients with idiopathic peripheral autonomic insufficiency, and 0.11 +/- 0.07 micrograms/m2/min (mean +/- SD) in six patients with essential hypertension treated with clonidine (0.45 mg daily). Noradrenaline plasma clearance in normal subjects was 1.32 +/- 0.28 L/m2/min. Clearance fell with age, causing the previously described rise in plasma noradrenaline concentration with aging. Unexpected effects of drugs were encountered, for example chronic beta-adrenergic blockade in patients with essential hypertension reduced noradrenaline clearance. Plasma noradrenaline concentration measurements were not in agreement with noradrenaline release rate values, and do not reliably indicate sympathetic nervous system activity, in instances such as these where noradrenaline clearance is abnormal.
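The steady-state tracer relations described above reduce to two lines of arithmetic (illustrative numbers chosen to reproduce the reported normal mean spillover; they are not measured values):

```python
def noradrenaline_kinetics(tracer_infusion_rate, tracer_plasma_conc, plasma_na_conc):
    # Steady-state radiotracer relations used in the abstract:
    #   clearance = tracer infusion rate / plateau plasma tracer concentration
    #   spillover = endogenous plasma noradrenaline * clearance
    clearance = tracer_infusion_rate / tracer_plasma_conc
    spillover = plasma_na_conc * clearance
    return clearance, spillover

# Illustrative: a clearance of 1.32 L/m2/min and a plasma noradrenaline of
# 0.25 ug/L imply a spillover of 0.33 ug/m2/min, the reported normal mean.
clearance, spillover = noradrenaline_kinetics(1.32, 1.0, 0.25)
```

The second relation makes the paper's caution concrete: if clearance halves (e.g. with age or beta-blockade), plasma concentration doubles even though spillover, the true index of sympathetic activity, is unchanged.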

  10. Binding interaction between sorafenib and calf thymus DNA: Spectroscopic methodology, viscosity measurement and molecular docking

    NASA Astrophysics Data System (ADS)

    Shi, Jie-Hua; Chen, Jun; Wang, Jing; Zhu, Ying-Yao

    2015-02-01

    The binding interaction of sorafenib with calf thymus DNA (ct-DNA) was studied using UV-vis absorption spectroscopy, fluorescence emission spectroscopy, circular dichroism (CD), viscosity measurement and molecular docking methods. The experimental results revealed that there was obvious binding interaction between sorafenib and ct-DNA. The binding constant (Kb) of sorafenib with ct-DNA was 5.6 × 10³ M⁻¹ at 298 K. The enthalpy and entropy changes (ΔH⁰ and ΔS⁰) in the binding process were -27.66 kJ mol⁻¹ and -21.02 J mol⁻¹ K⁻¹, respectively, indicating that the main binding forces were van der Waals forces and hydrogen bonding. The docking results suggested that sorafenib preferred to bind to the minor groove of A-T-rich DNA and that the binding site was 4 base pairs long. A clear conformational change of sorafenib in the sorafenib-DNA complex was observed, closely related to the structure of the DNA, implying that the flexibility of the sorafenib molecule played an important role in the formation of the stable sorafenib-ct-DNA complex.

  11. Arduino-based control system for measuring ammonia in air using conditionally-deployed diffusive samplers

    NASA Astrophysics Data System (ADS)

    Ham, J. M.; Williams, C.; Shonkwiler, K. B.

    2012-12-01

    Arduino microcontrollers, wireless modules, and other low-cost hardware were used to develop a new type of air sampler for monitoring ammonia at strong areal sources like dairies, cattle feedlots, and waste treatment facilities. Ammonia was sampled at multiple locations on the periphery of an operation using Radiello diffusive passive samplers (Cod. RAD168 and RAD1201, Sigma-Aldrich). However, the samplers were not continuously exposed to the air. Instead, each sampling station included two diffusive samplers housed in specialized tubes that sealed the cartridges from the atmosphere. If a user-defined set of wind and weather conditions were met, the Radiellos were deployed into the air using a micro linear actuator. Each station was solar-powered and controlled by Arduinos that were linked to a central weather station using Xbee wireless modules (Digi International Inc.). The Arduinos also measured the total time of exposure using hall-effect sensors to verify the position of the cartridge (i.e., deployed or retracted). The decision to expose or retract the samplers was made every five minutes based on wind direction, wind speed, and time of day. Typically, the diffusive samplers were replaced with fresh cartridges every two weeks and the used samplers were analyzed in the laboratory using ion chromatography. Initial studies were conducted at a commercial dairy in northern Colorado. Ammonia emissions along the Front Range of Colorado can be transported into the mountains where atmospheric deposition of nitrogen can impact alpine ecosystems. Therefore, low-cost air quality monitoring equipment is needed that can be widely deployed in the region. Initial work at the dairy showed that ammonia concentrations ranged from 600 to 1200 ppb during the summer; the highest concentrations were downwind of a large anaerobic lagoon. Time-averaged ammonia concentrations were also used to approximate emissions using inverse dispersion models. This methodology provides a
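The five-minute deploy/retract decision can be sketched as a simple predicate (the wind sector, speed threshold and daytime window below are hypothetical defaults, not the study's actual settings):

```python
def should_deploy(wind_dir_deg, wind_speed_ms, hour,
                  sector=(150.0, 210.0), min_speed_ms=1.0,
                  day_start=6, day_end=20):
    # Re-evaluated every five minutes, as in the abstract: expose the
    # diffusive samplers only when the wind comes from the source sector,
    # is strong enough to carry the plume, and it is the sampling window.
    in_sector = sector[0] <= wind_dir_deg <= sector[1]
    windy_enough = wind_speed_ms >= min_speed_ms
    daytime = day_start <= hour < day_end
    return in_sector and windy_enough and daytime
```

On the Arduino itself the same predicate would drive the linear actuator, with the hall-effect sensor confirming the cartridge position and an accumulator totalling exposure time for the time-averaged concentration calculation.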

  12. Importance of methodological standardization for the ektacytometric measures of red blood cell deformability in sickle cell anemia.

    PubMed

    Renoux, Céline; Parrow, Nermi; Faes, Camille; Joly, Philippe; Hardeman, Max; Tisdale, John; Levine, Mark; Garnier, Nathalie; Bertrand, Yves; Kebaili, Kamila; Cuzzubbo, Daniela; Cannas, Giovanna; Martin, Cyril; Connes, Philippe

    2016-01-01

    Red blood cell (RBC) deformability is severely decreased in patients with sickle cell anemia (SCA), which plays a role in the pathophysiology of the disease. However, investigation of RBC deformability from SCA patients demands careful methodological considerations. We assessed RBC deformability by ektacytometry (LORRCA MaxSis, Mechatronics, The Netherlands) in 6 healthy individuals and 49 SCA patients and tested the effects of different heights of the RBC diffraction patterns, obtained by altering the camera gain of the LORRCA, on the result of RBC deformability measurements, expressed as Elongation Index (EI). Results indicate that the pattern of RBCs from control subjects adopts an elliptical shape under shear stress, whereas the pattern of RBCs from individuals with SCA adopts a diamond shape arising from the superposition of elliptical and circular patterns. The latter represent rigid RBCs. While the EI measures did not change with the variations of the RBC diffraction pattern heights in the control subjects, we observed a decrease of EI when the RBC diffraction pattern height is increased in the SCA group. The differences in SCA EI values measured at 5 Pa between the different diffraction pattern heights correlated with the percent of hemoglobin S and the percent of sickled RBC observed by microscopy. Our study confirms that the camera gain or aperture of the ektacytometer should be used to standardize the RBC diffraction pattern height when measuring RBC deformability in sickle cell patients and underscores the potential clinical utility of this technique. PMID:26444610
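The elongation index reported by the ektacytometer is a simple function of the diffraction pattern's axes (a minimal sketch, assuming the standard EI definition):

```python
def elongation_index(major_axis, minor_axis):
    # Ektacytometric elongation index: EI = (A - B) / (A + B), where A and
    # B are the major and minor axes of the laser diffraction pattern.
    # A circular pattern (rigid cells) gives EI = 0; the more elongated
    # the pattern under shear, the closer EI approaches 1.
    return (major_axis - minor_axis) / (major_axis + minor_axis)
```

This definition also shows why the diamond-shaped SCA patterns are problematic: a superposition of elliptical (deformable) and circular (rigid) components has no single pair of well-defined axes, so the fitted EI depends on how much of the pattern the camera gain captures.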

  13. Antioxidant Activity of Selected Thyme (Thymus L.) Species and Study of the Equivalence of Different Measuring Methodologies.

    PubMed

    Orłowska, Marta; Kowalska, Teresa; Sajewicz, Mieczysław; Pytlakowska, Katarzyna; Bartoszek, Mariola; Polak, Justyna; Waksmundzka-Hajnos, Monika

    2015-01-01

    This study presents the results of comparative evaluation of the antioxidant activity of the phenolic fraction exhaustively extracted with aqueous methanol from 18 different thyme (Thymus L.) specimens and species. This evaluation is made with use of the same free radical source (DPPH• radical), three different free radical scavenging models (gallic acid, ascorbic acid, and Trolox), and three different measuring techniques (the dot blot test, UV-Vis spectrophotometry, and electron paramagnetic resonance spectroscopy, EPR). A comparison of the equivalence of these three different measuring techniques (performed with use of hierarchical clustering with Euclidean distance as a similarity measure and Ward's linkage) is particularly important in view of the fact that different laboratories use different antioxidant activity measuring techniques, which makes any interlaboratory comparison hardly possible. The results obtained confirm a semiquantitative equivalence among the three compared methodologies, and a proposal is made of a simple and cost-effective dot blot test that uses the DPPH• radical and provides differentiation of antioxidant activity of herbal matter comparable with the results of the UV-Vis spectrophotometry and EPR. PMID:26268966

  14. Treatment of wastewater effluents from paper-recycling plants by coagulation process and optimization of treatment conditions with response surface methodology

    NASA Astrophysics Data System (ADS)

    Birjandi, Noushin; Younesi, Habibollah; Bahramifar, Nader

    2014-09-01

    In the present study, a coagulation process was used to treat paper-recycling wastewater with alum coupled with poly aluminum chloride (PACl) as coagulants. The effect of four factors, viz. the dosages of alum and PACl, pH and chemical oxygen demand (COD), on the treatment efficiency was investigated. The influence of these four parameters was described using response surface methodology under a central composite design. The reductions in turbidity and COD and the sludge volume index (SVI) were taken as the responses. The optimum conditions for high treatment efficiency of paper-recycling wastewater under experimental conditions were reached by numerical optimization of the coagulant doses and pH, at 1,550 mg/l alum, 1,314 mg/l PACl and pH 9.5, yielding reductions of 80.02 % in COD and 83.23 % in turbidity, and an SVI of 140 ml/g.
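Response surface methodology under a central composite design fits a second-order polynomial to the measured responses; a minimal least-squares version (two factors for brevity, whereas the study used four) might look like:

```python
import numpy as np

def fit_quadratic_rsm(x1, x2, y):
    # Fit the full second-order response-surface model used with central
    # composite designs:
    #   y = b0 + b1*x1 + b2*x2 + b12*x1*x2 + b11*x1**2 + b22*x2**2
    # x1, x2 are coded factor levels; y is the measured response
    # (e.g. percent COD reduction).
    X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef
```

Numerical optimization of the fitted surface (e.g. evaluating it over a grid of coded factor levels) then yields the optimum dose and pH combination reported in the abstract.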

  15. Measurement of heat stress conditions at cow level and comparison to climate conditions at stationary locations inside a dairy barn.

    PubMed

    Schüller, Laura K; Heuwieser, Wolfgang

    2016-08-01

    The objectives of this study were to examine heat stress conditions at cow level and to investigate the relationship to the climate conditions at 5 different stationary locations inside a dairy barn. In addition, we compared the climate conditions at cow level between primiparous and multiparous cows for a period of 1 week after regrouping. The temperature-humidity index (THI) differed significantly between all stationary loggers. The lowest THI was measured at the window logger in the experimental stall and the highest THI was measured at the central logger in the experimental stall. The THI at the mobile cow loggers was 2.33 THI points higher than at the stationary loggers. Furthermore, the mean daily THI was higher at the mobile cow loggers than at the stationary loggers on all experimental days. The THI in the experimental pen was 0.44 THI points lower when the experimental cow group was located inside the milking parlour. The THI measured at the mobile cow loggers was 1.63 THI points higher when the experimental cow group was located inside the milking parlour. However, there was no significant difference for all climate variables between primiparous and multiparous cows. These results indicate that there is a wide range of climate conditions inside a dairy barn, and that areas far from a fresh air supply in particular are at increased risk for the occurrence of heat stress conditions. Furthermore, the heat stress conditions are even higher at cow level, and cows not only influence their climatic environment but also generate microclimates within different locations inside the barn. Therefore climate conditions should be obtained at cow level to evaluate the heat stress conditions that dairy cows are actually exposed to. PMID:27600964
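A THI computation of the kind used here can be sketched as follows (the NRC (1971) formulation is a common choice in dairy heat-stress work; the abstract does not state which variant the authors used):

```python
def thi(temp_c, rh_percent):
    # NRC (1971) temperature-humidity index from dry-bulb temperature
    # (deg C) and relative humidity (%). At 100% RH the index collapses
    # to the dry-bulb temperature in degrees Fahrenheit.
    return (1.8 * temp_c + 32) - (0.55 - 0.0055 * rh_percent) * (1.8 * temp_c - 26)
```

For example, 25 °C at 50% relative humidity gives a THI of about 71.8, just below the ~72 threshold often cited for the onset of mild heat stress in dairy cows.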

  16. A methodology for velocity field measurement in multiphase high-pressure flow of CO2 and water in micromodels

    NASA Astrophysics Data System (ADS)

    Kazemifar, Farzan; Blois, Gianluca; Kyritsis, Dimitrios C.; Christensen, Kenneth T.

    2015-04-01

    This paper presents a novel methodology for capturing instantaneous, temporally and spatially resolved velocity fields in an immiscible multiphase flow of liquid/supercritical CO2 and water through a porous micromodel. Of interest is quantifying pore-scale flow processes relevant to geological CO2 sequestration and enhanced oil recovery, and in particular, at thermodynamic conditions relevant to geological reservoirs. A previously developed two-color microscopic particle image velocimetry approach is combined with a high-pressure apparatus, facilitating flow quantification of water interacting with supercritical CO2. This technique simultaneously resolves (in space and time) the aqueous phase velocity field as well as the dynamics of the menisci. The method and the experimental apparatus are detailed, and the results are presented to demonstrate its unique capabilities for studying pore-scale dynamics of CO2-water interactions. Simultaneous identification of the boundary between the two fluid phases and quantification of the instantaneous velocity field in the aqueous phase provides a step change in capability for investigating multiphase flow physics at the pore scale at reservoir-relevant conditions.

  17. New Methodology for Evaluating Optimal Pricing for Primary Regulation of Deregulated Power Systems under Steady State Condition

    NASA Astrophysics Data System (ADS)

    Satyaramesh, P. V.; RadhaKrishna, C.

    2013-06-01

    A generalized pricing structure for procurement of power under the frequency ancillary service is developed in this paper. It is a frequency-linked price model suitable for a deregulated market environment. This model takes into consideration governor characteristics and frequency characteristics of the generator as additional parameters in the load flow method. The main objective of the new approach proposed in this paper is to establish a bidding price structure for frequency regulation services in competitive ancillary electrical markets under steady-state conditions. Many published studies calculate frequency deviations with respect to load changes using dynamic simulation methods. In this paper, by contrast, the model computes the frequency deviations for additional power requirements under steady state while taking the power system network topology into account. An attempt is also made to develop an optimal bidding price structure for frequency-regulated systems. It signals to traders and bidders that power demand can be assessed more accurately, much closer to real time, and helps participants bid more accurate quantities on the day-ahead market. Recent trends in frequency-linked price models in the Indian power system, and the issues requiring attention, are also addressed. Test calculations have been performed on a 30-bus system. The paper also explains the applicability of this model to the practical Indian power system. The results presented are analyzed and useful conclusions are drawn.

  18. The COSMIN checklist for evaluating the methodological quality of studies on measurement properties: A clarification of its content

    PubMed Central

    2010-01-01

    Background The COSMIN checklist (COnsensus-based Standards for the selection of health status Measurement INstruments) was developed in an international Delphi study to evaluate the methodological quality of studies on measurement properties of health-related patient reported outcomes (HR-PROs). In this paper, we explain our choices for the design requirements and preferred statistical methods for which no evidence is available in the literature or on which the Delphi panel members had substantial discussion. Methods The issues described in this paper are a reflection of the Delphi process in which 43 panel members participated. Results The topics discussed are internal consistency (relevance for reflective and formative models, and distinction with unidimensionality), content validity (judging relevance and comprehensiveness), hypotheses testing as an aspect of construct validity (specificity of hypotheses), criterion validity (relevance for PROs), and responsiveness (concept and relation to validity, and (in) appropriate measures). Conclusions We expect that this paper will contribute to a better understanding of the rationale behind the items, thereby enhancing the acceptance and use of the COSMIN checklist. PMID:20298572

  19. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    PubMed Central

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

    INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck region and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure the external human postural attitude in frontal and sagittal planes, with proper validity and reliability analyses. METHODS The variable postures of 78 subjects (36 men, 42 women; age 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes, using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen’s Kappa coefficient (> 0.87) and Pearson’s correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173
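    The qualitative reliability reported above is based on Cohen's kappa. A generic unweighted two-rater sketch (the study's exact weighting scheme is not specified here):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two raters scoring the same items."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # observed agreement: fraction of items the raters score identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: from each rater's marginal category frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    expected = sum(ca[k] * cb.get(k, 0) for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)
```

    Values above about 0.8, as in the study, are conventionally read as near-perfect agreement.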

  20. Biodegradation of cyanide by a new isolated strain under alkaline conditions and optimization by response surface methodology (RSM)

    PubMed Central

    2014-01-01

    Background Biodegradation of free cyanide from industrial wastewaters has been proven a viable and robust method for treating cyanide-containing wastewaters. Results Cyanide-degrading bacteria were isolated from a wastewater treatment plant for coke-oven-gas condensate by the enrichment culture technique. Five strains were able to use cyanide as the sole nitrogen source under alkaline conditions; among them, one strain (C2) was selected for further study on the basis of its higher cyanide degradation efficiency. The bacterium tolerated free cyanide at concentrations of up to 500 ppm, which makes it a potentially good candidate for the biological treatment of cyanide-contaminated residues. Cyanide degradation corresponded with growth and reached a maximum of 96% during the exponential phase. The highest growth rate (1.23 × 10⁸) was obtained on day 4 of incubation. Both glucose and fructose were suitable carbon sources for cyanotrophic growth; no growth was detected in media with cyanide as the sole carbon source. Four control factors (pH, temperature, agitation speed and glucose concentration) were optimized according to a central composite design in the response surface method. Cyanide degradation was optimal at 34.2°C, pH 10.3 and a glucose concentration of 0.44 g/l. Conclusions Bacterial species degrade cyanide into less toxic products, as they are able to use the cyanide as a nitrogen source, forming ammonia and carbon dioxide as end products. The alkaliphilic bacterial strains screened in this study evidently possess degradative activities that can be harnessed to remediate cyanide wastes. PMID:24921051
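    The central composite design / response surface step used here (and in several records below) amounts to fitting a second-order polynomial to the coded design points and solving for its stationary point. A minimal sketch for two coded factors (the studies typically use three or four factors and dedicated software):

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef  # b0, b1, b2, b11, b22, b12

def stationary_point(coef):
    """Solve grad = 0 for the fitted surface (optimum if the Hessian is definite)."""
    _, b1, b2, b11, b22, b12 = coef
    H = np.array([[2 * b11, b12], [b12, 2 * b22]])  # Hessian of the quadratic
    return np.linalg.solve(H, -np.array([b1, b2]))
```

    Running `stationary_point` on the fitted coefficients gives the coded optimum, which is then mapped back to natural units (temperature, pH, concentration).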

  1. Optimization of Reflux Conditions for Total Flavonoid and Total Phenolic Extraction and Enhanced Antioxidant Capacity in Pandan (Pandanus amaryllifolius Roxb.) Using Response Surface Methodology

    PubMed Central

    Ghasemzadeh, Ali; Jaafar, Hawa Z. E.

    2014-01-01

    Response surface methodology was applied to optimize the conditions for reflux extraction of Pandan (Pandanus amaryllifolius Roxb.) in order to achieve high contents of total flavonoids (TF) and total phenolics (TP) and high antioxidant capacity (AC) in the extracts. A central composite experimental design with three factors and three levels was employed to consider the effects of the operation parameters, namely the methanol concentration (MC, 40%–80%), extraction temperature (ET, 40–70°C), and liquid-to-solid ratio (LS ratio, 20–40 mL/g), on the properties of the extracts. Response surface plots showed that these operation parameters significantly affected the responses. The TF content and AC were maximized when the extraction conditions (MC, ET, and LS ratio) were 78.8%, 69.5°C, and 32.4 mL/g, respectively, whereas the TP content was optimal when these variables were 75.1%, 70°C, and 31.8 mL/g, respectively. Under these optimum conditions, the experimental TF content, TP content, and AC were 1.78 mg/g DW, 6.601 mg/g DW, and 87.38%, respectively. The optimized model was validated by a comparison of the predicted and experimental values, which were found to be in agreement, indicating the suitability of the model for optimizing the conditions for the reflux extraction of Pandan. PMID:25147852

  2. Guidelines for measuring the physical, chemical, and biological condition of wilderness ecosystems. General technical report (Final)

    SciTech Connect

    Fox, D.G.; Bernabo, J.C.; Hood, B.

    1987-11-01

    Guidelines include a large number of specific measures to characterize the existing condition of wilderness resources. Measures involve the atmospheric environment, water chemistry and biology, geology and soils, and flora. Where possible, measures are coordinated with existing long-term monitoring programs. Application of the measures will allow more effective evaluation of proposed new air-pollution sources.

  3. Influence of measurement conditions and system parameters on accuracy of remote temperature measurement with dualspectral IR systems

    NASA Astrophysics Data System (ADS)

    Chrzanowski, K.

    1996-04-01

    A theory of the influence of measurement conditions and system parameters on the accuracy of remote temperature measurement with dualspectral IR systems has been developed. Using this theory, an analysis was made of how disturbances of the measurement process (system noise, the spectrally variable emissivity of the tested object, radiation reflected by the object, the limited transmittance of the atmosphere, and radiation emitted by the filters and the optics) affect the accuracy of dualspectral IR systems. The results show that the accuracy depends strongly on the measurement conditions and the system parameters.

  4. Examining methodological variation in response inhibition: The effects of outcome measures and task characteristics on age-related differences.

    PubMed

    Klenberg, Liisa; Närhi, Vesa; Korkman, Marit; Hokkanen, Laura

    2015-01-01

    This study addressed methodological issues common to developmental studies of response inhibition. Age-related differences were investigated using two Stroop-like tasks with different levels of complexity, comparing different outcome measures in a sample of 340 children and adolescents aged 7-15 years. First, speed and accuracy of task performance were examined; the results showed that improvement in speed continued until age 13 in both the basic naming task and the two inhibition tasks. Improvement in accuracy was less consistent and continued until age 9 or 13 years. Second, two different algorithms were employed to control for the effects of basic processes in the inhibition tasks. The difference algorithm indicated age-related differences similar to those for speed. The ratio algorithm, however, suggested an earlier deceleration in the development of response inhibition, at 9 or 11 years of age. Factors related to the cognitive requirements and presented stimuli also affected the results. The present findings shed light on the inconsistencies in developmental studies of response inhibition and demonstrate that the selection of outcome measures and task characteristics is critical because it affects the way development is depicted. PMID:25175830
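    The two control algorithms contrasted in the study reduce to simple arithmetic on task completion times. A sketch with illustrative variable names (the study's actual scoring may include further corrections):

```python
def difference_score(inhibition_time, naming_time):
    """Additive interference cost: extra time (s) the inhibition task requires."""
    return inhibition_time - naming_time

def ratio_score(inhibition_time, naming_time):
    """Proportional interference cost: inhibition time relative to baseline naming."""
    return inhibition_time / naming_time
```

    Because the difference score scales with overall speed while the ratio score does not, the two can date the end of development differently, which is the inconsistency the study highlights.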

  5. Intra-rater reliability when using a tympanic thermometer under different self-measurement conditions.

    PubMed

    Yoo, Won-Gyu

    2016-07-01

    [Purpose] This study investigated intra-rater reliability when using a tympanic thermometer under different self-measurement conditions. [Subjects and Methods] Ten males participated. Intra-rater reliability was assessed by comparing the values under three conditions of measurement using a tympanic thermometer. Intraclass correlation coefficients were used to assess intra-rater reliability. [Results] According to the intraclass correlation coefficient analysis, reliability could be ranked according to the conditions of measurement. [Conclusion] The results showed that self-measurement of body temperature is more precise when combined with common sense and basic education about the anatomy of the eardrum. PMID:27512269

  6. Intra-rater reliability when using a tympanic thermometer under different self-measurement conditions

    PubMed Central

    Yoo, Won-gyu

    2016-01-01

    [Purpose] This study investigated intra-rater reliability when using a tympanic thermometer under different self-measurement conditions. [Subjects and Methods] Ten males participated. Intra-rater reliability was assessed by comparing the values under three conditions of measurement using a tympanic thermometer. Intraclass correlation coefficients were used to assess intra-rater reliability. [Results] According to the intraclass correlation coefficient analysis, reliability could be ranked according to the conditions of measurement. [Conclusion] The results showed that self-measurement of body temperature is more precise when combined with common sense and basic education about the anatomy of the eardrum. PMID:27512269
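    The intraclass correlation coefficients used in the two records above can be computed with the one-way random-effects form ICC(1,1). A sketch assuming an n-subjects-by-k-trials table of readings (the study's exact ICC variant is not specified here):

```python
import numpy as np

def icc_oneway(data):
    """ICC(1,1) from an (n_subjects, k_trials) array of repeated measurements."""
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    # between-subjects mean square
    msb = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # within-subjects mean square
    msw = ((data - data.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)
```

    An ICC near 1 means repeated self-measurements under a given condition agree closely, which is how the measurement conditions would be ranked.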

  7. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials

    PubMed Central

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-01-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials. PMID:27194379

  8. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials

    NASA Astrophysics Data System (ADS)

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-05-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials.

  9. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials.

    PubMed

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-01-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials. PMID:27194379

  10. A scuba diving direct sediment sampling methodology on benthic transects in glacial lakes: procedure description, safety measures, and tests results.

    PubMed

    Pardo, Alfonso

    2014-11-01

    This work presents an in situ sediment sampling method on benthic transects, specifically intended for scientific scuba diver teams. It was originally designed and developed to sample benthic surface and subsurface sediments and subaqueous soils in glacial lakes up to a maximum depth of 25 m. Tests were conducted on the Sabocos and Baños tarns (i.e., cirque glacial lakes) in the Spanish Pyrenees. Two 100 m transects, ranging from 24.5 to 0 m of depth in Sabocos and 14 m to 0 m deep in Baños, were conducted. In each test, 10 sediment samples of 1 kg each were successfully collected and transported to the surface. This sampling method proved operative even in low visibility conditions (<2 m). Additional ice diving sampling tests were conducted in Sabocos and Truchas tarns. This sampling methodology can be easily adapted to accomplish underwater sampling campaigns in nonglacial lakes and other continental water or marine environments. PMID:24943883

  11. A new methodology for non-contact accurate crack width measurement through photogrammetry for automated structural safety evaluation

    NASA Astrophysics Data System (ADS)

    Jahanshahi, Mohammad R.; Masri, Sami F.

    2013-03-01

    In mechanical, aerospace and civil structures, cracks are important defects that can cause catastrophes if neglected. Visual inspection is currently the predominant method for crack assessment. This approach is tedious, labor-intensive, subjective and highly qualitative. An inexpensive alternative to current monitoring methods is to use a robotic system that could perform autonomous crack detection and quantification. To reach this goal, several image-based crack detection approaches have been developed; however, the crack thickness quantification, which is an essential element for a reliable structural condition assessment, has not been sufficiently investigated. In this paper, a new contact-less crack quantification methodology, based on computer vision and image processing concepts, is introduced and evaluated against a crack quantification approach which was previously developed by the authors. The proposed approach in this study utilizes depth perception to quantify crack thickness and, as opposed to most previous studies, needs no scale attachment to the region under inspection, which makes this approach ideal for incorporation with autonomous or semi-autonomous mobile inspection systems. Validation tests are performed to evaluate the performance of the proposed approach, and the results show that the new proposed approach outperforms the previously developed one.
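    The depth-perception idea described above can be approximated with a thin-lens pinhole model: a crack n pixels wide at perpendicular distance Z maps to a physical width of Z·n·p/f for pixel pitch p and focal length f. A hedged sketch (parameter names are illustrative; the paper's actual calibration and perspective handling are more involved):

```python
def crack_width_mm(width_px, depth_mm, focal_len_mm, pixel_pitch_mm):
    """Physical crack width from its pixel width, camera depth, and intrinsics.

    Assumes a calibrated pinhole camera viewing the surface perpendicularly;
    no scale object needs to be attached to the inspected region.
    """
    return width_px * pixel_pitch_mm * depth_mm / focal_len_mm

# e.g. a 10-px crack seen from 1 m with a 50 mm lens and 5 µm pixels
w = crack_width_mm(10, 1000.0, 50.0, 0.005)
```

    The depth term is what removes the need for a scale attachment, which is the property that suits autonomous mobile inspection.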

  12. Optimization of the Expression Conditions of CGA-N46 in Bacillus subtilis DB1342(p-3N46) by Response Surface Methodology.

    PubMed

    Li, Rui-Fang; Wang, Bin; Liu, Shuai; Chen, Shi-Hua; Yu, Guang-Hai; Yang, Shuo-Ye; Huang, Liang; Yin, Yan-Li; Lu, Zhi-Fang

    2016-09-01

    CGA-N46 is a small antifungal peptide consisting of the 31st-76th amino acids of the N-terminus of human chromogranin A. Polycistronic expression of recombinant CGA-N46 in Bacillus subtilis DB1342 was used to improve its production, but the yield of CGA-N46 remained low. In the present study, response surface methodology (RSM) was used to optimize the culture medium composition and growth conditions of the engineered strain B. subtilis DB1342(p-3N46) to further increase the CGA-N46 yield. The results of two-level factorial experiments indicated that dextrin and tryptone were significant factors affecting CGA-N46 expression. A central composite design (CCD) was used to determine the ideal level of each significant factor. From the CCD results, the optimal medium composition was predicted to be dextrin 16.6 g/L, tryptone 19.2 g/L, KH2PO4·H2O 6 g/L, pH 6.5, and the optimal culture process was inoculation of B. subtilis DB1342(p-3N46) seed culture into fresh culture medium at 5 % (v/v), followed by expression of CGA-N46 for 56 hours at 30 °C, induced by 2 % (v/v) sucrose added after one hour of shaking culture. To test CGA-N46 expression under the optimal conditions, a yeast growth inhibition assay was employed; under the optimized culture conditions, CGA-N46 inhibited the growth of Candida albicans by 42.17 %, 30.86 % more than under the pre-optimization conditions. In summary, RSM can be used to optimize the expression conditions of CGA-N46 in the engineered strain B. subtilis DB1342(p-3N46). PMID:26341498

  13. Three-dimensional welding residual stresses evaluation based on the eigenstrain methodology via X-ray measurements at the surface

    NASA Astrophysics Data System (ADS)

    Ogawa, Masaru

    2014-12-01

    In order to assure the structural integrity of operating welded structures, it is necessary to evaluate the crack growth rate and crack propagation direction for each observed crack non-destructively. Here, three-dimensional (3D) welding residual stresses must be evaluated to predict crack propagation. Today, X-ray diffraction is used, and the ultrasonic method has been proposed, as non-destructive means of measuring residual stresses; however, neither can determine residual stress distributions in the thickness direction. Although residual stresses through a depth of several tens of millimeters can be evaluated non-destructively by neutron diffraction, it cannot be used as an on-site measurement technique, because neutron diffraction is only available in special irradiation facilities. The author focuses on the bead flush method, which is based on the eigenstrain methodology. In this method, 3D welding residual stresses are calculated by an elastic finite element method (FEM) analysis from eigenstrains, which are in turn evaluated by an inverse analysis from the strains released, as measured by strain gauges, when the weld reinforcement is removed. The removal of this excess metal can be regarded as a non-destructive treatment because the weld toe, a potential crack starter, is eliminated. The effectiveness of the method has been proven for welded plates and pipes, even with relatively low bead height. In actual measurements, stress evaluation accuracy suffers because the strain gauge readings are affected by machining-induced strains on the processed surface. In previous studies, the author developed a variant of the bead flush method that is free from the influence of these strains by using residual strains measured on the surface by X-ray diffraction; its accuracy, however, is limited by the relatively poor measurement accuracy of X-ray diffraction. In this study, a method to improve the estimation accuracy of residual stresses in this approach is proposed.
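    The inverse step of the bead flush method, recovering eigenstrain coefficients from released strains, can be cast as a linear least-squares problem once the elastic FEM analysis supplies the sensitivity matrix. A deliberately simplified sketch (names and the linearity assumption are illustrative, not the paper's full procedure):

```python
import numpy as np

def recover_eigenstrains(A, e):
    """Least-squares inverse analysis.

    A: (n_gauges, n_modes) sensitivity matrix from the elastic FEM analysis,
       mapping eigenstrain mode coefficients to released strains at the gauges.
    e: measured released strains at the gauge locations.
    Returns the eigenstrain coefficients best reproducing the measurements.
    """
    c, *_ = np.linalg.lstsq(A, e, rcond=None)
    return c
```

    The recovered coefficients then feed a forward elastic FEM run to reconstruct the 3D residual stress field; measurement noise in `e` (the X-ray accuracy issue above) propagates directly into the recovered coefficients.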

  14. Towards uniform accelerometry analysis: a standardization methodology to minimize measurement bias due to systematic accelerometer wear-time variation.

    PubMed

    Katapally, Tarun R; Muhajarine, Nazeem

    2014-05-01

    Accelerometers are predominantly used to objectively measure the entire range of activity intensities - sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data being commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data, there could be systematic wear-time variation. To explore this variation, accelerometer data of Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and MVPA as outcome
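    The standardization idea, controlling each participant-day's observed wear-time to an analyst-specified period, can be sketched as a linear rescaling (this assumes activity accrues roughly uniformly over worn hours; the study's interpolation may be more refined):

```python
def standardize(activity_minutes, observed_wear_h, controlled_wear_h=10.0):
    """Interpolate accumulated activity minutes to a controlled wear-time.

    activity_minutes: observed SED, LPA or MVPA minutes for one participant-day.
    observed_wear_h: that day's actual accelerometer wear-time in hours.
    controlled_wear_h: analyst-specified wear-time all days are scaled to.
    """
    return activity_minutes * controlled_wear_h / observed_wear_h

# e.g. 48 MVPA minutes over 12 h of wear become 40 minutes per 10-h day
```

    Scaling every day to the same denominator removes the systematic wear-time variation that otherwise inflates activity estimates on longer-worn days.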

  15. Measurement and modeling the coefficient of restitution of char particles under simulated entrained flow gasifier conditions

    NASA Astrophysics Data System (ADS)

    Gibson, LaTosha M.

    predict the coefficient of restitution (COR), the ratio of the rebound velocity to the impact velocity, which is a necessary boundary condition for discrete phase models. However, particle-wall impact models do not use the actual geometries of char particles or the motion of char particles arising from gasifier operating conditions. This work attempts to include the surface geometry and rotation of the particles. To meet its objectives, the general methodology involved (1) determining the likelihood of a particle becoming entrapped, (2) assessing the limitations of particle-wall impact models for the COR through cold-flow experiments in order to adapt them to the non-ideal conditions (surface and particle geometry) within a gasifier, (3) determining how to account for the influence of the carbon and ash composition on the sticking probability of size fractions and specific gravities within a PSD and within the scope of particle-wall impact models, and (4) using a methodology that quantifies the sticking probability (albeit as a criterion or parameter) to predict the partitioning of a PSD into slag and flyash based on the proximate analysis. In this study, sensitivity analysis ruled out the scenario of a particle becoming entrapped within the slag layer. Cold-flow educator experiments were performed to measure the COR. The results showed a variation in the coefficient of restitution as a function of rebound angle due to rotation of the particles leaving the educator prior to impact. The particles were then simply dropped in "drop" experiments (without the educator) to determine the influence of sphericity on particle rotation and therefore on the coefficient of restitution. The results showed that in addition to surface irregularities, the particle shape and the orientation of the particle prior to impacting the target surface contributed to this variation of the coefficient of restitution as a function of rebound angle.
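    The COR definition used above, together with the surface-normal variant often applied to oblique impacts, in a minimal sketch (function and parameter names are illustrative):

```python
import math

def coefficient_of_restitution(v_rebound, v_impact):
    """COR as defined in the abstract: rebound speed over impact speed."""
    return v_rebound / v_impact

def normal_cor(v_rebound, angle_rebound_deg, v_impact, angle_impact_deg):
    """COR from the surface-normal velocity components of an oblique impact.

    Angles are measured from the target surface, in degrees.
    """
    vn_out = v_rebound * math.sin(math.radians(angle_rebound_deg))
    vn_in = v_impact * math.sin(math.radians(angle_impact_deg))
    return vn_out / vn_in
```

    Particle rotation shifts the rebound angle, which is why the measured COR varied with rebound angle in the educator experiments.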

  16. Comparison of artificial neural network (ANN) and response surface methodology (RSM) in optimization of the immobilization conditions for lipase from Candida rugosa on Amberjet(®) 4200-Cl.

    PubMed

    Fatiha, Benamia; Sameh, Bouchagra; Youcef, Saihi; Zeineddine, Djeghaba; Nacer, Rebbani

    2013-01-01

    Candida rugosa lipase (CRL) is an important industrial enzyme that is successfully utilized in a variety of hydrolysis and esterification reactions. This work describes the optimization of the immobilization conditions (enzyme/support ratio, immobilization temperature, and buffer concentration) of CRL on the anionic resin Amberjet® 4200-Cl, using enantioselectivity (E) as the reference parameter. The model reaction used for this purpose is the acylation of (R,S)-1-phenylethanol. Optimal conditions for immobilization were investigated through response surface methodology (RSM) and an artificial neural network (ANN). The coefficient of determination (R²) and root mean square error (RMSE) between the calculated and estimated responses were, respectively, 0.99 and 0.06 for the ANN training set, 0.97 and 0.2 for the ANN testing set, and 0.94 and 0.4 for the RSM training set. Both models provided good-quality predictions, yet the ANN showed a clear superiority over RSM in both data fitting and estimation capabilities. PMID:23215653
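    The two goodness-of-fit metrics used above to compare the ANN and RSM models can be computed directly; a minimal sketch:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error between observed and model-estimated responses."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

    Higher R² with lower RMSE on held-out data is the criterion by which the ANN outperformed the RSM fit here.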

  17. Process optimization of deposition conditions of PbS thin films grown by a successive ionic layer adsorption and reaction (SILAR) method using response surface methodology

    NASA Astrophysics Data System (ADS)

    Yücel, Ersin; Yücel, Yasin; Beleli, Buse

    2015-07-01

    In this study, lead sulfide (PbS) thin films were synthesized by a successive ionic layer adsorption and reaction (SILAR) method with different pH, dipping time and dipping cycles. Response surface methodology (RSM) and central composite design (CCD) were successfully used to optimize the PbS films deposition parameters and understand the significance and interaction of the factors affecting the film quality. 5-level-3-factor central composite design was employed to evaluate the effects of the deposition parameters (pH, dipping time and dipping cycles) on the response (the optical band gap of the films). Data obtained from RSM were subjected to the analysis of variance (ANOVA) and analyzed using a second order polynomial equation. The optimal conditions for the PbS films deposition have been found to be: pH of 9.1, dipping time of 10 s and dipping cycles of 10 cycles. The predicted band gap of PbS film was 2.13 eV under the optimal conditions. Verification experiment (2.24 eV) confirmed the validity of the predicted model. The film structures were characterized by X-ray diffractometer (XRD). Morphological properties of the films were studied with a scanning electron microscopy (SEM). The optical properties of the films were investigated using a UV-visible spectrophotometer.

  18. Optimization on condition of epigallocatechin-3-gallate (EGCG) nanoliposomes by response surface methodology and cellular uptake studies in Caco-2 cells

    NASA Astrophysics Data System (ADS)

    Luo, Xiaobo; Guan, Rongfa; Chen, Xiaoqiang; Tao, Miao; Ma, Jieqing; Zhao, Jin

    2014-06-01

    The major component of green tea polyphenols, epigallocatechin-3-gallate (EGCG), has been demonstrated to prevent carcinogenesis. To improve the effectiveness of EGCG, liposomes were used as a carrier in this study. The reverse-phase evaporation method, combined with response surface methodology, is a simple, rapid, and effective approach to liposome preparation and optimization. The optimal preparation conditions were as follows: phosphatidylcholine-to-cholesterol ratio of 4.00, EGCG concentration of 4.88 mg/mL, Tween 80 concentration of 1.08 mg/mL, and rotary evaporation temperature of 34.51°C. Under these conditions, the experimental encapsulation efficiency and size of the EGCG nanoliposomes were 85.79% ± 1.65% and 180 nm ± 4 nm, close to the predicted values. The malondialdehyde value and the in vitro release test indicated that the prepared EGCG nanoliposomes were stable and suitable for more widespread application. Furthermore, compared with free EGCG, encapsulation enhanced its inhibitory effect on tumor cell viability at higher concentrations.

  19. Modelling and optimising of physicochemical features of walnut-oil beverage emulsions by implementation of response surface methodology: effect of preparation conditions on emulsion stability.

    PubMed

    Homayoonfal, Mina; Khodaiyan, Faramarz; Mousavi, Mohammad

    2015-05-01

    The major purpose of this study is to apply response surface methodology to model and optimise processing conditions for the preparation of beverage emulsions with maximum emulsion stability and viscosity, and minimum particle size, turbidity loss rate, size index and peroxide value changes. A three-factor, five-level central composite design was conducted to estimate the effects of three independent variables: ultrasonic time (UT, 5-15 min), walnut-oil content (WO, 4-10% (w/w)) and Span 80 content (S80, 0.55-0.8). The results demonstrated that the empirical models were satisfactorily (p < 0.0001) fitted to the experimental data. Evaluation of the responses by analysis of variance indicated high coefficients of determination. The overall optimum preparation conditions were a UT of 14.630 min, WO content of 8.238% (w/w) and S80 content of 0.782% (w/w). Under this optimum region, the responses were found to be 219.198, 99.184, 0.008, 0.008, 2.43 and 16.65 for particle size, emulsion stability, turbidity loss rate, size index, viscosity and peroxide value changes, respectively. PMID:25529732

  20. Dynamic measurement of physical conditions in daily life by body area network sensing system

    NASA Astrophysics Data System (ADS)

    Takayama, S.; Tanaka, T.; Takahashi, N.; Matsuda, Y.; Kariya, K.

    2010-07-01

    This paper presents a measurement system to monitor physical conditions dynamically in daily life. A measurement system for physical conditions in motion must be wearable and wirelessly connected; a body area network sensing system (BANSS) is such a system. The BANSS consists of a host system and multiple sensing nodes. Each sensing node comprises sensors, an analogue/digital converter (ADC), a peripheral interface controller (PIC), memory and a near-field communication device (NFCD). The NFCD used in this system is Zigbee, which is well suited to constructing wireless networks easily. The BANSS not only measures physical parameters: it also reports the user's current physical condition and advises on maintaining a suitable exercise intensity. As an application of the BANSS, a system managing heart rate during walking is shown. Using this system, users can exercise at a constant physical intensity.
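    The heart-rate management loop described above can be sketched as a simple target-zone controller; the zone limits, sample stream and advice strings below are illustrative assumptions, not details from the BANSS paper:

```python
# Minimal sketch of a heart-rate zone advisor for walking exercise.
def advise(heart_rate, target_low=100, target_high=130):
    """Return pacing advice that keeps heart rate inside the target zone."""
    if heart_rate < target_low:
        return "speed up"
    if heart_rate > target_high:
        return "slow down"
    return "keep pace"

# Synthetic beats-per-minute samples, standing in for a sensing-node stream.
samples = [92, 104, 118, 136, 128]
advice = [advise(hr) for hr in samples]
print(advice)  # → ['speed up', 'keep pace', 'keep pace', 'slow down', 'keep pace']
```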

  1. Multi-harmonic measurements of line shape under low absorption conditions

    NASA Astrophysics Data System (ADS)

    Lan, L. J.; Ding, Y. J.; Peng, Z. M.; Du, Y. J.; Liu, Y. F.; Li, Z.

    2014-06-01

    We propose a method that employs the ratio of the 2nd and 4th harmonics at the line center to measure line shape under low absorption conditions. To verify this method, the transition of CO2 at 6,982.0678 cm-1 is selected, and its line shape is measured using both the proposed method and direct absorption spectroscopy under laboratory conditions. The results from the two methods are highly consistent; this satisfactory agreement indicates the validity of the proposed method.

  2. Estimating Conditional Standard Errors of Measurement for Tests Composed of Testlets.

    ERIC Educational Resources Information Center

    Lee, Guemin

    The primary purpose of this study was to investigate the appropriateness and implication of incorporating a testlet definition into the estimation of the conditional standard error of measurement (SEM) for tests composed of testlets. The five conditional SEM estimation methods used in this study were classified into two categories: item-based and…

  3. Experiential self-referential and selfless processing in mindfulness and mental health: Conceptual model and implicit measurement methodology.

    PubMed

    Hadash, Yuval; Plonsker, Reut; Vago, David R; Bernstein, Amit

    2016-07-01

    We propose that Experiential Self-Referential Processing (ESRP)-the cognitive association of present moment subjective experience (e.g., sensations, emotions, thoughts) with the self-underlies various forms of maladaptation. We theorize that mindfulness contributes to mental health by engendering Experiential Selfless Processing (ESLP)-processing present moment subjective experience without self-referentiality. To help advance understanding of these processes we aimed to develop an implicit, behavioral measure of ESRP and ESLP of fear, to experimentally validate this measure, and to test the relations between ESRP and ESLP of fear, mindfulness, and key psychobehavioral processes underlying (mal)adaptation. One hundred thirty-eight adults were randomized to 1 of 3 conditions: control, meta-awareness with identification, or meta-awareness with disidentification. We then measured ESRP and ESLP of fear by experimentally eliciting a subjective experience of fear, while concurrently measuring each participant's cognitive association between him/herself and fear by means of a Single Category Implicit Association Test; we refer to this measurement as the Single Experience & Self Implicit Association Test (SES-IAT). We found preliminary experimental and correlational evidence suggesting the fear SES-IAT measures ESLP of fear and 2 forms of ESRP: identification with fear and negative self-referential evaluation of fear. Furthermore, we found evidence that ESRP and ESLP are associated with meta-awareness (a core process of mindfulness), as well as key psychobehavioral processes underlying (mal)adaptation. These findings indicate that the cognitive association of self with experience (i.e., ESRP) may be an important substrate of the sense of self, and an important determinant of mental health. (PsycINFO Database Record) PMID:27078181

  4. Mass spectrometry for the measurement of intramyocardial gas tensions: methodology and application to the study of myocardial ischemia.

    PubMed

    Khuri, S F; O'Riordan, J; Flaherty, J T; Brawley, R K; Donahoo, J S; Gott, V L

    1975-01-01

    The methodology for use of the mass spectrometer for the measurement of intramyocardial gas tensions in the canine preparation is described. Baseline studies were carried out initially in 36 animals, and control levels for myocardial oxygen tension and myocardial carbon dioxide tension were 19 mm Hg (S.D. 6 mm Hg) and 43 mm Hg (S.D. 10 mm Hg), respectively. Myocardial oxygen tension was not altered significantly by varying the arterial oxygen tension between 65 and 300 mm Hg. However, myocardial carbon dioxide tension increased linearly with increased arterial carbon dioxide tension. In 15 dogs placed on total cardiopulmonary bypass, a perfusion pressure 40-60 mm Hg lower than the control mean arterial pressure resulted in myocardial ischemia with a decrease in myocardial oxygen tension and an increase in myocardial carbon dioxide tension. A subsequent increase in perfusion pressure to control levels resulted in resolution of ischemia and return of myocardial oxygen and carbon dioxide tensions to their control levels. In another series of open-chest dogs on cardiopulmonary bypass, a proximal constriction applied to the left coronary circumflex artery resulted in a marked decrease in myocardial oxygen tensions and a marked increase in myocardial carbon dioxide tensions in the region supplied by the constricted vessel. In yet another series of open-chest dogs, it was found that incremental decreases in coronary flow established by constriction of the circumflex artery resulted in an exponential increase in both myocardial carbon dioxide tensions and ST-segment elevation as determined by a 25-gauge multi-contact plunge electrode placed in the posterior left ventricular wall. It appears that mass spectrometry techniques for evaluating myocardial ischemia have several advantages over myocardial biopsy techniques for assay of ATP and lactate, and also over the technique of coronary sinus lactate determination. PMID:1209001

  5. Measuring fish body condition with or without parasites: does it matter?

    PubMed

    Lagrue, C; Poulin, R

    2015-10-01

    A fish body condition index was calculated twice for each individual fish, including or excluding parasite mass from fish body mass, and index values were compared to test the effects of parasite mass on measurement of body condition. Potential correlations between parasite load and the two alternative fish condition index values were tested to assess how parasite mass may influence the perception of the actual effects of parasitism on fish body condition. Helminth parasite mass was estimated in common bully Gobiomorphus cotidianus from four New Zealand lakes and used to assess the biasing effects of parasite mass on body condition indices. Results showed that the inclusion or exclusion of parasite mass from fish body mass in index calculations significantly influenced correlation patterns between parasite load and fish body condition indices. When parasite mass was included, there was a positive correlation between parasite load and fish body condition, seemingly indicating that fish in better condition supported higher parasite loads. When parasite mass was excluded, there was no correlation between parasite load and fish body condition, i.e. there was no detectable effect of helminth parasites on fish condition or fish condition on parasite load. Fish body condition tended to be overestimated when parasite mass was not accounted for; results showed a positive correlation between relative parasite mass and the degree to which individual fish condition was overestimated. Regardless of the actual effects of helminth parasites on fish condition, parasite mass contained within a fish should be taken into account when estimating fish condition. Parasite tissues are not host tissues and should not be included in fish mass when calculating a body condition index, especially when looking at potential effects of helminth infections on fish condition. PMID:26283054
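    The with-versus-without-parasite comparison above amounts to computing the same index twice from different mass values. As a sketch, the widely used Fulton's condition factor K = 100·W/L³ is assumed here as the index; the study's actual index and the fish measurements below are illustrative:

```python
# Body condition index computed with and without parasite mass.
def fulton_k(mass_g, length_cm):
    """Fulton's condition factor: K = 100 * W / L^3."""
    return 100.0 * mass_g / length_cm ** 3

# Hypothetical common bully measurements.
fish = {"total_mass_g": 4.2, "parasite_mass_g": 0.3, "length_cm": 6.0}

k_with = fulton_k(fish["total_mass_g"], fish["length_cm"])
k_without = fulton_k(fish["total_mass_g"] - fish["parasite_mass_g"],
                     fish["length_cm"])

# Including parasite tissue inflates the apparent condition of the host.
print(round(k_with, 3), round(k_without, 3))  # → 1.944 1.806
```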

  6. Multi-Population Invariance with Dichotomous Measures: Combining Multi-Group and MIMIC Methodologies in Evaluating the General Aptitude Test in the Arabic Language

    ERIC Educational Resources Information Center

    Sideridis, Georgios D.; Tsaousis, Ioannis; Al-harbi, Khaleel A.

    2015-01-01

    The purpose of the present study was to extend the model of measurement invariance by simultaneously estimating invariance across multiple populations in the dichotomous instrument case using multi-group confirmatory factor analytic and multiple indicator multiple causes (MIMIC) methodologies. Using the Arabic version of the General Aptitude Test…

  7. Measuring Value Added in Higher Education: A Proposed Methodology for Developing a Performance Indicator Based on the Economic Value Added to Graduates

    ERIC Educational Resources Information Center

    Rodgers, Timothy

    2007-01-01

    The 2003 UK higher education White Paper suggested that the sector needed to re-examine the potential of the value added concept. This paper describes a possible methodology for developing a performance indicator based on the economic value added to graduates. The paper examines how an entry-quality-adjusted measure of a graduate's "expected"…

  8. Objective measures for predicting speech intelligibility in noisy conditions based on new band-importance functions

    PubMed Central

    Ma, Jianfen; Hu, Yi; Loizou, Philipos C.

    2009-01-01

    The articulation index (AI), speech-transmission index (STI), and coherence-based intelligibility metrics have been evaluated primarily in steady-state noisy conditions and have not been tested extensively in fluctuating noise conditions. The aim of the present work is to evaluate the performance of new speech-based STI measures, modified coherence-based measures, and AI-based measures operating on short-term (30 ms) intervals in realistic noisy conditions. Much emphasis is placed on the design of new band-importance weighting functions which can be used in situations wherein speech is corrupted by fluctuating maskers. The proposed measures were evaluated with intelligibility scores obtained by normal-hearing listeners in 72 noisy conditions involving noise-suppressed speech (consonants and sentences) corrupted by four different maskers (car, babble, train, and street interferences). Of all the measures considered, the modified coherence-based measures and speech-based STI measures incorporating signal-specific band-importance functions yielded the highest correlations (r=0.89–0.94). The modified coherence measure, in particular, that only included vowel/consonant transitions and weak consonant information yielded the highest correlation (r=0.94) with sentence recognition scores. The results from this study clearly suggest that the traditional AI and STI indices could benefit from the use of the proposed signal- and segment-dependent band-importance functions. PMID:19425678

  9. Optimization of hydrolysis conditions for the production of glucomanno-oligosaccharides from konjac using β-mannanase by response surface methodology.

    PubMed

    Chen, Junfan; Liu, Desheng; Shi, Bo; Wang, Hai; Cheng, Yongqiang; Zhang, Wenjing

    2013-03-01

    Glucomanno-oligosaccharides (GMO), usually produced from hydrolysis of konjac tubers with a high content of glucomannan, have a positive effect on Bifidobacterium as well as a variety of other physiological activities. Response surface methodology (RSM) was employed to optimize the hydrolysis time, hydrolysis temperature, pH and enzyme-to-substrate ratio (E/S) to obtain a high GMO yield from konjac tubers. From the single-factor experiments, it was concluded that the change in direct reducing sugar (DRS) was consistent with that in total reducing sugar (TRS) but opposite to that in the degree of polymerization (DP). DRS was therefore used as the indicator of GMO content in the RSM study. The optimum RSM operating conditions were: reaction time of 3.4 h, reaction temperature of 41.0°C, pH of 7.1 and E/S of 0.49. The results suggested that the enzymatic hydrolysis was enhanced by temperature, pH and incubation time. Model validation showed good agreement between experimental results and the predicted responses. PMID:23465904

  10. Optimal Conditions for Continuous Immobilization of Pseudozyma hubeiensis (Strain HB85A) Lipase by Adsorption in a Packed-Bed Reactor by Response Surface Methodology.

    PubMed

    Bussamara, Roberta; Dall'agnol, Luciane; Schrank, Augusto; Fernandes, Kátia Flávia; Vainstein, Marilene Henning

    2012-01-01

    This study aimed to develop an optimal continuous process for lipase immobilization in a bed reactor in order to investigate the possibility of large-scale production. An extracellular lipase of Pseudozyma hubeiensis (strain HB85A) was immobilized by adsorption onto a polystyrene-divinylbenzene support. Furthermore, response surface methodology (RSM) was employed to optimize enzyme immobilization and evaluate the optimum temperature and pH for free and immobilized enzyme. The optimal immobilization conditions observed were 150 min incubation time, pH 4.76, and an enzyme/support ratio of 1282 U/g support. Optimal activity temperature for free and immobilized enzyme was found to be 68°C and 52°C, respectively. Optimal activity pH for free and immobilized lipase was pH 4.6 and 6.0, respectively. Lipase immobilization resulted in improved enzyme stability in the presence of nonionic detergents, at high temperatures, at acidic and neutral pH, and at high concentrations of organic solvents such as 2-propanol, methanol, and acetone. PMID:22315670

  11. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization procedure comprises drill-core data collection, karst cave stochastic model generation, SLIDE simulation and bisection-method optimization. Borehole investigations were performed, and the statistical results show that the length of the karst caves fits a negative exponential distribution, whereas the length of the carbonatite does not follow any standard distribution. The inverse transform method and the acceptance-rejection method are used to reproduce the lengths of the karst caves and the carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope containing the karst cave stochastic model is analyzed by combining the KCSMG code with the SLIDE program. This approach is then applied to study the effect of the karst caves on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
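    The two sampling techniques named above can be sketched briefly. Inverse transform sampling applies when the CDF is invertible (for the negative exponential, F⁻¹(u) = −mean·ln(1−u)); acceptance-rejection handles the non-standard carbonatite distribution. The mean length and the example density below are illustrative, not the mine's fitted parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Inverse transform sampling for karst cave lengths, which were found to
# follow a negative exponential distribution.
def sample_cave_lengths(mean_length, n):
    u = rng.random(n)
    return -mean_length * np.log(1.0 - u)

# Acceptance-rejection sampling for a non-standard density f on [a, b],
# bounded above by f_max (here a toy triangular density f(x) = 2x on [0, 1]).
def sample_acceptance_rejection(f, a, b, f_max, n):
    out = []
    while len(out) < n:
        x = rng.uniform(a, b)
        if rng.uniform(0.0, f_max) <= f(x):
            out.append(x)
    return np.array(out)

caves = sample_cave_lengths(mean_length=2.5, n=10_000)
carb = sample_acceptance_rejection(lambda x: 2.0 * x, 0.0, 1.0, 2.0, 1_000)
print(round(caves.mean(), 2))  # close to the assumed 2.5 m mean
```

    In the paper's workflow, samples like these populate the stochastic karst cave model that is then passed to the limit-equilibrium slope analysis.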

  12. Optimal Conditions for Continuous Immobilization of Pseudozyma hubeiensis (Strain HB85A) Lipase by Adsorption in a Packed-Bed Reactor by Response Surface Methodology

    PubMed Central

    Bussamara, Roberta; Dall'Agnol, Luciane; Schrank, Augusto; Fernandes, Kátia Flávia; Vainstein, Marilene Henning

    2012-01-01

    This study aimed to develop an optimal continuous process for lipase immobilization in a bed reactor in order to investigate the possibility of large-scale production. An extracellular lipase of Pseudozyma hubeiensis (strain HB85A) was immobilized by adsorption onto a polystyrene-divinylbenzene support. Furthermore, response surface methodology (RSM) was employed to optimize enzyme immobilization and evaluate the optimum temperature and pH for free and immobilized enzyme. The optimal immobilization conditions observed were 150 min incubation time, pH 4.76, and an enzyme/support ratio of 1282 U/g support. Optimal activity temperature for free and immobilized enzyme was found to be 68°C and 52°C, respectively. Optimal activity pH for free and immobilized lipase was pH 4.6 and 6.0, respectively. Lipase immobilization resulted in improved enzyme stability in the presence of nonionic detergents, at high temperatures, at acidic and neutral pH, and at high concentrations of organic solvents such as 2-propanol, methanol, and acetone. PMID:22315670

  13. Effects of acetic acid injection and operating conditions on NO emission in a vortexing fluidized bed combustor using response surface methodology

    SciTech Connect

    Fuping Qian; Chiensong Chyang; Weishen Yen

    2009-07-15

    The effects of acetic acid injection and operating conditions on NO emission were investigated in a pilot-scale vortexing fluidized bed combustor (VFBC) integrating a circular freeboard with a rectangular combustion chamber. Operating conditions, such as the stoichiometric oxygen in the combustion chamber, the bed temperature and the injection location of the acetic acid, were examined by means of response surface methodology (RSM), which enables the study of these parameters with a moderate number of experiments. In the RSM, the NO emission concentration after acetic acid injection and the NO removal percentage at the exit of the VFBC are used as the objective functions. The results show that the bed temperature has a more important effect on NO emission than the injection location of the acetic acid and the stoichiometric oxygen in the combustion chamber. Meanwhile, the injection location and the stoichiometric oxygen have a more important effect on the NO removal percentage than the bed temperature. NO emission can be decreased by injecting the acetic acid into the combustion chamber, and it decreases with the height of the injection location above the distributor. Conversely, the NO removal percentage increases with the height of the injection location, while NO emission increases with the stoichiometric oxygen in the combustion chamber and the bed temperature. The NO removal percentage increases with the stoichiometric oxygen, and first increases, then decreases with the bed temperature; a higher NO removal percentage could be obtained at 850 °C. 26 refs., 12 figs., 8 tabs.

  14. Sensor and actuator conditioning for multiscale measurement systems on example of confocal microscopy

    NASA Astrophysics Data System (ADS)

    Lyda, W.; Zimmermann, J.; Burla, A.; Regin, J.; Osten, W.; Sawodny, O.; Westkämper, E.

    2009-06-01

    Multi-scale measurement systems utilise multiple sensors which differ in resolution and measurement field to pursue an active exploration strategy. The different sensor scales are linked by indicator algorithms for further measurement initiation. A major advantage of this strategy is a reduction of the conflict between resolution, time and field. This reduction is achieved by task-specific conditioning of sensors, indicator algorithms and actuators using suitable uncertainty models. This contribution focuses on uncertainty models of sensors and actuators, using the example of a prototype multi-scale measurement system. The influence of the sensor parameters, object characteristics and measurement conditions on the measurement reliability is investigated, by way of example, for the middle-scale sensor, a confocal microscope.

  15. Financial Measures Project. New Developments in Measuring Financial Conditions of Colleges and Universities: Papers Presented at a Working Conference.

    ERIC Educational Resources Information Center

    New York Community Trust, NY.

    Papers presented at a working conference on new developments in measuring financial conditions of colleges and universities included the following: "Using Financial Indicators for Public Policy Purposes," by George W. Bonham; "Conceptual Advances in Specifying Financial Indicators: Cash Flows in the Short and Long Run," by Hans H. Jenny; "Another…

  16. Photovoltaic module energy rating methodology development

    SciTech Connect

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L.; Whitaker, C.; Newmiller, J.

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.

  17. PLANE-INTEGRATED OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROMETRY METHODOLOGY FOR ANAEROBIC SWINE LAGOON EMISSION MEASUREMENTS

    EPA Science Inventory

    Emissions of ammonia and methane from an anaerobic lagoon at a swine animal feeding operation were evaluated five times over a period of two years. The plane-integrated (PI) open-path Fourier transform infrared spectrometry (OP-FTIR) methodology was used to transect the plume at ...

  18. Evaluating of scale-up methodologies of gas-solid spouted beds for coating TRISO nuclear fuel particles using advanced measurement techniques

    NASA Astrophysics Data System (ADS)

    Ali, Neven Y.

    This work implements, for the first time, advanced non-invasive measurement techniques to evaluate two scale-up methodologies for gas-solid spouted beds with respect to hydrodynamic similarity: the methodology reported in the literature, based on matching dimensionless groups, and a new mechanistic methodology developed in our laboratory, based on matching the radial profile of gas holdup, since the gas dynamics dictate the hydrodynamics of gas-solid spouted beds. The techniques are gamma-ray computed tomography (CT), to measure the cross-sectional distribution of the phase holdups and their radial profiles along the bed height, and radioactive particle tracking (RPT), to measure the three-dimensional (3D) solids velocity and its turbulent parameters. The measured local parameters and the analysis of the results obtained in this work validate the new scale-up methodology: the phase holdups and the dimensionless solids velocities and turbulent parameters, non-dimensionalized using the minimum spouting superficial gas velocity, are similar across scales. The scale-up methodology based on matching dimensionless groups, by contrast, has not been validated for hydrodynamic similarity with respect to such local parameters; in the literature it has been validated only by measuring global parameters. This work therefore confirms that validation of scale-up methods for gas-solid spouted beds should rest on measuring and analyzing the local hydrodynamic parameters.

  19. Methodology for Using 3-Dimensional Sonography to Measure Fetal Adrenal Gland Volumes in Pregnant Women With and Without Early Life Stress.

    PubMed

    Kim, Deborah; Epperson, C Neill; Ewing, Grace; Appleby, Dina; Sammel, Mary D; Wang, Eileen

    2016-09-01

    Fetal adrenal gland volumes on 3-dimensional sonography have been studied as potential predictors of preterm birth. However, no consistent methodology has been published. This article describes the methodology used in a study that is evaluating the effects of maternal early life stress on fetal adrenal growth to allow other researchers to compare methodologies across studies. Fetal volumetric data were obtained in 36 women at 20 to 22 and 28 to 30 weeks' gestation. Two independent examiners measured multiple images of a single fetal adrenal gland from each sonogram. Intra- and inter-rater consistency was examined. In addition, fetal adrenal volumes between male and female fetuses were reported. The intra- and inter-rater reliability was satisfactory when the mean of 3 measurements from each rater was used. At 20 weeks' gestation, male fetuses had larger average adjusted adrenal volumes than female fetuses (mean, 0.897 versus 0.638; P = .004). At 28 weeks' gestation, the fetal weight was more influential in determining values for adjusted fetal adrenal volume (0.672 for male fetuses versus 0.526 for female fetuses; P = .034). This article presents a methodology for assessing fetal adrenal volume using 3-dimensional sonography that can be used by other researchers to provide more consistency across studies. PMID:27562975

  20. Improvement of the accuracy of continuous hematocrit measurement under various blood flow conditions

    NASA Astrophysics Data System (ADS)

    Kim, Myounggon; Yang, Sung

    2014-04-01

    We propose an accurate method for continuous hematocrit (HCT) measurement of flowing blood under varying plasma conditions of electrical conductivity, osmolality, and flow rate. Two parameters, namely the hematocrit estimation parameter (HEP) and normalized difference, are proposed to reduce the HCT measurement error. HEP was demonstrated in a previous work. The results of multiple linear regression analysis showed that the two parameters were strongly correlated with the reference HCT measured by microcentrifugation. The measurement error was less than 9% despite significant simultaneous variations in the plasma properties and shear rate.
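    The multiple linear regression step described above (relating the two proposed parameters to the reference HCT) can be sketched with ordinary least squares; the predictor ranges, coefficients and noise level below are synthetic stand-ins, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40

# Synthetic stand-ins for the hematocrit estimation parameter (HEP) and the
# normalized difference, plus a reference HCT generated from a known model.
hep = rng.uniform(0.5, 1.5, n)
norm_diff = rng.uniform(-0.2, 0.2, n)
hct_ref = 12.0 + 20.0 * hep + 15.0 * norm_diff + rng.normal(0.0, 0.5, n)

# Fit HCT = b0 + b1*HEP + b2*norm_diff by ordinary least squares.
A = np.column_stack([np.ones(n), hep, norm_diff])
coef, *_ = np.linalg.lstsq(A, hct_ref, rcond=None)

pred = A @ coef
rel_err = np.abs(pred - hct_ref) / hct_ref  # per-sample relative error
print(np.round(coef, 1), round(float(rel_err.max()), 3))
```

    With reasonable noise the fit recovers the generating coefficients, analogous to the paper's sub-9% error against the microcentrifugation reference.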

  1. Methodological Adaptations for Reliable Measurement of Radium and Radon Isotopes in Hydrothermal Fluids of Extreme Chemical Diversity in Yellowstone National Park, Wyoming, USA

    NASA Astrophysics Data System (ADS)

    Role, A.; Sims, K. W. W.; Scott, S. R.; Lane-Smith, D. R.

    2015-12-01

    To quantitatively model fluid residence times, water-rock-gas interactions, and fluid flow rates in the Yellowstone (YS) hydrothermal system, we are measuring short-lived isotopes of Ra (228Ra, 226Ra, 224Ra, 223Ra) and Rn (222Rn and 220Rn) in hydrothermal fluids and gases. While these isotopes have been used successfully in investigations of water residence times, mixing, and groundwater discharge in oceanic, coastal and estuarine environments, the well-established techniques for measuring Ra and Rn isotopes were developed for seawater and dilute groundwaters, which have near-neutral pH, moderate temperatures, and a limited range of chemical composition. Unfortunately, these techniques, as originally developed, are not suitable for the extreme range of compositions found in YS waters, which have pH ranging from <1 to 10, Eh from -0.208 to 0.700 V, water temperatures from ambient to 93 °C, and high dissolved CO2 concentrations. Here we report on our refinements of these Ra and Rn methods for the extreme conditions found in YS. Our methodologies now enable us to quantitatively isolate Ra from fluids covering a large range of chemical compositions and to conduct in-situ Rn isotope measurements that accommodate variable temperatures and high CO2 (Lane-Smith and Sims, 2013, Acta Geophys. 61). These Ra and Rn measurements are now allowing us to apply simple models to quantify hot-spring water residence times and aquifer-to-surface travel times. (224Ra/223Ra) calculations yield estimated travel times from the water-rock reaction zone to the discharge zone of 4 to 14 days for Yellowstone hot springs, and (224Ra/228Ra) yields shallow-aquifer-to-surface travel times of 2 to 7 days. Further development of more sophisticated models that take into account water-rock-gas reactions and water mixing (shallow groundwater, surface run-off, etc.) will allow us to estimate the timescales of these processes more accurately and thus provide a heretofore-unknown time component to the YS hydrothermal system.
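    A short-lived Ra activity-ratio pair works as a clock because the two isotopes decay at different rates: AR(t) = AR₀·exp(−(λ₂₂₄ − λ₂₂₃)·t), so t = ln(AR₀/AR)/(λ₂₂₄ − λ₂₂₃). A minimal sketch of this standard calculation follows; the initial and measured ratios are illustrative, not the study's values:

```python
import math

# Half-lives (days) of the short-lived radium isotopes used as a clock.
T_HALF_RA224 = 3.66
T_HALF_RA223 = 11.43

def apparent_age_days(ar_initial, ar_measured):
    """Elapsed time inferred from the decay of a 224Ra/223Ra activity ratio.

    AR(t) = AR0 * exp(-(lam224 - lam223) * t)  =>
    t = ln(AR0 / AR) / (lam224 - lam223)
    """
    lam224 = math.log(2) / T_HALF_RA224
    lam223 = math.log(2) / T_HALF_RA223
    return math.log(ar_initial / ar_measured) / (lam224 - lam223)

# Illustrative values: a ratio set in the water-rock reaction zone and a
# lower ratio measured at the spring discharge.
t = apparent_age_days(ar_initial=2.0, ar_measured=1.0)
print(round(t, 1))  # apparent age in days
```

    Because 224Ra (3.66 d half-life) decays faster than 223Ra (11.43 d), the ratio falls with time, which is what makes the 4-14 day travel-time estimates quoted above accessible.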

  2. Reconstruction of photon number conditioned states using phase randomized homodyne measurements

    NASA Astrophysics Data System (ADS)

    Chrzanowski, H. M.; Assad, S. M.; Bernu, J.; Hage, B.; Lund, A. P.; Ralph, T. C.; Lam, P. K.; Symul, T.

    2013-05-01

    We experimentally demonstrate the reconstruction of a photon number conditioned state without using a photon number discriminating detector. By using only phase randomized homodyne measurements, we reconstruct up to the three photon subtracted squeezed vacuum state. The reconstructed Wigner functions of these states show regions of pronounced negativity, signifying the non-classical nature of the reconstructed states. The techniques presented allow for complete characterization of the role of a conditional measurement on an ensemble of states, and might prove useful in systems where photon counting still proves technically challenging.

  3. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    NASA Astrophysics Data System (ADS)

    Egert, Amanda; Klotz, James; McLeod, Kyle; Harmon, David

    2014-10-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P < 0.01) between hours and thirds (8-h periods) of the day. There were no differences between days for most variables. The variance of the second 8-h period of the day was less than (P < 0.01) the first for area and less than the third for amplitude, frequency, duration, and area (P < 0.05). These data demonstrated that the second 8-h period of the day was the least variable for many measures of motility and would provide the best opportunity for testing differences in motility due to treatments. In Exp. 2, the steers (n = 8) were pair-fed the basal diet of Exp. 1 and dosed with endophyte-free (E-) or endophyte-infected (E+; 0 or 10 μg ergovaline + ergovalinine / kg BW; respectively) tall fescue seed before feeding for 15 d. Rumen motility was measured for 8 h beginning 8 h after feeding for the first 14 d of seed dosing. Blood samples were taken on d 1, 7, and 15, and rumen content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8 hour period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of rumen

  4. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    PubMed Central

    Egert, Amanda M.; Klotz, James L.; McLeod, Kyle R.; Harmon, David L.

    2014-01-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P < 0.01) between hours and thirds (8-h periods) of the day. There were no differences between days for most variables. The variance of the second 8-h period of the day was less than (P < 0.01) the first for area and less than the third for amplitude, frequency, duration, and area (P < 0.05). These data demonstrated that the second 8-h period of the day was the least variable for many measures of motility and would provide the best opportunity for testing differences in motility due to treatments. In Experiment 2, the steers (n = 8) were pair-fed the basal diet of Experiment 1 and dosed with endophyte-free (E−) or endophyte-infected (E+; 0 or 10 μg ergovaline + ergovalinine/kg BW; respectively) tall fescue seed before feeding for 15 d. Rumen motility was measured for 8 h beginning 8 h after feeding for the first 14 d of seed dosing. Blood samples were taken on d 1, 7, and 15, and rumen content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8 h period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of
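    The period-comparison step described above (finding which 8-h third of the day is least variable) can be sketched as follows; the hourly contraction counts below are hypothetical, not data from the study:

    ```python
    from statistics import variance

    def period_variances(hourly_values):
        """Split 24 hourly motility measurements (e.g., contraction counts per hour)
        into three 8-h periods and return the sample variance of each period."""
        if len(hourly_values) != 24:
            raise ValueError("expects one value per hour of the day")
        thirds = [hourly_values[i:i + 8] for i in range(0, 24, 8)]
        return [variance(third) for third in thirds]

    # Hypothetical hourly contraction counts for one steer-day; the least
    # variable period would be preferred for testing treatment differences.
    demo = [62, 58, 71, 49, 55, 66, 60, 52,   # 0-8 h after feeding
            59, 60, 61, 58, 60, 59, 61, 60,   # 8-16 h: least variable here
            55, 63, 58, 66, 54, 62, 57, 64]   # 16-24 h
    v1, v2, v3 = period_variances(demo)
    ```

    With real data, the three variances would be compared per variable (amplitude, frequency, duration, area) across steers and days, as the abstract describes.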

  5. Optimization of Microwave-Assisted Extraction Conditions for Five Major Bioactive Compounds from Flos Sophorae Immaturus (Cultivars of Sophora japonica L.) Using Response Surface Methodology.

    PubMed

    Liu, Jin-Liang; Li, Long-Yun; He, Guang-Hua

    2016-01-01

    Microwave-assisted extraction was applied to extract rutin, quercetin, genistein, kaempferol, and isorhamnetin from Flos Sophorae Immaturus. Six independent variables, namely solvent type, particle size, extraction frequency, liquid-to-solid ratio, microwave power, and extraction time, were examined. Response surface methodology using a central composite design was employed to optimize experimental conditions (liquid-to-solid ratio, microwave power, and extraction time) based on the results of single-factor tests to extract the five major components in Flos Sophorae Immaturus. Experimental data were fitted to a second-order polynomial equation using multiple regression analysis. Data were also analyzed using appropriate statistical methods. Optimal extraction conditions were as follows: extraction solvent, 100% methanol; particle size, 100 mesh; extraction frequency, 1; liquid-to-solid ratio, 50:1; microwave power, 287 W; and extraction time, 80 s. A rapid and sensitive ultra-high performance liquid chromatography method coupled with electrospray ionization quadrupole time-of-flight tandem mass spectrometry (ESI-Q-TOF MS/MS) was developed and validated for the simultaneous determination of rutin, quercetin, genistein, kaempferol, and isorhamnetin in Flos Sophorae Immaturus. Chromatographic separation was accomplished on a Kinetex C18 column (100 mm × 2.1 mm, 2.6 μm) at 40 °C within 5 min. The mobile phase consisted of 0.1% aqueous formic acid and acetonitrile (71:29, v/v). Isocratic elution was carried out at a flow rate of 0.35 mL/min. The constituents of Flos Sophorae Immaturus were simultaneously identified by ESI-Q-TOF MS/MS in multiple reaction monitoring mode. During quantitative analysis, all of the calibration curves showed good linear relationships (R² > 0.999) within the tested ranges, and mean recoveries ranged from 96.0216% to 101.0601%. The precision determined through intra- and inter-day studies showed an RSD of <2.833%. These results
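    The second-order polynomial fit at the core of response surface methodology can be sketched for a single factor; a full central composite design fits linear, squared, and cross-product terms for all three factors, but the one-factor case shows the idea. All data and coefficient values below are hypothetical, not taken from the study:

    ```python
    def fit_quadratic(xs, ys):
        """Least-squares fit of y = b0 + b1*x + b2*x^2 by solving the
        3x3 normal equations with Cramer's rule (no external libraries)."""
        n = len(xs)
        s = lambda p: sum(x ** p for x in xs)            # power sums of x
        sy = lambda p: sum((x ** p) * y for x, y in zip(xs, ys))
        A = [[n,    s(1), s(2)],
             [s(1), s(2), s(3)],
             [s(2), s(3), s(4)]]
        b = [sy(0), sy(1), sy(2)]
        def det3(m):
            return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                  - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                  + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
        D = det3(A)
        coeffs = []
        for j in range(3):                # Cramer's rule, column by column
            Aj = [row[:] for row in A]
            for i in range(3):
                Aj[i][j] = b[i]
            coeffs.append(det3(Aj) / D)
        return coeffs                     # [b0, b1, b2]

    # Hypothetical single-factor data: extraction yield (mg/g) vs microwave power (W)
    powers = [100, 150, 200, 250, 300, 350, 400]
    yields = [8.1, 9.6, 10.8, 11.4, 11.5, 11.1, 10.2]
    b0, b1, b2 = fit_quadratic(powers, yields)
    optimum_power = -b1 / (2 * b2)        # stationary point of the fitted parabola
    ```

    With a concave fit (b2 < 0), the stationary point is the predicted optimum, which is how an RSM analysis arrives at settings such as the 287 W reported above.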

  6. Effect of Complex Working Conditions on Nurses Who Exert Coercive Measures in Forensic Psychiatric Care.

    PubMed

    Gustafsson, Niclas; Salzmann-Erikson, Martin

    2016-09-01

    Nurses who exert coercive measures on patients within psychiatric care are emotionally affected. However, research on their working conditions and environment is limited. The purpose of the current study was to describe nurses' experiences and thoughts concerning the exertion of coercive measures in forensic psychiatric care. The investigation was a qualitative interview study using unstructured interviews; data were analyzed with inductive content analysis. Results described participants' thoughts and experiences of coercive measures from four main categories: (a) acting against the patients' will, (b) reasoning about ethical justifications, (c) feelings of compassion, and (d) the need for debriefing. The current study illuminates the working conditions of nurses who exert coercive measures in clinical practice with patients who have a long-term relationship with severe symptomatology. The findings are important to further discuss how nurses and leaders can promote a healthier working environment. [Journal of Psychosocial Nursing and Mental Health Services, 54(9), 37-43.]. PMID:27576227

  7. Measuring environmental impact by real time laser differential displacement technique in simulated climate conditions

    NASA Astrophysics Data System (ADS)

    Tornari, Vivi; Bernikola, Eirini; Tsigarida, Nota; Hatzigiannakis, Kostas; Andrianakis, Michalis; Leissner, Johanna

    2015-06-01

    Environmental impact on artworks has always been a major issue for the preservation of cultural heritage. With climate change, a slow but steady temperature increase affects relative humidity, which fluctuates while materials attempt to maintain moisture balance. During repeated equilibration cycles, fatigue accumulates, endangering structural integrity prior to fracture. Assessing the risk imposed by these fluctuations allows preventive action to be taken, avoiding interventive restoration after fracture occurs. A methodology is presented employing full-field interferometry with surface-probing illumination, based on direct real-time recording of surface images from delicate hygroscopic surfaces as they deform in dimensional response to relative humidity (RH) changes. The methodology aims to provide an early-stage risk-indicator tool enabling preventive measures directly from surface readings. The study, which aims to experimentally highlight acclimatisation structural phenomena and to verify assumed standards of the RH safety range based on the newly introduced concept of a deformation threshold value, is described and demonstrated with indicative results.

  8. Statistical Measurements of Contact Conditions of 478 Transport-airplane Landings During Routine Daytime Operations

    NASA Technical Reports Server (NTRS)

    Silsby, Norman S

    1955-01-01

    Statistical measurements of contact conditions have been obtained, by means of a special photographic technique, for 478 landings of present-day transport airplanes made during routine daylight operations in clear air at the Washington National Airport. From the measurements, sinking speeds, rolling velocities, bank angles, and horizontal speeds at the instant before contact have been evaluated, and a limited statistical analysis of the results is reported.

  9. Laser vibrometry vibration measurements on vehicle cabins in running conditions: helicopter mock-up application

    NASA Astrophysics Data System (ADS)

    Revel, Gian Marco; Castellini, Paolo; Chiariotti, Paolo; Tomasini, Enrico Primo; Cenedese, Fausto; Perazzolo, Alessandro

    2011-10-01

    The present work deals with the analysis of problems and potentials of laser vibrometer measurements inside vehicle cabins in running conditions, with particular reference to helicopters, where interior vibro-acoustic issues are very important. This paper describes the results of a systematic measurement campaign performed on an Agusta A109MKII mock-up. The aim is to evaluate the applicability of the scanning laser Doppler vibrometer (SLDV) for tests in simulated flying conditions and to understand how the performance of the technique is affected when the laser head is placed inside the cabin, thus being subjected to interfering inputs. First, a brief description of the test cases performed and the measuring set-ups used is given. Comparative tests between the SLDV and accelerometers are presented, analyzing the achievable performance for the specific application. Results obtained measuring with the SLDV placed inside the helicopter cabin during operative excitation conditions are compared with those obtained with the laser lying outside the mock-up, the latter being considered as "reference measurements." Finally, in order to estimate the uncertainty level of the measured signals, a study linking the admissible percentage of noise content in vibrometer signals to laser head vibration levels is introduced.

  10. Measuring Youth Sexual Risk-Taking in Port-au-Prince, Haiti: Programmatic Recommendations from Different Methodological Approaches

    PubMed Central

    Speizer, Ilene S.; Beauvais, Harry; Gómez, Anu Manchikanti; Outlaw, Theresa Finn; Roussel, Barbara

    2013-01-01

    No studies have examined the applicability of varying methods for identifying youth at high risk of unintended pregnancies and contracting HIV. This study compares sociodemographic characteristics and sexual behaviors of youth (ages 15-24) in Port-au-Prince, Haiti surveyed using three different study methodologies. The three study methodologies are compared in terms of their utility for identifying high risk youth and utility for program planning. The three study methodologies are: a representative sample of youth from the 2005/6 Demographic and Health Survey (HDHS); a 2004 facility-based study; and a 2006/7 venue-based study that used the Priorities for Local AIDS Control Efforts (PLACE) method. The facility-based and PLACE studies included larger proportions of single, sexually experienced youth and youth who knew someone with HIV/AIDS than the HDHS. More youth in the PLACE study had multiple sexual partners in the last year and received money or gifts for sex compared to youth in facilities. At first and last sex, more PLACE youth used contraception including condoms. Pregnancy experience was most common in the facility-based data; however, more ever-pregnant PLACE youth reported ever terminating a pregnancy. Program managers seeking to target prevention activities should consider using facility- or venue-based methods to identify and understand the behaviors of high-risk youth. PMID:23012724

  11. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  12. Methodological study on exposure date of Tiankeng by AMS measurement of in situ produced cosmogenic 36Cl

    NASA Astrophysics Data System (ADS)

    Kejun, Dong; Shizhuo, Li; Ming, He; Sasa, Kimikazu; Matsushi, Yuki; Baojian, Huang; Xiangdong, Ruan; Yongjing, Guan; Takahashi, Tsutomu; Sueki, Keisuke; Chaoli, Li; Shaoyong, Wu; Xianggao, Wang; Hongtao, Shen; Nagashima, Yasuo; Shan, Jiang

    2013-01-01

    A tiankeng is a typical karst landform of the late Quaternary Period. Studies of the exposure ages of tiankengs are very important in geographical research to elucidate their formation conditions, developing processes, and features of biological species. 36Cl in the surface layer of the rupture cross-section of a tiankeng is largely produced by the cosmogenic high-energy neutron induced reactions 40Ca(n, αp) and 39K(n, α), and has accumulated since the formation of the tiankeng. The low-energy neutron reaction 35Cl(n, γ) contributes a small portion of the 36Cl. In this work, the concentration of cosmogenic 36Cl in rock samples taken from Dashiwei Tiankeng, Leye County, Guangxi Zhuang Autonomous Region, China, was measured jointly by the Accelerator Mass Spectrometry (AMS) laboratories of CIAE and the University of Tsukuba in an effort to estimate the formation time (or exposure age) of the tiankeng. The results show that the exposure age of Dashiwei Tiankeng is about 26 ± 9.6 ka (without erosion correction). The sampling strategy and procedures, experimental set-up, and preliminary results are presented in detail.
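    A minimal sketch of the zero-erosion exposure-age calculation follows, assuming the standard cosmogenic-nuclide accumulation equation and a 36Cl half-life of about 301 kyr; the concentration and production-rate values are hypothetical (real production rates must be scaled for latitude, altitude, and sample chemistry):

    ```python
    import math

    CL36_HALF_LIFE_YR = 3.01e5                 # 36Cl half-life (~301 kyr)
    LAM = math.log(2) / CL36_HALF_LIFE_YR      # decay constant, 1/yr

    def exposure_age(concentration, production_rate):
        """Zero-erosion exposure age (yr) from a cosmogenic-nuclide inventory.

        Solves N = (P/lam) * (1 - exp(-lam*t)) for t, the standard
        no-erosion accumulation equation; N in atoms/g, P in atoms/g/yr.
        """
        ratio = LAM * concentration / production_rate
        if not 0 <= ratio < 1:
            raise ValueError("inventory exceeds secular equilibrium P/lambda")
        return -math.log(1.0 - ratio) / LAM

    # Hypothetical values chosen to land near the ~26 ka result quoted above:
    t = exposure_age(concentration=5.0e5, production_rate=20.0)  # ~25.7 kyr
    ```

    For young surfaces (lam*t << 1) this reduces to t ≈ N/P, which is a useful sanity check on the inversion.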

  13. Membrane resistance and current distribution measurements under various operating conditions in a polymer electrolyte fuel cell

    NASA Astrophysics Data System (ADS)

    Brett, D. J. L.; Atkins, S.; Brandon, N. P.; Vasileiadis, N.; Vesovic, V.; Kucernak, A. R.

    The ability to make spatially resolved measurements in a fuel cell provides one of the most useful ways in which to monitor and optimise their performance. Localised membrane resistance and current density measurements for a single channel polymer electrolyte fuel cell are presented for a range of operating conditions. The current density distribution results are compared with an analytical model that exhibited generally good agreement across a broad range of operating conditions. However, under conditions of high air flow rate, an increase in current is observed along the channel which is not predicted by the model. Under such circumstances, localised electrochemical impedance measurements show a decrease in membrane resistance along the channel. This phenomenon is attributed to drying of the electrolyte at the start of the channel and is more pronounced with increasing operating temperature. Under conditions of reactant depletion, an increase in electrolyte resistance with decreasing current is observed. This is due to the hydrating effect of product water and electro-osmotic drag through the membrane when ionic current is flowing. Localised conduction is shown to be an effective means of conditioning previously unused membrane electrode assemblies by forcing passage of ionic current through the electrolyte.

  14. Measurements of the boundary layer and near wake of a supercritical airfoil at cruise conditions

    NASA Technical Reports Server (NTRS)

    Johnson, D. A.; Spaid, F. W.

    1981-01-01

    The flow behavior within the upper-surface boundary layer and near wake of a supercritical airfoil operating at cruise conditions is discussed. Experimental results obtained from wind-tunnel tests are presented which provide a more detailed description of the flow in these regions than was previously available. Mean streamwise velocity profiles measured by pitot-pressure-probe and laser-velocimeter techniques were found to be in excellent agreement. Other mean-flow properties obtained by the laser-velocimeter technique were the local flow angles in the viscous layers and the static pressures at the edges of the boundary layer and wake. The data set also includes measurements of the turbulence intensity and turbulent Reynolds-stress distributions as obtained by the laser-velocimeter technique. To assess the effects of the shock wave, a less extensive set of measurements was obtained at a subcritical test condition. The two test conditions (free-stream Mach number = 0.72 with airfoil section lift coefficient = 0.76, and free-stream Mach number = 0.5 with airfoil section lift coefficient = 0.75) provide a good test for state-of-the-art prediction methods because the upper-surface boundary layer is separated just upstream of the trailing edge in both cases.

  15. Measuring free-living physical activity in COPD patients: Deriving methodology standards for clinical trials through a review of research studies.

    PubMed

    Byrom, Bill; Rowe, David A

    2016-03-01

    This article presents a review of the research literature to identify the methodology used and outcome measures derived in the use of accelerometers to measure free-living activity in patients with COPD. Using this and existing empirical validity evidence we further identify standards for use, and recommended clinical outcome measures from continuous accelerometer data to describe pertinent measures of sedentary behaviour and physical activity in this and similar patient populations. We provide measures of the strength of evidence to support our recommendations and identify areas requiring continued research. Our findings support the use of accelerometry in clinical trials to understand and measure treatment-related changes in free-living physical activity and sedentary behaviour in patient populations with limited activity. PMID:26806669

  16. Measurement of dielectric properties of pumpable food materials under static and continuous flow conditions

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Continuous flow microwave sterilization is an emerging technology which has the potential to replace the conventional heating processes for viscous and pumpable food products. Dielectric properties of pumpable food products were measured by a new approach (under continuous flow conditions) at a temp...

  17. Measurement of dielectric properties of pumpable food materials under static and continuous flow conditions.

    PubMed

    Kumar, P; Coronel, P; Simunovic, J; Truong, V D; Sandeep, K P

    2007-05-01

    Continuous flow microwave sterilization is an emerging technology that has the potential to replace the conventional heating processes for viscous and pumpable food products. Dielectric properties of pumpable food products were measured by a new approach (under continuous flow conditions) at a temperature range of 20 to 130 degrees C and compared with those measured by the conventional approach (under static conditions). The food products chosen for this study were skim milk, green pea puree, carrot puree, and salsa con queso. Second-order polynomial correlations for the dependence of dielectric properties at 915 MHz of the food products on temperature were developed. Dielectric properties measured under static and continuous flow conditions were similar for homogeneous food products such as skim milk and vegetable puree, but they were significantly different for salsa con queso, which is a multiphase food product. The results from this study suggest that, for a multiphase product, dielectric properties measured under continuous flow conditions should be used for designing a continuous flow microwave heating system. PMID:17995769
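    Dielectric properties feed directly into microwave-heater design through the power penetration depth. A sketch using the standard dielectric-heating relation at the 915 MHz frequency mentioned above; the property values are hypothetical, not the study's measurements:

    ```python
    import math

    C = 2.998e8  # speed of light, m/s

    def penetration_depth(eps_prime, eps_double_prime, freq_hz=915e6):
        """Microwave power penetration depth (m) from the standard relation
        d_p = (c / 2*pi*f) / sqrt(2*eps' * (sqrt(1 + (eps''/eps')^2) - 1))."""
        tan2 = (eps_double_prime / eps_prime) ** 2
        return (C / (2 * math.pi * freq_hz)) / math.sqrt(
            2 * eps_prime * (math.sqrt(1 + tan2) - 1))

    # Hypothetical properties for a skim-milk-like product near room temperature:
    dp = penetration_depth(eps_prime=70.0, eps_double_prime=18.0)  # ~2.4 cm
    ```

    A lossier product (larger eps'') heats in a thinner outer layer, which is one reason continuous-flow measurements matter for multiphase products like salsa con queso.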

  18. Reliability of the Kinetic Measures under Different Heel Conditions during Normal Walking

    ERIC Educational Resources Information Center

    Liu, Yuanlong; Wang, Yong Tai

    2004-01-01

    The purpose of this study was to determine and compare the reliability of 3 dimension reaction forces and impulses in walking with 3 different heel shoe conditions. These results suggest that changing the height of the heels affects mainly the reliability of the ground reaction force and impulse measures on the medial and lateral dimension and not…

  19. Consistency of Self-Ratings Across Measurement Conditions and Test Administrations.

    ERIC Educational Resources Information Center

    Froberg, Debra G.

    1984-01-01

    Using participants in a continuing education conference, an experiment was conducted to examine: (1) the consistency of self-ratings of ability across four different measurement conditions; and, (2) the relative leniency of retrospective pretest self-ratings compared to pretest self-ratings. Implications for evaluators are discussed. (Author/BS)

  20. Measurements of magnetic fields produced by a 30 kWe class Arcjet Power Conditioning Unit

    NASA Astrophysics Data System (ADS)

    Bromaghim, Daron

    1993-01-01

    The magnetic fields produced by a 30 kWe class Arcjet Power Conditioning Unit (PCU) were measured using a Gauss meter and an axial magnetic field probe. These tests were conducted at Rocket Research Company in Redmond, WA, on 11-12 June 1992, using a 30 kWe class Arcjet PCU and associated hardware. The PCU is an engineering model designed by Pacific Electrodynamics for the Electric Propulsion Experiment (ESEX) to be flown on the Advanced Research and Global Observation Satellite (ARGOS). Measurements were recorded at various locations around the PCU at ambient conditions and at power levels of 26, 20, 10, and 6.5 kWe. The results indicate that, even in the worst-case test scenario, the magnetic fields are well below levels that would have damaging effects on the spacecraft or on any other experiment.

  1. The manometric sorptomat—an innovative volumetric instrument for sorption measurements performed under isobaric conditions

    NASA Astrophysics Data System (ADS)

    Kudasik, Mateusz

    2016-03-01

    The present paper discusses the concept of measuring the process of sorption by means of the volumetric method, developed in such a way as to allow measurements to be performed under isobaric conditions. On the basis of this concept, a prototype sorption instrument was built: the manometric sorptomat. The paper provides a detailed description of the idea of the instrument and of the way it works. In order to evaluate the usefulness of the device for sorption measurements carried out under laboratory conditions, comparative studies were conducted, during which the results of sorption measurements obtained with the developed instrument were compared with the results obtained with a reference device. The objects of comparison were the sorption capacities of hard coal samples, calculated on the basis of the established courses of the methane sorption process. The results were regarded as compatible if the compared values fell within the range of the measurement uncertainty of the two devices. For the comparative studies, fifteen granular samples of hard coal, representing the 0.20-0.25 mm grain fraction and coming from various mines of the Upper Silesian Coal Basin, were used. After comparing the results obtained with the original manometric sorptomat with those obtained with the gravimetric reference device, it was observed that the compatibility of measurements of sorption capacities was over 90%, based on the defined criterion of measurement compatibility.
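    The compatibility criterion can be sketched as a simple interval test; this is one plausible reading of "within the range of the measurement uncertainty of the two devices," and the sorption capacities and uncertainties below are hypothetical:

    ```python
    def compatible(a, u_a, b, u_b):
        """Two measured values are taken as compatible when their difference
        falls within the combined uncertainty of the two instruments."""
        return abs(a - b) <= (u_a + u_b)

    def compatibility_rate(pairs):
        """Fraction of (a, u_a, b, u_b) sample pairs judged compatible."""
        return sum(compatible(*p) for p in pairs) / len(pairs)

    # Hypothetical sorption capacities (cm3/g): sorptomat vs gravimetric reference
    pairs = [
        (18.2, 0.5, 18.5, 0.4),
        (21.0, 0.5, 22.3, 0.4),   # incompatible: differs by more than 0.9
        (15.7, 0.5, 15.3, 0.4),
        (19.9, 0.5, 20.1, 0.4),
    ]
    rate = compatibility_rate(pairs)  # 0.75 for this toy set
    ```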

  2. Removal Rates of Aqueous Cr(VI) by Zero-Valent Iron Measured Under Flow Conditions

    SciTech Connect

    Kaplan, D.I.

    2002-05-10

    Studies were undertaken to measure the rate of Cr(VI) removal from the aqueous phase by zero-valent iron, Fe(0), under flow conditions. The intent of this work was to generate removal rate coefficients that would be applicable to the Reactive Well Technology, a groundwater remediation technology that replaces the sand in a filter pack of a conventional well with a reactive material, such as Fe(0). The pseudo-first-order rate coefficients measured under flow conditions were comparable to those previously measured under batch conditions that had significantly greater ratios of solution volume to Fe(0) surface area. Between the range of 20 and 100 weight percent Fe(0), there was little measurable change in the reaction kinetics. Thus, it may be possible to include sand into the reactive filter packs in the event it is necessary to increase filter pack porosity or to decrease the accumulation of secondary reaction products that may lead to filter pack plugging. Background water chemistry had only marginal effects on reaction rate coefficients. The reaction rates measured in this study indicated that an Fe(0) filter pack could be used to lower Cr(VI) concentrations by several orders of magnitude in a once-through mode of operation of the Reactive Well Technology.
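    Pseudo-first-order rate coefficients of this kind are typically estimated from the slope of ln(C/C0) versus time. A stdlib-only sketch with hypothetical effluent data (not the study's measurements):

    ```python
    import math

    def pseudo_first_order_k(times, concentrations):
        """Estimate a pseudo-first-order rate coefficient k (1/time) from a
        least-squares line through ln(C/C0) vs t; the slope is -k."""
        c0 = concentrations[0]
        ys = [math.log(c / c0) for c in concentrations]
        n = len(times)
        tbar = sum(times) / n
        ybar = sum(ys) / n
        slope = (sum((t - tbar) * (y - ybar) for t, y in zip(times, ys))
                 / sum((t - tbar) ** 2 for t in times))
        return -slope

    # Hypothetical Cr(VI) effluent data: C in mg/L, t in hours
    t_h = [0.0, 0.5, 1.0, 1.5, 2.0]
    c_mg_l = [10.0, 6.07, 3.68, 2.23, 1.35]    # roughly C0 * exp(-1.0 * t)
    k = pseudo_first_order_k(t_h, c_mg_l)      # ~1.0 per hour
    ```

    Comparing such coefficients between flow and batch experiments is exactly the kind of check the abstract describes.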

  3. Testing methodologies

    SciTech Connect

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures to occupational levels can be detected. However, determination of exposures to the very low doses anticipated from a low-level radioactive waste disposal site is more problematical. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal to noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational, or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes on the DNA level, while presently less well developed, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs.

  4. A SYSTEMATIC REVIEW OF OUTCOME TOOLS USED TO MEASURE LOWER LEG CONDITIONS

    PubMed Central

    Shultz, Susan; Olszewski, Amanda; Ramsey, Olivia; Schmitz, Michelle; Wyatt, Verrelle

    2013-01-01

    Background Context: A variety of self-report and physical performance-based outcome measures are commonly used to assess progress and recovery in the lower leg, ankle, and foot. A requisite attribute of any outcome measure is its ability to detect change in a condition, a construct known as "responsiveness". There is a lack of consistency in how responsiveness is defined across outcome measures. Purpose: The purpose of this study was to review the currently used recovery outcome measures for lower leg, ankle and foot conditions in order to determine and report recommended responsiveness values. Methods: A systematic literature search that included electronic searches of PubMed, CINAHL and SportDiscus as well as extensive cross-referencing was performed in January, 2013. Studies were included if each involved: 1) a prospective, longitudinal study of any design; 2) any condition associated with the lower leg, ankle or foot; 3) a measure of responsiveness; and 4) an acceptable type of outcome measure (e.g., self-report, physical performance, or clinician report). The quality of the included articles was assessed by two independent authors using the responsiveness sub-component of the Consensus-based Standards for the selection of health Measurement Instruments (COSMIN). Results: Sixteen different studies met the inclusion criteria for this systematic review. The most commonly used outcome measures were the Foot and Ankle Ability Measure and the Lower Extremity Functional Scale. Responsiveness was calculated by a variety of methods, including effect size, standardized response mean, minimal clinically important difference/importance, minimal detectable change, and minimal important change. Conclusion: Based on the findings of this systematic review, there is a lack of consistency in reporting responsiveness among recovery measures used in lower leg, ankle or foot studies. It is possible that the variability of conditions that involve the lower leg
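    The responsiveness statistics named in the Results section above have standard definitions that can be sketched directly; the pre/post scores and reliability values below are hypothetical:

    ```python
    import math
    from statistics import mean, stdev

    def effect_size(pre, post):
        """Effect size: mean change divided by the SD of baseline scores."""
        changes = [b - a for a, b in zip(pre, post)]
        return mean(changes) / stdev(pre)

    def standardized_response_mean(pre, post):
        """SRM: mean change divided by the SD of the change scores."""
        changes = [b - a for a, b in zip(pre, post)]
        return mean(changes) / stdev(changes)

    def minimal_detectable_change(sd_baseline, icc, z=1.96):
        """MDC at ~95% confidence: z * SEM * sqrt(2),
        with SEM = SD * sqrt(1 - ICC)."""
        sem = sd_baseline * math.sqrt(1 - icc)
        return z * sem * math.sqrt(2)

    # Hypothetical pre/post scores on a 0-100 foot-and-ankle scale:
    pre  = [55, 60, 48, 70, 62, 58, 65, 50]
    post = [68, 72, 60, 78, 70, 73, 76, 63]
    srm = standardized_response_mean(pre, post)
    es  = effect_size(pre, post)
    ```

    The two ratios differ only in their denominator, which is one source of the inconsistency the review reports.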

  5. Measuring Bathymetry, Runup, and Beach Volume Change during Storms: New Methodology Quantifies Substantial Changes in Cross-Shore Sediment Flux

    NASA Astrophysics Data System (ADS)

    Brodie, K. L.; McNinch, J. E.

    2009-12-01

    Accurate predictions of beach change during storms are contingent upon a correct understanding of wave-driven sediment exchange between the beach and nearshore during high energy conditions. Conventional storm data sets use “pre” (often weeks to months prior) and “post” (often many days after the storm in calm conditions) collections of beach topography and nearshore bathymetry to characterize the effects of the storm. These data have led to a common theory for wave-driven event response of the nearshore system, wherein bars and shorelines are smoothed and straightened by strong alongshore currents into two-dimensional, linear forms. Post-storm, the shoreline accretes, bars migrate onshore, and three-dimensional shapes begin to build as low-energy swell returns. Unfortunately, these approaches have left us with a knowledge gap of the extent and timing of erosion and accretion during storms, arguably the most important information both for scientists trying to model storm damage or inundation, and homeowners trying to manage their properties. This work presents the first spatially extensive (10 km alongshore) and temporally high-resolution (dt = 12 hours) quantitative data set of beach volume and nearshore bathymetry evolution during a Nor’easter on North Carolina’s Outer Banks. During the Nor’easter, significant wave height peaked at 3.4 m, and was greater than 2 m for 37 hours, as measured by the Duck FRF 8 m array. Data were collected using CLARIS: Coastal Lidar and Radar Imaging System, a mobile system that couples simultaneous observations of beach topography from a Riegl laser scanner and nearshore bathymetry (out to ~1 km offshore) from X-Band radar-derived celerity measurements (BASIR). The merging of foreshore lidar elevations with 6-min averages of radar-derived swash runup also enables mapping of maximum-runup elevations alongshore during the surveys. Results show that during the storm, neither the shoreline nor nearshore bathymetry returned
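    Radar-derived celerity yields depth through the linear dispersion relation for surface gravity waves. A sketch of that inversion (the wave speed and period values are illustrative, not survey data):

    ```python
    import math

    G = 9.81  # gravitational acceleration, m/s^2

    def depth_from_celerity(c, period_s):
        """Invert the linear dispersion relation w^2 = g*k*tanh(k*h) for
        depth h, given an observed phase speed c and wave period.
        With k = w/c, tanh(k*h) = w*c/g, so h = atanh(w*c/g) / k."""
        omega = 2 * math.pi / period_s
        k = omega / c                 # wavenumber from c = omega/k
        arg = omega * c / G           # equals tanh(k*h)
        if not 0 < arg < 1:
            raise ValueError("celerity at or above the deep-water limit")
        return math.atanh(arg) / k

    # Illustrative nearshore wave: 6 m/s phase speed, 10 s period -> ~3.9 m depth
    h = depth_from_celerity(c=6.0, period_s=10.0)
    ```

    In the shallow-water limit this collapses to h ≈ c²/g, the familiar c = sqrt(g*h) rule, which makes a convenient sanity check.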

  6. Methodology to measure the transient effect of occlusion on skin penetration and stratum corneum hydration in vivo.

    PubMed

    Ryatt, K S; Mobayen, M; Stevenson, J M; Maibach, H I; Guy, R H

    1988-09-01

    The effect of short duration occlusion on skin penetration and stratum corneum water content was studied in vivo in eight human subjects. Percutaneous absorption of hexyl nicotinate was monitored non-invasively by laser Doppler velocimetry (LDV) following each of three randomly assigned pre-treatments: untreated control, 30 min occlusion with a polypropylene chamber, and 30 min occlusion followed by exposure to ambient conditions for 1 h. Stratum corneum water content after the same pre-treatments was measured with the dielectric probe technique. The local vasodilatory effect of the nicotinic acid ester was quantified using LDV by the onset of increased blood flow, the time of maximal increase in response, the magnitude of the peak response, and the area under the response-time curve. Each of these parameters was significantly different, immediately following occlusion, from the untreated control values. However, if the occluded site was exposed for 1 h prior to hexyl nicotinate application, these parameters did not differ significantly from the controls. Stratum corneum water content (expressed as a percentage of a maximal value) showed the same behaviour: the pre-treatment control value was 31.8 ± 4.8%; after 30 min occlusion, this had risen to 46.9 ± 6.2%; 1 h later, the reading had returned to 32.1 ± 6.2%. There was a significant correlation between stratum corneum water content and area under the LDV response-time curve. It appears, therefore, that this method may be useful for quantifying the relationship between increased stratum corneum hydration and enhanced percutaneous absorption in vivo in man. PMID:3179203
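    The area-under-the-curve parameter used above is conventionally computed with the trapezoidal rule over the sampled response; the flux values below are hypothetical, not the study's recordings:

    ```python
    def auc(times, responses, baseline=0.0):
        """Area under a response-time curve above a baseline,
        by the trapezoidal rule."""
        area = 0.0
        for (t0, r0), (t1, r1) in zip(zip(times, responses),
                                      zip(times[1:], responses[1:])):
            area += 0.5 * ((r0 - baseline) + (r1 - baseline)) * (t1 - t0)
        return area

    # Hypothetical laser Doppler flux (arbitrary units) after hexyl nicotinate,
    # sampled every 2 min; baseline flux subtracted before integrating.
    t_min = [0, 2, 4, 6, 8, 10, 12]
    flux  = [10, 12, 30, 55, 48, 25, 14]
    a = auc(t_min, flux, baseline=10.0)
    ```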

  7. Use of acoustic velocity methodology and remote sensing techniques to measure unsteady flow on the lower Yazoo River in Mississippi

    USGS Publications Warehouse

    Turnipseed, D. Phil; Cooper, Lance M.; Davis, Angela A.

    1998-01-01

    Methodologies have been developed for computing continuous discharge during varied, non-uniform low and medium flows on the Yazoo River at the U.S. Geological Survey streamgage below Steele Bayou near Long Lake, Mississippi, using acoustic signal processing and conventional streamgaging techniques. Procedures were also developed to compute discharges during future high-flow events, when the stream reach is subject to bi-directional and reverse flow caused by rising stages on the Mississippi River, using a combination of acoustic equipment and remote sensing technology. A description of the study area is presented. Selected results of these methods are presented for the period from March through September 1997.

  8. Methodological Issues in Mobile Computer-Supported Collaborative Learning (mCSCL): What Methods, What to Measure and When to Measure?

    ERIC Educational Resources Information Center

    Song, Yanjie

    2014-01-01

    This study aims to investigate (1) methods utilized in mobile computer-supported collaborative learning (mCSCL) research which focuses on studying, learning and collaboration mediated by mobile devices; (2) whether these methods have examined mCSCL effectively; (3) when the methods are administered; and (4) what methodological issues exist in…

  9. Patient empowerment in long-term conditions: development and preliminary testing of a new measure

    PubMed Central

    2013-01-01

    Background Patient empowerment is viewed by policy makers and health care practitioners as a mechanism to help patients with long-term conditions better manage their health and achieve better outcomes. However, assessing the role of empowerment is dependent on effective measures of empowerment. Although many measures of empowerment exist, no measure has been developed specifically for patients with long-term conditions in the primary care setting. This study presents preliminary data on the development and validation of such a measure. Methods We conducted two empirical studies. Study one was an interview study to understand empowerment from the perspective of patients living with long-term conditions. Qualitative analysis identified dimensions of empowerment, and the qualitative data were used to generate items relating to these dimensions. Study two was a cross-sectional postal study involving patients with different types of long-term conditions recruited from general practices. The survey was conducted to test and validate our new measure of empowerment. Factor analysis and regression were performed to test scale structure, internal consistency and construct validity. Results Sixteen predominantly elderly patients with different types of long-term conditions described empowerment in terms of five dimensions (identity, knowledge and understanding, personal control, personal decision-making, and enabling other patients). One hundred and ninety-seven survey responses were received, mainly from older white females with relatively low levels of formal education, the majority retired from paid work. Almost half of the sample reported cardiovascular, joint or diabetes long-term conditions. Factor analysis identified a three-factor solution (positive attitude and sense of control, knowledge and confidence in decision making, and enabling others), although the structure lacked clarity. A total empowerment score across all items showed acceptable levels of internal

  10. Measurement of distributed strain and temperature based on higher order and higher mode Bragg conditions

    NASA Technical Reports Server (NTRS)

    Sirkis, James S. (Inventor); Sivanesan, Ponniah (Inventor); Venkat, Venki S. (Inventor)

    2001-01-01

    A Bragg grating sensor for measuring distributed strain and temperature at the same time comprises an optical fiber having a single mode operating wavelength region and, below the cutoff wavelength of the fiber, a multimode operating wavelength region. A saturated, higher order Bragg grating having first and second order Bragg conditions is fabricated in the optical fiber. The first order Bragg resonance wavelength of the grating is within the single mode operating wavelength region of the optical fiber, and the second order Bragg resonance wavelength is below the cutoff wavelength of the fiber, within the multimode operating wavelength region. The reflectivities of the saturated Bragg grating at the first and second order Bragg conditions are within two orders of magnitude of one another. In use, the first and second order Bragg conditions are simultaneously created in the sensor at the respective wavelengths, and a signal from the sensor is demodulated with respect to each of the wavelengths corresponding to the first and second order Bragg conditions. The two Bragg conditions have different responsivities to strain and temperature, thus allowing two equations for axial strain and temperature to be written in terms of the measured shifts in the first and second order Bragg wavelengths. This system of equations can be solved for strain and temperature.
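
    The readout described above reduces to a 2×2 linear system relating the two measured wavelength shifts to axial strain and temperature. A minimal sketch, with hypothetical responsivity coefficients (the abstract gives no numerical values):

```python
# Hypothetical sketch: recover strain and temperature from the shifts of
# the first- and second-order Bragg wavelengths. The responsivity
# coefficients K_* are illustrative placeholders, not values from the patent.
def solve_strain_temperature(d_lambda1, d_lambda2,
                             K_eps1, K_T1, K_eps2, K_T2):
    """Solve the system
         d_lambda1 = K_eps1 * eps + K_T1 * dT
         d_lambda2 = K_eps2 * eps + K_T2 * dT
    for axial strain eps and temperature change dT via Cramer's rule."""
    det = K_eps1 * K_T2 - K_T1 * K_eps2
    if det == 0:
        raise ValueError("responsivities are not independent")
    eps = (d_lambda1 * K_T2 - K_T1 * d_lambda2) / det
    dT = (K_eps1 * d_lambda2 - d_lambda1 * K_eps2) / det
    return eps, dT
```

    The system is solvable precisely because the two Bragg conditions respond differently to strain and temperature; were the responsivity matrix singular, both measurements would carry the same information.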

  11. Towards the automatic identification of cloudiness condition by means of solar global irradiance measurements

    NASA Astrophysics Data System (ADS)

    Sanchez, G.; Serrano, A.; Cancillo, M. L.

    2010-09-01

    This study focuses on the design of an automatic algorithm for classifying the cloudiness condition based only on global irradiance measurements. Clouds are a major modulating factor for the Earth's radiation budget. They attenuate the solar radiation and control the terrestrial radiation participating in the energy balance. Generally, cloudiness is a limiting factor for the solar radiation reaching the ground, contributing strongly to the Earth's albedo. Additionally, it is the main cause of the high variability shown by the downward irradiance measured at ground level. Because cloudiness is a major source of attenuation and of high-frequency variability in the solar radiation available for energy purposes in solar power plants, the characterization of the cloudiness condition is of great interest. This importance is even higher in Southern Europe, where very high irradiation values are reached during long periods of the year. Thus, several indexes have been proposed in the literature for characterizing the cloudiness condition of the sky. Among these indexes, those exclusively involving global irradiance are of special interest, since this variable is the most widely available measurement in radiometric stations. Taking this into account, this study proposes an automatic algorithm for classifying the cloudiness condition of the sky into three categories: cloud-free, partially cloudy and overcast. For that aim, solar global irradiance was measured by a Kipp & Zonen CMP11 pyranometer installed on the terrace of the Physics building on the Badajoz (Spain) campus of the University of Extremadura. Measurements were recorded on a one-minute basis for a period of study extending from 23 November 2009 to 31 March 2010. The algorithm is based on the clearness index kt, which is calculated as the ratio between the solar global downward irradiance measured at the ground and the solar downward irradiance at the top of the atmosphere. Since partially cloudy conditions
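
    As a rough illustration of the quantity the algorithm is built on, the clearness index and a simple three-way classification can be sketched as follows. The thresholds and the eccentricity correction used here are illustrative assumptions, not the paper's derived criteria:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, nominal total solar irradiance

def clearness_index(ghi, day_of_year, solar_zenith_deg):
    """kt = measured global horizontal irradiance divided by the
    extraterrestrial irradiance projected onto a horizontal plane."""
    # Eccentricity correction for the varying Earth-Sun distance
    e0 = 1.0 + 0.033 * math.cos(2.0 * math.pi * day_of_year / 365.0)
    g_toa = SOLAR_CONSTANT * e0 * math.cos(math.radians(solar_zenith_deg))
    return ghi / g_toa if g_toa > 0 else float("nan")

def classify_sky(kt):
    # Hypothetical cut-offs; the study derives its own classification rules.
    if kt >= 0.7:
        return "cloud-free"
    if kt >= 0.3:
        return "partially cloudy"
    return "overcast"
```

    In practice such fixed thresholds are only a starting point, since kt for a given sky state also drifts with solar elevation.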

  12. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from the Earth's deep interior requires measurements of the physical properties of Earth materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subducted crustal material can reach the core-mantle boundary under specific circumstances. This leaves no room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. Consequently, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, in order to perform in situ experiments on representative complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large-volume presses provide sample volumes 3 to 7 orders of magnitude larger than diamond anvil cells, far beyond transition-zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small, and even adjustable, temperature gradients over the whole sample, this technique in principle makes anisotropy and grain-boundary effects in complex systems accessible to measurements of elastic and inelastic properties. The measurement of both elastic wave velocities is also unrestricted for opaque and encapsulated samples. The application of triple-mode transducers and the data-transfer-function technique for ultrasonic interferometry reduces the time for saving the data during the experiment to about a minute or less. This makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ sample deformation measurement by X-radiography. Time-resolved X-radiography makes in situ falling-sphere viscosimetry and even the

  13. Experimental measurements of heat transfer from an iced surface during artificial and natural cloud icing conditions

    NASA Technical Reports Server (NTRS)

    Kirby, Mark S.; Hansman, R. John, Jr.

    1988-01-01

    The heat transfer behavior of accreting ice surfaces in natural (flight test) and simulated (wind tunnel) cloud icing conditions was studied. Observations of wet and dry ice growth regimes, as measured by ultrasonic pulse-echo techniques, were made. Observed wet and dry ice growth regimes at the stagnation point of a cylinder were compared with those predicted using a quasi-steady-state heat balance model. A series of heat transfer coefficients was employed by the model to infer the local heat transfer behavior of the actual ice surfaces. The heat transfer in the stagnation region was generally inferred to be higher in wind tunnel icing tests than in natural flight icing conditions.

  14. Methodology for Assessing the Probability of Corrosion in Concrete Structures on the Basis of Half-Cell Potential and Concrete Resistivity Measurements

    PubMed Central

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques; most commonly, the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential Ecorr and concrete resistivity ρ were measured at each point of the applied grid. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures. PMID:23766706

  15. Development of a measurement system for the online inspection of microstructured surfaces in harsh industrial conditions

    NASA Astrophysics Data System (ADS)

    Mueller, Thomas; Langmann, Benjamin; Reithmeier, Eduard

    2014-05-01

    Microscopic imaging techniques are usually applied for the inspection of microstructured surfaces. These techniques require clean measurement conditions. Soiling, e.g. dust or splashing liquids, can disturb the measurement process or even damage instruments. Since such soiling occurs in the majority of manufacturing processes, microscopic inspection usually must be carried out in a separate laboratory. We present a measurement system which allows for microscopic inspection and 3D reconstruction of microstructured surfaces in harsh industrial conditions. The measurement system also enables precise positioning, e.g. of a grinding wheel, with an accuracy of 5 μm. The main component of the measurement system is a CCD camera with a high-magnification telecentric lens. By means of this camera it is possible to measure even structures with dimensions in the range of 30 to 50 μm. The camera and the lens are integrated into a waterproof and dustproof enclosure. The inspection window of the enclosure has an air curtain which serves as a splash guard. Because workpiece illumination is crucial to obtaining good measurement results, the system includes high-power LEDs integrated in a waterproof enclosure. The measurement system also includes a laser with a specially designed lens system that forms an extremely narrow light section on the workpiece surface, achieving a line width of 25 μm. This line and the camera with the high-magnification telecentric lens are used to perform laser triangulation of the microstructured surface. This paper describes the system as well as the development and evaluation of the software for the automatic positioning of the workpiece and the automatic three-dimensional surface analysis.

  16. Methodological Measurement Fruitfulness of Exploratory Structural Equation Modeling (ESEM): New Approaches to Key Substantive Issues in Motivation and Engagement

    ERIC Educational Resources Information Center

    Marsh, Herbert W.; Liem, Gregory Arief D.; Martin, Andrew J.; Morin, Alexandre J. S.; Nagengast, Benjamin

    2011-01-01

    The most popular measures of multidimensional constructs typically fail to meet standards of good measurement: goodness of fit, measurement invariance, lack of differential item functioning, and well-differentiated factors that are not so highly correlated as to detract from their discriminant validity. Part of the problem, the authors argue, is…

  17. Development of a Hydraulic-driven Soil Penetrometer for Measuring Soil Compaction in Field Conditions

    NASA Astrophysics Data System (ADS)

    Tekin, Yucel; Okursoy, Rasim

    Soil compaction is an important physical limiting factor for root emergence and the growth of plants. Therefore it is essential to control soil compaction, which is normally caused by heavy traffic in fields during the growing season. Soil compaction in fields is usually measured using standard soil cone penetrometers, which come in several different designs. Most of the time, especially in heavy soil conditions, measuring soil compaction with a standard hand penetrometer produces measurement errors if the cone of the penetrometer cannot be pushed into the soil at a standard rate. Obtaining data with hand penetrometers is also difficult and takes considerable time and effort. For this reason, a three-point hitch-mounted, hydraulic-driven soil cone penetrometer has been designed in order to reduce time and effort and to reduce possible measurement errors in the sampling of soil compaction data for research purposes. The hydraulic penetrometer is mounted on the three-point hitch, and a hydraulic piston pushes the standard penetrometer cone into the soil at a constant speed. Forces acting on the cone base are recorded with a computer-based 16-bit data acquisition system composed of a load cell, a portable computer, signal amplification and the necessary control software for the sampling. In conclusion, the designed and constructed three-point hitch-mounted, hydraulic-driven standard soil cone penetrometer provides quick and very accurate measurements of soil compaction in heavy clay soil conditions.

  18. Measurement of a tree growth condition by the hetero-core optical fiber sensor

    NASA Astrophysics Data System (ADS)

    Uchida, Hoshito; Akita, Shohei; Nishiyama, Michiko; Kumekawa, Norikazu; Watanabe, Kazuhiro

    2011-04-01

    The condition and growth of trees are considered important in monitoring the global circulation of heat and water; tree growth is additionally affected by CO2 and air pollutants. Moreover, since plant growth is affected by the surrounding climate, real-time monitoring of crop plant growth is expected to make quantitative agricultural management possible. This study proposed methods of measuring tree growth using hetero-core optical fiber sensors, which are suitable for long-term, remote, real-time monitoring over wide areas owing to their independence from temperature fluctuations and weather conditions, in addition to the general advantages of optical fibers. Two types of sensors were used for this purpose: a dendrometer, which measured radial changes of a tree stem, and an elastic sensor for measuring the growth of smaller trees such as crop plants. In our experiments, it was demonstrated that the dendrometer was capable of resolving seasonal differences in tree growth, with growth of 2.08 mm between spring and summer and 0.21 mm between autumn and winter, respectively. Additionally, this study proposed the elastic sensor for measuring crop plant growth because of its compact, light design and the monotonic change of its optical loss with the amount of expansion and contraction.

  19. Method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions

    DOEpatents

    McGrail, Bernard P.; Martin, Paul F.; Lindenmeier, Clark W.

    1999-01-01

    The present invention is a method and apparatus for measuring coupled flow, transport and reaction processes under liquid unsaturated flow conditions. The method and apparatus of the present invention permit distinguishing individual precipitation events and their effect on dissolution behavior isolated to the specific event. The present invention is especially useful for dynamically measuring hydraulic parameters when a chemical reaction occurs between a particulate material and either liquid or gas (e.g. air) or both, causing precipitation that changes the pore structure of the test material.

  20. Electrode size and boundary condition independent measurement of the effective piezoelectric coefficient of thin films

    SciTech Connect

    Stewart, M.; Lepadatu, S.; McCartney, L. N.; Cain, M. G.; Wright, L.; Crain, J.; Newns, D. M.; Martyna, G. J.

    2015-02-01

    The determination of the piezoelectric coefficient of thin films using interferometry is hindered by bending contributions. Using finite element analysis (FEA) simulations, we show that the Lefki and Dormans approximations using either single or double-beam measurements cannot be used with finite top electrode sizes. We introduce a novel method for characterising piezoelectric thin films which uses a differential measurement over the discontinuity at the electrode edge as an internal reference, thereby eliminating bending contributions. This step height is shown to be electrode size and boundary condition independent. An analytical expression is derived which gives good agreement with FEA predictions of the step height.

  1. Quantitative measurement of tip sample forces by dynamic force spectroscopy in ambient conditions

    NASA Astrophysics Data System (ADS)

    Hölscher, H.; Anczykowski, B.

    2005-03-01

    We introduce a dynamic force spectroscopy technique enabling the quantitative measurement of conservative and dissipative tip-sample forces in ambient conditions. In contrast to the commonly recorded force-vs-distance curves, dynamic force microscopy allows measurement of the full range of tip-sample forces without the hysteresis effects caused by a jump-to-contact. The approach is based on the specific behavior of a self-driven cantilever (frequency-modulation technique). Experimental applications to different samples (Fischer sample, silicon wafer) are presented.

  2. Design and optimization of the sine condition test for measuring misaligned optical systems.

    PubMed

    Lampen, Sara; Dubin, Matthew; Burge, James H

    2013-10-10

    By taking a new look at an old concept, we have shown in our previous work how the Abbe sine condition can be used to measure linearly field-dependent aberrations in order to verify the alignment of optical systems. In this paper, we expand on this method and discuss the design choices involved in implementing the sine condition test (SCTest). Specifically, we discuss the two illumination options for the test: point source with a grating or flat-panel display, and we discuss the tradeoffs of the two approaches. Additionally, experimental results are shown using a flat-panel display to measure linearly field-dependent aberrations. Last, we elaborate on how to implement the SCTest on more complex optical systems, such as a three-mirror anastigmat and a double Gauss imaging lens system. PMID:24217726

  3. Measurement of exhaust emissions from two J-58 engines at simulated supersonic cruise flight conditions

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1976-01-01

    Emissions of total oxides of nitrogen, unburned hydrocarbons, carbon monoxide, and carbon dioxide from two J-58 afterburning turbojet engines at simulated high-altitude flight conditions are reported. Test conditions included flight speeds from Mach 2 to 3 at altitudes from 16 to 23 km. For each flight condition, exhaust measurements were made for four or five power levels from maximum power without afterburning through maximum afterburning. The data show that exhaust emissions vary with flight speed, altitude, power level, and radial position across the exhaust. Oxides of nitrogen (NOX) emissions decreased with increasing altitude, and increased with increasing flight speed. NOX emission indices with afterburning were less than half the value without afterburning. Carbon monoxide and hydrocarbon emissions increased with increasing altitude, and decreased with increasing flight speed. Emissions of these species were substantially higher with afterburning than without.

  5. Zero-valent iron removal rates of aqueous Cr(VI) measured under flow conditions

    SciTech Connect

    Kaplan, Daniel I.; Gilmore, Tyler J.

    2004-06-30

    The rates of Cr(VI) removal from the aqueous phase by zero-valent iron, Fe(0), were measured under flow conditions. The intent of this work was to generate removal rate coefficients that would be applicable to the Reactive Well Technology, a groundwater remediation technology that replaces the sand in the filter pack of a conventional well with a reactive material, such as Fe(0).

  6. Laser spectroscopic real time measurements of methanogenic activity under simulated Martian subsurface analog conditions

    NASA Astrophysics Data System (ADS)

    Schirmack, Janosch; Böhm, Michael; Brauer, Chris; Löhmannsröben, Hans-Gerd; de Vera, Jean-Pierre; Möhlmann, Diedrich; Wagner, Dirk

    2014-08-01

    On Earth, chemolithoautotrophic and anaerobic microorganisms such as methanogenic archaea are regarded as model organisms for possible subsurface life on Mars. For this reason, the methanogenic strain Methanosarcina soligelidi (formerly called Methanosarcina spec. SMA-21), isolated from permafrost-affected soil in northeast Siberia, has been tested under Martian thermo-physical conditions. In previous studies under simulated Martian conditions, high survival rates of these microorganisms were observed. In our study we present a method of measuring methane production as a first attempt to study the metabolic activity of methanogenic archaea under simulated conditions approaching those of Mars-like environments. Determining methanogenic activity requires a measurement technique capable of recording the produced methane concentration with high precision and high temporal resolution. Although there are several methods to detect methane, only a few fulfill all of the requirements for work within simulated extraterrestrial environments. We have chosen laser spectroscopy, a non-destructive technique that measures the methane concentration without sample taking and can also be run continuously. In our simulation, we detected methane production at temperatures down to -5 °C, which would be found on Mars either temporarily in the shallow subsurface or continually in the deep subsurface. The pressure of 50 kPa used in our experiments corresponds to the expected pressure in the Martian near subsurface. Our new device proved to be fully functional, and the results indicate that the possible existence of methanogenic archaea in Martian subsurface habitats cannot be ruled out.

  7. Assessing the use of remotely sensed measurements for characterizing rangeland condition

    NASA Astrophysics Data System (ADS)

    Folker, Geoffrey P.

    There are over 233 million hectares (ha) of nonfederal grazing lands in the United States. Conventional field observation and sampling techniques are insufficient methods to monitor such large areas frequently enough to confidently quantify the biophysical state and assess rangeland condition over large geographic areas. In an attempt to enhance rangeland resource managers' abilities to monitor and assess these factors, remote sensing scientists and land resource managers have worked together to determine whether remotely sensed measurements can improve the ability to measure rangeland response to land management practices. The relationship between spectral reflectance patterns and plant species composition was investigated on six south-central Kansas ranches. Airborne multispectral color infrared images for 2002 through 2004 were collected at multiple times in the growing season over the study area. Concurrent with the image acquisition periods, ground cover estimates of plant species composition and biomass by growth form were collected. Correlation analysis was used to examine relationships among spectral and biophysical field measurements. Results indicate that heavily grazed sites exhibited the highest spectral vegetation index values. This was attributed to increases in low forage quality broadleaf forbs such as annual ragweed (Ambrosia artemisiifolia L.). Although higher vegetation index values have a positive correlation with overall above ground primary productivity, species composition may be the best indicator of healthy rangeland condition. A Weediness Index, which was found to be correlated with range condition, was also strongly linked to spectral reflectance patterns recorded in the airborne imagery.

  8. Ground-truthing evoked potential measurements against behavioral conditioning in the goldfish, Carassius auratus

    NASA Astrophysics Data System (ADS)

    Hill, Randy J.; Mann, David A.

    2005-04-01

    Auditory evoked potentials (AEPs) have become commonly used to measure hearing thresholds in fish. However, it is uncertain how well AEP thresholds match behavioral hearing thresholds and what effect variability in electrode placement has on AEPs. In the first experiment, the effect of electrode placement on AEPs was determined by simultaneously recording AEPs from four locations on each of 12 goldfish, Carassius auratus. In the second experiment, the hearing sensitivity of 12 goldfish was measured using both classical conditioning and AEPs in the same setup. For behavioral conditioning, the fish were trained to reduce their respiration rate in response to a 5 s sound presentation paired with a brief shock. A modified staircase method was used in which 20 reversals were completed for each frequency, and threshold levels were determined by averaging the last 12 reversals. Once the behavioral audiogram was completed, the AEP measurements were made without moving the fish. The recording electrode was located subdermally over the medulla and was inserted prior to classical conditioning to minimize handling of the animal. The same sound stimuli (pulsed tones) were presented and the resultant evoked potentials were recorded for 1000-6000 averages. AEP input-output functions were then compared to the behavioral audiogram to compare techniques for estimating behavioral thresholds from AEP data.
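
    The threshold rule in the modified staircase described above (20 reversals per frequency, threshold = mean of the last 12) can be sketched as follows; the reversal levels in the usage example are invented:

```python
# Minimal sketch of the staircase threshold rule described above:
# complete n_total reversals, then average the last n_avg levels.
def staircase_threshold(reversal_levels_db, n_total=20, n_avg=12):
    """Return the mean of the last n_avg of n_total reversal levels (dB)."""
    if len(reversal_levels_db) < n_total:
        raise ValueError("staircase incomplete: need %d reversals" % n_total)
    return sum(reversal_levels_db[-n_avg:]) / float(n_avg)
```

    Discarding the early reversals removes the influence of the starting level, so the average reflects only the region where the track has converged.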

  9. Brief inhalation method to measure cerebral oxygen extraction fraction with PET: Accuracy determination under pathologic conditions

    SciTech Connect

    Altman, D.I.; Lich, L.L.; Powers, W.J. )

    1991-09-01

    The initial validation of the brief inhalation method to measure cerebral oxygen extraction fraction (OEF) with positron emission tomography (PET) was performed in non-human primates with predominantly normal cerebral oxygen metabolism (CMRO2). Sensitivity analysis by computer simulation, however, indicated that this method may be subject to increasing error as CMRO2 decreases. Accuracy of the method under pathologic conditions of reduced CMRO2 has not been determined. Since reduced CMRO2 values are observed frequently in newborn infants and in regions of ischemia and infarction in adults, we determined the accuracy of the brief inhalation method in non-human primates by comparing OEF measured with PET to OEF measured by arteriovenous oxygen difference (A-VO2) under pathologic conditions of reduced CMRO2 (0.27-2.68 ml 100 g-1 min-1). A regression equation of OEF(PET) = 1.07 × OEF(A-VO2) + 0.017 (r = 0.99, n = 12) was obtained. The absolute error in oxygen extraction measured with PET was small (mean 0.03 ± 0.04, range -0.03 to 0.12) and was independent of cerebral blood flow, cerebral blood volume, CMRO2, or OEF. The percent error was higher (19 ± 37), particularly when OEF is below 0.15. These data indicate that the brief inhalation method can be used for measurement of cerebral oxygen extraction and cerebral oxygen metabolism under pathologic conditions of reduced cerebral oxygen metabolism, with these limitations borne in mind.

  10. ZERO-VALENT IRON REMOVAL RATES OF AQUEOUS Cr(VI) MEASURED UNDER FLOW CONDITIONS

    SciTech Connect

    Kaplan, Daniel I.; Gilmore, Tyler J.

    2004-06-01

    The rate of Cr(VI) removal from the aqueous phase by zero-valent iron, Fe(0), was measured under flow conditions. The intent of this work was to generate removal rate coefficients applicable to the Reactive Well Technology, a groundwater remediation technology that replaces the sand in the filter pack of a conventional well with a reactive material, such as Fe(0). Dissolved Cr(VI) concentration, dissolved O2 concentration, and Eh data indicated that Cr(VI) removal from the aqueous phase was mass-transfer limited. All pseudo-first-order regression fits to the data were significant (P≤0.05); however, they did not capture many of the salient aspects of the data, including that the removal rate often decreased as contact time increased. As such, application of these rate coefficients to predict long-term Cr(VI) removal was compromised. The rate coefficients measured under flow conditions were comparable to those measured previously under batch conditions with significantly greater solution:solid ratios. Over the range of 20 to 100 wt-% Fe(0) in the column, there was little measurable change in the reaction kinetics. Thus, it may be possible to include sand in the reactive filter packs in the event it is necessary to increase filter pack porosity or to decrease the accumulation of secondary reaction products that may lead to filter pack plugging. Background water chemistry (0.2 M NaHCO3, distilled water, and a carbonate-dominated groundwater) had only marginal, if any, effects on the reaction rate coefficients. The reaction rates measured in this study indicated that an Fe(0) filter pack could be used to lower Cr(VI) concentrations by several orders of magnitude in a once-through mode of operation of the Reactive Well Technology.
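The pseudo-first-order treatment mentioned above amounts to fitting ln(C/C0) = -k·t by least squares. A sketch with synthetic concentration data (the real data, as the abstract notes, deviate from this model at long contact times):

```python
# Estimate a pseudo-first-order rate coefficient k from concentration-
# versus-time data, assuming C(t) = C0 * exp(-k t). Data are synthetic.
import math

def pseudo_first_order_k(times, concentrations):
    """Return k (1/time unit) from the slope of ln(C/C0) vs t."""
    c0 = concentrations[0]
    y = [math.log(c / c0) for c in concentrations]
    n = len(times)
    mt, my = sum(times) / n, sum(y) / n
    stt = sum((t - mt) ** 2 for t in times)
    sty = sum((t - mt) * (yi - my) for t, yi in zip(times, y))
    return -sty / stt  # slope of the semilog plot is -k

t = [0, 1, 2, 3, 4]                          # hours
c = [1.0 * math.exp(-0.5 * ti) for ti in t]  # exact decay with k = 0.5 per hour
print(round(pseudo_first_order_k(t, c), 3))
```

A systematic curvature in the semilog plot, as reported here, is exactly the signature that a single k does not describe the long-term behavior.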

  11. Investigation of Rayleigh-Taylor turbulence and mixing using direct numerical simulation with experimentally-measured initial conditions. I. Comparison to experimental data

    SciTech Connect

    Mueschke, N; Schilling, O

    2008-07-23

    A 1152 x 760 x 1280 direct numerical simulation (DNS) using initial conditions, geometry, and physical parameters chosen to approximate those of a transitional, small Atwood number Rayleigh-Taylor mixing experiment [Mueschke, Andrews and Schilling, J. Fluid Mech. 567, 27 (2006)] is presented. The density and velocity fluctuations measured just off of the splitter plate in this buoyantly unstable water channel experiment were parameterized to provide physically-realistic, anisotropic initial conditions for the DNS. The methodology for parameterizing the measured data and numerically implementing the resulting perturbation spectra in the simulation is discussed in detail. The DNS model of the experiment is then validated by comparing quantities from the simulation to experimental measurements. In particular, large-scale quantities (such as the bubble front penetration h_b and the mixing layer growth parameter α_b), higher-order statistics (such as velocity variances and the molecular mixing parameter θ), and vertical velocity and density variance spectra from the DNS are shown to be in favorable agreement with the experimental data. Differences between the quantities obtained from the DNS and from experimental measurements are related to limitations in the dynamic range of scales resolved in the simulation and other idealizations of the simulation model. This work demonstrates that a parameterization of experimentally-measured initial conditions can yield simulation data that quantitatively agree well with experimentally-measured low- and higher-order statistics in a Rayleigh-Taylor mixing layer. This study also provides resolution and initial-conditions implementation requirements needed to simulate a physical Rayleigh-Taylor mixing experiment. In Part II [Mueschke and Schilling, Phys. Fluids (2008)], other quantities not measured in the experiment are obtained from the DNS and discussed, such as the integral- and Taylor-scale Reynolds numbers.

  12. System for measuring oxygen consumption rates of mammalian cells in static culture under hypoxic conditions.

    PubMed

    Kagawa, Yuki; Miyahara, Hirotaka; Ota, Yuri; Tsuneda, Satoshi

    2016-01-01

    Estimating the oxygen consumption rates (OCRs) of mammalian cells in hypoxic environments is essential for designing and developing a three-dimensional (3-D) cell culture system. However, OCR measurements under hypoxic conditions are infrequently reported in the literature. Here, we developed a system for measuring OCRs at low oxygen levels. The system injects nitrogen gas into the environment and measures the oxygen concentration by an optical oxygen microsensor that consumes no oxygen. The developed system was applied to HepG2 cells in static culture. Specifically, we measured the spatial profiles of the local dissolved oxygen concentration in the medium, then estimated the OCRs of the cells. The OCRs, and also the pericellular oxygen concentrations, decreased nonlinearly as the oxygen partial pressure in the environment decreased from 19% to 1%. The OCRs also depended on the culture period and the matrix used for coating the dish surface. Using this system, we can precisely estimate the OCRs of various cell types under environments that mimic 3-D culture conditions, contributing crucial data for an efficient 3-D culture system design. PMID:26558344
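The OCR estimate in a setup like the one above follows from Fick's first law: at steady state, the oxygen flux through the medium toward the cell layer, J = D·dC/dz from the measured vertical concentration profile, equals the cellular uptake. All numbers below are illustrative assumptions, not the study's measurements:

```python
# Sketch: oxygen consumption rate of a static culture from a measured
# vertical O2 gradient in the medium (Fick's first law). All parameter
# values are assumed round numbers for illustration.

D_O2 = 3.0e-9    # m^2/s, assumed O2 diffusivity in culture medium at 37 C
area = 9.6e-4    # m^2, assumed growth area of the dish
n_cells = 1.0e6  # assumed cell count

# Assumed linear profile: dissolved O2 at two heights above the cell layer
c_upper, c_lower = 0.18, 0.10  # mol/m^3 at z = 1.0 mm and z = 0.1 mm
dz = 0.9e-3                    # m, separation of the two measurement points

flux = D_O2 * (c_upper - c_lower) / dz  # mol m^-2 s^-1 toward the cells
ocr_per_cell = flux * area / n_cells    # mol per cell per second
print(f"{ocr_per_cell:.2e}")
```

Because the optical microsensor consumes no oxygen, the profile itself is undisturbed by the measurement, which is what makes this flux-based estimate valid.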

  13. Time and Space Resolved Heat Transfer Measurements Under Nucleate Bubbles with Constant Heat Flux Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Myers, Jerry G.; Hussey, Sam W.; Yee, Glenda F.; Kim, Jungho

    2003-01-01

    Investigations into single bubble pool boiling phenomena are often complicated by the difficulties in obtaining time and space resolved information in the bubble region. This usually occurs because the heaters and diagnostics used to measure heat transfer data are often on the order of, or larger than, the bubble characteristic length or region of influence. This has contributed to the development of many different and sometimes contradictory models of pool boiling phenomena and dominant heat transfer mechanisms. Recent investigations by Yaddanapudi and Kim and Demiray and Kim have obtained time and space resolved heat transfer information at the bubble/heater interface under constant temperature conditions using a novel micro-heater array (10x10 array, each heater 100 microns on a side) that is semi-transparent and doubles as a measurement sensor. By using active feedback to maintain a state of constant temperature at the heater surface, they showed that the area of influence of bubbles generated in FC-72 was much smaller than predicted by standard models and that micro-conduction/micro-convection due to re-wetting dominated heat transfer effects. This study seeks to expand on the previous work by making time and space resolved measurements under bubbles nucleating on a micro-heater array operated under constant heat flux conditions. In the planned investigation, wall temperature measurements made under a single bubble nucleation site will be synchronized with high-speed video to allow analysis of the bubble energy removal from the wall.

  14. A methodology to determine margins by EPID measurements of patient setup variation and motion as applied to immobilization devices

    SciTech Connect

    Prisciandaro, Joann I.; Frechette, Christina M.; Herman, Michael G.; Brown, Paul D.; Garces, Yolanda I.; Foote, Robert L.

    2004-11-01

    Assessment of clinic- and site-specific margins is essential for the effective use of three-dimensional and intensity modulated radiation therapy. An electronic portal imaging device (EPID) based methodology is introduced which allows individual and population-based CTV-to-PTV margins to be determined and compared with traditional margins prescribed during treatment. This method was applied to a patient cohort receiving external beam head and neck radiotherapy under an IRB-approved protocol. Although the full study involved the use of an EPID-based method to assess the impact of (1) simulation technique, (2) immobilization, and (3) surgical intervention on inter- and intrafraction variations of individual and population-based CTV-to-PTV margins, the focus of the paper is on the technique. As an illustration, the methodology is utilized to examine the influence of two immobilization devices, the UON™ thermoplastic mask and the Type-S™ head/neck shoulder immobilization system, on margins. Daily through-port images were acquired for selected fields for each patient with an EPID. To analyze these images, simulation films or digitally reconstructed radiographs (DRRs) were imported into the EPID software. Up to five anatomical landmarks were identified and outlined by the clinician, and up to three of these structures were matched for each reference image. Once the individual-based errors were quantified, the patient results were grouped into populations by matched anatomical structures and immobilization device. The variation within each subgroup was quantified by calculating the systematic and random errors (Σ and σ). Individual patient margins were approximated as 1.65 times the individual-based random error and ranged from 1.1 to 6.3 mm (A-P) and 1.1 to 12.3 mm (S-I) for fields matched on skull and cervical structures, and 1.7 to 10.2 mm (L-R) and 2.0 to 13.8 mm (S-I) for supraclavicular fields. Population-based margins
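The error bookkeeping above can be sketched as follows. Per the usual portal-imaging convention (assumed here, not spelled out in the abstract), the population systematic error Σ is the standard deviation of each patient's mean setup error, the population random error σ is the root-mean-square of the per-patient standard deviations, and the paper's individual margin is 1.65 times the individual random error. The displacement data are invented:

```python
# Setup-error statistics from daily portal-image shifts (mm), per patient.
import statistics as st

def population_errors(per_patient_shifts_mm):
    """Return (Sigma, sigma): systematic and random population errors."""
    means = [st.mean(p) for p in per_patient_shifts_mm]
    sds = [st.stdev(p) for p in per_patient_shifts_mm]
    Sigma = st.stdev(means)                               # systematic error
    sigma = (sum(s ** 2 for s in sds) / len(sds)) ** 0.5  # random error (RMS)
    return Sigma, sigma

def individual_margin(shifts_mm):
    """The paper's individual margin: 1.65 x the patient's random error."""
    return 1.65 * st.stdev(shifts_mm)

patients = [[1.0, 2.0, 1.5, 2.5], [0.0, -1.0, 0.5, -0.5], [3.0, 2.0, 2.5, 3.5]]
Sigma, sigma = population_errors(patients)
print(round(individual_margin(patients[0]), 2))
```

The same per-axis calculation (A-P, S-I, L-R) produces the millimeter ranges quoted in the abstract.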

  15. Methodological Issues in the Validation of Implicit Measures: Comment on De Houwer, Teige-Mocigemba, Spruyt, and Moors (2009)

    ERIC Educational Resources Information Center

    Gawronski, Bertram; LeBel, Etienne P.; Peters, Kurt R.; Banse, Rainer

    2009-01-01

    J. De Houwer, S. Teige-Mocigemba, A. Spruyt, and A. Moors's normative analysis of implicit measures provides an excellent clarification of several conceptual ambiguities surrounding the validation and use of implicit measures. The current comment discusses an important, yet unacknowledged, implication of J. De Houwer et al.'s analysis, namely,…

  16. Mild solutions to a measure-valued mass evolution problem with flux boundary conditions

    NASA Astrophysics Data System (ADS)

    Evers, Joep H. M.; Hille, Sander C.; Muntean, Adrian

    2015-08-01

    We investigate the well-posedness and approximation of mild solutions to a class of linear transport equations on the unit interval [0, 1] endowed with a linear discontinuous production term, formulated in the space M([0, 1]) of finite Borel measures. Our working technique includes a detailed boundary layer analysis in terms of a semigroup representation of solutions in spaces of measures, able to cope with the passage to the singular limit where the thickness of the layer vanishes. We obtain not only a suitable concept of solutions to the chosen measure-valued evolution problem, but also derive convergence rates for the approximation procedure and gain insight into the structure of flux boundary conditions for the limit problem.

  17. Conditional Spin Squeezing via Quantum Non-demolition Measurements with an Optical Cycling Transition

    NASA Astrophysics Data System (ADS)

    Weiner, Joshua; Cox, Kevin; Norcia, Matthew; Bohnet, Justin; Chen, Zilong; Thompson, James

    2013-04-01

    We present experimental progress towards quantum non-demolition (QND) measurements of the collective pseudo-spin J_z composed of the maximal m_F hyperfine ground states of an ensemble of ~10^5 87Rb atoms confined in a low-finesse F = 710 optical cavity. Measuring the phase shift imposed by the atoms on a cavity probe field constitutes a QND measurement that can be used to prepare a conditionally spin squeezed state. By probing on a closed optical transition, we highly suppress both fundamental and technical noise due to Raman scattering compared to probing on an open transition. It may be possible to generate spin squeezed states with >10 dB enhancement in quantum phase estimation relative to the standard quantum limit. The resulting spin squeezed states may specifically enable magnetic field sensing beyond the standard quantum limit as well as broadly impact atomic sensors and tests of fundamental physics.

  18. Gated photon correlation spectroscopy for acoustical particle velocity measurements in free-field conditions.

    PubMed

    Koukoulas, Triantafillos; Piper, Ben; Theobald, Pete

    2013-03-01

    The measurement of acoustic pressure at a point in space using optical methods has been the subject of extensive research in airborne acoustics over the last four decades. The main driver is to reliably establish the acoustic pascal, thus allowing the calibration of microphones with standard and non-standard dimensions to be realized in an absolute and direct manner. However, the research work so far has mostly been limited to standing wave tubes. This Letter reports on the development of an optical system capable of measuring acoustic particle velocities in free-field conditions; agreement within less than 0.6 dB was obtained with standard microphone measurements during these initial experiments. PMID:23464122

  19. Temperature and dust profiles in Martian dust storm conditions retrieved from Mars Climate Sounder measurements

    NASA Astrophysics Data System (ADS)

    Kleinboehl, A.; Kass, D. M.; Schofield, J. T.; McCleese, D. J.

    2013-12-01

    Mars Climate Sounder (MCS) is a mid- and far-infrared thermal emission radiometer on board the Mars Reconnaissance Orbiter. It measures radiances in limb and nadir/on-planet geometry from which vertical profiles of atmospheric temperature, water vapor, dust and condensates can be retrieved in an altitude range from 0 to 80 km and with a vertical resolution of ~5 km. Due to the limb geometry used as the MCS primary observation mode, retrievals in conditions with high aerosol loading are challenging. We have developed several modifications to the MCS retrieval algorithm that will facilitate profile retrievals in high-dust conditions. Key modifications include a retrieval option that uses a surface pressure climatology if a pressure retrieval is not possible in high-dust conditions, an extension of aerosol retrievals to higher altitudes, and a correction to the surface temperature climatology. In conditions of a global dust storm, surface temperatures tend to be lower compared to standard conditions. Taking this into account using an adaptive value based on atmospheric opacity leads to improved fits to the radiances measured by MCS and improves the retrieval success rate. We present first results of these improved retrievals during the global dust storm in 2007. Based on the limb opacities observed during the storm, retrievals are typically possible above ~30 km altitude. Temperatures around 240 K are observed in the middle atmosphere at mid- and high southern latitudes after the onset of the storm. Dust appears to be nearly homogeneously mixed at lower altitudes. Significant dust opacities are detected at least up to 70 km altitude. During much of the storm, in particular at higher altitudes, the retrieved dust profiles closely resemble a Conrath profile.

  20. Measurement and modeling of CO2 diffusion coefficient in Saline Aquifer at reservoir conditions

    NASA Astrophysics Data System (ADS)

    Azin, Reza; Mahmoudy, Mohamad; Raad, Seyed; Osfouri, Shahriar

    2013-12-01

    Storage of CO2 in deep saline aquifers is a promising technique to mitigate global warming and reduce greenhouse gases (GHG). Correct measurement of diffusivity is essential for predicting the rate of transfer and the cumulative amount of trapped gas. Little information is available on the diffusion of GHG in saline aquifers. In this study, the diffusivity of CO2 into a saline aquifer sample taken from an oil field was measured and modeled. The equilibrium concentration of CO2 at the gas-liquid interface was determined using Henry's law. Experimental measurements were reported at temperature and pressure ranges of 32-50°C and 5900-6900 kPa, respectively. Results show that the diffusivity of CO2 varies between 3.52-5.98×10⁻⁹ m²/s for 5900 kPa and 5.33-6.16×10⁻⁹ m²/s for 6900 kPa initial pressure. Also, it was found that both pressure and temperature have a positive impact on the measured diffusion coefficient. Liquid swelling due to gas dissolution and variations in the gas compressibility factor as a result of pressure decay were found negligible. Measured diffusivities were used to model the physical system and develop the concentration profile of dissolved gas in the liquid phase. Results of this study provide unique measures of the CO2 diffusion coefficient in a saline aquifer at high pressure and temperature conditions, which can be applied in full-field studies of carbon capture and sequestration projects.
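The Henry's-law step mentioned above sets the boundary condition for the diffusion model: the equilibrium dissolved-CO2 concentration at the gas-liquid interface is C_eq = P / H. The Henry constant below is an assumed round number of the right order for CO2 in water near ambient temperature, not the study's brine-specific value:

```python
# Interface concentration from Henry's law, C_eq = P / H.
# H is an assumed illustrative value; real brines at reservoir
# temperature require a salinity- and temperature-corrected constant.

H_CO2 = 3.0   # kPa·m^3/mol, assumed Henry constant for CO2 in water
P = 6900.0    # kPa, upper end of the study's initial-pressure range

c_eq = P / H_CO2  # mol/m^3 of dissolved CO2 at the interface
print(c_eq)
```

This C_eq is then the fixed surface concentration in the diffusion equation whose solution is matched to the observed pressure decay to extract D. Note that at these pressures Henry's law is itself an approximation; fugacity corrections are often applied in full-field work.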

  1. Evaluation of a photoacoustic detector for water vapor measurements under simulated tropospheric/lower stratospheric conditions.

    PubMed

    Szakáll, M; Bozóki, Z; Kraemer, M; Spelten, N; Moehler, O; Schurath, U

    2001-12-15

    Although water vapor is one of the most important and certainly the most variable minor constituent of the atmosphere, accurate measurements of p(H2O) with high time resolution are difficult, particularly in the cold upper troposphere/lower stratosphere. This work demonstrates that a diode laser-based photoacoustic (PA) water vapor detector is a viable alternative to current water vapor sensors for airborne measurements. The PA system was compared with a high-quality frost point hygrometer (FPH) and with a Lyman-alpha hygrometer in the pressure range of 1000-100 hPa at frost point temperatures between 202 and 216 K. These conditions were simulated in a large environmental chamber for 14 h. Simultaneous measurements with the three instruments agreed within 6%. Nitric acid vapor interferes with the FPH measurements at low frost point temperatures but does not affect the other instruments. The sensitivity of the PA system is already sufficient for measurements in the upper troposphere, and straightforward improvements can extend its useful range above the tropopause. Rugged construction, extreme simplicity, small size, and potential for long-term automatic operation make the PA system potentially suitable for airborne measurements. PMID:11775165

  2. NIST System for Measuring the Directivity Index of Hearing Aids under Simulated Real-Ear Conditions.

    PubMed

    Wagner, Randall P

    2013-01-01

    The directivity index is a parameter that is commonly used to characterize the performance of directional hearing aids, and is determined from the measured directional response. Since this response is different for a hearing aid worn on a person as compared to when it is in a free field, directivity index measurements of hearing aids are usually done under simulated real-ear conditions. Details are provided regarding the NIST system for measuring the hearing aid directivity index under these conditions and how this system is used to implement a standardized procedure for performing such measurements. This procedure involves a sampling method that utilizes sound source locations distributed in a semi-aligned zone array on an imaginary spherical surface surrounding a standardized acoustical test manikin. The capabilities of the system were demonstrated over the frequency range of one-third-octave bands with center frequencies from 200 Hz to 8000 Hz through NIST participation in an interlaboratory comparison. This comparison was conducted between eight different laboratories of members of Working Group S3/WG48, Hearing Aids, established by Accredited Standards Committee S3, Bioacoustics, which is administered by the Acoustical Society of America and accredited by the American National Standards Institute. Directivity measurements were made for a total of six programmed memories in two different hearing aids and for the unaided manikin with the manikin right pinna accompanying the aids. Omnidirectional, cardioid, and bidirectional response patterns were measured. Results are presented comparing the NIST data with the reference values calculated from the data reported by all participating laboratories. PMID:26401425
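The directivity index underlying measurements like those above is 10·log10 of the on-axis squared response divided by the solid-angle-weighted mean squared response over the sphere. The sketch below simplifies the NIST zone-array sampling to azimuthally symmetric polar rings weighted by sin(θ); the grid and responses are synthetic:

```python
# Directivity index (dB) from sampled directional responses.
# Each sample is (polar angle in radians, |p| response magnitude);
# sin(theta) weighting approximates the solid-angle average for a
# response assumed symmetric in azimuth.
import math

def directivity_index_db(on_axis_p, samples):
    num = sum(math.sin(th) * (p ** 2) for th, p in samples)
    den = sum(math.sin(th) for th, p in samples)
    mean_sq = num / den
    return 10.0 * math.log10(on_axis_p ** 2 / mean_sq)

# Sanity check: an omnidirectional response (|p| = 1 everywhere)
# must give DI = 0 dB.
omni = [(math.radians(th), 1.0) for th in range(5, 180, 10)]
print(round(directivity_index_db(1.0, omni), 6))
```

A cardioid pattern, |p| = (1 + cos θ)/2, evaluated on the same grid gives a DI near the textbook 4.8 dB, which is a useful check when validating a measurement system against known patterns.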

  4. Emerging technologies to measure neighborhood conditions in public health: implications for interventions and next steps.

    PubMed

    Schootman, M; Nelson, E J; Werner, K; Shacham, E; Elliott, M; Ratnapradipa, K; Lian, M; McVay, A

    2016-01-01

    Adverse neighborhood conditions play an important role beyond individual characteristics. There is increasing interest in identifying specific characteristics of the social and built environments adversely affecting health outcomes. Most research has assessed aspects of such exposures via self-reported instruments or census data. Potential threats in the local environment may be subject to short-term changes that can only be measured with more nimble technology. The advent of new technologies may offer new opportunities to obtain geospatial data about neighborhoods that circumvent the limitations of traditional data sources. This overview describes the utility, validity and reliability of selected emerging technologies to measure neighborhood conditions for public health applications. It also describes next steps for future research and opportunities for interventions. The paper presents an overview of the literature on measurement of the built and social environment in public health (Google Street View, webcams, crowdsourcing, remote sensing, social media, unmanned aerial vehicles, and lifespace) and location-based interventions. Emerging technologies such as Google Street View, social media, drones, webcams, and crowdsourcing may serve as effective and inexpensive tools to measure the ever-changing environment. Georeferenced social media responses may help identify where to target intervention activities, and also passively evaluate their effectiveness. Future studies should measure exposure across key time points during the life-course as part of the exposome paradigm and integrate various types of data sources to measure environmental contexts. By harnessing these technologies, public health research can not only monitor populations and the environment, but also intervene using novel strategies to improve public health. PMID:27339260

  5. METHOD FOR SIMULTANEOUS 90SR AND 137CS IN-VIVO MEASUREMENTS OF SMALL ANIMALS AND OTHER ENVIRONMENTAL MEDIA DEVELOPED FOR THE CONDITIONS OF THE CHERNOBYL EXCLUSION ZONE

    SciTech Connect

    Farfan, E.; Jannik, T.

    2011-10-01

    To perform in vivo simultaneous measurements of the ⁹⁰Sr and ¹³⁷Cs content in the bodies of animals living in the Chernobyl Exclusion Zone (ChEZ), an appropriate method and equipment were developed and installed in a mobile gamma beta spectrometry laboratory. This technique was designed for animals of relatively small sizes (up to 50 g). The ⁹⁰Sr content is measured by a beta spectrometer with a 0.1 mm thick scintillation plastic detector. The spectrum processing takes into account the fact that the measured object is 'thick-layered' and contains a comparable quantity of ¹³⁷Cs, which is a characteristic condition of the ChEZ. The ¹³⁷Cs content is measured by a NaI scintillation detector that is part of the combined gamma beta spectrometry system. For environmental research performed in the ChEZ, the advantages of this method and equipment (rapid measurements, capability to measure live animals directly in their habitat, and the capability of simultaneous ⁹⁰Sr and ¹³⁷Cs measurements) far outweigh the existing limitations (considerations must be made for background radiation and the animal size, skeletal shape and body mass). The accuracy of these in vivo measurements is shown to be consistent with standard spectrometric and radiochemical methods. Apart from the in vivo measurements, the proposed methodology, after a very simple upgrade that is also described in the article, works even more accurately with samples of other media, such as soil and plants.

  6. Measuring performance in off-patent drug markets: a methodological framework and empirical evidence from twelve EU Member States.

    PubMed

    Kanavos, Panos

    2014-11-01

    This paper develops a methodological framework to help evaluate the performance of generic pharmaceutical policies post-patent expiry or after loss of exclusivity in non-tendering settings, comprising five indicators (generic availability, time delay to and speed of generic entry, number of generic competitors, price developments, and generic volume share evolution) and proposes a series of metrics to evaluate performance. The paper subsequently tests this framework across twelve EU Member States (MS) by using IMS data on 101 patent expired molecules over the 1998-2010 period. Results indicate that significant variation exists in generic market entry, price competition and generic penetration across the study countries. Size of a geographical market is not a predictor of generic market entry intensity or price decline. Regardless of geographic or product market size, many off patent molecules lack generic competitors two years after loss of exclusivity. The ranges in each of the five proposed indicators suggest, first, that there are numerous factors--including institutional ones--contributing to the success of generic entry, price decline and market penetration and, second, MS should seek a combination of supply and demand-side policies in order to maximise cost-savings from generics. Overall, there seems to be considerable potential for faster generic entry, uptake and greater generic competition, particularly for molecules at the lower end of the market. PMID:25201433

  7. Validity and reliability of the T-Scan(®) III for measuring force under laboratory conditions.

    PubMed

    Cerna, M; Ferreira, R; Zaror, C; Navarro, P; Sandoval, P

    2015-07-01

    Although measuring bite force is an important indicator of the health of the masticatory system, few commercially available transducers have been validated for routine clinical use. The T-Scan® III Occlusal Analysis System allows the bite force distribution to be recorded, indicating its relative intensity and occlusal timing. Nevertheless, even fewer studies have evaluated the validity and reliability of this latest generation of the T-Scan® occlusal analysis system. To determine the validity and reliability of the T-Scan® III system when measuring total absolute bite force under laboratory conditions, known forces were applied to 18 T-Scan® III sensors, which were classified into two groups differentiated by their production series. Both Lin's concordance correlation coefficient (CCC) and the intra-class correlation coefficient (ICC) were used to assess the system's reliability and validity. Considering all the sensors studied, a substantial (Lin's CCC 0.969) and a very good (ICC 0.994) level of reliability were obtained. When evaluating the validity of the system, poor (Lin's CCC 0.530) and moderate (ICC 0.693) agreement were obtained. The main factor that negatively influenced the validity of the T-Scan® III under these study conditions was the significant difference in the behaviour of the two sensor groups. The T-Scan® III showed a high degree of reliability when used to perform consecutive measurements. However, the system showed an insufficient degree of validity for measuring absolute force when estimating total occlusal force under laboratory conditions. PMID:25727489
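Lin's concordance correlation coefficient used above differs from Pearson's r in that it penalizes both scatter and systematic bias: CCC = 2·s_xy / (s_x² + s_y² + (mean_x − mean_y)²). A sketch with invented force data, not the T-Scan III measurements:

```python
# Lin's concordance correlation coefficient between applied (reference)
# and measured forces. Population (n-denominator) moments are used, as
# in Lin's original definition. Data are invented.

def lins_ccc(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx2 = sum((xi - mx) ** 2 for xi in x) / n
    sy2 = sum((yi - my) ** 2 for yi in y) / n
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n
    return 2 * sxy / (sx2 + sy2 + (mx - my) ** 2)

applied = [50.0, 100.0, 150.0, 200.0]    # N, known loads
measured = [52.0, 101.0, 149.0, 203.0]   # N, sensor readings
print(round(lins_ccc(applied, measured), 4))
```

A device can have near-perfect Pearson correlation yet a low CCC if it is consistently biased, which is why validity and reliability can diverge, as this study found.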

  8. Roundhouse (RND) Mountain Top Research Site: Measurements and Uncertainties for Winter Alpine Weather Conditions

    NASA Astrophysics Data System (ADS)

    Gultepe, I.; Isaac, G. A.; Joe, P.; Kucera, P. A.; Theriault, J. M.; Fisico, T.

    2014-01-01

    The objective of this work is to better understand and summarize the mountain meteorological observations collected during the Science of Nowcasting Winter Weather for the Vancouver 2010 Olympics and Paralympics (SNOW-V10) project, which was supported by the Fog Remote Sensing and Modeling (FRAM) project. The Roundhouse (RND) meteorological station was located 1,856 m above sea level, a site subject to extreme winter weather conditions. Below this site, there were three additional observation sites at 1,640, 1,320, and 774 m. These four stations provided some or all of the following measurements at 1 min resolution: precipitation rate (PR) and amount, cloud/fog microphysics, 3D wind speed (horizontal wind speed, U_h; vertical air velocity, w_a), visibility (Vis), infrared (IR) and shortwave (SW) radiative fluxes, temperature (T) and relative humidity with respect to water (RHw), and aerosol observations. In this work, comparisons are made to assess the uncertainties and variability in the measurements of Vis, RHw, T, PR, and wind for various winter weather conditions. Ground-based cloud imaging probe (GCIP) measurements of snow particles and profiling microwave radiometer (PMWR) data have also been used to assess icing conditions. Overall, the conclusions suggest that uncertainties in the measurements of Vis, PR, T, and RH can be as large as 50, >60, 50, and >20 %, respectively, and these numbers may increase depending on U_h, T, Vis, and PR magnitude. Variability of observations along the Whistler Mountain slope (~500 m) suggested that, to verify the models, model space resolution should be better than 100 m and time scales better than 1 min. It is also concluded that differences between observed and model-based parameters are strongly related to a model's capability to accurately predict liquid water content (LWC), PR, and RHw over complex topography.

  9. Service User- and Carer-Reported Measures of Involvement in Mental Health Care Planning: Methodological Quality and Acceptability to Users

    PubMed Central

    Gibbons, Chris J.; Bee, Penny E.; Walker, Lauren; Price, Owen; Lovell, Karina

    2014-01-01

    Background: Increasing service user and carer involvement in mental health care planning is a key healthcare priority but one that is difficult to achieve in practice. To better understand and measure user and carer involvement, it is crucial to have measurement questionnaires that are both psychometrically robust and acceptable to the end user. Methods: We conducted a systematic review using the terms “care plan$,” “mental health,” “user perspective$,” and “user participation” and their linguistic variants as search terms. Databases were searched from inception to November 2012, with an update search at the end of September 2014. We included any articles that described the development, validation, or use of user- and/or carer-reported outcome measures of involvement in mental health care planning. We assessed the psychometric quality of each instrument using the “Evaluating the Measurement of Patient-Reported Outcomes” (EMPRO) criteria. Acceptability of each instrument was assessed using novel criteria developed in consultation with a mental health service user and carer consultation group. Results: We identified eleven papers describing the use, development, and/or validation of nine user/carer-reported outcome measures. Psychometric properties were sparsely reported, and the questionnaires met few service user/carer-nominated attributes for acceptability. Where reported, basic psychometric statistics were of good quality, indicating that some measures may perform well if subjected to more rigorous psychometric tests. The majority were deemed too long for use in practice. Discussion: Multiple instruments are available to measure user/carer involvement in mental health care planning, but they are either of poor quality or poorly described. Existing measures cannot be considered psychometrically robust by modern standards, and cannot currently be recommended for use. Our review has identified an important knowledge gap, and an urgent need to

  10. Measurement of thermal regain in duct systems located in partially conditioned buffer spaces. Informal report

    SciTech Connect

    Andrews, J.W.

    1994-07-01

    Thermal losses from duct systems have been shown to be a significant fraction of the heating or cooling energy delivered by the space-conditioning equipment. However, when the ducts are located in a partially conditioned buffer space such as a basement, a portion of these losses is effectively regained through system interactions with the building. This paper presents two methods of measuring this regain effect. One is based on the relative thermal resistances between the conditioned space and the buffer space, on the one hand, and between the buffer space and the outside, on the other. The second method is based on a measured drop in the buffer-space temperature when steps are taken to reduce the duct losses. The second method is compared with results of an extensive research project published in a major professional society handbook. The thermal regain fraction using the drop in basement temperature was found to be 0.68, while that obtained from an analysis of the system performance data, without using the basement temperature, was 0.59.
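The two measures can be sketched numerically. This is a minimal illustration, not the paper's exact formulation: it assumes steady-state heat flow, so the resistance-based regain reduces to a conductance divider between the buffer space's two heat-loss paths, and the temperature-drop method assumes the buffer-to-conditioned-space conductance is known.

```python
def regain_from_resistances(r_cb, r_bo):
    """Fraction of duct loss returned to the conditioned space, assuming
    steady state: heat deposited in the buffer splits between the
    conditioned-space path (resistance r_cb) and the outdoor path (r_bo)
    in proportion to their conductances."""
    return (1.0 / r_cb) / (1.0 / r_cb + 1.0 / r_bo)

def regain_from_temperature_drop(u_cb, delta_t_buffer, duct_loss_reduction):
    """Regain fraction inferred from the buffer-space temperature drop
    (delta_t_buffer, K) observed after duct losses are reduced by
    duct_loss_reduction (W); u_cb is the assumed buffer-to-conditioned-space
    conductance (W/K)."""
    return u_cb * delta_t_buffer / duct_loss_reduction
```

With illustrative resistances r_cb = 2 and r_bo = 3 (arbitrary units), the first estimate is 0.6, in the same range as the 0.59-0.68 fractions reported above.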

  11. Measuring and modeling maize evapotranspiration under plastic film-mulching condition

    NASA Astrophysics Data System (ADS)

    Li, Sien; Kang, Shaozhong; Zhang, Lu; Ortega-Farias, Samuel; Li, Fusheng; Du, Taisheng; Tong, Ling; Wang, Sufen; Ingman, Mark; Guo, Weihua

    2013-10-01

    Plastic film-mulching techniques have been widely used over a variety of agricultural crops for saving water and improving yield. Accurate estimation of crop evapotranspiration (ET) under the film-mulching condition is critical for optimizing crop water management. After taking the mulching effect on soil evaporation (Es) into account, our study adjusted the original Shuttleworth-Wallace model (MSW) for estimating maize ET and Es under the film-mulching condition. Maize ET and Es, measured by eddy covariance and micro-lysimeter methods respectively during 2007 and 2008, were used to validate the performance of the Penman-Monteith (PM), the original Shuttleworth-Wallace (SW), and the MSW models in arid northwest China. Results indicate that all three models significantly overestimated ET during the initial crop stage in both years, which may be due to the Jarvis model underestimating canopy resistance under the drought stress of that stage. For the entire experimental period, the SW model overestimated half-hourly maize ET by 17% compared with the eddy covariance method (ETEC) and overestimated daily Es by 241% compared with the micro-lysimeter measurements (EL), while the PM model only underestimated daily maize ET by 6%, and the MSW model only underestimated half-hourly maize ET by 2% and Es by 7% over the whole period. Thus the PM and MSW models significantly improved the accuracy over the original SW model and can be used to estimate ET and Es under the film-mulching condition.
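For orientation, the single-layer Penman-Monteith approach referenced above can be illustrated with the standard FAO-56 daily reference-ET form. This is the generic textbook formula, not the study's maize-specific parameterization; inputs are net radiation rn and soil heat flux g in MJ m-2 day-1, 2 m wind speed u2 in m/s, mean air temperature in deg C, mean relative humidity in %, and station elevation in m.

```python
import math

def fao56_reference_et(t_mean, rn, g, u2, rh_mean, elev=0.0):
    """Daily FAO-56 Penman-Monteith reference evapotranspiration (mm/day)."""
    # saturation and actual vapour pressure (kPa)
    es = 0.6108 * math.exp(17.27 * t_mean / (t_mean + 237.3))
    ea = es * rh_mean / 100.0
    # slope of the saturation vapour pressure curve (kPa/degC)
    delta = 4098.0 * es / (t_mean + 237.3) ** 2
    # psychrometric constant from elevation-adjusted pressure (kPa/degC)
    p = 101.3 * ((293.0 - 0.0065 * elev) / 293.0) ** 5.26
    gamma = 0.000665 * p
    num = (0.408 * delta * (rn - g)
           + gamma * 900.0 / (t_mean + 273.0) * u2 * (es - ea))
    den = delta + gamma * (1.0 + 0.34 * u2)
    return num / den
```

For example, a mild summer day (20 deg C, rn = 15 MJ m-2 day-1, u2 = 2 m/s, 60% humidity) yields roughly 5 mm/day, and the estimate falls as humidity rises.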

  12. Measurement of Survival Time in Brachionus Rotifers: Synchronization of Maternal Conditions.

    PubMed

    Kaneko, Gen; Yoshinaga, Tatsuki; Gribble, Kristin E; Welch, David M; Ushio, Hideki

    2016-01-01

    Rotifers are microscopic cosmopolitan zooplankton used as models in ecotoxicological and aging studies due to several advantages, such as their short lifespan, ease of culture, and parthenogenesis, which enables clonal culture. However, caution is required when measuring their survival time, as it is affected by maternal age and maternal feeding conditions. Here we provide a protocol for powerful and reproducible measurement of survival time in Brachionus rotifers following careful synchronization of culture conditions over several generations. Empirically, poor synchronization results in early mortality and a gradual decrease in survival rate, and thus in weak statistical power. Indeed, under such conditions, calorie restriction (CR) failed to significantly extend the lifespan of B. plicatilis, although CR-induced longevity has been demonstrated with well-synchronized rotifer samples in past and present studies. This protocol is likely also useful for other invertebrate models, including the fruitfly Drosophila melanogaster and the nematode Caenorhabditis elegans, because maternal age effects have also been reported in these species. PMID:27500471
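Once cohorts are synchronized, survival data of this kind are usually summarized with a Kaplan-Meier curve. A minimal pure-Python estimator for fully observed (uncensored) lifespans, not tied to any particular rotifer dataset:

```python
def kaplan_meier(death_days):
    """Kaplan-Meier survival curve for complete (uncensored) lifespans.

    Returns (day, survival_probability) pairs, one per distinct day on
    which at least one death occurred."""
    at_risk = len(death_days)
    s = 1.0
    curve = []
    for day in sorted(set(death_days)):
        deaths = death_days.count(day)
        s *= 1.0 - deaths / at_risk   # conditional survival past this day
        at_risk -= deaths
        curve.append((day, s))
    return curve
```

For instance, lifespans [2, 2, 3] give survival 1/3 after day 2 and 0 after day 3; two synchronized cohorts can then be compared curve against curve.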

  13. Multisensor System for Isotemporal Measurements to Assess Indoor Climatic Conditions in Poultry Farms

    PubMed Central

    Bustamante, Eliseo; Guijarro, Enrique; García-Diego, Fernando-Juan; Balasch, Sebastián; Hospitaler, Antonio; Torres, Antonio G.

    2012-01-01

    The rearing of poultry for meat production (broilers) is an agricultural food industry with high relevance to the economy and development of some countries. Periodic episodes of extreme climatic conditions during the summer season can cause high mortality among birds, resulting in economic losses. In this context, ventilation systems within poultry houses play a critical role in ensuring appropriate indoor climatic conditions. The objective of this study was to develop a multisensor system to evaluate the design of the ventilation system in broiler houses. A measurement system equipped with three types of sensors (air velocity, temperature, and differential pressure) was designed and built. The system consisted of a laptop, a data acquisition card, a multiplexer module, and a set of 24 air temperature, 24 air velocity, and two differential pressure sensors. The system was able to acquire up to 128 signals simultaneously at 5 second intervals. The multisensor system was calibrated under laboratory conditions and was then tested in the field. Field tests were conducted in a commercial broiler farm under four different pressure and ventilation scenarios in two sections of the building. The calibration curves obtained under laboratory conditions showed similar regression coefficients among the temperature, air velocity, and pressure sensors and a high goodness of fit (R2 = 0.99) with the reference. Under field conditions, the multisensor system handled a high number of input signals from different locations with minimal internal delay in acquiring signals. The variation among air velocity sensors was not significant. The developed multisensor system integrated calibrated sensors of temperature, air velocity, and differential pressure and operated successfully under different conditions in a mechanically ventilated broiler farm. This system can be used to obtain quasi-instantaneous fields of air velocity and temperature, as well as differential pressure.
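The laboratory calibration step (regressing each sensor against a reference and checking R2 close to 0.99) amounts to an ordinary least-squares line fit. A sketch with made-up readings, purely for illustration:

```python
def linear_calibration(raw, ref):
    """Least-squares calibration of raw sensor readings against a
    reference: returns (slope, intercept, r_squared)."""
    n = len(raw)
    mx = sum(raw) / n
    my = sum(ref) / n
    sxx = sum((x - mx) ** 2 for x in raw)
    sxy = sum((x - mx) * (y - my) for x, y in zip(raw, ref))
    slope = sxy / sxx
    intercept = my - slope * mx
    # coefficient of determination of the fitted line
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(raw, ref))
    ss_tot = sum((y - my) ** 2 for y in ref)
    return slope, intercept, 1.0 - ss_res / ss_tot
```

Each of the 50 sensors would get its own (slope, intercept) pair, applied to raw counts before the field campaign.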

  14. Experimental Methodology for Determining Turbomachinery Blade Damping Using Magnetic Bearing Excitation and Non-Contacting Optical Measurements

    NASA Technical Reports Server (NTRS)

    Provenza, Andrew J.; Duffy, Kirsten P.

    2010-01-01

    Experiments to determine the effects of turbomachinery fan blade damping concepts, such as passively shunted piezoelectric materials, on blade response are ongoing at the NASA Glenn Research Center. A vertical rotor is suspended and excited with active magnetic bearings (AMBs), usually in a vacuum chamber to eliminate aerodynamic forces. Electromagnetic rotor excitation is superimposed onto the rotor's PD-controlled support and can be fixed to either a stationary or rotating frame of reference. The rotor speed is controlled with an air turbine system. Blade vibrations are measured using optical probes as part of a Non-Contacting Stress Measurement System (NSMS). Damping is calculated from these measurements. Accurate damping measurements can be difficult to obtain with this experimental setup, and some of the details required to obtain quality results are nontrivial. The intent of this paper is to present those details.
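The abstract does not state which damping formula is used, but a common way to extract damping from ring-down (free decay) amplitude data such as optical blade-deflection measurements is the logarithmic decrement of successive vibration peaks; a sketch under that assumption:

```python
import math

def damping_ratio_from_peaks(peaks):
    """Viscous damping ratio from successive free-decay peak amplitudes,
    via the logarithmic decrement delta = ln(x0/xN)/N."""
    n = len(peaks) - 1
    delta = math.log(peaks[0] / peaks[-1]) / n
    return delta / math.sqrt(4.0 * math.pi ** 2 + delta ** 2)
```

Averaging the decrement over several peaks, as done here, reduces sensitivity to noise on any single peak, which matters for the low damping levels typical of fan blades.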

  15. Laboratory measurements of basalts electrical resistivity under deep oceanic crustal conditions

    NASA Astrophysics Data System (ADS)

    Violay, M. E.; Gibert, B.; Azais, P.; Pezard, P. A.; Flovenz, O. G.; Asmundsson, R.

    2009-12-01

    For sixty years, electrical resistivity soundings have been used to explore geothermal resources in Iceland. They have generally revealed two zones of high electrical conductivity, one at shallow depths (Flovenz et al., 1985) and another at 10-30 km depth (Beblo and Björnsson, 1978). The interpretation of these conductive zones in terms of composition and in-situ physical conditions is often ambiguous, as various parameters can explain the observations, such as temperature, partial melting, mineralogical changes, and the type of pore fluid. Accurate interpretation of resistivity data for geothermal exploration requires laboratory measurements of electrical conductivity performed on rock samples under different conditions. We present here a method to measure the electrical conductivity of oceanic crustal rocks under deep crustal conditions, i.e., at temperatures up to 600°C, confining pressures up to 200 MPa, and pore fluid pressures up to 50 MPa. The method has been developed in an internally heated, gas-pressure apparatus (Paterson press). Electrical conductivity is measured on large cylindrical samples (15 to 22 mm in diameter and 10 to 15 mm in length) in a two-parallel-electrode geometry. Such experiments require that the fluid-saturated sample be sleeved in an impermeable and deformable jacket that separates the confining pressure medium (high-pressure argon) from the pore-fluid-saturated sample. At temperatures above 200°C a metal sleeve must be used, although it induces high leakage currents that could affect the electrical measurements. The leakage currents are reduced by adding two guard-ring parallel electrodes (Glover, 1995). The electrical impedance of basalt has been measured over a frequency range from 10^-1 to 10^6 Hz. Five different types of low-porosity basalts were selected to cover a range of alteration grades, from albitic to granulite facies. Application of this method will provide data on the electrical conductivity of fresh and altered basalts.
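The conversion from a measured two-electrode resistance to bulk resistivity is purely geometric (rho = R * A / L); a minimal sketch for the cylindrical sample dimensions quoted above, with illustrative values:

```python
import math

def resistivity_two_electrode(resistance_ohm, diameter_m, length_m):
    """Bulk resistivity (ohm*m) of a cylindrical sample measured between
    two parallel end electrodes: rho = R * A / L."""
    area = math.pi * (diameter_m / 2.0) ** 2
    return resistance_ohm * area / length_m
```

A 20 mm diameter, 10 mm long sample reading 1 kOhm gives about 31.4 ohm*m; conductivity is simply the reciprocal, which is the quantity compared with field soundings.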

  16. Methodological issues in the validation of implicit measures: comment on De Houwer, Teige-Mocigemba, Spruyt, and Moors (2009).

    PubMed

    Gawronski, Bertram; Lebel, Etienne P; Peters, Kurt R; Banse, Rainer

    2009-05-01

    J. De Houwer, S. Teige-Mocigemba, A. Spruyt, and A. Moors's normative analysis of implicit measures provides an excellent clarification of several conceptual ambiguities surrounding the validation and use of implicit measures. The current comment discusses an important, yet unacknowledged, implication of J. De Houwer et al.'s analysis, namely, that investigations addressing the proposed implicitness criterion (i.e., does the relevant psychological attribute influence measurement outcomes in an automatic fashion?) will be susceptible to fundamental misinterpretations if they are conducted independently of the proposed what criterion (i.e., is the measurement outcome causally produced by the psychological attribute the measurement procedure was designed to assess?). As a solution, it is proposed that experimental validation studies should be combined with a correlational approach in order to determine whether a given manipulation influenced measurement scores via variations in the relevant psychological attribute or via secondary sources of systematic variance. PMID:19379019

  17. Conditioning-induced attentional bias for face stimuli measured with the emotional Stroop task.

    PubMed

    Lee, Tae-Ho; Lim, Seung-Lark; Lee, Kanghee; Kim, Hyun-Taek; Choi, June-Seek

    2009-02-01

    People with anxiety disorders display attentional bias toward threat-related objects. Using classical fear conditioning, the authors investigated the possible source of such bias in normal participants. Following differential fear conditioning, in which an angry male or female face (conditioned stimulus: CS+) was paired with a mild electric finger shock (unconditioned stimulus: US) while the angry face of the other gender and all other facial expressions remained unpaired (CS-), an emotional Stroop task was administered. In the Stroop task, participants were required to identify the color of the facial stimuli (red, green, blue, or yellow). Response latency was significantly longer for the CS+ angry face than for the other, unpaired facial stimuli (CS-). Furthermore, this acquired attentional bias was positively correlated with the level of trait anxiety measured before conditioning and with the degree of self-reported aversiveness of the US. Our results demonstrate that attentional bias can be induced in normal individuals through a simple associative learning procedure, and that its acquisition is modulated by the level of trait anxiety and the perceived fear of the aversive US. PMID:19186927

  18. Method for Pre-Conditioning a Measured Surface Height Map for Model Validation

    NASA Technical Reports Server (NTRS)

    Sidick, Erkin

    2012-01-01

    This software allows one to up-sample or down-sample a measured surface map for model validation, not only without introducing any re-sampling errors, but also while eliminating existing measurement noise and measurement errors. Because the re-sampling of a surface map is accomplished using analytical expressions of Zernike polynomials and a power spectral density model, it does not introduce the aliasing and interpolation errors produced by conventional interpolation and FFT-based (fast-Fourier-transform-based) spatial-filtering methods. The new method also automatically eliminates measurement noise and other measurement errors such as artificial discontinuities. The development cycle of an optical system, such as a space telescope, includes, but is not limited to, the following two steps: (1) deriving requirements or specifications on the optical quality of individual optics before they are fabricated, through optical modeling and simulations, and (2) validating the optical model using the measured surface height maps after all optics are fabricated. There are a number of computational issues related to model validation, one of which is the "pre-conditioning" or pre-processing of the measured surface maps before using them in a model-validation software tool. This software addresses the following issues: (1) up- or down-sampling a measured surface map to match the gridded data format of a model-validation tool, and (2) eliminating the surface measurement noise or measurement errors so that the resulting surface height map is continuous or smoothly varying. So far, the preferred method for re-sampling a surface map has been two-dimensional interpolation. The main problem with this method is that the same pixel can take different values depending on which interpolation method is selected, such as the "nearest," "linear," "cubic," or "spline" fitting in Matlab. The conventional, FFT-based spatial filtering method used to
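The Zernike half of this re-sampling idea can be sketched: fit a few low-order unit-disk Zernike terms to the height samples by least squares, then evaluate the analytic fit on any new grid, so no interpolation kernel (and hence no kernel-dependent pixel value) is involved. This toy version uses only four terms (piston, two tilts, defocus) and a hand-rolled solver; the actual software also models residual roughness with a PSD term, which is omitted here.

```python
def zernike_basis(x, y):
    # piston, x-tilt, y-tilt, defocus (low-order unit-disk Zernike terms)
    return [1.0, x, y, 2.0 * (x * x + y * y) - 1.0]

def solve(a, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (m[i][n] - sum(m[i][j] * x[j] for j in range(i + 1, n))) / m[i][i]
    return x

def fit_zernike(samples):
    """Least-squares Zernike coefficients from (x, y, height) samples
    via the normal equations."""
    k = 4
    ata = [[0.0] * k for _ in range(k)]
    atb = [0.0] * k
    for x, y, h in samples:
        phi = zernike_basis(x, y)
        for i in range(k):
            atb[i] += phi[i] * h
            for j in range(k):
                ata[i][j] += phi[i] * phi[j]
    return solve(ata, atb)

def evaluate(coeffs, x, y):
    """Analytic surface height at (x, y) from the fitted coefficients."""
    return sum(c * p for c, p in zip(coeffs, zernike_basis(x, y)))
```

Because the fit is analytic, evaluating it on a 2x finer or coarser grid reproduces the underlying smooth surface exactly, while any noise not captured by the basis is left out of the result.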

  19. The Deflection Plate Analyzer: A Technique for Space Plasma Measurements Under Highly Disturbed Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H., Jr.; Dutton, Ken; Martinez, Nelson; Smith, Dennis; Stone, Nobie H.

    2003-01-01

    A technique has been developed to measure the characteristics of space plasmas under highly disturbed conditions; e.g., non-Maxwellian plasmas with strong drifting populations and plasmas contaminated by spacecraft outgassing. The present method extends the capabilities of the Differential Ion Flux Probe (DIFP) to include a mass measurement that does not require either high voltage or contamination-sensitive devices such as channeltron electron multipliers or microchannel plates. This reduces the complexity and expense of instrument fabrication, testing, and integration of flight hardware as compared to classical mass analyzers. The new instrument design is called the Deflection Plate Analyzer (DPA) and can deconvolve multiple ion streams and analyze each stream for ion flux intensity (density), velocity (including direction of motion), mass, and temperature (or energy distribution). The basic functionality of the DPA is discussed. The performance characteristics of a flight instrument as built for an electrodynamic tether mission, the Propulsive Small Expendable Deployer System (ProSEDS), and the instrument's role in measuring key experimental conditions are also discussed.

  20. A method to measure winter precipitation and sublimation under global warming conditions

    NASA Astrophysics Data System (ADS)

    Herndl, Markus; Slawitsch, Veronika; von Unold, Georg

    2016-04-01

    Winter precipitation and snow sublimation are fundamental components of the alpine moisture budget. Much work has been done to study these processes and their important contribution to the annual water balance. Due to the above-average sensitivity of the alpine region to climate change, a change in the importance and magnitude of these water balance parameters can be expected. To determine these effects, a lysimeter facility enclosed in an open-field climate manipulation experiment was established in 2015 at AREC Raumberg-Gumpenstein, which is able to measure winter precipitation and sublimation under global warming conditions. In this facility, six monolithic lysimeters are equipped with a snow cover monitoring system, which automatically separates the snow cover above the lysimeter from the surrounding snow cover. Three of these lysimeters were exposed to a +3°C scenario and three to ambient conditions. Weight data are recorded every minute, making it possible to obtain high-resolution information about the water balance parameters in winter. First results over two snow event periods showed that the system can measure winter precipitation and sublimation very accurately, especially in comparison with other measurement systems and commonly used models. First trends also confirm that higher winter temperatures may affect snow water equivalent and snow cover duration. With more data collected over the coming years, it will be possible to quantify the influence of global warming on water balance parameters during the winter periods.
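The sublimation signal itself comes straight from the lysimeter mass balance: during a precipitation-free, drainage-free interval with continuous snow cover, the weight loss per unit area is the sublimated water depth (1 kg m-2 equals 1 mm). A hedged sketch of that bookkeeping, not the facility's actual processing chain:

```python
def sublimation_rate(weights_kg, area_m2, interval_min):
    """Mean sublimation rate (mm/h water equivalent) from a monotonically
    decreasing lysimeter weight series recorded at a fixed interval,
    assuming no precipitation or drainage during the period."""
    loss_kg = weights_kg[0] - weights_kg[-1]
    hours = (len(weights_kg) - 1) * interval_min / 60.0
    return loss_kg / area_m2 / hours   # 1 kg/m^2 == 1 mm water depth
```

Conversely, weight gains during snowfall give precipitation; the 1 min recording interval is what allows the two signals to be separated event by event.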

  1. Measurements in Transitional Boundary Layers Under High Free-Stream Turbulence and Strong Acceleration Conditions

    NASA Technical Reports Server (NTRS)

    Volino, Ralph J.; Simon, Terrence W.

    1995-01-01

    Measurements from transitional, heated boundary layers along a concave-curved test wall are presented and discussed. A boundary layer subject to low free-stream turbulence intensity (FSTI), which contains stationary streamwise (Gortler) vortices, is documented. The low-FSTI measurements are followed by measurements in boundary layers subject to high (initially 8%) free-stream turbulence intensity and moderate to strong streamwise acceleration. Conditions were chosen to simulate those present on the downstream half of the pressure side of a gas turbine airfoil. Mean flow characteristics as well as turbulence statistics, including the turbulent shear stress, turbulent heat flux, and turbulent Prandtl number, are documented. A technique called "octant analysis" is introduced and applied to several cases from the literature as well as to data from the present study. Spectral analysis was applied to describe the effects of turbulence scales of different sizes during transition. To the authors' knowledge, this is the first detailed documentation of boundary layer transition under such high free-stream turbulence conditions.
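Octant analysis, mentioned above, classifies each instantaneous sample by the signs of its velocity and temperature fluctuations (u', v', T'), yielding eight event classes analogous to the four quadrants of quadrant analysis. A minimal sketch of the bookkeeping; the paper's specific conditioning and weighting of the octants are not reproduced here.

```python
def octant(u_f, v_f, t_f):
    """Octant index 0-7 encoded from the signs of the three fluctuations."""
    return (u_f > 0) * 4 + (v_f > 0) * 2 + (t_f > 0)

def octant_histogram(u, v, t):
    """Count samples per octant after removing the mean of each signal."""
    n = len(u)
    um, vm, tm = sum(u) / n, sum(v) / n, sum(t) / n
    counts = [0] * 8
    for ui, vi, ti in zip(u, v, t):
        counts[octant(ui - um, vi - vm, ti - tm)] += 1
    return counts
```

Comparing the populations (or the stress/flux contributions) of each octant distinguishes, for example, hot-fluid ejections from cold-fluid sweeps during transition.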

  2. The Deflection Plate Analyzer: A Technique for Space Plasma Measurements Under Highly Disturbed Conditions

    NASA Technical Reports Server (NTRS)

    Wright, Kenneth H., Jr.; Dutton, Ken; Martinez, Nelson; Smith, Dennis; Stone, Nobie H.

    2004-01-01

    A technique has been developed to measure the characteristics of space plasmas under highly disturbed conditions; e.g., non-Maxwellian plasmas with strong drifting populations and plasmas contaminated by spacecraft outgassing. The present method extends the capabilities of the Differential Ion Flux Probe (DIFP) to include a mass measurement that does not require either high voltage or contamination-sensitive devices such as channeltron electron multipliers or microchannel plates. This reduces the complexity and expense of instrument fabrication, testing, and integration of flight hardware as compared to classical mass analyzers. The new instrument design is called the Deflection Plate Analyzer (DPA) and can deconvolve multiple ion streams and analyze each stream for ion flux intensity (density), velocity (including direction of motion), mass, and temperature (or energy distribution). The basic functionality of the DPA is discussed. The performance characteristics of a flight instrument as built for an electrodynamic tether mission, the Propulsive Small Expendable Deployer System (ProSEDS), and the instrument's role in measuring key experimental conditions are also discussed.

  3. Comparative Analysis of Power Quality Instruments in Measuring Power under Distorted Conditions

    NASA Astrophysics Data System (ADS)

    Belchior, Fernando; Galvão, Thiago Moura; Ribeiro, Paulo Fernando; Silveira, Paulo Márcio

    2015-10-01

    This paper evaluates the performance of commercial power quality (PQ) instruments. Using a high-precision programmable voltage and current source, five meters from different manufacturers are analyzed and compared. First, three-phase voltage signals are applied to the PQ instruments, considering harmonic distortion, voltage dips, voltage swells, and unbalanced voltages. These events are measured, compared, and evaluated across the different instruments. In addition, voltage and current signals are applied under different conditions, with and without harmonics and unbalance, in order to obtain measurements of electrical power. After analyzing the data generated in these tests, the accuracy of the measurements is compared across instruments, and a score based on the percentage errors is assigned to each one. The tests showed "excellent" and "good" results for measuring active and apparent power, as well as power factor. However, for non-active power, almost all instruments performed worse. This work is relevant given that Brazil is deploying standardization of power quality measurements aimed at regulatory procedures and index limits.

  4. Embedded optical probes for simultaneous pressure and temperature measurement of materials in extreme conditions

    NASA Astrophysics Data System (ADS)

    Sandberg, R. L.; Rodriguez, G.; Gibson, L. L.; Dattelbaum, D. M.; Stevens, G. D.; Grover, M.; Lalone, B. M.; Udd, E.

    2014-05-01

    We present recent efforts at Los Alamos National Laboratory (LANL) to develop sensors for simultaneous, in situ pressure and temperature measurements under dynamic conditions by using an all-optical fiber-based approach. While similar tests have been done previously in deflagration-to-detonation tests (DDT), where pressure and temperature were measured to 82 kbar and 400°C simultaneously, here we demonstrate the use of embedded fiber grating sensors to obtain high temporal resolution, in situ pressure measurements in inert materials. We present two experimental demonstrations of pressure measurements: (1) under precise shock loading from a gas-gun driven plate impact and (2) under high explosive driven shock in a water filled vessel. The system capitalizes on existing telecom components and fast transient digitizing recording technology. It operates as a relatively inexpensive embedded probe (single-mode 1550 nm fiber-based Bragg grating) that provides a continuous fast pressure record during shock and/or detonation. By applying well-controlled shock wave pressure profiles to these inert materials, we study the dynamic pressure response of embedded fiber Bragg gratings to extract pressure amplitude of the shock wave and compare our results with particle velocity wave profiles measured simultaneously.
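For context, an embedded fiber Bragg grating encodes strain (and hence pressure) in its reflected wavelength through the standard strain-optic relation, d(lambda)/lambda = (1 - p_e) * strain, with p_e around 0.22 for silica fiber. A sketch of that conversion; the flight system's actual shock-pressure calibration is more involved than this single-coefficient model.

```python
def strain_from_bragg_shift(delta_lambda_nm, lambda0_nm=1550.0, p_e=0.22):
    """Axial strain inferred from a measured Bragg-wavelength shift, using
    the strain-optic relation d(lambda)/lambda = (1 - p_e) * strain.
    p_e is the effective photo-elastic coefficient of silica (~0.22)."""
    return delta_lambda_nm / lambda0_nm / (1.0 - p_e)
```

For example, a 1.2 nm shift on a 1550 nm grating corresponds to roughly 1000 microstrain; a pressure-dependent calibration then maps that strain to shock amplitude.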

  5. Microvascular oxygen tension and flow measurements in rodent cerebral cortex during baseline conditions and functional activation

    PubMed Central

    Yaseen, Mohammad A; Srinivasan, Vivek J; Sakadžić, Sava; Radhakrishnan, Harsha; Gorczynska, Iwona; Wu, Weicheng; Fujimoto, James G; Boas, David A

    2011-01-01

    Measuring cerebral oxygen delivery and metabolism microscopically is important for interpreting macroscopic functional magnetic resonance imaging (fMRI) data and identifying pathological changes associated with stroke, Alzheimer's disease, and brain injury. Here, we present simultaneous, microscopic measurements of cerebral blood flow (CBF) and oxygen partial pressure (pO2) in cortical microvessels of anesthetized rats under baseline conditions and during somatosensory stimulation. Using a custom-built imaging system, we measured CBF with Fourier-domain optical coherence tomography (OCT), and vascular pO2 with confocal phosphorescence lifetime microscopy. Cerebral blood flow and pO2 measurements displayed heterogeneity over distances irresolvable with fMRI and positron emission tomography. Baseline measurements indicate O2 extraction from pial arterioles and homogeneity of ascending venule pO2 despite large variation in microvessel flows. Oxygen extraction is linearly related to flow in ascending venules, suggesting that flow in ascending venules closely matches oxygen demand of the drained territory. Oxygen partial pressure and relative CBF transients during somatosensory stimulation further indicate arteriolar O2 extraction and suggest that arterioles contribute to the fMRI blood oxygen level dependent response. Understanding O2 supply on a microscopic level will yield better insight into brain function and the underlying mechanisms of various neuropathologies. PMID:21179069

  6. Functional and anatomical measures for outflow boundary conditions in atherosclerotic coronary bifurcations.

    PubMed

    Schrauwen, Jelle T C; Coenen, Adriaan; Kurata, Akira; Wentzel, Jolanda J; van der Steen, Antonius F W; Nieman, Koen; Gijsen, Frank J H

    2016-07-26

    The aim of this research was to determine the influence of anatomy-based and function-based outflow boundary conditions for computational fluid dynamics (CFD) on fractional flow reserve (FFR) and wall shear stress (WSS) in mildly diseased coronary bifurcations. For 10 patient-specific bifurcations, three simulations were set up with different outflow conditions while the inflow was kept constant. First, the outflow conditions were based on the diameters of the outlets. Second, they were based on estimates of the myocardial volume depending on each outlet. Third, they were based on a myocardial flow measure derived from computed tomography perfusion imaging (CTP). The difference in outflow ratio between the perfusion-based and diameter-based approaches was -7 p.p. [-14 p.p.:7 p.p.] (median and interquartile range, in percentage points), and between the perfusion-based and volume-based approaches it was -2 p.p. [-2 p.p.:1 p.p.]. Despite these differences, the computed FFRs matched very well. A quantitative analysis of the WSS results showed very high correlations between the methods, with r2 ranging from 0.90 to 1.00. Despite the high correlations, however, the diameter-based and volume-based approaches generally underestimated WSS compared to the perfusion-based approach. These differences disappeared after normalization. We demonstrated the potential of CTP for setting patient-specific boundary conditions for atherosclerotic coronary bifurcations. FFR and normalized WSS were unaffected by the variations in outflow ratios. In order to compute absolute WSS, a functional measure to set the outflow ratio might be of added value in this type of vessel. PMID:26654676
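Diameter-based outflow conditions of the kind used in the first simulation typically distribute flow among outlets in proportion to a power of the outlet diameter (Murray's law uses an exponent of 3); whether this study used that exact exponent is not stated, so the sketch below is generic, with illustrative diameters:

```python
def outflow_split(diameters_mm, exponent=3.0):
    """Fractional outflow assigned to each outlet, assuming flow scales
    with diameter**exponent (exponent 3 corresponds to Murray's law)."""
    weights = [d ** exponent for d in diameters_mm]
    total = sum(weights)
    return [w / total for w in weights]
```

For a bifurcation with 3.0 mm and 1.5 mm daughter branches, Murray's law sends 8/9 of the flow down the larger branch; perfusion-based conditions replace these geometric weights with measured myocardial flow.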

  7. Examination of How Neighborhood Definition Influences Measurements of Youths' Access to Tobacco Retailers: A Methodological Note on Spatial Misclassification

    PubMed Central

    Duncan, Dustin T.; Kawachi, Ichiro; Subramanian, S.V.; Aldstadt, Jared; Melly, Steven J.; Williams, David R.

    2014-01-01

    Measurements of neighborhood exposures likely vary depending on the definition of “neighborhood” selected. This study examined the extent to which neighborhood definition influences findings regarding spatial accessibility to tobacco retailers among youth. We defined spatial accessibility to tobacco retailers (i.e., tobacco retail density, closest tobacco retailer, and average distance to the closest 5 tobacco retailers) on the basis of circular and network buffers of 400 m and 800 m, census block groups, and census tracts by using residential addresses from the 2008 Boston Youth Survey Geospatial Dataset (n = 1,292). Friedman tests (to compare overall differences in neighborhood definitions) were applied. There were differences in measurements of youths' access to tobacco retailers according to the selected neighborhood definitions, and these were marked for the 2 spatial proximity measures (both P < 0.01 for all differences). For example, the median average distance to the closest 5 tobacco retailers was 381.50 m when using specific home addresses, 414.00 m when using census block groups, and 482.50 m when using census tracts, illustrating how neighborhood definition influences the measurement of spatial accessibility to tobacco retailers. These analyses suggest that, whenever possible, egocentric neighborhood definitions should be used. The use of larger administrative neighborhood definitions can bias exposure estimates for proximity measures. PMID:24148710
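The two spatial-proximity measures reduce to nearest-neighbor distances from each residence; a minimal sketch with planar coordinates (the study used geocoded addresses with circular and network buffers, which this toy example does not reproduce):

```python
import math

def proximity_measures(home, retailers, k=5):
    """Distance to the closest retailer and mean distance to the closest
    k retailers, from one residence (planar coordinates, metres)."""
    dists = sorted(math.dist(home, r) for r in retailers)
    k = min(k, len(dists))
    return dists[0], sum(dists[:k]) / k
```

Running this once per youth with home taken as the exact address, the block-group centroid, or the tract centroid reproduces the kind of definition-dependent differences reported above.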

  8. Development of the elastic wave velocity measurement technique by the ultrasonic method to lower mantle condition

    NASA Astrophysics Data System (ADS)

    Higo, Y.; Funakoshi, K.; Irifune, T.

    2012-12-01

    The elastic properties of the major constituent minerals of the lower mantle (e.g., Mg-perovskite, (Mg,Fe)O, Ca-perovskite) at high pressure and temperature are fundamental to studies of the geochemical and geophysical behavior of the lower mantle. However, elastic wave velocity measurements have been limited to relatively low pressure and temperature conditions, because the observed signal from the small sample is too weak to determine the ultrasonic velocity precisely. In this study, we improved the ultrasonic measurement system to obtain elastic wave velocities under lower-mantle conditions, installing a high-frequency waveform generator, a post-amplifier, and a high-speed semiconductor relay in the Kawai-type multi-anvil apparatus (SPEED-1500) at SPring-8. A rod of pure polycrystalline alumina (alpha-Al2O3) was used as the test sample, because alumina is a very hard material that remains stable at high pressure and temperature. We successfully obtained the elastic wave velocities of alumina up to 30 GPa and 1873 K, corresponding to lower-mantle P-T conditions. A linear fit to the elastic data yielded the following parameters: K0S = 254.6 GPa, ∂KS/∂P = 4.07, ∂KS/∂T = -0.018 GPa K-1, G = 162.1 GPa, ∂G/∂P = 1.67, and ∂G/∂T = -0.020 GPa K-1, in good agreement with previous studies (Kung et al. 2000: 10 GPa & 300 K; Goto et al. 1989: 0 GPa & 1800 K). More recently, higher-pressure ultrasonic measurements of Mg-perovskite have been started using sintered diamond anvils. In this presentation, the elastic properties of Mg-perovskite under lower-mantle conditions will also be discussed.
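    The linear fit quoted above can be applied directly to extrapolate the moduli to any P-T point in the measured range. A minimal sketch, assuming a 300 K reference temperature for the temperature derivative (the abstract does not state the reference explicitly):

```python
# Linear P-T extrapolation of the adiabatic bulk (KS) and shear (G)
# moduli of alumina, using the fitted parameters from the abstract.
# The 300 K reference temperature is an assumption.

def modulus(m0, dm_dp, dm_dt, p_gpa, t_k, t_ref=300.0):
    """Linear expansion M(P, T) = M0 + (dM/dP)*P + (dM/dT)*(T - T_ref)."""
    return m0 + dm_dp * p_gpa + dm_dt * (t_k - t_ref)

# Fitted parameters: M0 in GPa, dM/dP dimensionless, dM/dT in GPa/K
KS = modulus(254.6, 4.07, -0.018, p_gpa=30.0, t_k=1873.0)
G = modulus(162.1, 1.67, -0.020, p_gpa=30.0, t_k=1873.0)
# At the highest P-T point reached: KS ~ 348.4 GPa, G ~ 180.7 GPa
```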

  9. Qualification of a distributed optical fiber sensor bonded to the surface of a concrete structure: a methodology to obtain quantitative strain measurements

    NASA Astrophysics Data System (ADS)

    Billon, Astrid; Hénault, Jean-Marie; Quiertant, Marc; Taillade, Frédéric; Khadour, Aghiad; Martin, Renaud-Pierre; Benzarti, Karim

    2015-11-01

    Distributed optical fiber systems (DOFSs) are an emerging and innovative technology that allows long-range and continuous strain/temperature monitoring with a high resolution. Sensing cables are either surface-mounted on or embedded into civil engineering structures to ensure long-term structural monitoring and early crack detection. However, strain profiles measured in the optical fiber (OF) may differ from the actual strain in the structure because of shear transfer through the intermediate material layers between the OF and the host material (i.e., the protective coating of the sensing cable and the adhesive). Therefore, OF sensors need to be qualified to provide accurate quantitative strain measurements. This study presents a methodology for the qualification of a DOFS. The qualification is achieved through the calculation of the so-called mechanical transfer function (MTF), which relates the strain profile in the OF to the actual strain profile in the structure. It is proposed to establish a numerical model of the system in which the mechanical parameters are calibrated from experiments. A specific surface-mounted sensing cable connected to an optical frequency domain reflectometry interrogator is considered as a case study. It was found that (i) tensile and pull-out tests can provide detailed information about the materials and interfaces of the numerical model; (ii) the calibrated model made it possible to compute strain profiles along the OF and therefore to calculate the MTF of the system; and (iii) the results proved consistent with experimental data collected on a cracked concrete beam during a four-point bending test. This paper is organized as follows: first, the technical background related to DOFSs and interrogators is briefly recalled, the MTF is defined, and the above-mentioned methodology is presented. In the second part, the methodology is applied to a specific cable. Finally, a comparison with experimental evidence validates the proposed methodology.

  10. The AmeriFlux Network of Long-Term CO2 Flux Measurement Stations: Methodology and Intercomparability

    SciTech Connect

    Hollinger, D. Y.; Evans, R. S.

    2003-05-20

    A portable flux measurement system has been used within the AmeriFlux network of CO2 flux measurement stations to enhance the comparability of data collected across the network. No systematic biases were observed between the portable system and site measurements of H, LE, or CO2 flux, although biases were observed between the portable system and site measurements of air temperature and PPFD. The analysis suggests that if values from two stations differ by more than 26% for H, 35% for LE, or 32% for CO2 flux, the differences are likely to be significant. Methods for improving the intercomparability of the network are also discussed.
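    The quoted thresholds can be applied as a simple screening rule when comparing two stations. A sketch, assuming a symmetric relative-difference definition (the abstract does not specify which difference metric was used):

```python
# Screening rule based on the intercomparability thresholds quoted in
# the abstract: differences larger than 26% (H), 35% (LE), or 32%
# (CO2 flux) between two stations are likely significant. The
# relative-difference definition below is an assumption.

THRESHOLDS = {"H": 0.26, "LE": 0.35, "CO2": 0.32}

def likely_significant(quantity, value_a, value_b):
    """Flag a likely significant difference between two station values."""
    mean_mag = (abs(value_a) + abs(value_b)) / 2.0
    rel_diff = abs(value_a - value_b) / mean_mag
    return rel_diff > THRESHOLDS[quantity]
```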

  11. Ice Accretion Measurements on an Airfoil and Wedge in Mixed-Phase Conditions

    NASA Technical Reports Server (NTRS)

    Struk, Peter; Bartkus, Tadas; Tsao, Jen-Ching; Currie, Tom; Fuleki, Dan

    2015-01-01

    This paper describes ice accretion measurements from experiments conducted at the National Research Council (NRC) of Canada's Research Altitude Test Facility during 2012. Due to numerous engine power loss events associated with high altitude convective weather, potential ice accretion within an engine due to ice crystal ingestion is being investigated collaboratively by NASA and NRC. These investigations examine the physical mechanisms of ice accretion on surfaces exposed to ice crystal and mixed phase conditions, similar to those believed to exist in core compressor regions of jet engines. A further objective of these tests is to examine scaling effects since altitude appears to play a key role in this icing process.

  12. Thermal conductivity measurements of glass beads and regolith simulant under vacuum conditions

    NASA Astrophysics Data System (ADS)

    Sakatani, N.; Ogawa, K.; Iijima, Y.; Tsuda, S.; Honda, R.; Tanaka, S.

    2013-09-01

    Past in-situ and laboratory measurements of lunar regolith thermal conductivity imply that the conductivity varies with depth owing to changes in density and self-weight stress. In this study, we experimentally investigated the effect of compressional stress on the thermal conductivity of glass beads and a regolith simulant, using a new stress-controlling system under vacuum conditions. We confirmed experimentally that the thermal conductivity increases with compressional stress, which indicates that the regolith layers of airless terrestrial bodies have thermal conductivities that vary with depth and with each body's gravity.

  13. Diagnostic measurements of the condition of large machines in lignite surface mines

    SciTech Connect

    Helebrant, F.; Jurman, J.; Fries, J.

    2005-07-01

    An analysis of the diagnosis of loading and service dependability of a rail-mounted excavator used in surface lignite mining is described. Wheel power vibrations in electric motor bearings and electric motor input bearings to the gearbox were measured in situ, in horizontal, vertical, and axial directions. The data were analyzed using a mathematical relationship. The results are presented in a loading diagram that shows the deterioration and the acceptable lower bound of machine conditions over time. Work is continuing. 5 refs., 1 fig.

  14. Measurements of heat fluxes and soil moisture patterns in the field conditions

    NASA Astrophysics Data System (ADS)

    Sanda, M.; Snehota, M.; Haase, T.; Wild, J.

    2011-12-01

    A new combined thermal and soil moisture unit, coded TMS2, is presented. It is a prototype that builds on the good experience gained with TMS1. The device combines three thermometers, positioned at approximately -10, 0, and +15 cm relative to the soil surface when installed vertically. Soil moisture is measured by the time domain transmission (TDT) principle over the full range of soil moisture. The new version incorporates a lifetime power supply for approximately 5 years of operation and lifetime permanent data storage (0.5 million logs) when values are acquired every 10 minutes. A lifetime operation log with lockable data blocks accompanies the data storage. Data are retrieved with a contact portable pocket collector. Both vertical/surface and buried/subsurface installation are possible thanks to an additional communication interface available on demand. The original model, TMS1, proved durable in a harsh outdoor environment, functioning well in wet conditions and withstanding mechanical damage. Extended testing in the sandstone area of the Bohemian Switzerland National Park, Czech Republic, has been performed since 2009 by the Institute of Botany of the ASCR. Results of long-term measurements at hundreds of localities have been used successfully for (i) evaluation of species-specific environmental requirements (for different species of plants, bryophytes, and fungi) and (ii) extrapolation of microclimatic conditions over large areas of rugged sandstone relief with the assistance of an accurate, LiDAR-based digital terrain model. TMS1 units are also applied for continuous measurement of the temperature and moisture of coarse woody debris, which serves as an important substrate for the establishment and growth of seedlings and is thus crucial for the natural regeneration of many forest ecosystems. The TMS1 sensors have been tested and calibrated in the soil laboratories of the Czech Technical University in Prague for three soil materials: arenic cambisol, podzol, and quartz sand, showing good linearity and minor influence of the

  15. Field testing and adaptation of a methodology to measure "in-stream" values in the Tongue River, northern Great Plains (NGP) region

    USGS Publications Warehouse

    Bovee, Ken D.; Gore, James A.; Silverman, Arnold J.

    1978-01-01

    A comprehensive, multi-component in-stream flow methodology was developed and field tested in the Tongue River in southeastern Montana. The methodology incorporates sensitivity to the flow requirements of a wide variety of in-stream uses, and the flexibility to adjust flows to accommodate seasonal and sub-seasonal changes in the flow requirements of different areas. In addition, the methodology provides the means to accurately determine the magnitude of the water requirement for each in-stream use. The methodology can be a powerful water management tool in that it provides the flexibility and accuracy necessary in water use negotiations and evaluations of trade-offs. In contrast to most traditional methodologies, in-stream flow requirements were determined by additive, independent methodologies developed for: 1) fisheries, including spawning, rearing, and food production; 2) sediment transport; 3) the mitigation of adverse impacts of ice; and 4) evapotranspiration losses. Since each flow requirement varied in importance throughout the year, the consideration of a single in-stream use as the basis for a flow recommendation is inadequate. The study shows that the base flow requirement for spawning shovelnose sturgeon was 13.0 m3/sec. During the same period of the year, the flow required to initiate the scour of sediment from pools is 18.0 m3/sec, with increased scour efficiency occurring at flows between 20.0 and 25.0 m3/sec. An over-winter flow of 2.83 m3/sec would result in the loss of approximately 80% of the riffle areas to encroachment by surface ice. At the base flow for insect production, approximately 60% of the riffle area is lost to ice. Serious damage to the channel could be incurred from ice jams during the spring break-up period; a flow of 12.0 m3/sec is recommended to alleviate this problem. Extensive ice jams would be expected at the base rearing and food production levels. The base rearing flow may be profoundly influenced by the loss of streamflow

  16. Corneal Biomechanical Properties in Different Ocular Conditions and New Measurement Techniques

    PubMed Central

    Garcia-Porta, Nery; Salgado-Borges, Jose; Parafita-Mato, Manuel; González-Méijome, Jose Manuel

    2014-01-01

    Several refractive and therapeutic treatments as well as several ocular or systemic diseases might induce changes in the mechanical resistance of the cornea. Furthermore, intraocular pressure measurement, one of the most used clinical tools, is also highly dependent on this characteristic. Corneal biomechanical properties can be measured now in the clinical setting with different instruments. In the present work, we review the potential role of the biomechanical properties of the cornea in different fields of ophthalmology and visual science in light of the definitions of the fundamental properties of matter and the results obtained from the different instruments available. The body of literature published so far provides an insight into how the corneal mechanical properties change in different sight-threatening ocular conditions and after different surgical procedures. The future in this field is very promising with several new technologies being applied to the analysis of the corneal biomechanical properties. PMID:24729900

  17. Measurement of the Critical Distance Parameter Against Icing Conditions on a NACA 0012 Swept Wing Tip

    NASA Technical Reports Server (NTRS)

    Vargas, Mario; Kreeger, Richard E.

    2011-01-01

    This work presents the results of three experiments, one conducted in the Icing Research Tunnel (IRT) at NASA Glenn Research Center and two in the Goodrich Icing Wind Tunnel (IWT). The experiments were designed to measure the critical distance parameter on a NACA 0012 swept wing tip at sweep angles of 45 deg, 30 deg, and 15 deg. A time-sequence imaging technique (TSIT) was used to obtain real-time close-up imaging data during the first 2 min of ice accretion formation. The time-sequence photographic data were used to measure the critical distance at each icing condition and to study how it develops in real time. The effects of liquid water content, drop size, total temperature, and velocity on the critical distance were studied. The results were interpreted using a simple energy balance on a roughness element.

  18. Mass changes in NSTX Surface Layers with Li Conditioning as Measured by Quartz Microbalances

    SciTech Connect

    C.H. Skinner, H.W. Kugel, A.L. Roquemore, P.S. Krstic and A. Beste

    2008-06-09

    Dynamic retention, lithium deposition, and the stability of thick deposited layers were measured by three quartz crystal microbalances (QMBs) deployed in plasma-shadowed areas at the upper and lower divertor and the outboard midplane of the National Spherical Torus Experiment (NSTX). Deposition of 185 µg/cm² over 3 months in 2007 was measured by a QMB at the lower divertor, while a QMB on the upper divertor, which was shadowed from the evaporator, received an order of magnitude less deposition. During helium glow discharge conditioning, both neutral gas collisions and the ionization and subsequent drift of Li⁺ interrupted the lithium deposition on the lower divertor. We present calculations of the relevant mean free paths. Occasionally, strong variations in the QMB frequency of thick lithium films were observed, suggesting relaxation of mechanical stress and/or flaking or peeling of the deposited layers.

  19. Measuring coupling forces woodcutters exert on saws in real working conditions.

    PubMed

    Malinowska-Borowska, Jolanta; Harazin, Barbara; Zieliński, Grzegorz

    2012-01-01

    Prolonged exposure to hand-arm vibration (HAV) generated by chainsaws can cause HAV syndrome, i.e., disorders in the upper extremities of forestry workers. Progression of HAV syndrome depends on the intensity of mechanical vibration transmitted to the body, which is directly proportional to the coupling forces applied by the woodcutter to the vibrating tool. This study aimed to establish a method of measuring the coupling forces exerted by chainsaw workers in real working conditions. Coupling forces exerted by workers with their right and left hands were measured with a hydro-electronic force meter. Wood hardness, the type of chainsaw, and the kind of forest operation, i.e., felling, cross-cutting, or limbing, were considered. PMID:22429531

  20. Boundary condition identification of tapered beam with flexible supports using static flexibility measurements

    NASA Astrophysics Data System (ADS)

    Wang, Le; Guo, Ning; Yang, Zhichun

    2016-06-01

    This paper investigates a boundary condition identification method for a tapered beam with specific flexible boundaries, using static flexibility measurements. The flexible boundaries are modeled by two translational springs, separated by a given interval, connected at one end of the tapered beam; the aim is to identify the stiffnesses of these two springs. From the static equilibrium equation, it is shown that the static flexibility of the beam is a function of the flexural rigidity of the beam at its constrained end and of the stiffnesses of the two translational springs. Then, using three different static flexibility measurements, a set of linear equations is established to identify the stiffnesses of the two springs. Finally, the feasibility and effectiveness of the proposed method are demonstrated using both simulated and experimental examples.
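    The identification step described above reduces to solving a small linear system. A schematic sketch follows: three measured flexibilities give three equations in three unknown compliances (a flexural term plus the two spring compliances 1/k1 and 1/k2). The coefficient matrix and all numbers here are hypothetical; the actual coefficients follow from the beam's static equilibrium equation and the load positions used in the paper.

```python
# Schematic: identify two boundary-spring stiffnesses from three
# static flexibility measurements by solving a 3x3 linear system.
# All coefficients and measured values are illustrative assumptions.
import numpy as np

# Hypothetical coefficients a_ij relating the unknown compliance
# vector c = [flexural term, 1/k1, 1/k2] to measured flexibilities f
A = np.array([[1.0, 0.6, 0.2],
              [1.0, 0.3, 0.5],
              [1.0, 0.1, 0.8]])
f = np.array([2.9e-6, 3.1e-6, 3.4e-6])  # measured flexibilities (m/N)

c = np.linalg.solve(A, f)                # unknown compliances
k1, k2 = 1.0 / c[1], 1.0 / c[2]          # identified spring stiffnesses (N/m)
```

With more than three measurements, `np.linalg.lstsq` would give a least-squares estimate instead of an exact solve.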