Science.gov

Sample records for conditions measurement methodology

  1. Methodology for measuring exhaust aerosol size distributions using an engine test under transient operating conditions

    NASA Astrophysics Data System (ADS)

    Desantes, José María; Bermúdez, Vicente; Molina, Santiago; Linares, Waldemar G.

    2011-11-01

    A study on the sources of variability in the measurement of particle size distribution using a two-stage dilution system and an engine exhaust particle sizer was conducted to obtain a comprehensive and repeatable methodology that can be used to measure the particle size distribution of aerosols emitted by a light-duty diesel engine under transient operating conditions. The paper includes three experimental phases: an experimental validation of the measurement method; an evaluation of the influence of sampling factors, such as dilution system pre-conditioning; and a study of the effects of the dilution conditions, such as the dilution ratio and the dilution air temperature. An examination of the type and degree of influence of each studied factor is presented, recommendations for reducing variability are given and critical parameter values are identified to develop a highly reliable measurement methodology that could be applied to further studies on the effect of engine operating parameters on exhaust particle size distributions.

  2. Development of the methodology of exhaust emissions measurement under RDE (Real Driving Emissions) conditions for non-road mobile machinery (NRMM) vehicles

    NASA Astrophysics Data System (ADS)

    Merkisz, J.; Lijewski, P.; Fuc, P.; Siedlecki, M.; Ziolkowski, A.

    2016-09-01

    The paper analyzes the exhaust emissions from farm vehicles based on research performed under field conditions (RDE) according to the NTE procedure. This analysis has shown that it is hard to meet the NTE requirements under field conditions (engine operation in the NTE zone for at least 30 seconds). Due to a very high variability of the engine conditions, the share of a valid number of NTE windows in the field test is small throughout the entire test. For this reason, a modification of the measurement and exhaust emissions calculation methodology has been proposed for farm vehicles of the NRMM group. A test has been developed composed of the following phases: trip to the operation site (paved roads) and field operations (including u-turns and maneuvering). The range of the operation time share in individual test phases has been determined. A change in the method of calculating the real exhaust emissions has also been implemented in relation to the NTE procedure.

  3. Methodology for determining CD-SEM measurement condition of sub-20nm resist patterns for 0.33NA EUV lithography

    NASA Astrophysics Data System (ADS)

    Okai, Nobuhiro; Lavigne, Erin; Hitomi, Keiichiro; Halle, Scott; Hotta, Shoji; Koshihara, Shunsuke; Tanaka, Junichi; Bailey, Todd

    2015-03-01

    A novel methodology was established for determining the optimum critical dimension scanning electron microscope (CD-SEM) measurement condition for sub-20 nm resist patterns in 0.33NA EUV lithography, yielding both small shrinkage and high precision. To investigate the dependence of resist shrinkage on pattern size and electron-beam irradiation conditions, shrinkage of 18, 32, and 45 nm EUV resist patterns was measured over a wide range of beam conditions. A shrinkage trend similar to that of ArF resist patterns was observed at 32 and 45 nm, but the 18 nm pattern showed a different dependence on acceleration voltage. The conventional methodology developed for ArF resist patterns, which predicts shrinkage and precision using the Taguchi method, was applied to the EUV resist patterns to examine its extendibility. Shrinkage predicted by the Taguchi method for the 32 and 45 nm patterns agreed with measurements. However, the prediction error increased considerably as the pattern size decreased from 32 to 18 nm, because of a significant interaction between acceleration voltage and irradiated electron dose in the L18 array used in the Taguchi method. We therefore proposed a new method consisting of separate prediction procedures for shrinkage and precision, using a shrinkage curve and the Taguchi method, respectively. The new method was applied to the 18 nm EUV resist pattern, and an optimum measurement condition with a shrinkage of 1.5 nm and a precision of 0.12 nm was determined. The new method is a versatile technique applicable not only to fine EUV resist patterns but also to ArF resist patterns.

  4. The antioxidant activity of teas measured by the FRAP method adapted to the FIA system: optimising the conditions using the response surface methodology.

    PubMed

    Martins, Alessandro C; Bukman, Lais; Vargas, Alexandro M M; Barizão, Érica O; Moraes, Juliana C G; Visentainer, Jesuí V; Almeida, Vitor C

    2013-05-01

    This study proposes a FRAP assay adapted to a FIA system with a merging-zones configuration. The FIA system conditions were optimised with the response surface methodology using a central composite rotatable design. The parameters studied were the carrier flow rate, the lengths of the sample and reagent loops, and the reactor length. The conditions selected from the results were a carrier flow rate of 1.00 ml/min, a loop length of 18.2 cm, and a reaction coil length of 210.1 cm. The detection and quantification limits were, respectively, 28.6 and 86.8 μmol/l Fe(2+), and the precision was 1.27%. The proposed method had an analytical frequency of 30 samples/h and consumed about 95% less FRAP reagent. The FRAP assay adapted to the FIA system under the optimised conditions was used to determine the antioxidant activity of tea samples.

  5. Methodological Quality of National Guidelines for Pediatric Inpatient Conditions

    PubMed Central

    Hester, Gabrielle; Nelson, Katherine; Mahant, Sanjay; Eresuma, Emily; Keren, Ron; Srivastava, Rajendu

    2014-01-01

    Background Guidelines help inform standardization of care for quality improvement (QI). The Pediatric Research in Inpatient Settings (PRIS) network published a prioritization list of inpatient conditions with high prevalence, cost, and variation in resource utilization across children’s hospitals. The methodological quality of guidelines for priority conditions is unknown. Objective To rate the methodological quality of national guidelines for 20 priority pediatric inpatient conditions. Design We searched sources including PubMed for national guidelines published 2002–2012. Guidelines specific to one organism, test or treatment, or institution were excluded. Guidelines were rated by two raters using a validated tool (AGREE II) with an overall rating on a 7-point scale (7 = highest). Inter-rater reliability was measured with a weighted kappa coefficient. Results 17 guidelines met inclusion criteria for 13 conditions; 7 conditions yielded no relevant national guidelines. The highest methodological quality guidelines were for asthma, tonsillectomy, and bronchiolitis (mean overall ratings 7, 6.5 and 6.5, respectively); the lowest were for sickle cell disease (2 guidelines) and dental caries (mean overall ratings 4, 3.5, and 3, respectively). The overall weighted kappa was 0.83 (95% confidence interval 0.78–0.87). Conclusions We identified a group of moderate to high methodological quality national guidelines for priority pediatric inpatient conditions. Hospitals should consider these guidelines to inform QI initiatives. PMID:24677729
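The inter-rater reliability statistic used above can be computed in a few lines. Below is a minimal sketch of Cohen's weighted kappa for two raters on an ordinal scale; the choice of linear weights and the example ratings are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def weighted_kappa(r1, r2, n_levels, weights="linear"):
    """Cohen's weighted kappa for two raters on an ordinal scale 1..n_levels."""
    r1 = np.asarray(r1) - 1
    r2 = np.asarray(r2) - 1
    # observed agreement matrix, as proportions
    obs = np.zeros((n_levels, n_levels))
    for a, b in zip(r1, r2):
        obs[a, b] += 1
    obs /= obs.sum()
    # expected matrix from the marginal rating distributions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))
    # disagreement weights: linear or quadratic distance between categories
    i, j = np.indices((n_levels, n_levels))
    if weights == "linear":
        w = np.abs(i - j) / (n_levels - 1)
    else:
        w = ((i - j) / (n_levels - 1)) ** 2
    return 1 - (w * obs).sum() / (w * exp).sum()
```

Perfect agreement gives kappa = 1, and agreement no better than chance gives kappa = 0, which is the scale on which the reported 0.83 should be read.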

  6. Methodological Comparison between a Novel Automatic Sampling System for Gas Chromatography versus Photoacoustic Spectroscopy for Measuring Greenhouse Gas Emissions under Field Conditions

    PubMed Central

    Schmithausen, Alexander J.; Trimborn, Manfred; Büscher, Wolfgang

    2016-01-01

    Trace gases such as nitrous oxide (N2O), methane (CH4), and carbon dioxide (CO2) are climate-related gases, and their emissions from agricultural livestock barns are not negligible. Conventional measurement systems in the field (Fourier transform infrared spectroscopy (FTIR); photoacoustic system (PAS)) are not sufficiently sensitive to N2O. Laser-based measurement systems are highly accurate, but they are very expensive to purchase and maintain. One cost-effective alternative is gas chromatography (GC) with electron capture detection (ECD), but its radioactive detector source makes it unsuitable for field applications. Measuring samples collected automatically under field conditions in the laboratory at a later time presents many challenges. This study presents a sampling system designed to enable laboratory analysis of N2O concentrations sampled under field conditions. Analyses were carried out using PAS in the field (online system) and GC in the laboratory (offline system). Both measurement systems showed a good correlation for CH4 and CO2 concentrations. Measured N2O concentrations were near the detection limit for PAS; GC achieved more reliable results for N2O in very low concentration ranges. PMID:27706101

  7. Rapid development of xylanase assay conditions using Taguchi methodology.

    PubMed

    Prasad Uday, Uma Shankar; Bandyopadhyay, Tarun Kanti; Bhunia, Biswanath

    2016-11-01

    The present investigation concerns the rapid development of extracellular xylanase assay conditions using Taguchi methodology. The extracellular xylanase was produced from Aspergillus niger (KP874102.1), a new strain isolated from a soil sample of the Baramura forest, Tripura West, India. Four physical parameters, temperature, pH, buffer concentration and incubation time, were considered key factors for xylanase activity and were optimized using the Taguchi robust design methodology for enhanced xylanase activity. The main effects, interaction effects and optimal levels of the process factors were determined using the signal-to-noise (S/N) ratio, which the Taguchi method recommends as the measure of quality characteristics. Analysis of variance (ANOVA) was performed to identify statistically significant process factors. ANOVA showed that temperature contributed the maximum impact on xylanase activity (62.58%), followed by pH (22.69%), buffer concentration (9.55%) and incubation time (5.16%). Predicted results showed that enhanced xylanase activity (81.47%) can be achieved at pH 2, a temperature of 50°C, a buffer concentration of 50 mM and an incubation time of 10 min.
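The S/N analysis described above can be sketched in code. Since enzyme activity is a "larger-is-better" response in Taguchi terminology, the standard formula S/N = -10·log10(mean(1/y²)) applies; the response values and factor levels below are invented for illustration, not taken from the study:

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi signal-to-noise ratio (in dB) for a larger-is-better
    response such as enzyme activity: S/N = -10 * log10(mean(1/y_i^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

def main_effects(levels, sn):
    """Mean S/N ratio at each factor level (the main-effects table);
    the optimal level is the one with the highest mean S/N."""
    levels = np.asarray(levels)
    return {lv: sn[levels == lv].mean() for lv in np.unique(levels)}
```

For each factor (temperature, pH, buffer concentration, incubation time), the level with the largest mean S/N across the orthogonal-array runs would be selected as optimal.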

  8. A Wavelet-Based Methodology for Grinding Wheel Condition Monitoring

    SciTech Connect

    Liao, T. W.; Ting, C.F.; Qu, Jun; Blau, Peter Julian

    2007-01-01

    Grinding wheel surface condition changes as more material is removed. This paper presents a wavelet-based methodology for grinding wheel condition monitoring based on acoustic emission (AE) signals. Grinding experiments in creep feed mode were conducted to grind alumina specimens with a resinoid-bonded diamond wheel using two different conditions. During the experiments, AE signals were collected when the wheel was 'sharp' and when the wheel was 'dull'. Discriminant features were then extracted from each raw AE signal segment using the discrete wavelet decomposition procedure. An adaptive genetic clustering algorithm was finally applied to the extracted features in order to distinguish different states of grinding wheel condition. The test results indicate that the proposed methodology can achieve 97% clustering accuracy for the high material removal rate condition, 86.7% for the low material removal rate condition, and 76.7% for the combined grinding conditions if the base wavelet, the decomposition level, and the GA parameters are properly selected.
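The feature-extraction step can be illustrated with a hand-rolled Haar transform. This is a sketch only: the paper's base wavelet, decomposition level, and exact discriminant features are selectable and not specified in the abstract, so relative sub-band energies are an assumed (though common) feature choice:

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar discrete wavelet transform:
    returns (approximation, detail) coefficient arrays."""
    x = np.asarray(x, dtype=float)
    s = 1.0 / np.sqrt(2.0)
    return s * (x[0::2] + x[1::2]), s * (x[0::2] - x[1::2])

def band_energies(x, levels=3):
    """Relative energy in each sub-band after `levels` decompositions of an
    AE signal segment (length must be divisible by 2**levels). The detail
    energies come first, coarsest approximation last."""
    feats = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d**2))
    feats.append(np.sum(a**2))
    e = np.array(feats)
    return e / e.sum()
```

Feature vectors like these, one per AE segment, would then be fed to the clustering algorithm to separate "sharp" from "dull" wheel states.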

  9. PIMM: A Performance Improvement Measurement Methodology

    SciTech Connect

    Not Available

    1994-05-15

    This report presents a Performance Improvement Measurement Methodology (PIMM) for measuring and reporting the mission performance for organizational elements of the U.S. Department of Energy to comply with the Chief Financial Officer's Act (CFOA) of 1990 and the Government Performance and Results Act (GPRA) of 1993. The PIMM is illustrated by application to the Morgantown Energy Technology Center (METC), a Research, Development and Demonstration (RD&D) field center of the Office of Fossil Energy, along with limited applications to the Strategic Petroleum Reserve Office and the Office of Fossil Energy. METC is now implementing the first year of a pilot project under GPRA using the PIMM. The PIMM process is applicable to all elements of the Department; organizations may customize measurements to their specific missions. The PIMM has four aspects: (1) an achievement measurement that applies to any organizational element, (2) key indicators that apply to institutional elements, (3) a risk reduction measurement that applies to all RD&D elements and to elements with long-term activities leading to risk-associated outcomes, and (4) a cost performance evaluation. Key Indicators show how close the institution is to attaining long range goals. Risk reduction analysis is especially relevant to RD&D. Product risk is defined as the chance that the product of new technology will not meet the requirements of the customer. RD&D is conducted to reduce technology risks to acceptable levels. The PIMM provides a profile to track risk reduction as RD&D proceeds. Cost performance evaluations provide a measurement of the expected costs of outcomes relative to their actual costs.

  10. Relative Hazard and Risk Measure Calculation Methodology

    SciTech Connect

    Stenner, Robert D.; Strenge, Dennis L.; Elder, Matthew S.

    2004-03-20

    The relative hazard (RH) and risk measure (RM) methodology and computer code is a health-risk-based tool designed to let managers and environmental decision makers readily consider human health risks (i.e., public and worker risks) in their screening-level analysis of alternative cleanup strategies. Environmental management decisions involve consideration of costs, schedules, regulatory requirements, health hazards, and risks. The RH-RM tool is a risk-based environmental management decision tool that allows managers to predict and track health hazards and risks over time as they change in relation to mitigation and cleanup actions. Analysis of the hazards and risks associated with planned mitigation and cleanup actions provides a baseline against which alternative strategies can be compared. The tool lets managers explore “what if” scenarios to better understand the impact of alternative mitigation and cleanup actions (i.e., alternatives to the planned actions) on health hazards and risks, and to screen alternatives on the basis of human health risk, comparing the results with cost and other factors pertinent to the decision. Once an alternative or a narrow set of alternatives is selected, it will then be more cost-effective to perform the detailed risk analysis necessary for programmatic and regulatory acceptance of the selected alternative. The RH-RM code has been integrated into the PNNL-developed Framework for Risk Analysis In Multimedia Environmental Systems (FRAMES) to allow the input and output data of the RH-RM code to be readily shared with more comprehensive risk analysis models, such as the PNNL-developed Multimedia Environmental Pollutant Assessment System (MEPAS).

  11. Measurements of Gluconeogenesis and Glycogenolysis: A Methodological Review

    PubMed Central

    Chung, Stephanie T.; Chacko, Shaji K.; Sunehag, Agneta L.

    2015-01-01

    Gluconeogenesis is a complex metabolic process that involves multiple enzymatic steps regulated by myriad factors, including substrate concentrations, the redox state, activation and inhibition of specific enzyme steps, and hormonal modulation. At present, the most widely accepted technique to determine gluconeogenesis is by measuring the incorporation of deuterium from the body water pool into newly formed glucose. However, several techniques using radioactive and stable-labeled isotopes have been used to quantitate the contribution and regulation of gluconeogenesis in humans. Each method has its advantages, methodological assumptions, and set of propagated errors. In this review, we examine the strengths and weaknesses of the most commonly used stable isotopes methods to measure gluconeogenesis in vivo. We discuss the advantages and limitations of each method and summarize the applicability of these measurements in understanding normal and pathophysiological conditions. PMID:26604176

  12. Wastewater Sampling Methodologies and Flow Measurement Techniques.

    ERIC Educational Resources Information Center

    Harris, Daniel J.; Keffer, William J.

    This document provides a ready source of information about water/wastewater sampling activities using various commercial sampling and flow measurement devices. The report consolidates the findings and summarizes the activities, experiences, sampling methods, and field measurement techniques conducted by the Environmental Protection Agency (EPA),…

  13. Methodological Challenges in Measuring Child Maltreatment

    ERIC Educational Resources Information Center

    Fallon, Barbara; Trocme, Nico; Fluke, John; MacLaurin, Bruce; Tonmyr, Lil; Yuan, Ying-Ying

    2010-01-01

    Objective: This article reviewed the different surveillance systems used to monitor the extent of reported child maltreatment in North America. Methods: Key measurement and definitional differences between the surveillance systems are detailed, along with their potential impact on measured victimization rates. The infrastructure…

  14. A methodological review of resilience measurement scales

    PubMed Central

    2011-01-01

    Background The evaluation of interventions and policies designed to promote resilience, and research to understand the determinants and associations, require reliable and valid measures to ensure data quality. This paper systematically reviews the psychometric rigour of resilience measurement scales developed for use in general and clinical populations. Methods Eight electronic abstract databases and the internet were searched and reference lists of all identified papers were hand searched. The focus was to identify peer reviewed journal articles where resilience was a key focus and/or is assessed. Two authors independently extracted data and performed a quality assessment of the scale psychometric properties. Results Nineteen resilience measures were reviewed; four of these were refinements of the original measure. All the measures had some missing information regarding the psychometric properties. Overall, the Connor-Davidson Resilience Scale, the Resilience Scale for Adults and the Brief Resilience Scale received the best psychometric ratings. The conceptual and theoretical adequacy of a number of the scales was questionable. Conclusion We found no current 'gold standard' amongst 15 measures of resilience. A number of the scales are in the early stages of development, and all require further validation work. Given increasing interest in resilience from major international funders, key policy makers and practice, researchers are urged to report relevant validation statistics when using the measures. PMID:21294858

  15. Experimental comparison of exchange bias measurement methodologies

    SciTech Connect

    Hovorka, Ondrej; Berger, Andreas; Friedman, Gary

    2007-05-01

    Measurements performed on all-ferromagnetic bilayer systems and supported by model calculation results are used to compare different exchange bias characterization methods. We demonstrate that the accuracy of the conventional two-point technique based on measuring the sum of the coercive fields depends on the symmetry properties of hysteresis loops. On the other hand, the recently proposed center of mass method yields results independent of the hysteresis loop type and coincides with the two-point measurement only if the loops are symmetric. Our experimental and simulation results clearly demonstrate a strong correlation between loop asymmetry and the difference between these methods.
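The conventional two-point technique mentioned above reduces to simple arithmetic on the two coercive fields of the hysteresis loop. A minimal sketch, with hypothetical field values; the sign convention (loop shifted toward negative fields gives a negative exchange-bias field) is the usual one:

```python
def two_point_exchange_bias(h_c1, h_c2):
    """Two-point estimate from the descending- and ascending-branch coercive
    fields: the exchange-bias field is the loop's midpoint,
        H_eb = (H_c1 + H_c2) / 2,
    and the coercivity is the half-width,
        H_c = (H_c2 - H_c1) / 2.
    This is exact only for symmetric loops, which is the abstract's point."""
    return 0.5 * (h_c1 + h_c2), 0.5 * (h_c2 - h_c1)
```

For an asymmetric loop, the abstract notes, this midpoint deviates from the center-of-mass estimate, which weights the full switching-field distribution rather than just its two crossing points.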

  16. Methodology for high accuracy contact angle measurement.

    PubMed

    Kalantarian, A; David, R; Neumann, A W

    2009-12-15

    A new version of axisymmetric drop shape analysis (ADSA) called ADSA-NA (ADSA-no apex) was developed for measuring interfacial properties for drop configurations without an apex. ADSA-NA facilitates contact angle measurements on drops with a capillary protruding into the drop. Thus a much simpler experimental setup, not involving formation of a complete drop from below through a hole in the test surface, may be used. The contact angles of long-chained alkanes on a commercial fluoropolymer, Teflon AF 1600, were measured using the new method. A new numerical scheme was incorporated into the image processing to improve the location of the contact points of the liquid meniscus with the solid substrate to subpixel resolution. The images acquired in the experiments were also analyzed by a different drop shape technique called theoretical image fitting analysis-axisymmetric interfaces (TIFA-AI). The results were compared with literature values obtained by means of the standard ADSA for sessile drops with the apex. Comparison of the results from ADSA-NA with those from TIFA-AI and ADSA reveals that, with different numerical strategies and experimental setups, contact angles can be measured with an accuracy of less than 0.2 degrees. Contact angles and surface tensions measured from drops with no apex, i.e., by means of ADSA-NA and TIFA-AI, were considerably less scattered than those from complete drops with apex. ADSA-NA was also used to explore sources of improvement in contact angle resolution. It was found that using an accurate value of surface tension as an input enhances the accuracy of contact angle measurements.

  17. Evaluation of UT Wall Thickness Measurements and Measurement Methodology

    SciTech Connect

    Weier, Dennis R.; Pardini, Allan F.

    2007-10-01

    CH2M HILL has requested that PNNL examine the ultrasonic methodology utilized in the inspection of the Hanford double shell waste tanks. Specifically, PNNL is to evaluate the UT process variability and capability to detect changes in wall thickness and to document the UT operator's techniques and methodology in the determination of the reported minimum and average UT data and how it compares to the raw (unanalyzed) UT data.

  18. Toward a new methodology for measuring the threshold Shields number

    NASA Astrophysics Data System (ADS)

    Rousseau, Gauthier; Dhont, Blaise; Ancey, Christophe

    2016-04-01

    A number of bedload transport equations involve the threshold Shields number (corresponding to the threshold of incipient motion for particles resting on the streambed). Different methods have been developed for determining this threshold Shields number; they usually assume that the initial streambed is plane prior to sediment transport. Yet there are many real-world instances in which the initial streambed is not free of bed forms. We are interested in developing a new methodology for determining the threshold of incipient motion in gravel-bed streams in which smooth bed forms (e.g., anti-dunes) develop. Experiments were conducted in a 10-cm wide, 2.5-m long flume, whose initial inclination was 3%. Flows were supercritical and fully turbulent. The flume was supplied with water and sediment at fixed rates. As bed forms developed and migrated, and sediment transport rates exhibited wide fluctuations, measurements had to be taken over long times (typically 10 hr). Using a high-speed camera, we recorded the instantaneous bed load transport rate at the outlet of the flume by taking top-view images. In parallel, we measured the evolution of the bed slope, water depth, and shear stress by filming through a lateral window of the flume. These measurements allowed for the estimation of the space- and time-averaged slope, from which we deduced the space- and time-averaged Shields number under incipient bed load transport conditions. In our experiments, the threshold Shields number was strongly dependent on streambed morphology. Experiments are under way to determine whether taking the space and time average of incipient motion experiments leads to a more robust definition of the threshold Shields number. If so, this new methodology will perform better than existing approaches at measuring the threshold Shields number.
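The Shields number deduced from averaged depth and slope can be sketched as follows. This assumes the standard depth-slope closure for bed shear stress (tau = rho·g·h·S, reasonable for a wide channel); the density and grain-size values in the example are illustrative assumptions, not the study's actual parameters:

```python
def shields_number(h, slope, d, rho=1000.0, rho_s=2650.0, g=9.81):
    """Shields number from space- and time-averaged flow depth h (m) and
    slope S (-), for grains of diameter d (m):
        tau   = rho * g * h * S                 (depth-slope shear stress)
        theta = tau / ((rho_s - rho) * g * d)   (Shields number)
    rho and rho_s are the water and sediment densities (kg/m^3)."""
    tau = rho * g * h * slope
    return tau / ((rho_s - rho) * g * d)
```

Evaluated at incipient-motion conditions, theta gives the threshold Shields number; averaging h and S over the long runs described above is what makes the estimate meaningful when bed forms migrate through the flume.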

  19. Methodological aspects of EEG and body dynamics measurements during motion

    PubMed Central

    Reis, Pedro M. R.; Hebenstreit, Felix; Gabsteiger, Florian; von Tscharner, Vinzenz; Lochmann, Matthias

    2014-01-01

    EEG involves the recording, analysis, and interpretation of voltages recorded on the human scalp which originate from brain gray matter. EEG is one of the most popular methods of studying and understanding the processes that underlie behavior, because it is relatively cheap, lightweight, easy to wear, and has high temporal resolution. In terms of behavior, this encompasses actions, such as movements, that are performed in response to the environment. However, methodological difficulties such as movement artifacts can occur when recording EEG during movement, so most studies of the human brain have examined activations during static conditions. This article compiles and describes relevant methodological solutions that have emerged for measuring body and brain dynamics during motion. These descriptions cover suggestions on how to avoid and reduce motion artifacts, as well as hardware, software and techniques for synchronously recording EEG, EMG, kinematics, kinetics, and eye movements during motion. Additionally, we present various recording systems, EEG electrodes, caps and methods for determining real/custom electrode positions. We conclude that it is possible to record and analyze synchronized brain and body dynamics related to movement or exercise tasks. PMID:24715858

  20. DEVELOPMENT OF PETROLEUM RESIDUA SOLUBILITY MEASUREMENT METHODOLOGY

    SciTech Connect

    Per Redelius

    2006-03-01

    In the present study an existing spectrophotometry system was upgraded to provide high-resolution ultraviolet (UV), visible (Vis), and near infrared (NIR) analyses of test solutions to measure the relative solubilities of petroleum residua dissolved in eighteen test solvents. Test solutions were prepared by dissolving ten percent petroleum residue in a given test solvent, agitating the mixture, followed by filtration and/or centrifugation to remove insoluble materials. These solutions were finally diluted with a good solvent resulting in a supernatant solution that was analyzed by spectrophotometry to quantify the degree of dissolution of a particular residue in the suite of test solvents that were selected. Results obtained from this approach were compared with spot-test data (to be discussed) obtained from the cosponsor.

  1. Advanced quantitative measurement methodology in physics education research

    NASA Astrophysics Data System (ADS)

    Wang, Jing

    parts. The first part involves a comparison between item response theory (IRT) and classical test theory (CTT). Both theories provide test item statistics for educational inferences and decisions, and both were applied to Force Concept Inventory data obtained from students enrolled in The Ohio State University. The similarities and differences between the two theories were examined, along with possible explanations for the differences. The study suggests that item response theory is more sensitive to the context and conceptual features of the test items than classical test theory, and that the IRT parameters provide a better measure than CTT parameters for the educational audience to investigate item features. The second part of the dissertation concerns measures of association for binary data. In quantitative assessment, binary data are often encountered because of their simplicity, yet the currently popular measures of association fail under some extremely unbalanced conditions, and such conditions are not rare in educational data. Two popular association measures, Pearson's correlation and the tetrachoric correlation, are examined. A new method, model-based association, is introduced, and an educational testing constraint is discussed. The existing popular methods are compared with the model-based association measure with and without the constraint. Connections between the value of association and the context and conceptual features of questions are discussed in detail. Results show that all the methods have advantages and disadvantages, and special attention to the test and data conditions is necessary. The last part of the dissertation is focused on exploratory factor analysis (EFA). The theoretical advantages of EFA are discussed, and typical misunderstandings and misuses of EFA are explored. 
The EFA is performed on Lawson's Classroom Test of Scientific Reasoning (LCTSR), a widely used assessment on scientific reasoning skills. The

  2. Methodological considerations for measuring spontaneous physical activity in rodents.

    PubMed

    Teske, Jennifer A; Perez-Leighton, Claudio E; Billington, Charles J; Kotz, Catherine M

    2014-05-15

    When exploring biological determinants of spontaneous physical activity (SPA), it is critical to consider whether methodological factors differentially affect rodents and the measured SPA. We determined whether acclimation time, sensory stimulation, vendor, or chamber size affected measures in rodents with varying propensity for SPA. We used principal component analysis to determine which SPA components (ambulatory and vertical counts, time in SPA, and distance traveled) best described the variability in SPA measurements. We compared radiotelemetry and infrared photobeams used to measure SPA and exploratory activity. Acclimation time, sensory stimulation, vendor, and chamber size independently influenced SPA, and the effect was moderated by the propensity for SPA. A 24-h acclimation period prior to SPA measurement was sufficient for habituation. Principal component analysis showed that ambulatory and vertical measurements of SPA describe different dimensions of the rodent's SPA behavior. Smaller testing chambers and a sensory attenuation cubicle around the chamber reduced SPA. SPA varies between rodents purchased from different vendors. Radiotelemetry and infrared photobeams differ in their sensitivity to detect phenotypic differences in SPA and exploratory activity. These data highlight methodological considerations in rodent SPA measurement and a need to standardize SPA methodology.
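The principal component step described above can be sketched with a standard eigen-decomposition. The data layout (one row per measurement session, one column per SPA component: ambulatory counts, vertical counts, time in SPA, distance traveled) is an assumption about how such measurements would be arranged:

```python
import numpy as np

def principal_components(X):
    """PCA of an (observations x variables) matrix: standardize the columns,
    eigen-decompose their covariance (proportional to the correlation
    matrix), and return explained-variance ratios in descending order
    together with the corresponding loading vectors (columns)."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0)
    c = np.cov(Z, rowvar=False)
    vals, vecs = np.linalg.eigh(c)       # eigh returns ascending eigenvalues
    order = np.argsort(vals)[::-1]
    return vals[order] / vals.sum(), vecs[:, order]
```

Components whose loadings separate ambulatory from vertical variables would support the paper's finding that those measurements describe different dimensions of SPA behavior.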

  3. Design methodology of an automated scattering measurement facility

    NASA Astrophysics Data System (ADS)

    Mazur, D. G.

    1985-12-01

    This thesis addresses the design methodology surrounding an automated scattering measurement facility. A brief historical survey of radar cross-section (RCS) measurements is presented. The electromagnetic theory associated with a continuous wave (CW) background cancellation technique for measuring RCS is discussed as background. In addition, problems associated with interfacing test equipment, data storage and output are addressed. The facility used as a model for this thesis is located at the Air Force Institute of Technology, WPAFB, OH. The design methodology applies to any automated scattering measurement facility. A software package incorporating features that enhance students' operation of AFIT's facility is presented. Finally, sample outputs from the software package illustrate formats for displaying RCS data.

  4. A new methodology to measure the running biomechanics of amputees.

    PubMed

    Wilson, James Richard; Asfour, Shihab; Abdelrahman, Khaled Zakaria; Gailey, Robert

    2009-09-01

We present a new methodology for measuring the running biomechanics of amputees. It combines a spring-mass model and a symmetry index, two standard techniques in the biomechanics literature that have not previously been used in concert to evaluate amputee biomechanics. The methodology was examined in a pilot study of two transtibial amputee sprinters and showed biomechanically quantifiable changes for small adjustments in prosthetic prescription. Vertical ground reaction forces were measured in several trials for two transtibial amputees running at constant speed. A spring-mass model was used in conjunction with a symmetry index to observe the effect of varying prosthetic height and stiffness on running biomechanics. All spring-mass variables were significantly affected by changes in prosthetic prescription in the two subjects tested (p < 0.05). When prosthetic height was changed, both subjects showed significant differences in Δy_max, Δl and contact time (t_c) on the prosthetic limb, and in k_vert and k_leg on the sound limb. The symmetry indices calculated for the spring-mass variables were all significantly affected by changes in prosthetic prescription for the male subject, and all but that for peak force (F_peak) for the female subject. This methodology is a straightforward tool for evaluating the effect of changes to prosthetic prescription.
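The two techniques being combined have compact standard definitions in the biomechanics literature; a minimal sketch follows, with all numerical values hypothetical (the functions and numbers below are not from the paper).

```python
def vertical_stiffness(f_peak_n, dy_max_m):
    """Spring-mass model: k_vert = F_peak / Δy_max, in N/m."""
    return f_peak_n / dy_max_m

def symmetry_index(prosthetic, sound):
    """Symmetry index in percent; 0 indicates perfect inter-limb symmetry."""
    return 100.0 * (prosthetic - sound) / (0.5 * (prosthetic + sound))

# Hypothetical per-limb values for a single trial
k_prosthetic = vertical_stiffness(f_peak_n=1800.0, dy_max_m=0.060)  # ≈ 30000 N/m
k_sound = vertical_stiffness(f_peak_n=1900.0, dy_max_m=0.050)       # ≈ 38000 N/m
print(symmetry_index(k_prosthetic, k_sound))  # negative: prosthetic limb less stiff
```

The same index can be computed for any spring-mass variable (Δy_max, Δl, t_c, ...), which is how per-variable symmetry changes across prosthetic prescriptions can be compared.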

  5. National working conditions surveys in Latin America: comparison of methodological characteristics

    PubMed Central

    Merino-Salazar, Pamela; Artazcoz, Lucía; Campos-Serna, Javier; Gimeno, David; Benavides, Fernando G.

    2015-01-01

    Background: High-quality and comparable data to monitor working conditions and health in Latin America are not currently available. In 2007, multiple Latin American countries started implementing national working conditions surveys. However, little is known about their methodological characteristics. Objective: To identify commonalities and differences in the methodologies of working conditions surveys (WCSs) conducted in Latin America through 2013. Methods: The study critically examined WCSs in Latin America between 2007 and 2013. Sampling design, data collection, and questionnaire content were compared. Results: Two types of surveys were identified: (1) surveys covering the entire working population and administered at the respondent's home and (2) surveys administered at the workplace. There was considerable overlap in the topics covered by the dimensions of employment and working conditions measured, but less overlap in terms of health outcomes, prevention resources, and activities. Conclusions: Although WCSs from Latin America are similar, there was heterogeneity across surveyed populations and location of the interview. Reducing differences in surveys between countries will increase comparability and allow for a more comprehensive understanding of occupational health in the region. PMID:26079314

  6. Object shape-based optical sensing methodology and system for condition monitoring of contaminated engine lubricants

    NASA Astrophysics Data System (ADS)

    Bordatchev, Evgueni; Aghayan, Hamid; Yang, Jun

    2014-03-01

The presence of contaminants such as gasoline, moisture, and coolant in engine lubricant indicates mechanical failure within the engine and significantly reduces lubricant quality. This paper describes a novel sensing system, its methodology and its experimental verification for analysis of the presence of contaminants in engine lubricants. The sensing methodology is based on statistical shape analysis, exploiting the distortion effect observed when an object is imaged through a thin random optical medium. The novelty of the proposed sensing system lies in the employed methodology, in which an object with a known periodic shape is introduced behind a thin film of the contaminated lubricant. An acquired image then represents a combined lubricant-object optical appearance, in which the a priori known periodic structure of the object is distorted by the contaminated lubricant. The object, a stainless steel woven wire cloth with a mesh size of 65×65 µm² and a circular wire diameter of 33 µm, was placed behind a microfluidic channel containing engine lubricant, and optical images of the flowing lubricant with the stationary object were acquired and analyzed. Several parameters of the acquired optical images were proposed, such as the color of the lubricant and object, the object shape width at the object and lubricant levels, the object relative color, and an object width non-uniformity coefficient. These parameters were measured on-line for optical analysis of fresh and contaminated lubricants. Estimation of contaminant presence and lubricant condition was performed by comparing the parameters for fresh and contaminated lubricants. The developed methodology was verified experimentally, showing the ability to distinguish lubricants with 1%, 4%, 7%, and 10% coolant, gasoline and water contamination individually, and in combinations of coolant (0%-5%) and gasoline (0%-5%).

  7. From lab to field conditions: a pilot study on EEG methodology in applied sports sciences.

    PubMed

    Reinecke, Kirsten; Cordes, Marjolijn; Lerch, Christiane; Koutsandréou, Flora; Schubert, Michael; Weiss, Michael; Baumeister, Jochen

    2011-12-01

Although neurophysiological aspects have become more important in sports and exercise sciences in recent years, it has not been possible to measure cortical activity during performance outside a laboratory, due in particular to equipment limits and movement artifacts. With this pilot study we investigated whether electroencephalography (EEG) data obtained during a laboratory golf putting task differ from those obtained during a comparable putting task under field conditions. Parameters of working memory (frontal Theta and parietal Alpha 2 power) were recorded under these two conditions. Statistical calculations demonstrated a significant difference between the "field" and "laboratory" putting conditions only for Theta power at F4. These findings support the idea that brain activity patterns obtained under laboratory conditions are comparable, but not equivalent, to those obtained under field conditions. Additionally, we were able to show that EEG appears to be a reliable tool for observing brain activity under field conditions in a golf putting task. However, considering the persistent problem of movement artifacts during EEG measurements, eligible sports and exercises are limited to those that are relatively motionless during execution. Further studies are needed to confirm these pilot results.

  8. Methodology of high-resolution photography for mural condition database

    NASA Astrophysics Data System (ADS)

    Higuchi, R.; Suzuki, T.; Shibata, M.; Taniguchi, Y.

    2015-08-01

Digital documentation is one of the most useful techniques for recording the condition of cultural heritage. Recently, high-resolution images have become increasingly useful because they can show both general views of mural paintings and detailed mural conditions in a single image. As mural paintings are damaged by environmental stresses, it is necessary to record the details of painting condition on high-resolution base maps. Unfortunately, the cost of high-resolution photography and the difficulty of operating its instruments and software have commonly been an impediment for researchers and conservators. However, recent developments in graphic software make the process simpler and less expensive. In this paper, we suggest a new approach to building digital heritage inventories without special instruments, based on our recent research project at the Üzümlü church in Cappadocia, Turkey. This method enables a high-resolution image database to be achieved at low cost, in a short time, and with limited human resources.

  9. Novel CD-SEM measurement methodology for complex OPCed patterns

    NASA Astrophysics Data System (ADS)

    Lee, Hyung-Joo; Park, Won Joo; Choi, Seuk Hwan; Chung, Dong Hoon; Shin, Inkyun; Kim, Byung-Gook; Jeon, Chan-Uk; Fukaya, Hiroshi; Ogiso, Yoshiaki; Shida, Soichi; Nakamura, Takayuki

    2014-07-01

As lithography design rules shrink, accuracy and precision of critical dimension (CD) measurement and controllability of heavily OPCed patterns are required in semiconductor production. Critical dimension scanning electron microscopes (CD-SEMs) are essential tools for confirming mask quality metrics such as CD control, CD uniformity and CD mean to target (MTT). Repeatability and reproducibility (R&R) performance depends fundamentally on the length of the region of interest (ROI), so the measured CD can fluctuate easily in the extremely narrow regions of OPCed patterns. Given that premise, it is very difficult to define the MTT and uniformity of complex OPCed masks using the conventional SEM measurement approach. To overcome these difficulties, we evaluated design-based metrology (DBM) using the large field of view (LFOV) of a CD-SEM. DBM can standardize measurement points and positions within the LFOV based on the inflections/jogs of OPCed patterns, and has thus realized several thousand multi-ROI measurements with an averaged CD. This new measurement technique removes local CD errors and improves the statistical treatment of the entire mask, enhancing the representativeness of global CD uniformity. With this study we confirmed the new technique as a more reliable methodology for complex OPCed patterns than the conventional approach. This paper summarizes the DBM-with-LFOV experiments on various types of patterns and compares them with current CD-SEM methods.
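The statistical benefit of averaging several thousand ROIs can be illustrated with a simple simulation (synthetic numbers, not mask data): the scatter of the averaged CD shrinks roughly as 1/√N, which is why local CD errors wash out.

```python
import numpy as np

rng = np.random.default_rng(42)
true_cd_nm = 80.0
local_noise_nm = 2.0   # hypothetical local CD error on a single short ROI

# CD estimated from one ROI vs. the average of 1000 ROIs, repeated 2000 times
single_roi = rng.normal(true_cd_nm, local_noise_nm, 2000)
multi_roi = rng.normal(true_cd_nm, local_noise_nm, (2000, 1000)).mean(axis=1)

print(single_roi.std())  # ≈ 2.0 nm
print(multi_roi.std())   # ≈ 2.0 / sqrt(1000) ≈ 0.06 nm
```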

  10. Methodology for evaluating statistically predicted versus measured imagery

    NASA Astrophysics Data System (ADS)

    Kooper, Rob; Bajcsy, Peter; Andersh, Dennis

    2005-05-01

We present a novel methodology for evaluating statistically predicted versus measured multi-modal imagery, such as Synthetic Aperture Radar (SAR), Electro-Optical (EO), Multi-Spectral (MS) and Hyper-Spectral (HS) modalities. While several scene modeling approaches have been proposed in the past for multi-modal image prediction, the problem of evaluating synthetic against measured images has remained open. Although analytical prediction models are appropriate for accuracy evaluations of man-made objects (for example, Xpatch-based SAR target modeling), they cannot be applied to prediction evaluation of natural scenes, because of the randomness and high geometrical complexity of such scenes as imaged by any of the aforementioned sensor modalities. Statistical prediction models are therefore frequently chosen as the more appropriate scene modeling approach, and there is a need to evaluate the accuracy of statistically predicted versus measured imagery. This problem poses challenges in selecting quantitative and qualitative evaluation techniques and in establishing a methodology for systematic comparison of synthetic and measured images. In this work, we demonstrate clutter accuracy evaluations for modified measured and predicted synthetic images with statistically modeled clutter. We show experimental results for color (red, green and blue) and HS imaging modalities, and for statistical clutter models using Johnson's family of probability distribution functions (PDFs). The methodology includes several evaluation techniques for comparing image samples and their similarity, image histograms, statistical central moments, and estimated PDFs. In particular, we quantitatively assess correlation-, histogram-, chi-squared-, pixel- and PDF-parameter-based error metrics, and relate them to human visual perception of predicted image quality. The work is directly applicable to multi-sensor phenomenology modeling for exploitation.
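As an illustration of the histogram- and chi-squared-based error metrics mentioned in the abstract, the following sketch scores a close prediction and an unrelated image against a measured one; all arrays are synthetic and the metric is a generic symmetric chi-squared distance, not necessarily the paper's exact formulation.

```python
import numpy as np

def chi_squared_distance(h1, h2, eps=1e-12):
    """Symmetric chi-squared distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(1)
measured = np.clip(rng.normal(120.0, 20.0, (64, 64)), 0, 255)
predicted = np.clip(measured + rng.normal(0.0, 5.0, (64, 64)), 0, 255)
unrelated = rng.uniform(0.0, 255.0, (64, 64))

bins = np.linspace(0.0, 255.0, 33)
def hist(img):
    return np.histogram(img, bins=bins, density=True)[0]

d_pred = chi_squared_distance(hist(measured), hist(predicted))
d_unrel = chi_squared_distance(hist(measured), hist(unrelated))
print(d_pred, d_unrel)  # the close prediction scores a much smaller distance
```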

  11. Methodology of modeling and measuring computer architectures for plasma simulations

    NASA Technical Reports Server (NTRS)

    Wang, L. P. T.

    1977-01-01

A brief introduction is given to plasma simulation using computers and to the difficulties posed by currently available computers. Using an analysis and measurement methodology, SARA, the control flow and data flow of a particle simulation model, REM2-1/2D, are exemplified. After recursive refinements, the total execution time may be greatly shortened and a fully parallel data flow can be obtained. From this data flow, a matched computer architecture or organization can be configured to achieve the computation bound of an application problem. A sequential-type simulation model, an array/pipeline-type simulation model, and a fully parallel simulation model of the code REM2-1/2D are proposed and analyzed. This methodology can be applied to other application problems of an implicitly parallel nature.

  12. Simple methodologies for spectral emissivity measurement of rocks

    NASA Astrophysics Data System (ADS)

    Danov, Miroslav; Borisova, Denitsa; Stoyanov, Dimitar; Petkov, Doyno

The presented investigation focuses on ground-based measurement of spectral emissivity in the 8-14 µm interval, a spectral region generally used for investigating vegetation, rock and water surfaces. The amount of radiated energy is a function of the object's temperature and its emissivity; emissivity data are therefore very useful for assessing object temperature in thermal imaging. We have developed and compared two simplified methodologies for measuring the spectral emissivity of rock and mineral samples, avoiding the additional heating or grinding of the samples that accompanies other techniques. The first is related to measurement of the hemispherical spectral emissivity, while the second concerns measurement of the directional spectral emissivity of samples. Both methodologies are suitable for laboratory and field measurements of samples with a small active area (10 cm²). As an illustration of the hemispherical approach, the emissivity spectrum of limestone is presented. Most frequently, reported emissivity refers to the normal emissivity; however, the directionality of emissivity has an important effect on measurements, for example when land surface temperature is deduced. A simple methodology for measuring the directional emissivity is therefore proposed and developed. It is based on a collimated infrared (IR) source irradiating the investigated sample; the IR radiation reflected by the sample is collected by a lithium tantalate pyroelectric detector. Spectral resolution of the reflected emission is provided by a set of 30 narrow-band transmission filters, and a phase-sensitive detection technique is used to enhance the signal-to-noise ratio. The registered data are processed by a PC. The measuring process will be discussed and the experimentally measured directional emissivity spectra of rock samples will be presented.
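The reflective measurement principle rests on Kirchhoff's law for opaque samples: directional emissivity equals one minus directional reflectance. A minimal sketch follows, with all detector readings hypothetical and the per-filter ratio estimation an assumption for illustration.

```python
def directional_emissivity(sample_signal, reference_signal, reference_reflectance=1.0):
    """Kirchhoff's law for an opaque sample: emissivity = 1 - reflectance.

    Per narrow-band filter, reflectance is estimated as the ratio of the
    signal reflected by the sample to that reflected by a reference of
    known reflectance.
    """
    reflectance = reference_reflectance * sample_signal / reference_signal
    return 1.0 - reflectance

# Hypothetical lock-in detector readings for three of the 30 filters
sample = [0.12, 0.08, 0.05]      # sample in the collimated IR beam
reference = [1.00, 1.00, 1.00]   # near-perfect reflector in the same beam
spectrum = [directional_emissivity(s, r) for s, r in zip(sample, reference)]
print(spectrum)  # ≈ [0.88, 0.92, 0.95]
```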

  13. Conditioning natural gas for measurement and transportation

    SciTech Connect

    Barnhard, E.E.

    1984-04-01

This paper discusses methods of conditioning natural gas for measurement and transportation. Gas mixtures measured at the wellhead or into a gathering system may not yet be conditioned to pipeline standards at the point of measurement; because title to the gas passes from the seller to the buyer at that point, it is sometimes necessary to measure the gas flow accurately without complete conditioning. Careful study of the conditioning steps the gas has completed, or that must be performed prior to measurement, will affect the selection of the measurement equipment and the success of its operation.

  14. Physiological outflow boundary conditions methodology for small arteries with multiple outlets: a patient-specific hepatic artery haemodynamics case study.

    PubMed

    Aramburu, Jorge; Antón, Raúl; Bernal, Nebai; Rivas, Alejandro; Ramos, Juan Carlos; Sangro, Bruno; Bilbao, José Ignacio

    2015-04-01

Physiological outflow boundary conditions are necessary to carry out computational fluid dynamics simulations that reliably represent blood flow through arteries. When dealing with complex three-dimensional trees of small arteries, and therefore with multiple outlets, robustness and speed of convergence are also important. This study derives physiological outflow boundary conditions for cases in which the physiological values at the outlets are not known (neither in vivo measurements nor literature-based values are available) and in which the tree exhibits symmetry to some extent. The inputs of the methodology are the three-dimensional domain, the flow rate waveform, and the systolic and diastolic pressures at the inlet. The derived outflow boundary conditions, a physiological pressure waveform for each outlet, are based on the results of a zero-dimensional model simulation. The methodology assumes symmetrical branching and is able to tackle the flow-distribution problem when the domain outlets lie on branches with different numbers of upstream bifurcations. The methodology is applied to a group of patient-specific arteries in the liver. It is considered valid because the pulsatile computational fluid dynamics simulation, with the inflow flow rate waveform (an input of the methodology) and the derived outflow boundary conditions, leads to physiological results: the resulting systolic and diastolic pressures at the inlet match the inputs of the methodology, and the flow split is also physiological.
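The symmetrical-branching assumption implies a simple flow split: an outlet downstream of n bifurcations receives 1/2^n of the inlet flow, even when outlets sit at different depths of the tree. A minimal sketch (the example tree is hypothetical, not one of the hepatic geometries):

```python
def outlet_flow_fraction(n_upstream_bifurcations):
    """Under symmetrical branching, each bifurcation halves the flow."""
    return 0.5 ** n_upstream_bifurcations

# Hypothetical tree: one outlet after 1 bifurcation, one after 2, two after 3
fractions = [outlet_flow_fraction(n) for n in (1, 2, 3, 3)]
print(fractions)       # [0.5, 0.25, 0.125, 0.125]
print(sum(fractions))  # 1.0 — the inlet flow is fully distributed
```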

  15. A new drop-shape methodology for surface tension measurement

    NASA Astrophysics Data System (ADS)

    Cabezas, M. G.; Bateni, A.; Montanero, J. M.; Neumann, A. W.

    2004-11-01

Drop-shape techniques, such as axisymmetric drop-shape analysis (ADSA), have been widely used to measure surface tension. In the current schemes, theoretical curves are fitted to the experimental profiles by adjusting the value of surface tension, and the best match between theoretical and experimental profiles identifies the surface tension of the drop. Extracting the experimental drop profile using edge detection is an important part of current drop-shape techniques. However, edge detection fails when acquisition of sharp images is not possible due to experimental or optical limitations. A new drop-shape approach is presented that eliminates the need for edge detection and provides a wider range of applicability. The new methodology, called theoretical image fitting analysis (TIFA), generates theoretical images of the drop and forms an error function that describes the pixel-by-pixel deviation of the theoretical image from the experimental one. Taking surface tension as an adjustable parameter, TIFA minimizes the error function, i.e. fits the theoretical image to the experimental one. The validity of the new methodology is examined by comparing its results with those of ADSA. With the new methodology it is finally possible to extend the study of the surface tension of lung surfactants to higher concentrations; owing to the opaqueness of the solution, such studies were heretofore limited to low surfactant concentrations.
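The error-function idea behind TIFA can be sketched generically: generate a theoretical image for each trial parameter value, score it pixel by pixel against the experimental image, and keep the best fit. The toy model below uses a Gaussian stand-in image and a brute-force grid search, not the actual Laplace-based drop profile or TIFA's optimizer.

```python
import numpy as np

y, x = np.mgrid[-1:1:32j, -1:1:32j]

def theoretical_image(width):
    """Toy stand-in for the physics-based drop-image generator."""
    return np.exp(-(x**2 + y**2) / width)

def pixel_error(width, experimental):
    """Pixel-by-pixel squared deviation, in the spirit of TIFA's error function."""
    return np.sum((theoretical_image(width) - experimental) ** 2)

experimental = theoretical_image(0.3)      # stand-in for the camera image
candidates = np.linspace(0.1, 1.0, 91)     # trial values of the fitted parameter
best = candidates[np.argmin([pixel_error(w, experimental) for w in candidates])]
print(best)  # the grid search recovers the generating value, ≈ 0.3
```

No edge extraction is needed anywhere in this loop, which is the practical advantage the abstract emphasizes for blurry or low-contrast images.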

  16. Methodology for measurement of shielding effectiveness in the microwave range

    NASA Astrophysics Data System (ADS)

    Baeckstroem, M.; Jansson, L.

    1995-02-01

In the last few years, increased attention has been paid to the threat posed by pulsed high-power microwaves to the operational reliability of electrical systems. In system design and hardness verification, determination of shielding effectiveness plays an important role. This report briefly summarizes the authors' experience with measurements, presentation and evaluation of data from shielding-effectiveness measurements on electronic compartments in the microwave range, primarily between 0.4 and 18 GHz. The design and function of field probes are dealt with, especially as concerns measurements in electrically small spaces. Guiding principles for the choice of frequency resolution and of probe locations are given. The significance of different ways of exciting the test object is discussed, as is a method to determine the receiving cross-section of cables. Proposals are presented on how to report measurement data and how to evaluate data with respect to design requirements. The report is meant both to serve as guidance on how to perform shielding measurements and to provide a foundation for future work to improve and further develop the methodology. Further development is particularly required for shielding measurements in electrically small cavities.
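Shielding effectiveness itself has a standard definition underlying such measurements: the ratio, in decibels, of the field measured without the shield to the field measured behind it. A minimal helper (the probe values are hypothetical):

```python
import math

def shielding_effectiveness_db(field_reference, field_shielded):
    """SE = 20*log10(E_ref / E_shielded) for field quantities, in dB."""
    return 20.0 * math.log10(field_reference / field_shielded)

# Hypothetical probe readings at one frequency point (arbitrary field units)
print(shielding_effectiveness_db(1000.0, 1.0))  # 60.0 dB
```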

  17. A Methodology for Measuring Strain in Power Semiconductors

    NASA Astrophysics Data System (ADS)

    Avery, Seth M.

The objective of this work is to develop a strain measurement methodology for use in power electronics during electrical operation, such that strain models can be developed and used as the basis of an active strain controller, improving the reliability of power electronics modules. This research develops electronic speckle pattern interferometry (ESPI) into a technology capable of measuring thermo-mechanical strain in electrically active power semiconductors. ESPI is a non-contact optical technique capable of high-resolution (approx. 10 nm) surface displacement measurements. This work developed a 3-D ESPI test stand in which simultaneously measured in-plane and out-of-plane components are combined to accurately determine full-field surface displacement. Two cameras are used to capture both local (interconnect-level) displacements and strains and global (device-level) displacements. Methods have been developed to enable strain measurements under larger loads while avoiding speckle decorrelation, which limits ESPI measurement of large deformations. A method of extracting strain estimates directly from unfiltered and wrapped phase maps has been developed, simplifying data analysis. Experimental noise measurements are made and used to develop optimal filtering, using model-based tracking and the determined strain noise characteristics. The experimental results of this work are strain measurements made on the surface of the leadframe of an electrically active IGBT. A model-based tracking technique has been developed to allow the optimal strain solution to be extracted from noisy displacement results, and an experimentally validated thermo-mechanical finite-element strain model has been developed. These results demonstrate that in situ strain measurements in power devices are feasible. Using the procedures developed in this work, strain measurements can be made at relevant power levels at the critical locations of strain that limit device reliability.

  18. Select Methodology for Validating Advanced Satellite Measurement Systems

    NASA Technical Reports Server (NTRS)

    Larar, Allen M.; Zhou, Daniel K.; Liu, Xi; Smith, William L.

    2008-01-01

    Advanced satellite sensors are tasked with improving global measurements of the Earth's atmosphere, clouds, and surface to enable enhancements in weather prediction, climate monitoring capability, and environmental change detection. Measurement system validation is crucial to achieving this goal and maximizing research and operational utility of resultant data. Field campaigns including satellite under-flights with well calibrated FTS sensors aboard high-altitude aircraft are an essential part of the validation task. This presentation focuses on an overview of validation methodology developed for assessment of high spectral resolution infrared systems, and includes results of preliminary studies performed to investigate the performance of the Infrared Atmospheric Sounding Interferometer (IASI) instrument aboard the MetOp-A satellite.

  19. Conditional Standard Error of Measurement in Prediction.

    ERIC Educational Resources Information Center

    Woodruff, David

    1990-01-01

A method of estimating conditional standard error of measurement at specific score/ability levels is described that avoids theoretical problems identified for previous methods. The method focuses on variance of observed scores conditional on a fixed value of an observed parallel measurement, decomposing these variances into true and error parts.

  20. Measuring user experience in digital gaming: theoretical and methodological issues

    NASA Astrophysics Data System (ADS)

    Takatalo, Jari; Häkkinen, Jukka; Kaistinen, Jyrki; Nyman, Göte

    2007-01-01

There are innumerable concepts, terms and definitions for user experience, few of which have a solid empirical foundation. In trying to understand user experience in interactive technologies such as computer games and virtual environments, reliable and valid concepts are needed for measuring relevant user reactions and experiences. Here we present our approach to creating theoretically and methodologically sound methods for quantifying the rich user experience in different digital environments. Our approach is based on the idea that the experience received from content presented with a specific technology is always the result of a complex psychological interpretation process whose components should be understood. The main aim of our approach is to grasp the complex and multivariate nature of the experience and make it measurable. We present our two basic measurement frameworks, which have been developed and tested on a large data set (n = 2182). The 15 measurement scales extracted from these models are applied to digital gaming with a head-mounted display and a table-top display. The results show how it is possible to map between experience, technology variables and the background of the user (e.g., gender). This approach can help to optimize, for example, content for specific viewing devices or viewing situations.

  1. Methodological considerations for measuring glucocorticoid metabolites in feathers

    PubMed Central

    Berk, Sara A.; McGettrick, Julie R.; Hansen, Warren K.; Breuner, Creagh W.

    2016-01-01

    In recent years, researchers have begun to use corticosteroid metabolites in feathers (fCORT) as a metric of stress physiology in birds. However, there remain substantial questions about how to measure fCORT most accurately. Notably, small samples contain artificially high amounts of fCORT per millimetre of feather (the small sample artefact). Furthermore, it appears that fCORT is correlated with circulating plasma corticosterone only when levels are artificially elevated by the use of corticosterone implants. Here, we used several approaches to address current methodological issues with the measurement of fCORT. First, we verified that the small sample artefact exists across species and feather types. Second, we attempted to correct for this effect by increasing the amount of methanol relative to the amount of feather during extraction. We consistently detected more fCORT per millimetre or per milligram of feather in small samples than in large samples even when we adjusted methanol:feather concentrations. We also used high-performance liquid chromatography to identify hormone metabolites present in feathers and measured the reactivity of these metabolites against the most commonly used antibody for measuring fCORT. We verified that our antibody is mainly identifying corticosterone (CORT) in feathers, but other metabolites have significant cross-reactivity. Lastly, we measured faecal glucocorticoid metabolites in house sparrows and correlated these measurements with corticosteroid metabolites deposited in concurrently grown feathers; we found no correlation between faecal glucocorticoid metabolites and fCORT. We suggest that researchers should be cautious in their interpretation of fCORT in wild birds and should seek alternative validation methods to examine species-specific relationships between environmental challenges and fCORT. PMID:27335650

  2. Experimental Methodology for Measuring Combustion and Injection-Coupled Responses

    NASA Technical Reports Server (NTRS)

    Cavitt, Ryan C.; Frederick, Robert A.; Bazarov, Vladimir G.

    2006-01-01

    A Russian scaling methodology for liquid rocket engines utilizing a single, full scale element is reviewed. The scaling methodology exploits the supercritical phase of the full scale propellants to simplify scaling requirements. Many assumptions are utilized in the derivation of the scaling criteria. A test apparatus design is presented to implement the Russian methodology and consequently verify the assumptions. This test apparatus will allow researchers to assess the usefulness of the scaling procedures and possibly enhance the methodology. A matrix of the apparatus capabilities for a RD-170 injector is also presented. Several methods to enhance the methodology have been generated through the design process.

  3. Innovative methodologies and technologies for thermal energy release measurement.

    NASA Astrophysics Data System (ADS)

    Marotta, Enrica; Peluso, Rosario; Avino, Rosario; Belviso, Pasquale; Caliro, Stefano; Carandente, Antonio; Chiodini, Giovanni; Mangiacapra, Annarita; Petrillo, Zaccaria; Sansivero, Fabio; Vilardo, Giuseppe; Marfe, Barbara

    2016-04-01

Volcanoes exchange heat, gases and other fluids between the interior of the Earth and its atmosphere, influencing processes both at the surface and above it. This work is devoted to improving knowledge of the parameters that control the anomalies in heat flux and chemical species emissions associated with the diffuse degassing processes of volcanic and hydrothermal zones. We are studying and developing innovative medium-range remote sensing technologies to measure the variation through time of heat flux and chemical emissions, in order to sharpen the definition of a volcano's activity state and allow better assessment of the related hazard and risk mitigation. The current methodologies used to measure heat flux (i.e. via CO2 flux or temperature gradient) are either inefficient or ineffective, and are unable to detect short- to medium-term (days to months) variation trends in the heat flux. Remote sensing of these parameters will allow measurements faster than the already accredited methods; it will therefore be both more effective and more efficient in an emergency and will be usable for quick routine monitoring. We are currently developing a method based on drone-borne IR cameras to measure the ground surface temperature, which, in a purely conductive regime, is directly correlated with the shallow temperature gradient. The use of flying drones will allow quick mapping of areas with thermal anomalies and measurement of their temperature at distances on the order of hundreds of metres. Remote sensing will be developed further through the use, on flying drones, of multispectral and/or hyperspectral sensors and UV scanners, in order to detect the amount of chemical species released into the atmosphere.
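The conductive-regime link between temperature gradient and heat flux can be made concrete with Fourier's law; the conductivity and temperature values below are hypothetical, for illustration only.

```python
def conductive_heat_flux(thermal_conductivity, t_depth, t_surface, depth):
    """Fourier's law in a purely conductive regime: q = k * dT/dz, in W/m^2."""
    return thermal_conductivity * (t_depth - t_surface) / depth

# Hypothetical values: k = 1.0 W/(m K), 35 deg C at 0.3 m depth, 20 deg C at surface
print(conductive_heat_flux(1.0, 35.0, 20.0, 0.3))  # ≈ 50 W/m^2
```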

  4. Object recognition testing: methodological considerations on exploration and discrimination measures.

    PubMed

    Akkerman, Sven; Blokland, Arjan; Reneerkens, Olga; van Goethem, Nick P; Bollen, Eva; Gijselaers, Hieronymus J M; Lieben, Cindy K J; Steinbusch, Harry W M; Prickaerts, Jos

    2012-07-01

The object recognition task (ORT) is a popular one-trial learning test for animals. In the current study, we investigated several methodological issues concerning the task. Data were pooled from 28 ORT studies comprising 731 male Wistar rats. We investigated the relationship between three common absolute and relative discrimination measures, as well as their relation to exploratory activity. In this context, the effects of pre-experimental habituation, object familiarity, trial duration, retention interval and the amnesic drugs MK-801 and scopolamine were investigated. Our analyses showed that the ORT is very sensitive, capable of detecting subtle differences in memory (discrimination) and exploratory performance; as a consequence, it is susceptible to potential biases due to (injection) stress and drug side effects. Our data indicated that a minimum amount of exploration is required in the sample and test trials for stable, significant discrimination performance, although there was no relationship between the level of exploration in the sample trial and discrimination performance. In addition, the level of exploration in the test trial was positively related to the absolute discrimination measure, whereas this was not the case for relative discrimination measures, which correct for exploratory differences and are thus more resistant to exploration biases. Animals appeared to remember object information over multiple test sessions. Therefore, when animals have encountered both objects in prior test sessions, the object preference observed in the test trial at 1 h retention intervals is probably due to a relative difference in familiarity between the objects, rather than true novelty per se. Taken together, our findings suggest taking into consideration pre-experimental exposure (familiarization) to objects, habituation to treatment procedures, and the use of relative discrimination measures when using the ORT.
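Absolute and relative discrimination measures are conventionally computed from the exploration times of the novel (a) and familiar (b) objects; the sketch below uses the common d1 = a - b and d2 = (a - b)/(a + b) forms with hypothetical values, and is an illustration rather than the paper's own code.

```python
def absolute_discrimination(novel_s, familiar_s):
    """d1 = a - b: absolute difference in exploration time (s)."""
    return novel_s - familiar_s

def discrimination_index(novel_s, familiar_s):
    """d2 = (a - b) / (a + b): relative measure correcting for total exploration."""
    total = novel_s + familiar_s
    return (novel_s - familiar_s) / total if total else 0.0

# Two hypothetical rats with the same preference but different activity levels
print(absolute_discrimination(20.0, 10.0), discrimination_index(20.0, 10.0))
print(absolute_discrimination(4.0, 2.0), discrimination_index(4.0, 2.0))
# d1 scales with overall activity; d2 is identical for both animals
```

This is exactly the exploration-bias issue the abstract describes: the absolute measure tracks test-trial activity, while the relative index does not.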

  5. Contact angle measurements under thermodynamic equilibrium conditions.

    PubMed

    Lages, Carol; Méndez, Eduardo

    2007-08-01

The precise control of the ambient humidity during contact angle measurements is needed to obtain stable and valid data. For this purpose, a simple low-cost device was designed, and several modified surfaces relevant to biosensor design were studied. Static contact angle values for these surfaces are lower than advancing contact angles published for ambient conditions, indicating that thermodynamic equilibrium conditions are needed to avoid drop evaporation during the measurements.

  6. Measuring persistence: A literature review focusing on methodological issues

    SciTech Connect

    Wolfe, A.K.; Brown, M.A.; Trumble, D.

    1995-03-01

This literature review was conducted as part of a larger project to produce a handbook on the measurement of persistence. The past decade has marked the development of the concept of persistence and a growing recognition that the long-term impacts of demand-side management (DSM) programs warrant careful assessment. Although increasing attention has been paid to the topic of persistence, no clear consensus has emerged either about its definition or about the methods most appropriate for its measurement and analysis. This project strives to fill that gap by reviewing the goals, terminology, and methods of past persistence studies. It was conducted from the perspective of a utility that seeks to acquire demand-side resources and is interested in their long-term durability; it was not conducted from the perspective of the individual consumer. Over 30 persistence studies, articles, and protocols were examined for this report. The review begins by discussing the underpinnings of persistence studies: namely, the definitions of persistence and the purposes of persistence studies. Then, it describes issues relevant to both the collection and analysis of data on the persistence of energy and demand savings. Findings from persistence studies also are summarized. Throughout the review, four studies are used repeatedly to illustrate different methodological and analytical approaches to persistence so that readers can track the data collection, data analysis, and findings of a set of comprehensive studies that represent alternative approaches.

  7. Methodological Considerations for Hair Cortisol Measurements in Children

    PubMed Central

    Slominski, Radomir; Rovnaghi, Cynthia R.; Anand, Kanwaljeet J. S.

    2015-01-01

    Background Hair cortisol levels are used increasingly as a measure for chronic stress in young children. We propose modifications to the current methods used for hair cortisol analysis to more accurately determine reference ranges for hair cortisol across different populations and age groups. Methods The authors compared standard (finely cutting hair) vs. milled methods for hair processing (n=16), developed a 4-step extraction process for hair protein and cortisol (n=16), and compared liquid chromatography-mass spectrometry (LCMS) vs. ELISA assays for measuring hair cortisol (n=28). The extraction process included sequential incubations in methanol and acetone, repeated twice. Hair protein was measured via spectrophotometric ratios at 260/280 nm to indicate the hair dissolution state using a BioTek® plate reader and dedicated software. Hair cortisol was measured using an ELISA assay kit. Individual (n=13), pooled hair samples (n=12) with high, intermediate, and low cortisol values and the ELISA assay internal standards (n=3) were also evaluated by LCMS. Results Milled and standard methods showed highly correlated hair cortisol (rs=0.951, p<0.0001) and protein values (rs=0.902, p=0.0002), although higher yields of cortisol and protein were obtained from the standard method in 13/16 and 14/16 samples respectively (p<0.05). Four sequential extractions yielded additional amounts of protein (36.5%, 27.5%, 30.5%, 3.1%) and cortisol (45.4%, 31.1%, 15.1%, 0.04%) from hair samples. Cortisol values measured by LCMS and ELISA were correlated (rs=0.737; p<0.0001), although cortisol levels (median [IQR]) detected in the same samples by LCMS (38.7 [14.4, 136] ng/ml) were lower than by ELISA (172.2 [67.9, 1051] ng/ml). LCMS also detected cortisone, which comprised 13.4% (3.7%, 25.9%) of the steroids detected. 
Conclusion Methodological studies suggest that finely cutting hair with sequential incubations in methanol and acetone, repeated twice, extracts greater yields of cortisol
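The method comparisons above are reported as Spearman rank correlations (rs). As a self-contained sketch of how such an rs is computed (the numbers in the test are made up, not the study's assay values), rank both variables with tie-averaged ranks and apply Pearson's formula to the ranks:

```python
def _ranks(xs):
    """Tie-averaged ranks (1-based) of a sequence."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1           # average rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation rs between two equal-length sequences."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```

Running it on paired measurements from two assays (e.g. ELISA vs. LCMS values for the same samples) reproduces the kind of rs statistic the abstract quotes.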

  8. Adaptability of laser diffraction measurement technique in soil physics methodology

    NASA Astrophysics Data System (ADS)

    Barna, Gyöngyi; Szabó, József; Rajkai, Kálmán; Bakacsi, Zsófia; Koós, Sándor; László, Péter; Hauk, Gabriella; Makó, András

    2016-04-01

There are efforts all around the world to harmonize soil particle size distribution (PSD) data obtained by laser diffractometer measurements (LDM) with those of the sedimentation techniques (pipette or hydrometer methods). Unfortunately, depending on the applied methodology (e.g. type of pre-treatment, kind of dispersant, etc.), PSDs from the sedimentation methods (which follow different standards) are themselves dissimilar and can hardly be harmonized with each other. A need therefore arose to build a database containing PSD values measured by the pipette method according to the Hungarian standard (MSZ-08. 0205: 1978) and by LDM according to a widespread and widely used procedure. In this publication the first results of a statistical analysis of the new and growing PSD database are presented: 204 soil samples measured with the pipette method and LDM (Malvern Mastersizer 2000, HydroG dispersion unit) were compared. Applying the usual size limits to the LDM, the clay fraction was highly underestimated and the silt fraction overestimated compared to the pipette method. Consequently, soil texture classes determined from the LDM measurements differ significantly from the results of the pipette method. In line with previous surveys, and to optimize the fit between the two datasets, the clay/silt boundary of the LDM was changed. Comparing the PSDs of the pipette method to those of the LDM, the modified size limits gave higher similarities for the clay and silt fractions. Extending the upper size limit of the clay fraction from 0.002 to 0.0066 mm, and accordingly shifting the lower size limit of the silt fraction, makes the pipette method and LDM more easily comparable. With the modified limit, higher correlations were also found between clay content and water vapor adsorption as well as specific surface area, and the texture classes were found to be less dissimilar. 
The difference between the results of the two kinds of PSD measurement method could be further reduced by taking into account other routinely analyzed soil parameters.
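The effect of moving the clay/silt boundary from 0.002 mm to 0.0066 mm can be sketched on a cumulative PSD curve. The curve in the test, the silt upper limit of 0.05 mm, and the linear interpolation are illustrative assumptions, not the paper's data or procedure:

```python
def cum_at(psd, size_mm):
    """Linearly interpolate the cumulative fraction at a given size.
    psd: list of (size_mm, cumulative_fraction) pairs sorted by size."""
    for (s0, f0), (s1, f1) in zip(psd, psd[1:]):
        if s0 <= size_mm <= s1:
            return f0 + (f1 - f0) * (size_mm - s0) / (s1 - s0)
    raise ValueError("size outside measured range")

def fractions(psd, clay_limit=0.002, silt_limit=0.05):
    """Clay/silt/sand mass fractions for a chosen clay/silt boundary.
    The 0.05 mm silt upper limit is an assumed convention."""
    clay = cum_at(psd, clay_limit)
    silt = cum_at(psd, silt_limit) - clay
    sand = 1.0 - clay - silt
    return clay, silt, sand
```

Calling `fractions(psd)` and `fractions(psd, clay_limit=0.0066)` on the same curve shows directly how the modified boundary transfers mass from the silt to the clay fraction.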

  9. Definition and Validation of a Methodology to Measure the Local Static Stiffness of Large Appendages Interfaces Using Dynamic Measurements

    NASA Astrophysics Data System (ADS)

    Bernasconi, M.; Rodriguez Senin, A.; Laduree, G.

    2014-06-01

This paper describes the methodology developed under the Sentinel 1 spacecraft structure project to measure the local static stiffness of synthetic aperture radar antenna (SAR-A) interfaces using dynamic measurements. The methodology consists of measuring accelerance [acceleration/force] frequency response functions (FRF) at the SAR-A interfaces and at several points selected as boundary conditions used for the derivation of the local stiffness [1]. The accelerance FRF is used to calculate the flexibility FRF [displacement/force], from which the static term of the flexibility is extracted. The static term is obtained via a least squares approximation at low frequency of the real part of the flexibility FRF (curve fitting approach) and extrapolation of the curve down to 0 Hz. Since the test was performed with the launch vehicle adapter ring clamped, the direct results of these measurements lead to global stiffness values. To calculate the local stiffness values, the results were post-processed to subtract the contribution of the global deformation of the spacecraft structure. The local flexibility matrix at the SAR-A interfaces is calculated by imposing zero displacement at those points selected as virtual boundary conditions. Then, the local stiffness components were obtained by inverting the diagonal terms of the local flexibility matrix for the three translational and the two in-plane rotational degrees of freedom at 9 SAR-A interfaces. The results obtained using this methodology were validated with a classical static test at one of the interfaces, showing a good correlation between static and dynamic test results. It was concluded that this methodology is suitable for the verification of the static stiffness of large appendage interfaces and can be applied to future missions that carry large payloads with critical structural interfaces.
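The curve-fit step described above (fit the real part of the flexibility FRF at low frequency, extrapolate to 0 Hz, invert to get stiffness) can be sketched as follows. The single-degree-of-freedom test signal and all numerical values are illustrative assumptions, not Sentinel-1 data, and the sketch covers only the curve fit, not the later subtraction of the global deformation:

```python
import numpy as np

def static_stiffness(freq_hz, flex_real, order=2):
    """Least-squares polynomial fit of the real part of a flexibility
    FRF [displacement/force] against f^2; the 0 Hz intercept is the
    static flexibility, whose inverse is the static stiffness."""
    u = np.asarray(freq_hz, float) ** 2
    coeffs = np.polyfit(u, np.asarray(flex_real, float), order)
    static_flex = np.polyval(coeffs, 0.0)   # extrapolate down to 0 Hz
    return 1.0 / static_flex

# single-DOF sanity check: H(f) = 1 / (k - m * (2*pi*f)^2)
k_true, m = 1.0e7, 50.0                     # N/m and kg, illustrative
f = np.linspace(5.0, 40.0, 50)              # band well below resonance
H = 1.0 / (k_true - m * (2.0 * np.pi * f) ** 2)
k_est = static_stiffness(f, H)
```

Fitting against f^2 rather than f reflects the even low-frequency behavior of the flexibility of a lightly damped structure; the recovered `k_est` should approach `k_true` as the band moves further below resonance.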

  10. Conditional measurements as probes of quantum dynamics

    SciTech Connect

    Siddiqui, Shabnam; Erenso, Daniel; Vyas, Reeta; Singh, Surendra

    2003-06-01

    We discuss conditional measurements as probes of quantum dynamics and show that they provide different ways to characterize quantum fluctuations. We illustrate this by considering the light from a subthreshold degenerate parametric oscillator. Analytic results and curves are presented to illustrate the behavior.

  11. Aircraft emission measurements by remote sensing methodologies at airports

    NASA Astrophysics Data System (ADS)

    Schäfer, Klaus; Jahn, Carsten; Sturm, Peter; Lechner, Bernhard; Bacher, Michael

Emission indices of aircraft engine exhausts measured under operating conditions, which are needed to calculate airport emission inventories precisely, have not been available until now. To determine these data, measurement campaigns were performed on idling aircraft at major European airports using non-intrusive spectroscopic methods such as Fourier transform infrared spectrometry and differential optical absorption spectroscopy. Emission indices for CO and NOx were calculated and compared to the values given in the International Civil Aviation Organisation (ICAO) database. Emission indices were determined for CO for 36 different aircraft engine types and for NOx for 24 different engine types. It was shown that for idling aircraft, CO emissions are underestimated using the ICAO database. The emission indices for NOx determined in this work are lower than those given in the ICAO database. In addition, a high variance of emission indices within each aircraft family and from engine to engine of the same engine type was found. During the same measurement campaigns, the emission indices for CO and NO of eight different types of auxiliary power units were investigated.
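Emission indices of this kind are commonly derived from background-corrected concentration rises via a carbon balance, assuming essentially all fuel carbon leaves the plume as CO2 plus CO. The sketch below uses that generic approach with a typical EI(CO2) for kerosene of roughly 3160 g/kg; it is an illustration of the concept, not the paper's retrieval chain:

```python
# Carbon-balance emission index (hedged sketch): fuel carbon is assumed
# to appear in the plume as CO2 + CO; constants are typical, not measured.
M_CO, M_CO2 = 28.01, 44.01        # molar masses, g/mol
EI_CO2 = 3160.0                   # g CO2 per kg kerosene, typical value

def emission_index_co(d_co_ppm, d_co2_ppm):
    """EI(CO) in g per kg fuel from background-corrected mixing-ratio
    rises of CO and CO2 measured in the exhaust plume."""
    return (d_co_ppm / d_co2_ppm) * (M_CO / M_CO2) * EI_CO2
```

With a 50 ppm CO rise against a 1000 ppm CO2 rise, the formula gives an EI(CO) on the order of 100 g/kg, the magnitude range typical of idling engines.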

  12. Developing a Methodology for Measuring Stress Transients at Seismogenic Depth

    NASA Astrophysics Data System (ADS)

    Silver, P. G.; Niu, F.; Daley, T.; Majer, E.

    2005-05-01

The dependence of crack properties on stress means that crustal seismic velocity exhibits stress dependence. This dependence constitutes, in principle, a powerful means of studying transient changes in stress at seismogenic depth through the repeat measurement of travel time from a controlled source. While the scientific potential of this stress dependence has been known for decades, time-dependent seismic imaging has yet to become a reliable means of measuring subsurface stress changes in fault-zone environments. This is due to 1) insufficient delay-time precision necessary to detect small changes in stress, and 2) the difficulty in establishing a reliable in-situ calibration between stress and seismic velocity. These two problems are coupled because the best sources of calibration, solid-earth tides and barometric pressure, produce weak stress perturbations of order 10²-10³ Pa that require precision in the measurement of the fractional velocity change dlnv of order 10⁻⁶, based on laboratory experiments. We have thus focused on developing a methodology that is capable of providing this high level of precision. For example, we have shown that precision in dlnv is maximized when there are Q/π wavelengths in the source-receiver path. This relationship provides a means of selecting an optimal geometry and/or source characteristic frequency in the planning of experiments. We have initiated a series of experiments to demonstrate the detectability of these stress-calibration signals in progressively more tectonically relevant settings. Initial tests have been completed on the smallest scale, with two boreholes 17 m deep and 3 m apart. We have used a piezoelectric source (0.1 ms source pulse repeated every 100 ms) and a string of 24 hydrophones to record P waves with a dominant frequency of 10 kHz. Recording was conducted for 160 hours. The massive stacking of ~36,000 high-SNR traces/hr leads to delay-time precision of 6 ns (hour sampling) corresponding to dlnv
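Two quantitative rules from the setup above can be sketched directly: the optimal source-receiver separation of Q/π wavelengths, and the 1/sqrt(N) improvement of delay-time precision with trace stacking. The numerical values in the test are illustrative, chosen only to be consistent in scale with the borehole experiment described:

```python
import math

def optimal_path_length(Q, velocity_mps, freq_hz):
    """Source-receiver separation giving Q/pi wavelengths along the
    path, which maximizes precision in dlnv per the abstract."""
    wavelength = velocity_mps / freq_hz
    return (Q / math.pi) * wavelength

def stacked_precision(single_trace_sigma_s, n_traces):
    """Delay-time standard error after stacking n independent traces:
    it shrinks as 1/sqrt(n)."""
    return single_trace_sigma_s / math.sqrt(n_traces)
```

For a 10 kHz source, an assumed P velocity near 3000 m/s gives a 0.3 m wavelength, so a ~3 m path corresponds to roughly ten wavelengths (Q around 30); stacking tens of thousands of traces then buys more than two orders of magnitude in delay-time precision.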

  13. The Role of Measuring in the Learning of Environmental Occupations and Some Aspects of Environmental Methodology

    ERIC Educational Resources Information Center

    István, Lüko

    2016-01-01

Methodology is a neglected area of pedagogy, and within specialist environmental teaching the cultivation of methodology remains largely incomplete. In this article I attempt to present environmental methodology as developed in part of the University of West Environmental Education workshop, based on the measurement of experiential learning as an…

  14. Measurement of 1999 drought conditions in Mississippi

    USGS Publications Warehouse

    Turnipseed, D. Phil; Long, Loyd G.

    2000-01-01

    Accurate and reliable water-resources data collected during drought conditions are critical to regulatory agencies such as the Mississippi Department of Environmental Quality (MDEQ). Droughts have affected Mississippi during 1940-44, 1951-57, 1962-71, 1980-82 and 1983-88. In late summer and early autumn 1999, many areas of Mississippi experienced near record drought conditions causing concern to many private and public interests. Personnel from the U.S. Geological Survey (USGS), in cooperation with the MDEQ Office of Land and Water Resources (MDEQ-OLWR) measured water levels and streamflows throughout the State of Mississippi during drought conditions in August through October 1999. Droughts are normal, recurring hydrological events caused by deficiency of precipitation over an extended period of time that can have adverse effects on anthropogenic use of water. Much of the State of Mississippi has continued to experience drought conditions through late winter of 2000. Data on minimum streamflows are an important factor for determining the regulation of flow control structures, effluent discharge, and surface water withdrawals and other water-management decisions during droughts. Data on minimum streamflows become paramount during drought conditions. This report presents information related to the legal aspects of drought conditions and includes selected data collected at streamgages affected by severe drought conditions in Mississippi during the late summer and early autumn of 1999. Comparisons of low-flow characteristics at selected streamgages to other period-of-record low-flows at selected gages in the State are also presented.

  15. Weak measurement and Bohmian conditional wave functions

    SciTech Connect

    Norsen, Travis; Struyve, Ward

    2014-11-15

It was recently pointed out and demonstrated experimentally by Lundeen et al. that the wave function of a particle (more precisely, the wave function possessed by each member of an ensemble of identically-prepared particles) can be “directly measured” using weak measurement. Here it is shown that if this same technique is applied, with appropriate post-selection, to one particle from a perhaps entangled multi-particle system, the result is precisely the so-called “conditional wave function” of Bohmian mechanics. Thus, a plausibly operationalist method for defining the wave function of a quantum mechanical sub-system corresponds to the natural definition of a sub-system wave function which Bohmian mechanics uniquely makes possible. Similarly, a weak-measurement-based procedure for directly measuring a sub-system’s density matrix should yield, under appropriate circumstances, the Bohmian “conditional density matrix” as opposed to the standard reduced density matrix. Experimental arrangements to demonstrate this behavior, and also thereby reveal the non-local dependence of sub-system state functions on distant interventions, are suggested and discussed. - Highlights: • We study a “direct measurement” protocol for wave functions and density matrices. • Weakly measured states of entangled particles correspond to Bohmian conditional states. • A novel method of observing quantum non-locality is proposed.

  16. Determining seabird body condition using nonlethal measures.

    PubMed

    Jacobs, Shoshanah R; Elliott, Kyle; Guigueno, Mélanie F; Gaston, Anthony J; Redman, Paula; Speakman, John R; Weber, Jean-Michel

    2012-01-01

    Energy stores are critical for successful breeding, and longitudinal studies require nonlethal methods to measure energy stores ("body condition"). Nonlethal techniques for measuring energy reserves are seldom verified independently. We compare body mass, size-corrected mass (SCM), plasma lipids, and isotopic dilution with extracted total body lipid content in three seabird species (thick-billed murres Uria lomvia, all four measures; northern fulmars Fulmarus glacialis, three measures; and black-legged kittiwakes Rissa tridactyla, two measures). SCM and body mass were better predictors of total body lipids for the species with high percent lipids (fulmars; R2 = 0.5-0.6) than for the species with low percent lipids (murres and kittiwakes; R2 = 0.2-0.4). The relationship between SCM and percent body lipids, which we argue is often a better measure of condition, was also poor (R2 < 0.2) for species with low lipids. In a literature comparison of 17 bird species, percent lipids was the only predictor of the strength of the relationship between mass and total body lipids; we suggest that SCM be used as an index of energy stores only when lipids exceed 15% of body mass. Across all three species we measured, SCM based on the ordinary least squares regression of mass on the first principal component outperformed other measures. Isotopic dilution was a better predictor of both total body lipids and percent body lipids than were mass, SCM, or plasma lipids in murres. Total body lipids decreased through the breeding season at both sites, while total and neutral plasma lipid concentrations increased at one site but not another, suggesting mobilization of lipid stores for breeding. A literature review showed substantial variation in the reliability of plasma markers, and we recommend isotopic dilution (oxygen-18, plateau) for determination of energy reserves in birds where lipid content is below 15%.

  17. Bone drilling methodology and tool based on position measurements.

    PubMed

    Díaz, Iñaki; Gil, Jorge Juan; Louredo, Marcos

    2013-11-01

    Bone drilling, despite being a very common procedure in hospitals around the world, becomes very challenging when performed close to organs such as the cochlea or when depth control is critical for avoiding damage to surrounding tissue. To date, several mechatronic prototypes have been proposed to assist surgeons by automatically detecting bone layer transitions and breakthroughs. However, none of them is currently accurate enough to be part of the surgeon's standard equipment. The present paper shows a test bench specially designed to evaluate prior methodologies and analyze their drawbacks. Afterward, a new layer detection methodology with improved performance is described and tested. Finally, the prototype of a portable mechatronic bone drill that takes advantage of the proposed detection algorithm is presented.

  18. Mass-based condition measures and their relationship with fitness: in what condition is condition?

    PubMed Central

    Barnett, Craig A.; Suzuki, Toshitaka N.; Sakaluk, Scott K.; Thompson, Charles F.

    2015-01-01

    Mass or body-size measures of ‘condition’ are of central importance to the study of ecology and evolution, and it is often assumed that differences in condition measures are positively and linearly related to fitness. Using examples drawn from ecological studies, we show that indices of condition frequently are unlikely to be related to fitness in a linear fashion. Researchers need to be more explicit in acknowledging the limitations of mass-based condition measures and accept that, under some circumstances, they may not relate to fitness as traditionally assumed. Any relationship between a particular condition measure and fitness should first be empirically validated before condition is used as a proxy for fitness. In the absence of such evidence, researchers should explicitly acknowledge that assuming such a relationship may be unrealistic. PMID:26019406

  19. Novel methodology for accurate resolution of fluid signatures from multi-dimensional NMR well-logging measurements.

    PubMed

    Anand, Vivek

    2017-03-01

    A novel methodology for accurate fluid characterization from multi-dimensional nuclear magnetic resonance (NMR) well-logging measurements is introduced. This methodology overcomes a fundamental challenge of poor resolution of features in multi-dimensional NMR distributions due to low signal-to-noise ratio (SNR) of well-logging measurements. Based on an unsupervised machine-learning concept of blind source separation, the methodology resolves fluid responses from simultaneous analysis of large quantities of well-logging data. The multi-dimensional NMR distributions from a well log are arranged in a database matrix that is expressed as the product of two non-negative matrices. The first matrix contains the unique fluid signatures, and the second matrix contains the relative contributions of the signatures for each measurement sample. No a priori information or subjective assumptions about the underlying features in the data are required. Furthermore, the dimensionality of the data is reduced by several orders of magnitude, which greatly simplifies the visualization and interpretation of the fluid signatures. Compared to traditional methods of NMR fluid characterization which only use the information content of a single measurement, the new methodology uses the orders-of-magnitude higher information content of the entire well log. Simulations show that the methodology can resolve accurate fluid responses in challenging SNR conditions. The application of the methodology to well-logging data from a heavy oil reservoir shows that individual fluid signatures of heavy oil, water associated with clays and water in interstitial pores can be accurately obtained.
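The non-negative factorization at the heart of the methodology (database matrix = signatures x contributions) can be sketched with classic Lee-Seung multiplicative updates. This generic NMF stands in for the paper's specific blind-source-separation algorithm, which the abstract does not spell out:

```python
import numpy as np

def nmf(V, r, iters=2000, seed=0):
    """Factor a non-negative matrix V (measurement samples x bins) as
    W @ H. Rows of H play the role of the unique fluid signatures;
    rows of W give each sample's relative contributions."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 0.1
    H = rng.random((r, m)) + 0.1
    for _ in range(iters):
        # multiplicative updates keep both factors non-negative
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H
```

The rank `r` corresponds to the number of fluids expected in the reservoir; reducing thousands of log samples to `r` signatures is the dimensionality reduction the abstract describes.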

  20. Novel methodology for accurate resolution of fluid signatures from multi-dimensional NMR well-logging measurements

    NASA Astrophysics Data System (ADS)

    Anand, Vivek

    2017-03-01

    A novel methodology for accurate fluid characterization from multi-dimensional nuclear magnetic resonance (NMR) well-logging measurements is introduced. This methodology overcomes a fundamental challenge of poor resolution of features in multi-dimensional NMR distributions due to low signal-to-noise ratio (SNR) of well-logging measurements. Based on an unsupervised machine-learning concept of blind source separation, the methodology resolves fluid responses from simultaneous analysis of large quantities of well-logging data. The multi-dimensional NMR distributions from a well log are arranged in a database matrix that is expressed as the product of two non-negative matrices. The first matrix contains the unique fluid signatures, and the second matrix contains the relative contributions of the signatures for each measurement sample. No a priori information or subjective assumptions about the underlying features in the data are required. Furthermore, the dimensionality of the data is reduced by several orders of magnitude, which greatly simplifies the visualization and interpretation of the fluid signatures. Compared to traditional methods of NMR fluid characterization which only use the information content of a single measurement, the new methodology uses the orders-of-magnitude higher information content of the entire well log. Simulations show that the methodology can resolve accurate fluid responses in challenging SNR conditions. The application of the methodology to well-logging data from a heavy oil reservoir shows that individual fluid signatures of heavy oil, water associated with clays and water in interstitial pores can be accurately obtained.

  1. Calibration methodology for proportional counters applied to yield measurements of a neutron burst.

    PubMed

    Tarifeño-Saldivia, Ariel; Mayer, Roberto E; Pavez, Cristian; Soto, Leopoldo

    2014-01-01

This paper introduces a methodology for the yield measurement of a neutron burst using neutron proportional counters. The methodology is to be applied when single neutron events cannot be resolved in time by standard nuclear electronics, or when a continuous current cannot be measured at the output of the counter. The methodology is based on the calibration of the counter in pulse mode, and on the use of a statistical model to estimate the number of detected events from the accumulated charge resulting from the detection of the burst of neutrons. The model is developed and presented in full detail. The implementation of the methodology for the measurement of fast neutron yields generated in plasma focus experiments using a moderated proportional counter is discussed. An experimental verification of the accuracy of the methodology is presented. Using this methodology, an improvement of more than one order of magnitude in the accuracy of the detection system is obtained with respect to previous calibration methods.
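The core step, estimating the number of detected events from the accumulated charge using the pulse-mode calibration, can be sketched as below. The error model (Poisson counting plus single-event charge spread) is a simplified stand-in for the paper's full statistical model:

```python
import math

def neutrons_from_charge(Q_total, q_mean, q_std):
    """Estimate the detected neutron count from the integrated charge,
    given the pulse-mode calibration of the counter (mean and standard
    deviation of the charge deposited per single detected event)."""
    n = Q_total / q_mean
    # simplified variance: Poisson counting statistics plus the
    # spread of the single-event charge distribution
    sigma_n = math.sqrt(n + n * (q_std / q_mean) ** 2)
    return n, sigma_n
```

The detected count is then converted to a source yield through the counter's calibrated detection efficiency, a step not shown here.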

  2. Measurement-based auralization methodology for the assessment of noise mitigation measures

    NASA Astrophysics Data System (ADS)

    Thomas, Pieter; Wei, Weigang; Van Renterghem, Timothy; Botteldooren, Dick

    2016-09-01

    The effect of noise mitigation measures is generally expressed by noise levels only, neglecting the listener's perception. In this study, an auralization methodology is proposed that enables an auditive preview of noise abatement measures for road traffic noise, based on the direction dependent attenuation of a priori recordings made with a dedicated 32-channel spherical microphone array. This measurement-based auralization has the advantage that all non-road traffic sounds that create the listening context are present. The potential of this auralization methodology is evaluated through the assessment of the effect of an L-shaped mound. The angular insertion loss of the mound is estimated by using the ISO 9613-2 propagation model, the Pierce barrier diffraction model and the Harmonoise point-to-point model. The realism of the auralization technique is evaluated by listening tests, indicating that listeners had great difficulty in differentiating between a posteriori recordings and auralized samples, which shows the validity of the followed approaches.

  3. Optimisation of conditions for the extraction of casearins from Casearia sylvestris using response surface methodology.

    PubMed

    Bandeira, Karin F; Tininis, Aristeu G; Da Bolzani, Vanderlan S; Cavalheiro, Alberto J

    2006-01-01

    Optimal conditions for the extraction of casearins from Casearia sylvestris were determined using response surface methodology. The maceration and sonication extraction techniques were performed using a 3 x 3 x 3 full factorial design including three acidity conditions, three solvents of different polarities and three extraction times. The yields and selectivities of the extraction of casearins were significantly influenced by acidity conditions. Taking into account all variables tested, the optimal conditions for maceration extraction were estimated to involve treatment with dichloromethane saturated with ammonium hydroxide for 26 h. Similar yields and selectivities for casearins were determined for sonication extraction using the same solvent but for the much shorter time of 1 h. The best results for stabilisation of the fresh plant material were obtained using leaves that had been oven dried at 40 degrees C for 48 h.
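A 3 x 3 x 3 full factorial design like the one used here can be enumerated directly. The factor levels below are illustrative placeholders; the paper names three acidity conditions, three solvent polarities and three extraction times, but the sketch does not reproduce its exact levels:

```python
from itertools import product

# Illustrative factor levels (three each), not the paper's exact ones:
acidity = ["NH4OH-saturated", "neutral", "acidified"]
solvent = ["hexane", "dichloromethane", "methanol"]
time_h = [1, 8, 26]

# Full 3 x 3 x 3 factorial: every combination is one extraction run.
design = list(product(acidity, solvent, time_h))
```

Each tuple in `design` is one experimental run; response surface methodology then fits a model over all 27 runs to locate the optimum.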

  4. Methodology for determining time-dependent mechanical properties of tuff subjected to near-field repository conditions

    SciTech Connect

    Blacic, J.D.; Andersen, R.

    1983-01-01

We have established a methodology to determine the time dependence of strength and transport properties of tuff under conditions appropriate to a nuclear waste repository. Exploratory tests to determine the approximate magnitudes of thermomechanical property changes are nearly complete. In this report we describe the capabilities of an apparatus designed to precisely measure the time-dependent deformation and permeability of tuff at simulated repository conditions. Preliminary tests with this new apparatus indicate that microclastic creep failure of tuff occurs over a narrow strain range with little precursory tertiary creep behavior. In one test, deformation under conditions of slowly decreasing effective pressure resulted in failure, whereas some strain indicators showed a decreasing rate of strain.

  5. Response Surface Methodology: An Extensive Potential to Optimize in vivo Photodynamic Therapy Conditions

    SciTech Connect

    Tirand, Loraine; Bastogne, Thierry; Bechet, Denise M.Sc.; Linder, Michel; Thomas, Noemie; Frochot, Celine; Guillemin, Francois; Barberi-Heyob, Muriel

    2009-09-01

Purpose: Photodynamic therapy (PDT) is based on the interaction of a photosensitizing (PS) agent, light, and oxygen. Few new PS agents are being developed to the in vivo stage, partly because of the difficulty in finding the right treatment conditions. Response surface methodology, an empirical modeling approach based on data resulting from a set of designed experiments, was suggested as a rational solution with which to select in vivo PDT conditions by using a new peptide-conjugated PS agent targeting neuropilin-1. Methods and Materials: A Doehlert experimental design was selected to model effects and interactions of the PS dose, fluence, and fluence rate on the growth of U87 human malignant glioma cell xenografts in nude mice, using a fixed drug-light interval. All experimental results were computed by Nemrod-W software and Matlab. Results: Intrinsic diameter growth rate, a tumor growth parameter independent of the initial volume of the tumor, was selected as the response variable and was compared to tumor growth delay and relative tumor volumes. With only 13 experimental conditions tested, an optimal PDT condition was selected (PS agent dose, 2.80 mg/kg; fluence, 120 J/cm²; fluence rate, 85 mW/cm²). Treatment of glioma-bearing mice with the peptide-conjugated PS agent, followed by the optimized PDT condition, showed a statistically significant improvement in delaying tumor growth compared with animals that received the PDT with the nonconjugated PS agent. Conclusions: Response surface methodology appears to be a useful experimental approach for rapid testing of different treatment conditions and determination of optimal values of PDT factors for any PS agent.
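Response-surface modelling of this kind fits a second-order polynomial to the designed runs and solves for the stationary point of the fitted surface. The two-factor sketch below works on synthetic data with a known optimum; the study itself used three factors and a Doehlert design, which this simplified grid does not reproduce:

```python
import numpy as np

def fit_quadratic_rsm(X, y):
    """OLS fit of y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def stationary_point(b):
    """Solve grad(fitted surface) = 0 for the candidate optimum."""
    Hm = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    return np.linalg.solve(Hm, -np.array([b[1], b[2]]))

# synthetic response with a known maximum at (1, -0.5)
g1, g2 = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
X = np.column_stack([g1.ravel(), g2.ravel()])
y = 5.0 - (X[:, 0] - 1.0) ** 2 - 2.0 * (X[:, 1] + 0.5) ** 2
opt = stationary_point(fit_quadratic_rsm(X, y))
```

In the study's terms, the axes would be PS dose, fluence and fluence rate, and the recovered stationary point plays the role of the selected optimal PDT condition.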

  6. A methodology for hard/soft information fusion in the condition monitoring of aircraft

    NASA Astrophysics Data System (ADS)

    Bernardo, Joseph T.

    2013-05-01

Condition-based maintenance (CBM) refers to the philosophy of performing maintenance when the need arises, based upon indicators of deterioration in the condition of the machinery. Traditionally, CBM involves equipping machinery with electronic sensors that continuously monitor components and collect data for analysis. The addition of the multisensory capability of human cognitive functions (i.e., sensemaking, problem detection, planning, adaptation, coordination, naturalistic decision making) to traditional CBM may create a fuller picture of machinery condition. Cognitive systems engineering techniques provide an opportunity to utilize a dynamic resource: people acting as soft sensors. The literature is extensive on techniques to fuse data from electronic sensors, but little work exists on fusing data from humans with that from electronic sensors (i.e., hard/soft fusion). The purpose of my research is to explore, observe, investigate, analyze, and evaluate the fusion of pilot and maintainer knowledge, experiences, and sensory perceptions with digital maintenance resources. Hard/soft information fusion has the potential to increase problem detection capability, improve flight safety, and increase mission readiness. This proposed project consists of the creation of a methodology based upon the Living Laboratories framework, a research methodology built upon cognitive engineering principles [1]. This study performs a critical assessment of the concept, which will support development of activities to demonstrate hard/soft information fusion in operationally relevant scenarios of aircraft maintenance. It consists of fieldwork and knowledge elicitation to inform a simulation and a prototype.

  7. Application of Response Surface Methodology to Evaluation of Bioconversion Experimental Conditions

    PubMed Central

    Cheynier, Véronique; Feinberg, Max; Chararas, Constantin; Ducauze, Christian

    1983-01-01

    Using Candida tenuis, a yeast isolated from the digestive tube of the larva of Phoracantha semipunctata (Cerambycidae, Coleoptera), we were able to demonstrate the bioconversion of citronellal to citronellol. Response surface methodology was used to achieve the optimization of the experimental conditions for that bioconversion process. To study the proposed second-order polynomial model, we used a central composite experimental design with multiple linear regression to estimate the model coefficients of the five selected factors believed to influence the bioconversion process. Only four were demonstrated to be predominant: the incubation pH, temperature, time, and the amount of substrate. The best reduction yields (close to 90%) were obtained with alkaline pH conditions (pH 7.5), a low temperature (25°C), a small amount of substrate (15 μl), and short incubation time (16 h). This methodology was very efficient: only 36 experiments were necessary to assess these conditions, and model adequacy was very satisfactory as the coefficient of determination was 0.9411. PMID:16346211
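    The workflow described above (designed experiment, second-order polynomial model, multiple linear regression, coefficient of determination) can be sketched in a few lines. This is a minimal illustration with two hypothetical coded factors and synthetic data, not the study's five-factor central composite design:

```python
import numpy as np

# Fit a second-order (quadratic) response-surface model
#   y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
# by ordinary least squares, as in central-composite-design analysis.
rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 30)          # coded factor levels (illustrative)
x2 = rng.uniform(-1, 1, 30)
y = 2 + 0.5 * x1 - 1.2 * x2 + 0.8 * x1**2 + 0.3 * x1 * x2 \
    + rng.normal(0, 0.05, 30)

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Coefficient of determination R^2 (the abstract reports 0.9411 for its model)
resid = y - X @ beta
r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(round(r2, 3))
```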

  8. Causality analysis in business performance measurement system using system dynamics methodology

    NASA Astrophysics Data System (ADS)

    Yusof, Zainuridah; Yusoff, Wan Fadzilah Wan; Maarof, Faridah

    2014-07-01

    One of the main components of the Balanced Scorecard (BSC) that differentiates it from any other performance measurement system (PMS) is the Strategy Map, with its unidirectional causality feature. Despite its apparent popularity, criticisms of the causality assumption have been rigorously discussed by earlier researchers. In seeking empirical evidence of causality, propositions based on the service profit chain theory were developed and tested using an econometric analysis, the Granger causality test, on 45 data points. However, the causality models were found to be insufficiently established, as only 40% of the causal linkages were supported by the data. Expert knowledge was suggested for use in situations where historical data are insufficient. The Delphi method was selected and conducted to obtain consensus on the existence of causality among 15 selected experts through 3 rounds of questionnaires; it revealed that only 20% of the propositions were not supported. Both methods indicated the existence of bidirectional causality, demonstrating significant dynamic environmental complexity through interaction among measures. With that, a computer model and simulation using the System Dynamics (SD) methodology was developed as an experimental platform to identify how policies impact business performance in such environments. Reproduction, sensitivity, and extreme-condition tests were conducted on the developed SD model to ensure its capability to mimic reality, and its robustness and validity as a causality analysis platform. This study applied a theoretical service management model within the BSC domain to a practical situation using SD methodology, an area where very limited work has been done.
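    A minimal sketch of the Granger-causality test used above: compare the residual sum of squares of an autoregression of y with and without lags of x, via an F-statistic. The synthetic series, lag order, and absence of small-sample refinements are all assumptions for illustration:

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic testing whether p lags of x help predict y
    beyond p lags of y (a bare-bones Granger causality test)."""
    n = len(y)
    rows = range(p, n)
    Y = np.array([y[t] for t in rows])
    Zr = np.array([[1.0] + [y[t - k] for k in range(1, p + 1)] for t in rows])
    Zu = np.array([[1.0] + [y[t - k] for k in range(1, p + 1)]
                         + [x[t - k] for k in range(1, p + 1)] for t in rows])
    rss = lambda Z: np.sum((Y - Z @ np.linalg.lstsq(Z, Y, rcond=None)[0])**2)
    rss_r, rss_u = rss(Zr), rss(Zu)
    dof = len(Y) - Zu.shape[1]
    return ((rss_r - rss_u) / p) / (rss_u / dof)

# Synthetic example: x drives y with a one-step lag, so F is very large.
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + 0.1 * rng.normal()
print(granger_f(y, x) > 10)  # prints True
```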

  9. Measuring service line competitive position. A systematic methodology for hospitals.

    PubMed

    Studnicki, J

    1991-01-01

    To mount a broad effort aimed at improving their competitive position for some service or group of services, hospitals have begun to pursue product line management techniques. A few hospitals have even reorganized completely under the product line framework. The benefits include focusing accountability for operations and results, facilitating coordination between departments and functions, stimulating market segmentation, and promoting rigorous examination of new and existing programs. As part of its strategic planning process, a suburban Baltimore hospital developed a product line management methodology with six basic steps: (1) define the service lines (which they did by grouping all existing diagnosis-related groups into 35 service lines), (2) determine the contribution of each service line to total inpatient volume, (3) determine trends in service line volumes (by comparing data over time), (4) derive a useful comparison group (competing hospitals or groups of hospitals with comparable size, scope of services, payer mix, and financial status), (5) review multiple time frames, and (6) summarize the long- and short-term performance of the hospital's service lines to focus further analysis. This type of systematic and disciplined analysis can become part of a permanent strategic intelligence program. When hospitals have such a program in place, their market research, planning, budgeting, and operations will be tied together in a true management decision support system.

  10. Eddy correlation measurements in wet environmental conditions

    NASA Astrophysics Data System (ADS)

    Cuenca, R. H.; Migliori, L.; O Kane, J. P.

    2003-04-01

    The lower Feale catchment is a low-lying peaty area of 200 km² situated in southwest Ireland that is subject to inundation by flooding. The catchment lies adjacent to the Feale River and is subject to tidal signals as well as runoff processes. Various mitigation strategies are being investigated to reduce the damage due to flooding. Part of the effort has required development of a detailed hydrologic balance for the study area, which is a wet pasture environment with local field drains that are typically flooded. An eddy correlation system was installed in the summer of 2002 to measure components of the energy balance, including evapotranspiration, along with special sensors to measure other hydrologic variables particular to this study. The data collected will be essential for validation of surface flux models to be developed for this site. Data filtering is performed using a combination of software developed by the Boundary-Layer Group (BLG) at Oregon State University together with modifications made to this system for conditions at this site. This automated procedure greatly reduces the tedious inspection of individual records. The package of tests, developed by the BLG for both tower and aircraft high-frequency data, checks for electronic spiking, signal dropout, unrealistic magnitudes, and extreme higher-moment statistics, as well as other error scenarios not covered by the instrumentation diagnostics built into the system. Critical parameter values for each potential error were developed by applying the tests to real fast-response turbulent time series. Records with potential instrumentation problems, flux sampling problems, or unusual physical situations are flagged for removal or further analysis. A final visual inspection step is required to minimize rejection of physically unusual but real behavior in the time series. The problems of data management, data quality control, individual instrumentation sensitivity, potential underestimation of latent and sensible heat
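    Two of the automated screening tests mentioned above (unrealistic-magnitude limits and spike detection) can be sketched as follows. The thresholds and the synthetic wind record are illustrative assumptions, not the BLG package's actual parameter values:

```python
import numpy as np

def qc_flags(series, abs_limit=50.0, spike_sigma=6.0):
    """Flag samples with unrealistic magnitudes or spike-like deviations
    (robust z-score based on the median absolute deviation)."""
    x = np.asarray(series, dtype=float)
    flags = np.abs(x) > abs_limit                 # unrealistic magnitude
    dev = np.abs(x - np.median(x))
    mad = np.median(dev) or 1e-12                 # robust scale estimate
    flags |= dev / (1.4826 * mad) > spike_sigma   # electronic spikes
    return flags

rng = np.random.default_rng(2)
w = rng.normal(0.1, 0.3, 1000)   # plausible vertical-wind record (m/s)
w[100] = 75.0                    # injected spike
flagged = qc_flags(w)
print(int(flagged.sum()), bool(flagged[100]))
```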

  11. Modeling of the effect of freezer conditions on the hardness of ice cream using response surface methodology.

    PubMed

    Inoue, K; Ochi, H; Habara, K; Taketsuka, M; Saito, H; Ichihashi, N; Iwatsuki, K

    2009-12-01

    The effect of conventional continuous freezer parameters [mix flow (L/h), overrun (%), drawing temperature (°C), cylinder pressure (kPa), and dasher speed (rpm)] on the hardness of ice cream under varying measured temperatures (-5, -10, and -15 °C) was investigated systematically using response surface methodology (central composite face-centered design), and the relationships were expressed as statistical models. The range (maximum and minimum values) of each freezer parameter was set according to the actual capability of the conventional freezer and applicability to the manufacturing process. Hardness was measured using a penetrometer. These models showed that overrun and drawing temperature had significant effects on hardness. The models can be used to optimize freezer conditions to make ice cream of the least possible hardness under the highest overrun (120%) and a drawing temperature of approximately -5.5 °C (slightly warmer than the lowest drawing temperature of -6.5 °C) within the range of this study. With reference to the structural elements of the ice cream, we suggest that the volume of overrun and ice crystal content, ice crystal size, and fat globule destabilization affect the hardness of ice cream. In addition, the combination of a simple instrumental parameter and response surface methodology allows us to show the relation between freezer conditions and one of the most important properties, hardness, visually and quantitatively on the practical level.
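    The design class named above, a face-centered central composite design (axial distance alpha = 1), is easy to generate in coded units. Mapping the coded levels onto the actual freezer parameter ranges is omitted; three factors are used purely for illustration:

```python
import itertools

def ccf_design(k, center_points=1):
    """Face-centered central composite design in coded (-1, 0, +1) units:
    full 2^k factorial + 2k axial (face-centered) points + center replicates."""
    factorial = list(itertools.product([-1, 1], repeat=k))
    axial = []
    for i in range(k):
        for level in (-1, 1):
            pt = [0] * k
            pt[i] = level
            axial.append(tuple(pt))
    return factorial + axial + [(0,) * k] * center_points

design = ccf_design(3)
print(len(design))  # 8 factorial + 6 axial + 1 center = 15 runs
```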

  12. Application of MetaRail railway noise measurement methodology: comparison of three track systems

    NASA Astrophysics Data System (ADS)

    Kalivoda, M.; Kudrna, M.; Presle, G.

    2003-10-01

    Within the fourth RTD Framework Programme, the European Union has supported a research project dealing with the improvement of railway noise (emission) measurement methodologies. This project was called MetaRail and proposed a number of procedures and methods to decrease systematic measurement errors and to increase reproducibility. In 1999 the Austrian Federal Railways installed 1000 m of test track to explore the long-term behaviour of three different ballast track systems. This test included track stability, rail forces and ballast forces, as well as vibration transmission and noise emission. The noise study was carried out using the experience and methods developed within MetaRail. This includes rail roughness measurements as well as measurements of vertical railhead, sleeper and ballast vibration in parallel with the noise emission measurement with a single microphone at a distance of 7.5 m from the track. Using a test train with block- and disc-braked vehicles helped to control operational conditions and indicated the influence of different wheel roughness. It has been shown that the parallel recording of several vibration signals together with the noise signal makes it possible to evaluate the contributions of car body, sleeper, track and wheel sources to the overall noise emission. It must be stressed that this method is not focused as is a microphone-array. However, this methodology is far easier to apply and thus cheaper. Within this study, noise emission was allocated to the different elements to answer questions such as whether the sleeper eigenfrequency is transmitted into the rail.

  13. Optimization of hydrolysis conditions for bovine plasma protein using response surface methodology.

    PubMed

    Seo, Hyun-Woo; Jung, Eun-Young; Go, Gwang-Woong; Kim, Gap-Don; Joo, Seon-Tea; Yang, Han-Sul

    2015-10-15

    The purpose of this study was to establish optimal conditions for the hydrolysis of bovine plasma protein. Response surface methodology was used to model and optimize responses [degree of hydrolysis (DH), 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical-scavenging activity and Fe(2+)-chelating activity]. Hydrolysis conditions, such as hydrolysis temperature (46.6-63.4 °C), hydrolysis time (98-502 min), and hydrolysis pH (6.32-9.68), were selected as the main processing conditions in the hydrolysis of bovine plasma protein. Optimal conditions for maximum DH (%), DPPH radical-scavenging activity (%) and Fe(2+)-chelating activity (%) of the hydrolyzed bovine plasma protein were established, respectively. We discovered the following three conditions for optimal hydrolysis of bovine plasma: pH of 7.82-8.32, temperature of 54.1 °C, and time of 338.4-398.4 min. We consequently succeeded in hydrolyzing bovine plasma protein under these conditions and confirmed the various desirable properties of optimal hydrolysis.
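    Once a second-order response-surface model like the ones above is fitted, candidate optimum conditions are found at the stationary point of the quadratic surface, where the gradient vanishes: x* = -0.5 B⁻¹ b, with b the linear coefficients and B the matrix of quadratic and (halved) interaction coefficients. The two factors and all coefficients below are illustrative, not the paper's fitted model:

```python
import numpy as np

b = np.array([1.0, 0.6])    # linear terms for two coded factors
B = np.array([[-0.8, 0.1],  # diagonal: pure quadratic terms;
              [0.1, -0.5]]) # off-diagonal: half the interaction term
x_star = -0.5 * np.linalg.solve(B, b)   # gradient b + 2*B*x = 0

# A negative-definite B means the stationary point is a maximum.
eigs = np.linalg.eigvalsh(B)
print(x_star.round(3), bool(eigs.max() < 0))
```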

  14. Aqueduct: a methodology to measure and communicate global water risks

    NASA Astrophysics Data System (ADS)

    Gassert, Francis; Reig, Paul

    2013-04-01

    , helping non-expert audiences better understand and evaluate risks facing water users. This presentation will discuss the methodology used to combine the indicator values into aggregated risk scores and lessons learned from working with diverse audiences in academia, development institutions, and the public and private sectors.

  15. Measuring Individual Differences in Decision Biases: Methodological Considerations

    PubMed Central

    Aczel, Balazs; Bago, Bence; Szollosi, Aba; Foldes, Andrei; Lukacs, Bence

    2015-01-01

    Individual differences in people's susceptibility to heuristics and biases (HB) are often measured by multiple-bias questionnaires consisting of one or a few items for each bias. This research approach relies on the assumptions that (1) different versions of a decision bias task measure are interchangeable as they measure the same cognitive failure; and (2) that some combination of these tasks measures the same underlying construct. Based on these assumptions, in Study 1 we developed two versions of a new decision bias survey for which we modified 13 HB tasks to increase their comparability, construct validity, and the participants' motivation. The analysis of the responses (N = 1279) showed weak internal consistency within the surveys and a great level of discrepancy between the extracted patterns of the underlying factors. To explore these inconsistencies, in Study 2 we used three original examples of HB tasks for each of seven biases. We created three decision bias surveys by allocating one version of each HB task to each survey. The participants' responses (N = 527) showed a similar pattern as in Study 1, questioning the assumption that the different examples of the HB tasks are interchangeable and that they measure the same underlying construct. These results emphasize the need to understand the domain-specificity of cognitive biases as well as the effect of the wording of the cover story and the response mode on bias susceptibility before employing them in multiple-bias questionnaires. PMID:26635677
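    The internal-consistency checks reported above are conventionally computed as Cronbach's alpha over the item scores. A minimal sketch on synthetic data, where the shared-construct/noise mix is an assumption chosen to yield a moderate alpha:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha: rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(3)
trait = rng.normal(size=(200, 1))                  # shared construct
items = trait + rng.normal(0, 1.0, size=(200, 5))  # 5 noisy items
print(round(cronbach_alpha(items), 2))
```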

  16. Conceptual and Methodological Issues in Treatment Integrity Measurement

    ERIC Educational Resources Information Center

    McLeod, Bryce D.; Southam-Gerow, Michael A.; Weisz, John R.

    2009-01-01

    This special series focused on treatment integrity in the child mental health and education field is timely. The articles do a laudable job of reviewing (a) the current status of treatment integrity research and measurement, (b) existing conceptual models of treatment integrity, and (c) the limitations of prior research. Overall, this thoughtful…

  17. Measuring Social Interaction of the Urban Elderly: A Methodological Synthesis.

    ERIC Educational Resources Information Center

    Sokolovsky, Jay; Cohen, Carl I.

    1981-01-01

    Investigates the myth of the inner-city elderly as isolated. Suggests: there are severe limitations to traditional measures of determining sociability; social network analysis can overcome many of the deficiencies of other methods; and a synthesis of the anthropological and sociological approaches to network analysis can optimize data collection.…

  18. Measure of Landscape Heterogeneity by Agent-Based Methodology

    NASA Astrophysics Data System (ADS)

    Wirth, E.; Szabó, Gy.; Czinkóczky, A.

    2016-06-01

    With the rapid increase of the world's population, efficient food production is one of the key factors of human survival. Since biodiversity and heterogeneity are the basis of sustainable agriculture, the authors tried to measure the heterogeneity of a chosen landscape. The EU farming and subsidizing policies (EEA, 2014) support landscape heterogeneity and diversity; nevertheless, exact measurements and calculations, apart from simple statistical parameters (standard deviation, mean), do not really exist. In the present paper the authors' goal is to find an objective, dynamic method that measures landscape heterogeneity. This is achieved with so-called agent-based modelling, where randomly dispatched dynamic scouts record the observed land-cover parameters and sum up the features of a new type of land. During the simulation the agents collect a Monte Carlo integral as a diversity landscape potential, which can be considered the unit of the `greening' measure. As a final product of the ABM method, a landscape potential map is obtained that can serve as a tool for objective decision making to support agricultural diversity.
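    The scout idea can be sketched as random-walking agents that sample cells of a land-cover grid, with the accumulated visits yielding a Monte Carlo estimate of a Shannon-diversity heterogeneity measure. The synthetic square grid, the number of cover classes, and the walk rules are all assumptions for illustration, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(4)
grid = rng.integers(0, 4, size=(100, 100))  # 4 hypothetical cover classes

def scout_diversity(grid, n_agents=500, steps=20):
    """Random-walking scouts tally observed cover classes; the tallies give
    a Monte Carlo estimate of the Shannon diversity index (square grid)."""
    counts = np.zeros(grid.max() + 1)
    pos = rng.integers(0, grid.shape[0], size=(n_agents, 2))
    for _ in range(steps):
        pos = (pos + rng.integers(-1, 2, size=pos.shape)) % grid.shape[0]
        for r, c in pos:
            counts[grid[r, c]] += 1
    p = counts / counts.sum()
    return -np.sum(p[p > 0] * np.log(p[p > 0]))

h = scout_diversity(grid)
print(round(h, 2))  # near log(4) ~ 1.39 for a well-mixed landscape
```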

  19. Sensible organizations: technology and methodology for automatically measuring organizational behavior.

    PubMed

    Olguin Olguin, Daniel; Waber, Benjamin N; Kim, Taemie; Mohan, Akshay; Ara, Koji; Pentland, Alex

    2009-02-01

    We present the design, implementation, and deployment of a wearable computing platform for measuring and analyzing human behavior in organizational settings. We propose the use of wearable electronic badges capable of automatically measuring the amount of face-to-face interaction, conversational time, physical proximity to other people, and physical activity levels in order to capture individual and collective patterns of behavior. Our goal is to be able to understand how patterns of behavior shape individuals and organizations. By using on-body sensors in large groups of people for extended periods of time in naturalistic settings, we have been able to identify, measure, and quantify social interactions, group behavior, and organizational dynamics. We deployed this wearable computing platform in a group of 22 employees working in a real organization over a period of one month. Using these automatic measurements, we were able to predict employees' self-assessments of job satisfaction and their own perceptions of group interaction quality by combining data collected with our platform and e-mail communication data. In particular, the total amount of communication was predictive of both of these assessments, and betweenness in the social network exhibited a high negative correlation with group interaction satisfaction. We also found that physical proximity and e-mail exchange had a negative correlation of r = -0.55 (p < 0.01), which has far-reaching implications for past and future research on social networks.
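    A correlation with significance test of the kind reported above can be sketched with a permutation p-value. The "proximity" and "e-mail" series here are synthetic stand-ins with the abstract's effect size built in, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(5)
proximity = rng.normal(size=80)                       # stand-in measure 1
email = -0.55 * proximity + rng.normal(0, 0.8, 80)    # stand-in measure 2

r = np.corrcoef(proximity, email)[0, 1]               # Pearson correlation
# Permutation null: shuffle one series, record |r| under independence.
perm = np.array([abs(np.corrcoef(rng.permutation(proximity), email)[0, 1])
                 for _ in range(2000)])
p_value = np.mean(perm >= abs(r))
print(round(r, 2), p_value < 0.01)
```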

  20. Electrochemical treatment of deproteinated whey wastewater and optimization of treatment conditions with response surface methodology.

    PubMed

    Güven, Güray; Perendeci, Altunay; Tanyolaç, Abdurrahman

    2008-08-30

    Electrochemical treatment of deproteinated whey wastewater produced during cheese manufacture was studied as an alternative treatment method for the first time in the literature. Through the preliminary batch runs, the appropriate electrode material was determined to be iron, due to its high removal efficiency of chemical oxygen demand (COD) and turbidity. The electrochemical treatment conditions were optimized through response surface methodology (RSM), where applied voltage was kept in the range, electrolyte concentration was minimized, and waste concentration and COD removal percent were maximized at 25 °C. Optimum conditions at 25 °C were estimated through RSM as 11.29 V applied voltage, 100% waste concentration (containing 40 g/L lactose) and 19.87 g/L electrolyte concentration to achieve 29.27% COD removal. However, the highest COD removal across the set of runs was found to be 53.32% within 8 h. These results reveal the applicability of electrochemical treatment to deproteinated whey wastewater as an alternative advanced wastewater treatment method.

  1. Methodology Development for Measurement of Agent Fate in an Environmental Wind Tunnel

    DTIC Science & Technology

    2005-10-01

    METHODOLOGY DEVELOPMENT FOR MEASUREMENT OF AGENT FATE IN AN ENVIRONMENTAL WIND TUNNEL. Wendel Shuely, Robert Nickol, GEO-Centers, and ... managerial support by Dr. H. Durst, Mr. L. Bickford and Dr. J. Savage, ECBC, and Mr. Tim Bauer, NSWC.

  2. A Methodology for Robust Multiproxy Paleoclimate Reconstructions and Modeling of Temperature Conditional Quantiles

    PubMed Central

    Janson, Lucas; Rajaratnam, Bala

    2014-01-01

    Great strides have been made in the field of reconstructing past temperatures based on models relating temperature to temperature-sensitive paleoclimate proxies. One of the goals of such reconstructions is to assess if current climate is anomalous in a millennial context. These regression based approaches model the conditional mean of the temperature distribution as a function of paleoclimate proxies (or vice versa). Some of the recent focus in the area has considered methods which help reduce the uncertainty inherent in such statistical paleoclimate reconstructions, with the ultimate goal of improving the confidence that can be attached to such endeavors. A second important scientific focus in the subject area is the area of forward models for proxies, the goal of which is to understand the way paleoclimate proxies are driven by temperature and other environmental variables. One of the primary contributions of this paper is novel statistical methodology for (1) quantile regression with autoregressive residual structure, (2) estimation of corresponding model parameters, (3) development of a rigorous framework for specifying uncertainty estimates of quantities of interest, yielding (4) statistical byproducts that address the two scientific foci discussed above. We show that by using the above statistical methodology we can demonstrably produce a more robust reconstruction than is possible by using conditional-mean-fitting methods. Our reconstruction shares some of the common features of past reconstructions, but we also gain useful insights. More importantly, we are able to demonstrate a significantly smaller uncertainty than that from previous regression methods. In addition, the quantile regression component allows us to model, in a more complete and flexible way than least squares, the conditional distribution of temperature given proxies. This relationship can be used to inform forward models relating how proxies are driven by temperature. PMID:25587203
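    The core conditional-quantile idea can be illustrated by minimizing the pinball (check) loss with subgradient descent. This sketch omits the paper's autoregressive residual structure and uncertainty framework, and fits synthetic data with an assumed linear relation:

```python
import numpy as np

def quantile_reg(X, y, tau, iters=20000, lr=0.005):
    """Linear quantile regression for quantile level tau via subgradient
    descent on the pinball loss (a bare-bones sketch, not the paper's method)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        resid = y - X @ beta
        grad = -X.T @ (tau - (resid < 0)) / len(y)  # pinball subgradient
        beta -= lr * grad
    return beta

rng = np.random.default_rng(6)
x = rng.uniform(0, 10, 500)
y = 1.0 + 2.0 * x + rng.normal(0, 1, 500)   # synthetic "proxy vs temperature"
X = np.column_stack([np.ones_like(x), x])
b50 = quantile_reg(X, y, 0.5)   # conditional median
b90 = quantile_reg(X, y, 0.9)   # upper conditional quantile
print(b50.round(2), b90.round(2))
```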

  3. Bayesian methodology to estimate and update safety performance functions under limited data conditions: a sensitivity analysis.

    PubMed

    Heydari, Shahram; Miranda-Moreno, Luis F; Lord, Dominique; Fu, Liping

    2014-03-01

    In road safety studies, decision makers must often cope with limited data conditions. In such circumstances, the maximum likelihood estimation (MLE), which relies on asymptotic theory, is unreliable and prone to bias. Moreover, it has been reported in the literature that (a) Bayesian estimates might be significantly biased when using non-informative prior distributions under limited data conditions, and that (b) the calibration of limited data is plausible when existing evidence in the form of proper priors is introduced into analyses. Although the Highway Safety Manual (2010) (HSM) and other research studies provide calibration and updating procedures, the data requirements can be very taxing. This paper presents a practical and sound Bayesian method to estimate and/or update safety performance function (SPF) parameters combining the information available from limited data with the SPF parameters reported in the HSM. The proposed Bayesian updating approach has the advantage of requiring fewer observations to get reliable estimates. This paper documents this procedure. The adopted technique is validated by conducting a sensitivity analysis through an extensive simulation study with 15 different models, which include various prior combinations. This sensitivity analysis contributes to our understanding of the comparative aspects of a large number of prior distributions. Furthermore, the proposed method contributes to unification of the Bayesian updating process for SPFs. The results demonstrate the accuracy of the developed methodology. Therefore, the suggested approach offers considerable promise as a methodological tool to estimate and/or update baseline SPFs and to evaluate the efficacy of road safety countermeasures under limited data conditions.
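    The informative-prior updating idea can be illustrated with the simplest conjugate case: a Gamma prior on a Poisson crash rate, centered on a baseline (HSM-style) estimate and updated in closed form by a few years of local counts. The paper updates full SPF regression parameters; all numbers here are illustrative:

```python
# Gamma(alpha, beta) prior on a Poisson crash rate lambda (crashes/year):
# prior mean = alpha / beta, and "prior_strength" acts like beta years
# of pseudo-observations carrying the baseline estimate.
prior_mean, prior_strength = 2.0, 10.0
alpha, beta = prior_mean * prior_strength, prior_strength

local_counts = [4, 3, 5]               # 3 years of limited local data
alpha_post = alpha + sum(local_counts) # conjugate Poisson-Gamma update
beta_post = beta + len(local_counts)

posterior_mean = alpha_post / beta_post  # local data shrunk toward the prior
print(round(posterior_mean, 3))
```

Note how few observations move the estimate only partway from the baseline (2.0) toward the local average (4.0), which is exactly the limited-data behavior the abstract describes.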

  5. Optimization of hull-less pumpkin seed roasting conditions using response surface methodology.

    PubMed

    Vujasinović, Vesna; Radočaj, Olga; Dimić, Etelka

    2012-05-01

    Response surface methodology (RSM) was applied to optimize hull-less pumpkin seed roasting conditions before seed pressing to maximize the biochemical composition and antioxidant capacity of the virgin pumpkin oils obtained using a hydraulic press. Hull-less pumpkin seeds were roasted for various lengths of time (30 to 70 min) at various roasting temperatures (90 to 130 °C), resulting in 9 different oil samples, while the responses were phospholipids content, total phenols content, α- and γ-tocopherols, and antioxidative activity [by 2,2-diphenyl-1-picrylhydrazyl (DPPH) free-radical assay]. Mathematical models have shown that roasting conditions influenced all dependent variables at P < 0.05. The higher roasting temperatures had a significant effect (P < 0.05) on phospholipids, phenols, and α-tocopherols contents, while longer roasting time had a significant effect (P < 0.05) on γ-tocopherol content and antioxidant capacity, among the samples prepared under different roasting conditions. The optimum conditions for roasting the hull-less pumpkin seeds were 120 °C for duration of 49 min, which resulted in these oil concentrations: phospholipids 0.29%, total phenols 23.06 mg/kg, α-tocopherol 5.74 mg/100 g, γ-tocopherol 24.41 mg/100 g, and an antioxidative activity (EC(50)) of 27.18 mg oil/mg DPPH.

  6. Global positioning system: a methodology for modelling the pseudorange measurements

    NASA Astrophysics Data System (ADS)

    Barros, M. S. S.; Rosa, L. C. L.; Walter, F.; Alves, L. H. P. M.

    1999-01-01

    A model for GPS measurements (pseudorange) based on time-series statistics (ARIMA) is presented. The model is based on data collected by a Trimble differential GPS receiver over a period of 20 minutes, corresponding to more than 700 data points. The best-fitting model, ARIMA(2,2,1), was obtained after testing different models. The final model shows a square-law behavior and a residual with a Gaussian-like shape, with a mean close to zero and a standard deviation of 1.21 m. This result is confirmed by the Kolmogorov-Smirnov test at the significance level of 5%. Independence is tested by computing the autocorrelation function, and it is shown that, within the confidence interval, the independence hypothesis is confirmed (Durbin-Watson test).
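    The Durbin-Watson check on model residuals mentioned above is simple to reproduce: values near 2 indicate no first-order autocorrelation, values near 0 strong positive autocorrelation. The two synthetic residual series are illustrative (the white one reuses the abstract's 1.21 m standard deviation):

```python
import numpy as np

def durbin_watson(resid):
    """Durbin-Watson statistic: sum of squared successive differences
    divided by the residual sum of squares."""
    resid = np.asarray(resid, dtype=float)
    return np.sum(np.diff(resid) ** 2) / np.sum(resid ** 2)

rng = np.random.default_rng(7)
white = rng.normal(0, 1.21, 700)   # independent residuals -> DW near 2
ar = np.zeros(700)                 # AR(1) residuals -> DW near 2*(1-0.9)
for t in range(1, 700):
    ar[t] = 0.9 * ar[t - 1] + rng.normal()
print(round(durbin_watson(white), 2), round(durbin_watson(ar), 2))
```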

  7. Methodology for reliability based condition assessment. Application to concrete structures in nuclear plants

    SciTech Connect

    Mori, Y.; Ellingwood, B.

    1993-08-01

    Structures in nuclear power plants may be exposed to aggressive environmental effects that cause their strength to decrease over an extended period of service. A major concern in evaluating the continued service for such structures is to ensure that in their current condition they are able to withstand future extreme load events during the intended service life with a level of reliability sufficient for public safety. This report describes a methodology to facilitate quantitative assessments of current and future structural reliability and performance of structures in nuclear power plants. This methodology takes into account the nature of past and future loads, and randomness in strength and in degradation resulting from environmental factors. An adaptive Monte Carlo simulation procedure is used to evaluate time-dependent system reliability. The time-dependent reliability is sensitive to the time-varying load characteristics and to the choice of initial strength and strength degradation models but not to correlation in component strengths within a system. Inspection/maintenance strategies are identified that minimize the expected future costs of keeping the failure probability of a structure at or below an established target failure probability during its anticipated service period.
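    A stripped-down (non-adaptive) Monte Carlo version of the time-dependent reliability calculation described above: random initial strength degrades over the service period while annual extreme loads arrive, and the cumulative failure probability is tracked by decade. The distributions and degradation rates are illustrative assumptions, not the report's models:

```python
import numpy as np

rng = np.random.default_rng(8)
n, years = 200_000, 40
strength0 = rng.normal(100, 10, n)    # random initial resistance
degr_rate = rng.uniform(0.2, 0.6, n)  # strength loss per year (environmental)

failed = np.zeros(n, dtype=bool)
p_by_decade = []
for t in range(1, years + 1):
    load = rng.gumbel(50, 8, n)       # annual extreme load event
    failed |= load > strength0 - degr_rate * t   # first-passage failure
    if t % 10 == 0:
        p_by_decade.append(failed.mean())        # cumulative P(failure)
print([round(p, 3) for p in p_by_decade])
```

The monotone growth of the decade-by-decade failure probability is the time-dependent reliability curve that inspection/maintenance strategies would aim to keep below a target value.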

  8. Optimization of culture conditions for flexirubin production by Chryseobacterium artocarpi CECT 8497 using response surface methodology.

    PubMed

    Venil, Chidambaram Kulandaisamy; Zakaria, Zainul Akmar; Ahmad, Wan Azlina

    2015-01-01

    Flexirubins are a unique type of bacterial pigment produced by bacteria from the genus Chryseobacterium, which are used in the treatment of chronic skin diseases such as eczema and may serve as a chemotaxonomic marker. Chryseobacterium artocarpi CECT 8497, a yellowish-orange pigment producing strain, was investigated for maximum production of pigment by optimizing medium composition employing response surface methodology (RSM). Culture conditions affecting pigment production were optimized statistically in shake flask experiments. Lactose, l-tryptophan and KH2PO4 were the most significant variables affecting pigment production. A Box-Behnken design (BBD) and RSM analysis were adopted to investigate the interactions between variables and determine the optimal values for maximum pigment production. Evaluation of the experimental results indicated that the optimum conditions for maximum production of pigment (521.64 mg/L) in a 50 L bioreactor were lactose 11.25 g/L, l-tryptophan 6 g/L and KH2PO4 650 ppm. Production under optimized conditions increased 7.23-fold compared to production prior to optimization. Results of this study showed that statistical optimization of medium composition and their interaction effects enables shortlisting of the significant factors influencing maximum pigment production from Chryseobacterium artocarpi CECT 8497. In addition, this is the first report optimizing the process parameters for flexirubin-type pigment production from Chryseobacterium artocarpi CECT 8497.

  9. A combined linear optimisation methodology for water resources allocation in Alfeios River Basin (Greece) under uncertain and vague system conditions

    NASA Astrophysics Data System (ADS)

    Bekri, Eleni; Yannopoulos, Panayotis; Disse, Markus

    2013-04-01

    In the present study, a combined linear programming methodology, based on Li et al. (2010) and Bekri et al. (2012), is employed to optimize water allocation under uncertain system conditions in the Alfeios River Basin, Greece. The Alfeios River is a water resources system of great natural, ecological, social and economic importance for Western Greece, since it is the longest watercourse in the Peloponnisos region and has the highest flow rate. Moreover, the river basin has been exposed over recent decades to numerous environmental stresses (e.g. hydrogeological alterations, intensively irrigated agriculture, surface water and groundwater overexploitation, and infrastructure developments), resulting in the degradation of its quantitative and qualitative characteristics. As in most Mediterranean countries, water resource management in the Alfeios River Basin has so far focused on an essentially supply-driven approach. It is still characterized by a lack of effective operational strategies: authority responsibility relationships are fragmented, and law enforcement and policy implementation are weak. The present regulated water allocation puzzle entails a mixture of hydropower generation, irrigation, drinking water supply and recreational activities. Under these conditions its water resources management is characterised by high uncertainty and by vague and imprecise data. The considered methodology has been developed to deal with uncertainties expressed as probability distributions and/or fuzzy boundary intervals, derived from associated α-cut levels. In this framework a set of deterministic submodels is studied through linear programming. The ad hoc water resources management and alternative management patterns in an Alfeios subbasin are analyzed and evaluated under various scenarios using the above-mentioned methodology, aiming to promote sustainable and equitable water management. Li, Y.P., Huang, G.H. and S.L., Nie, (2010), Planning water resources
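
One deterministic submodel of the kind generated by the α-cut approach can be sketched as a tiny allocation problem: at each α-cut, the uncertain supply collapses to an interval, and each interval bound yields a crisp LP. With a single capacity constraint and box bounds, the LP optimum reduces to a greedy allocation by priority weight. Users, demands and weights below are invented for illustration, not the basin's data.

```python
def allocate(supply, demands):
    """Exact LP optimum of: max sum(w_i * x_i) subject to
    sum(x_i) <= supply and 0 <= x_i <= demand_i.  With one capacity
    constraint, serving users in descending weight order is optimal."""
    alloc = {}
    remaining = supply
    for name, demand, weight in sorted(demands, key=lambda d: -d[2]):
        alloc[name] = min(demand, remaining)
        remaining -= alloc[name]
    return alloc

# Hypothetical water users (demand in hm^3, priority weight) and two
# crisp supplies from the lower/upper bounds of one alpha-cut interval.
demands = [("drinking", 40, 3.0), ("irrigation", 110, 2.0),
           ("hydropower", 60, 1.0)]
plans = {supply: allocate(supply, demands) for supply in (120.0, 180.0)}
```

Comparing the plans for the two bounds shows how the interval uncertainty propagates into an interval-valued allocation for each user.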

  10. Exploring the optimum conditions for maximizing the microbial growth of Candida intermedia by response surface methodology.

    PubMed

    Yönten, Vahap; Aktaş, Nahit

    2014-01-01

    An optimum and cost-efficient medium composition for the microbial growth of Candida intermedia Y-1981 yeast culture growing on whey was explored by applying a multistep response surface methodology. In the first step, a Plackett-Burman (PB) design was used to determine the fermentation medium factors most significant for microbial growth. Medium temperature, sodium chloride concentration and lactose concentration were identified as the most important factors. Subsequently, the optimum combination of the selected factors was explored by steepest ascent (SA) and central composite design (CCD). The optimum values for lactose concentration, sodium chloride concentration and medium temperature were found to be 18.4 g/L, 0.161 g/L, and 32.4°C, respectively. Experiments carried out at the optimum conditions revealed a maximum specific growth rate of 0.090 h-1; 42% total lactose removal was achieved within 24 h of fermentation. The results were finally verified with batch reactor experiments carried out under the evaluated optimum conditions.
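
The Plackett-Burman screening step can be sketched as follows: an 8-run two-level design is built from cyclic shifts of a standard generator row, and main effects are estimated by orthogonal contrasts. The response model below (only two active columns) is synthetic, purely to show that the design recovers the active factors.

```python
def plackett_burman8():
    """8-run Plackett-Burman design for up to 7 two-level factors:
    cyclic shifts of a standard generator row plus an all-minus row."""
    gen = [1, 1, 1, -1, 1, -1, -1]
    rows = [gen[-i:] + gen[:-i] for i in range(7)]  # i=0 is gen itself
    rows.append([-1] * 7)
    return rows

def main_effects(design, y):
    """Main-effect contrasts: (mean response at +1) - (mean at -1)."""
    n = len(design)
    return [sum(design[r][j] * y[r] for r in range(n)) / (n / 2)
            for j in range(len(design[0]))]

design = plackett_burman8()
# Synthetic screening response: only columns 0 (say, temperature) and
# 2 (say, lactose) are active; the remaining five factors are inert.
y = [10.0 + 4.0 * row[0] - 3.0 * row[2] for row in design]
effects = main_effects(design, y)
```

Because the PB columns are mutually orthogonal, the contrasts of the inert columns vanish and only the two active factors show non-zero effects.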

  11. Methodology of Blade Unsteady Pressure Measurement in the NASA Transonic Flutter Cascade

    NASA Technical Reports Server (NTRS)

    Lepicovsky, J.; McFarland, E. R.; Capece, V. R.; Jett, T. A.; Senyitko, R. G.

    2002-01-01

    This report describes the methodology adopted to measure unsteady pressures on blade surfaces in the NASA Transonic Flutter Cascade under conditions of simulated blade flutter. Previous work in this cascade reported that the oscillating cascade produced waves which, for some interblade phase angles, reflected off the wind tunnel walls back into the cascade, interfered with the cascade unsteady aerodynamics, and contaminated the acquired data. To alleviate the problems with data contamination due to back-wall interference, a method of influence coefficients was selected for future unsteady work in this cascade. In this approach only one blade in the cascade is oscillated at a time. The majority of the report is concerned with the experimental technique used and the experimental data generated in the facility. The report presents a list of all test conditions for the small amplitude of blade oscillations and shows examples of some of the results achieved. The report does not discuss data analysis procedures such as ensemble averaging, frequency analysis, and unsteady blade loading diagrams reconstructed using the influence coefficient method. Finally, the report presents the lessons learned from this phase of the experimental effort and suggests improvements and directions for the experimental work to be carried out at large oscillation amplitudes.

  12. Theoretical and Methodological Challenges in Measuring Instructional Quality in Mathematics Education Using Classroom Observations

    ERIC Educational Resources Information Center

    Schlesinger, Lena; Jentsch, Armin

    2016-01-01

    In this article, we analyze theoretical as well as methodological challenges in measuring instructional quality in mathematics classrooms by examining standardized observational instruments. At the beginning, we describe the results of a systematic literature review for determining subject-specific aspects measured in recent lesson studies in…

  13. Mathematical simulation of power conditioning systems. Volume 1: Simulation of elementary units. Report on simulation methodology

    NASA Technical Reports Server (NTRS)

    Prajous, R.; Mazankine, J.; Ippolito, J. C.

    1978-01-01

    Methods and algorithms used for the simulation of elementary power conditioning units (buck, boost, and buck-boost converters, as well as shunt PWM) are described. Definitions are given of similar converters and reduced parameters. The various parts of the simulation to be carried out are dealt with: local stability, corrective networks, measurement of input-output impedance, and global stability. A simulation example is given.
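
A state-space averaged model of the kind simulated for such units can be sketched for the buck converter: the averaged inductor and capacitor ODEs are integrated with explicit Euler, and the output settles to duty × input voltage. Component values and step sizes below are illustrative, not taken from the report.

```python
def simulate_buck(vin=12.0, duty=0.5, L=100e-6, C=100e-6, R=10.0,
                  dt=1e-6, steps=50000):
    """Explicit-Euler integration of the state-space averaged buck
    converter (continuous conduction mode); returns the output voltage."""
    i_l, v_c = 0.0, 0.0
    for _ in range(steps):
        di = (duty * vin - v_c) / L   # averaged inductor voltage over L
        dv = (i_l - v_c / R) / C      # averaged capacitor current over C
        i_l += di * dt
        v_c += dv * dt
    return v_c
```

At a 50% duty cycle and 12 V input, the averaged model settles to the ideal 6 V output; local stability can be checked from the eigenvalues of the same averaged system matrix.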

  14. Optimization of culture conditions for hydrogen production by Ethanoligenens harbinense B49 using response surface methodology.

    PubMed

    Guo, Wan-Qian; Ren, Nan-Qi; Wang, Xiang-Jing; Xiang, Wen-Sheng; Ding, Jie; You, Yang; Liu, Bing-Feng

    2009-02-01

    The design of an optimum and cost-efficient medium for high-level hydrogen production by Ethanoligenens harbinense B49 was attempted using response surface methodology (RSM). Based on a Plackett-Burman design, Fe²⁺ and Mg²⁺ were selected as the most critical nutrient salts. Subsequently, the optimum combination of the selected factors and the sole carbon source, glucose, was investigated by a Box-Behnken design. Results showed that a maximum hydrogen yield of 2.21 mol/mol glucose was predicted at glucose, Fe²⁺ and Mg²⁺ concentrations of 14.57 g/L, 177.28 mg/L and 691.98 mg/L, respectively. The results were further verified by triplicate experiments. The batch reactors were operated under optimized conditions with respective glucose, Fe²⁺ and Mg²⁺ concentrations of 14.5 g/L, 180 mg/L and 690 mg/L, an initial pH of 6.0, and an experimental temperature of 35 ± 1 °C. Without further pH adjustment, a maximum hydrogen yield of 2.20 mol/mol glucose was obtained with the optimized medium, which further verified the practicality of this optimization strategy.

  15. Optimization of Electrochemical Treatment Process Conditions for Distillery Effluent Using Response Surface Methodology.

    PubMed

    Arulmathi, P; Elangovan, G; Begum, A Farjana

    2015-01-01

    The distillery industry is recognized as one of the most polluting industries in India, with a large annual effluent production. In the present study, the optimization of electrochemical treatment process variables for removing the color and COD of distillery spent wash using Ti/Pt as an anode in batch mode is reported. Process variables such as pH, current density, electrolysis time, and electrolyte dose were selected as operating variables, and chemical oxygen demand (COD) and color removal efficiencies were considered as response variables for optimization using response surface methodology. The indirect electrochemical-oxidation process variables were optimized using a Box-Behnken response surface design (BBD). The results showed that the electrochemical treatment process effectively removed the COD (89.5%) and color (95.1%) of the distillery spent wash under the optimum conditions: pH of 4.12, current density of 25.02 mA/cm², electrolysis time of 103.27 min, and electrolyte (NaCl) concentration of 1.67 g/L.

  16. The influence of construction methodology on structural brain network measures: A review.

    PubMed

    Qi, Shouliang; Meesters, Stephan; Nicolay, Klaas; Romeny, Bart M Ter Haar; Ossenblok, Pauly

    2015-09-30

    Structural brain networks based on diffusion MRI and tractography show robust attributes such as small-worldness, hierarchical modularity, and rich-club organization. However, there are large discrepancies in the reports about specific network measures. It is hypothesized that these discrepancies result from the influence of construction methodology. We surveyed the methodological options and their influences on network measures. It is found that most network measures are sensitive to the scale of brain parcellation, MRI gradient schemes and orientation model, and the tractography algorithm, which is in accordance with the theoretical analysis of the small-world network model. Different network weighting schemes represent different attributes of brain networks, which makes these schemes incomparable between studies. Methodology choice depends on the specific study objectives and a clear understanding of the pros and cons of a particular methodology. Because there is no way to eliminate these influences, it seems more practical to quantify them, optimize the methodologies, and construct structural brain networks with multiple spatial resolutions, multiple edge densities, and multiple weighting schemes.
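
The sensitivity of network measures to construction choices such as edge density can be sketched with a toy weighted graph: binarising the same edge weights at two different thresholds changes the average clustering coefficient. The graph and thresholds below are invented for illustration, not drawn from the surveyed studies.

```python
def clustering(adj):
    """Average clustering coefficient of an undirected graph given as
    {node: set_of_neighbours}."""
    coeffs = []
    for v, nb in adj.items():
        k = len(nb)
        if k < 2:
            coeffs.append(0.0)
            continue
        closed = sum(1 for a in nb for b in nb if a < b and b in adj[a])
        coeffs.append(2.0 * closed / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def binarise(weighted_edges, cut):
    """Keep only edges whose weight reaches the threshold `cut`."""
    adj = {v: set() for edge in weighted_edges for v in edge[:2]}
    for a, b, w in weighted_edges:
        if w >= cut:
            adj[a].add(b)
            adj[b].add(a)
    return adj

# Toy weighted 'connectome': the same data at two edge-density choices.
edges = [(0, 1, 0.9), (1, 2, 0.8), (0, 2, 0.3), (2, 3, 0.7), (1, 3, 0.2)]
dense_cc = clustering(binarise(edges, 0.25))   # keeps 4 of 5 edges
sparse_cc = clustering(binarise(edges, 0.5))   # keeps 3 of 5 edges
```

Even on four nodes, the clustering coefficient moves from non-zero to zero as the density threshold changes, which is the kind of methodological sensitivity the review documents at scale.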

  17. An Updated Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    SciTech Connect

    Ramuhalli, Pradeep; Hirt, Evelyn H.; Coles, Garill A.; Bonebrake, Christopher A.; Ivans, William J.; Wootan, David W.; Mitchell, Mark R.

    2014-07-18

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs and are based on modularization of advanced reactor concepts. Enhancing affordability of AdvSMRs will be critical to ensuring wider deployment, as AdvSMRs suffer from loss of economies of scale inherent in small reactors when compared to large (~greater than 600 MWe output) reactors and the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of the system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors are unable to support the capability requirements listed above as they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide the capability requirements listed and meet the goals of controlling O&M costs. The report describes research results on augmenting an initial methodology for enhanced risk monitors that integrate real-time information about equipment condition and POF into risk monitors. Methods to propagate uncertainty through the enhanced risk monitor are evaluated. Available data to quantify the level of uncertainty and the POF of key components are examined for their relevance, and a status update of this data evaluation is described. 
Finally, we describe potential targets for developing new risk metrics that may be useful for studying trade-offs for economic…

  18. Technical Report on Preliminary Methodology for Enhancing Risk Monitors with Integrated Equipment Condition Assessment

    SciTech Connect

    Ramuhalli, Pradeep; Coles, Garill A.; Coble, Jamie B.; Hirt, Evelyn H.

    2013-09-17

    Small modular reactors (SMRs) generally include reactors with electric output of ~350 MWe or less (this cutoff varies somewhat but is substantially less than full-size plant output of 700 MWe or more). Advanced SMRs (AdvSMRs) refer to a specific class of SMRs and are based on modularization of advanced reactor concepts. AdvSMRs may provide a longer-term alternative to traditional light-water reactors (LWRs) and SMRs based on integral pressurized water reactor concepts currently being considered. Enhancing affordability of AdvSMRs will be critical to ensuring wider deployment. AdvSMRs suffer from loss of economies of scale inherent in small reactors when compared to large (~greater than 600 MWe output) reactors. Some of this loss can be recovered through reduced capital costs through smaller size, fewer components, modular fabrication processes, and the opportunity for modular construction. However, the controllable day-to-day costs of AdvSMRs will be dominated by operation and maintenance (O&M) costs. Technologies that help characterize real-time risk are important for controlling O&M costs. Risk monitors are used in current nuclear power plants to provide a point-in-time estimate of the system risk given the current plant configuration (e.g., equipment availability, operational regime, and environmental conditions). However, current risk monitors are unable to support the capability requirements listed above as they do not take into account plant-specific normal, abnormal, and deteriorating states of active components and systems. This report documents technology developments that are a step towards enhancing risk monitors that, if integrated with supervisory plant control systems, can provide the capability requirements listed and meet the goals of controlling O&M costs. The report describes research results from an initial methodology for enhanced risk monitors by integrating real-time information about equipment condition and POF into risk monitors.

  19. Measuring the entropy from shifted boundary conditions

    NASA Astrophysics Data System (ADS)

    Giusti, L.; Pepe, M.

    We explore a new computational strategy for determining the equation of state of the SU(3) Yang-Mills theory. By imposing shifted boundary conditions, the entropy density is computed from the vacuum expectation value of the off-diagonal components T_{0k} of the energy-momentum tensor. A step-scaling function is introduced to span a wide range in temperature values. We present preliminary numerical results for the entropy density and its step-scaling function obtained at eight temperature values in the range T_c to 15 T_c. At each temperature, discretization effects are removed by simulating the theory at several lattice spacings and by extrapolating the results to the continuum limit. Finite-size effects are always kept below the statistical errors. The absence of ultraviolet power divergences and the remarkably small discretization effects allow for a precise determination of the step-scaling function in the explored temperature range. These findings establish this strategy as a viable solution for an accurate determination of the equation of state in a wide range of temperature values.

  20. Integrating Multi-Tiered Measurement Outcomes for Special Education Eligibility with Sequential Decision-Making Methodology

    ERIC Educational Resources Information Center

    Decker, Scott L.; Englund, Julia; Albritton, Kizzy

    2012-01-01

    Changes to federal guidelines for the identification of children with disabilities have supported the use of multi-tiered models of service delivery. This study investigated the impact of measurement methodology as used across numerous tiers in determining special education eligibility. Four studies were completed using a sample of inner-city…

  1. 39 CFR 3055.5 - Changes to measurement systems, service standards, service goals, or reporting methodologies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    39 CFR 3055.5 — Changes to measurement systems, service standards, service goals, or reporting methodologies. Section 3055.5, Postal Service, Postal Regulatory Commission, Personnel, Service Performance and Customer Satisfaction Reporting, Annual Reporting of...

  2. Measuring College Students' Alcohol Consumption in Natural Drinking Environments: Field Methodologies for Bars and Parties

    ERIC Educational Resources Information Center

    Clapp, John D.; Holmes, Megan R.; Reed, Mark B.; Shillington, Audrey M.; Freisthler, Bridget; Lange, James E.

    2007-01-01

    In recent years researchers have paid substantial attention to the issue of college students' alcohol use. One limitation to the current literature is an over reliance on retrospective, self-report survey data. This article presents field methodologies for measuring college students' alcohol consumption in natural drinking environments.…

  3. Computer Science and Technology: Measurement of Interactive Computing: Methodology and Application.

    ERIC Educational Resources Information Center

    Cotton, Ira W.

    This dissertation reports the development and application of a new methodology for the measurement and evaluation of interactive computing, applied either to the users of an interactive computing system or to the system itself, including the service computer and any communications network through which the service is delivered. The focus is on the…

  4. Translating Oral Health-Related Quality of Life Measures: Are There Alternative Methodologies?

    ERIC Educational Resources Information Center

    Brondani, Mario; He, Sarah

    2013-01-01

    Translating existing sociodental indicators to another language involves a rigorous methodology, which can be costly. Free-of-charge online translator tools are available, but have not been evaluated in the context of research involving quality of life measures. To explore the value of using online translator tools to develop oral health-related…

  5. A coupled Bayesian and fault tree methodology to assess future groundwater conditions in light of climate change

    NASA Astrophysics Data System (ADS)

    Huang, J. J.; Du, M.; McBean, E. A.; Wang, H.; Wang, J.

    2014-08-01

    Maintaining acceptable groundwater levels while protecting ecosystems, particularly in arid areas, is a key measure against desertification. Owing to complicated hydrological processes and their inherent uncertainties, investigation of groundwater recharge conditions is challenging, particularly in arid areas under changing climate conditions. To assist planning to protect against desertification, a fault tree methodology, in conjunction with fuzzy logic and Bayesian data mining, is applied to Minqin Oasis, a highly vulnerable regime in northern China. A set of risk factors is employed within the fault tree framework, with fuzzy logic translating qualitative risk data into probabilities. Bayesian data mining is used to quantify the contribution of each risk factor to the final aggregated risk. Historical and projected future trends in temperature, precipitation and potential evapotranspiration (PET) are employed to assess water table changes under various future scenarios. The findings indicate that, if agricultural and industrial production capacity remain at 2004 levels, the water table will continue to drop at a rate of 0.6 m yr-1 when climatic effects alone are considered.
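
The fuzzy-logic-to-probability translation and fault-tree aggregation can be sketched as follows: linguistic ratings map to crisp probabilities (here, centroids of assumed fuzzy numbers), an OR gate aggregates the basic events, and a simple normalised share stands in for each factor's contribution. The ratings, mapping and factor names below are hypothetical, not the study's data.

```python
# Hypothetical mapping from linguistic risk ratings to probabilities
# (centroids of assumed triangular fuzzy numbers).
RATING_TO_P = {"low": 0.1, "medium": 0.3, "high": 0.6}

def gate_or(probs):
    """OR gate: the top event occurs if any basic event occurs."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def gate_and(probs):
    """AND gate: the top event occurs only if all basic events occur."""
    q = 1.0
    for p in probs:
        q *= p
    return q

# Invented basic events for a groundwater-decline fault tree.
factors = {"over_pumping": "high", "low_recharge": "medium",
           "drought": "medium"}
probs = {name: RATING_TO_P[r] for name, r in factors.items()}
top_event = gate_or(probs.values())
# Simple normalised importance measure per risk factor.
total = sum(probs.values())
contribution = {name: p / total for name, p in probs.items()}
```

In the study proper, the contribution weights come from Bayesian data mining rather than this naive normalisation; the sketch only shows how gate logic and factor weights combine.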

  6. Optimization of culture conditions to improve Helicobacter pylori growth in Ham's F-12 medium by response surface methodology.

    PubMed

    Bessa, L J; Correia, D M; Cellini, L; Azevedo, N F; Rocha, I

    2012-01-01

    Helicobacter pylori is a gastroduodenal pathogen that colonizes the human stomach and is the causal agent of gastric diseases. From a clinical and epidemiological point of view, enhancing the growth of this bacterium in liquid media is an important goal, as it allows accurate physiological studies to be performed. The aim of this work was to optimize three culture conditions that influence the growth of H. pylori in the defined medium Ham's F-12 supplemented with 5% fetal bovine serum, using response surface methodology as a statistical technique to obtain the optimal conditions. The factors studied in this experimental design (a Box-Behnken design) were the pH of the medium, the shaking speed (rpm) and the percentage of atmospheric oxygen, in a total of 17 experiments. The biomass specific growth rate was the response measured. The model was validated for pH and shaking speed; the percentage of atmospheric oxygen did not influence growth over the range of values studied. At the optimal values found for pH and shaking speed, 8 and 130 rpm, respectively, a specific growth rate of 0.164 h-1, corresponding to a maximal concentration of approximately 1.5×10⁸ CFU/ml, was reached after 8 h. This experimental design strategy allowed, for the first time, the optimization of H. pylori growth in a semi-synthetic medium, which may be important for improving physiological and metabolic studies of this fastidious bacterium.

  7. Surface monitoring measurements of materials on environmental change conditions

    NASA Astrophysics Data System (ADS)

    Tornari, Vivi; Bernikola, Eirini; Bellendorf, Paul; Bertolin, Chiara; Camuffo, Dario; Kotova, Lola; Jacobs, Daniela; Zarnic, Roko; Rajcic, Vlatka; Leissner, Johanna

    2013-05-01

    Climate change is one of the most critical global challenges of our time, and Europe's rich cultural heritage is particularly vulnerable if left unprotected. The Climate for Culture project investigates the damage impact of climate change on cultural heritage at the regional scale. In this paper the progress of the study is described, combining in situ measurements and investigations at cultural heritage sites throughout Europe with laboratory simulations. Cultural works of art are susceptible to deterioration: environmental changes cause an imperceptibly slow but steady accumulation of damaging effects that directly impact structural integrity. A laser holographic interference method is employed to provide remote, non-destructive, field-wise detection of the structural differences occurring as climate responses. The first results from a climate simulation of South East Europe (Crete) are presented. A full study covering the four climate regions of Europe is foreseen to provide values for the development of a precise and integrated model of thermographic building simulations for evaluating the impact of climate change. A third-generation portable metrology system with optimised user-interface software (DHSPI II) was designed to record, at custom intervals, the surfaces of materials reacting under simulated climatic conditions, both in the field and in the laboratory. The climate conditions refer to real data-logger readings representing characteristic historical buildings in selected climate zones. New-generation impact sensors, termed Glass Sensors and Free Water Sensors, are employed in the monitoring procedure to cross-correlate climate data with deformation data. Results from the combined methodology are additionally presented.

  8. Difficult to measure constructs: conceptual and methodological issues concerning participation and environmental factors.

    PubMed

    Whiteneck, Gale; Dijkers, Marcel P

    2009-11-01

    For rehabilitation and disability research, participation and environment are 2 crucial constructs that have been placed center stage by the International Classification of Functioning, Disability and Health (ICF). However, neither construct is adequately conceptualized by the ICF, and both are difficult to measure. This article addresses conceptual and methodologic issues related to these ICF constructs, and recommends an improved distinction between activities and participation, as well as elaboration of environment. A division of the combined ICF categories for activity and participation into 2 separate taxonomies is proposed to guide future research. The issue of measuring participation from objective and subjective perspectives is examined, and maintaining these distinct conceptual domains in the measurement of participation is recommended. The methodological issues contributing to the difficulty of measuring participation are discussed, including potential dimensionality, alternative metrics, and the appropriateness of various measurement models. For environment, the need for theory to focus research on those aspects of the environment that interact with individuals' impairments and functional limitations in affecting activities and participation is discussed, along with potential measurement models for those aspects. The limitations resulting from reliance on research participants as reporters on their own environment are set forth. Addressing these conceptual and methodological issues is required before the measurement of participation and environmental factors can advance and these important constructs can be used more effectively in rehabilitation and disability observational research and trials.

  9. Validation of a CFD Methodology for Variable Speed Power Turbine Relevant Conditions

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.; Giel, Paul W.; McVetta, Ashlie B.

    2013-01-01

    Analysis tools are needed to investigate the aerodynamic performance of Variable-Speed Power Turbines (VSPT) for rotorcraft applications. The VSPT operates at low Reynolds numbers (transitional flow) and over a wide range of incidence. Previously, the capability of a published three-equation turbulence model to accurately predict the transition location for three-dimensional heat transfer problems was assessed. In this paper, the results of a post-diction exercise using a three-dimensional flow in a transonic linear cascade comprising VSPT blading are presented. The measured blade pressure distributions and exit total pressure and flow angles for two incidence angles corresponding to cruise (i = 5.8°) and takeoff (i = -36.7°) were used for this study. For the higher loading condition of cruise and the negative incidence condition of takeoff, overall agreement with data may be considered satisfactory, but areas of needed improvement are also indicated.

  10. Response surface methodology for optimising the culture conditions for eicosapentaenoic acid production by marine bacteria.

    PubMed

    Abd Elrazak, Ahmed; Ward, Alan C; Glassey, Jarka

    2013-05-01

    Polyunsaturated fatty acids (PUFAs), especially eicosapentaenoic acid (EPA), are increasingly attracting scientific attention owing to their significant health-promoting role in the human body, which lacks the ability to produce them in vivo. The limitations associated with the current animal and plant sources of ω-3 fatty acids have led to increased interest in microbial production. Bacterial isolate 717 was identified as a potential high EPA producer. As an important step in the process development of microbial PUFA production, the culture conditions at the bioreactor scale were optimised for isolate 717 using a response surface methodology exploring the significant effects of temperature, pH and dissolved oxygen, and the interactions between them, on EPA production. This optimisation strategy led to a significant increase in the amount of EPA produced by the isolate under investigation: the amount of EPA increased from 9 mg/g biomass (33 mg/l, representing 7.6% of the total fatty acids) to 45 mg/g (350 mg/l, representing 25% of the total fatty acids). To avoid the additional costs associated with extreme cooling at large scale, a temperature-shock experiment was carried out, reducing the cooling requirement from the whole cultivation process to only 4 h prior to harvest. The ability of the organism to produce EPA in the complete absence of oxygen was also tested, revealing that oxygen is not critically required for the biosynthesis of EPA, although production improved in its presence. The stability of the produced oil and the complete absence of heavy metals in the bacterial biomass are considered additional benefits of bacterial EPA compared with other sources of PUFA. To our knowledge this is the first report of a bacterial isolate producing EPA at such high yields, making large-scale manufacture much more economically viable.

  11. Scalar mixing and strain dynamics methodologies for PIV/LIF measurements of vortex ring flows

    NASA Astrophysics Data System (ADS)

    Bouremel, Yann; Ducci, Andrea

    2017-01-01

    Fluid mixing operations are central to possibly all chemical, petrochemical, and pharmaceutical industries, whether related to biphasic blending in polymerisation processes, cell suspension for biopharmaceuticals production, or fractionation of complex oil mixtures. This work aims at providing a fundamental understanding of the mixing and stretching dynamics occurring in a reactor in the presence of a vortical structure; the vortex ring was selected as a flow paradigm of vortices commonly encountered in stirred and shaken reactors under laminar flow conditions. High-resolution laser-induced fluorescence and particle imaging velocimetry measurements were carried out to fully resolve the flow dissipative scales and provide a complete data set for assessing macro- and micro-mixing characteristics. The analysis builds upon the Lamb-Oseen vortex work of Meunier and Villermaux ["How vortices mix," J. Fluid Mech. 476, 213-222 (2003)] and the engulfment model of Baldyga and Bourne ["Simplification of micromixing calculations. I. Derivation and application of new model," Chem. Eng. J. 42, 83-92 (1989); "Simplification of micromixing calculations. II. New applications," ibid. 42, 93-101 (1989)], which are valid for diffusion-free conditions, and a comparison is made between three methodologies for assessing mixing characteristics. The first method is commonly used in macro-mixing studies and is based on a control-area analysis estimating the variation in time of the concentration standard deviation, while the other two are formulated to provide insight into local segregation dynamics, using either an iso-concentration approach or an iso-concentration-gradient approach to take diffusion into account.
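
The first, control-area methodology (tracking the concentration standard deviation) can be sketched directly: an intensity-of-segregation measure is the standard deviation of the concentration field normalised by its mean, which decays as the scalar homogenises. The toy fields below are invented for illustration, not PIV/LIF data.

```python
def intensity_of_segregation(field):
    """Macro-mixing measure: standard deviation of the concentration
    field normalised by its mean (coefficient of variation)."""
    values = [c for row in field for c in row]
    mean = sum(values) / len(values)
    var = sum((c - mean) ** 2 for c in values) / len(values)
    return var ** 0.5 / mean

# Toy 2x2 concentration fields: an unmixed dye blob versus a nearly
# homogenised field with the same mean concentration.
unmixed = [[1.0, 0.0], [0.0, 0.0]]
mixed = [[0.3, 0.2], [0.25, 0.25]]
before = intensity_of_segregation(unmixed)
after = intensity_of_segregation(mixed)
```

Applied frame by frame to a measured concentration field, the decay of this quantity in time gives the macro-mixing rate that the paper's control-area method quantifies.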

  12. Theoretical framework and methodological development of common subjective health outcome measures in osteoarthritis: a critical review.

    PubMed

    Pollard, Beth; Johnston, Marie; Dixon, Diane

    2007-03-07

    Subjective measures involving clinician ratings or patient self-assessments have become recognised as an important tool for the assessment of health outcome. The value of a health outcome measure is usually assessed by a psychometric evaluation of its reliability, validity and responsiveness. However, psychometric testing involves an accumulation of evidence and has recognised limitations. It has been suggested that an evaluation of how well a measure has been developed would be a useful additional criterion in assessing its value. This paper explored the theoretical background and methodological development of subjective health status measures commonly used in osteoarthritis research. Fourteen subjective health outcome measures commonly used in osteoarthritis research were examined. Each measure was explored on the basis of i) its theoretical framework (was there a definition of what was being assessed and was it part of a theoretical model?) and ii) its methodological development (what was the scaling strategy, how were the items generated and reduced, what was the response format and what was the scoring method?). Only the AIMS, SF-36 and WHOQOL defined what they were assessing (i.e. the construct of interest) and no measure assessed was part of a theoretical model. None of the clinician report measures appeared to have implemented a scaling procedure or described the rationale for the items selected or the scoring system. Of the patient self-report measures, the AIMS, MPQ, OXFORD, SF-36, WHOQOL and WOMAC appeared to follow a standard psychometric scaling method. The DRP and EuroQol used alternative scaling methods. The review highlighted the general lack of theoretical framework for both clinician report and patient self-report measures. This review also drew attention to the wide variation in the methodological development of commonly used measures in OA. While, in general, the patient self-report measures had good methodological development, the

  13. Nonlinear Parabolic Equations Involving Measures as Initial Conditions.

    DTIC Science & Technology

    1981-09-01

    MRC Technical Summary Report #2277, September 1981: Nonlinear Parabolic Equations Involving Measures as Initial Conditions, by Haim Brezis and Avner Friedman. (Only the report title, number, date and authors are recoverable from the garbled cover-page text.)

  14. A system and methodology for measuring volatile organic compounds produced by hydroponic lettuce in a controlled environment

    NASA Technical Reports Server (NTRS)

    Charron, C. S.; Cantliffe, D. J.; Wheeler, R. M.; Manukian, A.; Heath, R. R.

    1996-01-01

    A system and methodology were developed for the nondestructive qualitative and quantitative analysis of volatile emissions from hydroponically grown 'Waldmann's Green' leaf lettuce (Lactuca sativa L.). Photosynthetic photon flux (PPF), photoperiod, and temperature were automatically controlled and monitored in a growth chamber modified for the collection of plant volatiles. The lipoxygenase pathway products (Z)-3-hexenal, (Z)-3-hexenol, and (Z)-3-hexenyl acetate were emitted by lettuce plants after the transition from the light period to the dark period. The volatile collection system developed in this study enabled measurements of volatiles emitted by intact plants, from planting to harvest, under controlled environmental conditions.

  15. A system and methodology for measuring volatile organic compounds produced by hydroponic lettuce in a controlled environment.

    PubMed

    Charron, C S; Cantliffe, D J; Wheeler, R M; Manukian, A; Heath, R R

    1996-05-01

    A system and methodology were developed for the nondestructive qualitative and quantitative analysis of volatile emissions from hydroponically grown 'Waldmann's Green' leaf lettuce (Lactuca sativa L.). Photosynthetic photon flux (PPF), photoperiod, and temperature were automatically controlled and monitored in a growth chamber modified for the collection of plant volatiles. The lipoxygenase pathway products (Z)-3-hexenal, (Z)-3-hexenol, and (Z)-3-hexenyl acetate were emitted by lettuce plants after the transition from the light period to the dark period. The volatile collection system developed in this study enabled measurements of volatiles emitted by intact plants, from planting to harvest, under controlled environmental conditions.

  16. Analytical Methodology Used To Assess/Refine Observatory Thermal Vacuum Test Conditions For the Landsat 8 Data Continuity Mission

    NASA Technical Reports Server (NTRS)

    Fantano, Louis

    2015-01-01

    Thermal and Fluids Analysis Workshop, Silver Spring, MD, NCTS 21070-15. The Landsat 8 Data Continuity Mission, which is part of the United States Geological Survey (USGS), launched February 11, 2013. A Landsat environmental test requirement mandated that test conditions bound worst-case flight thermal environments. This paper describes a rigorous analytical methodology applied to assess and refine proposed thermal vacuum test conditions, and the issues encountered in attempting to satisfy this requirement.

  17. Thermal effusivity measurements for liquids: a self-consistent photoacoustic methodology.

    PubMed

    Balderas-López, J A

    2007-06-01

    A self-consistent photoacoustic methodology for the measurement of the thermal effusivity of liquids is presented. This methodology makes use of the analytical solution for the one-dimensional heat diffusion problem for a single layer, assuming a harmonic heat source in the surface absorption limit. The analytical treatment involves fitting procedures over normalized amplitudes and phases, obtained as the ratio of photoacoustic signals in the front configuration with and without the liquid sample, as functions of the modulation frequency. Two values of thermal effusivity for each liquid sample are obtained, one from the analysis of the normalized amplitudes and the other from the normalized phases. The comparison between the experimental and theoretical phases provides a simple criterion for deciding on the appropriate modulation frequency range for the analysis in each case. This methodology was applied to measure the thermal effusivity of some pure liquids; the thermal effusivity values obtained by this methodology agreed very well with the corresponding values reported in the literature.

  18. An Inverse Problem for a Class of Conditional Probability Measure-Dependent Evolution Equations.

    PubMed

    Mirzaev, Inom; Byrne, Erin C; Bortz, David M

    2016-01-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by Partial Differential Equation (PDE) models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach.

  19. An inverse problem for a class of conditional probability measure-dependent evolution equations

    NASA Astrophysics Data System (ADS)

    Mirzaev, Inom; Byrne, Erin C.; Bortz, David M.

    2016-09-01

    We investigate the inverse problem of identifying a conditional probability measure in measure-dependent evolution equations arising in size-structured population modeling. We formulate the inverse problem as a least squares problem for the probability measure estimation. Using the Prohorov metric framework, we prove existence and consistency of the least squares estimates and outline a discretization scheme for approximating a conditional probability measure. For this scheme, we prove general method stability. The work is motivated by partial differential equation models of flocculation for which the shape of the post-fragmentation conditional probability measure greatly impacts the solution dynamics. To illustrate our methodology, we apply the theory to a particular PDE model that arises in the study of population dynamics for flocculating bacterial aggregates in suspension, and provide numerical evidence for the utility of the approach.
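
    In symbols, the least squares formulation described above can be stated as follows (the notation u, t_i and d_i is illustrative, not taken from the paper):

```latex
% Estimate the conditional probability measure P over the set
% \mathcal{P}(\Omega) of probability measures on \Omega, given
% observations d_i of the model output u at times t_i:
\begin{equation*}
  \widehat{P} \;\in\; \operatorname*{arg\,min}_{P \in \mathcal{P}(\Omega)}
  \; J(P) \;=\; \sum_{i=1}^{n} \bigl\| u(t_i; P) - d_i \bigr\|^{2}.
\end{equation*}
% Existence and consistency of the estimator follow from compactness of
% \mathcal{P}(\Omega) in the Prohorov metric together with continuity
% of the map P \mapsto u(\cdot\,; P).
```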

  20. ASSESSING THE CONDITION OF SOUTH CAROLINA'S ESTUARIES: A NEW APPROACH INVOLVING INTEGRATED MEASURES OF CONDITION

    EPA Science Inventory

    The South Carolina Estuarine and Coastal Assessment Program (SCECAP) was initiated in 1999 to assess the condition of the state's coastal habitats using multiple measures of water quality, sediment quality, and biological condition. Sampling has subsequently been expanded to incl...

  1. The development and application of an automatic boundary segmentation methodology to evaluate the vaporizing characteristics of diesel spray under engine-like conditions

    NASA Astrophysics Data System (ADS)

    Ma, Y. J.; Huang, R. H.; Deng, P.; Huang, S.

    2015-04-01

    Studying the vaporizing characteristics of diesel spray could greatly help to reduce engine emissions and improve performance. The high-speed schlieren imaging method is an important optical technique for investigating the macroscopic vaporizing morphological evolution of liquid fuel, and pre-combustion constant volume combustion bombs are often used to simulate the high pressure and high temperature conditions occurring in diesel engines. Complicated background schlieren noise makes it difficult to segment the spray region in schlieren spray images. To tackle this problem, this paper develops a vaporizing spray boundary segmentation methodology based on an automatic threshold determination algorithm. The methodology was also used to quantify the macroscopic characteristics of vaporizing sprays, including tip penetration, near-field and far-field angles, projected spray area and spray volume. The spray boundary segmentation methodology was realized in a MATLAB-based program. Comparisons were made between the spray characteristics obtained using the program method and those acquired using a manual method and the Hiroyasu prediction model. It is demonstrated that the methodology can segment and measure vaporizing sprays precisely and efficiently. Furthermore, the experimental results show that the spray angles were only slightly affected by the injection pressure under high-temperature, high-pressure, inert conditions. A higher injection pressure leads to longer spray tip penetration and a larger projected area and volume, while elevating the ambient temperature can significantly promote the evaporation of cold fuel.
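
    The automatic threshold determination algorithm itself is implemented in a MATLAB program and is not reproduced in the abstract; as an illustration only, the following minimal Python sketch uses one standard automatic-threshold technique (Otsu's method) to segment a grayscale schlieren frame and extract two of the macroscopic spray quantities. The function names and the pixel-to-millimeter scale are assumptions, not the paper's.

```python
import numpy as np

def otsu_threshold(img):
    """Automatic threshold chosen by maximizing between-class variance (Otsu)."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    omega = np.cumsum(p)                  # cumulative class-0 probability
    mu = np.cumsum(p * np.arange(256))    # cumulative mean intensity
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b2 = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.nanargmax(sigma_b2))    # NaNs (empty classes) are ignored

def segment_spray(img, pixel_mm=0.1):
    """Binary spray mask plus projected area (mm^2) and tip penetration (mm),
    assuming the spray enters from the top of the frame."""
    mask = img > otsu_threshold(img)
    area = mask.sum() * pixel_mm ** 2
    rows = np.flatnonzero(mask.any(axis=1))
    tip = (rows[-1] + 1) * pixel_mm if rows.size else 0.0
    return mask, area, tip
```

    On a clean synthetic frame with a bright "spray" region on a dark background this recovers the projected area and penetration exactly; real schlieren images would additionally need the background-noise handling the paper develops.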

  2. Conditional Inference and Logic for Intelligent Systems: A Theory of Measure-Free Conditioning

    DTIC Science & Technology

    1991-08-01

    Conditional Inference and Logic for Intelligent Systems: A Theory of Measure-Free Conditioning, by I. R. Goodman. Only fragments of the text are recoverable, referring to "a complete and satisfactory theory of 'measure-free' conditioning" and asking "if the concept of 'conditional event' can be formalized and a suitable algebra of..."

  3. [Basic questionnaire and methodological criteria for Surveys on Working Conditions, Employment, and Health in Latin America and the Caribbean].

    PubMed

    Benavides, Fernando G; Merino-Salazar, Pamela; Cornelio, Cecilia; Assunção, Ada Avila; Agudelo-Suárez, Andrés A; Amable, Marcelo; Artazcoz, Lucía; Astete, Jonh; Barraza, Douglas; Berhó, Fabián; Milián, Lino Carmenate; Delclòs, George; Funcasta, Lorena; Gerke, Johanna; Gimeno, David; Itatí-Iñiguez, María José; Lima, Eduardo de Paula; Martínez-Iñigo, David; Medeiros, Adriane Mesquita de; Orta, Lida; Pinilla, Javier; Rodrigo, Fernando; Rojas, Marianela; Sabastizagal, Iselle; Vallebuona, Clelia; Vermeylen, Greet; Villalobos, Gloria H; Vives, Alejandra

    2016-10-10

    This article aimed to present a basic questionnaire and minimum methodological criteria for consideration in future Surveys on Working Conditions, Employment, and Health in Latin America and the Caribbean. A virtual and face-to-face consensus process was conducted with participation by a group of international experts who used the surveys available up until 2013 as the point of departure for defining the proposal. The final questionnaire included 77 questions grouped in six dimensions: socio-demographic characteristics of workers and companies; employment conditions; working conditions; health status; resources and preventive activities; and family characteristics. The minimum methodological criteria feature the interviewee's home as the place for the interview and aspects related to the quality of the fieldwork. These results can help improve the comparability of future surveys in Latin America and the Caribbean, which would in turn help improve information on workers' health in the region.

  4. A Practical Methodology to Measure Unbiased Gas Chromatographic Retention Factor vs. Temperature Relationships

    PubMed Central

    Peng, Baijie; Kuo, Mei-Yi; Yang, Panhia; Hewitt, Joshua T.; Boswell, Paul G.

    2014-01-01

    Compound identification continues to be a major challenge. Gas chromatography-mass spectrometry (GC-MS) is a primary tool used for this purpose, but the GC retention information it provides is underutilized because existing retention databases are experimentally restrictive and unreliable. A methodology called “retention projection” has the potential to overcome these limitations, but it requires the retention factor (k) vs. T relationship of a compound to calculate its retention time. Direct methods of measuring k vs. T relationships from a series of isothermal runs are tedious and time-consuming. Instead, a series of temperature programs can be used to quickly measure the k vs. T relationships, but they are generally not as accurate when measured this way because they are strongly biased by non-ideal behavior of the GC system in each of the runs. In this work, we overcome that problem by using the retention times of 25 n-alkanes to back-calculate the effective temperature profile and hold-up time vs. T profiles produced in each of six temperature programs. When the profiles were measured this way and taken into account, the k vs. T relationships measured from each of two different GC-MS instruments were nearly as accurate as the ones measured isothermally, showing less than 2-fold more error. Furthermore, temperature-programmed retention times calculated in five other labs from the new k vs. T relationships had the same distribution of error as when they were calculated from k vs. T relationships measured isothermally. Free software was developed to make the methodology easy to use. The new methodology potentially provides a relatively fast and easy way to measure unbiased k vs. T relationships. PMID:25496658
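
    The quantity at the heart of retention projection is the k vs. T relationship: a temperature-programmed retention time follows by integrating over the program until the accumulated migration fraction reaches one. The sketch below assumes an illustrative two-parameter model ln k = A + B/T and a constant hold-up time; the paper instead back-calculates effective temperature and hold-up time vs. T profiles, so this shows the generic calculation only.

```python
import math

def k_of_T(T, A=-12.0, B=6000.0):
    """Illustrative two-parameter retention-factor model: ln k = A + B/T (T in kelvin)."""
    return math.exp(A + B / T)

def program_T(t, T0=323.15, ramp=10.0 / 60.0):
    """Linear temperature program: start at T0 (K), ramp in K/s (10 K/min here)."""
    return T0 + ramp * t

def retention_time(t_M=30.0, dt=0.01):
    """Integrate dt / (t_M * (1 + k(T(t)))) until the migration fraction reaches 1;
    t_M is the (here constant) hold-up time in seconds."""
    frac, t = 0.0, 0.0
    while frac < 1.0:
        frac += dt / (t_M * (1.0 + k_of_T(program_T(t))))
        t += dt
    return t
```

    Doubling the hold-up time t_M delays elution and a steeper ramp shortens it, matching the qualitative behavior expected of temperature-programmed GC.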

  5. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements.

    PubMed

    do Amaral, Leonardo L; de Oliveira, Harley F; Pavoni, Juliana F; Sampaio, Francisco; Netto, Thomaz Ghilardi

    2015-09-01

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions of the acrylic support used for positioning the film but in a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertically, that is, perpendicular to the phantom. To validate this procedure, first, a Monte Carlo simulation using the PENELOPE code was performed to evaluate the differences between the dose distributions measured by the film in a SDD of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally, a study using IMRT treatments was performed. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distributions acquired in the two SDDs was 99.92%±0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85%±0.26% and the mean percentage difference in the normalization point doses was -1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments. PACS numbers: 87.55.Qr, 87.55.km, 87.55.N.

  6. A new transmission methodology for quality assurance in radiotherapy based on radiochromic film measurements.

    PubMed

    do Amaral, Leonardo L; de Oliveira, Harley F; Pavoni, Juliana F; Sampaio, Francisco; Ghillardi Netto, Thomaz

    2015-09-08

    Despite individual quality assurance (QA) being recommended for complex techniques in radiotherapy (RT) treatment, the possibility of errors in dose delivery during therapeutic application has been verified. Therefore, it is fundamentally important to conduct in vivo QA during treatment. This work presents an in vivo transmission quality control methodology, using radiochromic film (RCF) coupled to the linear accelerator (linac) accessory holder. This QA methodology compares the dose distribution measured by the film in the linac accessory holder with the dose distribution expected by the treatment planning software. The calculated dose distribution is obtained in the coronal and central plane of a phantom with the same dimensions of the acrylic support used for positioning the film but in a source-to-detector distance (SDD) of 100 cm, as a result of transferring the IMRT plan in question with all the fields positioned with the gantry vertically, that is, perpendicular to the phantom. To validate this procedure, first, a Monte Carlo simulation using the PENELOPE code was performed to evaluate the differences between the dose distributions measured by the film in a SDD of 56.8 cm and 100 cm. After that, several simple dose distribution tests were evaluated using the proposed methodology, and finally, a study using IMRT treatments was performed. In the Monte Carlo simulation, the mean percentage of points approved in the gamma function comparing the dose distributions acquired in the two SDDs was 99.92% ± 0.14%. In the simple dose distribution tests, the mean percentage of points approved in the gamma function was 99.85% ± 0.26% and the mean percentage difference in the normalization point doses was -1.41%. The transmission methodology was approved in 24 of 25 IMRT test irradiations. Based on these results, it can be concluded that the proposed methodology using RCFs can be applied for in vivo QA in RT treatments.

  7. A Methodology for Evaluating the Relationship Between Measures of Evaluation (MOEVs): The STF Approach

    DTIC Science & Technology

    1991-06-01

    Only fragments of the text are recoverable: Chapter IV describes the STF (Subjective Transfer Function) approach methodology, developed by Monti Callero and Clairice T. Veit. Cited works include Callero, Naslund and Veit, "Subjective Measurement of Tactical Air Command and Control, Vol. 1: Background and Approach" (The Rand Corporation, March 1981), and Veit and Callero, "Subjective Transfer Function Approach to Complex System Analysis" (The Rand Corporation).

  8. Major methodological constraints to the assessment of environmental status based on the condition of benthic communities

    NASA Astrophysics Data System (ADS)

    Medeiros, João Paulo; Pinto, Vanessa; Sá, Erica; Silva, Gilda; Azeda, Carla; Pereira, Tadeu; Quintella, Bernardo; Raposo de Almeida, Pedro; Lino Costa, José; José Costa, Maria; Chainho, Paula

    2014-05-01

    The Marine Strategy Framework Directive (MSFD) was published in 2008 and requires Member States to take the necessary measures to achieve or maintain good environmental status in aquatic ecosystems by the year 2020. The MSFD indicates 11 qualitative descriptors for environmental status assessment, including seafloor integrity, using the condition of the benthic community as an assessment indicator. Member States will have to define monitoring programs for each of the MSFD descriptors based on those indicators in order to understand which areas are in a Good Environmental Status and what measures need to be implemented to improve the status of areas that fail to achieve that major objective. Coastal and offshore marine waters are not frequently monitored in Portugal and assessment tools have only been developed very recently with the implementation of the Water Framework Directive (WFD). The lack of historical data and knowledge on the constraints of benthic indicators in coastal areas requires the development of specific studies addressing this issue. The major objective of the current study was to develop and test an experimental design to assess the impacts of offshore projects. The experimental design consisted of the seasonal and interannual assessment of benthic invertebrate communities in the area of future implementation of the structures (impact) and two potential control areas 2 km from the impact area. Seasonal benthic samples were collected at nine random locations within the impact and control areas in two consecutive years. Metrics included in the Portuguese benthic assessment tool (P-BAT) were calculated, since this multimetric tool was proposed for the assessment of the ecological status in Portuguese coastal areas under the WFD. Results indicated a high taxonomic richness in this coastal area, and no significant differences were found between impact and control areas, indicating the feasibility of establishing adequate control areas in marine

  9. Direct sample positioning and alignment methodology for strain measurement by diffraction

    NASA Astrophysics Data System (ADS)

    Ratel, N.; Hughes, D. J.; King, A.; Malard, B.; Chen, Z.; Busby, P.; Webster, P. J.

    2005-05-01

    An ISO (International Organization for Standardization) TTA (Technology Trends Assessment) was published in 2001 for the determination of residual stress using neutron diffraction, which identifies sample alignment and positioning as a key source of strain measurement error. Although the measurement uncertainty by neutron and synchrotron x-ray diffraction for an individual measurement of lattice strain is typically of the order of 10-100 × 10⁻⁶, specimens commonly exhibit strain gradients of 1000 × 10⁻⁶ mm⁻¹ or more, making sample location a potentially considerable source of error. An integrated approach to sample alignment and positioning is described which incorporates standard base-plates and sample holders, instrument alignment procedures, accurate digitization using a coordinate measuring machine and automatic generation of instrument control scripts. The methodology that has been developed is illustrated by the measurement of the transverse residual strain field in a welded steel T-joint using neutrons.

  10. A novel, noninvasive transdermal fluid sampling methodology: IGF-I measurement following exercise.

    PubMed

    Scofield, D E; McClung, H L; McClung, J P; Kraemer, W J; Rarick, K R; Pierce, J R; Cloutier, G J; Fielding, R A; Matheny, R W; Young, A J; Nindl, B C

    2011-06-01

    This study tested the hypothesis that transdermal fluid (TDF) provides a more sensitive and accurate measure of exercise-induced increases in insulin-like growth factor-I (IGF-I) than serum, and that these increases are detectable proximal, but not distal, to the exercising muscle. A novel, noninvasive methodology was used to collect TDF, followed by sampling of total IGF-I (tIGF-I) and free IGF-I (fIGF-I) in TDF and serum following an acute bout of exercise. Experiment 1: eight men (23 ± 3 yr, 79 ± 7 kg) underwent two conditions (resting and 60 min of cycling exercise at 60% VO2peak) in which serum and forearm TDF were collected for comparison. There were no significant changes in tIGF-I or fIGF-I in TDF obtained from the forearm or from serum following exercise (P > 0.05); however, the proportion of fIGF-I to tIGF-I in TDF was approximately fourfold greater than that of serum (P ≤ 0.05). These data suggest that changes in TDF IGF-I are not evident when TDF is sampled distal from the working tissue. To determine whether exercise-induced increases in local IGF-I could be detected when TDF was sampled directly over the active muscle group, we performed a second experiment. Experiment 2: fourteen subjects (22 ± 4 yr, 68 ± 11 kg) underwent an acute plyometric exercise condition consisting of 10 sets of 10 plyometric jumps with 2-min rest between sets. We observed a significant increase in TDF tIGF-I following exercise (P ≤ 0.05) but no change in serum tIGF-I (P > 0.05). Overall, these data suggest that TDF may provide a noninvasive means of monitoring acute exercise-induced changes in local IGF-I when sampled in proximity to exercising muscles. Moreover, our finding that the proportion of free to total IGF-I was greater in TDF than in serum suggests that changes in local IGF-I may be captured more readily using this system.

  11. Rapid assessment methodology in NORM measurements from building materials of Uzbekistan.

    PubMed

    Safarov, A A; Safarov, A N; Azimov, A N; Darby, I G

    2017-04-01

    Utilizing low-cost NaI(Tl) scintillation detector systems, we present a methodology for the rapid screening of building material samples and the determination of their Radium Equivalent Activity (Raeq). Materials from Uzbekistan, as a representative developing country, have been measured, and a correction coefficient for the Radium activity is deduced. The use of the correction coefficient makes it possible to decrease analysis times, enabling the express measurement of a large quantity of samples. The reduction in time and cost and the use of inexpensive equipment can democratize the practice of screening NORM in building materials in the international community.
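
    The Radium Equivalent Activity referred to above is, in its standard form (a general definition, not specific to this paper), a weighted sum of the specific activities of Ra-226, Th-232 and K-40, commonly compared against a 370 Bq/kg limit for bulk building materials. A minimal sketch:

```python
def radium_equivalent(c_ra, c_th, c_k):
    """Raeq in Bq/kg from specific activities (Bq/kg) of Ra-226, Th-232 and K-40.
    Standard weighting: Raeq = C_Ra + 1.43*C_Th + 0.077*C_K."""
    return c_ra + 1.43 * c_th + 0.077 * c_k

def within_limit(raeq, limit=370.0):
    """Commonly used acceptance criterion for bulk building materials."""
    return raeq <= limit
```

    For example, a sample with 40 Bq/kg of Ra-226, 30 Bq/kg of Th-232 and 400 Bq/kg of K-40 gives Raeq = 113.7 Bq/kg, well under the limit.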

  12. Radiotherapy for supradiaphragmatic Hodgkin's disease: determination of the proper fetal shielding conditions using Monte Carlo methodology.

    PubMed

    Mazonakis, Michalis; Tzedakis, Antonis; Varveris, Charalambos; Damilakis, John

    2011-10-01

    This study aimed to estimate fetal dose from mantle field irradiation with 6 MV photons and to determine the proper fetal shielding conditions. The Monte Carlo N-particle code and mathematical phantoms representing pregnancy at the first, second and third trimesters of gestation were used to calculate fetal dose with or without the presence of a 5-cm-thick lead shield of dimensions 35 × 35 cm². Fetal exposure was calculated for lead thicknesses of 2, 3, 4, 6, 7 and 8 cm. The dependence of fetal dose upon the distance separating the shield from the beam edge and the phantom's abdomen was investigated. Dose measurements were performed on a physical phantom using thermoluminescent dosimetry. The radiation dose to an unshielded and shielded fetus was 0.578-0.861% and 0.180-0.641% of the prescribed tumor dose, respectively, depending upon the gestational age. The lead thickness increase from 2 to 5 cm led to a fetal dose reduction of up to 23.4%. The use of 5- to 8-cm-thick lead resulted in dose values differing by less than 4.5%. The shift of the lead from the closer to the more distant position relative to the field edge increased fetal dose by up to 42.5%. The respective increase obtained by changing the distance from the phantom's abdomen was 21.9%. The difference between dose calculations and measurements at specific points was 8.3±3.9%. The presented data may be used for fetal dose assessment with different shielding settings before treatment and, then, for the design and construction of the appropriate shielding device.

  13. Optimization of fermentation conditions for P450 BM-3 monooxygenase production by hybrid design methodology*

    PubMed Central

    Lu, Yan; Mei, Le-he

    2007-01-01

    Factorial design and response surface techniques were used to design and optimize increased P450 BM-3 expression in E. coli. Operational conditions for maximum production were determined, with the parameters under consideration including the concentration of FeCl3, induction at OD578 (optical density measured at 578 nm), induction time and inoculum concentration. Initially, a Plackett-Burman (PB) design was used to evaluate the process variables relevant to P450 BM-3 production. Four statistically significant parameters were selected and utilized in order to optimize the process. With the 416C model of hybrid design, response surfaces were generated, and P450 BM-3 production was improved to 57.90×10⁻³ U/ml by the best combination of the physicochemical parameters at optimum levels: 0.12 mg/L FeCl3, inoculum concentration of 2.10%, induction at OD578 equal to 1.07, and 6.05 h of induction. PMID:17173359
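
    Response surface work of this kind fits a second-order polynomial to the measured responses and reads the optimum off the fitted surface. The sketch below does this for two coded factors via ordinary least squares; the study itself used more factors and a 416C hybrid design, so this illustrates the generic technique only.

```python
import numpy as np

def fit_quadratic_surface(X, y):
    """Least-squares fit of y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def stationary_point(coef):
    """Solve grad(surface) = 0: the candidate optimum of the fitted response."""
    _, b1, b2, b11, b22, b12 = coef
    H = np.array([[2.0 * b11, b12], [b12, 2.0 * b22]])
    return np.linalg.solve(H, -np.array([b1, b2]))
```

    Whether the stationary point is a maximum is checked from the definiteness of the Hessian H; the contour analysis mentioned in these abstracts is the graphical counterpart of the same fitted surface.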

  14. An in-situ soil structure characterization methodology for measuring soil compaction

    NASA Astrophysics Data System (ADS)

    Dobos, Endre; Kriston, András; Juhász, András; Sulyok, Dénes

    2016-04-01

    Agricultural cultivation has several direct and indirect effects on soil properties, among which soil structure degradation is the best known and most detectable one. Soil structure degradation leads to several water and nutrient management problems, which reduce the efficiency of agricultural production. There are several innovative technological approaches aiming to reduce these negative impacts on the soil structure. The testing, validation and optimization of these methods require an adequate technology to measure the impacts on the complex soil system. This study aims to develop an in-situ soil structure and root development testing methodology, which can be used in field experiments and which allows one to follow real-time changes in the soil structure (evolution/degradation) and their quantitative characterization. The method is adapted from remote sensing image processing technology. A specially adapted A4-size scanner is placed in the soil at a depth that cannot be reached by the agrotechnical treatments. Only the scanner USB cable comes to the surface, allowing image acquisition without any soil disturbance. Several images of the same spot can be taken throughout the vegetation season to follow the soil consolidation and structure development after the last tillage treatment for seedbed preparation. The scanned image of the soil profile is classified using supervised image classification, namely the maximum likelihood classification algorithm. The resulting image has two principal classes, soil matrix and pore space, plus complementary classes to cover the other occurring thematic classes, such as roots and stones. The calculated data are calibrated with field-sampled porosity data. As the scanner is buried under the soil with no changes in light conditions, the image processing can be automated for better temporal comparison.
    Besides the total porosity, each pore-size fraction and its distribution can be calculated for

  15. Sensitive Measures of Condition Change in EEG Data

    SciTech Connect

    Hively, L.M.; Gailey, P.C.; Protopopescu, V.

    1999-03-10

    We present a new, robust, model-independent technique for measuring condition change in nonlinear data. We define indicators of condition change by comparing distribution functions (DF) defined on the attractor for time-windowed data sets via the L1-distance and χ² statistics. The new measures are applied to EEG data with the objective of detecting the transition between non-seizure and epileptic brain activity in an accurate and timely manner. We find a clear superiority of the new metrics in comparison to traditional nonlinear measures as discriminators of condition change.
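
    The two indicators can be sketched directly: form a normalized distribution function from each time window and compare windows with the L1-distance and a χ² statistic. The sketch below bins scalar windows for brevity, whereas the paper forms the distribution functions on the reconstructed attractor; the function names are illustrative.

```python
import numpy as np

def window_df(x, bins):
    """Normalized histogram (empirical distribution function values) of one window."""
    h, _ = np.histogram(x, bins=bins)
    return h / h.sum()

def l1_distance(p, q):
    """L1 distance between two normalized histograms."""
    return np.abs(p - q).sum()

def chi2_statistic(p, q):
    """Symmetric chi-square statistic; bins empty in both windows are skipped."""
    denom = p + q
    mask = denom > 0
    return ((p[mask] - q[mask]) ** 2 / denom[mask]).sum()
```

    Identical windows give zero for both indicators, and a shift in the windowed dynamics raises both, which is the behavior a condition-change detector relies on.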

  16. Optimization of processing conditions for the sterilization of retorted short-rib patties using the response surface methodology.

    PubMed

    Choi, Su-Hee; Cheigh, Chan-Ick; Chung, Myong-Soo

    2013-05-01

    The aim of this study was to determine the optimum sterilization conditions for short-rib patties in retort trays by considering microbiological safety, nutritive value, sensory characteristics, and textural properties. In total, 27 sterilization conditions with various temperatures, times, and processing methods were tested using a 3³ factorial design. The response surface methodology (RSM) and contour analysis were applied to find the optimum sterilization conditions for the patties. Quality attributes were significantly affected by the sterilization temperature, time, and processing method. From RSM and contour analysis, the final optimum sterilization condition for the patties that simultaneously satisfied all specifications was determined to be 119.4°C for 18.55 min using a water-cascading rotary mode. The findings of the present study suggest that using optimized sterilization conditions will improve the microbial safety, sensory attributes, and nutritional retention of retorted short-rib patties.
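
    The microbiological-safety side of such an optimization is commonly scored by the accumulated lethality F0, the integral of 10^((T - 121.1)/z) over the process with z = 10 °C. This is the standard thermal-processing calculation, not a formula given in the abstract:

```python
def f0_value(times_min, temps_c, t_ref=121.1, z=10.0):
    """Accumulated lethality F0 (minutes) by trapezoidal integration of the
    lethal rate 10**((T - t_ref)/z) over a time-temperature history."""
    total = 0.0
    for i in range(1, len(times_min)):
        dt = times_min[i] - times_min[i - 1]
        l_prev = 10.0 ** ((temps_c[i - 1] - t_ref) / z)
        l_curr = 10.0 ** ((temps_c[i] - t_ref) / z)
        total += 0.5 * (l_prev + l_curr) * dt
    return total
```

    Holding the product at exactly 121.1 °C for 10 min gives F0 = 10 min; a lower retort temperature such as 119.4 °C accumulates lethality at a correspondingly reduced rate per minute.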

  17. Methodology for the calibration of and data acquisition with a six-degree-of-freedom acceleration measurement device

    NASA Astrophysics Data System (ADS)

    Lee, Harvey; Plank, Gordon; Weinstock, Herbert; Coltman, Michael

    1989-06-01

    Described here is a methodology for calibrating and gathering data with a six-degree-of-freedom acceleration measurement device that is intended to measure head acceleration of anthropomorphic dummies and human volunteers in automotive crash testing and head impact trauma studies. Error models (system equations) were developed for systems using six accelerometers in a coplanar (3-2-1) configuration, nine accelerometers in a coplanar (3-3-3) configuration and nine accelerometers in a non-coplanar (3-2-2-2) configuration and the accuracy and stability of these systems were compared. The model was verified under various input and computational conditions. Results of parametric sensitivity analyses which included parameters such as system geometry, coordinate system location, data sample rate and accelerometer cross axis sensitivities are presented. Recommendations to optimize data collection and reduction are given. Complete source listings of all of the software developed are presented.

  18. An innovative methodology for measurement of stress distribution of inflatable membrane structures

    NASA Astrophysics Data System (ADS)

    Zhao, Bing; Chen, Wujun; Hu, Jianhui; Chen, Jianwen; Qiu, Zhenyu; Zhou, Jinyu; Gao, Chengjun

    2016-02-01

    The inflatable membrane structure has been widely used in the fields of civil building, industrial building, airship, super pressure balloon and spacecraft. It is important to measure the stress distribution of the inflatable membrane structure because it influences the safety of the structural design. This paper presents an innovative methodology for the measurement and determination of the stress distribution of the inflatable membrane structure under different internal pressures, combining photogrammetry and the force-finding method. The shape of the inflatable membrane structure is maintained by the use of pressurized air, and the internal pressure is controlled and measured by means of an automatic pressure control system. The 3D coordinates of the marking points pasted on the membrane surface are acquired by three photographs captured from three cameras based on photogrammetry. After digitizing the markings on the photographs, the 3D curved surfaces are rebuilt. The continuous membrane surfaces are discretized into quadrilateral mesh and simulated by membrane links to calculate the stress distributions using the force-finding method. The internal pressure is simplified to the external node forces in the normal direction according to the contributory area of the node. Once the geometry x, the external force r and the topology C are obtained, the unknown force densities q in each link can be determined. Therefore, the stress distributions of the inflatable membrane structure can be calculated, combining the linear adjustment theory and the force density method based on the force equilibrium of inflated internal pressure and membrane internal force without considering the mechanical properties of the constitutive material. As the use of the inflatable membrane structure is attractive in the field of civil building, an ethylene-tetrafluoroethylene (ETFE) cushion is used with the measurement model to validate the proposed methodology. 
The comparisons between the
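    The force-finding step, where the force densities q follow from the geometry x, the external forces r, and the topology C, reduces to a linear solve because node equilibrium is linear in q. A minimal sketch with a single free node and invented geometry and load:

```python
import numpy as np

# One free node suspended by three links to fixed anchors above it.
# Node equilibrium: sum_l q_l * (x_anchor_l - x_free) + r = 0,
# linear in the unknown force densities q_l (force per unit length).
x_free = np.array([0.0, 0.0, 1.0])
anchors = np.array([[1.0, 0.0, 2.0],
                    [-0.5, 0.9, 2.0],
                    [-0.5, -0.9, 2.0]])
r = np.array([0.0, 0.0, -10.0])   # e.g. pressure-load resultant at the node

# Each column of A is one link vector from the free node to its anchor.
A = (anchors - x_free).T          # shape (3 coords, 3 links)
q, *_ = np.linalg.lstsq(A, -r, rcond=None)

# Member forces follow as force density times link length.
lengths = np.linalg.norm(anchors - x_free, axis=1)
forces = q * lengths
```

    On a full membrane mesh the same equations are assembled for every free node (through the topology matrix C) and solved in a least-squares sense, which is where the linear adjustment theory mentioned above comes in.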

  19. Traction and film thickness measurements under starved elastohydrodynamic conditions

    NASA Technical Reports Server (NTRS)

    Wedeven, L. D.

    1974-01-01

    Traction measurements under starved elastohydrodynamic conditions were obtained for a point contact geometry. Simultaneous measurements of the film thickness and the locations of the inlet lubricant boundary were made optically. The thickness of a starved film for combination rolling and sliding conditions varies with the location of the inlet boundary in the same way found previously for pure rolling. A starved film was observed to possess greater traction than a flooded film for the same slide roll ratio. For a given slide roll ratio a starved film simply increases the shear rate in the Hertz region. The maximum shear rate depends on the degree of starvation and has no theoretical limit. Traction measurements under starved conditions were compared with flooded conditions under equivalent shear rates in the Hertz region. When the shear rates in the Hertz region were low and the film severely starved, the measured tractions were found to be much lower than expected.

  20. A methodology to determine boundary conditions from forced convection experiments using liquid crystal thermography

    NASA Astrophysics Data System (ADS)

    Jakkareddy, Pradeep S.; Balaji, C.

    2017-02-01

    This paper reports the results of an experimental study to estimate the heat flux and convective heat transfer coefficient using liquid crystal thermography and Bayesian inference in a heat generating sphere enclosed in a cubical Teflon block. The geometry considered for the experiments comprises a heater inserted in a hollow hemispherical aluminium ball, resulting in a volumetric heat generation source placed at the center of the Teflon block. Calibrated thermochromic liquid crystal (TLC) sheets are used to capture the temperature distribution at the front face of the Teflon block. The forward model is the three-dimensional conduction equation, which is solved within the Teflon block using COMSOL to obtain steady-state temperatures. Match-up experiments are carried out for various velocities by minimizing the residual between TLC and simulated temperatures for every assumed loss coefficient, to obtain a correlation of average Nusselt number against Reynolds number; this is used to prescribe the boundary condition for the solution of the forward model. A surrogate model, an artificial neural network built upon the data from the COMSOL simulations, is used to drive a Markov chain Monte Carlo based Metropolis-Hastings algorithm to generate the samples. Bayesian inference is adopted to solve the inverse problem of determining the heat flux and heat transfer coefficient from the measured temperature field. Point estimates of the posterior, such as the mean, the maximum a posteriori, and the standard deviation of the retrieved heat flux and convective heat transfer coefficient, are reported. Additionally, the effect of the number of samples on the performance of the estimation process has been investigated.
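    A toy version of the Metropolis-Hastings retrieval illustrates the inverse step. The forward model below is a deliberately simplified linear stand-in with invented sensitivities, not the COMSOL/neural-network surrogate of the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Invented linear forward model: T_i = T_amb + q * g_i, with known sensitivities g.
g = np.linspace(0.02, 0.05, 20)        # K per (W/m^2), stand-in values
T_amb, q_true, sigma = 300.0, 800.0, 0.5
T_meas = T_amb + q_true * g + rng.normal(0.0, sigma, g.size)

def log_post(q):
    """Gaussian likelihood with a flat prior on a physical range."""
    if not (0.0 < q < 5000.0):
        return -np.inf
    resid = T_meas - (T_amb + q * g)
    return -0.5 * np.sum(resid**2) / sigma**2

# Random-walk Metropolis-Hastings over the scalar heat flux q.
q, lp = 500.0, log_post(500.0)
samples = []
for _ in range(20000):
    prop = q + rng.normal(0.0, 20.0)
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:
        q, lp = prop, lp_prop
    samples.append(q)
post = np.array(samples[5000:])        # discard burn-in
q_mean, q_std = post.mean(), post.std()  # posterior point estimates
```

    The mean, maximum a posteriori, and standard deviation quoted in the abstract are exactly these kinds of summaries of the retained chain.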

  1. A methodology to determine boundary conditions from forced convection experiments using liquid crystal thermography

    NASA Astrophysics Data System (ADS)

    Jakkareddy, Pradeep S.; Balaji, C.

    2016-05-01

    This paper reports the results of an experimental study to estimate the heat flux and convective heat transfer coefficient using liquid crystal thermography and Bayesian inference in a heat generating sphere enclosed in a cubical Teflon block. The geometry considered for the experiments comprises a heater inserted in a hollow hemispherical aluminium ball, resulting in a volumetric heat generation source placed at the center of the Teflon block. Calibrated thermochromic liquid crystal (TLC) sheets are used to capture the temperature distribution at the front face of the Teflon block. The forward model is the three-dimensional conduction equation, which is solved within the Teflon block using COMSOL to obtain steady-state temperatures. Match-up experiments are carried out for various velocities by minimizing the residual between TLC and simulated temperatures for every assumed loss coefficient, to obtain a correlation of average Nusselt number against Reynolds number; this is used to prescribe the boundary condition for the solution of the forward model. A surrogate model, an artificial neural network built upon the data from the COMSOL simulations, is used to drive a Markov chain Monte Carlo based Metropolis-Hastings algorithm to generate the samples. Bayesian inference is adopted to solve the inverse problem of determining the heat flux and heat transfer coefficient from the measured temperature field. Point estimates of the posterior, such as the mean, the maximum a posteriori, and the standard deviation of the retrieved heat flux and convective heat transfer coefficient, are reported. Additionally, the effect of the number of samples on the performance of the estimation process has been investigated.

  2. A methodology to condition distorted acoustic emission signals to identify fracture timing from human cadaver spine impact tests.

    PubMed

    Arun, Mike W J; Yoganandan, Narayan; Stemper, Brian D; Pintar, Frank A

    2014-12-01

    While studies have used acoustic sensors to determine fracture initiation time in biomechanical studies, a systematic procedure is not established to process acoustic signals. The objective of the study was to develop a methodology to condition distorted acoustic emission data using signal processing techniques to identify fracture initiation time. The methodology was developed from testing a human cadaver lumbar spine column. Acoustic sensors were glued to all vertebrae, high-rate impact loading was applied, load-time histories were recorded (load cell), and fracture was documented using CT. Compression fracture occurred to L1 while other vertebrae were intact. FFT of raw voltage-time traces were used to determine an optimum frequency range associated with high decibel levels. Signals were bandpass filtered in this range. Bursting pattern was found in the fractured vertebra while signals from other vertebrae were silent. Bursting time was associated with time of fracture initiation. Force at fracture was determined using this time and force-time data. The methodology is independent of selecting parameters a priori such as fixing a voltage level(s), bandpass frequency and/or using force-time signal, and allows determination of force based on time identified during signal processing. The methodology can be used for different body regions in cadaver experiments.
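    The pipeline described (bandpass in an FFT-selected band, then take the first burst as fracture initiation) might look like the following sketch on a synthetic trace; the sample rate, band, and threshold factor are assumptions, not the study's values:

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 100_000.0                                   # assumed sample rate, Hz
t = np.arange(0.0, 0.05, 1.0 / fs)
rng = np.random.default_rng(7)

# Synthetic acoustic-emission voltage: noise plus a "fracture burst" at 30 ms.
signal = 0.02 * rng.normal(size=t.size)
burst = (t > 0.030) & (t < 0.032)
signal[burst] += np.sin(2 * np.pi * 15_000 * t[burst])

# Bandpass in the high-decibel band identified from the FFT (assumed 10-20 kHz).
b, a = butter(4, [10_000, 20_000], btype="bandpass", fs=fs)
filtered = filtfilt(b, a, signal)                # zero-phase, preserves timing

# First threshold crossing of the rectified signal marks fracture initiation;
# the threshold is set from the quiet pre-impact baseline.
envelope = np.abs(filtered)
threshold = 8.0 * filtered[: int(0.01 * fs)].std()
t_fracture = t[np.argmax(envelope > threshold)]
```

    With a load-cell trace sampled on the same clock, the force at fracture is then simply the force sample at `t_fracture`, which is the key point of the methodology.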

  3. Effects of Testing Conditions on Self-Concept Measurement

    ERIC Educational Resources Information Center

    Chandler, Theodore A.; And Others

    1976-01-01

    Many self-concept measures employ several different scales to which the subject responds in a set order at one sitting. This study examined effects of different testing conditions. The Index of Adjustment and Values (IAV) was administered to 191 graduate students under two different sequences and two time delay conditions. Results indicate…

  4. Assessing the Practical Equivalence of Conversions when Measurement Conditions Change

    ERIC Educational Resources Information Center

    Liu, Jinghua; Dorans, Neil J.

    2012-01-01

    At times, the same set of test questions is administered under different measurement conditions that might affect the psychometric properties of the test scores enough to warrant different score conversions for the different conditions. We propose a procedure for assessing the practical equivalence of conversions developed for the same set of test…

  5. The methodologies and instruments of vehicle particulate emission measurement for current and future legislative regulations

    NASA Astrophysics Data System (ADS)

    Otsuki, Yoshinori; Nakamura, Hiroshi; Arai, Masataka; Xu, Min

    2015-09-01

    Since the health risks associated with fine particles whose aerodynamic diameters are smaller than 2.5 μm were first proven, regulations restricting particulate matter (PM) mass emissions from internal combustion engines have become increasingly severe. Accordingly, the gravimetric method of PM mass measurement is approaching its lower limit of detection as the emissions from vehicles are further reduced. For example, the variation in the adsorption of gaseous components such as hydrocarbons from unburned fuel and lubricant oil, and the presence of agglomerated particles, which are not generated directly in engine combustion but are re-entrained from the walls of the sampling pipes, can cause measurement uncertainty. PM mass measurement systems and methodologies have been continuously refined in order to improve measurement accuracy. As an alternative metric, the Particle Measurement Programme (PMP) within the United Nations Economic Commission for Europe (UNECE) developed a solid particle number measurement method in order to improve the sensitivity of particulate emission measurement from vehicles. Consequently, particle number (PN) limits were implemented in European regulations from 2011. Portable emission measurement systems (PEMS) for in-use vehicle emission measurements are also attracting attention, currently in North America and Europe, and real-time PM mass and PN instruments are under evaluation.

  6. Ultrasonic Technique for Density Measurement of Liquids in Extreme Conditions

    PubMed Central

    Kazys, Rymantas; Sliteris, Reimondas; Rekuviene, Regina; Zukauskas, Egidijus; Mazeika, Liudas

    2015-01-01

    An ultrasonic technique, invariant to temperature changes, for density measurement of different liquids under extreme in situ conditions is presented. The influence of the geometry and material parameters of the measurement system (transducer, waveguide, matching layer) on measurement accuracy and reliability is analyzed theoretically along with experimental results. The proposed method is based on measurement of the amplitude of the ultrasonic wave reflected from the interface between the solid and the liquid medium under investigation. In order to enhance sensitivity, the use of a quarter-wavelength acoustic matching layer is proposed, which increases the sensitivity of the measurement system significantly. Density measurements quite often must be performed in extreme conditions at high temperature (up to 220 °C) and high pressure. In this case, metal waveguides between the piezoelectric transducer and the measured liquid are used in order to protect the conventional transducer from the influence of high temperature and to avoid depolarization. The presented ultrasonic technique is suitable for density measurement of different materials, including liquids and polymer melts, in extreme conditions. A new calibration algorithm was proposed and the metrological evaluation of the measurement method was performed. The expanded measurement uncertainty is Uρ = 7.4 × 10⁻³ g/cm³ (1%). PMID:26262619
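    The reflection-amplitude principle behind such methods can be sketched as follows, assuming the liquid sound speed is known independently. The impedance and material values are illustrative numbers, not the paper's calibration:

```python
import numpy as np

# Pressure reflection coefficient at the waveguide/liquid interface:
#   r = (Zl - Zs) / (Zl + Zs),  Z = rho * c (acoustic impedance).
Z_steel = 46.0e6      # Pa*s/m, assumed waveguide impedance
c_water = 1480.0      # m/s, assumed liquid sound speed
rho_water = 998.0     # kg/m^3, reference density

# Forward: the |r| that this liquid would produce.
Z_liq = rho_water * c_water
r = (Z_liq - Z_steel) / (Z_liq + Z_steel)

def density_from_reflection(r_abs, Z_solid, c_liquid):
    """Invert |r| for the liquid impedance, then divide by sound speed."""
    Z = Z_solid * (1.0 - r_abs) / (1.0 + r_abs)
    return Z / c_liquid

rho_est = density_from_reflection(abs(r), Z_steel, c_water)
```

    Because Z_liquid is much smaller than the waveguide impedance, |r| is close to 1 and small amplitude changes carry the density information, which is why the matching layer that boosts sensitivity matters.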

  7. A Measurement and Simulation Based Methodology for Cache Performance Modeling and Tuning

    NASA Technical Reports Server (NTRS)

    Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    We present a cache performance modeling methodology that facilitates the tuning of uniprocessor cache performance for applications executing on shared-memory multiprocessors by accurately predicting the effects of source-code-level modifications. Measurements on a single processor are initially used to identify parts of the code where cache utilization improvements may significantly impact the overall performance. Cache simulation based on trace-driven techniques can then be carried out without gathering detailed address traces. The minimal runtime information for modeling the cache performance of a selected code block includes the base virtual addresses of arrays, the virtual addresses of variables, and the loop bounds for that code block; the rest of the information is obtained from the source code. We show that the cache performance predictions are as reliable as those obtained through trace-driven simulations. This technique is particularly helpful for exploring various "what-if" scenarios regarding the cache performance impact of alternative code structures. We explain and validate this methodology using a simple matrix-matrix multiplication program. We then apply this methodology to predict and tune the cache performance of two realistic scientific applications taken from the Computational Fluid Dynamics (CFD) domain.
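    The idea of simulating cache behavior from minimal runtime information (base addresses and loop bounds rather than a captured address trace) can be sketched with a direct-mapped cache model. The cache geometry and the example loop are assumptions, not the paper's configuration:

```python
LINE = 64          # bytes per cache line (assumed)
SETS = 256         # direct-mapped: one line per set (assumed)

def miss_rate(addresses):
    """Count misses of an address stream in a direct-mapped cache."""
    tags = [None] * SETS
    misses = 0
    for addr in addresses:
        line = addr // LINE
        s, tag = line % SETS, line // SETS
        if tags[s] != tag:      # miss: fill the set with the new line
            tags[s] = tag
            misses += 1
    return misses / len(addresses)

# Access stream for `for i in range(n): s += a[i]` over 8-byte elements,
# reconstructed from just the base address and the loop bound.
base, n = 0x10000, 4096
stream = [base + 8 * i for i in range(n)]
rate = miss_rate(stream)   # sequential scan: one miss per 64/8 = 8 accesses
```

    Even this toy model reproduces the qualitative "what-if" answers (e.g. how a different stride or element size changes the miss rate) that the methodology uses to guide source-level tuning.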

  8. Optimisation of Ultrasound-Assisted Extraction Conditions for Phenolic Content and Antioxidant Capacity from Euphorbia tirucalli Using Response Surface Methodology

    PubMed Central

    Vuong, Quan V.; Goldsmith, Chloe D.; Dang, Trung Thanh; Nguyen, Van Tang; Bhuyan, Deep Jyoti; Sadeqzadeh, Elham; Scarlett, Christopher J.; Bowyer, Michael C.

    2014-01-01

    Euphorbia tirucalli (E. tirucalli) is now widely distributed around the world and is well known as a source of traditional medicine in many countries. This study aimed to utilise response surface methodology (RSM) to optimise ultrasonic-assisted extraction (UAE) conditions for total phenolic compounds (TPC) and antioxidant capacity from E. tirucalli leaf. The results showed that ultrasonic temperature, time and power affected TPC and antioxidant capacity; however, the effects varied. Ultrasonic power had the strongest influence on TPC, whereas ultrasonic temperature had the greatest impact on antioxidant capacity. Ultrasonic time had the least impact on both TPC and antioxidant capacity. The optimum UAE conditions were determined to be 50 °C, 90 min and 200 W. Under these conditions, the E. tirucalli leaf extract yielded 2.93 mg GAE/g FW of TPC and exhibited potent antioxidant capacity. These conditions can be utilised for further isolation and purification of phenolic compounds from E. tirucalli leaf. PMID:26785074

  9. Optimisation of Ultrasound-Assisted Extraction Conditions for Phenolic Content and Antioxidant Capacity from Euphorbia tirucalli Using Response Surface Methodology.

    PubMed

    Vuong, Quan V; Goldsmith, Chloe D; Dang, Trung Thanh; Nguyen, Van Tang; Bhuyan, Deep Jyoti; Sadeqzadeh, Elham; Scarlett, Christopher J; Bowyer, Michael C

    2014-09-17

    Euphorbia tirucalli (E. tirucalli) is now widely distributed around the world and is well known as a source of traditional medicine in many countries. This study aimed to utilise response surface methodology (RSM) to optimise ultrasonic-assisted extraction (UAE) conditions for total phenolic compounds (TPC) and antioxidant capacity from E. tirucalli leaf. The results showed that ultrasonic temperature, time and power affected TPC and antioxidant capacity; however, the effects varied. Ultrasonic power had the strongest influence on TPC, whereas ultrasonic temperature had the greatest impact on antioxidant capacity. Ultrasonic time had the least impact on both TPC and antioxidant capacity. The optimum UAE conditions were determined to be 50 °C, 90 min and 200 W. Under these conditions, the E. tirucalli leaf extract yielded 2.93 mg GAE/g FW of TPC and exhibited potent antioxidant capacity. These conditions can be utilised for further isolation and purification of phenolic compounds from E. tirucalli leaf.

  10. Structural acoustic control of plates with variable boundary conditions: design methodology.

    PubMed

    Sprofera, Joseph D; Cabell, Randolph H; Gibbs, Gary P; Clark, Robert L

    2007-07-01

    A method for optimizing a structural acoustic control system subject to variations in plate boundary conditions is provided. The assumed modes method is used to build a plate model with varying levels of rotational boundary stiffness to simulate the dynamics of a plate with uncertain edge conditions. A transducer placement scoring process, involving Hankel singular values, is combined with a genetic optimization routine to find spatial locations robust to boundary condition variation. Predicted frequency response characteristics are examined, and theoretically optimized results are discussed in relation to the range of boundary conditions investigated. Modeled results indicate that it is possible to minimize the impact of uncertain boundary conditions in active structural acoustic control by optimizing the placement of transducers with respect to those uncertainties.
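    The Hankel-singular-value ingredient of the transducer-placement scoring can be sketched for a small state-space model. The two-mode plant and the candidate placement encoded in B and C are invented stand-ins for a plate model:

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def hankel_singular_values(A, B, C):
    """HSVs of (A, B, C): sqrt of eigenvalues of Wc @ Wo, where the
    gramians solve the continuous Lyapunov equations
    A Wc + Wc A^T = -B B^T  and  A^T Wo + Wo A = -C^T C."""
    Wc = solve_continuous_lyapunov(A, -B @ B.T)    # controllability gramian
    Wo = solve_continuous_lyapunov(A.T, -C.T @ C)  # observability gramian
    eigs = np.linalg.eigvals(Wc @ Wo)
    return np.sort(np.sqrt(np.abs(eigs.real)))[::-1]

# Two lightly damped modes at 1 and 2 rad/s (invented plant).
def mode(w, z):
    return np.array([[0.0, 1.0], [-w**2, -2.0 * z * w]])

A = np.block([[mode(1.0, 0.02), np.zeros((2, 2))],
              [np.zeros((2, 2)), mode(2.0, 0.02)]])
B = np.array([[0.0], [1.0], [0.0], [0.5]])   # one candidate actuator placement
C = np.array([[1.0, 0.0, 0.5, 0.0]])         # one candidate sensor placement
hsv = hankel_singular_values(A, B, C)
```

    A placement score can then favor candidates whose leading HSVs stay large across the family of boundary-stiffness models, which is the robustness idea the genetic search exploits.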

  11. Aerosol classification using airborne High Spectral Resolution Lidar measurements - methodology and examples

    NASA Astrophysics Data System (ADS)

    Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Hair, J. W.; Rogers, R. R.; Obland, M. D.; Butler, C. F.; Cook, A. L.; Harper, D. B.; Froyd, K. D.

    2012-01-01

    The NASA Langley Research Center (LaRC) airborne High Spectral Resolution Lidar (HSRL) on the NASA B200 aircraft has acquired extensive datasets of aerosol extinction (532 nm), aerosol optical depth (AOD) (532 nm), backscatter (532 and 1064 nm), and depolarization (532 and 1064 nm) profiles during 18 field missions that have been conducted over North America since 2006. The lidar measurements of aerosol intensive parameters (lidar ratio, depolarization, backscatter color ratio, and spectral depolarization ratio) are shown to vary with location and aerosol type. A methodology based on observations of known aerosol types is used to qualitatively classify the extensive set of HSRL aerosol measurements into eight separate types. Several examples are presented showing how the aerosol intensive parameters vary with aerosol type and how these aerosols are classified according to this new methodology. The HSRL-based classification reveals vertical variability of aerosol types during the NASA ARCTAS field experiment conducted over Alaska and northwest Canada during 2008. In two examples derived from flights conducted during ARCTAS, the HSRL classification of biomass burning smoke is shown to be consistent with aerosol types derived from coincident airborne in situ measurements of particle size and composition. The HSRL retrievals of AOD and inferences of aerosol types are used to apportion AOD to aerosol type; results of this analysis are shown for several experiments.
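    The classification idea, assigning an aerosol type from the intensive parameters, can be caricatured as a nearest-centroid rule. The centroids and scale factors below are rough illustrative numbers, not the HSRL class definitions:

```python
import numpy as np

# Feature vector: (lidar ratio [sr], particulate depolarization,
# backscatter color ratio). Values are invented for illustration.
CENTROIDS = {
    "dust":   np.array([45.0, 0.30, 0.5]),
    "smoke":  np.array([70.0, 0.05, 0.7]),
    "marine": np.array([25.0, 0.02, 0.4]),
    "urban":  np.array([55.0, 0.04, 0.9]),
}
SCALE = np.array([20.0, 0.1, 0.3])   # normalize each parameter's spread

def classify(obs):
    """Assign the type whose centroid is nearest in scaled parameter space."""
    obs = np.asarray(obs, dtype=float)
    return min(CENTROIDS,
               key=lambda k: np.linalg.norm((obs - CENTROIDS[k]) / SCALE))

label = classify([68.0, 0.06, 0.65])   # high lidar ratio, low depolarization
```

    Applied profile by profile, such a rule yields exactly the kind of vertically resolved typing, and the type-apportioned AOD, that the abstract describes.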

  12. Aerosol classification using airborne High Spectral Resolution Lidar measurements - methodology and examples

    NASA Astrophysics Data System (ADS)

    Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Hair, J. W.; Rogers, R. R.; Obland, M. D.; Butler, C. F.; Cook, A. L.; Harper, D. B.; Froyd, K. D.

    2011-09-01

    The NASA Langley Research Center (LaRC) airborne High Spectral Resolution Lidar (HSRL) on the NASA B200 aircraft has acquired extensive datasets of aerosol extinction (532 nm), aerosol optical thickness (AOT) (532 nm), backscatter (532 and 1064 nm), and depolarization (532 and 1064 nm) profiles during 18 field missions that have been conducted over North America since 2006. The lidar measurements of aerosol intensive parameters (lidar ratio, depolarization, backscatter color ratio, and spectral depolarization ratio) are shown to vary with location and aerosol type. A methodology based on observations of known aerosol types is used to qualitatively classify the extensive set of HSRL aerosol measurements into eight separate types. Several examples are presented showing how the aerosol intensive parameters vary with aerosol type and how these aerosols are classified according to this new methodology. The HSRL-based classification reveals vertical variability of aerosol types during the NASA ARCTAS field experiment conducted over Alaska and northwest Canada during 2008. In two examples derived from flights conducted during ARCTAS, the HSRL classification of biomass burning smoke is shown to be consistent with aerosol types derived from coincident airborne in situ measurements of particle size and composition. The HSRL retrievals of AOT and inferences of aerosol types are used to apportion AOT to aerosol type; results of this analysis are shown for several experiments.

  13. Dielectric Barrier Discharge (DBD) Plasma Actuators Thrust-Measurement Methodology Incorporating New Anti-Thrust Hypothesis

    NASA Technical Reports Server (NTRS)

    Ashpis, David E.; Laun, Matthew C.

    2014-01-01

    We discuss thrust measurements of Dielectric Barrier Discharge (DBD) plasma actuator devices used for aerodynamic active flow control. After a review of our experience with conventional thrust measurement and the significant non-repeatability of its results, we devised a suspended-actuator test setup and now present a methodology of thrust measurement with decreased uncertainty. The methodology consists of frequency scans at constant voltages: the frequency is increased step-wise from several Hz to the maximum frequency of several kHz, then decreased back down to the starting frequency. This sequence is performed first at the highest voltage of interest, then repeated at lower voltages. The data in the descending-frequency direction are more consistent and are selected for reporting. Sample results show a strong dependence of thrust on humidity, which also affects the consistency and fluctuations of the measurements. We also observed negative values of thrust, or "anti-thrust", at low frequencies from 4 Hz up to 64 Hz. The anti-thrust is proportional to the mean-squared voltage and is frequency independent. Departures from the parabolic anti-thrust curve are correlated with the appearance of visible plasma discharges. We propose the anti-thrust hypothesis: the measured thrust is the sum of plasma thrust and anti-thrust, and the anti-thrust is assumed to exist at all frequencies and voltages. The anti-thrust depends on actuator geometry and materials and on the test installation. The hypothesis enables the separation of the plasma thrust from the measured total thrust, and thus more meaningful comparisons between actuators at different installations and laboratories. The dependence on test installation was validated by surrounding the actuator with a large-diameter, grounded metal sleeve.
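    The hypothesis is easy to state numerically: estimate the frequency-independent anti-thrust coefficient from low-frequency points where only anti-thrust is present, then subtract k·V² everywhere. All numbers below are synthetic, not measured data:

```python
import numpy as np

V = np.array([8.0, 10.0, 12.0])     # rms voltage, kV (invented)
k_true = -0.05                      # mN/kV^2 anti-thrust coefficient (invented)

# Low-frequency (<~64 Hz) measurements: anti-thrust only, T = k * V^2.
T_low = k_true * V**2
# Least-squares fit of T = k * V^2 through the origin.
k_est = np.sum(T_low * V**2) / np.sum(V**4)

# High-frequency measurements: plasma thrust plus the same anti-thrust.
T_plasma_true = np.array([1.0, 2.2, 4.0])      # invented plasma thrust, mN
T_high = T_plasma_true + k_true * V**2

# Separation per the hypothesis: plasma thrust = measured - k * V^2.
T_plasma = T_high - k_est * V**2
```

    With real data the low-frequency fit would carry scatter, but the subtraction step, and the claim that k transfers across frequencies for a given installation, is exactly what the hypothesis asserts.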

  14. Don't fear 'fear conditioning': Methodological considerations for the design and analysis of studies on human fear acquisition, extinction, and return of fear.

    PubMed

    Lonsdorf, Tina B; Menz, Mareike M; Andreatta, Marta; Fullana, Miguel A; Golkar, Armita; Haaker, Jan; Heitland, Ivo; Hermann, Andrea; Kuhn, Manuel; Kruse, Onno; Drexler, Shira Meir; Meulders, Ann; Nees, Frauke; Pittig, Andre; Richter, Jan; Römer, Sonja; Shiban, Youssef; Schmitz, Anja; Straube, Benjamin; Vervliet, Bram; Wendt, Julia; Baas, Johanna M P; Merz, Christian J

    2017-03-03

    The so-called 'replicability crisis' has sparked methodological discussions in many areas of science in general, and in psychology in particular. This has led to recent endeavours to promote the transparency, rigour, and ultimately, replicability of research. Originating from this zeitgeist, the challenge to discuss critical issues on terminology, design, methods, and analysis considerations in fear conditioning research is taken up by this work, which involved representatives from fourteen of the major human fear conditioning laboratories in Europe. This compendium is intended to provide a basis for the development of a common procedural and terminology framework for the field of human fear conditioning. Whenever possible, we give general recommendations. When this is not feasible, we provide evidence-based guidance for methodological decisions on study design, outcome measures, and analyses. Importantly, this work is also intended to raise awareness and initiate discussions on crucial questions with respect to data collection, processing, statistical analyses, the impact of subtle procedural changes, and data reporting specifically tailored to the research on fear conditioning.

  15. Technical Guide for "Measuring Up 2006": Documenting Methodology, Indicators, and Data Sources. National Center #06-6

    ERIC Educational Resources Information Center

    National Center for Public Policy and Higher Education, 2006

    2006-01-01

    This "Technical Guide" describes the methodology and concepts used to measure and grade the performance of the 50 states in the higher education arena. Part I presents the methodology for grading states and provides information on data collection and reporting. Part II explains the indicators that comprise each of the graded categories.…

  16. A method of measuring dynamic strain under electromagnetic forming conditions.

    PubMed

    Chen, Jinling; Xi, Xuekui; Wang, Sijun; Lu, Jun; Guo, Chenglong; Wang, Wenquan; Liu, Enke; Wang, Wenhong; Liu, Lin; Wu, Guangheng

    2016-04-01

    Dynamic strain measurement is important for the characterization of mechanical behavior in the electromagnetic forming process, but it has been hindered for years by high strain rates and serious electromagnetic interference. In this work, a simple and effective strain measuring technique for physical and mechanical behavior studies in the electromagnetic forming process has been developed. High-resolution (∼5 ppm) strain curves of a bulging aluminum tube in a pulsed electromagnetic field have been successfully measured using this technique. The measured strain rate is about 10⁵ s⁻¹, depending on the discharging conditions, nearly one order of magnitude higher than that under conventional split Hopkinson pressure bar loading conditions (∼10⁴ s⁻¹). It has been found that the dynamic fracture toughness of an aluminum alloy is significantly enhanced during electromagnetic forming, which explains why the formability is much larger under electromagnetic forming conditions than in conventional forming processes.

  17. Absolute Radiation Measurements in Earth and Mars Entry Conditions

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.

    2014-01-01

    This paper reports on the measurement of radiative heating for shock-heated flows which simulate conditions for Mars and Earth entries. Radiation measurements are made in NASA Ames' Electric Arc Shock Tube at velocities from 3-15 km/s in mixtures of N2/O2 and CO2/N2/Ar. The technique and limitations of the measurement are summarized in some detail. The absolute measurements are discussed in terms of spectral features, radiative magnitude and spatiotemporal trends. Via analysis of the spectra it is possible to extract properties such as electron density and rotational, vibrational and electronic temperatures. Relaxation behind the shock is analyzed to determine how these properties relax to equilibrium, and the results are used to validate and refine kinetic models. It is found that, for some conditions, some of these values do not relax to equilibrium, indicating a lack of similarity between the shock tube and free-flight conditions. Possible reasons for this are discussed.

  18. Measurement of Lubricating Condition between Swashplate and Shoe in Swashplate Compressors under Practical Operating Conditions

    NASA Astrophysics Data System (ADS)

    Suzuki, Hisashi; Fukuta, Mitsuhiro; Yanagisawa, Tadashi

    In this paper, the lubricating conditions between the swashplate and the shoe in a swashplate compressor for automotive air conditioners are investigated experimentally. The conditions are measured with an electric resistance method that utilizes the swashplate and the shoe as electrodes. The instrumented compressor is connected to an experimental cycle with R134a and operated under various conditions of pressure and rotational speed. An improved measurement technique, together with the application of a solid contact ratio to the measurement results, makes it possible to measure the lubricating condition at high rotational speed (more than 8000 rpm) and to predict the occurrence of scuffing between the swashplate and the shoe, and therefore enables a detailed study of the lubricating characteristics. The measurements show that the voltage of the contact signal decreases, indicating a better lubricating condition, with decreasing compression pressure and with increasing rotational speed from 1000 rpm through 5000 rpm. The lubricating condition tends to worsen above 5000 rpm. Furthermore, it is confirmed that the lubricating condition under transient operation is clearly worse than that under steady-state operation.
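    The solid contact ratio used here can be sketched as a simple threshold statistic on the contact-signal voltage: metallic contact shorts the electrodes, so low voltage samples indicate asperity contact. The threshold and the trace below are invented:

```python
import numpy as np

def contact_ratio(voltage, v_contact=0.2):
    """Fraction of samples below the contact-voltage threshold
    (assumed value), i.e. the solid contact ratio."""
    voltage = np.asarray(voltage, dtype=float)
    return float(np.mean(voltage < v_contact))

# Synthetic trace: mostly full-film lubrication (~1 V) with brief
# asperity contacts dropping the measured voltage to ~0.05 V.
rng = np.random.default_rng(3)
v = np.full(10_000, 1.0)
contacts = rng.choice(10_000, size=500, replace=False)
v[contacts] = 0.05
ratio = contact_ratio(v)
```

    Tracking this ratio against pressure and rotational speed gives the same kind of trend curves the abstract reports, and a rising ratio flags incipient scuffing.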

  19. Design and Verification Methodology of Boundary Conditions for Finite Volume Schemes

    DTIC Science & Technology

    2012-07-01

    and Grossman advocate a curvature-corrected symmetry condition for an inviscid wall [3]. Balakrishnan and Fernandez advocate a variety of other methods...boundary source terms is straightforward, generally requiring much less algebraic manipulation than interior source terms. A number of test cases...Meeting, Reno, NV, January 1986. [3] A. Dadone and B. Grossman. Surface boundary conditions for the numerical solution of the Euler equations. AIAA

  20. Financial Measures Project: Measuring Financial Conditions of Colleges and Universities, 1978 Working Conference.

    ERIC Educational Resources Information Center

    Coldren, Sharon L., Ed.

    Papers are presented from a 1978 working conference on measuring financial conditions of colleges and universities. Contents include the following: "The Federal Government's Interest in the Development of Financial Measures" by M. Chandler; "Improving the Conceptual Framework for Measuring Financial Condition Using Institutional…

  1. Deception detection with behavioral, autonomic, and neural measures: Conceptual and methodological considerations that warrant modesty.

    PubMed

    Meijer, Ewout H; Verschuere, Bruno; Gamer, Matthias; Merckelbach, Harald; Ben-Shakhar, Gershon

    2016-05-01

    The detection of deception has attracted increased attention among psychological researchers, legal scholars, and ethicists during the last decade. Much of this has been driven by the possibility of using neuroimaging techniques for lie detection. Yet, neuroimaging studies addressing deception detection are clouded by a lack of conceptual clarity and a host of methodological problems that are not unique to neuroimaging. We review the various research paradigms and the dependent measures that have been adopted to study deception and its detection. In doing so, we differentiate between basic research designed to shed light on the neurocognitive mechanisms underlying deceptive behavior and applied research aimed at detecting lies. We also stress the distinction between paradigms attempting to detect deception directly and those attempting to establish involvement by detecting crime-related knowledge, and discuss the methodological difficulties and threats to validity associated with each paradigm. Our conclusion is that the main challenge for future research is to find paradigms that can isolate the cognitive factors associated with deception, rather than to discover a unique (brain) correlate of lying. We argue that the Comparison Question Test currently applied in many countries has weak scientific validity, which cannot be remedied by using neuroimaging measures. Other paradigms are promising, but the absence of data from ecologically valid studies poses a challenge for the legal admissibility of their outcomes.

  2. Effects of Specimen Collection Methodologies and Storage Conditions on the Short-Term Stability of Oral Microbiome Taxonomy

    PubMed Central

    Luo, Ting; Srinivasan, Usha; Ramadugu, Kirtana; Shedden, Kerby A.; Neiswanger, Katherine; Trumble, Erika; Li, Jiean J.; McNeil, Daniel W.; Crout, Richard J.; Weyant, Robert J.; Marazita, Mary L.

    2016-01-01

    , resources available at a study site, and shipping requirements. The research presented in this paper measures the effects of multiple storage parameters and collection methodologies on the measured ecology of the oral microbiome from healthy adults and children. These results will potentially enable investigators to conduct oral microbiome studies at maximal efficiency by guiding informed administrative decisions pertaining to the necessary field or clinical work. PMID:27371581

  3. Advanced methodology for measuring the extensive elastic compliance and mechanical loss directly in k31 mode piezoelectric ceramic plates

    NASA Astrophysics Data System (ADS)

    Majzoubi, Maryam; Shekhani, Husain N.; Bansal, Anushka; Hennig, Eberhard; Scholehwar, Timo; Uchino, Kenji

    2016-12-01

    Dielectric, elastic, and piezoelectric constants, and their corresponding losses, are defined under constant conditions of two categories: intensive (i.e., E, electric field, or T, stress) and extensive (i.e., D, dielectric displacement, or x, strain). So far, only the intensive parameters and losses could be measured directly in a k31 mode sample. Their corresponding extensive parameters could be calculated indirectly using the coupling factor and "K" matrix. However, the extensive loss parameters calculated through this indirect method can have large uncertainty due to error propagation in the calculation. To overcome this issue, extensive losses should be measured separately from the measurable intensive ones in lead zirconate titanate (PZT) k31 mode rectangular plate ceramics. We propose a new mechanical-excitation methodology, using a non-destructive testing approach based on a partial-electrode configuration instead of the conventional full-electrode configuration. For this purpose, a sample was prepared in which the electrode covered only 10% of the top and bottom surfaces at the center, both to actuate the whole sample and to monitor the resulting vibration. The admittance spectrum of this sample corresponds to PZT properties under the constant dielectric displacement (D) condition. Furthermore, ceramics with partial electrodes were also prepared to create short- and open-circuit boundary conditions, corresponding to the resonance and anti-resonance modes. In the proposed way, we were able to measure both intensive and extensive elastic compliances and mechanical losses directly for the first time. The accuracy of this new method is compared with that of the conventional measurements based on indirect calculations. The preliminary results (neglecting, at this point, the 10% actuator-part difference) were in good agreement (less than 3% difference) with the previous indirect method.

  4. Optimization of Culture Conditions for Fermentation of Soymilk Using Lactobacillus casei by Response Surface Methodology.

    PubMed

    Khoshayand, Feriyar; Goodarzi, Sanaz; Shahverdi, Ahmad Reza; Khoshayand, Mohammad Reza

    2011-12-01

    Soymilk was fermented with Lactobacillus casei, and a statistical experimental design was used to investigate factors affecting the viable cell count of L. casei, including temperature, glucose, niacin, riboflavin, pyridoxine, folic acid, and pantothenic acid. Initial screening by Plackett-Burman design revealed that, among these factors, temperature, glucose, and niacin have significant effects on the growth of L. casei. Further optimization with a Box-Behnken design and response surface analysis showed that a second-order polynomial model fits the experimental data appropriately. The optimum conditions for temperature, glucose, and niacin were found to be 15.77 °C, 5.23 g/L, and 0.63 g/L, respectively. The concentration of viable L. casei cells under these conditions was 8.23 log10 (CFU/mL). The close agreement between the observed values and those predicted by the model confirms its statistical significance and its adequate precision in predicting the optimum conditions.
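    The response-surface step described in this abstract amounts to fitting a full second-order polynomial to the design points and solving for the stationary point of the fitted surface. A minimal sketch in Python with NumPy, using synthetic illustrative data rather than the study's measurements (function names are illustrative):

    ```python
    import numpy as np

    def quadratic_design_matrix(X):
        """Columns: 1, x1..xk, x1^2..xk^2, then all pairwise cross terms."""
        n, k = X.shape
        cols = [np.ones(n)]
        cols += [X[:, i] for i in range(k)]
        cols += [X[:, i] ** 2 for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        return np.column_stack(cols)

    def fit_second_order(X, y):
        """Least-squares fit of the full second-order RSM polynomial."""
        A = quadratic_design_matrix(X)
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        return coef

    def stationary_point(coef, k):
        """Solve grad(y) = 0 for the fitted surface y = b0 + b.x + x'Bx."""
        b = coef[1:k + 1]                                   # linear terms
        B = np.diag(coef[k + 1:2 * k + 1]).astype(float)    # pure quadratic
        cross = coef[2 * k + 1:]
        idx = 0
        for i in range(k):
            for j in range(i + 1, k):
                B[i, j] = B[j, i] = cross[idx] / 2.0        # split cross terms
                idx += 1
        # gradient b + 2Bx = 0  =>  x* = solve(-2B, b)
        return np.linalg.solve(-2.0 * B, b)
    ```

    For a true maximum, the matrix of quadratic coefficients must be negative definite; in practice the stationary point is also checked against the experimental region and validated with confirmation runs, as done in the study.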

  5. Intercomparison of magnetic field measurements near MV/LV transformer substations: methodological planning and results.

    PubMed

    Violanti, S; Fraschetta, M; Adda, S; Caputo, E

    2009-12-01

    Within the framework of the Environmental Agencies system's activities, coordinated by ISPRA (the superior institute for environmental protection and research), an intercomparison of measurements was designed and carried out in order to examine measurement problems in depth and to evaluate the magnetic field at power frequencies. These measurements were taken near medium-voltage/low-voltage transformer substations. The project was developed with the contribution of several experts belonging to different Regional Agencies. In three of these regions, substations with specific international standard characteristics were chosen; a measurement and data-analysis protocol was then drawn up. Data analysis showed a good level of coherence among the results obtained by the different laboratories. However, a range of problems emerged, both during the preparation of the protocol and the definition of the data-analysis procedure and during the execution of the measurements and data reprocessing, because of the spatial and temporal variability of the magnetic field. These problems are of particular interest in defining a correct measurement methodology whose purpose is comparison with exposure limits, attention values, and quality targets.

  6. [Methodological issues in the measurement of alcohol consumption: the importance of drinking patterns].

    PubMed

    Valencia Martín, José L; González, M José; Galán, Iñaki

    2014-08-01

    Measurement of alcohol consumption is essential for proper investigation of its effects on health. However, its estimation is extremely complex, because of the diversity of forms of alcohol consumption and their highly heterogeneous classification. Moreover, each form may have different effects on health; therefore, not considering the most important drinking patterns when estimating alcohol intake could mask the important role of consumption patterns in these effects. All these issues make it very difficult to compare the results of different studies and to establish consistent associations for understanding the true effects of alcohol consumption, both overall and specific to each drinking pattern. This article reviews the main methods and sources of information available in Spain for estimating the most important aspects of alcohol consumption, as well as the most frequent methodological problems encountered in the measurement and classification of drinking patterns.

  7. [Methodology and Implementation of Forced Oscillation Technique for Respiratory Mechanics Measurement].

    PubMed

    Zhang, Zhengbo; Ni, Lu; Liu, Xiaoli; Li, Deyu; Wang, Weidong

    2015-11-01

    The forced oscillation technique (FOT) is a noninvasive method for respiratory mechanics measurement. In the FOT, external signals (e.g., forced oscillations at around 4-40 Hz) are used to drive the respiratory system, and the mechanical characteristics of the respiratory system can be determined using linear system identification theory. Thus, respiratory mechanical properties and components at different frequencies and locations in the airway can be explored with specifically designed forcing waveforms. In this paper, the theory, methodology, and clinical application of the FOT are reviewed, including measurement theory, driving signals, models of the respiratory system, algorithms for impedance identification, and requirements on the apparatus. Finally, the future development of this technique is also discussed.

  8. Advanced haptic sensor for measuring human skin conditions

    NASA Astrophysics Data System (ADS)

    Tsuchimi, Daisuke; Okuyama, Takeshi; Tanaka, Mami

    2009-12-01

    This paper is concerned with the development of a tactile sensor using PVDF (polyvinylidene fluoride) film as its sensory receptor to evaluate the softness, smoothness, and stickiness of human skin. Tactile sense is, along with eyesight, one of the most important senses of the human body, and we can assess skin condition quickly using it. However, its subjectivity and ambiguity make it difficult to quantify skin conditions. Therefore, a measurement device that can evaluate skin conditions easily and objectively is in demand by dermatologists, cosmetic industries, and others. In this paper, an advanced haptic sensor system that can measure multiple aspects of skin condition on various parts of the human body is developed. The application of the sensor system to evaluating the softness, smoothness, and stickiness of skin is investigated through two experiments.

  9. Advanced haptic sensor for measuring human skin conditions

    NASA Astrophysics Data System (ADS)

    Tsuchimi, Daisuke; Okuyama, Takeshi; Tanaka, Mami

    2010-01-01

    This paper is concerned with the development of a tactile sensor using PVDF (polyvinylidene fluoride) film as its sensory receptor to evaluate the softness, smoothness, and stickiness of human skin. Tactile sense is, along with eyesight, one of the most important senses of the human body, and we can assess skin condition quickly using it. However, its subjectivity and ambiguity make it difficult to quantify skin conditions. Therefore, a measurement device that can evaluate skin conditions easily and objectively is in demand by dermatologists, cosmetic industries, and others. In this paper, an advanced haptic sensor system that can measure multiple aspects of skin condition on various parts of the human body is developed. The application of the sensor system to evaluating the softness, smoothness, and stickiness of skin is investigated through two experiments.

  10. Using intensive indicators of wetland condition to evaluate a rapid assessment methodology in Oregon tidal wetlands

    EPA Science Inventory

    In 2011, the US EPA and its partners will conduct the first-ever national survey on the condition of the Nation’s wetlands. This survey will utilize a three-tiered assessment approach that includes landscape level indicators (Level 1), rapid indicators (Level 2), and intensive, ...

  11. Hyper-X Mach 10 Engine Flowpath Development: Fifth Entry Test Conditions and Methodology

    NASA Technical Reports Server (NTRS)

    Bakos, R. J.; Tsai, C.-Y.; Rogers, R. C.; Shih, A. T.

    2001-01-01

    A series of Hyper-X Mach 10 flowpath ground tests are underway to obtain engine performance and operation data and to confirm and refine the flowpath design methods. The model used is a full-scale height, partial-width replica of the Hyper-X Research Vehicle propulsive flowpath with truncated forebody and aftbody. This is the fifth test entry for this model in the NASA-HYPULSE facility at GASL. For this entry the facility nozzle and model forebody were modified to better simulate the engine inflow conditions at the target flight conditions. The forebody was modified to be a wide flat plate with no flow fences, the facility nozzle Mach number was increased, and the model was positioned to be tested in a semi-direct-connect arrangement. This paper presents a review of the test conditions, model calibrations, and a description of steady flow confirmation. The test series included runs using hydrogen fuel, and a silane-in-hydrogen fuel mixture. Other test parameters included the model mounting angle (relative to the tunnel flow), and the test gas oxygen fraction to account for the presence of [NO] in the test gas at the M10 conditions.

  12. Noncontacting measurement technologies for space propulsion condition monitoring

    NASA Technical Reports Server (NTRS)

    Randall, M. R.; Barkhoudarian, S.; Collins, J. J.; Schwartzbart, A.

    1987-01-01

    This paper describes four noncontacting measurement technologies that can be used in a turbopump condition monitoring system. The isotope wear analyzer, fiberoptic deflectometer, brushless torque-meter, and fiberoptic pyrometer can be used to monitor component wear, bearing degradation, instantaneous shaft torque, and turbine blade cracking, respectively. A complete turbopump condition monitoring system including these four technologies could predict remaining component life, thus reducing engine operating costs and increasing reliability.

  13. Measuring Pavlovian fear with conditioned freezing and conditioned suppression reveals different roles for the basolateral amygdala.

    PubMed

    McDannald, Michael A; Galarce, Ezequiel M

    2011-02-16

    In Pavlovian fear conditioning, pairing a neutral cue with aversive foot shock endows a cue with fear-eliciting properties. Studies of Pavlovian fear conditioning measuring freezing have demonstrated the basolateral amygdala (BLA) to be critical to both fear learning and memory. The nucleus accumbens core (NAc), while not important to freezing, is important to the enhancement of instrumental responding by cues paired with food reward. In the present study we investigated the role of the BLA and the NAc in another property of fear cues, the ability to suppress instrumental responding for food rewards (conditioned suppression). Sham, BLA and NAc-lesioned rats received a fear discrimination procedure in which one visual cue (CS+) predicted foot shock while a second cue (CS-) did not. Conditioning took place over a baseline of instrumental responding, allowing for concurrent measure of freezing and instrumental suppression. NAc lesions left fear conditioning fully intact. BLA lesions impaired acquisition and discrimination of fear when assessed with conditioned freezing. However, BLA lesions only altered fear acquisition and left discrimination completely intact when assessed with conditioned suppression. These findings suggest a critical role for the BLA in fear when assessed with conditioned freezing but a diminished role when assessed with conditioned suppression.

  14. Multispectroscopic methodology to study Libyan desert glass and its formation conditions.

    PubMed

    Gomez-Nubla, Leticia; Aramendia, Julene; Fdez-Ortiz de Vallejuelo, Silvia; Alonso-Olazabal, Ainhoa; Castro, Kepa; Zuluaga, Maria Cruz; Ortega, Luis Ángel; Murelaga, Xabier; Madariaga, Juan Manuel

    2017-03-27

    Libyan desert glass (LDG) is a melt product whose origin is still a matter of controversy. With the purpose of adding new information about this enigma, the present paper analyzes the inner part of LDG specimens and compares the results with those from LDG surfaces. An integrated analytical methodology was used, combining techniques such as Raman spectroscopy in point-by-point and imaging modes, scanning electron microscopy with X-ray microanalysis (SEM-EDS), energy-dispersive micro X-ray fluorescence spectrometry (μ-EDXRF), electron probe microanalysis (EPMA), and optical cathodoluminescence (optical-CL). According to our results, flow structures of the melt and the amorphous nature of the matrix could be discerned. Moreover, the observed displacement of Raman bands, as in the cases of quartz and zircon, and the identification of certain compounds such as coesite (the clearest indicator of high pressure), α-cristobalite, gypsum, anhydrite, corundum, rutile, amorphous calcite, aragonite, and calcite indicate that the LDGs could have been subjected to shock pressures between 6 and more than 30 GPa and temperatures between 300 and 1470 °C. The differences in temperature and pressure would have been produced by different cooling processes during the impact. Besides, in most cases the minerals corresponding to high pressures and temperatures were located in the inner part of the LDGs, with some exceptions that could be explained if they were trapped after the impact, if there was more than one impact, or if the cooling was heterogeneous. Furthermore, nitrogen and oxygen gases were identified inside bubbles; these could have been introduced from the terrestrial atmosphere during the meteorite impact. These data help to clarify some clues about the origin of these enigmatic samples.

  15. Conditions of viscosity measurement for detecting irradiated peppers

    NASA Astrophysics Data System (ADS)

    Hayashi, Toru; Todoriki, Setsuko; Okadome, Hiroshi; Kohyama, Kaoru

    1995-04-01

    The viscosity of gelatinized suspensions of black and white peppers decreased with dose. The viscosity was influenced by the gelatinization and viscosity-measurement conditions. The difference between unirradiated and irradiated pepper was larger at higher pH and gelatinization temperature. A viscosity parameter normalized to the starch content of the pepper sample and to the viscosity of a 5% corn starch suspension eliminated the influence of the viscosity-measurement conditions, such as the type of viscometer, shear rate, and temperature.

  16. Radiation measurements during cavities conditioning on APS RF test stand

    SciTech Connect

    Grudzien, D.M.; Kustom, R.L.; Moe, H.J.; Song, J.J.

    1993-07-01

    In order to determine the shielding structure around the Advanced Photon Source (APS) synchrotron and storage ring RF stations, the X-ray radiation has been measured in the near field and far field regions of the RF cavities during the normal conditioning process. Two cavity types, a prototype 352-MHz single-cell cavity and a 352-MHz five-cell cavity, are used on the APS and are conditioned in the RF test stand. Vacuum measurements are also taken on a prototype 352-MHz single-cell cavity and a 352-MHz five-cell cavity. The data will be compared with data on the five-cell cavities from CERN.

  17. Methodology to measure fundamental performance parameters of x-ray detectors

    NASA Astrophysics Data System (ADS)

    Busse, Falko; Ruetten, Walter; Wischmann, Hans-Aloys; Geiger, Bernhard; Spahn, Martin F.; Bastiaens, Raoul J. M.; Ducourant, Thierry

    2001-06-01

    To judge the potential benefit of a new x-ray detector technology and to be able to compare different technologies, some standard performance measurements must be defined. In addition to technology-related parameters which may influence weight, shape, image distortions and readout speed, there are fundamental performance parameters which directly influence the achievable image quality and dose efficiency of x-ray detectors. A standardization activity for detective quantum efficiency (DQE) for static detectors is already in progress. In this paper we present a methodology for noise power spectrum (NPS), low frequency drop (LFD) and signal to electronic noise ratio (SENR), and the influence of these parameters on DQE. The individual measurement methods are described in detail with their theoretical background and experimental procedure. Corresponding technical phantoms have been developed. The design of the measurement methods and technical phantoms is tuned so that only minimum requirements are placed on the detector properties. The measurement methods can therefore be applied to both static and dynamic x-ray systems. Measurement results from flat panel imagers and II/TV systems are presented.
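    As an illustration of the NPS component discussed above, the usual estimator averages the squared magnitude of the 2-D FFT of mean-subtracted flat-field regions of interest and scales by pixel area over ROI size. A minimal sketch, assuming an ensemble of uniform-exposure ROIs (the function name and the zeroth-order detrending are illustrative; published NPS protocols typically also subtract low-order polynomial trends and average radially):

    ```python
    import numpy as np

    def noise_power_spectrum(rois, pixel_pitch_mm):
        """2-D NPS estimate from an ensemble of flat-field ROIs.

        rois: array of shape (n_rois, ny, nx), uniform-exposure patches.
        Returns the ensemble-averaged NPS in units of (signal^2 * mm^2).
        """
        rois = np.asarray(rois, dtype=float)
        n, ny, nx = rois.shape
        # Remove the mean from each ROI so only noise fluctuations
        # contribute to the spectrum.
        fluct = rois - rois.mean(axis=(1, 2), keepdims=True)
        spectra = np.abs(np.fft.fft2(fluct)) ** 2
        return spectra.mean(axis=0) * pixel_pitch_mm ** 2 / (nx * ny)
    ```

    For uncorrelated noise of variance σ², this estimate is flat with mean ≈ σ²·Δx·Δy, which is a convenient sanity check before measuring a real detector.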

  18. A new metric for measuring condition in large predatory sharks.

    PubMed

    Irschick, D J; Hammerschlag, N

    2014-09-01

    A simple metric (span condition analysis; SCA) is presented for quantifying the condition of sharks based on four measurements of body girth relative to body length. Data on 104 live sharks from four species that vary in body form, behaviour and habitat use (Carcharhinus leucas, Carcharhinus limbatus, Ginglymostoma cirratum and Galeocerdo cuvier) are given. Condition shows similar levels of variability among individuals within each species. Carcharhinus leucas showed a positive relationship between condition and body size, whereas the other three species showed no relationship. There was little evidence for strong differences in condition between males and females, although more male sharks are needed for some species (e.g. G. cuvier) to verify this finding. SCA is potentially viable for other large marine or terrestrial animals that are captured live and then released.

  19. Methodological approach towards the definition of new storage conditions for inert wastes.

    PubMed

    Perrodin, Y; Méhu, J; Grelier-Volatier, L; Charbonnier, P; Baranger, P; Thoraval, L

    2002-01-01

    In 1997, the French Ministry of the Environment launched studies aiming to define a specific regulation for inert waste disposal in order to limit the potential impact of such facilities on the environment by fixing minimum requirements. A coupled chemical/hydrodynamic model was developed to determine dumping conditions. This model was then applied to two defined scenarios (landfill surface, effective rainfall...) in order to study sulphate concentrations in the aquifer system immediately downstream from the storage facility. The results allow us to determine under which conditions the sulphate concentrations are compatible with the potentially drinkable character of the groundwater. These conditions concern, more specifically, the nature of the waste disposed of, the effective rainfall, and the landfill area.

  20. Analytical methodology for the study of endosulfan bioremediation under controlled conditions with white rot fungi.

    PubMed

    Rivero, Anisleidy; Niell, Silvina; Cesio, Verónica; Cerdeiras, M Pía; Heinzen, Horacio

    2012-10-15

    A general procedure to study the biodegradation of endosulfan under laboratory conditions by white rot fungi, isolated from native sources and grown in YNB (yeast nitrogen base) medium with 1% glucose, is presented. The evaluation of endosulfan biodegradation, as well as of endosulfan sulfate, endosulfan ether, and endosulfan alcohol production throughout the whole bioremediation process, was performed using an original and straightforward validated analytical procedure, with recoveries between 78 and 112% at all concentration levels studied, except for endosulfan sulfate at 50 mg L(-1), which yielded 128%, and RSDs < 20%. Under the developed conditions, the basidiomycete Bjerkandera adusta was able to degrade 83% of (alpha+beta) endosulfan after 27 days; 6 mg kg(-1) of endosulfan diol was determined, and endosulfan ether and endosulfan sulfate were produced below 1 mg kg(-1) (LOQ, limit of quantitation).

  1. Optimization of fermentation conditions for 1,3-propanediol production by marine Klebsiella pneumonia HSL4 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Li, Lili; Zhou, Sheng; Ji, Huasong; Gao, Ren; Qin, Qiwei

    2014-09-01

    The industrially important organic compound 1,3-propanediol (1,3-PDO) is mainly used as a building block for the production of various polymers. In the present study, a response surface methodology protocol was followed to determine and optimize the fermentation conditions for maximum production of 1,3-PDO using the marine-derived Klebsiella pneumoniae HSL4. Four nutritional supplements together with three independent culture conditions were optimized as follows: 29.3 g/L glycerol, 8.0 g/L K2HPO4, 7.6 g/L (NH4)2SO4, 3.0 g/L KH2PO4, pH 7.1, and cultivation at 35°C for 12 h. Under the optimal conditions, a maximum 1,3-PDO concentration of 14.5 g/L, a productivity of 1.21 g/(L·h), and a glycerol conversion of 0.49 g/g were obtained. In comparison with the control conditions, fermentation under the optimized conditions achieved increases of 38.8% in 1,3-PDO concentration, 39.0% in productivity, and 25.7% in glycerol conversion in flask culture. This enhancement was further confirmed when the fermentation was conducted in a 5-L fermentor. The optimized fermentation conditions could form an important basis for developing low-cost, large-scale methods for the industrial production of 1,3-PDO in the future.

  2. A smartphone-driven methodology for estimating physical activities and energy expenditure in free living conditions.

    PubMed

    Guidoux, Romain; Duclos, Martine; Fleury, Gérard; Lacomme, Philippe; Lamaudière, Nicolas; Manenq, Pierre-Henri; Paris, Ludivine; Ren, Libo; Rousset, Sylvie

    2014-12-01

    This paper introduces a function dedicated to the estimation of the total energy expenditure (TEE) of daily activities based on data from accelerometers integrated into smartphones. The use of mass-market sensors such as accelerometers offers a promising solution for the general public due to the growth of the smartphone market over the last decade. The quality of the TEE estimation function was evaluated using data from intensive numerical experiments based, first, on 12 volunteers equipped with a smartphone and two research sensors (Armband and Actiheart) in controlled conditions (CC) and, then, on 30 other volunteers in free-living conditions (FLC). The TEE given by these two sensors in both conditions, and that estimated from metabolic equivalent tasks (MET) in CC, served as references during the creation and evaluation of the function. The mean absolute gap in TEE between the function and the three references was 7.0%, 16.4%, and 2.7% in CC, and 17.0% and 23.7% relative to Armband and Actiheart, respectively, in FLC. This is the first step in the definition of a new feedback mechanism that promotes self-management and daily-efficiency evaluation of physical activity as part of an information system dedicated to the prevention of chronic diseases.

  3. Comparison of methodologies in determining bone marrow fat percentage under different environmental conditions.

    PubMed

    Murden, David; Hunnam, Jaimie; De Groef, Bert; Rawlin, Grant; McCowan, Christina

    2017-01-01

    The use of bone marrow fat percentage has been recommended in assessing body condition at the time of death in wild and domestic ruminants, but few studies have looked at the effects of time and exposure on animal bone marrow. We investigated the utility of bone marrow fat extraction as a tool for establishing antemortem body condition in postmortem specimens from sheep and cattle, particularly after exposure to high heat, and compared different techniques of fat extraction for this purpose. Femora were collected from healthy and "skinny" sheep and cattle. The bones were either frozen or subjected to 40°C heat; heated bones were either wrapped in plastic to minimize desiccation or were left unwrapped. Marrow fat percentage was determined at different time intervals by oven-drying, or by solvent extraction using hexane in manual equipment or a Soxhlet apparatus. Extraction was performed, where possible, on both wet and dried tissue. Multiple samples were tested from each bone. Bone marrow fat analysis using a manual, hexane-based extraction technique was found to be a moderately sensitive method of assessing antemortem body condition of cattle up to 6 d after death. Multiple replicates should be analyzed where possible. Samples from "skinny" sheep showed a different response to heat from those of "healthy" sheep; "skinny" samples were so reduced in quantity by day 6 (the first sampling day) that no individual testing could be performed. Further work is required to understand the response of sheep marrow.

  4. Mass measuring instrument for use under microgravity conditions

    SciTech Connect

    Fujii, Yusaku; Yokota, Masayuki; Hashimoto, Seiji; Sugita, Yoichi; Ito, Hitomi; Shimada, Kazuhito

    2008-05-15

    A prototype instrument for measuring astronaut body mass under microgravity conditions has been developed, and its performance was evaluated in parabolic flight tests. The instrument, referred to as the space scale, is used as follows. The subject astronaut is connected to the space scale with a rubber cord. A force transducer measures the force acting on the subject, and an optical interferometer measures the subject's velocity. The subject's mass is calculated as the impulse divided by the velocity change, i.e., M = ∫F dt / Δv. Parabolic flight using a jet aircraft produces a zero-gravity condition lasting approximately 20 s. The performance of the prototype space scale was evaluated during such a flight by measuring the mass of a sample object.
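    The mass determination described here, M = ∫F dt / Δv, reduces to numerically integrating the force-transducer record and dividing by the interferometer-measured velocity change. A minimal sketch (illustrative names and data, not the flight instrument's software):

    ```python
    import numpy as np

    def mass_from_impulse(force_n, time_s, delta_v):
        """Mass estimate M = impulse / velocity change = ∫F dt / Δv.

        force_n: sampled force-transducer readings (N)
        time_s:  matching sample times (s)
        delta_v: velocity change over the pull, from the interferometer (m/s)
        """
        force_n = np.asarray(force_n, dtype=float)
        time_s = np.asarray(time_s, dtype=float)
        # Trapezoidal rule for the impulse ∫F dt
        impulse = float(np.sum(0.5 * (force_n[1:] + force_n[:-1]) * np.diff(time_s)))
        return impulse / delta_v
    ```

    For example, a constant 10 N pull sustained for 2 s (impulse 20 N·s) with a measured Δv of 4 m/s yields a mass of 5 kg.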

  5. An unstructured direct simulation Monte Carlo methodology with Kinetic-Moment inflow and outflow boundary conditions

    NASA Astrophysics Data System (ADS)

    Gatsonis, Nikolaos A.; Chamberlin, Ryan E.; Averkin, Sergey N.

    2013-01-01

    The mathematical and computational aspects of the direct simulation Monte Carlo method on unstructured tetrahedral grids (U3DSMC) with a Kinetic-Moment (KM) boundary-condition method are presented. The algorithms for particle injection, particle loading, particle motion, and particle tracking are described. The KM method, applicable to subsonic or supersonic inflow/outflow boundaries, couples kinetic (particle) U3DSMC properties with fluid (moment) properties. The KM method obtains the number density, temperature, and mean velocity needed to define the equilibrium drifting Maxwellian distribution at a boundary. The moment component of KM is based on the local one-dimensional inviscid (LODI) boundary-condition method, consistent with the 5-moment compressible Euler equations. The kinetic component of KM is based on U3DSMC for interior properties and on the equilibrium drifting Maxwellian at the boundary. The KM method is supplemented with a time-averaging procedure, allows for choices in sampling-cell procedures, minimizes fluctuations, and accelerates convergence in subsonic flows. Collision sampling in U3DSMC implements the no-time-counter method and includes elastic and inelastic collisions. The U3DSMC method with KM boundary conditions is validated and verified extensively with simulations of subsonic nitrogen flows in a cylindrical tube with imposed inlet pressure and density and imposed outlet pressure. The simulations cover the regime from slip to free-molecular flow, with inlet Knudsen numbers between 0.183 and 18.27 and resulting inlet Mach numbers between 0.037 and 0.027. The pressure and velocity profiles from U3DSMC-KM simulations are compared with analytical solutions obtained from first-order and second-order slip boundary conditions. Mass flow rates from U3DSMC-KM are compared with validated analytical solutions for the entire Knudsen number regime considered. Error and sensitivity analysis is performed, and the numerical fractional errors are in agreement with theoretical
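    The kinetic half of the KM boundary treatment ultimately loads simulator particles from the equilibrium drifting Maxwellian defined by the boundary moments (n, u, T). Sampling that distribution is straightforward, since each Cartesian velocity component is Gaussian with standard deviation sqrt(kT/m) about the drift velocity. A minimal sketch (not the U3DSMC code itself; flux-weighted injection across a boundary face requires an additional acceptance step not shown):

    ```python
    import numpy as np

    KB = 1.380649e-23  # Boltzmann constant, J/K

    def sample_drifting_maxwellian(n, temperature, mass, drift, rng=None):
        """Draw n particle velocities from a drifting Maxwellian.

        Each Cartesian component is Gaussian with standard deviation
        sqrt(kT/m), centered on the mean (drift) velocity, which is how
        a KM-style boundary would load particles once the moments at
        the boundary are known.
        """
        rng = np.random.default_rng() if rng is None else rng
        sigma = np.sqrt(KB * temperature / mass)
        return rng.normal(loc=np.asarray(drift, dtype=float),
                          scale=sigma, size=(n, 3))
    ```

    For molecular nitrogen at 300 K the thermal speed scale sqrt(kT/m) is roughly 300 m/s, so the sampled component means and standard deviations can be checked directly against the imposed moments.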

  6. Design of measurement methodology for the evaluation of human exposure to vibration in residential environments.

    PubMed

    Sica, G; Peris, E; Woodcock, J S; Moorhouse, A T; Waddington, D C

    2014-06-01

    Exposure-response relationships are important tools for policy makers to assess the impact of an environmental stressor on the populace. Their validity lies partly in their statistical strength, which is greatly influenced by the size of the sample from which the relationship is derived. As such, the derivation of meaningful exposure-response relationships requires estimates of vibration exposure at a large number of receiver locations. In the United Kingdom a socio-vibrational survey has been conducted with the aim of deriving exposure-response relationships for annoyance due to vibration from (a) railway traffic and (b) the construction of a new light rail system. Response to vibration was measured via a questionnaire conducted face-to-face with residents in their own homes, and vibration exposure was estimated using data from a novel measurement methodology. In total, 1281 questionnaires were conducted: 931 for vibration from railway traffic and 350 for vibration from construction sources. Considering the interdisciplinary nature of this work along with the volume of experimental data required, a number of significant technical and logistical challenges needed to be overcome in the planning and implementation of the fieldwork. Four of these challenges are considered in this paper: identifying sites that provide a robust sample of the affected residents, the strategy used for measuring exposure, the strategy used for measuring response, and the coordination between the teams carrying out the social survey and the vibration measurements.

  7. Measurement of apolipoproteins B and A by radial immunodiffusion: methodological assessment and clinical applications.

    PubMed

    Cano, M D; Gonzalvo, C; Scheen, A J; Castillo, M J

    1994-01-01

    The clinical evaluation of apolipoproteins is of interest in order to characterize the risk profile for ischemic heart disease in both normolipidemic and hyperlipidemic subjects. In the non-specialized and/or small clinical laboratory, the measurement of some apolipoproteins can be undertaken by simple methods of immunological analysis, among which radial immunodiffusion is of interest because of its simplicity of use and because it does not require specific equipment. In this work several methodological questions concerning the measurement of plasma apolipoproteins B and A by radial immunodiffusion have been addressed; the results show that this method is particularly reliable for the apo B assay. Regression analysis between values obtained with radial immunodiffusion and radioimmunoassay yielded r = 0.972 for apo B and r = 0.782 for apo A. The recovery rate was above 90% for both apolipoproteins (93.8% for apo B and 99.5% for apo A). The inter- and intra-assay coefficients of variation were below 5%, and the detection limits were estimated as 9.6 mg/dl for apo A and 6.9 mg/dl for apo B. Neither the ingestion of a standard breakfast (500 Cal, 17 g fat, 120 mg cholesterol) 2 h prior to testing nor freezing the sample significantly affected the measurement of apolipoproteins B and A. Mean plasma concentrations of both apolipoproteins measured by radial immunodiffusion in normo- and hyperlipidemic subjects are also presented.

  8. Chlorophyll-a determination via continuous measurement of plankton fluorescence: methodology development.

    PubMed

    Pinto, A M; Von Sperling, E; Moreira, R M

    2001-11-01

    A methodology is presented for the continuous measurement of chlorophyll-a concentration due to plankton in surface water environments. A Turner 10-AU fluorometer equipped with the F4T5.B2/BP lamp (blue lamp), a Cs 5-60 equivalent excitation path filter, and a 680 nm emission filter was used. This configuration allows the in vivo, in situ determination of chlorophyll-a by measuring the fluorescence due to the pigments. In field work the fluorometer, data logging and positioning equipment were placed aboard a manageable boat which navigated following a scheme of regularly spaced crossings. Some water samples were collected during the measurement for laboratory chlorophyll-a determination by the spectrophotometric method, thus providing for calibration and comparison. Spatial chlorophyll-a concentration distributions can be easily defined in large volumes such as reservoirs. Two distinct environments have been monitored: in the Vargem das Flores reservoir chlorophyll-a concentrations varied between 0.7 and 2.6 mg/m3, whereas in the Lagoa Santa lake these values lay in the 12 to 18 mg/m3 range. The simplicity, versatility and economy of the method, together with the large amount of data that can be gathered in a single run, clearly justify its use in field environmental studies.

  9. The Harmonizing Outcome Measures for Eczema (HOME) roadmap: a methodological framework to develop core sets of outcome measurements in dermatology.

    PubMed

    Schmitt, Jochen; Apfelbacher, Christian; Spuls, Phyllis I; Thomas, Kim S; Simpson, Eric L; Furue, Masutaka; Chalmers, Joanne; Williams, Hywel C

    2015-01-01

    Core outcome sets (COSs) are consensus-derived minimum sets of outcomes to be assessed in a specific situation. COSs are being increasingly developed to limit outcome-reporting bias, allow comparisons across trials, and strengthen clinical decision making. Despite the increasing interest in outcomes research, methods to develop COSs have not yet been standardized. The aim of this paper is to present the Harmonizing Outcome Measures for Eczema (HOME) roadmap for the development and implementation of COSs, which was developed on the basis of our experience in the standardization of outcome measurements for atopic eczema. Following the establishment of a panel representing all relevant stakeholders and a research team experienced in outcomes research, the scope and setting of the core set should be defined. The next steps are the definition of a core set of outcome domains such as symptoms or quality of life, followed by the identification or development and validation of appropriate outcome measurement instruments to measure these core domains. Finally, the consented COS needs to be disseminated, implemented, and reviewed. We believe that the HOME roadmap is a useful methodological framework to develop COSs in dermatology, with the ultimate goal of better decision making and promoting patient-centered health care.

  10. Measurement of automobile exhaust emissions under realistic road conditions

    SciTech Connect

    Staab, J.; Schurmann, D.

    1987-01-01

    An exhaust gas measurement system for on-board use has been developed that enables the direct and continuous determination of exhaust mass emissions from vehicles on the road. Such measurements under realistic traffic conditions are a valuable supplement to measurements taken on test benches, although the latter are still necessary. Numerous test runs were undertaken over the last two years. The reliability of the on-board system was demonstrated, and the test results provided a very informative view of the exhaust emission behavior of a vehicle on the road.

  11. Methodology for application of field rainfall simulator to revise c-factor database for conditions of the Czech Republic

    NASA Astrophysics Data System (ADS)

    Neumann, Martin; Dostál, Tomáš; Krása, Josef; Kavka, Petr; Davidová, Tereza; Brant, Václav; Kroulík, Milan; Mistr, Martin; Novotný, Ivan

    2016-04-01

    The presentation will introduce a methodology for determining the crop and cover management factor (C-factor) for the universal soil loss equation (USLE) using a field rainfall simulator. The aim of the project is to determine the C-factor value for the different phenophases of the main crops of the central European region, while also taking into account the different agrotechnical methods. By using the field rainfall simulator, it is possible to perform the measurements in specific phenophases, which is otherwise difficult to execute due to the variability and unpredictability of natural rainfall. Due to the number of measurements needed, two identical simulators will be used, operated by two independent teams, with a coordinated methodology. The methodology will mainly specify the length of simulation, the rainfall intensity, and the sampling technique. The presentation includes a more detailed account of the methods selected. Due to the wide range of variable crops and soils, it is not possible to execute the measurements for all possible combinations. We therefore decided to perform the measurements for previously selected combinations of soils, crops and agrotechnologies that are the most common in the Czech Republic. During the experiments, the volume of the surface runoff and the amount of sediment will be measured in their temporal distribution, as well as several other important parameters. The key values of the 3D matrix of the combinations of crop, agrotechnique and soil will be determined experimentally. The remaining values will be determined by interpolation or by a model analogy. There are several methods used for C-factor calculation from measured experimental data. Some of these are not suitable for the type of data gathered. The presentation will discuss the benefits and drawbacks of these methods, as well as the final design of the method used.
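    For reference, the C-factor determined from such plot experiments is conventionally the soil loss under the tested crop and management relative to a clean-tilled fallow reference under the same (simulated) rainfall, with seasonal values weighted by the share of erosive rainfall in each phenophase. A sketch with invented numbers; the weighting scheme follows common USLE/RUSLE practice, not necessarily the method the authors finally selected:

```python
def c_factor(soil_loss_crop, soil_loss_fallow):
    """Soil-loss ratio for one phenophase: soil loss from the cropped plot
    divided by soil loss from the continuously fallow reference plot
    under the same rainfall."""
    return soil_loss_crop / soil_loss_fallow

def seasonal_c_factor(soil_loss_ratios, rainfall_weights):
    """Season-weighted C-factor: per-phenophase soil-loss ratios weighted
    by the fraction of erosive rainfall occurring in each phenophase
    (weights should sum to 1)."""
    return sum(s * w for s, w in zip(soil_loss_ratios, rainfall_weights))

# Hypothetical: four phenophases of one crop on one soil.
slrs = [0.55, 0.30, 0.10, 0.20]        # measured soil-loss ratios
weights = [0.2, 0.3, 0.4, 0.1]         # erosive-rainfall fractions
c_annual = seasonal_c_factor(slrs, weights)
```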

  12. Probiotics production and alternative encapsulation methodologies to improve their viabilities under adverse environmental conditions.

    PubMed

    Coghetto, Chaline Caren; Brinques, Graziela Brusch; Ayub, Marco Antônio Záchia

    2016-12-01

    Probiotic products are dietary supplements containing live microorganisms that produce beneficial health effects on the host by improving intestinal balance and nutrient absorption. Among probiotic microorganisms, those classified as lactic acid bacteria are of major importance to the food and feed industries. Probiotic cells can be produced using alternative carbon and nitrogen sources, such as agroindustrial residues, while simultaneously reducing process costs. On the other hand, the survival of probiotic cells in formulated food products, as well as in the host gut, is an essential nutritional aspect concerning health benefits. Therefore, several cell microencapsulation techniques have been investigated as a way to improve cell viability and survival under adverse environmental conditions, such as the gastrointestinal milieu of hosts. In this review, different aspects of probiotic cells and technologies of their related products are discussed, including formulation of culture media and aspects of cell microencapsulation techniques required to improve their survival in the host.

  13. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    SciTech Connect

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
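    A minimal single-reference MCP sketch may clarify the correlate and predict steps; the hybrid method in the paper combines several such algorithms and multiple reference stations, and also accounts for wind direction. The wind speeds below are invented for illustration:

```python
def fit_mcp_linear(reference, target):
    """The 'correlate' step of measure-correlate-predict: a least-squares
    line relating concurrent reference-station wind speeds to wind speeds
    measured at the target site."""
    n = len(reference)
    mx = sum(reference) / n
    my = sum(target) / n
    sxx = sum((x - mx) ** 2 for x in reference)
    sxy = sum((x - mx) * (y - my) for x, y in zip(reference, target))
    slope = sxy / sxx
    return slope, my - slope * mx

def predict_long_term(long_term_reference, slope, intercept):
    """The 'predict' step: map the reference station's long-term record
    onto the target site."""
    return [slope * x + intercept for x in long_term_reference]

# Hypothetical concurrent measurement period (m/s); the target site here
# runs about 10% windier than the reference station.
ref = [4.0, 5.0, 6.0, 7.0, 8.0]
tgt = [4.4, 5.5, 6.6, 7.7, 8.8]
slope, intercept = fit_mcp_linear(ref, tgt)
long_term_target = predict_long_term([5.0, 6.5], slope, intercept)
```

    The paper's finding that accuracy is sensitive to the combination of algorithms and stations corresponds, in this sketch, to how strongly the fitted line depends on which reference record is used.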

  14. Curriculum-based measurement of math problem solving: a methodology and rationale for establishing equivalence of scores.

    PubMed

    Montague, Marjorie; Penfield, Randall D; Enders, Craig; Huang, Jia

    2010-02-01

    The purpose of this article is to discuss curriculum-based measurement (CBM) as it is currently utilized in research and practice and to propose a new approach for developing measures to monitor the academic progress of students longitudinally. To accomplish this, we first describe CBM and provide several exemplars of CBM in reading and mathematics. Then, we present the research context for developing a set of seven curriculum-based measures for monitoring student progress in math problem solving. The rationale for and advantages of using statistical equating methodology are discussed. Details of the methodology as it was applied to the development of these math problem solving measures are provided.
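    As background, one common statistical equating approach (not necessarily the one applied to these seven measures) is linear equating, which places scores from one test form on the scale of another by matching means and standard deviations. A sketch with hypothetical score sets:

```python
import statistics

def linear_equate(scores_x, scores_y):
    """Linear equating: return a function placing Form Y scores on the
    Form X scale via y* = mu_x + (sd_x / sd_y) * (y - mu_y), so that
    equated scores share Form X's mean and standard deviation."""
    mu_x, sd_x = statistics.mean(scores_x), statistics.pstdev(scores_x)
    mu_y, sd_y = statistics.mean(scores_y), statistics.pstdev(scores_y)
    return lambda y: mu_x + (sd_x / sd_y) * (y - mu_y)

# Hypothetical: Form Y is easier, so raw scores run higher.
form_x = [10, 12, 14, 16, 18]
form_y = [20, 22, 24, 26, 28]
to_x_scale = linear_equate(form_x, form_y)
```

    With equated forms, a student's progress across repeated administrations reflects growth rather than differences in form difficulty, which is the rationale the article gives for equating the seven problem-solving measures.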

  15. An implication of novel methodology to study pancreatic acinar mitochondria under in situ conditions.

    PubMed

    Manko, Bohdan O; Klevets, Myron Yu; Manko, Volodymyr V

    2013-03-01

    Mitochondria maintain numerous energy-consuming processes in pancreatic acinar cells, yet the characteristics of pancreatic mitochondrial oxidative phosphorylation under native conditions are poorly studied. Moreover, it is not known which type of solution is most adequate to preserve the functions of pancreatic mitochondria in situ. Here we propose a novel experimental protocol suitable for in situ analysis of pancreatic mitochondria metabolic states. Isolated rat pancreatic acini were permeabilized with low doses of digitonin. Different metabolic states of mitochondria were examined in KCl- and sucrose-based solutions using a Clark oxygen electrode. Respiration of digitonin-treated acini, unlike that of intact acini, was substantially intensified by succinate or a mixture of pyruvate plus malate. The substrate-stimulated respiration rate did not depend on solution composition. In sucrose-based solution, oligomycin inhibited State 3 respiration by 65.4% with succinate oxidation and by 60.2% with pyruvate plus malate oxidation, whereas in KCl-based solution the figures were 32.0% and 36.1%, respectively. Apparent respiratory control indices were considerably higher in sucrose-based solution. Rotenone or thenoyltrifluoroacetone severely inhibited respiration stimulated by pyruvate plus malate or succinate, respectively. This revealed low levels of non-mitochondrial oxygen consumption in permeabilized acinar cells. These results suggest a stronger coupling between respiration and oxidative phosphorylation in sucrose-based solution.
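    The coupling indices discussed above can be computed directly from the oxygen-consumption rates; the rates below are illustrative, not the paper's data, chosen only to mirror the reported inhibition percentages:

```python
def oligomycin_inhibition(state3_rate, oligo_rate):
    """Percent inhibition of State 3 respiration by oligomycin; a larger
    value indicates tighter coupling of respiration to ATP synthesis."""
    return 100.0 * (state3_rate - oligo_rate) / state3_rate

def respiratory_control_index(state3_rate, oligo_rate):
    """Apparent respiratory control index: State 3 respiration divided by
    the oligomycin-insensitive rate."""
    return state3_rate / oligo_rate

# Illustrative oxygen-consumption rates (arbitrary units): a State 3 rate
# of 100 with succinate, and the residual rate after oligomycin in each
# solution, set to match 65.4% and 32.0% inhibition respectively.
sucrose_rci = respiratory_control_index(100.0, 34.6)
kcl_rci = respiratory_control_index(100.0, 68.0)
```

    The higher index in the sucrose-based case is exactly the sense in which the authors conclude coupling is stronger in that solution.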

  16. Characteristics measurement methodology of the large-size autostereoscopic 3D LED display

    NASA Astrophysics Data System (ADS)

    An, Pengli; Su, Ping; Zhang, Changjie; Cao, Cong; Ma, Jianshe; Cao, Liangcai; Jin, Guofan

    2014-11-01

    Large-size autostereoscopic 3D LED displays are commonly used outdoors or in large indoor spaces, and have the properties of long viewing distance and relatively low light intensity at that distance. The instruments used to measure the characteristics of such displays (crosstalk, inconsistency, chromatic dispersion, etc.) should therefore have a long working distance and high sensitivity. In this paper, we propose a characteristics measurement methodology based on a distribution photometer with a working distance of 5.76 m and an illumination sensitivity of 0.001 mlx. A display panel holder is fabricated and attached to the turning stage of the distribution photometer. Specific test images are loaded on the display separately, and the luminance data at a distance of 5.76 m from the panel are measured. The data are then transformed into the light intensity at the optimum viewing distance. According to the definitions of the characteristics of 3D displays, the crosstalk, inconsistency and chromatic dispersion can be calculated. Test results and an analysis of the characteristics of an autostereoscopic 3D LED display are presented.
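    Two of the calculations implied by this methodology, black-level-corrected crosstalk and the inverse-square conversion from photometer illuminance to luminous intensity, can be sketched as follows. The readings are hypothetical, and the exact crosstalk definition should be checked against the paper:

```python
def crosstalk(leakage_luminance, signal_luminance, black_luminance):
    """A common 3D-display crosstalk definition: light leaking from the
    unintended view relative to the intended view, both corrected for
    the panel's black level."""
    return ((leakage_luminance - black_luminance)
            / (signal_luminance - black_luminance))

def intensity_at_distance(illuminance, measured_distance):
    """Treat the panel as a point source at long range: luminous intensity
    I = E * d**2 (inverse-square law), from illuminance E measured at
    distance d. This is what lets readings taken at the photometer's
    working distance be restated at the optimum viewing distance."""
    return illuminance * measured_distance ** 2

# Hypothetical photometer readings (lx) at the 5.76 m working distance:
xt = crosstalk(0.012, 0.20, 0.002)          # about 5% crosstalk
i_cd = intensity_at_distance(0.20, 5.76)    # candela, point-source model
```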

  17. Methodological considerations in the measurement of institutional and structural forms of HIV discrimination.

    PubMed

    Chan, K Y; Reidpath, D D

    2005-07-01

    The systematic measurement of HIV/AIDS-related discrimination is imperative within the current rhetoric that holds discrimination as one of the two 'biggest' barriers to HIV/AIDS pandemic intervention. This paper provides a methodological critique of the UNAIDS (2000b) Protocol for the Identification of Discrimination against People Living with HIV (the Protocol). Specifically, the paper focuses on the Protocol's capacity to accurately identify and measure institutional levels of HIV-related discrimination in a way that yields data that are reliable and comparable across time and contexts. Conceptual issues, including the Protocol's objective as an indicator versus a direct measure of discrimination and the role of the Protocol as a tool of research versus a tool of advocacy, are explored. Design issues such as the operationalization of discrimination, appropriateness of indicator content, sampling and data collection strategies, and issues of scoring are also evaluated. It is hoped that the matters outlined will provide readers with ways of critically reflecting on and evaluating the findings of the research papers presented in this Special Issue, as well as pointing to ways of improving research design.

  18. Moving to Capture Children’s Attention: Developing a Methodology for Measuring Visuomotor Attention

    PubMed Central

    Coats, Rachel O.; Mushtaq, Faisal; Williams, Justin H. G.; Aucott, Lorna S.; Mon-Williams, Mark

    2016-01-01

    Attention underpins many activities integral to a child’s development. However, methodological limitations currently make large-scale assessment of children’s attentional skill impractical, costly and lacking in ecological validity. Consequently we developed a measure of ‘Visual Motor Attention’ (VMA)—a construct defined as the ability to sustain and adapt visuomotor behaviour in response to task-relevant visual information. In a series of experiments, we evaluated the capability of our method to measure attentional processes and their contributions in guiding visuomotor behaviour. Experiment 1 established the method’s core features (ability to track stimuli moving on a tablet-computer screen with a hand-held stylus) and demonstrated its sensitivity to principled manipulations in adults’ attentional load. Experiment 2 standardised a format suitable for use with children and showed construct validity by capturing developmental changes in executive attention processes. Experiment 3 tested the hypothesis that children with and without coordination difficulties would show qualitatively different response patterns, finding an interaction between the cognitive and motor factors underpinning responses. Experiment 4 identified associations between VMA performance and existing standardised attention assessments and thereby confirmed convergent validity. These results establish a novel approach to measuring childhood attention that can produce meaningful functional assessments that capture how attention operates in an ecologically valid context (i.e. attention's specific contribution to visuomanual action). PMID:27434198

  19. A new methodology for measurement of sludge residence time distribution in a paddle dryer using X-ray fluorescence analysis.

    PubMed

    Charlou, Christophe; Milhé, Mathieu; Sauceau, Martial; Arlabosse, Patricia

    2015-02-01

    Drying is a necessary step before the energetic valorization of sewage sludge. Paddle dryers can handle such a complex material. However, little is known about sludge flow in this kind of process. This study sets up an original methodology for sludge residence time distribution (RTD) measurement in a continuous paddle dryer, based on the detection of mineral tracers by X-ray fluorescence. This accurate analytical technique offers a linear response to tracer concentration in dry sludge, and the protocol leads to good repeatability of RTD measurements. Its equivalence to RTD measurement by NaCl conductivity in sludge leachates is assessed. Moreover, it is shown that tracer solubility has no influence on RTD: liquid and solid phases have the same flow pattern. The application of this technique to sludge with different storage durations at 4 °C emphasizes the influence of this parameter on sludge RTD, and thus on paddle dryer performance: the mean residence time in a paddle dryer almost doubles between 24 and 48 h of storage under identical operating conditions.
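    The mean residence time quoted above is the first moment of the measured RTD. A sketch of that computation, with an invented tracer response curve standing in for the X-ray fluorescence counts:

```python
def mean_residence_time(times, concentrations):
    """Mean residence time from a pulse-tracer experiment: the first
    moment of the exit concentration curve, integral(t*C dt) divided by
    integral(C dt), evaluated with the trapezoidal rule."""
    def trapz(ys):
        return sum((y0 + y1) / 2.0 * (t1 - t0)
                   for (t0, y0), (t1, y1)
                   in zip(zip(times, ys), zip(times[1:], ys[1:])))
    weighted = [t * c for t, c in zip(times, concentrations)]
    return trapz(weighted) / trapz(concentrations)

# Hypothetical tracer signal sampled at the dryer outlet:
t = [0, 10, 20, 30, 40, 50, 60]            # minutes after injection
c = [0.0, 0.2, 0.8, 1.0, 0.6, 0.2, 0.0]    # tracer response, arbitrary units
tau = mean_residence_time(t, c)
```

    Comparing tau between runs (for example, sludge stored 24 h versus 48 h) is how a storage effect on dryer performance like the one reported would show up.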

  20. Optimizing conditions for E-and Z-ajoene formation from garlic juice using response surface methodology

    PubMed Central

    Yoo, Miyoung; Lee, Sanghee; Kim, Sunyoung; Shin, Dongbin

    2014-01-01

    The optimum conditions for the formation of E- and Z-ajoene from garlic juice mixed with soybean oil were determined using response surface methodology. A central composite design was used to investigate the effects of three independent variables: temperature (°C, X1), reaction time (hours, X2), and oil volume (as a multiple of sample weight, X3). The dependent variables were Z-ajoene (Y1) and E-ajoene (Y2) in oil-macerated garlic. The optimal conditions identified by ridge analysis were 98.80 °C, 6.87 h, and an oil amount of 2.57 times the sample weight for E-ajoene, and 42.24 °C, 9.71 h, and 3.08 times the sample weight for Z-ajoene. These conditions gave predicted E- and Z-ajoene values of 234.17 and 752.62 μg/g of garlic juice, respectively. The experimental values of E- and Z-ajoene were 222.75 and 833.59 μg/g, respectively. The estimated maximum values at the predicted optimum conditions were in good agreement with the experimental values.
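    Ridge analysis locates the stationary point of the fitted second-order response surface; for a single coded factor this reduces to finding the vertex of a quadratic. The coefficients below are invented for illustration and are not the study's fitted model:

```python
def quadratic_optimum(b1, b2):
    """Stationary point of a one-factor second-order RSM model
    y = b0 + b1*x + b2*x**2 (a maximum when b2 < 0): x* = -b1 / (2*b2)."""
    return -b1 / (2.0 * b2)

def predict(b0, b1, b2, x):
    """Predicted response of the fitted second-order model at coded x."""
    return b0 + b1 * x + b2 * x * x

# Entirely hypothetical coded coefficients for a response such as
# Z-ajoene yield versus temperature:
b0, b1, b2 = 700.0, 40.0, -25.0
x_star = quadratic_optimum(b1, b2)       # coded optimum
y_star = predict(b0, b1, b2, x_star)     # predicted maximum response
```

    In the multi-factor case the same idea applies to the full quadratic surface, and the coded optimum is mapped back to natural units (temperature, time, oil amount) to give conditions like those reported.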

  1. Application of nitric oxide measurements in clinical conditions beyond asthma

    PubMed Central

    Malinovschi, Andrei; Ludviksdottir, Dora; Tufvesson, Ellen; Rolla, Giovanni; Bjermer, Leif; Alving, Kjell; Diamant, Zuzana

    2015-01-01

    Fractional exhaled nitric oxide (FeNO) is a convenient, non-invasive method for the assessment of active, mainly Th2-driven, airway inflammation, which is sensitive to treatment with standard anti-inflammatory therapy. Consequently, FeNO serves as a valued tool to aid diagnosis and monitoring in several asthma phenotypes. More recently, FeNO has been evaluated in several other respiratory, infectious, and/or immunological conditions. In this short review, we provide an overview of several clinical studies and discuss the status of potential applications of NO measurements in clinical conditions beyond asthma. PMID:26672962

  2. Temperature Distribution Measurement of The Wing Surface under Icing Conditions

    NASA Astrophysics Data System (ADS)

    Isokawa, Hiroshi; Miyazaki, Takeshi; Kimura, Shigeo; Sakaue, Hirotaka; Morita, Katsuaki; Japan Aerospace Exploration Agency Collaboration; Univ of Notre Dame Collaboration; Kanagawa Institute of Technology Collaboration; Univ of Electro-(UEC) Team, Comm

    2016-11-01

    A de- or anti-icing system is necessary for safe aircraft operation. Icing is caused by supercooled water droplets that freeze on colliding with an object. In-flight icing can change the wing cross section and cause stall; in the worst case, the aircraft could crash. It is therefore important to know the surface temperature of the wing for de- or anti-icing. In the aerospace field, temperature-sensitive paint (TSP) has been widely used to obtain the surface temperature distribution on a test article, since the luminescent image from the TSP can be related to the temperature distribution. In an icing wind tunnel, we measured the surface temperature distribution of a wing model using the TSP measurement system, and the effect of icing conditions on the TSP measurement system is discussed.

  3. Phoretic and Radiometric Force Measurements on Microparticles in Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Davis, E. James

    1996-01-01

    Thermophoretic, diffusiophoretic and radiometric forces on microparticles are being measured over a wide range of gas phase and particle conditions, using electrodynamic levitation of single particles to simulate microgravity conditions. The thermophoretic force, which arises when a particle exists in a gas having a temperature gradient, is measured by levitating an electrically charged particle between heated and cooled plates mounted in a vacuum chamber. The diffusiophoretic force arising from a concentration gradient in the gas phase is measured in a similar manner, except that the heat exchangers are coated with liquids to establish a vapor concentration gradient. These phoretic forces and the radiation pressure force acting on a particle are measured directly in terms of the change in the dc field required to levitate the particle with and without the force applied. The apparatus developed for the research and the experimental techniques are discussed, and results obtained from thermophoresis experiments are presented. Of particular interest are the determination of the momentum and energy accommodation coefficients associated with collisions between gas molecules and particles, and the measurement of the interaction between electromagnetic radiation and small particles.
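    The force balance behind the measurement (the change in dc levitation field with the force on and off) can be sketched as follows; the particle charge and field values are illustrative only:

```python
def phoretic_force(charge, field_without, field_with):
    """Force on a levitated charged particle from the change in dc field.

    With the phoretic force off, levitation requires q*E0 = m*g; with it
    on, q*E1 = m*g + F. Subtracting gives F = q*(E1 - E0), so the force
    follows directly from the field change, without knowing the mass.
    """
    return charge * (field_with - field_without)

# Hypothetical: a particle carrying 1e5 elementary charges, and a dc
# field that must rise by 50 V/m to keep it levitated.
q = 1e5 * 1.602176634e-19          # coulombs
f = phoretic_force(q, 1.0e4, 1.0e4 + 50.0)   # newtons
```

    A downward-acting phoretic force raises the required field and gives a positive F; a force that assists levitation lowers it and F comes out negative.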

  4. Absolute gain measurement by the image method under mismatched condition

    NASA Technical Reports Server (NTRS)

    Lee, Richard Q.; Baddour, Maurice F.

    1987-01-01

    Purcell's image method for measuring the absolute gain of an antenna is particularly attractive for small test antennas. The method is simple to use and utilizes only one antenna, with a reflecting plane providing an image for the receiving antenna. However, the method provides accurate results only if the antenna is matched to its waveguide. In this paper, a waveguide junction analysis is developed to determine the gain of an antenna under mismatched conditions. Absolute gain measurements for two standard gain horn antennas have been carried out. Experimental results agree closely with published data.
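    A sketch of the two pieces of the calculation, assuming the textbook image-method relation G = (8*pi*d/lambda)*sqrt(Pr/Pt), with d the antenna-to-plane distance (the image sits at 2d), and the usual impedance-mismatch factor (1 - |Gamma|^2). These conventions are assumptions and should be checked against the paper's waveguide junction analysis:

```python
import math

def image_method_gain(distance_m, wavelength_m, p_received, p_transmitted):
    """Purcell image-method gain of a single antenna over a reflecting
    plane, from the ratio of received (reflected) to transmitted power.
    Equivalent to the two-antenna Friis relation with the image acting
    as the second, identical antenna at separation 2*d."""
    return (8.0 * math.pi * distance_m / wavelength_m) * math.sqrt(
        p_received / p_transmitted)

def mismatch_corrected_gain(measured_gain, reflection_coefficient_mag):
    """Correct a gain measured through a mismatched waveguide junction by
    the impedance-mismatch factor (1 - |Gamma|^2)."""
    return measured_gain / (1.0 - reflection_coefficient_mag ** 2)

# Hypothetical X-band example: 1 m above the plane, 3 cm wavelength,
# power ratio 1e-3, and |Gamma| = 0.2 at the waveguide junction.
g_raw = image_method_gain(1.0, 0.03, 1.0e-6, 1.0e-3)
g_corr = mismatch_corrected_gain(g_raw, 0.2)
```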

  5. Wall Conditioning and Impurity Measurements in the PEGASUS Experiment

    NASA Astrophysics Data System (ADS)

    Ono, M.; Fonck, R.; Toonen, R.; Thorson, T.; Tritz, K.; Winz, G.

    1999-11-01

    Wall conditioning and impurity effects on plasma evolution are increasingly relevant to the PEGASUS program. Surface conditioning consists of hydrogen glow discharge cleaning (GDC) to remove water and oxides, followed by He GDC to reduce the hydrogen inventory. Isotope exchange measurements indicate that periodic He GDC almost eliminates uncontrolled fueling from gas desorbed from the limiting surfaces. Additional wall conditioning will include Ti gettering and/or boronization. Impurity monitoring is provided by the recent installation of a SPRED multichannel VUV spectrometer (wavelength range = 10-110 nm; 1 msec time resolution), several interference filter (IF) monochromators, and a multichannel Ross-filter SXR diode assembly (for CV, CVI, OVII, and OVIII). The IF monitors indicate increased C radiation upon contact of the plasma with the upper and lower limiters for highly elongated plasmas. This radiation appears correlated with a subsequent rollover in the plasma current, and motivates an upgrade to the poloidal limiters to provide better plasma-wall interaction control.

  6. Measuring the impact of arthritis on worker productivity: perspectives, methodologic issues, and contextual factors.

    PubMed

    Tang, Kenneth; Escorpizo, Reuben; Beaton, Dorcas E; Bombardier, Claire; Lacaille, Diane; Zhang, Wei; Anis, Aslam H; Boonen, Annelies; Verstappen, Suzanne M M; Buchbinder, Rachelle; Osborne, Richard H; Fautrel, Bruno; Gignac, Monique A M; Tugwell, Peter S

    2011-08-01

    Leading up to the Outcome Measures in Rheumatology (OMERACT) 10 meeting, the goal of the Worker Productivity Special Interest Group (WP-SIG) was to make progress on 3 key issues that relate to the application and interpretation of worker productivity outcomes in arthritis: (1) to review existing conceptual frameworks to help consolidate our intended target and scope of measurement; (2) to examine the methodologic issues associated with our goal of combining multiple indicators of worker productivity loss (e.g., absenteeism and presenteeism) into a single comprehensive outcome; and (3) to examine the relevant contextual factors of work and potential implications for the interpretation of scores derived from existing outcome measures. Progress was made on all 3 issues at OMERACT 10. We identified 3 theoretical frameworks that offered unique but converging perspectives on worker productivity loss and/or work disability to provide guidance with classification, selection, and future recommendation of outcomes. Several measurement and analytic approaches to combine absenteeism and presenteeism outcomes were proposed, and the need for further validation of such approaches was also recognized. Finally, participants at the WP-SIG were engaged to brainstorm and provide preliminary endorsements to support key contextual factors of worker productivity through an anonymous "dot voting" exercise. A total of 24 specific factors were identified, with 16 receiving ≥ 1 vote among members, reflecting highly diverse views on specific factors that were considered most important. Moving forward, further progress on these issues remains a priority to help inform the best application of worker productivity outcomes in arthritis research.

  7. Extremely low frequency electromagnetic field measurements at the Hylaty station and methodology of signal analysis

    NASA Astrophysics Data System (ADS)

    Kulak, Andrzej; Kubisz, Jerzy; Klucjasz, Slawomir; Michalec, Adam; Mlynarczyk, Janusz; Nieckarz, Zenon; Ostrowski, Michal; Zieba, Stanislaw

    2014-06-01

    We present the Hylaty geophysical station, a high-sensitivity and low-noise facility for extremely low frequency (ELF, 0.03-300 Hz) electromagnetic field measurements, which enables a variety of geophysical and climatological research related to atmospheric, ionospheric, magnetospheric, and space weather physics. The first systematic observations of ELF electromagnetic fields at the Jagiellonian University were undertaken in 1994. At the beginning the measurements were carried out sporadically, during expeditions to sparsely populated areas of the Bieszczady Mountains in the southeast of Poland. In 2004, an automatic Hylaty ELF station was built there, in a very low electromagnetic noise environment, which enabled continuous recording of the magnetic field components of the ELF electromagnetic field in the frequency range below 60 Hz. In 2013, after 8 years of successful operation, the station was upgraded by extending its frequency range up to 300 Hz. In this paper we show the station's technical setup, and how it has changed over the years. We discuss the design of ELF equipment, including antennas, receivers, the time control circuit, and power supply, as well as antenna and receiver calibration. We also discuss the methodology we developed for observations of the Schumann resonance and wideband observations of ELF field pulses. We provide examples of various kinds of signals recorded at the station.

  8. Analysis and methodology for measuring oxygen concentration in liquid sodium with a plugging meter

    SciTech Connect

    Nollet, B. K.; Hvasta, M.; Anderson, M.

    2012-07-01

    Oxygen concentration in liquid sodium is a critical measurement in assessing the potential for corrosion damage in sodium-cooled fast reactors (SFRs). There has been little recent work on sodium reactors and oxygen detection; thus, the technical expertise in oxygen measurement within sodium is no longer readily available in the U.S. Two methods of oxygen detection that have been investigated are the plugging meter and the galvanic cell. One of the overall goals of the Univ. of Wisconsin's sodium research program is to develop an affordable, reliable galvanic cell oxygen sensor. Accordingly, attention must first be dedicated to a well-known standard, the plugging meter. Therefore, a sodium loop has been constructed on campus in an effort to develop the plugging meter technique and gain experience working with liquid metal. The loop contains both a galvanic cell test section and a plugging meter test section. Consistent plugging results have been achieved below 20 [wppm], and a detailed process for achieving effective plugging has been developed. This paper will focus both on an accurate methodology for obtaining oxygen concentrations from a plugging meter and on how to easily control the oxygen concentration of sodium in a test loop. Details of the design, materials, manufacturing, and operation will be presented. Data interpretation will also be discussed, since a modern discussion of plugging data interpretation does not currently exist. (authors)
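    Converting a measured plugging temperature to an oxygen concentration relies on a saturated-solubility correlation: the plug forms where sodium cools to the saturation temperature of dissolved oxide. The coefficients below follow the commonly cited Noden-style form log10(C) = A - B/T and should be treated as illustrative rather than authoritative; any real analysis should take its coefficients from the correlation the facility has validated:

```python
def oxygen_concentration_wppm(plugging_temp_k, a=6.2571, b=2444.5):
    """Oxygen concentration in sodium inferred from a plugging temperature,
    assuming oxide saturation at the plug and a solubility correlation of
    the form log10(C_wppm) = a - b / T_kelvin. The default coefficients
    are in the style of the Noden correlation and are illustrative."""
    return 10.0 ** (a - b / plugging_temp_k)

# Example: a plugging temperature of 200 C (473 K) implies an oxygen
# level comfortably below the 20 wppm regime mentioned above.
c_473 = oxygen_concentration_wppm(473.0)
```

    The same relation, inverted, is how the loop's oxygen level is controlled: holding a cold trap at a chosen temperature pins the dissolved-oxygen concentration at the corresponding saturation value.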

  9. Methodological considerations in service use assessment for children and youth with mental health conditions: issues for economic evaluation.

    PubMed

    Woolderink, M; Lynch, F L; van Asselt, A D I; Beecham, J; Evers, S M A A; Paulus, A T G; van Schayck, C P

    2015-05-01

    Economic evaluations are increasingly used in decision-making. Accurate measurement of service use is critical to economic evaluation. This qualitative study, based on expert interviews, aims to identify best approaches to service use measurement for child mental health conditions and to identify problems in current methods. Results suggest considerable agreement on the strengths (e.g., the availability of accurate instruments to measure service use) and weaknesses (e.g., the lack of unit prices for services outside the health sector) of current approaches to service use measurement. Experts also identified some unresolved problems, for example the lack of uniform definitions for some mental health services.

  10. Optimizing spray drying conditions of sour cherry juice based on physicochemical properties, using response surface methodology (RSM).

    PubMed

    Moghaddam, Arasb Dabbagh; Pero, Milad; Askari, Gholam Reza

    2017-01-01

    In this study, the effects of the main spray drying conditions, namely inlet air temperature (IT; 100-140 °C), maltodextrin concentration (MDC; 30-60%), and aspiration rate (AR; 30-50%), on the physicochemical properties of sour cherry powder, such as moisture content (MC), hygroscopicity, water solubility index (WSI), and bulk density, were investigated. The investigation employed response surface methodology (RSM), and the process conditions were optimized with this technique. The MC of the powder was negatively related to the linear effects of MDC and IT and directly related to AR. Hygroscopicity of the powder was significantly influenced by MDC: increasing the MDC in the juice decreased the hygroscopicity of the powder. MDC and IT had a positive effect, whereas AR had a negative effect, on the WSI of the powder. MDC and IT negatively affected the bulk density of the powder: increasing these two variables decreased the bulk density. The optimization procedure revealed that the following conditions produced a powder with maximum solubility and minimum hygroscopicity: MDC = 60%, IT = 134 °C, and AR = 30%, with a desirability of 0.875.
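The RSM workflow described here (fit a second-order polynomial to designed experiments, then optimize the fitted surface) can be sketched as follows. The factor settings and response data are synthetic, with a true optimum planted at coded coordinates (0.4, -0.2); this is an illustration of the generic technique, not the paper's data:

```python
import numpy as np

# Synthetic experiment: 30 runs at coded levels of two factors
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))   # e.g. coded IT and MDC levels
y = 5 - (X[:, 0] - 0.4) ** 2 - 2 * (X[:, 1] + 0.2) ** 2 \
    + rng.normal(0, 0.05, 30)          # response with measurement noise

def design_matrix(X):
    """Full quadratic model: intercept, linear, interaction, square terms."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2, x1**2, x2**2])

# Least-squares fit of the response surface
beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)

# Grid search the fitted surface for the maximizing factor settings
g = np.linspace(-1, 1, 201)
G = np.array(np.meshgrid(g, g)).reshape(2, -1).T
pred = design_matrix(G) @ beta
best = G[np.argmax(pred)]   # should land near the planted optimum (0.4, -0.2)
```

On real data, the coded levels would be mapped back to physical factor settings, and a desirability function would combine the several responses before the search.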

  11. Body temperature as a conditional response measure for pavlovian fear conditioning.

    PubMed

    Godsil, B P; Quinn, J J; Fanselow, M S

    2000-01-01

    On six days, rats were exposed to each of two contexts: they received an electric shock in one context and nothing in the other. The rats were later tested in each environment without shock. They froze and defecated more often in the shock-paired environment and also exhibited a significantly larger elevation in rectal temperature there. The rats thus discriminated between the two contexts, and we suggest that the elevation in temperature is a consequence of associative learning. Body temperature can therefore be used as a conditional response measure in Pavlovian fear conditioning experiments that use footshock as the unconditional stimulus.

  12. Use of response surface methodology to optimise environmental stress conditions on Penicillium glabrum, a food spoilage mould.

    PubMed

    Nevarez, Laurent; Vasseur, Valérie; Debaets, Stella; Barbier, Georges

    2010-01-01

    Fungi are ubiquitous microorganisms often associated with spoilage and biodeterioration of a large variety of foods and feedstuffs. Their growth may be influenced by temporary changes in intrinsic or environmental factors such as temperature, water activity, pH, preservatives, and atmosphere composition, all of which may represent potential sources of stress. Molecular-based analyses of their physiological responses to environmental conditions would help to better manage the risk of alteration and potential toxicity of food products. However, before investigating molecular stress responses, appropriate experimental stress conditions must be precisely defined. Penicillium glabrum is a filamentous fungus widely present in the environment and frequently isolated in the food processing industry as a contaminant of numerous products. Using response surface methodology, the present study evaluated the influence of two environmental factors (temperature and pH) on P. glabrum growth to determine 'optimised' environmental stress conditions. For thermal and pH shocks, a large range of conditions was applied by varying factor intensity and exposure time according to a two-factorial central composite design. Temperature and exposure duration varied from 30 to 50 °C and from 10 min to 230 min, respectively, and interaction effects between the two variables on fungal growth were observed. For pH, the duration of exposure, from 10 to 230 min, had no significant effect on fungal growth; experiments were thus carried out over a pH range of 0.15 to 12.50 for a single exposure time of 240 min. Based on the fungal growth results, a thermal shock of 120 min at 40 °C or a pH shock of 240 min at pH 1.50 or 9.00 may therefore be useful for investigating stress responses to non-optimal conditions.

  13. Modeling of the effect of freezer conditions on the principal constituent parameters of ice cream by using response surface methodology.

    PubMed

    Inoue, K; Ochi, H; Taketsuka, M; Saito, H; Sakurai, K; Ichihashi, N; Iwatsuki, K; Kokubo, S

    2008-05-01

    A systematic analysis was carried out by using response surface methodology to create a quantitative model of the synergistic effects of conditions in a continuous freezer [mix flow rate (L/h), overrun (%), cylinder pressure (kPa), drawing temperature (°C), and dasher speed (rpm)] on the principal constituent parameters of ice cream [rate of fat destabilization (%), mean air cell diameter (μm), and mean ice crystal diameter (μm)]. A central composite face-centered design was used for this study. Thirty-one combinations of the 5 above-mentioned freezer conditions were designed (including replicates at the center point), and ice cream samples were manufactured and examined in a continuous freezer under the selected conditions. The responses were the 3 variables given above. A quadratic model was constructed, with the freezer conditions as the independent variables and the ice cream characteristics as the dependent variables. The coefficients of determination (R²) were greater than 0.9 for all 3 responses, but Q², the index used here for the capability of the model for predicting future observed values of the responses, was negative for both the mean ice crystal diameter and the mean air cell diameter. Therefore, pruned models were constructed by removing terms that had contributed little to the prediction in the original model and by refitting the regression model. It was demonstrated that these pruned models provided good fits to the data in terms of R², Q², and ANOVA. The effects of freezer conditions were expressed quantitatively in terms of the 3 responses. The drawing temperature (°C) was found to have a greater effect on ice cream characteristics than any of the other factors.

  14. Optimization of Fermentation Conditions for Recombinant Human Interferon Beta Production by Escherichia coli Using the Response Surface Methodology

    PubMed Central

    Morowvat, Mohammad Hossein; Babaeipour, Valiollah; Rajabi Memari, Hamid; Vahidi, Hossein

    2015-01-01

    Background: The periplasmic overexpression of recombinant human interferon beta (rhIFN-β)-1b using a synthetic gene in Escherichia coli BL21 (DE3) was optimized in shake flasks using Response Surface Methodology (RSM) based on the Box-Behnken Design (BBD). Objectives: This study aimed to predict and develop the optimal fermentation conditions for periplasmic expression of rhIFN-β-1b in shake flasks while keeping acetate excretion to a minimum, and to apply the best conditions to rhIFN-β production in a bench-top bioreactor. Materials and Methods: The process variables studied were the concentration of glucose as carbon source, cell density prior to induction (OD 600 nm), and induction temperature. A three-factor, three-level BBD was employed during the optimization process. The rhIFN-β production and the acetate excretion served as the evaluated responses. Results: The proposed optimum fermentation condition consisted of 7.81 g L-1 glucose, an OD 600 nm prior to induction of 1.66, and an induction temperature of 30.27°C. The model prediction of 0.267 g L-1 of rhIFN-β and 0.961 g L-1 of acetate at the optimum conditions was verified experimentally, yielding 0.255 g L-1 of rhIFN-β and 0.981 g L-1 of acetate. This agreement between the predicted and observed values confirms the precision of the applied method. Conclusions: It can be concluded that RSM is an effective method for the optimization of recombinant protein expression using synthetic genes in E. coli. PMID:26034535
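A three-factor Box-Behnken design of the kind used above places runs at the midpoints of the edges of the coded factor cube, plus replicated center points. A generic construction in coded units (the run count and ordering here are illustrative, not the paper's exact design):

```python
from itertools import combinations

def box_behnken(n_factors=3, n_center=3):
    """Generate a coded Box-Behnken design: for each pair of factors,
    a 2x2 factorial at +/-1 with every remaining factor held at 0,
    plus center-point replicates."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                run = [0] * n_factors
                run[i], run[j] = a, b
                runs.append(run)
    runs.extend([[0] * n_factors] * n_center)
    return runs

design = box_behnken(3, 3)   # 12 edge-midpoint runs + 3 center points = 15 runs
```

Each coded run is then translated to physical settings (e.g. -1/0/+1 mapping to low/mid/high glucose concentration) before the fermentations are carried out.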

  15. Probability distributions of continuous measurement results for conditioned quantum evolution

    NASA Astrophysics Data System (ADS)

    Franquet, A.; Nazarov, Yuli V.

    2017-02-01

    We address the statistics of continuous weak linear measurement on a few-state quantum system that is subject to a conditioned quantum evolution. For a conditioned evolution, both the initial and final states of the system are fixed: the latter is achieved by postselection at the end of the evolution. The statistics may drastically differ from the nonconditioned case, and the interference between initial and final states can be observed in the probability distributions of measurement outcomes as well as in average values exceeding the conventional range of nonconditioned averages. We develop a proper formalism to compute the distributions of measurement outcomes, and evaluate and discuss the distributions in experimentally relevant setups. We demonstrate the manifestations of the interference between initial and final states in various regimes. We consider analytically simple examples of nontrivial probability distributions. We reveal peaks (or dips) at half-quantized values of the measurement outputs. We discuss in detail the case of zero overlap between initial and final states, which demonstrates anomalously large average outputs and a sudden jump in the time-integrated output. We present and discuss a numerical evaluation of the probability distribution, aiming to extend the analytical results and to describe a realistic experimental situation of a qubit in the regime of resonant fluorescence.

  16. Extension of laboratory-measured soil spectra to field conditions

    NASA Technical Reports Server (NTRS)

    Stoner, E. R.; Baumgardner, M. F.; Weismiller, R. A.; Biehl, L. L.; Robinson, B. F.

    1982-01-01

    Spectral responses of two glaciated soils, Chalmers silty clay loam and Fincastle silt loam, formed under prairie grass and forest vegetation, respectively, were measured in the laboratory under controlled moisture equilibria and artificial illumination using an Exotech Model 20C spectroradiometer. The same spectroradiometer was used outdoors under solar illumination to obtain spectral responses from dry and moistened field plots with and without corn residue cover, representing the two different soils. Results indicate that laboratory-measured spectra of moist soil are directly proportional to the spectral response of the same field-measured moist bare soil over the 0.52 to 1.75 micrometer wavelength range. The magnitudes of difference in spectral response between identically treated Chalmers and Fincastle soils are greatest in the 0.6 to 0.8 micrometer transition region between the visible and near infrared, regardless of field condition or laboratory preparation.

  17. Noninvasive measurement system for human respiratory condition and body temperature

    NASA Astrophysics Data System (ADS)

    Toba, Eiji; Sekiguchi, Sadamu; Nishimatsu, Toyonori

    1995-06-01

    A special chromel (C) and alumel (A) wire thermopile has been developed that can measure the human respiratory condition and body temperature without direct contact between the sensor and the human body. The measurement system enables high-speed, real-time, noninvasive, and simultaneous measurement of respiratory rate and body temperature with the same sensor. The special CA thermopile, with sensing junctions of approximately 25 μm, was constructed using spot-welded thermopile junctions. The thermoelectric power of the 17-pair special CA thermopile is 0.7 mV/°C. The special CA thermopile provides high sensitivity and good frequency characteristics, with a gain that is flat to approximately 10 Hz.

  18. Detection of Chamber Conditioning Through Optical Emission and Impedance Measurements

    NASA Technical Reports Server (NTRS)

    Cruden, Brett A.; Rao, M. V. V. S.; Sharma, Surendra P.; Meyyappan, Meyya

    2001-01-01

    During oxide etch processes, buildup of fluorocarbon residues on reactor sidewalls can cause run-to-run drift and will necessitate some time for conditioning and seasoning of the reactor. Though diagnostics can be applied to study and understand these phenomena, many of them are not practical for use in an industrial reactor. For instance, measurements of ion fluxes and energy by mass spectrometry show that the buildup of insulating fluorocarbon films on the reactor surface will cause a shift in both ion energy and current in an argon plasma. However, such a device cannot be easily integrated into a processing system. The shift in ion energy and flux will be accompanied by an increase in the capacitance of the plasma sheath. The shift in sheath capacitance can be easily measured by a common commercially available impedance probe placed on the inductive coil. A buildup of film on the chamber wall is expected to affect the production of fluorocarbon radicals, and thus the presence of such species in the optical emission spectrum of the plasma can be monitored as well. These two techniques are employed on a GEC (Gaseous Electronics Conference) Reference Cell to assess the validity of optical emission and impedance monitoring as a metric of chamber conditioning. These techniques are applied to experimental runs with CHF3 and CHF3/O2/Ar plasmas, with intermediate monitoring of pure argon plasmas as a reference case for chamber conditions.

  19. Classification of heart valve condition using acoustic measurements

    SciTech Connect

    Clark, G.

    1994-11-15

    Prosthetic heart valves and the many great strides in valve design have been responsible for extending the life spans of many people with serious heart conditions. Even though prosthetic valves are extremely reliable, they are eventually susceptible to the long-term fatigue and structural failure effects expected of mechanical devices operating over long periods of time. The purpose of our work is to classify the condition of in vivo Bjork-Shiley Convexo-Concave (BSCC) heart valves by processing acoustic measurements of heart valve sounds. The structural failure of interest for BSCC valves is called single leg separation (SLS). SLS can occur if the outlet strut cracks and separates from the main structure of the valve. We measure acoustic opening and closing sounds (waveforms) using high-sensitivity contact microphones on the patient's thorax. For our analysis, we focus our processing and classification efforts on the opening sounds because they yield direct information about outlet strut condition with minimal distortion caused by energy radiated from the valve disc.

  20. Current psychometric and methodological issues in the measurement of overgeneral autobiographical memory.

    PubMed

    Griffith, James W; Sumner, Jennifer A; Raes, Filip; Barnhofer, Thorsten; Debeer, Elise; Hermans, Dirk

    2012-12-01

    Autobiographical memory is a multifaceted construct that is related to psychopathology and other difficulties in functioning. Across many studies, a variety of methods have been used to study autobiographical memory. The relationship between overgeneral autobiographical memory (OGM) and psychopathology has been of particular interest, and many studies of this cognitive phenomenon rely on the Autobiographical Memory Test (AMT) to assess it. In this paper, we examine several methodological approaches to studying autobiographical memory, and focus primarily on methodological and psychometric considerations in OGM research. We pay particular attention to what is known about the reliability, validity, and methodological variations of the AMT. The AMT has adequate psychometric properties, but there is great variability in methodology across studies that use it. Methodological recommendations and suggestions for future studies are presented.

  1. Thermophysical Properties Measurement of High-Temperature Liquids Under Microgravity Conditions in Controlled Atmospheric Conditions

    NASA Technical Reports Server (NTRS)

    Watanabe, Masahito; Ozawa, Shumpei; Mizuno, Akotoshi; Hibiya, Taketoshi; Kawauchi, Hiroya; Murai, Kentaro; Takahashi, Suguru

    2012-01-01

    Microgravity conditions are advantageous for measuring the surface tension and viscosity of metallic liquids by the oscillating drop method with an electromagnetic levitation (EML) device. We are therefore preparing thermophysical property measurement experiments using the Materials-Science Laboratories ElectroMagnetic-Levitator (MSL-EML) facilities on the International Space Station (ISS). Recently, it has been recognized that the dependence of surface tension on oxygen partial pressure (Po2) must be considered for industrial application of surface tension values, and the effect of Po2 on surface tension can apparently change the viscosity inferred from the damping oscillation model. Therefore, surface tension and viscosity must be measured simultaneously under the same atmospheric conditions. Moreover, the effect of the electromagnetic force (EMF) on the surface oscillations must be clarified to obtain the ideal surface oscillation, because the EMF acts as an external force on the oscillating liquid droplets, and an excessive EMF makes the apparent viscosity values large. Using the parabolic flight levitation experimental facilities (PFLEX), our group systematically investigated the effects of Po2 and external EMF on the surface oscillation of levitated liquid droplets, in preparation for precise measurements of the surface tension and viscosity of high-temperature liquids in future ISS experiments. We observed the surface oscillations of levitated liquid alloys using PFLEX during flight experiments on board a Gulfstream II (G-II) airplane operated by DAS, under controlled Po2 and suitable EMF conditions. In these experiments, we obtained density, viscosity, and surface tension values for liquid Cu. We compare these results with previously reported data and discuss the differences in surface oscillations observed as the EMF conditions were changed.

  2. What about temperature? Measuring permeability at magmatic conditions.

    NASA Astrophysics Data System (ADS)

    Kushnir, Alexandra R. L.; Martel, Caroline; Champallier, Rémi; Reuschlé, Thierry

    2015-04-01

    The explosive potential of volcanoes is intimately linked to permeability, which is governed by the connectivity of the porous structure of the magma and surrounding edifice. As magma ascends, volatiles exsolve from the melt and expand, creating a gas phase within the conduit. In the absence of a permeable structure capable of dissipating these gases, the propulsive force of an explosive eruption arises from the gas expansion and the buildup of subsurface overpressures. Thus, characterizing the permeability of volcanic rocks under in-situ conditions (high temperature and pressure) allows us to better understand the outgassing potential and explosivity of volcanic systems. Current studies of the permeability of volcanic rocks generally measure permeability at room temperature using gas permeameters or model permeability using analytic imaging. Our goal is to perform and assess permeability measurements made at high temperature and high pressure in the interest of approaching the permeability of the samples at magmatic conditions. We measure the permeability of andesitic samples expelled during the 2010 Mt. Merapi eruption. We employ and compare two protocols for measuring permeability at high temperature and under high pressure using argon gas in an internally heated Paterson apparatus with an isolated pore fluid system. We first use the pulse decay method to measure the permeability of our samples, then compare these values to permeability measurements performed under steady-state flow. We consider the steady-state flow method the more rigorous of the two protocols, as it better accounts for the temperature gradient within the entire pore fluid system. At temperatures in excess of 700°C and pressures of 100 MPa, permeability values plummet by several orders of magnitude. These values are significantly lower than those commonly reported for room-temperature permeameter measurements. The reduction in permeability at high temperature is a
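Once the flow stabilizes, the steady-state protocol reduces to Darcy's law. A sketch of the textbook compressible-gas form of that calculation (all numerical values below are invented for illustration, not measurements from this study):

```python
def darcy_gas_permeability(Q, mu, L, A, p_up, p_down):
    """Steady-state gas permeability (m^2) from Darcy's law with the
    compressible-gas correction:
        k = 2 * Q * mu * L * p_down / (A * (p_up**2 - p_down**2))
    where Q is the volumetric flow rate measured at the downstream
    pressure p_down."""
    return 2.0 * Q * mu * L * p_down / (A * (p_up**2 - p_down**2))

# Hypothetical argon flow through a small cylindrical core:
k = darcy_gas_permeability(Q=1.0e-8,    # m^3/s, downstream flow rate
                           mu=2.2e-5,   # Pa*s, argon viscosity
                           L=0.02,      # m, sample length
                           A=5.0e-4,    # m^2, sample cross-section
                           p_up=1.5e5, p_down=1.0e5)  # Pa
```

The pulse-decay method instead fits the exponential equilibration of the pressure difference between two reservoirs; for a homogeneous, isothermal sample, both protocols should converge on the same k.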

  3. Effect of Culture Condition Variables on Human Endostatin Gene Expression in Escherichia coli Using Response Surface Methodology

    PubMed Central

    Mohajeri, Abbas; Pilehvar-Soltanahmadi, Yones; Abdolalizadeh, Jalal; Karimi, Pouran; Zarghami, Nosratollah

    2016-01-01

    Background: Recombinant human endostatin (rhES) is an angiogenesis inhibitor used as a specific drug for the treatment of non-small-cell lung cancer. As mRNA concentration affects the recombinant protein expression level, any factor affecting mRNA concentration can alter the protein expression level. Response surface methodology (RSM) based on the Box-Behnken design (BBD) is a statistical tool for experimental design and for optimizing biotechnological processes. Objectives: This investigation aimed to predict and develop the optimal culture conditions for mRNA expression of the synthetic human endostatin (hES) gene in Escherichia coli BL21 (DE3). Materials and Methods: The hES gene was amplified, cloned, and expressed in the E. coli expression system. Three factors, namely isopropyl β-D-1-thiogalactopyranoside (IPTG) concentration, post-induction time, and cell density before induction, were selected as important factors. The mRNA expression level was determined using real-time PCR, and the expression levels of hES mRNA under the different growth conditions were analyzed. SDS-PAGE and western blot analyses were carried out for further confirmation of gene expression. Results: A maximum rhES mRNA level of 376.16% was obtained under the following conditions: 0.6 mM IPTG, a post-induction time of 7 hours, and a pre-induction cell density of 0.9. The level of rhES mRNA was significantly correlated with post-induction time, IPTG concentration, and cell density before induction (P < 0.05). The expression of the hES gene was confirmed by western blot. Conclusions: The obtained results indicate that RSM is an effective method for the optimization of culture conditions for hES gene expression in E. coli. PMID:27800134

  4. Optimization of Manufacturing Conditions for Improving Storage Stability of Coffee-Supplemented Milk Beverage Using Response Surface Methodology

    PubMed Central

    Kim, Jae-Hoon; Oh, Duk-Geun; Kim, Moojoong; Chung, Donghwa

    2017-01-01

    This study aimed at optimizing the manufacturing conditions of a milk beverage supplemented with coffee and at monitoring its physicochemical and sensory properties during storage. Raw milk, skim milk powder, coffee extract, and emulsifiers were used to manufacture the beverage. Two sucrose fatty acid esters, F110 and F160, were identified as suitable emulsifiers. The optimum manufacturing conditions, determined by response surface methodology (RSM) to satisfy both response criteria simultaneously, were a primary homogenization speed of 5,000 rpm and a sucrose fatty acid emulsifier addition of 0.207%. The particle size and zeta-potential of the beverage under the optimum conditions were 190.1 nm and -25.94±0.06 mV, respectively. In a comparison between the F110 group (GF110) and the F160 group (GF160) during storage, all samples maintained a pH of about 6.6 to 6.7, with no significant difference (p<0.05). In addition, GF110 showed a significantly higher zeta-potential than GF160 (p<0.05). The initial particle sizes of GF110 and GF160 were approximately 190.1 and 223.1 nm, respectively; however, the size distribution of GF160 tended to increase during storage, and the increase in its particle size was also observed in micrographs taken during storage. The L* values gradually decreased within all groups, whereas the a* and b* values did not show significant variations (p<0.05). Compared with GF160, bitterness, floating cream, and rancid flavor were more pronounced in GF110. Based on the results obtained from the present study, it appears that the sucrose fatty acid ester F110 is a more suitable emulsifier for manufacturing this beverage than F160 and also contributes to extending product shelf-life. PMID:28316475

  5. Measurement error and outcome distributions: Methodological issues in regression analyses of behavioral coding data.

    PubMed

    Holsclaw, Tracy; Hallgren, Kevin A; Steyvers, Mark; Smyth, Padhraic; Atkins, David C

    2015-12-01

    Behavioral coding is increasingly used for studying mechanisms of change in psychosocial treatments for substance use disorders (SUDs). However, behavioral coding data typically include features that can be problematic in regression analyses, including measurement error in independent variables, non-normal distributions of count outcome variables, and conflation of predictor and outcome variables with third variables, such as session length. Methodological research in econometrics has shown that these issues can lead to biased parameter estimates, inaccurate standard errors, and increased Type I and Type II error rates, yet these statistical issues are not widely known within SUD treatment research, or more generally, within psychotherapy coding research. Using minimally technical language intended for a broad audience of SUD treatment researchers, the present paper illustrates how these data issues are problematic. We draw on real-world data and simulation-based examples to illustrate how these data features can bias estimation of parameters and interpretation of models. A weighted negative binomial regression is introduced as an alternative to ordinary linear regression that appropriately addresses the data characteristics common to SUD treatment behavioral coding data. We conclude by demonstrating how to use and interpret these models with data from a study of motivational interviewing. SPSS and R syntax for weighted negative binomial regression models is included in online supplemental materials.
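The attenuation effect of predictor measurement error described above is easy to demonstrate by simulation: classical error in a predictor shrinks the ordinary least-squares slope toward zero by the reliability ratio var(x) / (var(x) + var(error)). The variances and true slope below are arbitrary choices illustrating the statistical point, not the paper's analysis:

```python
import numpy as np

rng = np.random.default_rng(1)
n, true_slope = 100_000, 2.0
x = rng.normal(0.0, 1.0, n)           # true predictor, e.g. a fidelity score
x_obs = x + rng.normal(0.0, 1.0, n)   # observed with classical coding error
y = true_slope * x + rng.normal(0.0, 1.0, n)

# OLS on the error-laden predictor shrinks the slope by the reliability
# ratio var(x) / (var(x) + var(error)) = 1 / (1 + 1) = 0.5.
slope_obs = np.polyfit(x_obs, y, 1)[0]   # ~1.0 instead of the true 2.0
```

The same mechanism inflates Type II error for the attenuated predictor; count-outcome problems additionally call for the negative binomial family the paper recommends.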

  6. Thermal-Diffusivity Measurements of Mexican Citrus Essential Oils Using Photoacoustic Methodology in the Transmission Configuration

    NASA Astrophysics Data System (ADS)

    Muñoz, G. A. López; González, R. F. López; López, J. A. Balderas; Martínez-Pérez, L.

    2011-05-01

    Photoacoustic methodology in the transmission configuration (PMTC) was used to study the thermophysical properties of Mexican citrus essential oils and their relation to composition, demonstrating the viability of photothermal techniques for quality control and for authentication of oils and detection of their adulteration. Linear relations for the amplitude (on a semi-log scale) and phase, as functions of the sample's thickness, were obtained through a theoretical model fit to the experimental data for thermal-diffusivity measurements in Mexican orange, pink grapefruit, mandarin, and lime type A centrifuged essential oils, and in Mexican distilled lime essential oil. Gas chromatography results for distilled lime essential oil and centrifuged lime essential oil type A are reported to complement the study. Experimental results showed close thermal-diffusivity values among the Mexican citrus essential oils obtained by centrifugation, but a significant difference in this physical property between distilled lime oil and the corresponding oil obtained by centrifugation, which is due to the different chemical compositions produced by the two extraction processes.
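The semi-log linearity in thickness is what makes the diffusivity extraction work: in the thermally thick regime, the transmitted photoacoustic amplitude decays as exp(-l*sqrt(pi*f/alpha)), so a linear fit of ln(amplitude) versus thickness l yields alpha = pi*f / slope^2. A noise-free sketch with invented numbers (the modulation frequency and diffusivity are assumptions, not values from the paper):

```python
import numpy as np

f = 2.0                              # Hz, modulation frequency (assumed)
alpha_true = 9.0e-8                  # m^2/s, typical order for oils (assumed)
l = np.linspace(50e-6, 400e-6, 8)    # m, sample thicknesses
amplitude = np.exp(-l * np.sqrt(np.pi * f / alpha_true))  # model signal

# Fit ln(amplitude) vs. thickness; the slope gives the diffusivity:
slope = np.polyfit(l, np.log(amplitude), 1)[0]
alpha_fit = np.pi * f / slope**2     # recovers alpha_true
```

With real data, the fit would include measurement noise and the phase-versus-thickness slope provides an independent estimate of the same diffusivity.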

  8. Anonymous indexing of health conditions for a similarity measure.

    PubMed

    Song, Insu; Marsh, Nigel V

    2012-07-01

    A health social network is an online information service which facilitates information sharing between closely related members of a community with the same or a similar health condition. Over the years, many automated recommender systems have been developed for social networking in order to help users find their communities of interest. For health social networking, the ideal source of information for measuring similarities of patients is the medical information of the patients. However, it is not desirable that such sensitive and private information be shared over the Internet. This is also true for many other security sensitive domains. A new information-sharing scheme is developed where each patient is represented as a small number of (possibly disjoint) d-words (discriminant words) and the d-words are used to measure similarities between patients without revealing sensitive personal information. The d-words are simple words like "food,'' and thus do not contain identifiable personal information. This makes our method an effective one-way hashing of patient assessments for a similarity measure. The d-words can be easily shared on the Internet to find peers who might have similar health conditions.
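With each patient reduced to a small set of non-identifying d-words, peer matching needs only a set-similarity measure over those words. A minimal sketch; the d-word representation is from the paper, but the use of Jaccard similarity here is an illustrative assumption:

```python
def dword_similarity(dwords_a, dwords_b):
    """Jaccard similarity between two patients' d-word sets. The sets
    contain only generic words like 'food', never identifiers, so they
    can be shared and compared without exposing medical records."""
    a, b = set(dwords_a), set(dwords_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

s = dword_similarity({"food", "sleep", "fatigue"},
                     {"food", "fatigue", "mood"})   # 2 shared / 4 total = 0.5
```

Because the mapping from assessments to d-words is effectively one-way, similarity can be computed server-side without the server ever holding the underlying medical information.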

  9. The application of conditioning paradigms in the measurement of pain

    PubMed Central

    Li, Jun-Xu

    2013-01-01

    Pain is a private experience that involves both sensory and emotional components. In animal studies, pain can only be inferred from the animals' responses, and therefore measurement of reflexive responses has dominated the pain literature for nearly a century. It has been argued that although reflexive responses are important for unveiling the sensory nature of pain in organisms, pain affect is equally important but largely ignored in pain studies, primarily due to the lack of validated animal models. One strategy for beginning to understand pain affect is to use conditioning principles to indirectly reveal the affective component of pain. This review critically analyzes several procedures that are thought to measure affective learning of pain. Current knowledge regarding these procedures, their applications, and their advantages and disadvantages in pain research are discussed. It is proposed that these procedures be combined with traditional reflex-based pain measurements in future studies of pain, which could greatly benefit both the understanding of the neural underpinnings of pain and the preclinical assessment of novel analgesics. PMID:23500202

  10. Closed-loop snowplow applicator control using road condition measurements

    NASA Astrophysics Data System (ADS)

    Erdogan, Gurkan; Alexander, Lee; Rajamani, Rajesh

    2011-04-01

    Closed-loop control of a snowplow applicator, based on direct measurement of the road surface condition, is a valuable technology for the optimisation of winter road maintenance costs and for the protection of the environment from the negative impacts of excessive usage of de-icing chemicals. To this end, a novel friction measurement wheel is designed to provide a continuous measurement of road friction coefficient, which is, in turn, utilised to control the applicator automatically on a snowplow. It is desired that the automated snowplow applicator deploy de-icing materials right from the beginning of any slippery surface detected by the friction wheel, meaning that no portion of the slippery road surface should be left untreated behind, as the snowplow travels over it at a reasonably high speed. This paper describes the developed wheel-based measurement system, the friction estimation algorithm and the expected performance of the closed-loop applicator system. Conventional and zero velocity applicators are introduced and their hardware time delays are measured in addition to the time delay of the friction estimation algorithm. The overall performance of the closed-loop applicator control system is shown to be reliable at typical snowplowing speeds if the zero velocity applicator is used.

  11. A Novel Fiber Bragg Grating Based Sensing Methodology for Direct Measurement of Surface Strain on Body Muscles during Physical Exercises

    NASA Astrophysics Data System (ADS)

    Prasad Arudi Subbarao, Guru; Subbaramajois Narasipur, Omkar; Kalegowda, Anand; Asokan, Sundarrajan

    2012-07-01

    The present work proposes a new sensing methodology, which uses Fiber Bragg Gratings (FBGs) to measure in vivo the surface strain and strain rate on calf muscles while performing certain exercises. Two simple exercises, namely ankle dorsi-flexion and ankle plantar-flexion, have been considered and the strain induced on the medial head of the gastrocnemius muscle while performing these exercises has been monitored. The real time strain generated has been recorded and the results are compared with those obtained using a commercial Color Doppler Ultrasound (CDU) system. It is found that the proposed sensing methodology is promising for surface strain measurements in biomechanical applications.

  12. A METHODOLOGY TO INTEGRATE MAGNETIC RESONANCE AND ACOUSTIC MEASUREMENTS FOR RESERVOIR CHARACTERIZATION

    SciTech Connect

    Jorge O. Parra; Chris L. Hackert; Lorna L. Wilson

    2002-09-20

    The work reported herein represents the third year of development efforts on a methodology to interpret magnetic resonance and acoustic measurements for reservoir characterization. In this last phase of the project we characterize a vuggy carbonate aquifer in the Hillsboro Basin, Palm Beach County, South Florida, using two data sets--the first generated by velocity tomography and the second generated by reflection tomography. First, we integrate optical macroscopic (OM), scanning electron microscope (SEM) and x-ray computed tomography (CT) images, as well as petrography, as a first step in characterizing the aquifer pore system. This pore scale integration provides information with which to evaluate nuclear magnetic resonance (NMR) well log signatures for NMR well log calibration, interpret ultrasonic data, and characterize flow units at the field scale between two wells in the aquifer. Saturated and desaturated NMR core measurements estimate the irreducible water in the rock and the variable T2 cut-offs for the NMR well log calibration. These measurements establish empirical equations to extract permeability from NMR well logs. Velocity and NMR-derived permeability and porosity relationships integrated with velocity tomography (based on crosswell seismic measurements recorded between two wells 100 m apart) capture two flow units that are supported with pore scale integration results. Next, we establish a more detailed picture of the complex aquifer pore structures and the critical role they play in water movement, which aids in our ability to characterize not only carbonate aquifers, but reservoirs in general. We analyze petrography and cores to reveal relationships between the rock physical properties that control the compressional and shear wave velocities of the formation.
A digital thin section analysis provides the pore size distributions of the rock matrix, which allows us to relate pore structure to permeability and to characterize flow units at the

  13. High Accuracy Temperature Measurements Using RTDs with Current Loop Conditioning

    NASA Technical Reports Server (NTRS)

    Hill, Gerald M.

    1997-01-01

    To measure temperatures with a greater degree of accuracy than is possible with thermocouples, RTDs (Resistive Temperature Detectors) are typically used. Calibration standards use specialized high-precision RTD probes with accuracies approaching 0.001 F. These are extremely delicate devices, and far too costly to be used in test facility instrumentation. Less costly sensors which are designed for aeronautical wind tunnel testing are available and can be readily adapted to probes, rakes, and test rigs. With proper signal conditioning of the sensor, temperature accuracies of 0.1 F are obtainable. For reasons that will be explored in this paper, the Anderson current loop is the preferred method of signal conditioning. This scheme has been used in NASA Lewis Research Center's 9 x 15 Low Speed Wind Tunnel, and is detailed here.
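    Converting an RTD resistance reading into temperature is typically done with the Callendar-Van Dusen equation; the snippet below inverts the standard IEC 60751 Pt100 polynomial for temperatures at or above 0 °C. It illustrates only the resistance-to-temperature step, not the Anderson current-loop conditioning itself, which serves to remove lead-wire resistance from the measured R.

```python
import math

R0 = 100.0          # Pt100 resistance at 0 degC (ohms)
A = 3.9083e-3       # IEC 60751 Callendar-Van Dusen coefficients
B = -5.775e-7

def pt100_temperature(R):
    """Invert R = R0 * (1 + A*T + B*T^2) for T >= 0 degC (quadratic formula)."""
    return (-A + math.sqrt(A * A - 4.0 * B * (1.0 - R / R0))) / (2.0 * B)

print(round(pt100_temperature(138.506), 2))  # ~100.0 degC per the IEC 60751 table
```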

  14. Statistical Optimization of Ultraviolet Irradiate Conditions for Vitamin D2 Synthesis in Oyster Mushrooms (Pleurotus ostreatus) Using Response Surface Methodology

    PubMed Central

    Wu, Wei-Jie; Ahn, Byung-Yong

    2014-01-01

    Response surface methodology (RSM) was used to determine the optimum vitamin D2 synthesis conditions in oyster mushrooms (Pleurotus ostreatus). Ultraviolet B (UV-B) was selected as the most efficient irradiation source in a preliminary experiment, which also established the ranges of three independent variables: ambient temperature (25–45°C), exposure time (40–120 min), and irradiation intensity (0.6–1.2 W/m2). The statistical analysis indicated that, for the range which was studied, irradiation intensity was the most critical factor that affected vitamin D2 synthesis in oyster mushrooms. Under optimal conditions (ambient temperature of 28.16°C, UV-B intensity of 1.14 W/m2, and exposure time of 94.28 min), the experimental vitamin D2 content of 239.67 µg/g (dry weight) was in very good agreement with the predicted value of 245.49 µg/g, which verified the practicability of this strategy. Compared to fresh mushrooms, lyophilized mushroom powder can synthesize a remarkably higher level of vitamin D2 (498.10 µg/g) within a much shorter UV-B exposure time (10 min), and thus should receive attention from the food processing industry. PMID:24736742
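    The optimum-finding step in response surface methodology reduces to fitting a quadratic model and locating its stationary point. A one-factor sketch (the numbers below are illustrative, not the study's data; the full analysis fits a multivariate quadratic with interaction terms and solves the gradient-zero system instead):

```python
import numpy as np

# Hypothetical coded UV-B intensities (W/m2) and vitamin D2 yields (ug/g)
intensity = np.array([0.6, 0.75, 0.9, 1.05, 1.2])
yield_d2 = 245.0 - 300.0 * (intensity - 1.14) ** 2   # noiseless toy response

c2, c1, c0 = np.polyfit(intensity, yield_d2, 2)      # quadratic response surface
opt = -c1 / (2.0 * c2)                               # stationary point of the parabola
print(round(opt, 2))  # 1.14, matching the toy response's built-in optimum
```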

  15. Assessment of hygienic conditions of ground pepper (Piper nigrum L.) on the market in Sao Paulo City, by means of two methodologies for detecting the light filth

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Pepper should be collected, processed, and packed under optimum conditions to avoid the presence of foreign matter. The hygienic conditions of ground pepper marketed in São Paulo city were assessed by determining the presence of foreign matter by means of two extraction methodologies. This study...

  16. Geometric scaling of artificial hair sensors for flow measurement under different conditions

    NASA Astrophysics Data System (ADS)

    Su, Weihua; Reich, Gregory W.

    2017-03-01

    Artificial hair sensors (AHSs) have been developed for prediction of the local flow speed and aerodynamic force around an airfoil and subsequent application in vibration control of the airfoil. Usually, a specific sensor design is only sensitive to the flow speeds within its operating flow measurement region. This paper aims to extend this AHS-based flow measurement concept to different flow speed conditions by properly sizing the parameters of the sensors, including the dimensions of the artificial hair, capillary, and carbon nanotubes (CNTs) that make up the sensor design, based on a baseline sensor design and its working flow condition. In doing so, the glass fiber hair is modeled as a cantilever beam with an elastic foundation, subject to the distributed aerodynamic drag over the length of the hair. Hair length and diameter, capillary depth, and CNT height are scaled by keeping the maximum compressive strain of the CNTs constant for different sensors under different speed conditions. Numerical studies will demonstrate the feasibility of the geometric scaling methodology by designing AHSs for aircraft with different dimensions and flight conditions, starting from the same baseline sensor. Finally, the operating bandwidths of the scaled sensors are explored.

  17. "MARK I" MEASUREMENT METHODOLOGY FOR POLLUTION PREVENTION PROGRESS OCCURRING AS A RESULT OF PRODUCT DECISIONS

    EPA Science Inventory

    A methodology for assessing progress in pollution prevention resulting from product redesign, reformulation or replacement is described. The method compares the pollution generated by the original product with that from the modified or replacement product, taking into account, if...

  18. Collector probe measurements of ohmic conditioning discharges in TFTR

    SciTech Connect

    Kilpatrick, S.J.; Dylla, H.F.; Manos, D.M.; Cohen, S.A.; Wampler, W.R.; Bastasz, R.

    1989-03-01

    Special limiter conditioning techniques using low density deuterium or helium discharges have produced enhanced plasma confinement in TFTR. Measurements with a rotatable collector probe have been made to increase our understanding of the boundary layer during these conditioning sequences. A set of silicon films behind slits facing in the ion and electron drift directions was exposed to four different D+ and He2+ discharge sequences. The amounts of deuterium and impurities trapped in the surface regions of the samples have been measured by different analytical techniques, including nuclear reaction analysis for retained deuterium, Rutherford backscattering spectroscopy for carbon and metals, and Auger electron spectroscopy for carbon, oxygen, and metals. Up to 1.9 × 10^17 cm^-2 of deuterium was detected in codeposited carbon layers with D/C generally in the range of the bulk saturation limit. Radial profiles and ion drift/electron drift asymmetries are discussed. 21 refs., 3 figs., 1 tab.

  19. Safety Assessment for a Surface Repository in the Chernobyl Exclusion Zone - Methodology for Assessing Disposal under Intervention Conditions - 13476

    SciTech Connect

    Haverkamp, B.; Krone, J.; Shybetskyi, I.

    2013-07-01

    The Radioactive Waste Disposal Facility (RWDF) Buryakovka was constructed in 1986 as part of the intervention measures after the accident at Chernobyl NPP (ChNPP). Today, RWDF Buryakovka is still being operated, but its maximum capacity is nearly reached. Plans for enlargement of the facility have existed for more than 10 years but have not yet been implemented. In the framework of a European Commission project, DBE Technology GmbH prepared a safety analysis report of the facility in its current state (SAR) and a preliminary safety analysis report (PSAR) based on the planned enlargement. Due to its history, RWDF Buryakovka does not fully comply with today's best international practices and the latest Ukrainian regulations in this area. The most critical aspects are its inventory of long-lived radionuclides, and the non-existent multi-barrier waste confinement system. A significant part of the project was dedicated, therefore, to the development of a methodology for the safety assessment taking into consideration the facility's special situation, and to reaching an agreement with all stakeholders involved in the later review and approval procedure of the safety analysis reports. The main aspect of the agreed methodology was to analyze safety not strictly on the basis of regulatory requirements but on an assessment of the actual situation of the facility, including its location within the Exclusion Zone. For both safety analysis reports, SAR and PSAR, the assessment of the long-term safety led to results that were either within regulatory limits or within the limits allowing for a specific situational evaluation by the regulator. (authors)

  20. Inference of human affective states from psychophysiological measurements extracted under ecologically valid conditions

    PubMed Central

    Betella, Alberto; Zucca, Riccardo; Cetnarski, Ryszard; Greco, Alberto; Lanatà, Antonio; Mazzei, Daniele; Tognetti, Alessandro; Arsiwalla, Xerxes D.; Omedas, Pedro; De Rossi, Danilo; Verschure, Paul F. M. J.

    2014-01-01

    Compared to standard laboratory protocols, the measurement of psychophysiological signals in real world experiments poses technical and methodological challenges due to external factors that cannot be directly controlled. To address this problem, we propose a hybrid approach based on an immersive and human accessible space called the eXperience Induction Machine (XIM), that incorporates the advantages of a laboratory within a life-like setting. The XIM integrates unobtrusive wearable sensors for the acquisition of psychophysiological signals suitable for ambulatory emotion research. In this paper, we present results from two different studies conducted to validate the XIM as a general-purpose sensing infrastructure for the study of human affective states under ecologically valid conditions. In the first investigation, we recorded and classified signals from subjects exposed to pictorial stimuli corresponding to a range of arousal levels, while they were free to walk and gesticulate. In the second study, we designed an experiment that follows the classical conditioning paradigm, a well-known procedure in the behavioral sciences, with the additional feature that participants were free to move in the physical space, as opposed to similar studies measuring physiological signals in constrained laboratory settings. Our results indicate that, by using our sensing infrastructure, it is indeed possible to infer human event-elicited affective states through measurements of psychophysiological signals under ecological conditions. PMID:25309310

  1. Lab measurements to support modeling terahertz propagation in brownout conditions

    NASA Astrophysics Data System (ADS)

    Fiorino, Steven T.; Grice, Phillip M.; Krizo, Matthew J.; Bartell, Richard J.; Haiducek, John D.; Cusumano, Salvatore J.

    2010-04-01

    Brownout, the loss of visibility caused by dust and debris introduced into the atmosphere by the downwash of a helicopter, currently represents a serious challenge to U.S. military operations in Iraq and Afghanistan, where it has been cited as a factor in the majority of helicopter accidents. Brownout not only reduces visibility, but can create visual illusions for the pilot and difficult conditions for crew beneath the aircraft. Terahertz imaging may provide one solution to this problem. Terahertz frequency radiation readily propagates through the dirt aerosols present in brownout, and therefore can provide an imaging capability to improve effective visibility for pilots, helping prevent the associated accidents. To properly model the success of such systems, it is necessary to determine the optical properties of such obscurants in the terahertz regime. This research attempts to empirically determine, and measure in the laboratory, the full complex index of refraction optical properties of dirt aerosols representative of brownout conditions. These properties are incorporated into the AFIT/CDE Laser Environmental Effects Definition and Reference (LEEDR) software, allowing this program to more accurately assess the propagation of terahertz radiation under brownout conditions than was done in the past with estimated optical properties.

  2. Suspended matter concentrations in coastal waters: Methodological improvements to quantify individual measurement uncertainty

    NASA Astrophysics Data System (ADS)

    Röttgers, Rüdiger; Heymann, Kerstin; Krasemann, Hajo

    2014-12-01

    Measurements of total suspended matter (TSM) concentration and the discrimination of the particulate inorganic (PIM) and organic matter fraction by the loss-on-ignition method are susceptible to significant and contradictory bias errors from: (a) retention of sea salt in the filter (despite washing with deionized water), and (b) filter material loss during washing and combustion procedures. Several methodological procedures are described to avoid or correct errors associated with these biases, but no analysis of the final uncertainty for the overall mass concentration determination has yet been performed. Typically, the exact values of these errors are unknown and can only be estimated. Measurements were performed in coastal and estuarine waters of the German Bight that allowed the individual error for each sample to be determined with respect to a systematic mass offset. This was achieved by using different volumes of the sample and analyzing the mass over volume relationship by linear regression. The results showed that the variation in the mass offset is much larger than expected (mean mass offset: 0.85 ± 0.84 mg, range: -2.4 to 7.5 mg) and that it often leads to rather large relative errors even when TSM concentrations were high. Similarly large variations were found for the mass offset for PIM measurements. Correction with a mean offset determined with procedural control filters reduced the maximum error to <60%. The determination error for the TSM concentration was <40% when three different volumes were used, and for the majority of the samples the error was <10%. When six different volumes were used and outliers removed, the error was always <25%; very often errors of only a few percent were obtained. The approach proposed here can determine the individual determination error for each sample, is independent of bias errors, can be used for TSM and PIM determination, and allows individual quality control for samples from coastal and estuarine waters. It should
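    The volume-regression idea in this abstract is simple to reproduce: filter several volumes of the same sample, regress filter mass gain on filtered volume, and read the TSM concentration from the slope while the intercept isolates the per-sample mass offset. The numbers below are illustrative, not the study's data:

```python
import numpy as np

volume = np.array([0.25, 0.5, 1.0, 1.5, 2.0, 3.0])   # litres filtered per replicate
# Toy masses: 0.85 mg offset + 12.3 mg/L TSM, plus small weighing errors (mg)
mass = 0.85 + 12.3 * volume + np.array([0.02, -0.03, 0.01, 0.0, -0.02, 0.02])

tsm_conc, offset = np.polyfit(volume, mass, 1)       # slope and intercept
print(round(tsm_conc, 2), round(offset, 2))          # slope ~12.3 mg/L, intercept ~0.85 mg
```

    Because the offset is estimated per sample rather than assumed from blank filters, the concentration estimate (the slope) is insensitive to the salt-retention and filter-loss biases described above.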

  3. Evaluation of methodological aspects of digestibility measurements in ponies fed different grass hays.

    PubMed

    Schaafstra, F J W C; van Doorn, D A; Schonewille, J T; Wartena, F C; Zoon, M V; Blok, M C; Hendriks, W H

    2015-10-01

    Methodological aspects of digestibility measurements of feedstuffs for equines were studied in four Welsh pony geldings consuming four grass-hay diets in a 4 × 4 Latin square design. Diets contained either a low (L), medium (M), high (H), or very high (VH) ADF content (264, 314, 375, or 396 g·kg DM, respectively). Diets were supplemented with minerals, vitamins, and TiO2 (3.9 g Ti/d). Daily feces excreted were collected quantitatively over 10 consecutive days and analyzed for moisture, ash, ADL, AIA, and titanium (Ti). Minimum duration of total fecal collection (TFC) required for an accurate estimation of apparent organic matter digestibility (OMD) of grass hay was assessed. Based on literature and the calculated cumulative OMD assessed over 10 consecutive days of TFC, a minimum duration of at least 5 consecutive days of fecal collection is recommended for accurate estimation of dry matter digestibility (DMD) and OMD in ponies. The 5-d collection should be preceded by a 14-d adaptation period to allow the animals to adapt to the diets and become accustomed to the collection procedures. Mean fecal recovery over 10 d across diets for ADL, AIA, and Ti was 93.1% (SE 1.9), 98.9% (SE 5.5), and 97.1% (SE 1.8), respectively. Evaluation of CV of mean fecal recoveries obtained by ADL, AIA, and Ti showed that variation in fecal Ti (6.8) and ADL excretion (7.0) was relatively low compared to AIA (12.3). In conclusion, the use of internal ADL and externally supplemented Ti are preferred as markers to be used in digestibility trials in equines fed grass-hay diets.
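    Both estimation routes mentioned in this abstract follow standard formulas: total-collection digestibility is a direct mass balance, and marker-based digestibility (whether the marker is ADL, AIA, or Ti) uses the diet-to-feces marker concentration ratio. A generic sketch with made-up numbers:

```python
def digestibility_total_collection(intake, excreted):
    """Apparent digestibility from total fecal collection (same units in and out)."""
    return 1.0 - excreted / intake

def digestibility_marker(marker_diet, marker_feces, nutrient_diet, nutrient_feces):
    """Apparent digestibility from an indigestible marker (concentration basis)."""
    return 1.0 - (marker_diet / marker_feces) * (nutrient_feces / nutrient_diet)

# Illustrative values: marker concentrates 4x in feces; OM falls from 900 to 800 g/kg DM
omd = digestibility_marker(2.0, 8.0, 900.0, 800.0)
print(round(omd, 3))  # 0.778
```

    A marker recovery below 100% inflates the marker-based digestibility estimate, which is why the abstract reports fecal recoveries and their variability for each candidate marker.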

  4. Nuclear Structure Measurements of Fermium-254 and Advances in Target Production Methodologies

    NASA Astrophysics Data System (ADS)

    Gothe, Oliver Ralf

    The Berkeley Gas-filled Separator (BGS) has been upgraded with a new gas control system. It allows for accurate control of hydrogen and helium gas mixtures. This greatly increases the capabilities of the separator by reducing background signals in the focal plane detector for asymmetric nuclear reactions. It has also been shown that gas mixtures can be used to focus the desired reaction products into a smaller area, thereby increasing the experimental efficiency. A new electrodeposition cell has been developed to produce metal oxide targets for experiments at the BGS. The new cell has been characterized and was used to produce americium targets for the production of element 115 in the reaction 243Am(48Ca,3n)288115. Additionally, a new method of producing targets for nuclear reactions was explored. A procedure for producing targets via Polymer Assisted Deposition (PAD) was developed, and targets produced via this method were tested using the nuclear reaction 208Pb(40Ar,4n)244Fm to determine their in-beam performance. It was determined that the silicon nitride backings used in this procedure are not feasible due to their crystal structures, and alternative backing materials have been tested and proposed. A previously unknown level in 254Fm has been identified at 985.7 keV utilizing a newly developed low-background coincidence apparatus. 254No was produced in the reaction 208Pb(48Ca,n)254No. Reaction products were guided to the two-clover low background detector setup via a recoil transfer chamber. The new level has been assigned a spin of 2- and has tentatively been identified as the octupole vibration in 254Fm. Transporting evaporation residues to a two-clover, low background detector setup can effectively be used to perform gamma-spectroscopy measurements of nuclei that are not accessible by current common methodologies. This technique provides an excellent addition to previously available tools such as in-beam spectroscopy and gamma-ray tracking arrays.

  5. Measurement in Learning Games Evolution: Review of Methodologies Used in Determining Effectiveness of "Math Snacks" Games and Animations

    ERIC Educational Resources Information Center

    Trujillo, Karen; Chamberlin, Barbara; Wiburg, Karin; Armstrong, Amanda

    2016-01-01

    This article captures the evolution of research goals and methodologies used to assess the effectiveness and impact of a set of mathematical educational games and animations for middle-school aged students. The researchers initially proposed using a mixed model research design of formative and summative measures, such as user-testing,…

  6. Optimisation of the operational conditions of trichloroethylene degradation using Trametes versicolor under quinone redox cycling conditions using central composite design methodology.

    PubMed

    Vilaplana, Marcel; García, Ana Belén; Caminal, Gloria; Guillén, Francisco; Sarrà, Montserrat

    2012-04-01

    Extracellular radicals produced by Trametes versicolor under quinone redox cycling conditions can degrade a large variety of pollutant compounds, including trichloroethylene (TCE). This study investigated the effect of the agitation speed and the gas-liquid phase volume ratio on TCE degradation using central composite design (CCD) methodology for a future scale-up to a reactor system. The agitation speed ranged from 90 to 200 rpm, and the volume ratio ranged from 0.5 to 4.4. The results demonstrated the important and positive effect of the agitation speed and an interaction between the two factors on TCE degradation. Although the volume ratio did not have a significant effect if the agitation speed value was between 160 and 200 rpm, at lower speed values, the specific pollutant degradation was clearly more extensive at low volume ratios than at high volume ratios. The fitted response surface was validated by performing an experiment using the parameter combination in the model that maximised TCE degradation. The results of the experiments carried out using different biomass concentrations demonstrated that the biomass concentration had a positive effect on pollutant degradation if the amount of biomass present was lower than 1.6 g dry weight l(-1). The results show that the maximum TCE degradation was obtained at the highest speed (200 rpm), gas-liquid phase volume ratio (4.4), and a biomass concentration of 1.6 g dry weight l(-1).

  7. An analysis of the pilot point methodology for automated calibration of an ensemble of conditionally simulated transmissivity fields

    USGS Publications Warehouse

    Cooley, R.L.

    2000-01-01

    An analysis of the pilot point method for automated calibration of an ensemble of conditionally simulated transmissivity fields was conducted on the basis of the simplifying assumption that the flow model is a linear function of log transmissivity. The analysis shows that the pilot point and conditional simulation method of model calibration and uncertainty analysis can produce accurate uncertainty measures if it can be assumed that errors of unknown origin in the differences between observed and model-computed water pressures are small. When this assumption is not met, the method could yield significant errors from overparameterization and the neglect of potential sources of model inaccuracy. The conditional simulation part of the method is also shown to be a variant of the percentile bootstrap method, so that when applied to a nonlinear model, the method is subject to bootstrap errors. These sources of error must be considered when using the method.
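    Since the abstract notes that the conditional-simulation step is a variant of the percentile bootstrap, the mechanics of a percentile bootstrap interval are worth recalling. A generic sketch, not the groundwater model itself (the ensemble values below are simulated stand-ins):

```python
import numpy as np

rng = np.random.default_rng(7)
# Stand-ins for an ensemble of calibrated log-transmissivity values
sample = rng.normal(loc=-4.0, scale=0.5, size=200)

def percentile_bootstrap_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, rng=rng):
    """Percentile bootstrap CI: resample with replacement, take stat quantiles."""
    boots = np.array([stat(rng.choice(data, size=data.size, replace=True))
                      for _ in range(n_boot)])
    return np.percentile(boots, [100 * alpha / 2, 100 * (1 - alpha / 2)])

lo, hi = percentile_bootstrap_ci(sample)
print(lo < sample.mean() < hi)  # True for this seed
```

    The bootstrap errors the abstract warns about arise for nonlinear models, where the resampled statistic's distribution need not be centered on the truth.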

  8. Optimization of Extraction Condition for Alisol B and Alisol B Acetate in Alismatis Rhizoma using Response Surface Methodology

    PubMed Central

    Lee, A Yeong; Park, Jun Yeon; Chun, Jin Mi; Moon, Byeong Cheol; Kang, Byoung Kab; Seo, Young Bae; Shin, Hyeun-Kyoo; Kim, Ho Kyoung

    2012-01-01

    Alismatis Rhizoma, derived from the rhizomes of the perennial herb Alisma orientalis (Sam.) Juzep., has been used to treat seborrheic dermatitis, eczema, polydipsia, and pedal edema. We aimed to determine the concentrations of the compounds alisol B and alisol B acetate present in a sample of the herb using high-performance liquid chromatography coupled with a photodiode array detector. We selected methanol as the optimal solvent considering the structures of alisol B and alisol B acetate. We estimated the proportion of alisol B and alisol B acetate in a standard extract to be 0.0434% and 0.2365% in methanol, respectively. To optimize extraction, we employed response surface methodology to determine the yields of alisol B and alisol B acetate, which mapped out a central composite design consisting of 15 experimental points. The extraction parameters were time, concentration, and sample weight. The predicted concentration of alisol B derivatives was estimated to be 0.2388% under the following conditions: 81 min of extraction time, 76% methanol concentration, and 1.52 g of sample weight. PMID:23335845

  9. Reverse micellar extraction of lectin from black turtle bean (Phaseolus vulgaris): optimisation of extraction conditions by response surface methodology.

    PubMed

    He, Shudong; Shi, John; Walid, Elfalleh; Zhang, Hongwei; Ma, Ying; Xue, Sophia Jun

    2015-01-01

    Lectin from black turtle bean (Phaseolus vulgaris) was extracted and purified by the reverse micellar extraction (RME) method. Response surface methodology (RSM) was used to optimise the processing parameters for both forward and backward extraction. Hemagglutinating activity analysis, SDS-PAGE, RP-HPLC and FTIR techniques were used to characterise the lectin. The optimum extraction conditions were determined as 77.59 mM NaCl, pH 5.65, and 127.44 mM sodium bis(2-ethylhexyl) sulfosuccinate (AOT) for the forward extraction; and 592.97 mM KCl, pH 8.01 for the backward extraction. The yield was 63.21 ± 2.35 mg protein/g bean meal with a purification factor of 8.81 ± 0.17. The efficiency of RME was confirmed by SDS-PAGE and RP-HPLC, respectively. FTIR analysis indicated there were no significant changes in the secondary protein structure. Comparison with the conventional chromatographic method confirmed that the RME method could be used for the purification of lectin from the crude extract.

  10. Measurement of Two-Phase Flow Characteristics Under Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Keshock, E. G.; Lin, C. S.; Edwards, L. G.; Knapp, J.; Harrison, M. E.; Xhang, X.

    1999-01-01

    This paper describes the technical approach and initial results of a test program for studying two-phase annular flow under the simulated microgravity conditions of KC-135 aircraft flights. A helical coil flow channel orientation was utilized in order to circumvent the restrictions normally associated with drop tower or aircraft flight tests with respect to two-phase flow, namely spatial restrictions preventing channel lengths of sufficient size to accurately measure pressure drops. Additionally, the helical coil geometry is of interest in itself, considering that operating in a microgravity environment vastly simplifies the two-phase flows occurring in coiled flow channels under 1-g conditions for virtually any orientation. Pressure drop measurements were made across four stainless steel coil test sections, having a range of inside tube diameters (0.95 to 1.9 cm), coil diameters (25 - 50 cm), and length-to-diameter ratios (380 - 720). High-speed video photographic flow observations were made in the transparent straight sections immediately preceding and following the coil test sections. A transparent coil of tygon tubing of 1.9 cm inside diameter was also used to obtain flow visualization information within the coil itself. Initial test data has been obtained from one set of KC-135 flight tests, along with benchmark ground tests. Preliminary results appear to indicate that accurate pressure drop data is obtainable using a helical coil geometry that may be related to straight channel flow behavior. Also, video photographic results appear to indicate that the observed slug-annular flow regime transitions agree quite reasonably with the Dukler microgravity map.

  11. Generic design methodology for the development of three-dimensional structured-light sensory systems for measuring complex objects

    NASA Astrophysics Data System (ADS)

    Marin, Veronica E.; Chang, Wei Hao Wayne; Nejat, Goldie

    2014-11-01

    Structured-light (SL) techniques are emerging as popular noncontact approaches for obtaining three-dimensional (3-D) measurements of complex objects for real-time applications in manufacturing, bioengineering, and robotics. The performance of SL systems is determined by the emitting (i.e., projector) and capturing (i.e., camera) hardware components and the triangulation configuration between them and an object of interest. A generic design methodology is presented to determine optimal triangulation configurations for SL systems. These optimal configurations are determined with respect to a set of performance metrics: (1) minimizing the 3-D reconstruction errors, (2) maximizing the pixel-to-pixel correspondence between the projector and camera, and (3) maximizing the dispersion of the measured 3-D points within a measurement volume, while satisfying design constraints based on hardware and user-defined specifications. The proposed methodology utilizes a 3-D geometric triangulation model based on ray-tracing geometry and pin-hole models for the projector and camera. Using the methodology, a set of optimal system configurations can be determined for a given set of hardware components. The design methodology was applied to a real-time SL system for surface profiling of complex objects. Experiments were conducted with an optimal sensor configuration and its performance verified with respect to a nonoptimal hardware configuration.
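The triangulation at the heart of such SL systems can be sketched with a minimal linear (DLT) triangulator. This is not the authors' optimization code: the pin-hole intrinsics and the 0.5 m baseline below are invented to make the example self-contained.

```python
import numpy as np

# Recover a 3-D point from corresponding camera/projector pixels via the
# linear (DLT) least-squares method, given both 3x4 projection matrices.
def triangulate(P_cam, P_proj, uv_cam, uv_proj):
    rows = []
    for P, (u, v) in ((P_cam, uv_cam), (P_proj, uv_proj)):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, Vt = np.linalg.svd(np.array(rows))
    X = Vt[-1]                 # null vector = homogeneous 3-D point
    return X[:3] / X[3]

# Hypothetical setup: identical intrinsics, camera at the origin,
# projector translated 0.5 m along x (stereo-like baseline).
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P_cam = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P_proj = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 2.0])
X_hat = triangulate(P_cam, P_proj, project(P_cam, X_true), project(P_proj, X_true))
print(X_hat)  # ≈ [0.2, -0.1, 2.0]
```

With noiseless correspondences the point is recovered exactly; the paper's design metrics concern how the triangulation configuration degrades this reconstruction under real pixel noise.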

  12. A novel methodology to measure methane bubble sizes in the water column

    NASA Astrophysics Data System (ADS)

    Hemond, H.; Delwiche, K.; Senft-Grupp, S.; Manganello, T.

    2014-12-01

    The fate of methane ebullition from lake sediments is dependent on initial bubble size. Rising bubbles are subject to dissolution, reducing the fraction of methane that ultimately enters the atmosphere while increasing concentrations of aqueous methane. Smaller bubbles not only rise more slowly, but dissolve more rapidly than larger bubbles. Thus, understanding methane bubble size distributions in the water column is critical to predicting atmospheric methane emissions from ebullition. However, current methods of measuring methane bubble sizes in situ are resource-intensive, typically requiring divers, video equipment, sonar, or hydroacoustic instruments. The complexity and cost of these techniques point to the strong need for a simple, autonomous device that can measure bubble size distributions and be deployed unattended over long periods of time. We describe a bubble sizing device that can be moored in the subsurface and can intercept and measure the size of bubbles as they rise. The instrument uses a novel optical measurement technique with infrared LEDs and IR-sensitive photodetectors combined with a custom-designed printed circuit board. An on-board microcomputer processes the raw optical signals and stores the information needed to calculate bubble volume. The electronics are housed within a pressure case fabricated from standard PVC fittings and are powered by size C alkaline batteries. The bill of materials costs less than $200, allowing us to deploy multiple sensors at various locations within Upper Mystic Lake, MA. This novel device will provide information on how methane bubble sizes vary both spatially and temporally. We present data from tests under controlled laboratory conditions and from deployments in Upper Mystic Lake.
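The abstract does not specify the on-board sizing algorithm. A first-order illustration of how an optical gate can yield a volume: if a rising bubble occludes an IR beam for a measured time and its rise velocity is known or assumed, the vertical chord is roughly velocity × time, and treating that chord as a sphere diameter gives a volume estimate. The numbers below are invented.

```python
import math

# First-order bubble volume from beam-occlusion timing (illustrative only;
# real devices must handle non-spherical bubbles and velocity uncertainty).
def bubble_volume_mm3(occlusion_time_s, rise_velocity_mm_s):
    d_mm = rise_velocity_mm_s * occlusion_time_s   # estimated diameter (chord)
    return math.pi / 6.0 * d_mm ** 3               # volume of equivalent sphere

# Example: 20 ms occlusion at an assumed 250 mm/s rise speed -> d = 5 mm.
v = bubble_volume_mm3(0.020, 250.0)
print(round(v, 1))  # ~65.4 mm^3
```

Because rise velocity itself depends on bubble size, a practical instrument would iterate this estimate or measure transit time across two beams at a known separation.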

  13. Measurements of optical underwater turbulence under controlled conditions

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.; Gladysz, S.; Almeida de Sá Barros, R.; Matt, S.; Nootz, G. A.; Josset, D. B.; Hou, W.

    2016-05-01

    Laser beam propagation underwater is becoming an important research topic because of the high demand for its potential applications. Notably, the ability to image underwater at long distances is highly desired for scientific and military purposes, including submarine awareness, diver visibility, and mine detection. Optical communication in the ocean can provide covert data transmission at much higher rates than those available with acoustic techniques, and it is now desired for certain military and scientific applications that involve sending large quantities of data. Unfortunately, the underwater environment presents serious challenges for the propagation of laser beams. Even in clean ocean water, extinction due to absorption and scattering theoretically limits the useful range to a few attenuation lengths. However, extending the laser light propagation range to this theoretical limit leads to significant beam distortions due to optical underwater turbulence. Experiments show that the magnitude of the distortions caused by water temperature and salinity fluctuations can significantly exceed the magnitude of the beam distortions due to atmospheric turbulence, even for relatively short propagation distances. We present direct measurements of optical underwater turbulence under the controlled conditions of a laboratory water tank, using two separate techniques involving a wavefront sensor and an LED array. These independent approaches will enable the development of an underwater turbulence power spectrum model based directly on spatial-domain measurements and will lead to accurate predictions of underwater beam propagation.

  14. Optimization of Extraction Conditions for Maximal Phenolic, Flavonoid and Antioxidant Activity from Melaleuca bracteata Leaves Using the Response Surface Methodology

    PubMed Central

    Hou, Wencheng; Zhang, Wei; Chen, Guode; Luo, Yanping

    2016-01-01

    Melaleuca bracteata is a yellow-leaved tree belonging to the Melaleuca genus. Species from this genus are known to be good sources of natural antioxidants; for example, the “tea tree oil” derived from M. alternifolia is used in food processing to extend the shelf life of products. In order to determine whether M. bracteata contains novel natural antioxidants, the components of M. bracteata ethanol extracts were analyzed by gas chromatography–mass spectrometry. Total phenolic and flavonoid contents were extracted and the antioxidant activities of the extracts evaluated. Single-factor experiments, central composite rotatable design (CCRD) and response surface methodology (RSM) were used to optimize the extraction conditions for total phenolic content (TPC) and total flavonoid content (TFC). Ferric reducing power (FRP) and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·) scavenging capacity were used as the evaluation indices of antioxidant activity. The results showed that the main components of M. bracteata ethanol extracts are methyl eugenol (86.86%) and trans-cinnamic acid methyl ester (6.41%). The single-factor experiments revealed that the ethanol concentration is the key factor determining the TPC, TFC, FRP and DPPH· scavenging capacity. RSM results indicated that the optimum for all four evaluation indices was achieved by extracting for 3.65 days at 53.26°C in 34.81% ethanol. Under these conditions, the TPC, TFC, FRP and DPPH· scavenging capacity reached values of 88.6 ± 1.3 mg GAE/g DW, 19.4 ± 0.2 mg RE/g DW, 2.37 ± 0.01 mM Fe2+/g DW and 86.0 ± 0.3%, respectively, which were higher than those of the positive control, methyl eugenol (FRP 0.97 ± 0.02 mM, DPPH· scavenging capacity 58.6 ± 0.7%) at comparable concentrations. Therefore, the extracts of M. bracteata leaves have high antioxidant activity that cannot be attributed solely to methyl eugenol. Further research could lead to the development of a potent new natural antioxidant. PMID

  15. Optimization of Extraction Conditions for Maximal Phenolic, Flavonoid and Antioxidant Activity from Melaleuca bracteata Leaves Using the Response Surface Methodology.

    PubMed

    Hou, Wencheng; Zhang, Wei; Chen, Guode; Luo, Yanping

    2016-01-01

    Melaleuca bracteata is a yellow-leaved tree belonging to the Melaleuca genus. Species from this genus are known to be good sources of natural antioxidants; for example, the "tea tree oil" derived from M. alternifolia is used in food processing to extend the shelf life of products. In order to determine whether M. bracteata contains novel natural antioxidants, the components of M. bracteata ethanol extracts were analyzed by gas chromatography-mass spectrometry. Total phenolic and flavonoid contents were extracted and the antioxidant activities of the extracts evaluated. Single-factor experiments, central composite rotatable design (CCRD) and response surface methodology (RSM) were used to optimize the extraction conditions for total phenolic content (TPC) and total flavonoid content (TFC). Ferric reducing power (FRP) and 1,1-diphenyl-2-picrylhydrazyl radical (DPPH·) scavenging capacity were used as the evaluation indices of antioxidant activity. The results showed that the main components of M. bracteata ethanol extracts are methyl eugenol (86.86%) and trans-cinnamic acid methyl ester (6.41%). The single-factor experiments revealed that the ethanol concentration is the key factor determining the TPC, TFC, FRP and DPPH· scavenging capacity. RSM results indicated that the optimum for all four evaluation indices was achieved by extracting for 3.65 days at 53.26°C in 34.81% ethanol. Under these conditions, the TPC, TFC, FRP and DPPH· scavenging capacity reached values of 88.6 ± 1.3 mg GAE/g DW, 19.4 ± 0.2 mg RE/g DW, 2.37 ± 0.01 mM Fe2+/g DW and 86.0 ± 0.3%, respectively, which were higher than those of the positive control, methyl eugenol (FRP 0.97 ± 0.02 mM, DPPH· scavenging capacity 58.6 ± 0.7%) at comparable concentrations. Therefore, the extracts of M. bracteata leaves have high antioxidant activity that cannot be attributed solely to methyl eugenol. Further research could lead to the development of a potent new natural antioxidant.

  16. The effect of self-assembly conditions on the size of di- and tri-block copolymer micelles: solicitation from response surface methodology.

    PubMed

    Honary, Soheila; Lavasanifar, Afsaneh

    2014-08-27

    The objective of this study was to assess the application of response surface methodology in defining the effect of self-assembly conditions on the average diameter of polymeric micelles. Di- and tri-block copolymers of poly(ethylene oxide)-b-poly(α-benzylcarboxylate-ϵ-caprolactone) (PEO-PBCL) and PBCL-b-poly(ethylene glycol)-b-PBCL (PBCL-PEG-PBCL) were synthesized through ring-opening polymerization of α-benzyl-ε-carboxylate using MePEO or dihydroxy PEG as initiator, respectively. Polymeric micelles were formed through solubilization of the block copolymers in acetone followed by drop-wise addition of this solution to water. The polymer concentration was varied and the intensity mean diameter of the self-assembled structures was measured by dynamic light scattering. The experimental data were fitted to a mathematical model, and the experimental conditions leading to the production of micelles of a given size (30, 60 or 90 nm for tri-block and 30 nm for di-block copolymers) were predicted. A good match between predicted and experimental data was observed. The results showed that it is possible to obtain micelles of a given size using block copolymers of different molecular weights, or micelles of different sizes at a given block copolymer molecular weight, by manipulating the polymer concentration. These observations show that reproducible micelles of defined average diameter can be prepared by co-solvent evaporation by controlling the polymer concentration used.
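The prediction step described above can be sketched as a fit-then-invert calculation: fit mean diameter versus polymer concentration, then solve the fitted model for the concentration expected to give a target diameter. The concentration/size data below are entirely made up, and a simple quadratic stands in for whatever model the authors fitted.

```python
import numpy as np

# Hypothetical DLS data: mean micelle diameter vs. polymer concentration.
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # mg/mL (illustrative)
size = np.array([32.0, 41.0, 55.0, 74.0, 98.0])   # nm (illustrative)

coef = np.polyfit(conc, size, 2)                  # quadratic size(conc) model

def conc_for_size(target_nm):
    # Invert the fitted quadratic: solve a*c^2 + b*c + (c0 - target) = 0.
    a, b, c0 = coef
    roots = np.roots([a, b, c0 - target_nm])
    real = roots[np.isreal(roots)].real
    # keep the smallest positive (physically meaningful, in-range) solution
    return min(r for r in real if r > 0)

target_conc = conc_for_size(60.0)
print(round(target_conc, 2))  # concentration predicted to give 60 nm micelles
```

The prediction is only trustworthy within the fitted concentration range; extrapolating the quadratic beyond the measured data would be unjustified.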

  17. Measuring Sense of Community: A Methodological Interpretation of the Factor Structure Debate

    ERIC Educational Resources Information Center

    Peterson, N. Andrew; Speer, Paul W.; Hughey, Joseph

    2006-01-01

    Instability in the factor structure of the Sense of Community Index (SCI) was tested as a methodological artifact. Confirmatory factor analyses, tested with two data sets, supported neither the proposed one-factor nor the four-factor (needs fulfillment, group membership, influence, and emotional connection) SCI. Results demonstrated that the SCI…

  18. Defining and Measuring Engagement and Learning in Science: Conceptual, Theoretical, Methodological, and Analytical Issues

    ERIC Educational Resources Information Center

    Azevedo, Roger

    2015-01-01

    Engagement is one of the most widely misused and overgeneralized constructs found in the educational, learning, instructional, and psychological sciences. The articles in this special issue represent a wide range of traditions and highlight several key conceptual, theoretical, methodological, and analytical issues related to defining and measuring…

  19. Measuring Team Shared Understanding Using the Analysis-Constructed Shared Mental Model Methodology

    ERIC Educational Resources Information Center

    Johnson, Tristan E.; O'Connor, Debra L.

    2008-01-01

    Teams are an essential part of successful performance in learning and work environments. Analysis-constructed shared mental model (ACSMM) methodology is a set of techniques where individual mental models are elicited and sharedness is determined not by the individuals who provided their mental models but by an analytical procedure. This method…

  20. What Do We Measure? Methodological versus Institutional Validity in Student Surveys

    ERIC Educational Resources Information Center

    Johnson, Jeffrey Alan

    2011-01-01

    This paper examines the tension in the process of designing student surveys between the methodological requirements of good survey design and the institutional needs for survey data. Building on the commonly used argumentative approach to construct validity, I build an interpretive argument for student opinion surveys that allows assessment of the…

  1. The Measure of a Nation: The USDA and the Rise of Survey Methodology

    ERIC Educational Resources Information Center

    Mahoney, Kevin T.; Baker, David B.

    2007-01-01

    Survey research has played a major role in American social science. An outgrowth of efforts by the United States Department of Agriculture in the 1930s, the Division of Program Surveys (DPS) played an important role in the development of survey methodology. The DPS was headed by the ambitious and entrepreneurial Rensis Likert, populated by young…

  2. The Continued Salience of Methodological Issues for Measuring Psychiatric Disorders in International Surveys

    ERIC Educational Resources Information Center

    Tausig, Mark; Subedi, Janardan; Broughton, Christopher; Pokimica, Jelena; Huang, Yinmei; Santangelo, Susan L.

    2011-01-01

    We investigated the extent to which methodological concerns explicitly addressed by the designers of the World Mental Health Surveys persist in the results that were obtained using the WMH-CIDI instrument. We compared rates of endorsement of mental illness symptoms in the United States (very high) and Nepal (very low) as they were affected by…

  3. Antioxidant activity in barley (Hordeum Vulgare L.) grains roasted in a microwave oven under conditions optimized using response surface methodology.

    PubMed

    Omwamba, Mary; Hu, Qiuhui

    2010-01-01

    Microwave processing and cooking of foods is a recent development that is gaining momentum in household as well as large-scale food applications. Barley contains phenol compounds which possess antioxidant activity. In this study the microwave oven roasting conditions were optimized to obtain grains with high antioxidant activity, measured as the ability to scavenge the 1,1-diphenyl-2-picrylhydrazyl (DPPH) free radical. Antioxidant activity of grains roasted under optimum conditions was assessed based on DPPH radical scavenging activity, reducing power and inhibition of oxidation in a linoleic acid system. The optimum condition for obtaining roasted barley with high antioxidant activity (90.5% DPPH inhibition) was found to be 600 W microwave power, 8.5 min roasting time, and 61.5 g (two layers) of grains. The roasting conditions influenced antioxidant activity both individually and interactively. Statistical analysis showed that the model was significant (P < 0.0001). The acetone extract had significantly higher inhibition of lipid peroxidation and DPPH radical scavenging activity than the aqueous extract and alpha-tocopherol. The reducing power of the acetone extract was not significantly different from that of alpha-tocopherol. The acetone extract had twice the phenol content of the aqueous extract, indicating its high extraction efficiency. GC-MS analysis revealed the presence of phenol acids, amino phenols, and quinones. The aqueous extract did not contain 3,4-dihydroxybenzaldehyde and 4-hydroxycinnamic acid, phenol compounds reported to contribute to antioxidant activity in barley grain.
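The DPPH readout quoted here follows the standard assay arithmetic: percent inhibition is the relative drop in absorbance (typically read near 517 nm) of the radical solution with sample versus control. The absorbance values below are invented to reproduce the abstract's 90.5% figure.

```python
# Standard DPPH percent-inhibition calculation (illustrative absorbances).
def dpph_inhibition(abs_control, abs_sample):
    return (abs_control - abs_sample) / abs_control * 100.0

print(round(dpph_inhibition(0.820, 0.078), 1))  # 90.5 (% inhibition)
```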

  4. A methodology for successfully producing global translations of patient reported outcome measures for use in multiple countries.

    PubMed

    Two, Rebecca; Verjee-Lorenz, Aneesa; Clayson, Darren; Dalal, Mehul; Grotzinger, Kelly; Younossi, Zobair M

    2010-01-01

    The production of accurate and culturally relevant translations of patient reported outcome (PRO) measures is essential for the success of international clinical trials. Although many reports have been published regarding the translation of PRO measures, the techniques used to produce single translations for use in multiple countries (global translations) are not well documented. This article addresses this apparent lack of documentation and presents the methodology used to create global translations of the Chronic Liver Disease Questionnaire-Hepatitis C Virus (CLDQ-HCV). The challenges of creating a translation for use in multiple countries are discussed, and the criteria for a global translation project explained. Based on a thorough translation and linguistic validation methodology including concept elaboration, multiple forward translations, two back translations, reviews by in-country clinicians and the instrument developer, pilot testing in each target country and multiple rounds of proofreading, the key concept of the global translation methodology is consistent international harmonization, achieved through the involvement of linguists from each target country at every stage of the process. This methodology enabled the successful resolution of the translation issues encountered, and resulted in consistent translations of the CLDQ-HCV that were linguistically and culturally appropriate for all target countries.

  5. Ciona intestinalis as an emerging model organism: its regeneration under controlled conditions and methodology for egg dechorionation.

    PubMed

    Liu, Li-ping; Xiang, Jian-hai; Dong, Bo; Natarajan, Pavanasam; Yu, Kui-jie; Cai, Nan-er

    2006-06-01

    The ascidian Ciona intestinalis is a model organism of developmental and evolutionary biology and may provide crucial clues concerning two fundamental matters, namely, how chordates originated from the putative deuterostome ancestor and how advanced chordates originated from the simplest chordates. In this paper, a whole-life-span culture of C. intestinalis was conducted. Fed with a diet combination of dry Spirulina, egg yolk, Dicrateria sp., edible yeast and a weaning diet for shrimp, C. intestinalis grew to an average of 59 mm and matured after 60 d of cultivation. This culture process could be repeated using the artificially cultured mature ascidians as material. When the fertilized eggs were maintained at 10, 15, 20 and 25 degrees C, they hatched within 30 h, 22 h, 16 h and 12 h 50 min, respectively, passing through cleavage, blastulation, gastrulation, neurulation, the tailbud stage and the tadpole stage. The tadpole larvae were characterized as typical but simplified chordates because of their dorsal nerve cord, notochord and primordial brain. After 8-24 h of free swimming, the tadpole larvae settled on the substrates and metamorphosed within 1-2 d into filter-feeding sessile juvenile ascidians. In addition, unfertilized eggs were successfully dechorionated within 40 min in filtered seawater containing 1% trypsin and 0.25% EDTA at pH 10.5. After fertilization, the dechorionated eggs developed well and hatched at a normal hatching rate. In conclusion, this paper presents a feasible methodology for rearing the tadpole larvae of C. intestinalis to sexual maturity under controlled conditions, together with detailed observations on the embryogenesis of the laboratory-cultured ascidians, which will facilitate developmental and genetic research using this model system.

  6. Use of Response Surface Methodology to Optimize Culture Conditions for Hydrogen Production by an Anaerobic Bacterial Strain from Soluble Starch

    NASA Astrophysics Data System (ADS)

    Kieu, Hoa Thi Quynh; Nguyen, Yen Thi; Dang, Yen Thi; Nguyen, Binh Thanh

    2016-05-01

    Biohydrogen is a clean source of energy that produces no harmful byproducts during combustion, being a potential sustainable energy carrier for the future. Therefore, biohydrogen produced by anaerobic bacteria via dark fermentation has attracted attention worldwide as a renewable energy source. However, the hydrogen production capability of these bacteria depends on major factors such as substrate, iron-containing hydrogenase, reduction agent, pH, and temperature. In this study, the response surface methodology (RSM) with central composite design (CCD) was employed to improve the hydrogen production by an anaerobic bacterial strain isolated from animal waste in Phu Linh, Soc Son, Vietnam (PL strain). The hydrogen production process was investigated as a function of three critical factors: soluble starch concentration (8 g L-1 to 12 g L-1), ferrous iron concentration (100 mg L-1 to 200 mg L-1), and l-cysteine concentration (300 mg L-1 to 500 mg L-1). RSM analysis showed that all three factors significantly influenced hydrogen production. Among them, the ferrous iron concentration presented the greatest influence. The optimum hydrogen concentration of 1030 mL L-1 medium was obtained with 10 g L-1 soluble starch, 150 mg L-1 ferrous iron, and 400 mg L-1 l-cysteine after 48 h of anaerobic fermentation. The hydrogen concentration produced by the PL strain was doubled after using RSM. The obtained results indicate that RSM with CCD can be used as a technique to optimize culture conditions for enhancement of hydrogen production by the selected anaerobic bacterial strain. Hydrogen production from low-cost organic substrates such as soluble starch using anaerobic fermentation methods may be one of the most promising approaches.

  7. A methodological frame for assessing benzene induced leukemia risk mitigation due to policy measures.

    PubMed

    Karakitsios, Spyros P; Sarigiannis, Dimosthenis Α; Gotti, Alberto; Kassomenos, Pavlos A; Pilidis, Georgios A

    2013-01-15

    The study relies on the development of a methodology for assessing the determinants that comprise the overall leukemia risk due to benzene exposure and how these are affected by outdoor and indoor air quality regulation. An integrated modeling environment was constructed comprising traffic emissions, dispersion models, human exposure models and a coupled internal dose/biology-based dose-response risk assessment model, in order to assess the benzene-imposed leukemia risk, as well as the impact of traffic fleet renewal and smoking bans on these levels. Regarding traffic fleet renewal, several "what if" scenarios were tested. The detailed full-chain methodology was applied in a South-Eastern European urban setting in Greece, and a limited version of the methodology in Helsinki. The non-smoking population runs an average risk equal to 4.1·10(-5), compared to 23.4·10(-5) for smokers. The estimated lifetime risk for the examined occupational groups was higher than that estimated for the general public by 10-20%. Active smoking constitutes a dominant parameter for benzene-attributable leukemia risk, much stronger than any related activity, occupational or not. From the assessment of mitigation policies it was found that the associated leukemia risk in the optimum traffic fleet scenario could be reduced by up to 85% for non-smokers and up to 8% for smokers. By contrast, smoking bans provided smaller gains (7% for non-smokers, 1% for smokers), while for Helsinki, smoking policies were found to be more efficient than traffic fleet renewal. The methodology proposed above provides a general framework for assessing aggregated exposure and the consequent leukemia risk from benzene (incorporating mechanistic data), capturing exposure and internal dosimetry dynamics and translating changes in exposure determinants into actual changes in population risk, thus providing a valuable tool for risk management evaluation and, consequently, policy support.
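The aggregation logic behind such population-level statements can be sketched as a weighted mean over exposure groups. The per-group risks below are the ones quoted in the abstract, but the population shares and the resulting overall reduction are illustrative, not results from the study.

```python
# Exposure-group-weighted population risk, before and after a mitigation
# scenario (fleet renewal: -85% for non-smokers, -8% for smokers, per the
# abstract; the 70/30 population split is a made-up assumption).
baseline = {"non_smoker": 4.1e-5, "smoker": 23.4e-5}
shares = {"non_smoker": 0.7, "smoker": 0.3}

mitigated = {"non_smoker": baseline["non_smoker"] * (1 - 0.85),
             "smoker": baseline["smoker"] * (1 - 0.08)}

def pop_risk(risk_by_group):
    return sum(shares[g] * risk_by_group[g] for g in risk_by_group)

reduction = 1 - pop_risk(mitigated) / pop_risk(baseline)
print(f"population risk reduction: {reduction:.1%}")  # ~30% under these shares
```

The sketch makes the abstract's point concrete: because smokers' baseline risk dominates the weighted sum, even a large percentage cut for non-smokers translates into a modest population-level reduction.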

  8. A new methodology of determining directional albedo models from Nimbus 7 ERB scanning radiometer measurements

    NASA Technical Reports Server (NTRS)

    House, Frederick B.

    1986-01-01

    The Nimbus 7 Earth Radiation Budget (ERB) data set is reviewed to examine its strong and weak points. In view of the timing of this report relative to the processing schedule of Nimbus 7 ERB observations, emphasis is placed on the methodology of interpreting the scanning radiometer data to develop directional albedo models. These findings enhance the value of the Nimbus 7 ERB data set and can be applied to the interpretation of both the scanning and nonscanning radiometric observations.

  9. Linking Interoperability Characters and Measures of Effectiveness: A Methodology for Evaluating Architectures

    DTIC Science & Technology

    2009-06-18

    [Indexed abstract unavailable; the record fragment preserves only figure captions and sensor notes: Figure 3, Argus-IS: A-160 Hummingbird (electro-optical sensor); Figure 4, Gotcha: ISR pallet aboard a cargo aircraft (e.g., C-130) (SAR sensor); a single-engine, rotary-wing unmanned aerial vehicle with an E/O sensor (3-km-radius FOV, 20-hr mission time); and the statement that the first methodology component captures relevant sensor characteristics.]

  10. Measuring the structure factor of simple fluids under extreme conditions

    NASA Astrophysics Data System (ADS)

    Weck, Gunnar

    2013-06-01

    The structure and dynamics of fluids, although a long-standing matter of investigation, are still far from being well established. In particular, with the existence of a first-order liquid-liquid phase transition (LLT) discovered in liquid phosphorus at 0.9 GPa and 1300 K, it is now recognized that the fluid state can present complex structural changes. At present, very few examples of LLTs have been clearly evidenced, which may mean that a larger range of densities must be probed. First-order transitions between a molecular and a polymeric liquid have recently been predicted by first-principles calculations in liquid nitrogen at 88 GPa and 2000 K and in liquid CO2 at 45 GPa and 1850 K. The only device capable of reaching these extreme conditions is the diamond anvil cell (DAC), in which the sample is sandwiched between two diamond anvils roughly 100 times thicker than the sample itself. Consequently, the diffracted signal from the sample is very weak compared to the Compton signal of the anvils, and becomes hardly measurable at pressures above ~20 GPa. A similar problem has been faced by the high-pressure community using large-volume presses, where the x-ray background from the sample environment must be drastically reduced. In the angle-dispersive diffraction configuration, it was proposed to use a multichannel collimator (MCC). This solution has been implemented to fit the constraints of the Paris-Edinburgh (PE) large-volume press and is now routinely used on beamline ID27 of the European Synchrotron Radiation Facility. In this contribution, we present our adaptation of the MCC device available at ID27 for DAC experiments. Because of the small sample volume, a careful alignment procedure between the MCC slits and the DAC had to be implemented. The data analysis procedure initially developed by Eggert et al. has also been extended to take into account the complex contribution of the MCC slits. A large reduction of the Compton scattering from the diamond anvils is obtained.

  11. Methodological challenges in measuring vaccine effectiveness using population cohorts in low resource settings

    PubMed Central

    King, C.; Beard, J.; Crampin, A.C.; Costello, A.; Mwansambo, C.; Cunliffe, N.A.; Heyderman, R.S.; French, N.; Bar-Zeev, N.

    2015-01-01

    Post-licensure real world evaluation of vaccine implementation is important for establishing evidence of vaccine effectiveness (VE) and programme impact, including indirect effects. Large cohort studies offer an important epidemiological approach for evaluating VE, but have inherent methodological challenges. Since March 2012, we have conducted an open prospective cohort study in two sites in rural Malawi to evaluate the post-introduction effectiveness of 13-valent pneumococcal conjugate vaccine (PCV13) against all-cause post-neonatal infant mortality and monovalent rotavirus vaccine (RV1) against diarrhoea-related post-neonatal infant mortality. Our study sites cover a population of 500,000, with a baseline post-neonatal infant mortality of 25 per 1000 live births. We conducted a methodological review of cohort studies for vaccine effectiveness in a developing country setting, applied to our study context. Based on published literature, we outline key considerations when defining the denominator (study population), exposure (vaccination status) and outcome ascertainment (mortality and cause of death) of such studies. We assess various definitions in these three domains, in terms of their impact on power, effect size and potential biases and their direction, using our cohort study for illustration. Based on this iterative process, we discuss the pros and cons of our final per-protocol analysis plan. Since no single set of definitions or analytical approach accounts for all possible biases, we propose sensitivity analyses to interrogate our assumptions and methodological decisions. In the poorest regions of the world, where routine vital birth and death surveillance is frequently unavailable and the burden of disease and death is greatest, we conclude that, provided the balance between definitions and their overall assumed impact on estimated VE is acknowledged, such large-scale real-world cohort studies can provide crucial information to policymakers by providing

  12. Methodological challenges in measuring vaccine effectiveness using population cohorts in low resource settings.

    PubMed

    King, C; Beard, J; Crampin, A C; Costello, A; Mwansambo, C; Cunliffe, N A; Heyderman, R S; French, N; Bar-Zeev, N

    2015-09-11

    Post-licensure real world evaluation of vaccine implementation is important for establishing evidence of vaccine effectiveness (VE) and programme impact, including indirect effects. Large cohort studies offer an important epidemiological approach for evaluating VE, but have inherent methodological challenges. Since March 2012, we have conducted an open prospective cohort study in two sites in rural Malawi to evaluate the post-introduction effectiveness of 13-valent pneumococcal conjugate vaccine (PCV13) against all-cause post-neonatal infant mortality and monovalent rotavirus vaccine (RV1) against diarrhoea-related post-neonatal infant mortality. Our study sites cover a population of 500,000, with a baseline post-neonatal infant mortality of 25 per 1000 live births. We conducted a methodological review of cohort studies for vaccine effectiveness in a developing country setting, applied to our study context. Based on published literature, we outline key considerations when defining the denominator (study population), exposure (vaccination status) and outcome ascertainment (mortality and cause of death) of such studies. We assess various definitions in these three domains, in terms of their impact on power, effect size and potential biases and their direction, using our cohort study for illustration. Based on this iterative process, we discuss the pros and cons of our final per-protocol analysis plan. Since no single set of definitions or analytical approach accounts for all possible biases, we propose sensitivity analyses to interrogate our assumptions and methodological decisions. In the poorest regions of the world, where routine vital birth and death surveillance is frequently unavailable and the burden of disease and death is greatest, we conclude that, provided the balance between definitions and their overall assumed impact on estimated VE is acknowledged, such large-scale real-world cohort studies can provide crucial information to policymakers by providing
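The VE estimator such cohort studies rely on is standard: one minus the incidence-rate ratio between vaccinated and unvaccinated person-time. The case counts and person-time below are illustrative, not study results.

```python
# Cohort-study vaccine effectiveness from person-time incidence rates.
def vaccine_effectiveness(cases_v, person_time_v, cases_u, person_time_u):
    rate_v = cases_v / person_time_v     # incidence rate, vaccinated
    rate_u = cases_u / person_time_u     # incidence rate, unvaccinated
    return 1.0 - rate_v / rate_u         # VE = 1 - rate ratio

# Hypothetical: 30 deaths over 20,000 child-years vaccinated vs.
# 90 deaths over 18,000 child-years unvaccinated.
ve = vaccine_effectiveness(30, 20_000, 90, 18_000)
print(f"VE = {ve:.0%}")  # VE = 70%
```

The abstract's point is that every term here is definition-sensitive: who counts in the denominator, how vaccination status is classified, and how deaths are ascertained all shift the estimated rates, which is why the authors propose sensitivity analyses across alternative definitions.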

  13. Neurolinguistic measures of typological effects in multilingual transfer: introducing an ERP methodology.

    PubMed

    Rothman, Jason; Alemán Bañón, José; González Alonso, Jorge

    2015-01-01

    This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial stages transfer of morphosyntax, one goal of this program is to show how initial stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for and demonstrate how this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments.

  14. Neurolinguistic measures of typological effects in multilingual transfer: introducing an ERP methodology

    PubMed Central

    Rothman, Jason; Alemán Bañón, José; González Alonso, Jorge

    2015-01-01

    This article has two main objectives. First, we offer an introduction to the subfield of generative third language (L3) acquisition. Concerned primarily with modeling initial stages transfer of morphosyntax, one goal of this program is to show how initial stages L3 data make significant contributions toward a better understanding of how the mind represents language and how (cognitive) economy constrains acquisition processes more generally. Our second objective is to argue for and demonstrate how this subfield will benefit from a neuro/psycholinguistic methodological approach, such as event-related potential experiments, to complement the claims currently made on the basis of exclusively behavioral experiments. PMID:26300800

  15. Quality of life assessment in children: a review of conceptual and methodological issues in multidimensional health status measures.

    PubMed Central

    Pal, D K

    1996-01-01

    STUDY OBJECTIVE: To clarify concepts and methodological problems in existing multidimensional health status measures for children. DESIGN: Thematic review of instruments found by computerised and manual searches, 1979-95. SUBJECTS: Nine health status instruments. MAIN RESULTS: Many instruments did not satisfy criteria of being child centered or family focussed; few had sufficient psychometric properties for research or clinical use; underlying conceptual assumptions were rarely explicit. CONCLUSIONS: Quality of life measures should be viewed cautiously. Interdisciplinary discussion is required, as well as discussion with children and parents, to establish constructs that are truly useful. PMID:8882220

  16. Measurements methodology for evaluation of Digital TV operation in VHF high-band

    NASA Astrophysics Data System (ADS)

    Pudwell Chaves de Almeida, M.; Vladimir Gonzalez Castellanos, P.; Alfredo Cal Braz, J.; Pereira David, R.; Saboia Lima de Souza, R.; Pereira da Soledade, A.; Rodrigues Nascimento Junior, J.; Ferreira Lima, F.

    2016-07-01

    This paper describes the experimental setup of field measurements carried out to evaluate the operation of the ISDB-TB (Integrated Services Digital Broadcasting, Terrestrial, Brazilian version) digital TV standard in the VHF high-band. Measurements were performed in urban and suburban areas of a medium-sized Brazilian city. Besides direct measurements of received power and environmental noise, a measurement procedure involving the injection of additive Gaussian noise was employed to reach the signal-to-noise ratio threshold at each measurement site. The analysis includes results of static reception measurements evaluating the received field strength and the signal-to-noise ratio thresholds for correct signal decoding.

  17. 2015 Review on the Extension of the AMedP-8(C) Methodology to New Agents, Materials, and Conditions

    DTIC Science & Technology

    2016-06-01

    Defense Planning & Response Project,” for the Joint Staff, Joint Requirements Office (JRO) for Chemical , Biological, Radiological and Nuclear (CBRN...Treaty Organization (NATO) planning guide documenting a methodology to estimate casualties from chemical , biological, radiological, and nuclear (CBRN...March 2011, included the parameters to estimate casualties caused by three chemical agents, five biological agents, seven radioisotopes, nuclear

  18. 2010 Review on the Extension of the AMedP-8(C) Methodology to New Agents, Materials, and Conditions

    DTIC Science & Technology

    2010-12-01

    pneumonic plague, smallpox, and Venezuelan equine encephalitis (VEE)). D. Human Response Injury Profile (HRIP) Parameters The HRIP methodology...would be a significantly different response due to the use of antibiotics (as chemoprophylaxis) or immunizations, and a separate set of prophylaxis...vaccination protocols or existing vaccination research programs and for bacterial agents that respond to antibiotics . E. The 2009 Report and

  19. A Novel Methodology for Measurements of an LED's Heat Dissipation Factor

    NASA Astrophysics Data System (ADS)

    Jou, R.-Y.; Haung, J.-H.

    2015-12-01

    Heat generation is an inevitable byproduct with high-power light-emitting diode (LED) lighting. The increase in junction temperature that accompanies the heat generation sharply degrades the optical output of the LED and has a significant negative influence on the reliability and durability of the LED. For these reasons, the heat dissipation factor, Kh, is an important factor in modeling and thermal design of LED installations. In this study, a methodology is proposed and experiments are conducted to determine LED heat dissipation factors. Experiments are conducted for two different brands of LED. The average heat dissipation factor of the Edixeon LED is 0.69, and is 0.60 for the OSRAM LED. By using the developed test method and comparing the results to the calculated luminous fluxes using theoretical equations, the interdependence of optical, electrical, and thermal powers can be predicted with a reasonable accuracy. The difference between the theoretical and experimental values is less than 9 %.

  20. [Techniques for measuring phakic and pseudophakic accommodation. Methodology for distinguishing between neurological and mechanical accommodative insufficiency].

    PubMed

    Roche, O; Roumes, C; Parsa, C

    2007-11-01

    The methods available for studying accommodation are evaluated: Donders' "push-up" method, dynamic retinoscopy, infrared optometry using the Scheiner principle, and wavefront analysis are each discussed with their inherent advantages and limitations. Based on the methodology described, one can also distinguish between causes of accommodative insufficiency. Dioptric insufficiency (accommodative lag) that remains equal at various testing distances from the subject indicates a sensory/neurologic (afferent) defect, whereas accommodative insufficiency that changes with distance indicates a mechanical/restrictive (efferent) defect, such as in presbyopia. Determining accommodative insufficiency and its cause can be particularly useful when examining patients with a variety of diseases associated with reduced accommodative ability (e.g., Down syndrome and cerebral palsy) as well as in evaluating the effectiveness of various potentially accommodating intraocular lens designs.

  1. Experimental Measurements And Evaluation Of Indoor Microclimate Conditions

    NASA Astrophysics Data System (ADS)

    Kraliková, Ružena; Sokolová, Hana

    2015-07-01

    The paper deals with the monitoring of a workplace where technological equipment produces heat during hot summer days. The thermo-hygric microclimate measurement took place during the daily work shift and was carried out at 5 chosen measuring points. Since radiant heat was present in the workplace and workers worked at different places, the thermal environment was classified as heterogeneous and non-stationary. The measurement, result processing and interpretation were carried out according to the valid legislation of the Slovak Republic.

  2. Creating ‘obesogenic realities’; do our methodological choices make a difference when measuring the food environment?

    PubMed Central

    2013-01-01

    Background The use of Geographical Information Systems (GIS) to objectively measure 'obesogenic' food environment (foodscape) exposure has become common-place. This increase in usage has coincided with the development of a methodologically heterogeneous evidence-base, with subsequent perceived difficulties for inter-study comparability. However, when used together in previous work, different types of food environment metric have often demonstrated some degree of covariance. Differences and similarities between density and proximity metrics, and within methodologically different conceptions of density and proximity metrics, need to be better understood. Methods Frequently used measures of food access were calculated for North East England, UK. Using food outlet data from local councils, densities of food outlets per 1000 population and per km2 were calculated for small administrative areas. Densities (counts) were also calculated based on population-weighted centroids of administrative areas buffered at 400/800/1000m street network and Euclidean distances. Proximity (street network and Euclidean distances) from these centroids to the nearest food outlet was also calculated. Metrics were compared using Spearman's rank correlations. Results Measures of foodscape density and proximity were highly correlated. Densities per km2 and per 1000 population were highly correlated (rs = 0.831). Euclidean and street network based measures of proximity (rs = 0.865) and density (rs = 0.667-0.764, depending on neighbourhood size) were also highly correlated. Density metrics based on administrative areas and buffered centroids of administrative areas were less strongly correlated (rs = 0.299-0.658). Conclusions Density and proximity metrics were largely comparable, with some exceptions. Whilst results suggested a substantial degree of comparability across existing studies, future comparability could be ensured by moving towards a more standardised set of metrics.
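A minimal sketch of the kind of metric comparison described above; the density and proximity values below are simulated (an invented inverse relationship plus noise), so only the use of Spearman's rank correlation mirrors the study.

```python
# Compare two hypothetical foodscape metrics with Spearman's rank correlation.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
# Hypothetical per-area outlet densities (per km^2); skewed, as real densities are
density_km2 = rng.gamma(shape=2.0, scale=3.0, size=200)
# Proximity to nearest outlet (m) tends to fall as density rises; add noise
proximity_m = 1000.0 / (1.0 + density_km2) + rng.normal(0, 20, size=200)

rho, p = spearmanr(density_km2, proximity_m)
print(f"Spearman rho = {rho:.3f}, p = {p:.3g}")
```

Rank correlation is a natural choice here because foodscape metrics are typically skewed and related monotonically rather than linearly, and Spearman's rho is invariant to monotone rescaling of either metric.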

  3. Measures of Chronic Conditions and Diseases Associated With Aging in the National Social Life, Health, and Aging Project

    PubMed Central

    Pham-Kanter, Genevieve; Leitsch, Sara A.

    2009-01-01

    Objectives This paper presents a description of the methods used in the National Social Life, Health, and Aging Project to detect the presence of chronic conditions and diseases associated with aging. It also discusses the validity and distribution of these measures. Methods Markers associated with common chronic diseases and conditions of aging were collected from 3,005 community-dwelling older adults living in the United States, aged 57–85 years, during 2006. Dried blood spots, physical function tests, anthropometric measurements, self-reported history, and self-rated assessments were used to detect the presence of chronic conditions associated with aging or of risk factors associated with the development of chronic diseases. Results The distribution of each measure, disaggregated by age group and gender, is presented. Conclusions This paper describes the methodology used as well as the distribution of each of these measures. In addition, we discuss how the measures used in the study relate to specific chronic diseases and conditions associated with aging and how these measures might be used in social science analyses. PMID:19204070

  4. UNDERSTANDING PHYSICAL CONDITIONS IN HIGH-REDSHIFT GALAXIES THROUGH C I FINE STRUCTURE LINES: DATA AND METHODOLOGY

    SciTech Connect

    Jorgenson, Regina A.; Wolfe, Arthur M.; Prochaska, J. Xavier

    2010-10-10

    We probe the physical conditions in high-redshift galaxies, specifically the damped Lyα systems (DLAs), using neutral carbon (C I) fine structure lines and molecular hydrogen (H₂). We report five new detections of C I and analyze the C I in an additional two DLAs with previously published data. We also present one new detection of H₂ in a DLA. We present a new method of analysis that simultaneously constrains both the volume density and the temperature of the gas, as opposed to previous studies that a priori assumed a gas temperature. We use only the column density of C I measured in the fine structure states and the assumption of ionization equilibrium in order to constrain the physical conditions in the gas. We present a sample of 11 C I velocity components in six DLAs and compare their properties to those derived by the global C II* technique. The resulting median values for this sample are ⟨n(H I)⟩ = 69 cm⁻³, ⟨T⟩ = 50 K, and ⟨log(P/k)⟩ = 3.86 cm⁻³ K, with standard deviations σ_n(H I) = 134 cm⁻³, σ_T = 52 K, and σ_log(P/k) = 3.68 cm⁻³ K. This can be compared with the integrated median values for the same DLAs: ⟨n(H I)⟩ = 2.8 cm⁻³, ⟨T⟩ = 139 K, and ⟨log(P/k)⟩ = 2.57 cm⁻³ K, with standard deviations σ_n(H I) = 3.0 cm⁻³, σ_T = 43 K, and σ_log(P/k) = 0.22 cm⁻³ K. Interestingly, the pressures measured in these high-redshift C I clouds are similar to those found in the Milky Way. We conclude that the C I gas is tracing a higher-density, higher-pressure region, possibly indicative of post-shock gas or a photodissociation region on the edge of a molecular cloud. We speculate that these clouds may be direct probes of the precursor sites of star formation in normal galaxies at high redshift.
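For a single velocity component, the thermal pressure quoted above follows from the ideal-gas relation P/k = n(H I)·T. A quick illustration using the sample's median density and temperature (note that the median of a product need not equal the product of the medians, so this will not reproduce the quoted median log(P/k) exactly):

```python
import math

# Ideal-gas thermal pressure P/k = n * T, in cm^-3 K, for one illustrative
# C I component, taking the sample median density and temperature.
n_HI = 69.0   # cm^-3
T = 50.0      # K
log_Pk = math.log10(n_HI * T)
print(f"log(P/k) = {log_Pk:.2f} cm^-3 K")
```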

  5. Measurement of Vehicle Air Conditioning Pull-Down Period

    SciTech Connect

    Thomas, John F.; Huff, Shean P.; Moore, Larry G.; West, Brian H.

    2016-08-01

    Air conditioner usage was characterized for high heat-load summer conditions during short driving trips using a 2009 Ford Explorer and a 2009 Toyota Corolla. Vehicles were parked in the sun with windows closed to allow the cabin to become hot. Experiments were conducted by entering the instrumented vehicles in this heated condition and driving on-road with the windows up, the air conditioning set to maximum cooling and maximum fan speed, and the air flow set to recirculate cabin air rather than pull in outside humid air. The main purpose was to determine how long the air conditioner system would remain at or very near maximum cooling power under these severe-duty conditions. Because of the variable and somewhat uncontrolled nature of the experiments, they serve only to show that for short vehicle trips, air conditioning can remain at or near full cooling capacity for 10 minutes or significantly longer, and the cabin may be uncomfortably warm during much of this time.

  6. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Jorge O.; Hackert, Chris L.; Collier, Hughbert A.; Bennett, Michael

    2002-01-29

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate NMR techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This is accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging are being linked with a balanced petrographical analysis of the core and theoretical model.

  7. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, Ph.D., Jorge O.

    2002-06-10

    The objective of the project was to develop an advanced imaging method, including pore scale imaging, to integrate nuclear magnetic resonance (NMR) techniques and acoustic measurements to improve predictability of the pay zone in hydrocarbon reservoirs. This will be accomplished by extracting the fluid property parameters using NMR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurement techniques and core imaging were linked with a balanced petrographical analysis of cores and theoretical modeling.

  8. A Methodology to Integrate Magnetic Resonance and Acoustic Measurements for Reservoir Characterization

    SciTech Connect

    Parra, J.O.

    2001-01-26

    The objective of this project was to develop an advanced imaging method, including pore scale imaging, to integrate magnetic resonance (MR) techniques and acoustic measurements to improve predictability of the pay zone in two hydrocarbon reservoirs. This was accomplished by extracting the fluid property parameters using MR laboratory measurements and the elastic parameters of the rock matrix from acoustic measurements to create poroelastic models of different parts of the reservoir. Laboratory measurements were compared with petrographic analysis results to determine the relative roles of petrographic elements such as porosity type, mineralogy, texture, and distribution of clay and cement in creating permeability heterogeneity.

  9. A methodology for investigating interdependencies between measured throughfall, meteorological variables and canopy structure on a small catchment.

    NASA Astrophysics Data System (ADS)

    Maurer, Thomas; Gustavos Trujillo Siliézar, Carlos; Oeser, Anne; Pohle, Ina; Hinz, Christoph

    2016-04-01

    In evolving initial landscapes, vegetation development depends on a variety of feedback effects. One of the less understood feedback loops is the interaction between throughfall and plant canopy development. The amount of throughfall is governed by the characteristics of the vegetation canopy, whereas vegetation pattern evolution may in turn depend on the spatio-temporal distribution of throughfall. Meteorological factors that may influence throughfall, while at the same time interacting with the canopy, are e.g. wind speed, wind direction and rainfall intensity. Our objective is to investigate how throughfall, vegetation canopy and meteorological variables interact in an exemplary eco-hydrological system in its initial development phase, in which the canopy is very heterogeneous and rapidly changing. For that purpose, we developed a methodological approach combining field methods, raster image analysis and multivariate statistics. The research area for this study is the Hühnerwasser ('Chicken Creek') catchment in Lower Lusatia, Brandenburg, Germany, where after eight years of succession, the spatial distribution of plant species is highly heterogeneous, leading to increasingly differentiated throughfall patterns. The constructed 6-ha catchment offers ideal conditions for our study due to the rapidly changing vegetation structure and the availability of complementary monitoring data. Throughfall data were obtained by 50 tipping bucket rain gauges arranged in two transects and connected via a wireless sensor network that cover the predominant vegetation types on the catchment (locust copses, dense sallow thorn bushes and reeds, base herbaceous and medium-rise small-reed vegetation, and open areas covered by moss and lichens). The spatial configuration of the vegetation canopy for each measurement site was described via digital image analysis of hemispheric photographs of the canopy using the ArcGIS Spatial Analyst, GapLight and ImageJ software. 

  10. Grapevine Yield and Leaf Area Estimation Using Supervised Classification Methodology on RGB Images Taken under Field Conditions

    PubMed Central

    Diago, Maria-Paz; Correa, Christian; Millán, Borja; Barreiro, Pilar; Valero, Constantino; Tardaguila, Javier

    2012-01-01

    The aim of this research was to implement a methodology, through the generation of a supervised classifier based on the Mahalanobis distance, to characterize the grapevine canopy and assess leaf area and yield using RGB images. The method automatically processes sets of images and calculates the areas (number of pixels) corresponding to seven different classes (Grapes, Wood, Background, and four classes of Leaf of increasing leaf age). Each class is initialized by the user, who selects a set of representative pixels in order to induce clustering around them. The proposed methodology was evaluated with 70 grapevine (V. vinifera L. cv. Tempranillo) images acquired in a commercial vineyard located in La Rioja (Spain), after several defoliation and de-fruiting events on 10 vines, with a conventional RGB camera and no artificial illumination. The segmentation results showed a performance of 92% for leaves and 98% for clusters, and allowed the grapevine's leaf area and yield to be assessed with R2 values of 0.81 (p < 0.001) and 0.73 (p = 0.002), respectively. This methodology, which operates with a simple image acquisition setup and guarantees the right number and kind of pixel classes, has proven suitable and robust enough to provide valuable information for vineyard management. PMID:23235443
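A minimal sketch of a Mahalanobis-distance pixel classifier of the kind described; two class names echo the paper, but the training pixels, sample values, and helper functions are invented for illustration.

```python
# Nearest-class Mahalanobis classifier: each class is summarized by the mean
# and inverse covariance of user-selected representative RGB pixels.
import numpy as np

def fit_class(samples):
    """Mean and (regularized) inverse covariance from representative pixels (N x 3)."""
    mu = samples.mean(axis=0)
    cov = np.cov(samples, rowvar=False)
    return mu, np.linalg.inv(cov + 1e-6 * np.eye(3))

def classify(pixels, classes):
    """Assign each pixel (M x 3) to the class with the smallest Mahalanobis distance."""
    d2 = []
    for mu, inv_cov in classes.values():
        diff = pixels - mu
        # squared Mahalanobis distance per pixel: diff . inv_cov . diff
        d2.append(np.einsum("ij,jk,ik->i", diff, inv_cov, diff))
    names = list(classes)
    return [names[i] for i in np.argmin(np.stack(d2, axis=1), axis=1)]

rng = np.random.default_rng(1)
classes = {
    "Grapes": fit_class(rng.normal([60, 30, 70], 5, (50, 3))),   # invented RGB stats
    "Leaf":   fit_class(rng.normal([40, 120, 40], 5, (50, 3))),
}
labels_out = classify(np.array([[58.0, 32.0, 68.0], [42.0, 118.0, 39.0]]), classes)
print(labels_out)
```

The full method uses seven classes rather than two; extending the dictionary with more `fit_class` entries is all that changes.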

  11. Evaluation of constraint methodologies applied to a shallow-flaw cruciform bend specimen tested under biaxial loading conditions

    SciTech Connect

    Bass, B.R.; McAfee, W.J.; Williams, P.T.; Pennell, W.E.

    1998-01-01

    A technology to determine shallow-flaw fracture toughness of reactor pressure vessel (RPV) steels is being developed for application to the safety assessment of RPVs containing postulated shallow surface flaws. Matrices of cruciform beam tests were developed to investigate and quantify the effects of temperature, biaxial loading, and specimen size on the fracture initiation toughness of two-dimensional (constant-depth), shallow surface flaws. The cruciform beam specimens were developed at Oak Ridge National Laboratory (ORNL) to introduce a prototypic, far-field, out-of-plane biaxial stress component in the test section that approximates the nonlinear stresses resulting from pressurized-thermal-shock or pressure-temperature loading of an RPV. Tests were conducted under biaxial load ratios ranging from uniaxial to equibiaxial. These tests demonstrated that biaxial loading can have a pronounced effect on shallow-flaw fracture toughness in the lower transition temperature region for RPV materials. The cruciform fracture toughness data were used to evaluate fracture methodologies for predicting the observed effects of biaxial loading on shallow-flaw fracture toughness. Initial emphasis was placed on the assessment of stress-based methodologies, namely the J-Q formulation, the Dodds-Anderson toughness scaling model, and the Weibull approach. Applications of these methodologies based on the hydrostatic stress fracture criterion indicated an effect of loading biaxiality on fracture toughness, whereas the conventional maximum principal stress criterion indicated no effect.

  12. Construct Validity for Measures of Childhood Depression: Application of Multitrait-Multimethod Methodology.

    ERIC Educational Resources Information Center

    Saylor, Conway Fleming; And Others

    1984-01-01

    Presents results from two studies that investigate the measurement of childhood depression through self-report and reports from others. Results were discussed in terms of the need to consider self-report in the discussion of depression with its covert components and in terms of the need for improved measurement instruments. (BH)

  13. Negotiating Measurement: Methodological and Interpersonal Considerations in the Choice and Interpretation of Instruments

    ERIC Educational Resources Information Center

    Braverman, Marc T.

    2013-01-01

    Sound evaluation planning requires numerous decisions about how constructs in a program theory will be translated into measures and instruments that produce evaluation data. This article, the first in a dialogue exchange, examines how decisions about measurement are (and should be) made, especially in the context of small-scale local program…

  14. Methodologies for Investigating Item- and Test-Level Measurement Equivalence in International Large-Scale Assessments

    ERIC Educational Resources Information Center

    Oliveri, Maria Elena; Olson, Brent F.; Ercikan, Kadriye; Zumbo, Bruno D.

    2012-01-01

    In this study, the Canadian English and French versions of the Problem-Solving Measure of the Programme for International Student Assessment 2003 were examined to investigate their degree of measurement comparability at the item- and test-levels. Three methods of differential item functioning (DIF) were compared: parametric and nonparametric item…

  15. A methodology to quantify the differences between alternative methods of heart rate variability measurement.

    PubMed

    García-González, M A; Fernández-Chimeno, M; Guede-Fernández, F; Ferrer-Mileo, V; Argelagós-Palau, A; Álvarez-Gómez, L; Parrado, E; Moreno, J; Capdevila, L; Ramos-Castro, J

    2016-01-01

    This work proposes a systematic procedure for reporting the differences between heart rate variability time series obtained from alternative measurements: reporting the spread and mean of the differences, assessing the agreement between measuring procedures, and quantifying how stationary, random and normal the differences between alternative measurements are. A description of the complete automatic procedure to obtain a differences time series (DTS) from two alternative methods, a proposal of a battery of statistical tests, and a set of statistical indicators to better describe the differences in RR interval estimation are also provided. Results show that the spread and agreement depend on the choice of alternative measurements and that the DTS cannot generally be considered a white or normally distributed process. Nevertheless, in controlled measurements the DTS can be considered a stationary process.
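The core of the procedure, forming a differences time series from two aligned RR-interval series and summarizing its mean, spread, and normality, can be sketched as follows. The data are simulated, and this covers only a small subset of the paper's battery of tests; the Bland-Altman limits of agreement are a common agreement summary used here as an assumption, not necessarily the paper's exact indicator set.

```python
# Differences time series (DTS) between a reference and an alternative
# RR-interval measurement, with bias, spread, and a normality test.
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(2)
rr_ref = rng.normal(800, 50, 300)        # reference RR intervals (ms), simulated
rr_alt = rr_ref + rng.normal(2, 5, 300)  # alternative device: small bias + noise

dts = rr_alt - rr_ref
bias, sd = dts.mean(), dts.std(ddof=1)
loa = (bias - 1.96 * sd, bias + 1.96 * sd)  # Bland-Altman limits of agreement
stat, p_norm = shapiro(dts)                 # normality test on the DTS

print(f"bias = {bias:.2f} ms, SD = {sd:.2f} ms")
print(f"95% limits of agreement: ({loa[0]:.2f}, {loa[1]:.2f}) ms")
print(f"Shapiro-Wilk p = {p_norm:.3f}")
```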

  16. Measuring Science Teachers' Stress Level Triggered by Multiple Stressful Conditions

    ERIC Educational Resources Information Center

    Halim, Lilia; Samsudin, Mohd Ali; Meerah, T. Subahan M.; Osman, Kamisah

    2006-01-01

    The complexity of science teaching requires science teachers to encounter a range of tasks. Some tasks are perceived as stressful while others are not. This study aims to investigate the extent to which different teaching situations lead to different stress levels. It also aims to identify the easiest and most difficult conditions to be regarded…

  17. Measurement of Phonated Intervals during Four Fluency-Inducing Conditions

    ERIC Educational Resources Information Center

    Davidow, Jason H.; Bothe, Anne K.; Andreatta, Richard D.; Ye, Jun

    2009-01-01

    Purpose: Previous investigations of persons who stutter have demonstrated changes in vocalization variables during fluency-inducing conditions (FICs). A series of studies has also shown that a reduction in short intervals of phonation, those from 30 to 200 ms, is associated with decreased stuttering. The purpose of this study, therefore, was to…

  18. Use of Balanced Scorecard Methodology for Performance Measurement of the Health Extension Program in Ethiopia.

    PubMed

    Teklehaimanot, Hailay D; Teklehaimanot, Awash; Tedella, Aregawi A; Abdella, Mustofa

    2016-05-04

    In 2004, Ethiopia introduced a community-based Health Extension Program to deliver basic and essential health services. We developed a comprehensive performance scoring methodology to assess the performance of the program. A balanced scorecard with six domains and 32 indicators was developed. Data collected from 1,014 service providers, 433 health facilities, and 10,068 community members sampled from 298 villages were used to generate weighted national, regional, and agroecological zone scores for each indicator. The national median indicator scores ranged from 37% to 98% with poor performance in commodity availability, workforce motivation, referral linkage, infection prevention, and quality of care. Indicator scores showed significant difference by region (P < 0.001). Regional performance varied across indicators suggesting that each region had specific areas of strength and deficiency, with Tigray and the Southern Nations, Nationalities and Peoples Region being the best performers while the mainly pastoral regions of Gambela, Afar, and Benishangul-Gumuz were the worst. The findings of this study suggest the need for strategies aimed at improving specific elements of the program and its performance in specific regions to achieve quality and equitable health services.
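A hypothetical sketch of how a weighted indicator score might be computed; the indicator, facility values, and population weights below are invented and do not reflect the study's actual weighting scheme.

```python
# Weighted mean of one scorecard indicator across sampled facilities.
import numpy as np

def weighted_score(values, weights):
    """Weighted mean of an indicator, expressed in percent."""
    values, weights = np.asarray(values, float), np.asarray(weights, float)
    return 100.0 * np.sum(values * weights) / np.sum(weights)

# e.g. fraction of tracer commodities in stock at three sampled facilities,
# weighted by each facility's (invented) catchment population
in_stock = [0.60, 0.90, 0.75]
population = [5000, 2000, 3000]
score = weighted_score(in_stock, population)
print(f"commodity availability score = {score:.1f}%")
```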

  19. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    SciTech Connect

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-05-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a prototype portable, hand-held, hazardous materials acoustic inspection prototype that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques were identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.
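The pulse-compression idea, cross-correlating the received echo with the transmitted waveform so the correlation peak gives a sharp time-of-flight estimate even at a low signal-to-noise ratio, can be sketched as follows; all signal parameters (sweep, delay, attenuation, noise level) are invented.

```python
# Matched-filter pulse compression for time-of-flight (TOF) estimation.
import numpy as np

fs = 1.0e6                            # sample rate (Hz)
t = np.arange(0, 1e-3, 1 / fs)        # 1 ms transmit window
# Linear chirp sweeping 50 -> 150 kHz (instantaneous frequency 50e3 + 100e6*t)
chirp = np.sin(2 * np.pi * (50e3 + 50e6 * t) * t)

true_tof = 2.5e-4                     # 250 us propagation delay
delay = int(true_tof * fs)
rx = np.zeros(len(t) + delay)
rx[delay:] = 0.2 * chirp              # attenuated echo
rx += np.random.default_rng(3).normal(0, 0.1, rx.size)  # additive noise

corr = np.correlate(rx, chirp, mode="full")
lag = corr.argmax() - (len(chirp) - 1)  # lag (samples) of the matched-filter peak
print(f"estimated TOF = {lag / fs * 1e6:.1f} us")
```

Compressing the long chirp concentrates its energy into a narrow correlation peak, which is why the TOF survives noise and attenuation that would bury a single short pulse.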

  20. Advanced ultrasonic measurement methodology for non-invasive interrogation and identification of fluids in sealed containers

    NASA Astrophysics Data System (ADS)

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-01

    Government agencies and homeland security related organizations have identified the need to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a prototype portable, hand-held, hazardous materials acoustic inspection prototype that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials encountered in various law enforcement inspection activities, the need for high measurement sensitivity and advanced ultrasonic measurement techniques were identified. The prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper primarily focuses on the ultrasonic measurement methods and signal processing techniques incorporated into the prototype. High bandwidth ultrasonic transducers combined with an advanced pulse compression technique allowed researchers to 1) obtain high signal-to-noise ratios and 2) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of work conducted in the laboratory have demonstrated that the prototype experimental measurement technique also provided information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  1. Advanced Ultrasonic Measurement Methodology for Non-Invasive Interrogation and Identification of Fluids in Sealed Containers

    SciTech Connect

    Tucker, Brian J.; Diaz, Aaron A.; Eckenrode, Brian A.

    2006-03-16

    The Hazardous Materials Response Unit (HMRU) and the Counterterrorism and Forensic Science Research Unit (CTFSRU), Laboratory Division, Federal Bureau of Investigation (FBI) have been mandated to develop and establish a wide range of unprecedented capabilities for providing scientific and technical forensic services to investigations involving hazardous chemical, biological, and radiological materials, including extremely dangerous chemical and biological warfare agents. Pacific Northwest National Laboratory (PNNL) has developed a portable, hand-held, hazardous materials acoustic inspection device (HAZAID) that provides noninvasive container interrogation and material identification capabilities using nondestructive ultrasonic velocity and attenuation measurements. Due to the wide variety of fluids as well as container sizes and materials, the need for high measurement sensitivity and advanced ultrasonic measurement techniques was identified. The HAZAID prototype was developed using a versatile electronics platform, advanced ultrasonic wave propagation methods, and advanced signal processing techniques. This paper focuses primarily on the ultrasonic measurement methods and signal processing techniques incorporated into the HAZAID prototype. High-bandwidth ultrasonic transducers combined with the advanced pulse compression technique allowed researchers to 1) impart large amounts of energy, 2) obtain high signal-to-noise ratios, and 3) obtain accurate and consistent time-of-flight (TOF) measurements through a variety of highly attenuative containers and fluid media. Results of this feasibility study demonstrated that the HAZAID experimental measurement technique also provides information regarding container properties, which will be utilized in future container-independent measurements of hidden liquids.

  2. A Novel Instrument and Methodology for the In-Situ Measurement of the Stress in Thin Films

    NASA Technical Reports Server (NTRS)

    Broadway, David M.; Omokanwaye, Mayowa O.; Ramsey, Brian D.

    2014-01-01

    We introduce a novel methodology for the in-situ measurement of mechanical stress during thin film growth utilizing a highly sensitive non-contact variation of the classic spherometer. By exploiting the known spherical deformation of the substrate, the value of the stress-induced curvature is inferred from the measurement of only one point on the substrate's surface: the sagitta. From the known curvature the stress can be calculated using the well-known Stoney equation. Based on this methodology, a stress sensor has been designed which is simple, highly sensitive, compact, and low cost. As a result of its compact nature, the sensor can be mounted in any orientation to accommodate a given deposition geometry without the need for extensive modification to an already existing deposition system. The technique employs a double-side-polished substrate that offers good specular reflectivity and is isotropic in its mechanical properties, such as <111>-oriented crystalline silicon or amorphous soda lime glass, for example. The measurement of the displacement of the uncoated side during deposition is performed with a high-resolution (i.e., 5 nm), commercially available, inexpensive fiber optic sensor which can be used in both high vacuum and high temperature environments (i.e., 10^-7 Torr and 480 °C, respectively). A key attribute of this instrument lies in its potential to achieve sensitivity that rivals other measurement techniques such as the micro-cantilever method but, due to the comparatively larger substrate area, offers a more robust and practical alternative for subsequent measurement of additional characteristics of the film that might be correlated to film stress. We present measurement results for nickel films deposited by magnetron sputtering which show good qualitative agreement with the known behavior of polycrystalline films previously reported by Hoffman.
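The curvature-from-sagitta inference and the Stoney calculation described above can be sketched as follows. All material values and dimensions are illustrative assumptions, not the instrument's actual parameters.

```python
# Sketch: infer film stress from a single sagittal displacement via the
# spherical-cap relation, then apply the Stoney equation.

def radius_from_sagitta(half_chord, sagitta):
    """Radius of curvature (m) of a spherical cap from its sagitta (m)."""
    return (half_chord ** 2 + sagitta ** 2) / (2.0 * sagitta)

def stoney_stress(E_s, nu_s, t_s, t_f, R):
    """Film stress (Pa) from substrate curvature radius R (m)."""
    return E_s * t_s ** 2 / (6.0 * (1.0 - nu_s) * t_f * R)

# Illustrative example: 25 mm diameter Si substrate, 50 nm sagittal
# deflection, 500 um substrate, 200 nm film
R = radius_from_sagitta(half_chord=12.5e-3, sagitta=50e-9)
sigma = stoney_stress(E_s=169e9, nu_s=0.22, t_s=500e-6, t_f=200e-9, R=R)
print(f"R = {R:.1f} m, film stress = {sigma / 1e6:.0f} MPa")
```

The biaxial modulus E_s/(1 - nu_s) of the substrate, not the film, enters the Stoney equation, which is what makes the method insensitive to film properties.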

  3. A New Methodology For In-Situ Residual Stress Measurement In MEMS Structures

    SciTech Connect

    Sebastiani, M.; Bemporad, E.; Melone, G.; Rizzi, L.; Korsunsky, A. M.

    2010-11-24

    In this paper, a new approach is presented for local residual stress measurement in MEMS structures. The newly proposed approach involves incremental focused ion beam (FIB) milling of annular trenches at material surface, combined with high resolution SEM imaging and Digital Image Correlation (DIC) analysis for the measurement of the strain relief over the surface of the remaining central pillar. The proposed technique allows investigating the average residual stress on suspended micro-structures, with a spatial resolution lower than 1 μm. Results are presented for residual stress measurement on double clamped micro-beams, whose layers are obtained by DC-sputtering (PVD) deposition. Residual stresses were also independently measured by the conventional curvature method (Stoney's equation) on a similar homogeneous coating obtained by the same deposition parameters and a comparison and discussion of obtained results is performed.

  4. A New Methodology For In-Situ Residual Stress Measurement In MEMS Structures

    NASA Astrophysics Data System (ADS)

    Sebastiani, M.; Bemporad, E.; Melone, G.; Rizzi, L.; Korsunsky, A. M.

    2010-11-01

    In this paper, a new approach is presented for local residual stress measurement in MEMS structures. The newly proposed approach involves incremental focused ion beam (FIB) milling of annular trenches at material surface, combined with high resolution SEM imaging and Digital Image Correlation (DIC) analysis for the measurement of the strain relief over the surface of the remaining central pillar. The proposed technique allows investigating the average residual stress on suspended micro-structures, with a spatial resolution lower than 1 μm. Results are presented for residual stress measurement on double clamped micro-beams, whose layers are obtained by DC-sputtering (PVD) deposition. Residual stresses were also independently measured by the conventional curvature method (Stoney's equation) on a similar homogeneous coating obtained by the same deposition parameters and a comparison and discussion of obtained results is performed.
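A minimal sketch of the final conversion step in the ring-core method above, assuming full equi-biaxial relaxation of the pillar surface. The DIC imaging and FIB milling themselves are not modeled, and the elastic constants and measured strain are invented for illustration.

```python
# Sketch (not the authors' code): convert the relieved surface strain
# measured by DIC on the central pillar into an equi-biaxial residual
# stress, assuming full relaxation.

def residual_stress_biaxial(strain_relief, E, nu):
    """Equi-biaxial residual stress (Pa) from fully relieved strain.

    A positive relieved strain (expansion after milling) implies the
    film was in compression, hence sigma = -E * eps / (1 - nu).
    """
    return -E * strain_relief / (1.0 - nu)

# Illustrative: 0.1% expansion measured by DIC, E = 400 GPa, nu = 0.28
sigma = residual_stress_biaxial(strain_relief=1.0e-3, E=400e9, nu=0.28)
print(f"residual stress: {sigma / 1e6:.0f} MPa")  # negative = compressive
```

In practice the incremental milling depths give a relaxation profile, and a finite-element calibration replaces the full-relaxation assumption used here.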

  5. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    SciTech Connect

    Freeman, Janine; Freestate, David; Hobbs, William; Riley, Cameron

    2016-06-05

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.
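For contrast with the direct-POA capability described above, the transposition step that introduces the error can be sketched with the simple isotropic-sky model (SAM's default is the more detailed Perez model; all inputs below are illustrative, not from the eight studied systems).

```python
import numpy as np

def poa_isotropic(dni, dhi, ghi, tilt_deg, aoi_deg, albedo=0.2):
    """Plane-of-array irradiance (W/m^2) from irradiance components
    using the isotropic-sky transposition model."""
    tilt = np.radians(tilt_deg)
    beam = dni * np.maximum(np.cos(np.radians(aoi_deg)), 0.0)
    sky_diffuse = dhi * (1.0 + np.cos(tilt)) / 2.0
    ground_reflected = ghi * albedo * (1.0 - np.cos(tilt)) / 2.0
    return beam + sky_diffuse + ground_reflected

# Illustrative clear-sky instant for a 30-degree tilted array
poa = poa_isotropic(dni=800.0, dhi=100.0, ghi=650.0,
                    tilt_deg=30.0, aoi_deg=25.0)
print(f"modeled POA: {poa:.0f} W/m^2")
```

Feeding a measured POA value directly into the performance model skips every term of this calculation, which is the source of the accuracy gain the paper reports.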

  6. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation: Preprint

    SciTech Connect

    Freeman, Janine; Freestate, David; Riley, Cameron; Hobbs, William

    2016-11-01

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  7. Using Measured Plane-of-Array Data Directly in Photovoltaic Modeling: Methodology and Validation

    SciTech Connect

    Freeman, Janine; Freestate, David; Hobbs, William; Riley, Cameron

    2016-11-21

    Measured plane-of-array (POA) irradiance may provide a lower-cost alternative to standard irradiance component data for photovoltaic (PV) system performance modeling without loss of accuracy. Previous work has shown that transposition models typically used by PV models to calculate POA irradiance from horizontal data introduce error into the POA irradiance estimates, and that measured POA data can correlate better to measured performance data. However, popular PV modeling tools historically have not directly used input POA data. This paper introduces a new capability in NREL's System Advisor Model (SAM) to directly use POA data in PV modeling, and compares SAM results from both POA irradiance and irradiance components inputs against measured performance data for eight operating PV systems.

  8. Perspectives for clinical measures of dynamic foot function-reference data and methodological considerations.

    PubMed

    Rathleff, M S; Nielsen, R G; Simonsen, O; Olesen, C G; Kersting, U G

    2010-02-01

    Several studies have investigated whether static posture assessments can predict dynamic function of the foot, with diverse outcomes. However, it was suggested that dynamic measures may be better suited to predict foot-related overuse problems. The purpose of this study was to establish the reliability for dynamic measures of longitudinal arch angle (LAA) and navicular height (NH) and to examine to what extent static and dynamic measures thereof are related. Intra-rater reliability of LAA and NH measures was tested on a sample of 17 control subjects. Subsequently, 79 subjects were tested while walking on a treadmill. The ranges and minimum values for LAA and NH during ground contact were identified over 20 consecutive steps. A geometric error model was used to simulate effects of marker placement uncertainty and skin movement artifacts. Results demonstrated the highest reliability for the minimum NH (MinNH), followed by the minimum LAA (MinLAA), the dynamic range of navicular height (DeltaNH) and the range of LAA (DeltaLAA), while all measures were highly reliable. Marker location uncertainty and skin movement artifacts had the smallest effects on measures of NH. The use of an alignment device for marker placement was shown to reduce error ranges for NH measures. Therefore, DeltaNH and MinNH were recommended for functional dynamic foot characterization in the sagittal plane. There is potential for such measures to be a suitable predictor for overuse injuries while being obtainable in clinical settings. Future research needs to include such dynamic but simple foot assessments in large-scale clinical studies.

  9. Comparison of efficiency of distance measurement methodologies in mango (Mangifera indica) progenies based on physicochemical descriptors.

    PubMed

    Alves, E O S; Cerqueira-Silva, C B M; Souza, A M; Santos, C A F; Lima Neto, F P; Corrêa, R X

    2012-03-14

    We investigated seven distance measures in a set of observations of physicochemical variables of mango (Mangifera indica) submitted to multivariate analyses (distance, projection and grouping). To estimate the distance measurements, five mango progenies (25 genotypes in total) were analyzed, using six fruit physicochemical descriptors (fruit weight, equatorial diameter, longitudinal diameter, total soluble solids in °Brix, total titratable acidity, and pH). The distance measurements were compared by the Spearman correlation test, projection in two-dimensional space and grouping efficiency. The Spearman correlation coefficients between the seven distance measurements were, except for Mahalanobis' generalized distance (0.41 ≤ rs ≤ 0.63), high and significant (rs ≥ 0.91; P < 0.001). Regardless of the origin of the distance matrix, the unweighted pair group method with arithmetic mean (UPGMA) proved to be the most adequate grouping method. The various distance measurements and grouping methods gave different values for distortion (-116.5 ≤ D ≤ 74.5), cophenetic correlation (0.26 ≤ rc ≤ 0.76) and stress (-1.9 ≤ S ≤ 58.9). The choice of distance measurement and analysis method therefore influences the projection and grouping results.
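The distance → grouping → cophenetic-correlation pipeline compared above can be reproduced with standard tools. The data below are synthetic stand-ins for the mango descriptors, so the numeric result is illustrative only.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet

rng = np.random.default_rng(1)
# Stand-in for 25 genotypes x 6 standardized physicochemical descriptors
X = rng.standard_normal((25, 6))

# One of the candidate distance measures: Euclidean
d_euclid = pdist(X, metric="euclidean")

# UPGMA grouping = average linkage on the distance matrix
tree = linkage(d_euclid, method="average")

# Cophenetic correlation: agreement between tree and original distances
r_cophenetic, _ = cophenet(tree, d_euclid)
print(f"cophenetic correlation: {r_cophenetic:.2f}")
```

Repeating this loop over each distance metric (e.g. `cityblock`, `mahalanobis`) and comparing the cophenetic correlations mirrors the study's grouping-efficiency comparison.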

  10. Contact Thermocouple Methodology and Evaluation for Temperature Measurement in the Laboratory

    NASA Technical Reports Server (NTRS)

    Brewer, Ethan J.; Pawlik, Ralph J.; Krause, David L.

    2013-01-01

    Laboratory testing of advanced aerospace components very often requires highly accurate temperature measurement and control devices, as well as methods to precisely analyze and predict the performance of such components. Analysis of test articles depends on accurate measurements of temperature across the specimen. Where possible, this task is accomplished using many thermocouples welded directly to the test specimen, which can produce results with great precision. However, it is known that thermocouple spot welds can initiate deleterious cracks in some materials, prohibiting the use of welded thermocouples. Such is the case for the nickel-based superalloy MarM-247, which is used in the high temperature, high pressure heater heads for the Advanced Stirling Converter component of the Advanced Stirling Radioisotope Generator space power system. To overcome this limitation, a method was developed that uses small diameter contact thermocouples to measure the temperature of heater head test articles with the same level of accuracy as welded thermocouples. This paper includes a brief introduction and a background describing the circumstances that compelled the development of the contact thermocouple measurement method. Next, the paper describes studies performed on contact thermocouple readings to determine the accuracy of results. It continues on to describe in detail the developed measurement method and the evaluation of results produced. A further study that evaluates the performance of different measurement output devices is also described. Finally, a brief conclusion and summary of results is provided.

  11. Technical Guide Documenting Methodology, Indicators, and Data Sources for "Measuring Up 2004: The National and State Report Card on Higher Education." National Center #04-6

    ERIC Educational Resources Information Center

    Ryu, Mikyung

    2004-01-01

    This "Technical Guide" describes the methodology and concepts used to measure and grade the performance of the 50 states in the higher education arena. Part I presents the methodology for grading states and provides information on data collection and reporting. Part II explains the indicators that comprise each of the graded categories.…

  12. Doctoral Training in Statistics, Measurement, and Methodology in Psychology: Replication and Extension of Aiken, West, Sechrest, and Reno's (1990) Survey of PhD Programs in North America

    ERIC Educational Resources Information Center

    Aiken, Leona S.; West, Stephen G.; Millsap, Roger E.

    2008-01-01

    In a survey of all PhD programs in psychology in the United States and Canada, the authors documented the quantitative methodology curriculum (statistics, measurement, and research design) to examine the extent to which innovations in quantitative methodology have diffused into the training of PhDs in psychology. In all, 201 psychology PhD…

  13. Extreme Sea Conditions in Shallow Water: Estimation based on in-situ measurements

    NASA Astrophysics Data System (ADS)

    Le Crom, Izan; Saulnier, Jean-Baptiste

    2013-04-01

    The design of marine renewable energy devices and components is based, among other factors, on the assessment of the environmental extreme conditions (winds, currents, waves, and water level), which must be combined in order to evaluate the maximal loads on a floating/fixed structure and on the anchoring system over a determined return period. Measuring devices are generally deployed at sea over relatively short durations (a few months to a few years), typically when describing water free-surface elevation, so extrapolation methods based on hindcast data (and therefore on wave simulation models) have to be used. How to combine the action of the different loads (winds and waves, for instance) in a realistic way, and which correlation of return periods should be used, are highly topical issues. However, the assessment of the extreme condition itself remains a crucial, sensitive, and not fully solved task. In shallow water especially, the extreme wave height, Hmax, is the most significant contribution in the dimensioning of marine renewable energy devices. As a case study, existing methodologies for deep water have been applied to SEMREV, the French marine energy test site. The interest of this study goes beyond its application to the wave energy converters and floating wind turbines deployed at SEMREV: it could also be extended to the Banc de Guérande offshore wind farm planned nearby and, more generally, to pipes and communication cables, for which the same problem recurs. The paper first presents the existing measurements (wave and wind on site), the prediction chain developed via wave models, and the extrapolation methods applied to hindcast data, and formulates recommendations for improving this assessment in shallow water.
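A common form of the extrapolation step mentioned above is a peaks-over-threshold analysis: storm-peak excesses over a threshold are fit to a generalized Pareto distribution, and a return level is read off its quantile. The sketch below uses synthetic excesses, not SEMREV data; the threshold, exceedance rate, and return period are illustrative assumptions.

```python
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(2)
u = 4.0                                   # threshold on Hs (m), assumed
# Synthetic storm-peak excesses standing in for hindcast exceedances
excesses = genpareto.rvs(c=-0.1, scale=0.8, size=200, random_state=rng)

# Fit the GPD to the excesses (location fixed at 0 by construction)
c, loc, scale = genpareto.fit(excesses, floc=0.0)

lam = 10.0                                # mean exceedances per year, assumed
N = 50.0                                  # return period (years)
# N-year level: quantile exceeded once per lam*N threshold exceedances
hs_N = u + genpareto.ppf(1.0 - 1.0 / (lam * N), c, loc=0.0, scale=scale)
print(f"{N:.0f}-year Hs estimate: {hs_N:.1f} m")
```

In shallow water, depth-limited breaking caps Hmax, which is why deep-water extrapolations such as this one need the corrections the paper discusses.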

  14. Setting the light conditions for measuring root transparency for age-at-death estimation methods.

    PubMed

    Adserias-Garriga, Joe; Nogué-Navarro, Laia; Zapico, Sara C; Ubelaker, Douglas H

    2017-03-30

    Age-at-death estimation is one of the main goals in forensic identification, being an essential parameter to determine the biological profile, narrowing the possibilities of identification in cases involving missing persons and unidentified bodies. The study of dental tissues has long been considered a proper tool for age estimation, with several age estimation methods based on them. Dental age estimation methods can be divided into three categories: tooth formation and development, post-formation changes, and histological changes. While tooth formation and growth changes are important for fetal and infant consideration, when the end of dental and skeletal growth is achieved, post-formation or biochemical changes can be applied. Lamendin et al. (J Forensic Sci 37:1373-1379, 1992) developed an adult age estimation method based on root transparency and periodontal recession. The regression formula demonstrated its accuracy for individuals aged 40 to 70 years. Later on, Prince and Ubelaker (J Forensic Sci 47(1):107-116, 2002) evaluated the effects of ancestry and sex and incorporated root height into the equation, developing four new regression formulas for males and females of African and European ancestry. Even though root transparency is a key element in the method, the conditions for measuring this element have not been established. The aim of the present study is to set the light conditions, measured in lumens, that offer greater accuracy when applying the Lamendin et al. method as modified by Prince and Ubelaker. These results must also be taken into account in the application of other age estimation methodologies that use root transparency to estimate age-at-death.
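The original Lamendin et al. regression, as commonly reported in the literature, can be applied as below. The coefficients should be verified against the 1992 paper before any forensic use, and the measurements in the example are invented for illustration.

```python
# Sketch of the Lamendin et al. (1992) adult age regression as commonly
# reported: A = 0.18*P + 0.42*T + 25.53, where P and T are periodontosis
# and transparency heights expressed as percentages of root height.

def lamendin_age(periodontosis_mm, transparency_mm, root_height_mm):
    """Estimated age-at-death (years) from single-root tooth measurements."""
    P = periodontosis_mm * 100.0 / root_height_mm
    T = transparency_mm * 100.0 / root_height_mm
    return 0.18 * P + 0.42 * T + 25.53

# Illustrative measurements (mm)
age = lamendin_age(periodontosis_mm=3.0, transparency_mm=6.0,
                   root_height_mm=14.0)
print(f"estimated age-at-death: {age:.1f} years")
```

Because T depends directly on how transparency height is read, the standardized illumination conditions the study establishes feed straight into this term.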

  15. Conditional Standard Errors of Measurement for Composite Scores Using IRT

    ERIC Educational Resources Information Center

    Kolen, Michael J.; Wang, Tianyou; Lee, Won-Chan

    2012-01-01

    Composite scores are often formed from test scores on educational achievement test batteries to provide a single index of achievement over two or more content areas or two or more item types on that test. Composite scores are subject to measurement error, and as with scores on individual tests, the amount of error variability typically depends on…
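Under an IRT model, the conditional SEM at ability theta is the inverse square root of the test information, and, assuming independent measurement errors, weighted component errors combine in quadrature for a composite. A minimal 2PL sketch with synthetic item parameters (not from the article) illustrates the idea:

```python
import numpy as np

def csem_2pl(theta, a, b):
    """Conditional SEM at ability theta for a 2PL test (theta metric).

    a, b: arrays of item discriminations and difficulties.
    """
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    info = np.sum(a ** 2 * p * (1.0 - p))   # test information at theta
    return 1.0 / np.sqrt(info)

rng = np.random.default_rng(3)
# Two tests in the battery: 40 and 30 items, synthetic parameters
a1, b1 = rng.uniform(0.8, 2.0, 40), rng.normal(0.0, 1.0, 40)
a2, b2 = rng.uniform(0.8, 2.0, 30), rng.normal(0.0, 1.0, 30)

theta = 0.5
w1, w2 = 0.6, 0.4                          # composite weights, assumed
csem_composite = np.hypot(w1 * csem_2pl(theta, a1, b1),
                          w2 * csem_2pl(theta, a2, b2))
print(f"composite conditional SEM at theta={theta}: {csem_composite:.3f}")
```

Evaluating this across a grid of theta values reproduces the characteristic U-shape of conditional error: smallest where the items are most informative.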

  16. Measuring ICT Use and Contributing Conditions in Primary Schools

    ERIC Educational Resources Information Center

    Vanderlinde, Ruben; Aesaert, Koen; van Braak, Johan

    2015-01-01

    Information and communication technology (ICT) use became of major importance for primary schools across the world as ICT has the potential to foster teaching and learning processes. ICT use is therefore a central measurement concept (dependent variable) in many ICT integration studies. This data paper presents two datasets (2008 and 2011) that…

  17. Measurement of Rubidium Number Density Under Optically Thick Conditions

    DTIC Science & Technology

    2010-11-15

    Voigt profiles. A Voigt line shape, the convolution of a Gaussian (Doppler) component and a pressure-broadened Lorentzian component, is represented by equations 4.1 and 4.2 of the report. Absorbance spectra recorded at various cell conditions of temperature and pressure were then fit to a pressure-broadened Voigt profile, thereby allowing the determination of the rubidium number density. 1. Background. In recent years, alkali metals have garnered a
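The Voigt line shape referenced above can be evaluated directly with `scipy.special.voigt_profile` (the convolution of a Gaussian of standard deviation sigma with a Lorentzian of half-width gamma). The widths below are arbitrary illustrations, not the report's rubidium parameters.

```python
import numpy as np
from scipy.special import voigt_profile

sigma = 0.5   # Gaussian (Doppler) standard deviation, illustrative
gamma = 0.2   # Lorentzian (pressure-broadening) half-width, illustrative

# Evaluate over a wide window so the slow Lorentzian tails are captured
x = np.linspace(-50.0, 50.0, 20001)
g = voigt_profile(x, sigma, gamma)

# The profile is area-normalized; check with a simple Riemann sum
area = np.sum(g) * (x[1] - x[0])
print(f"peak: {g.max():.3f}, integrated area: {area:.3f}")
```

In a number-density fit, the Gaussian width is pinned by temperature while the Lorentzian width scales with perturber pressure, and the fitted line area yields the absorber density.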

  18. Thermal measurements for jets in disturbed and undisturbed crosswind conditions

    NASA Technical Reports Server (NTRS)

    Wark, Candace E.; Foss, John F.

    1988-01-01

    A direct comparison is made of the thermal field properties for a low-disturbance and a high-disturbance level condition affecting the low-temperature air jets introduced into gas turbine combustor aft sections in order both to cool the high-temperature gases and quench the combustion reactions. Sixty-four fast-response thermocouples were simultaneously sampled and corrected for their time constant effect at a downstream plane close to the jet exit. Histograms formed from independent samples were sufficiently smooth to approximate a pdf.

  19. A methodology suitable for TEM local measurements of carbon concentration in retained austenite

    SciTech Connect

    Kammouni, A.; Saikaly, W. Dumont, M.; Marteau, C.; Bano, X.; Charai, A.

    2008-09-15

    Carbon concentration in retained austenite grains is of great importance determining the mechanical properties of hot-rolled TRansformation Induced Plasticity steels. Among the different techniques available to measure such concentrations, Kikuchi lines obtained in Transmission Electron Microscopy provide a relatively easy and accurate method. The major problem however is to be able to locate an austenitic grain in the observed Transmission Electron Microscopy thin foil. Focused Ion Beam in combination with Scanning Electron Microscopy was used to successfully prepare a thin foil for Transmission Electron Microscopy and carbon concentration measurements from a 700 nm retained austenite grain.

  20. Development and characterization of an annular denuder methodology for the measurement of divalent inorganic reactive gaseous mercury in ambient air.

    PubMed

    Landis, Matthew S; Stevens, Robert K; Schaedlich, Frank; Prestbo, Eric M

    2002-07-01

    Atmospheric mercury is predominantly present in the gaseous elemental form (Hg0). However, anthropogenic emissions (e.g., incineration, fossil fuel combustion) emit, and natural processes create, particulate-phase mercury (Hg(p)) and divalent reactive gas-phase mercury (RGM). RGM species (e.g., HgCl2, HgBr2) are water-soluble and have much shorter residence times in the atmosphere than Hg0 due to their higher removal rates through wet and dry deposition mechanisms. Manual and automated annular denuder methodologies were developed and evaluated to provide high-resolution (1-2 h) ambient RGM measurements. Following collection of RGM onto KCl-coated quartz annular denuders, RGM was thermally decomposed and quantified as Hg0. Laboratory and field evaluations of the denuders found the RGM collection efficiency to be >94% and mean collocated precision to be <15%. Method detection limits for sampling durations ranging from 1 to 12 h were 6.2-0.5 pg m(-3), respectively. As part of this research, the authors observed that methods to measure Hg(p) had a significant positive artifact when RGM coexists with Hg(p). This artifact was eliminated if a KCl-coated annular denuder preceded the filter. This new atmospheric mercury speciation methodology has dramatically enhanced our ability to investigate the mechanisms of transformation and deposition of mercury in the atmosphere.

  1. METHODOLOGY FOR MEASURING PM 2.5 SEPARATOR CHARACTERISTICS USING AN AEROSIZER

    EPA Science Inventory

    A method is presented that enables the measurement of the particle size separation characteristics of an inertial separator in a rapid fashion. Overall penetration is determined for discrete particle sizes using an Aerosizer (Model LD, TSI, Incorporated, Particle Instruments/Am...

  2. METHODOLOGICAL APPROACH FOR MEASURING PRIORITY DBPS IN REVERSE OSMOSIS CONCENTRATED DRINKING WATER

    EPA Science Inventory

    Many disinfection by-products (DBPs) are formed when drinking water is chlorinated, but only a few are routinely measured or regulated. Various studies have revealed a plethora of DBPs for which sensitive and quantitative analytical methods have always been a major limiting facto...

  3. Rigorous Measures of Implementation: A Methodological Framework for Evaluating Innovative STEM Programs

    ERIC Educational Resources Information Center

    Cassata-Widera, Amy; Century, Jeanne; Kim, Dae Y.

    2011-01-01

    The practical need for multidimensional measures of fidelity of implementation (FOI) of reform-based science, technology, engineering, and mathematics (STEM) instructional materials, combined with a theoretical need in the field for a shared conceptual framework that could support accumulating knowledge on specific enacted program elements across…

  4. Standardized Test Methodology for Measuring Pressure Suit Glove Performance and Demonstration Units.

    DTIC Science & Technology

    1994-01-01

    List-of-figures residue from the report: Foam Sensor Data, Runs 1-3; Tekscan Ink Sensor Data, Runs 1-2. …the foam sensor and Tekscan techniques for range of motion measurement. With the foam sensor (runs 1-3), the pressure was increased by 1 psi every 60 seconds up to

  5. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit.

    PubMed

    Liu, Shi Qiang; Zhu, Rong

    2016-01-29

    Error compensation of micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. By using a neural network to model a complex multivariate and nonlinear coupling system, the errors can be readily compensated through a comprehensive calibration. In this paper, we also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously in one chip. The developed MIMU (100 mm × 100 mm × 100 mm) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMU, due to using a gas medium instead of a mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is, therefore, applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation, and the measurement errors of three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively.
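The compensation idea can be sketched on a toy cross-coupled 3-axis sensor model. The paper trains a full neural network; for brevity, a random tanh hidden layer with a least-squares readout stands in here, and the coupling matrix, noise level, and data are all invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# True stimuli and a synthetic cross-coupling/misalignment matrix
true = rng.uniform(-1.0, 1.0, (2000, 3))
C = np.array([[1.00, 0.08, -0.05],
              [0.06, 0.95, 0.07],
              [-0.04, 0.05, 1.03]])
raw = true @ C.T + 0.01 * rng.standard_normal(true.shape)

# Random tanh hidden layer (32 units) + least-squares output layer:
# the network learns the inverse of the coupled sensor response
W1 = 0.5 * rng.standard_normal((3, 32))
b1 = 0.1 * rng.standard_normal(32)
H = np.hstack([np.tanh(raw @ W1 + b1), np.ones((len(raw), 1))])
W2, *_ = np.linalg.lstsq(H, true, rcond=None)
compensated = H @ W2

rms_before = np.sqrt(np.mean((raw - true) ** 2))
rms_after = np.sqrt(np.mean((compensated - true) ** 2))
print(f"RMS error before: {rms_before:.4f}, after: {rms_after:.4f}")
```

The nonlinear hidden layer is what lets a single model absorb coupling, misalignment, and other deterministic errors in one calibration, as the abstract describes.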

  6. Explosive Strength of the Knee Extensors: The Influence of Criterion Trial Detection Methodology on Measurement Reproducibility.

    PubMed

    Dirnberger, Johannes; Wiesinger, Hans-Peter; Wiemer, Nicolas; Kösters, Alexander; Müller, Erich

    2016-04-01

    The present study was conducted to assess test-retest reproducibility of explosive strength measurements during single-joint isometric knee extension using the IsoMed 2000 dynamometer. Thirty-one physically active male subjects (mean age: 23.7 years) were measured on two occasions separated by 48-72 h. The intraclass correlation coefficient (ICC 2,1) and the coefficient of variation (CV) were calculated for (i) maximum torque (MVC), (ii) the peak rate of torque development (RTDpeak) as well as for (iii) the average rate of torque development (RTD) and the impulse taken at several predefined time intervals (0-30 to 0-300 ms); explosive strength variables were thereby derived in two conceptually different versions: on the one hand from the MVC trial (version I), on the other hand from the trial showing the RTDpeak (version II). High ICC values (0.80-0.99) and acceptable CV values (1.9-8.7%) could be found for MVC as well as for the RTD and the impulse taken at time intervals of ≥100 ms, regardless of whether version I or II was used. In contrast, measurements of the RTDpeak as well as the RTD and the impulse taken during the very early contraction phase (i.e., RTD/impulse over 0-30 ms and 0-50 ms) showed clearly weaker reproducibility results (ICC: 0.53-0.84; CV: 7.3-16.4%) and gave rise to considerable doubts as to clinical usefulness, especially when derived using version I. However, if there is a need to measure explosive strength for earlier time intervals in practice, it is, in view of stronger reproducibility results, recommended to concentrate on measures derived from version II, which is based on the RTDpeak trial.
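The variables compared above can be computed from a torque-time trace as follows. The synthetic exponential rise, sampling rate, and window conventions are illustrative, not the IsoMed 2000 processing chain.

```python
import numpy as np

fs = 1000.0                               # sampling rate (Hz), assumed
t = np.arange(0.0, 1.0, 1.0 / fs)         # s, contraction onset at t = 0
torque = 250.0 * (1.0 - np.exp(-t / 0.12))  # Nm, synthetic rise to MVC

mvc = torque.max()                        # maximum torque
rtd = np.gradient(torque, 1.0 / fs)       # instantaneous RTD (Nm/s)
rtd_peak = rtd.max()                      # peak rate of torque development

def window_metrics(t, torque, t_end):
    """Average RTD (Nm/s) and impulse (Nm*s) over the 0..t_end window."""
    idx = t <= t_end + 1e-9
    avg_rtd = torque[idx][-1] / t_end     # (torque change) / (window length)
    impulse = np.sum(torque[idx]) / fs    # rectangle-rule integral
    return avg_rtd, impulse

rtd_100, imp_100 = window_metrics(t, torque, 0.100)
print(f"MVC={mvc:.0f} Nm, peak RTD={rtd_peak:.0f} Nm/s, "
      f"RTD0-100={rtd_100:.0f} Nm/s, impulse0-100={imp_100:.2f} Nm*s")
```

The early windows (0-30, 0-50 ms) depend on only a handful of samples near onset, which is one reason the study finds them the least reproducible.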

  7. Radio weak lensing shear measurement in the visibility domain - I. Methodology

    NASA Astrophysics Data System (ADS)

    Rivi, M.; Miller, L.; Makhathini, S.; Abdalla, F. B.

    2016-12-01

    The high sensitivity of the new generation of radio telescopes such as the Square Kilometre Array (SKA) will allow cosmological weak lensing measurements at radio wavelengths that are competitive with optical surveys. We present an adaptation to radio data of lensfit, a method for galaxy shape measurement originally developed and used for optical weak lensing surveys. This likelihood method uses an analytical galaxy model and makes a Bayesian marginalization of the likelihood over uninteresting parameters. It has the feature of working directly in the visibility domain, which is the natural approach to adopt with radio interferometer data, avoiding systematics introduced by the imaging process. As a proof of concept, we provide results for visibility simulations of individual galaxies with flux density S ≥ 10 μJy at the phase centre of the proposed SKA1-MID baseline configuration, adopting 12 frequency channels in the band 950-1190 MHz. Weak lensing shear measurements from a population of galaxies with realistic flux and scalelength distributions are obtained after natural gridding of the raw visibilities. Shear measurements are expected to be affected by `noise bias': we estimate the bias in the method as a function of signal-to-noise ratio (SNR). We obtain additive and multiplicative bias values that are comparable to SKA1 requirements for SNR > 18 and SNR > 30, respectively. The multiplicative bias for SNR >10 is comparable to that found in ground-based optical surveys such as CFHTLenS, and we anticipate that similar shear measurement calibration strategies to those used for optical surveys may be used to good effect in the analysis of SKA radio interferometer data.
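The additive and multiplicative biases quoted above are conventionally defined through the linear model g_meas = (1 + m) g_true + c and estimated by regressing measured shear on input shear over a simulation suite. The values below are synthetic stand-ins, not lensfit outputs.

```python
import numpy as np

rng = np.random.default_rng(5)
# Synthetic input shears and noisy "measured" shears with known biases
g_true = rng.uniform(-0.05, 0.05, 1000)
g_meas = (1.0 + 0.004) * g_true + 3e-4 + 5e-4 * rng.standard_normal(1000)

# Linear fit recovers multiplicative (m) and additive (c) bias
slope, intercept = np.polyfit(g_true, g_meas, 1)
m, c = slope - 1.0, intercept
print(f"multiplicative bias m = {m:.4f}, additive bias c = {c:.1e}")
```

Binning such fits by signal-to-noise ratio, as the paper does, shows how noise bias grows toward the survey's faint limit.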

  8. Radon-222 activity flux measurement using activated charcoal canisters: revisiting the methodology.

    PubMed

    Alharbi, Sami H; Akber, Riaz A

    2014-03-01

    The measurement of radon ((222)Rn) activity flux using activated charcoal canisters was examined to investigate the distribution of the adsorbed (222)Rn in the charcoal bed and the relationship between (222)Rn activity flux and exposure time. The activity flux of (222)Rn from five sources of varying strengths was measured for exposure times of one, two, three, five, seven, 10, and 14 days. The distribution of the adsorbed (222)Rn in the charcoal bed was obtained by dividing the bed into six layers and counting each layer separately after the exposure. (222)Rn activity decreased in the layers that were farther from the exposed surface. Nevertheless, the results demonstrated that only a small correction might be required in the actual application of charcoal canisters for activity flux measurement, where calibration standards are often prepared by uniform mixing of radium ((226)Ra) in the matrix. This is because the diffusion of (222)Rn in the charcoal bed and the detection efficiency as a function of charcoal depth tend to counterbalance each other. The influence of exposure time on the measured (222)Rn activity flux was observed in two canister exposure layouts: (a) canister sealed to an open bed of the material and (b) canister sealed over a jar containing the material. The measured (222)Rn activity flux decreased as the exposure time increased. In the former layout the change was significant, with an exponential decrease as exposure time increased. In the latter, a smaller reduction in the observed activity flux with exposure time was noticed. This reduction might be related to factors such as adsorption-site saturation or back diffusion of (222)Rn gas at the canister-soil interface.
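The exposure-time dependence discussed above is partly a matter of radioactive decay during collection. A minimal sketch of the standard decay-corrected flux formula follows; the function name, units, and numerical inputs are illustrative, not the paper's.

```python
import math

RN222_HALF_LIFE_D = 3.8235                       # days
DECAY = math.log(2) / RN222_HALF_LIFE_D          # decay constant, 1/day

def rn222_flux(activity_bq, area_m2, exposure_d):
    """Flux (Bq m^-2 d^-1) from the activity adsorbed on a charcoal canister,
    correcting for 222Rn decay during the exposure:
    A = J * S * (1 - exp(-lambda*t)) / lambda  =>  solve for J."""
    return activity_bq * DECAY / (area_m2 * (1.0 - math.exp(-DECAY * exposure_d)))

# e.g. 50 Bq collected on a 0.01 m^2 canister over a 3-day exposure
print(rn222_flux(50.0, 0.01, 3.0))
```

For short exposures the correction vanishes and J ≈ A/(S·t). Note that this formula captures decay only; the adsorption-site saturation and back-diffusion effects reported above would require additional terms.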

  9. Measuring functional connectivity using MEG: Methodology and comparison with fcMRI

    PubMed Central

    Brookes, Matthew J.; Hale, Joanne R.; Zumer, Johanna M.; Stevenson, Claire M.; Francis, Susan T.; Barnes, Gareth R.; Owen, Julia P.; Morris, Peter G.; Nagarajan, Srikantan S.

    2011-01-01

    Functional connectivity (FC) between brain regions is thought to be central to the way in which the brain processes information, and abnormal connectivity is implicated in a number of diseases. The ability to study FC is therefore a key goal for neuroimaging. Functional connectivity (fc) MRI has become a popular tool for making connectivity measurements, but the technique is limited by its indirect nature. A multimodal approach is therefore an attractive means to investigate the electrodynamic mechanisms underlying hemodynamic connectivity. In this paper, we investigate resting-state FC using fcMRI and magnetoencephalography (MEG). In fcMRI, we exploit the advantages afforded by ultra-high magnetic field. In MEG, we apply envelope correlation and coherence techniques to source-space-projected MEG signals. We show that beamforming provides an excellent means to measure FC in source space using MEG data. However, care must be taken when interpreting these measurements, since cross-talk between voxels in source space can potentially lead to spurious connectivity; this must be taken into account in all studies of this type. We show good spatial agreement between FC measured independently using MEG and fcMRI; FC between sensorimotor cortices was observed using both modalities, with the best spatial agreement when MEG data are filtered into the β band. This finding helps to reduce the potential confounds associated with each modality alone: it reduces the uncertainty in spatial patterns generated by MEG (brought about by the ill-posed inverse problem), while the addition of an electrodynamic metric confirms the neural basis of fcMRI measurements. Finally, we show that multiple MEG-based FC metrics offer the potential to move beyond what is possible using fcMRI and to investigate the nature of electrodynamic connectivity. Our results extend those from previous studies and add weight to the argument that neural oscillations are intimately related to functional

  10. System Error Compensation Methodology Based on a Neural Network for a Micromachined Inertial Measurement Unit

    PubMed Central

    Liu, Shi Qiang; Zhu, Rong

    2016-01-01

    Error compensation for micromachined inertial measurement units (MIMU) is essential in practical applications. This paper presents a new compensation method using neural-network-based identification for MIMU, which solves the universal problems of cross-coupling, misalignment, eccentricity, and other deterministic errors existing in a three-dimensional integrated system. By using a neural network to model a complex multivariate and nonlinear coupled system, the errors can be readily compensated through a comprehensive calibration. We also present a thermal-gas MIMU based on thermal expansion, which measures three-axis angular rates and three-axis accelerations using only three thermal-gas inertial sensors, each of which measures one-axis angular rate and one-axis acceleration simultaneously on one chip. The developed MIMU (100 × 100 × 100 mm3) possesses the advantages of simple structure, high shock resistance, and large measuring ranges (three-axis angular rates of ±4000°/s and three-axis accelerations of ±10 g) compared with conventional MIMUs, because a gas medium replaces the mechanical proof mass as the key moving and sensing element. However, the gas MIMU suffers from cross-coupling effects, which corrupt the system accuracy. The proposed compensation method is therefore applied to compensate the system errors of the MIMU. Experiments validate the effectiveness of the compensation: the measurement errors of the three-axis angular rates and three-axis accelerations are reduced to less than 1% and 3% of the uncompensated errors in the rotation range of ±600°/s and the acceleration range of ±1 g, respectively. PMID:26840314

  11. High frequency measurement of P- and S-wave velocities on crystalline rock massif surface - methodology of measurement

    NASA Astrophysics Data System (ADS)

    Vilhelm, Jan; Slavík, Lubomír

    2014-05-01

    For the purpose of non-destructive monitoring of rock properties in underground excavations it is possible to perform repeated high-accuracy P- and S-wave velocity measurements. This contribution deals with preliminary results gained during the preparation of a micro-seismic long-term monitoring system. The field velocity measurements were made by the pulse-transmission technique directly on the rock outcrop (granite) in the Bedrichov gallery (northern Bohemia). The gallery at the experimental site was excavated using a TBM (Tunnel Boring Machine) and is used for drinking water supply, which is conveyed in a pipe. The stable measuring system and its automatic operation led to the use of piezoceramic transducers both as seismic source and as receiver. The length of the measuring base at the gallery wall was from 0.5 to 3 meters. Different transducer coupling possibilities were tested, namely with regard to the repeatability of velocity determination. The arrangement of the measuring system on the surface of the rock massif gives the S-transducers better sensitivity for P-wave measurement than the P-transducers. Similarly, P-transducers were found more suitable for S-wave velocity determination than S-transducers. The frequency-dependent attenuation of the fresh rock massif limits the frequency content of the registered seismic signals. It was found that at source-receiver distances of 0.5 m and more, frequency components above 40 kHz are significantly attenuated. Therefore, 100 kHz transducers are most suitable for exciting the seismic wave. The limited frequency range should also be taken into account in the shape of the electric impulse used to excite the piezoceramic transducer. A spike pulse generates a broadband seismic signal, short in the time domain; however, its energy after low-pass filtration in the rock is significantly lower than the energy of the seismic signal generated by a square-wave pulse. Acknowledgments: This work was partially

  12. Conditions necessary for low-level measurements of reactive oxidants

    SciTech Connect

    Nakareseisoon, S.

    1988-01-01

    Chlorine dioxide and ozone are considered to be alternatives to chlorine for the disinfection of drinking water supplies and also for the treatment of wastewaters prior to discharge. Chlorine dioxide, under normal circumstances, is reduced to chlorite ion, which is toxic. The recommended seven-day suggested no-adverse-response level (SNARL) for chlorite ion is 0.007 mg/l (7 ppb). Chlorite ion at these low levels cannot be satisfactorily determined by existing methods, and so it became necessary to develop an analytical method for determining ppb levels of chlorite ion. Such a method can be developed using differential pulse polarography (DPP). The electrochemical reduction of chlorite ion has been studied between pH 3.7-14 and in an ionic strength range of 0.05-3.0 M. The optimum conditions are pH 4.1-4.4 and an ionic strength of 0.45 M. The current under these conditions is a linear function of chlorite ion concentration from 2.77 × 10^-7 to 2.80 × 10^-4 M (19 ppb to 19 ppm). The imprecision is better than ±1.0% and ±3.4% at concentrations of 2.87 × 10^-5 M and 1.74 × 10^-6 M, respectively, with a detection limit of 1 × 10^-7 M (7 ppb). The rate of ozone decomposition has been studied in highly basic solutions (8-15 NaOH), where ozone becomes stable. A mechanism of ozone regeneration is proposed to explain the observed kinetics and to clarify the contradiction concerning the very slow observed rate of ozone decomposition in basic solution.

  13. Technical Note: New methodology for measuring viscosities in small volumes characteristic of environmental chamber particle samples

    NASA Astrophysics Data System (ADS)

    Renbaum-Wolff, L.; Grayson, J. W.; Bertram, A. K.

    2013-01-01

    Herein, a method for the determination of viscosities of small sample volumes is introduced, with important implications for the viscosity determination of particle samples from environmental chambers (used to simulate atmospheric conditions). The amount of sample needed is < 1 μl, and the technique is capable of determining viscosities (η) ranging between 10^-3 and 10^3 Pascal seconds (Pa s) in samples that cover a range of chemical properties and with real-time relative humidity and temperature control; hence, the technique should be well-suited for determining the viscosities, under atmospherically relevant conditions, of particles collected from environmental chambers. In this technique, supermicron particles are first deposited on an inert hydrophobic substrate. Then, insoluble beads (~1 μm in diameter) are embedded in the particles. Next, a flow of gas is introduced over the particles, which generates a shear stress on the particle surfaces. The sample responds to this shear stress by generating internal circulations, which are quantified with an optical microscope by monitoring the movement of the beads. The rate of internal circulation is shown to be a function of particle viscosity but independent of the particle material for a wide range of organic and organic-water samples. A calibration curve is constructed from the experimental data that relates the rate of internal circulation to particle viscosity, and this calibration curve is successfully used to predict viscosities in multicomponent organic mixtures.

  14. Improving the reliability of closed chamber methodologies for methane emissions measurement in treatment wetlands.

    PubMed

    Corbella, Clara; Puigagut, Jaume

    2013-01-01

    Non-homogeneous mixing of methane (NHM) within closed chambers was studied under laboratory conditions. The experimental set-up consisted of a PVC vented chamber of 5.3 litres effective volume fitted with a power-adjustable 12 V fan. NHM within the chamber was studied according to fan position (top vs lateral), fan airflow strength (23 vs 80 cubic feet per minute) and the mixing time before sample withdrawal (5, 10, 15 and 20 minutes). The potential bias of methane flux densities caused by NHM was addressed by monitoring the difference between linearly expected and estimated flux densities of ca. 400, ca. 800 and ca. 1,600 mg CH(4).m(-2) d(-1). Methane within the chamber was not homogeneously mixed: methane concentrations at the bottom of the chamber were 20 to 70% higher than those recorded at the middle or top sections, regardless of fan position, fan airflow strength or time before sample withdrawal. NHM led to notable biases in flux density estimation. Flux densities estimated from the top and middle sampling sections were systematically lower (ca. 50%) than expected, and flux densities estimated from bottom samples were between 10% higher and 25% lower than expected, regardless of the flux density considered.
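Closed-chamber flux densities like those above are conventionally computed from the slope of headspace concentration versus time, scaled by chamber geometry. A generic sketch follows; the chamber volume echoes the 5.3 L in the abstract, but the footprint area and concentration series are assumed values.

```python
def linear_slope(xs, ys):
    """Ordinary least-squares slope of ys against xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

def chamber_flux(times_h, conc_mg_m3, volume_m3, area_m2):
    """Flux density (mg m^-2 d^-1): F = dC/dt * V / A, with dC/dt in h^-1."""
    return linear_slope(times_h, conc_mg_m3) * volume_m3 / area_m2 * 24.0

times = [0.0, 0.25, 0.5, 0.75]   # hours after chamber closure
ch4 = [10.0, 10.5, 11.0, 11.5]   # mg CH4 m^-3 inside the chamber
print(chamber_flux(times, ch4, 5.3e-3, 0.05))
```

The bias mechanism the abstract describes enters exactly here: if samples are drawn from a section where the concentration rise is not representative of the chamber mean, the fitted slope, and hence the flux, is systematically off.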

  15. Quantitative colorimetric measurement of cellulose degradation under microbial culture conditions.

    PubMed

    Haft, Rembrandt J F; Gardner, Jeffrey G; Keating, David H

    2012-04-01

    We have developed a simple, rapid, quantitative colorimetric assay to measure cellulose degradation based on the absorbance shift of Congo red dye bound to soluble cellulose. We term this assay "Congo Red Analysis of Cellulose Concentration," or "CRACC." CRACC can be performed directly in culture media, including rich and defined media containing monosaccharides or disaccharides (such as glucose and cellobiose). We show example experiments from our laboratory that demonstrate the utility of CRACC in probing enzyme kinetics, quantifying cellulase secretion, and assessing the physiology of cellulolytic organisms. CRACC complements existing methods to assay cellulose degradation, and we discuss its utility for a variety of applications.
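Colorimetric assays of this kind rest on a linear standard curve of absorbance shift versus cellulose concentration, which is then inverted for unknowns. A generic sketch follows; the calibration points are invented for illustration and are not values from the paper.

```python
def fit_line(conc, absorbance):
    """Least-squares slope and intercept of absorbance vs concentration."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(absorbance) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(conc, absorbance))
             / sum((x - mx) ** 2 for x in conc))
    return slope, my - slope * mx

def conc_from_absorbance(a, slope, intercept):
    """Invert the standard curve to estimate cellulose concentration."""
    return (a - intercept) / slope

# hypothetical standards: g/L soluble cellulose vs Congo red absorbance shift
standards = [0.0, 0.5, 1.0, 2.0]
shifts = [0.02, 0.27, 0.52, 1.02]
m, b = fit_line(standards, shifts)
print(conc_from_absorbance(0.77, m, b))
```

Running the curve directly in culture media, as the abstract emphasizes, simply means the standards must be prepared in the same medium so that background absorbance ends up in the intercept.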

  16. Manipulating measurement scales in medical statistical analysis and data mining: A review of methodologies

    PubMed Central

    Marateb, Hamid Reza; Mansourian, Marjan; Adibi, Peyman; Farina, Dario

    2014-01-01

    Background: selecting the correct statistical test and data mining method depends strongly on the measurement scale of the data, the type of variables, and the purpose of the analysis. Different measurement scales are studied in detail, and statistical comparison, modeling, and data mining methods are reviewed using several medical examples. We present two clustering examples on ordinal variables, a more challenging variable type for analysis, using the Wisconsin Breast Cancer Data (WBCD). Ordinal-to-interval scale conversion example: a breast cancer database of nine 10-level ordinal variables for 683 patients was analyzed with two ordinal-scale clustering methods. The performance of the clustering methods was assessed by comparison with the gold-standard groups of malignant and benign cases that had been identified by clinical tests. Results: the sensitivity and accuracy of the two clustering methods were 98% and 96%, respectively; their specificity was comparable. Conclusion: using a clustering algorithm appropriate to the measurement scale of the variables in a study grants high performance. Moreover, descriptive and inferential statistics, as well as the modeling approach, must be selected based on the scale of the variables. PMID:24672565
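Assessing clusters against gold-standard malignant/benign labels, as done above, reduces to confusion-matrix metrics once cluster labels are matched to classes. A minimal sketch with made-up labels (1 = malignant) follows; the numbers are not the WBCD results.

```python
def diagnostic_metrics(truth, pred):
    """Sensitivity, specificity and accuracy of predicted labels
    against gold-standard labels (1 = positive/malignant)."""
    tp = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(truth, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(truth, pred) if t == 1 and p == 0)
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(truth)

gold = [1, 1, 1, 0, 0, 0]
clusters = [1, 1, 0, 0, 0, 1]   # cluster labels after matching to classes
print(diagnostic_metrics(gold, clusters))
```

One design point: for unsupervised clustering the label-to-class matching must be fixed first (e.g. by majority overlap), otherwise sensitivity and specificity are arbitrary under label permutation.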

  17. [Nitrous oxide emissions from municipal solid waste landfills and its measuring methodology: a review].

    PubMed

    Jia, Ming-Sheng; Wang, Xiao-Jun; Chen, Shao-Hua

    2014-06-01

    Nitrous oxide (N2O) is one of the three major greenhouse gases and the dominant ozone-depleting substance. Landfilling is the major approach to the treatment and disposal of municipal solid waste (MSW), and MSW landfills can be an important anthropogenic source of N2O emissions. Measurements at lab-scale and full-scale landfills have demonstrated that N2O can be emitted in substantial amounts from MSW landfills; however, a large variation in reported emission values exists. Currently, the mechanisms of N2O production and emission in landfills, and their contribution to global warming, remain insufficiently studied. Meanwhile, obtaining reliable N2O flux data in landfills remains a challenge with existing in-situ measurement techniques. This paper summarizes the relevant literature and analyzes the potential production and emission mechanisms of N2O in traditional anaerobic sanitary landfills, treating the buried MSW and the cover soil separately. The corresponding mechanisms in nitrogen-removal bioreactor landfills are also analyzed. Finally, the applicability of existing in-situ approaches to measuring N2O fluxes in landfills, such as chamber and micrometeorological methods, is discussed, and areas in which further research concerning N2O emissions from landfills is urgently required are proposed.

  18. Vertical profiles of aerosol volume from high-spectral-resolution infrared transmission measurements. I. Methodology.

    PubMed

    Eldering, A; Irion, F W; Chang, A Y; Gunson, M R; Mills, F P; Steele, H M

    2001-06-20

    The wavelength-dependent aerosol extinction in the 800-1250-cm(-1) region has been derived from ATMOS (atmospheric trace molecule spectroscopy) high-spectral-resolution IR transmission measurements. Using models of aerosol and cloud extinction, we performed weighted nonlinear least-squares fitting to determine the aerosol-volume columns and vertical profiles of stratospheric sulfate aerosol and cirrus cloud volume. Extinction modeled using cold-temperature aerosol optical constants for a 70-80% sulfuric-acid-water solution shows good agreement with the measurements, and the derived aerosol volumes for a 1992 occultation are consistent with data from other experiments after the eruption of Mt. Pinatubo. The retrieved sulfuric acid aerosol-volume profiles are insensitive to the aerosol size distribution and somewhat sensitive to the set of optical constants used. The nonspherical cirrus extinction model agrees well with a 1994 mid-latitude measurement, indicating the presence of cirrus clouds at the tropopause.
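The weighted least-squares retrieval described above can be sketched for the special case in which extinction is linear in the component volumes, extinction(λ) = Σ V_i k_i(λ); the full retrieval in the paper is nonlinear, and the spectra and weights below are invented purely for illustration.

```python
def weighted_lsq_2(k1, k2, y, w):
    """Solve min over (v1, v2) of sum_j w_j (y_j - v1*k1_j - v2*k2_j)^2
    via the 2x2 normal equations."""
    a11 = sum(wi * a * a for wi, a in zip(w, k1))
    a22 = sum(wi * b * b for wi, b in zip(w, k2))
    a12 = sum(wi * a * b for wi, a, b in zip(w, k1, k2))
    b1 = sum(wi * a * yi for wi, a, yi in zip(w, k1, y))
    b2 = sum(wi * b * yi for wi, b, yi in zip(w, k2, y))
    det = a11 * a22 - a12 * a12
    return (a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det

# invented extinction-per-unit-volume spectra for sulfate aerosol and cirrus
k_sulfate = [0.9, 1.2, 1.6, 1.1, 0.7]
k_cirrus = [0.4, 0.5, 0.6, 0.8, 1.0]
weights = [1.0, 2.0, 2.0, 1.0, 0.5]
observed = [2.0 * a + 3.0 * b for a, b in zip(k_sulfate, k_cirrus)]
print(weighted_lsq_2(k_sulfate, k_cirrus, observed, weights))
```

The weights play the role of the measurement-uncertainty weighting in the paper's fit; components are separable only because their spectral shapes differ.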

  19. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 26 2013-07-01 2013-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  20. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 24 2010-07-01 2010-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  1. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 25 2011-07-01 2011-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  2. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 25 2014-07-01 2014-07-01 false Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  3. 40 CFR 201.25 - Measurement location and weather conditions for measurement on receiving property of the noise of...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 26 2012-07-01 2011-07-01 true Measurement location and weather conditions for measurement on receiving property of the noise of retarders, car coupling, locomotive load... EQUIPMENT; INTERSTATE RAIL CARRIERS Measurement Criteria § 201.25 Measurement location and...

  4. Thermal conductivity measurements of particulate materials under Martian conditions

    NASA Technical Reports Server (NTRS)

    Presley, M. A.; Christensen, P. R.

    1993-01-01

    The mean particle diameter of surficial units on Mars has been approximated by applying thermal inertia determinations from the Mariner 9 Infrared Radiometer and the Viking Infrared Thermal Mapper data together with thermal conductivity measurements. Several studies have used this approximation to characterize surficial units and infer their nature and possible origin. Such interpretations are possible because previous measurements of the thermal conductivity of particulate materials have shown that particle size significantly affects thermal conductivity under martian atmospheric pressures. The transfer of thermal energy due to collisions of gas molecules is the predominant mechanism of thermal conductivity in porous systems at gas pressures above about 0.01 torr. At martian atmospheric pressures the mean free path of the gas molecules becomes greater than the effective distance over which conduction takes place between the particles. Gas molecules are then more likely to collide with the solid particles than with each other. The average heat transfer distance between particles, which is related to particle size, shape and packing, thus determines how fast heat will flow through a particulate material. The derived one-to-one correspondence of thermal inertia to mean particle diameter implies a certain homogeneity in the materials analyzed. Yet the samples used were often characterized by fairly wide ranges of particle sizes, with little information about the possible distribution of sizes within those ranges. Interpretation of thermal inertia data is further limited by the lack of data on other effects on the interparticle spacing relative to particle size, such as particle shape, bimodal or polymodal mixtures of grain sizes, and the formation of salt cements between grains. To address these limitations and to provide a more comprehensive set of thermal conductivities vs. particle size, a linear heat source apparatus, similar to that of Cremers, was assembled to
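The pressure dependence described above can be quantified with the kinetic-theory mean free path, λ = k_B T / (√2 π d² p). The molecular diameter and the Martian surface conditions below are representative values I have assumed, not figures from the abstract.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K

def mean_free_path(temp_k, pressure_pa, diameter_m=3.3e-10):
    """Kinetic-theory mean free path of a gas (m); the default diameter
    is an assumed effective value for a CO2-like molecule."""
    return K_B * temp_k / (math.sqrt(2) * math.pi * diameter_m ** 2 * pressure_pa)

# Mars surface (~210 K, ~600 Pa): mean free path on the order of 10 micrometres,
# comparable to or larger than pore spaces in fine-grained material
print(mean_free_path(210.0, 600.0))

# Earth sea level (~288 K, ~101325 Pa): under 100 nanometres
print(mean_free_path(288.0, 101325.0))
```

The two orders of magnitude between the cases are what make gas-mediated conduction, and hence thermal inertia, particle-size-sensitive on Mars but not at terrestrial pressures.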

  5. Orientation Uncertainty of Structures Measured in Cored Boreholes: Methodology and Case Study of Swedish Crystalline Rock

    NASA Astrophysics Data System (ADS)

    Stigsson, Martin

    2016-11-01

    Many engineering applications in fractured crystalline rocks use measured orientations of structures, such as rock contacts and fractures, and of lineated objects, such as foliation and rock stress, mapped in boreholes as their foundation. Although these measurements are afflicted with uncertainties, very few attempts to quantify their magnitudes and their effects on the inferred orientations have been reported. Relying solely on the specified tool imprecision may considerably underestimate the actual uncertainty space. The present work identifies nine sources of uncertainty, develops models for inferring their magnitudes, and points out possible implications for inferred orientation models and thereby effects on downstream models. The uncertainty analysis builds on a unique data set from site investigations performed by the Swedish Nuclear Fuel and Waste Management Co. (SKB). During these investigations, more than 70 boreholes with a maximum depth of 1 km were drilled in crystalline rock, with a cumulative length of more than 34 km and almost 200,000 single fracture intercepts. The work presented here therefore relies on fracture orientations; however, the techniques for inferring the magnitude of orientation uncertainty may be applied to all types of structures and lineated objects in boreholes. The uncertainties are not solely detrimental, but can be valuable, provided that the reason for their presence is properly understood and their magnitudes correctly inferred. 
The main findings of this work are as follows: (1) knowledge of the orientation uncertainty is crucial in order to infer a correct orientation model and the parameters coupled to the fracture sets; (2) it is important to perform multiple measurements to be able to infer the actual uncertainty instead of relying on the theoretical uncertainty provided by the manufacturers; (3) it is important to use the most appropriate tool for the prevailing circumstances; and (4) the single most

  6. Hydrogen isotope measurement of bird feather keratin, one laboratory's response to evolving methodologies.

    PubMed

    Fan, Majie; Dettman, David L

    2015-01-01

    Hydrogen in organic tissue resides in a complex mixture of molecular contexts. Some hydrogen, called non-exchangeable (H(non)), is strongly bound, and its isotopic ratio is fixed when the tissue is synthesized. Other pools of hydrogen, called exchangeable hydrogen (H(ex)), constantly exchange with ambient water vapor. The measurement of δ(2)H(non) in organic tissues such as hair or feather therefore requires an analytical process that accounts for exchangeable hydrogen. In this study, swan feather and sheep wool keratin were used to test the effects of sample drying and capsule closure on the measurement of δ(2)H(non) values, and the rate of back-reaction with ambient water vapor. Homogeneous feather or wool keratins were also calibrated at room temperature for use as control standards to correct for the effects of exchangeable hydrogen on feathers. Total δ(2)H values of both feather and wool samples showed large changes throughout the first ∼6 h of drying. Desiccant plus low vacuum seems to be more effective than room-temperature vacuum pumping for drying samples. The degree of capsule closure affects exchangeable hydrogen equilibration and drying, with closed capsules responding more slowly. Using one control keratin standard to correct the δ(2)H(ex) value for a batch of samples leads to internally consistent δ(2)H(non) values for other calibrated keratins run as unknowns. When placed in the context of other recent improvements in the measurement of keratin δ(2)H(non) values, we make recommendations for sample handling, data calibration and the reporting of results.
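The correction that the control standards enable can be sketched with a two-pool mixing model. The exchangeable fraction f_ex and every δ value below are invented for illustration; real comparative-equilibration protocols typically use multiple calibrated standards rather than this single-standard shortcut.

```python
def correct_d2h_non(d2h_total_sample, d2h_total_std, d2h_non_std, f_ex):
    """Recover the non-exchangeable d2H of a sample equilibrated with the
    same ambient vapor as a control standard of known d2H_non.
    Model: d2H_total = f_ex * d2H_ex + (1 - f_ex) * d2H_non."""
    # the standard reveals the (shared) exchangeable-pool contribution
    exch_term = d2h_total_std - (1.0 - f_ex) * d2h_non_std
    return (d2h_total_sample - exch_term) / (1.0 - f_ex)

# synthetic check: build totals from known pools, then invert
f_ex = 0.15
vapor = -60.0                            # d2H imposed by ambient water vapor
std_non, sample_non = -100.0, -120.0
std_total = (1 - f_ex) * std_non + f_ex * vapor
sample_total = (1 - f_ex) * sample_non + f_ex * vapor
print(correct_d2h_non(sample_total, std_total, std_non, f_ex))   # ~ -120
```

The model also makes the abstract's drying concern concrete: incomplete drying or back-reaction changes the exchangeable term between standard and sample, breaking the shared-vapor assumption the correction relies on.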

  7. A new methodology for phase-locking value: a measure of true dynamic functional connectivity

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Bae, K. Ty; Roberts, Timothy P. L.

    2012-03-01

    Phase-locking value (PLV) is used to measure the phase synchrony of narrowband signals and therefore provides a measure of dynamic functional connectivity (DFC) of brain interactions. Currently used PLV methods compute the convolution of the signal at the target frequency with a complex Gabor wavelet centered at that frequency. The phase of this convolution is extracted for all time-bins over trials for a pair of neural signals. These time-bins set a limit on the temporal resolution of PLV, and hence of DFC, so these methods cannot provide a true DFC in a strict sense. PLV is defined as the absolute value of the characteristic function of the difference of the instantaneous phases (IP) of two analytic signals, evaluated at s = 1; it is a function of time. For a narrowband signal in stationary Gaussian white noise, we investigated the statistics of (i) its phase, (ii) the maximum likelihood estimate of its phase, and (iii) the phase-locked loop (PLL) measurement of its phase; derived the analytic form of the probability density function (pdf) of the difference of IPs; and expressed this pdf in terms of the signal-to-noise ratio (SNR) of the signals. PLV is finally given by analytic formulas in terms of the SNRs of the pair of neural signals under investigation. In this new approach, SNR, and hence PLV, is evaluated at any time instant over repeated trials. Thus, the new approach can provide a true DFC via PLV. This paper presents detailed derivations of this approach and results obtained from simulations of magnetoencephalography (MEG) data.
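The definition used above, PLV as the absolute value of the characteristic function of the IP difference at s = 1, reduces over trials to |⟨e^{iΔφ}⟩|. A minimal numerical sketch follows; the phase differences are synthetic, not MEG-derived.

```python
import cmath
import math

def plv(phase_diffs):
    """Phase-locking value: |mean over trials of exp(i * delta_phi)|.
    1 = perfect phase locking; ~0 = no consistent phase relation."""
    return abs(sum(cmath.exp(1j * d) for d in phase_diffs) / len(phase_diffs))

locked = [0.3] * 8                                   # same phase lag on every trial
unlocked = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # spread evenly on the circle
print(plv(locked), plv(unlocked))
```

The locked case gives 1 and the evenly spread case gives (numerically) 0, which is the contrast the PLV statistic is built to detect; the paper's contribution is replacing the trial-averaged estimate in each time-bin with an analytic expression in terms of the signals' SNRs.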

  8. Conceptual and methodological challenges to measuring political commitment to respond to HIV

    PubMed Central

    2011-01-01

    Background Researchers have long recognized the importance of a central government’s political “commitment” in order to mount an effective response to HIV. The concept of political commitment remains ill-defined, however, and little guidance has been given on how to measure this construct and its relationship with HIV-related outcomes. Several countries have experienced declines in HIV infection rates, but conceptual difficulties arise in linking these declines to political commitment as opposed to underlying social and behavioural factors. Methods This paper first presents a critical review of the literature on existing efforts to conceptualize and measure political commitment to respond to HIV and the linkages between political commitment and HIV-related outcomes. Based on the elements identified in this review, the paper then develops and presents a framework to assist researchers in making choices about how to assess a government's level of political commitment to respond to HIV and how to link political commitment to HIV-related outcomes. Results The review of existing studies identifies three components of commitment (expressed, institutional and budgetary commitment) as different dimensions along which commitment can be measured. The review also identifies normative and ideological aspects of commitment and a set of variables that mediate and moderate political commitment that need to be accounted for in order to draw valid inferences about the relationship between political commitment and HIV-related outcomes. The framework summarizes a set of steps that researchers can follow in order to assess a government's level of commitment to respond to HIV and suggests ways to apply the framework to country cases. Conclusions Whereas existing studies have adopted a limited and often ambiguous conception of political commitment, we argue that conceiving of political commitment along a greater number of dimensions will allow researchers to draw a more complete

  9. Active microwave measurements of Arctic sea ice under summer conditions

    NASA Technical Reports Server (NTRS)

    Onstott, R. G.; Gogineni, S. P.

    1985-01-01

    Radar provides a valuable tool in the study of sea-ice conditions and the solution of sea-ice operational problems. For this reason, the U.S. and Canada have conducted studies to define a bilateral synthetic aperture radar (SAR) satellite program. The present paper is concerned with work which has been performed to explore the needs associated with the study of sea-ice-covered waters. The design of a suitable research or operational spaceborne SAR or real aperture radar must be based on an adequate knowledge of the backscatter coefficients of the ice features which are of interest. In order to obtain the needed information, studies involving the use of a helicopter were conducted. In these studies L-C-X-Ku-band calibrated radar data were acquired over areas of Arctic first-year and multiyear ice during the first half of the summer of 1982. The results show that the microwave response in the case of sea ice is greatly influenced by summer melt, which produces significant changes in the properties of the snowpack and ice sheet.

  10. Acceptance Plan and Performance Measurement Methodology for the ITER Cryoline System

    NASA Astrophysics Data System (ADS)

    Badgujar, S.; Bonneton, M.; Shah, N.; Chalifour, M.; Chang, H.-S.; Fauve, E.; Forgeas, A.; Navion-Maillot, N.; Sarkar, B.

The cryoline (CL) system of ITER consists of a complex network of vacuum-insulated multi- and single-process pipelines distributed over three different areas with a total length of about 5 km. The thermal performance of the CL system will be measured during the final acceptance tests using the ITER cryoplant and cryo-distribution (CD) infrastructure. The proposed method is based on temperature measurements of a small calibrated cryogenic helium flow through the lines. The cryoplant will be set to establish constant pressure and temperature, whereas dedicated heaters and valves in the CD will be used to generate a stable mass flow rate.

  11. An evaluation of advantages and cost measurement methodology for leasing in the health care industry.

    PubMed

    Henry, J B; Roenfeldt, R L

    1977-01-01

    Lease financing in hospitals is growing rapidly. Many articles published on the topic of lease financing point only to the benefits that may be derived. Very few articles actually analyze the pros and cons of leasing from a financial cost measurement point of view, which includes real world parameters. This article critically evaluates two articles published in this issue which lead the reader to believe leasing for the most part is a bargain when compared to debt financing. The authors discuss some misconceptions in these articles and point out some facts viewed from a financial analyst's position.

  12. Surface strains induced by measured loads on teeth in vivo: a methodological study.

    PubMed

    Nohl, F S; Setchell, D J

    2000-03-01

    Visual feedback enabled three subjects to apply predetermined near-axial loads to the incisal edge of an intact maxillary central incisor. In two subjects, principal strains and orientations developed on the labial surface of the intact incisor were resolved from strains recorded with a multiple element strain gauge. Load application was accurate and precise enough to allow resolution of strains induced by target loads of 10 to 50 N. Axially orientated compressive labial surface strains were induced by measured loads. The method could be used to validate bench-top stress analyses and investigate the effects of restoration on the structural integrity of teeth.
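    The resolution of principal strains from a multi-element gauge can be sketched for the common case of a 0°/45°/90° rectangular rosette (an assumption for illustration; the abstract does not state the gauge geometry):

    ```python
    import math

    def principal_strains(e0, e45, e90):
        """Principal strains from a rectangular (0/45/90 degree) strain rosette;
        e0, e45, e90 are the three gauge readings (e.g. in microstrain)."""
        centre = (e0 + e90) / 2.0
        radius = math.sqrt((e0 - e45) ** 2 + (e45 - e90) ** 2) / math.sqrt(2.0)
        return centre + radius, centre - radius  # (max, min) principal strains
    ```

    For a purely uniaxial field with readings 100, 50 and 0 microstrain, this recovers principal strains of 100 and 0, as expected.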

  13. Measurement and analysis of active synchrotron mirrors under operating conditions

    NASA Astrophysics Data System (ADS)

    Sutter, John P.; Alcock, Simon G.; Sawhney, Kawal

    2013-05-01

    At the Diamond Light Source, in situ slope error measurements using the pencil-beam method have enabled X-ray mirror surfaces to be examined in their beamline environment. A surface corrugation common to several bimorph mirrors and the removal of that corrugation by repolishing were both confirmed using this method. In the same way, mirrors curved in a controlled way with bending actuators and sag compensators could also be optimized. Fits to the elastic bending of ideal beams using the Euler-Bernoulli model have been performed on the slope errors of a mechanically bent mirror in order to distinguish bender curvatures from gravitational distortion and to calculate the compensating force that most reduces the latter effect. A successful improvement of the sag compensation mechanism of a vertically focusing mirror was also achieved, aided by a previously tested method for optimizing the settings of a mirror's actuators using pencil-beam scans.

  14. A scuba diving direct sediment sampling methodology on benthic transects in glacial lakes: procedure description, safety measures, and tests results.

    PubMed

    Pardo, Alfonso

    2014-11-01

This work presents an in situ sediment sampling method on benthic transects, specifically intended for scientific scuba diver teams. It was originally designed and developed to sample benthic surface and subsurface sediments and subaqueous soils in glacial lakes up to a maximum depth of 25 m. Tests were conducted on the Sabocos and Baños tarns (i.e., cirque glacial lakes) in the Spanish Pyrenees. Two 100 m transects were surveyed, ranging from 24.5 m to 0 m deep in Sabocos and from 14 m to 0 m deep in Baños. In each test, 10 sediment samples of 1 kg each were successfully collected and transported to the surface. This sampling method proved operative even in low visibility conditions (<2 m). Additional ice diving sampling tests were conducted in Sabocos and Truchas tarns. This sampling methodology can be easily adapted to accomplish underwater sampling campaigns in nonglacial lakes and other continental water or marine environments.

  15. Measurement of plasma-derived substance P: biological, methodological, and statistical considerations.

    PubMed

    Campbell, Donald E; Raftery, Nancy; Tustin, Richard; Tustin, Nancy B; Desilvio, Michelle L; Cnaan, Avital; Aye, Pyone Pyone; Lackner, Andrew A; Douglas, Steven D

    2006-11-01

    The undecapeptide substance P (SP) is a member of the tachykinin family of neurotransmitters, which has a pivotal role in the regulation of inflammatory and immune responses. One of the major barriers to the study of the in vivo role of SP in a number of immune disorders is the accurate measurement of SP in fluids. This is reflected in the variability of reported SP levels in serum and plasma of humans in both healthy and diseased states. This study was initiated in order to identify sources of variability by the comparative evaluation of the influences of sample preparation and analytical detection methods on the measurement of SP in plasma. The results indicate that sample preparation (peptide extraction versus no extraction) and the choice of analytical method for SP quantitation may yield significantly different values and may contribute to the variability in SP values reported in the literature. These results further emphasize the need for careful consideration in the selection of methods for SP quantitation, as well as caution in the interpretation and comparison of data reported in the literature.

  16. Practical appraisal of sustainable development-Methodologies for sustainability measurement at settlement level

    SciTech Connect

    Moles, Richard; Foley, Walter; Morrissey, John; O'Regan, Bernadette

    2008-02-15

    This paper investigates the relationships between settlement size, functionality, geographic location and sustainable development. Analysis was carried out on a sample of 79 Irish settlements, located in three regional clusters. Two methods were selected to model the level of sustainability achieved in settlements, namely, Metabolism Accounting and Modelling of Material and Energy Flows (MA) and Sustainable Development Index Modelling. MA is a systematic assessment of the flows and stocks of material within a system defined in space and time. The metabolism of most settlements is essentially linear, with resources flowing through the urban system. The objective of this research on material and energy flows was to provide information that might aid in the development of a more circular pattern of urban metabolism, vital to sustainable development. In addition to MA, a set of forty indicators were identified and developed. These target important aspects of sustainable development: transport, environmental quality, equity and quality of life issues. Sustainability indices were derived through aggregation of indicators to measure dimensions of sustainable development. Similar relationships between settlement attributes and sustainability were found following both methods, and these were subsequently integrated to provide a single measure. Analysis identified those attributes of settlements preventing, impeding or promoting progress towards sustainability.
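    The aggregation step described above can be sketched as min-max normalisation of each indicator followed by an unweighted mean per settlement (the authors' actual indicator weights and aggregation scheme may differ):

    ```python
    def normalise(values):
        """Min-max normalise one indicator across all settlements."""
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        return [(v - lo) / span for v in values]

    def sustainability_index(indicator_table):
        """indicator_table: one row per settlement, one column per indicator.
        Normalise each indicator, then average per settlement (equal weights)."""
        norm_cols = [normalise(col) for col in zip(*indicator_table)]
        return [sum(row) / len(row) for row in zip(*norm_cols)]
    ```

    With equal weights the index rewards settlements that score well across all indicators rather than excelling on one.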

  17. Ruthenium-catalyzed N-alkylation of amines with alcohols under mild conditions using the borrowing hydrogen methodology.

    PubMed

    Enyong, Arrey B; Moasser, Bahram

    2014-08-15

    Using a simple amino amide ligand, ruthenium-catalyzed one-pot alkylation of primary and secondary amines with simple alcohols was carried out under a wide range of conditions. Using the alcohol as solvent, alkylation was achieved under mild conditions, even as low as room temperature. Reactions occurred with high conversion and selectivity in many cases. Reactions can also be carried out at high temperatures in organic solvent with high selectivity using stoichiometric amounts of the alcohol.

  18. Methodology for measurement of fault latency in a digital avionic miniprocessor

    NASA Technical Reports Server (NTRS)

    Mcgough, J. G.; Swern, F.; Bavuso, S. J.

    1981-01-01

Investigations regarding the synthesis of a reliability assessment capability for fault-tolerant computer-based systems have been conducted for several years. In 1978 a pilot study was conducted to test the feasibility of measuring detection coverage and investigating the dynamics of fault propagation in a digital computer. A description is presented of an investigation concerned with the applicability of previous results to a real avionics processor. The obtained results show that emulation is a practicable approach to failure modes and effects analysis of a digital processor. The emulated processor runs only 20,000 to 25,000 times slower on a PDP-10 host computer than the actual processor. As a consequence, large numbers of faults can be studied at relatively little cost and in a timely manner.

  19. New contactless eddy current non-destructive methodology for electric conductivity measurement

    NASA Astrophysics Data System (ADS)

    Bouchala, T.; Abdelhadi, B.; Benoudjit, A.

    2015-01-01

In this paper, a new method of contactless electric conductivity measurement is developed. This method is essentially based on the association of the coupled electric field forward model, which we have recently developed, with a simple and efficient search algorithm. The proposed method is very fast: 1.3 s is sufficient to calculate electric conductivity, on a 2 GHz CPU with 3 GB of RAM, for a starting search interval of 1.72-17.2 %IACS and a tolerance of 1.72 × 10⁻⁵ %IACS. A study of the calculation time as a function of mesh density and starting interval width showed that an optimal choice has to be made in order to improve speed while preserving precision. Given its speed and simplicity of implementation, this method is better suited to automated applications than direct-current techniques using the Van der Pauw geometry.
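    The search step can be sketched as a bisection over the conductivity interval. Here a toy linear function stands in for the coupled electric field forward model (the real model is a numerical field solution; names and values below are illustrative):

    ```python
    def find_conductivity(measured, forward_model, lo, hi, tol=1.72e-5):
        """Bisection over conductivity (%IACS); forward_model must be
        monotonically increasing in conductivity over [lo, hi]."""
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            if forward_model(mid) < measured:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    # Toy stand-in for the coupled electric field forward model
    # (illustrative only; the real model is a numerical field solution).
    model = lambda sigma: 0.3 * sigma
    sigma = find_conductivity(3.0, model, 1.72, 17.2)  # converges near 10.0 %IACS
    ```

    The interval width halves each iteration, which is why the reported run time depends on the starting interval as well as the mesh density of the forward model.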

  20. Adolescent daily and general maladjustment: is there reactivity to daily repeated measures methodologies?

    PubMed

    Nishina, Adrienne

    2012-01-01

The present study examined whether repeated exposure to daily surveys about negative social experiences predicts changes in adolescents' daily and general maladjustment, and whether question content moderates these changes. Across a 2-week period, 6th-grade students (N = 215; mode age = 11) completed 5 daily reports covering either experienced negative events only or both experienced and witnessed negative events, or they completed no daily reports. General maladjustment was measured in 2-week intervals before, at the end of, and 2 weeks after the daily report study. Daily maladjustment either decreased or did not change across the 5 daily report exposures. General maladjustment decreased across the three 2-week intervals. Combined, results indicate that short-term daily report studies do not place youth at risk for increased maladjustment.

  1. Measurements of alpha particle energy using nuclear tracks in solids methodology.

    PubMed

    Espinosa, G; Amero, C; Gammage, R B

    2002-01-01

In this paper we present a method for the measurement of alpha particle energy using polycarbonate materials as nuclear track detectors (NTDs). This method is based on the interaction of the radiation with the solid-state materials, using the relationship between the energy deposited in the material by the ionising particle and the track developed after an established chemical process. The determination of the geometrical parameters of the formed track, such as major axis, minor axis and overall track length, permits determination of the energy of the alpha particle. The track analysis is performed automatically using a digital image system, and the data are processed in a PC with commercial software. In this experiment ¹⁴⁸Gd, ²³⁸U, ²³⁰Th, ²³⁹Pu and ²⁴⁴Cm alpha particle emitters were used. The values for alpha particle energy resolution, the linear response to energy, the confidence in the results and the automation of the procedure make this method a promising analysis system.

  2. Methodological considerations for global analysis of cellular FLIM/FRET measurements

    NASA Astrophysics Data System (ADS)

    Adbul Rahim, Nur Aida; Pelet, Serge; Kamm, Roger D.; So, Peter T. C.

    2012-02-01

Global algorithms can improve the analysis of fluorescence resonance energy transfer (FRET) measurements based on fluorescence lifetime microscopy. However, global analysis of FRET data is also susceptible to experimental artifacts. This work examines several common artifacts and suggests remedial experimental protocols. Specifically, we examined the accuracy of different methods for instrument response extraction and propose an adaptive method based on the mean lifetime of fluorescent proteins. We further examined the effects of image segmentation and a priori constraints on the accuracy of lifetime extraction. Methods to test the applicability of global analysis on cellular data are proposed and demonstrated. The accuracy of global fitting degrades with lower photon count. By systematically tracking the effect of the minimum photon count on lifetime and FRET prefactors when carrying out global analysis, we demonstrate a correction procedure to recover the correct FRET parameters, allowing us to obtain protein interaction information even in dim cellular regions with photon counts as low as 100 per decay curve.
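    The dependence of lifetime extraction on photon count can be illustrated with a toy mono-exponential estimator (a deliberate simplification: the paper's global analysis fits multi-component decays convolved with an instrument response):

    ```python
    import random
    import statistics

    def simulate_decay(tau, n_photons, seed=0):
        """Draw photon arrival times from an ideal mono-exponential decay."""
        rng = random.Random(seed)
        return [rng.expovariate(1.0 / tau) for _ in range(n_photons)]

    def fit_lifetime(arrival_times):
        """Maximum-likelihood lifetime for an ideal mono-exponential decay
        (no instrument response, infinite collection window): the mean."""
        return statistics.mean(arrival_times)
    ```

    The estimator's scatter grows as the photon count falls (roughly as tau divided by the square root of the count), which is why the paper tracks lifetimes and FRET prefactors against a minimum photon count before trusting dim regions.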

  3. Phoretic Force Measurement for Microparticles Under Microgravity Conditions

    NASA Technical Reports Server (NTRS)

    Davis, E. J.; Zheng, R.

    1999-01-01

    This theoretical and experimental investigation of the collisional interactions between gas molecules and solid and liquid surfaces of microparticles involves fundamental studies of the transfer of energy, mass and momentum between gas molecules and surfaces. The numerous applications include particle deposition on semiconductor surfaces and on surfaces in combustion processes, containerless processing, the production of nanophase materials, pigments and ceramic precursors, and pollution abatement technologies such as desulfurization of gaseous effluents from combustion processes. Of particular emphasis are the forces exerted on microparticles present in a nonuniform gas, that is, in gaseous surroundings involving temperature and concentration gradients. These so-called phoretic forces become the dominant forces when the gravitational force is diminished, and they are strongly dependent on the momentum transfer between gas molecules and the surface. The momentum transfer, in turn, depends on the gas and particle properties and the mean free path and kinetic energy of the gas molecules. The experimental program involves the particle levitation system shown. A micrometer size particle is held between two heat exchangers enclosed in a vacuum chamber by means of ac and dc electric fields. The ac field keeps the particle centered on the vertical axis of the chamber, and the dc field balances the gravitational force and the thermophoretic force. Some measurements of the thermophoretic force are presented in this paper.
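    The dc balance described above suggests a simple way to extract the thermophoretic force from the change in balancing field; a sketch of the force balance only, with illustrative values:

    ```python
    def thermophoretic_force(charge, e_with_gradient, e_isothermal):
        """dc balance on a levitated particle: with no temperature gradient,
        q*E0 = m*g; with the gradient applied, q*E1 = m*g + F_th.
        Hence F_th = q * (E1 - E0). Inputs here are illustrative values."""
        return charge * (e_with_gradient - e_isothermal)
    ```

    Measuring the balancing field with the heat exchangers on and off therefore isolates the phoretic contribution without needing the particle mass directly.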

  4. The Ocean Colour Climate Change Initiative: I. A Methodology for Assessing Atmospheric Correction Processors Based on In-Situ Measurements

    NASA Technical Reports Server (NTRS)

    Muller, Dagmar; Krasemann, Hajo; Brewin, Robert J. W.; Deschamps, Pierre-Yves; Doerffer, Roland; Fomferra, Norman; Franz, Bryan A.; Grant, Mike G.; Groom, Steve B.; Melin, Frederic; Platt, Trevor; Regner, Peter; Sathyendranath, Shubha; Steinmetz, Francois; Swinton, John

    2015-01-01

    The Ocean Colour Climate Change Initiative intends to provide a long-term time series of ocean colour data and investigate the detectable climate impact. A reliable and stable atmospheric correction procedure is the basis for ocean colour products of the necessary high quality. In order to guarantee an objective selection from a set of four atmospheric correction processors, the common validation strategy of comparisons between in-situ and satellite derived water leaving reflectance spectra, is extended by a ranking system. In principle, the statistical parameters such as root mean square error, bias, etc. and measures of goodness of fit, are transformed into relative scores, which evaluate the relationship of quality dependent on the algorithms under study. The sensitivity of these scores to the selected database has been assessed by a bootstrapping exercise, which allows identification of the uncertainty in the scoring results. Although the presented methodology is intended to be used in an algorithm selection process, this paper focusses on the scope of the methodology rather than the properties of the individual processors.
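    The transformation of statistical parameters into relative scores, and the bootstrap used to assess their sensitivity to the match-up database, can be sketched as follows (the actual ranking combines several metrics; this shows one metric only, with hypothetical processor names):

    ```python
    import random
    import statistics

    def relative_scores(metrics):
        """Map each processor's error metric (lower is better) to a relative
        score in [0, 1]: 1 for the best processor, 0 for the worst."""
        worst, best = max(metrics.values()), min(metrics.values())
        span = (worst - best) or 1.0
        return {name: (worst - m) / span for name, m in metrics.items()}

    def bootstrap_rmse(errors, n_boot=200, seed=0):
        """Bootstrap the RMSE of satellite-minus-in-situ residuals to gauge
        how sensitive a score is to the selected match-up database."""
        rng = random.Random(seed)
        n = len(errors)
        boots = []
        for _ in range(n_boot):
            sample = [errors[rng.randrange(n)] for _ in range(n)]
            boots.append((sum(e * e for e in sample) / n) ** 0.5)
        return statistics.mean(boots), statistics.stdev(boots)
    ```

    A large bootstrap spread relative to the gap between processors signals that the ranking is not robust to the choice of in-situ database.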

  5. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2014-01-01

Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-inch chord, 2-D straight wing with NACA 23012 airfoil section. For six ice accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-inch chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For four of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3% with corresponding differences in stall angle of approximately one degree or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several of the ice

  6. Development of a cognitive bias methodology for measuring low mood in chimpanzees

    PubMed Central

    Nettle, Daniel

    2015-01-01

    There is an ethical and scientific need for objective, well-validated measures of low mood in captive chimpanzees. We describe the development of a novel cognitive task designed to measure ‘pessimistic’ bias in judgments of expectation of reward, a cognitive marker of low mood previously validated in a wide range of species, and report training and test data from three common chimpanzees (Pan troglodytes). The chimpanzees were trained on an arbitrary visual discrimination in which lifting a pale grey paper cone was associated with reinforcement with a peanut, whereas lifting a dark grey cone was associated with no reward. The discrimination was trained by sequentially presenting the two cone types until significant differences in latency to touch the cone types emerged, and was confirmed by simultaneously presenting both cone types in choice trials. Subjects were subsequently tested on their latency to touch unrewarded cones of three intermediate shades of grey not previously seen. Pessimism was indicated by the similarity between the latency to touch intermediate cones and the latency to touch the trained, unreinforced, dark grey cones. Three subjects completed training and testing, two adult males and one adult female. All subjects learnt the discrimination (107–240 trials), and retained it during five sessions of testing. There was no evidence that latencies to lift intermediate cones increased over testing, as would have occurred if subjects learnt that these were never rewarded, suggesting that the task could be used for repeated testing of individual animals. There was a significant difference between subjects in their relative latencies to touch intermediate cones (pessimism index) that emerged following the second test session, and was not changed by the addition of further data. The most dominant male subject was least pessimistic, and the female most pessimistic. We argue that the task has the potential to be used to assess longitudinal changes in

  7. Validation of 3-D Ice Accretion Measurement Methodology for Experimental Aerodynamic Simulation

    NASA Technical Reports Server (NTRS)

    Broeren, Andy P.; Addy, Harold E., Jr.; Lee, Sam; Monastero, Marianne C.

    2015-01-01

Determining the adverse aerodynamic effects due to ice accretion often relies on dry-air wind-tunnel testing of artificial, or simulated, ice shapes. Recent developments in ice-accretion documentation methods have yielded a laser-scanning capability that can measure highly three-dimensional (3-D) features of ice accreted in icing wind tunnels. The objective of this paper was to evaluate the aerodynamic accuracy of ice-accretion simulations generated from laser-scan data. Ice-accretion tests were conducted in the NASA Icing Research Tunnel using an 18-in. chord, two-dimensional (2-D) straight wing with NACA 23012 airfoil section. For six ice-accretion cases, a 3-D laser scan was performed to document the ice geometry prior to the molding process. Aerodynamic performance testing was conducted at the University of Illinois low-speed wind tunnel at a Reynolds number of 1.8 × 10⁶ and a Mach number of 0.18 with an 18-in. chord NACA 23012 airfoil model that was designed to accommodate the artificial ice shapes. The ice-accretion molds were used to fabricate one set of artificial ice shapes from polyurethane castings. The laser-scan data were used to fabricate another set of artificial ice shapes using rapid prototype manufacturing such as stereolithography. The iced-airfoil results with both sets of artificial ice shapes were compared to evaluate the aerodynamic simulation accuracy of the laser-scan data. For five of the six ice-accretion cases, there was excellent agreement in the iced-airfoil aerodynamic performance between the casting and laser-scan based simulations. For example, typical differences in iced-airfoil maximum lift coefficient were less than 3 percent with corresponding differences in stall angle of approximately 1 deg or less. The aerodynamic simulation accuracy reported in this paper has demonstrated the combined accuracy of the laser-scan and rapid-prototype manufacturing approach to simulating ice accretion for a NACA 23012 airfoil. For several

  8. Antioxidant Activity of Selected Thyme (Thymus L.) Species and Study of the Equivalence of Different Measuring Methodologies.

    PubMed

    Orłowska, Marta; Kowalska, Teresa; Sajewicz, Mieczysław; Pytlakowska, Katarzyna; Bartoszek, Mariola; Polak, Justyna; Waksmundzka-Hajnos, Monika

    2015-01-01

This study presents the results of a comparative evaluation of the antioxidant activity of the phenolic fraction exhaustively extracted with aqueous methanol from 18 different thyme (Thymus L.) specimens and species. The evaluation used the same free radical source (the DPPH• radical), three different free radical scavenging models (gallic acid, ascorbic acid, and Trolox), and three different measuring techniques (the dot blot test, UV-Vis spectrophotometry, and electron paramagnetic resonance spectroscopy, EPR). Comparing the equivalence of these three measuring techniques (performed using hierarchical clustering with Euclidean distance as the similarity measure and Ward's linkage) is particularly important because different laboratories use different antioxidant activity measuring techniques, which makes interlaboratory comparison difficult. The results confirm a semiquantitative equivalence among the three compared methodologies, and a simple, cost-effective dot blot test is proposed that uses the DPPH• radical and differentiates the antioxidant activity of herbal matter comparably with UV-Vis spectrophotometry and EPR.
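    The clustering used to compare techniques can be sketched in pure Python with Ward's linkage on Euclidean distances (real analyses would typically use a statistics package such as SciPy; this toy version recomputes centroids at each merge):

    ```python
    def euclid2(a, b):
        """Squared Euclidean distance between two points."""
        return sum((x - y) ** 2 for x, y in zip(a, b))

    def ward_clustering(points, k):
        """Agglomerative clustering: repeatedly merge the pair of clusters
        whose merger gives the smallest increase in within-cluster variance
        (Ward's criterion), until k clusters remain."""
        clusters = [[p] for p in points]

        def centroid(c):
            return tuple(sum(xs) / len(c) for xs in zip(*c))

        while len(clusters) > k:
            best = None
            for i in range(len(clusters)):
                for j in range(i + 1, len(clusters)):
                    ni, nj = len(clusters[i]), len(clusters[j])
                    cost = ni * nj / (ni + nj) * euclid2(centroid(clusters[i]),
                                                         centroid(clusters[j]))
                    if best is None or cost < best[0]:
                        best = (cost, i, j)
            _, i, j = best
            clusters[i] += clusters[j]
            del clusters[j]
        return clusters
    ```

    Feeding in one point per specimen, with coordinates given by the activity values from each measuring technique, groups specimens that the techniques rank similarly.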

  9. Importance of methodological standardization for the ektacytometric measures of red blood cell deformability in sickle cell anemia.

    PubMed

    Renoux, Céline; Parrow, Nermi; Faes, Camille; Joly, Philippe; Hardeman, Max; Tisdale, John; Levine, Mark; Garnier, Nathalie; Bertrand, Yves; Kebaili, Kamila; Cuzzubbo, Daniela; Cannas, Giovanna; Martin, Cyril; Connes, Philippe

    2016-01-01

    Red blood cell (RBC) deformability is severely decreased in patients with sickle cell anemia (SCA), which plays a role in the pathophysiology of the disease. However, investigation of RBC deformability from SCA patients demands careful methodological considerations. We assessed RBC deformability by ektacytometry (LORRCA MaxSis, Mechatronics, The Netherlands) in 6 healthy individuals and 49 SCA patients and tested the effects of different heights of the RBC diffraction patterns, obtained by altering the camera gain of the LORRCA, on the result of RBC deformability measurements, expressed as Elongation Index (EI). Results indicate that the pattern of RBCs from control subjects adopts an elliptical shape under shear stress, whereas the pattern of RBCs from individuals with SCA adopts a diamond shape arising from the superposition of elliptical and circular patterns. The latter represent rigid RBCs. While the EI measures did not change with the variations of the RBC diffraction pattern heights in the control subjects, we observed a decrease of EI when the RBC diffraction pattern height is increased in the SCA group. The differences in SCA EI values measured at 5 Pa between the different diffraction pattern heights correlated with the percent of hemoglobin S and the percent of sickled RBC observed by microscopy. Our study confirms that the camera gain or aperture of the ektacytometer should be used to standardize the size of the RBC diffraction pattern height when measuring RBC deformability in sickle cell patients and underscores the potential clinical utility of this technique.
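    The elongation index reported by the ektacytometer is derived from the axes of the laser-diffraction pattern; a minimal sketch of the standard definition:

    ```python
    def elongation_index(major_axis, minor_axis):
        """Elongation index from the diffraction-pattern axes:
        EI = (A - B) / (A + B), where A and B are the major and minor axes
        of the pattern at a given shear stress."""
        return (major_axis - minor_axis) / (major_axis + minor_axis)
    ```

    Rigid cells that stay circular under shear pull the fitted pattern toward EI = 0, which is why a superimposed circular component from sickled cells distorts the measured EI unless the pattern height is standardised.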

  10. Biochemical quantification of sympathetic nervous activity in humans using radiotracer methodology: fallibility of plasma noradrenaline measurements

    SciTech Connect

    Esler, M.; Leonard, P.; O'Dea, K.; Jackman, G.; Jennings, G.; Korner, P.

    1982-01-01

We have developed radiotracer techniques for studying noradrenaline kinetics, to assess better sympathetic nervous system function in humans. Tritiated l-noradrenaline was infused intravenously (0.35 microCi/m²/min) to plateau plasma concentration. Noradrenaline plasma clearance was calculated from plasma tritiated noradrenaline concentration at steady state, and the rate of spillover of noradrenaline to plasma derived from plasma noradrenaline specific radioactivity. Mean noradrenaline spillover at rest in 34 normal subjects was 0.33 micrograms/m²/min (range 0.17-0.61 micrograms/m²/min). Predictably, noradrenaline spillover was reduced in patients with subnormal sympathetic nervous system activity, 0.16 +/- 0.09 micrograms/m²/min in eight patients with idiopathic peripheral autonomic insufficiency, and 0.11 +/- 0.07 micrograms/m²/min (mean +/- SD) in six patients with essential hypertension treated with clonidine (0.45 mg daily). Noradrenaline plasma clearance in normal subjects was 1.32 +/- 0.28 L/m²/min. Clearance fell with age, causing the previously described rise in plasma noradrenaline concentration with aging. Unexpected effects of drugs were encountered, for example chronic beta-adrenergic blockade in patients with essential hypertension reduced noradrenaline clearance. Plasma noradrenaline concentration measurements were not in agreement with noradrenaline release rate values, and do not reliably indicate sympathetic nervous system activity, in instances such as these where noradrenaline clearance is abnormal.
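    The steady-state tracer arithmetic described above can be sketched directly (units and the values in the test are illustrative, not patient data):

    ```python
    def na_clearance(tracer_infusion_rate, plateau_tracer_conc):
        """Plasma clearance = tritiated-NA infusion rate / plateau plasma
        tracer concentration (steady-state tracer dilution)."""
        return tracer_infusion_rate / plateau_tracer_conc

    def na_spillover(plasma_na_conc, clearance):
        """Spillover rate to plasma = endogenous plasma NA concentration x
        clearance (equivalently, infusion rate / plasma NA specific activity)."""
        return plasma_na_conc * clearance
    ```

    The sketch makes the paper's central point visible: plasma concentration alone conflates release and clearance, so when clearance is abnormal (age, beta-blockade), concentration misrepresents spillover.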

  11. A methodology to measure cervical vertebral bone maturation in a sample from low-income children.

    PubMed

    Aguiar, Luciana Barreto Vieira; Caldas, Maria de Paula; Haiter Neto, Francisco; Ambrosano, Glaucia Maria Bovi

    2013-01-01

    This study evaluated the applicability of the regression method for determining vertebral age developed by Caldas et al. (2007) by testing this method in children from low-income families of the rural zone. The sample comprised cephalometric and hand-wrist radiographs of 76 boys and 64 girls aged 7.0 to 14.9 years living in a medium-sized city in the desert region of the northeastern region of Brazil, with an HDI of 0.678. C3 and C4 vertebrae were traced and measured on cephalometric radiographs to estimate the bone age. The average age, average hand-wrist age and average error estimated for girls and boys were, respectively, 10.62 and 10.44 years, 11.28 and 10.57 years, and 1.42 and 1.18 years. Based on these results, the formula proposed by Caldas et al. (2007) was not applicable to the studied population, and new multiple regression models were developed to obtain the children's vertebral bone age accurately.
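    Developing a population-specific regression model, as the authors did, amounts to regressing reference (hand-wrist) age on vertebral measurements; a minimal single-predictor least-squares sketch (the published models use multiple vertebral measurements from C3 and C4):

    ```python
    def fit_linear(xs, ys):
        """Ordinary least squares for y = a + b * x; returns (a, b)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
        return my - b * mx, b
    ```

    Refitting the coefficients on the local sample, rather than reusing coefficients estimated on a different population, is exactly the recalibration the study found necessary.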

  12. Binding interaction between sorafenib and calf thymus DNA: Spectroscopic methodology, viscosity measurement and molecular docking

    NASA Astrophysics Data System (ADS)

    Shi, Jie-Hua; Chen, Jun; Wang, Jing; Zhu, Ying-Yao

    2015-02-01

The binding interaction of sorafenib with calf thymus DNA (ct-DNA) was studied using UV-vis absorption spectroscopy, fluorescence emission spectroscopy, circular dichroism (CD), viscosity measurement and molecular docking methods. The experimental results revealed an obvious binding interaction between sorafenib and ct-DNA. The binding constant (Kb) of sorafenib with ct-DNA was 5.6 × 10³ M⁻¹ at 298 K. The enthalpy and entropy changes (ΔH⁰ and ΔS⁰) in the binding process of sorafenib with ct-DNA were −27.66 kJ mol⁻¹ and −21.02 J mol⁻¹ K⁻¹, respectively, indicating that the main binding interaction forces were van der Waals forces and hydrogen bonding. The docking results suggested that sorafenib preferred to bind in the minor groove of A-T-rich DNA and that the binding site of sorafenib was 4 base pairs long. A conformational change of sorafenib in the sorafenib-DNA complex was clearly observed and was closely related to the structure of the DNA, implying that the flexibility of the sorafenib molecule played an important role in the formation of the stable sorafenib-ct-DNA complex.
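    The reported thermodynamic quantities can be cross-checked for internal consistency via the standard relations ΔG⁰ = −RT ln Kb and ΔG⁰ = ΔH⁰ − TΔS⁰:

    ```python
    import math

    R = 8.314  # gas constant, J/(mol*K)

    def gibbs_from_kb(kb, t):
        """Standard free energy from the binding constant: dG = -RT ln Kb."""
        return -R * t * math.log(kb)

    def gibbs_from_hs(dh, ds, t):
        """Standard free energy from enthalpy and entropy: dG = dH - T dS."""
        return dh - t * ds

    dg_kb = gibbs_from_kb(5.6e3, 298.0)             # from Kb = 5.6e3 M^-1
    dg_hs = gibbs_from_hs(-27.66e3, -21.02, 298.0)  # from dH, dS in J-based units
    ```

    Both routes give roughly −21.4 kJ/mol at 298 K, so the paper's Kb, ΔH⁰ and ΔS⁰ are mutually consistent.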

  13. Arduino-based control system for measuring ammonia in air using conditionally-deployed diffusive samplers

    NASA Astrophysics Data System (ADS)

    Ham, J. M.; Williams, C.; Shonkwiler, K. B.

    2012-12-01

    Arduino microcontrollers, wireless modules, and other low-cost hardware were used to develop a new type of air sampler for monitoring ammonia at strong areal sources like dairies, cattle feedlots, and waste treatment facilities. Ammonia was sampled at multiple locations on the periphery of an operation using Radiello diffusive passive samplers (Cod. RAD168 and RAD1201, Sigma-Aldrich). However, the samplers were not continuously exposed to the air. Instead, each sampling station included two diffusive samplers housed in specialized tubes that sealed the cartridges from the atmosphere. If a user-defined set of wind and weather conditions was met, the Radiellos were deployed into the air using a micro linear actuator. Each station was solar-powered and controlled by Arduinos that were linked to a central weather station using XBee wireless modules (Digi International Inc.). The Arduinos also measured the total time of exposure, using Hall-effect sensors to verify the position of the cartridge (i.e., deployed or retracted). The decision to expose or retract the samplers was made every five minutes based on wind direction, wind speed, and time of day. Typically, the diffusive samplers were replaced with fresh cartridges every two weeks, and the used samplers were analyzed in the laboratory using ion chromatography. Initial studies were conducted at a commercial dairy in northern Colorado. Ammonia emissions along the Front Range of Colorado can be transported into the mountains, where atmospheric deposition of nitrogen can impact alpine ecosystems; low-cost air quality monitoring equipment that can be widely deployed in the region is therefore needed. Initial work at the dairy showed that ammonia concentrations ranged from 600 to 1200 ppb during the summer; the highest concentrations were downwind of a large anaerobic lagoon. Time-averaged ammonia concentrations were also used to approximate emissions using inverse dispersion models. This methodology provides a
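    The five-minute deploy/retract decision described above can be sketched as a simple predicate. The wind-sector, wind-speed, and time-of-day thresholds below are hypothetical placeholders, not the values used at the Colorado site:

```python
def should_deploy(wind_dir_deg, wind_speed_ms, hour,
                  dir_center=180.0, dir_halfwidth=45.0,
                  min_speed=1.0, start_hour=6, end_hour=20):
    """Return True if the passive sampler should be exposed.

    Mirrors the decision inputs named in the abstract (wind direction,
    wind speed, time of day); all threshold values are hypothetical.
    """
    # Angular difference with wraparound, so a sector spanning 0/360 works
    in_sector = abs((wind_dir_deg - dir_center + 180) % 360 - 180) <= dir_halfwidth
    windy_enough = wind_speed_ms >= min_speed
    daytime = start_hour <= hour < end_hour
    return in_sector and windy_enough and daytime
```

    On the real stations this predicate would be re-evaluated every five minutes and its result accumulated into the total exposure time verified by the Hall-effect sensors.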

  14. Honeybee tracking with microchips: a new methodology to measure the effects of pesticides.

    PubMed

    Decourtye, Axel; Devillers, James; Aupinel, Pierrick; Brun, François; Bagnis, Camille; Fourrier, Julie; Gauthier, Monique

    2011-03-01

    Losses of foraging bees are sometimes attributed to an altered flight pattern between a melliferous plant treated with an insecticide and the hive. Only a limited number of studies have investigated the impact of pesticides on the homing flight, owing to the difficulty of measuring the flight time between the food source and the hive. Monitoring the flights of foraging bees requires their individual identification. Most monitoring techniques are limited by the number of bees that can be monitored simultaneously and by the time span during which observations can be made. However, techniques for automatic tracking and identification of individuals have the potential to revolutionize the study of the ecotoxicological effects of xenobiotics on bee behavior. Radio Frequency Identification (RFID) offers numerous advantages, such as an unlimited number of codes, a large number of simultaneous recordings, and quick reading, even through materials (e.g., wood). The aim of this study was to show how an RFID device can be used to study the effects of pesticides on both the behavioral traits and the lifespan of bees. In this context, we developed a method, conducted under tunnels, to automatically record the movements of foragers individualized with RFID tags and to detect alterations of the flight pattern between an artificial feeder and the hive. Fipronil was selected as the test substance because of the lack of information on the effects of this insecticide on the foraging behavior of free-flying bees. We showed that an oral treatment of 0.3 ng of fipronil per bee (LD50/20) reduced the number of foraging trips. The strengths of our approach are briefly discussed.

  15. Measuring the habitat as an indicator of socioeconomic position: methodology and its association with hypertension

    PubMed Central

    Galobardes, B; Morabia, A

    2003-01-01

    Study objectives: (1) to develop an indicator of socioeconomic position based on the social standing of the habitat (SSH), that is, the residential building, its immediate surroundings, and local neighbourhood; (2) to assess the relation of SSH to two usual markers of socioeconomic position (education and occupation) and a known, socially determined health outcome (hypertension). Design: Population survey measuring SSH, detailed educational and occupational histories, and blood pressure. The SSH is a standardised assessment of the external and internal aspects of someone's building (or house), and of the characteristics of its immediate surroundings and local neighbourhood. Setting: A sample of participants in the Bus Santé survey between 1993 and 1998, in Geneva, Switzerland. Participants: 588 men and women, aged 35 to 74. Main results: The SSH index was highly reproducible (κ=0.8). Concordance of SSH with education or occupation was good for people of either high or low socioeconomic position, but not for those with medium education and/or occupation. There was a higher prevalence of hypertension in the lowest compared with the highest groups, defined on the basis of education or occupation, but the SSH was the only indicator that showed a higher prevalence of hypertension among people in the middle of the social spectrum. Conclusions: People of medium education or occupation are heterogeneous with respect to their habitat. Those living in habitats of medium social standing may be most affected by hypertension, but this association could not be revealed on the basis of education and occupation alone. The habitat seems to capture different aspects of socioeconomic position compared with the usual indicators of social class. PMID:12646538

  16. Investigation of Radiation Protection Methodologies for Radiation Therapy Shielding Using Monte Carlo Simulation and Measurement

    NASA Astrophysics Data System (ADS)

    Tanny, Sean

    The advent of high-energy linear accelerators for dedicated medical use in the 1950s by Henry Kaplan and the Stanford University physics department began a revolution in radiation oncology. Today, linear accelerators are the standard of care for modern radiation therapy and can generate high-energy beams that produce tens of Gy per minute at isocenter. This creates a need for a large amount of shielding material to properly protect members of the public and hospital staff. Standardized vault designs and guidance on the shielding properties of various materials are provided by the National Council on Radiation Protection and Measurements (NCRP) Report 151. However, physicists are seeking ways to minimize the footprint and volume of shielding material needed, which leads to the use of non-standard vault configurations and less-studied materials, such as high-density concrete. The University of Toledo Dana Cancer Center has utilized both of these methods to minimize the cost and spatial footprint of the requisite radiation shielding. To ensure a safe work environment, computer simulations were performed to verify the attenuation properties and shielding workloads produced in a variety of situations where standard recommendations and guidance documents were insufficient. This project studies two areas of concern that are not addressed by NCRP Report 151: the radiation shielding workload for a vault door with a non-standard design, and the attenuation properties of high-density concrete for both photon and neutron radiation. Simulations were performed using Monte Carlo N-Particle 5 (MCNP5), a Monte Carlo code produced by Los Alamos National Laboratory (LANL). Measurements were performed using a shielding test port designed into the maze of the Varian Edge treatment vault.

  17. Optimization and validation of high-performance chromatographic condition for simultaneous determination of adapalene and benzoyl peroxide by response surface methodology.

    PubMed

    Chen, Yi-Cheng; Tsai, Pi-Ju; Huang, Yaw-Bin; Wu, Pao-Chu

    2015-01-01

    The aim of this study was to develop a simple and reliable high-performance liquid chromatography (HPLC) method for simultaneous analysis of adapalene and benzoyl peroxide in a pharmaceutical formulation by response surface methodology (RSM). An optimized mobile phase composed of acetonitrile, tetrahydrofuran and water containing 0.1% acetic acid, at a ratio of 25:50:25 by volume, was successfully predicted using RSM, and an isocratic separation was achieved under this condition. Furthermore, the analytical method was validated in terms of specificity, linearity, accuracy and precision over a range of 80% to 120% of the expected concentration. Finally, the method was successfully applied to the analysis of a commercial product.

  18. Measurement of heat stress conditions at cow level and comparison to climate conditions at stationary locations inside a dairy barn.

    PubMed

    Schüller, Laura K; Heuwieser, Wolfgang

    2016-08-01

    The objectives of this study were to examine heat stress conditions at cow level and to investigate their relationship to the climate conditions at 5 different stationary locations inside a dairy barn. In addition, we compared the climate conditions at cow level between primiparous and multiparous cows for a period of 1 week after regrouping. The temperature-humidity index (THI) differed significantly between all stationary loggers. The lowest THI was measured at the window logger in the experimental stall and the highest THI was measured at the central logger in the experimental stall. The THI at the mobile cow loggers was 2.33 THI points higher than at the stationary loggers. Furthermore, the mean daily THI was higher at the mobile cow loggers than at the stationary loggers on all experimental days. The THI in the experimental pen was 0.44 THI points lower when the experimental cow group was located inside the milking parlour. The THI measured at the mobile cow loggers was 1.63 THI points higher when the experimental cow group was located inside the milking parlour. However, there was no significant difference in any climate variable between primiparous and multiparous cows. These results indicate that there is a wide range of climate conditions inside a dairy barn, and that areas far from a fresh air supply in particular carry an increased risk of heat stress conditions. Furthermore, heat stress conditions are even more pronounced at cow level: cows not only influence their climatic environment but also generate microclimates at different locations inside the barn. Climate conditions should therefore be measured at cow level to evaluate the heat stress conditions to which dairy cows are actually exposed.
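    The abstract does not state which THI equation the loggers used; a widely used NRC-type formulation, shown here purely as an illustration, combines dry-bulb temperature (°C) and relative humidity (%):

```python
def thi(temp_c, rh_percent):
    """Temperature-humidity index, NRC-type formulation (an assumption;
    the study may have used a different THI equation)."""
    t_f = 1.8 * temp_c + 32  # convert to Fahrenheit once
    return t_f - (0.55 - 0.0055 * rh_percent) * (t_f - 58)

# Example: 25 °C at 50% relative humidity
index = thi(25.0, 50.0)   # ≈ 71.8
```

    Note that 1.8·T − 26 in some published forms equals (1.8·T + 32) − 58, i.e. the same formula written against the Fahrenheit temperature; higher humidity raises the index at a given temperature.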

  19. An Assessment of Conditioning Parameter Selection Efficiency on Medium-Scale Erosion Susceptibility Mapping by GIS and Remote Sensing Methodologies: An Example from Northwest Turkey

    NASA Astrophysics Data System (ADS)

    Akgün, Aykut; Turk, Necdet

    2013-04-01

    To produce a medium-scale erosion susceptibility map, several conditioning parameters can be considered as input parameters in the constructed model. Selecting appropriate conditioning parameters is therefore an important task in providing a comprehensive erosion susceptibility map. In this context, this study examines the efficiency of conditioning parameter selection in a case study of the Ayvalık district (Northwest Turkey), where a serious surface erosion problem is present. To map erosion susceptibility in the area, two methodologies were considered, namely logistic regression (LR) and the analytical hierarchy process (AHP). Weathering of rock units, slope gradient, stream power index (SPI), structural lineament density, drainage density and land cover were taken as conditioning parameters. Initially, erosion susceptibility maps considering all conditioning parameters were produced by the LR and AHP methodologies. Then, six different parameter combinations were created, and six further erosion susceptibility maps were produced for each of the two modelling methods. After obtaining twelve different erosion susceptibility maps, performance analyses were carried out for all produced maps by the area under the curve (AUC) procedure. The maps were also compared with each other: cross-correlations were computed, and both similarities and dissimilarities between the maps were determined by Kappa Index (KIA) assessment. Finally, the erosion susceptibility maps were compared with the locations of landslide occurrences, another natural hazard in the area, to investigate the relationship between erosion susceptibility and landslide occurrence. From the performance analysis, the most successful estimations by LR and AHP were identified, and the results are discussed in terms of cause-and-effect relationships. Keywords: Erosion, AHP, Logistic regression, Turkey
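    The AUC performance measure used above is typically computed by the trapezoid rule over a cumulative success-rate curve. A minimal sketch; the curve below is hypothetical, not data from the Ayvalık study:

```python
def auc_trapezoid(x, y):
    """Area under a cumulative success-rate curve by the trapezoid rule.
    x, y are fractions in [0, 1], sorted by increasing x."""
    area = 0.0
    for i in range(1, len(x)):
        area += (x[i] - x[i - 1]) * (y[i] + y[i - 1]) / 2.0
    return area

# Hypothetical success-rate curve: fraction of area ranked most susceptible (x)
# vs fraction of observed erosion cells captured at that rank (y)
x = [0.0, 0.1, 0.2, 0.4, 0.6, 0.8, 1.0]
y = [0.0, 0.35, 0.55, 0.75, 0.88, 0.96, 1.0]
score = auc_trapezoid(x, y)   # ≈ 0.74; a random ranking would give 0.5
```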

  20. Evaluation of validity and reliability of a methodology for measuring human postural attitude and its relation to temporomandibular joint disorders

    PubMed Central

    Fernández, Ramón Fuentes; Carter, Pablo; Muñoz, Sergio; Silva, Héctor; Venegas, Gonzalo Hernán Oporto; Cantin, Mario; Ottone, Nicolás Ernesto

    2016-01-01

    INTRODUCTION Temporomandibular joint disorders (TMJDs) are caused by several factors, such as anatomical, neuromuscular and psychological alterations. A relationship has been established between TMJDs and postural alterations, a type of anatomical alteration. An anterior position of the head requires hyperactivity of the posterior neck and shoulder muscles to prevent the head from falling forward. This compensatory muscular function may cause fatigue, discomfort and trigger point activation. To our knowledge, a method for assessing human postural attitude in more than one plane has not been reported. Thus, the aim of this study was to design a methodology to measure external human postural attitude in the frontal and sagittal planes, with proper validity and reliability analyses. METHODS The postures of 78 subjects (36 men, 42 women; aged 18–24 years) were evaluated. The postural attitudes of the subjects were measured in the frontal and sagittal planes using an acromiopelvimeter, grid panel and Fox plane. RESULTS The method we designed for measuring postural attitudes had adequate reliability and validity, both qualitatively and quantitatively, based on Cohen's kappa coefficient (> 0.87) and Pearson's correlation coefficient (r = 0.824, > 80%). CONCLUSION This method exhibits adequate metrical properties and can therefore be used in further research on the association of human body posture with skeletal types and TMJDs. PMID:26768173
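    Cohen's kappa, the reliability criterion reported above, corrects observed agreement for agreement expected by chance: κ = (p_o − p_e)/(1 − p_e). A minimal implementation for two raters scoring the same items:

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same categorical items."""
    n = len(rater_a)
    categories = set(rater_a) | set(rater_b)
    # Observed agreement: fraction of items both raters scored identically
    p_o = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal category frequencies
    p_e = sum((rater_a.count(c) / n) * (rater_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)
```

    A kappa above 0.87, as in this study, indicates near-perfect agreement on common benchmark scales.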

  1. Measurements of elastic moduli of pharmaceutical compacts: a new methodology using double compaction on a compaction simulator.

    PubMed

    Mazel, Vincent; Busignies, Virginie; Diarra, Harona; Tchoreloff, Pierre

    2012-06-01

    The elastic properties of pharmaceutical powders play an important role during the compaction process. The elastic behavior can be represented by Young's modulus (E) and Poisson's ratio (ν). However, during compaction the density of the powder bed changes, so the moduli must be determined as a function of porosity. This study proposes a new methodology to determine E and ν as a function of porosity using double compaction in an instrumented compaction simulator. A precompression is used to form the compact, and the elastic properties are measured during the beginning of the main compaction. By measuring the axial and radial pressures and the powder bed thickness, E and ν can be determined as a function of porosity. Two excipients were studied, microcrystalline cellulose (MCC) and anhydrous calcium phosphate (aCP). The values of E measured are comparable to those obtained using the classical three-point bending test. Poisson's ratio was found to be close to 0.24 for aCP, with only small variations with porosity, and to increase with decreasing porosity for MCC (0.23-0.38). The classical approximation of a value of 0.3 for ν of pharmaceutical powders should therefore be treated with caution.
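    If the powder bed is treated as a linear elastic body under confined uniaxial compression (a rigid die), the radial-to-axial stress ratio satisfies σr/σa = ν/(1 − ν), which can be inverted for ν. This is the textbook elastic relation, shown as a sketch rather than the authors' exact data treatment:

```python
def poisson_from_stresses(sigma_axial, sigma_radial):
    """Invert the confined-compression relation sigma_r/sigma_a = nu/(1 - nu):
    nu = sigma_r / (sigma_a + sigma_r)."""
    return sigma_radial / (sigma_axial + sigma_radial)

# Illustrative (hypothetical) pressures measured during the main compaction
nu = poisson_from_stresses(100.0, 30.0)   # 30/130 ≈ 0.231
```

    Repeating this at several porosities yields the ν(porosity) curves reported for MCC and aCP; E then follows from the stress-strain response derived from the powder bed thickness.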

  2. Methodology for evaluating lateral boundary conditions in the regional chemical transport model MATCH (v5.5.0) using combined satellite and ground-based observations

    NASA Astrophysics Data System (ADS)

    Andersson, E.; Kahnert, M.; Devasthale, A.

    2015-11-01

    Hemispheric transport of air pollutants can have a significant impact on regional air quality, as well as on the effect of air pollutants on regional climate. An accurate representation of hemispheric transport in regional chemical transport models (CTMs) depends on the specification of the lateral boundary conditions (LBCs). This study focuses on the methodology for evaluating LBCs of two moderately long-lived trace gases, carbon monoxide (CO) and ozone (O3), for the European model domain and over a 7-year period, 2006-2012. The method is based on combining the use of satellite observations at the lateral boundary with the use of both satellite and in situ ground observations within the model domain. The LBCs are generated by the global European Monitoring and Evaluation Programme Meteorological Synthesizing Centre - West (EMEP MSC-W) model; they are evaluated at the lateral boundaries by comparison with satellite observations of the Terra-MOPITT (Measurements Of Pollution In The Troposphere) sensor (CO) and the Aura-OMI (Ozone Monitoring Instrument) sensor (O3). The LBCs from the global model lie well within the satellite uncertainties for both CO and O3. The biases increase below 700 hPa for both species. However, the satellite retrievals below this height are strongly influenced by the a priori data; hence, they are less reliable than at, e.g., 500 hPa. CO is, on average, underestimated by the global model, while O3 tends to be overestimated during winter, and underestimated during summer. A regional CTM is run with (a) the validated monthly climatological LBCs from the global model; (b) dynamical LBCs from the global model; and (c) constant LBCs based on in situ ground observations near the domain boundary. The results are validated against independent satellite retrievals from the Aqua-AIRS (Atmospheric InfraRed Sounder) sensor at 500 hPa, and against in situ ground observations from the Global Atmospheric Watch (GAW) network. It is found that (i) the use of

  3. Methodological considerations for the use of heart rate variability as a measure of pain reactivity in vulnerable infants.

    PubMed

    Oberlander, Tim; Saul, J Philip

    2002-09-01

    Measures of HR and HRV offer multiple indices of reactivity to painful events. These measures are particularly helpful in preterm and ill infants where distress signals are often nonspecific and ambiguous. HR is easy to acquire, and a variety of widely used techniques are available for processing it. In general, the neuroanatomic and neurophysiologic bases for pain perception are in place even in the most preterm infant and produce patterns of HR and HRV responses that are similar across multiple settings. Developmental and experiential factors related to preterm birth, however, may affect these HR responses. Furthermore, evaluation of ill infants in an NICU setting adds multiple contextual factors that potentially influence HR and HRV and alter their specificity as measures of pain. In some cases, it may appear that pain reactivity is reduced when, in fact, HR reactivity is only an expression of the biologic capacity to produce a response, not the presence of a response itself. The nature of the setting and the infant's health, developmental stage, and behavioral state all contribute to potentially altering HR responses to painful events in this setting. Thus, the methodology used and its application must be flexible. A variety of HRV analysis techniques may be needed to identify a variety of response patterns and mechanisms that influence pain reactivity. Furthermore, careful selection of HR epochs for stationarity, an understanding of the potential discordance between biologic and behavioral measures, the effects of medication, and an accounting for developmental differences that occur during a typical NICU course are all critical factors for investigators to be aware of. Understanding cardiovascular reactivity as a measure of response to painful events in vulnerable infants requires ongoing work.

  4. Using Geographic Information Systems to measure retail food environments: Discussion of methodological considerations and a proposed reporting checklist (Geo-FERN).

    PubMed

    Wilkins, Emma L; Morris, Michelle A; Radley, Duncan; Griffiths, Claire

    2017-03-01

    Geographic Information Systems (GIS) are widely used to measure retail food environments. However, the methods used are heterogeneous, limiting the collation and interpretation of evidence. This problem is amplified by unclear and incomplete reporting of methods. This discussion (i) identifies common dimensions of methodological diversity across GIS-based food environment research (data sources, data extraction methods, food outlet construct definitions, geocoding methods, and access metrics), (ii) reviews the impact of different methodological choices, and (iii) highlights areas where reporting is insufficient. On the basis of this discussion, the Geo-FERN reporting checklist is proposed to support methodological reporting and interpretation.

  5. Treatment of wastewater effluents from paper-recycling plants by coagulation process and optimization of treatment conditions with response surface methodology

    NASA Astrophysics Data System (ADS)

    Birjandi, Noushin; Younesi, Habibollah; Bahramifar, Nader

    2016-11-01

    In the present study, a coagulation process was used to treat paper-recycling wastewater with alum coupled with poly aluminum chloride (PACl) as coagulants. The effect of each of four factors, viz. the dosages of alum and PACl, pH and chemical oxygen demand (COD), on the treatment efficiency was investigated. The influence of these four parameters was described using response surface methodology under a central composite design. The efficiencies of reducing turbidity and COD, and the sludge volume index (SVI), were considered the responses. The optimum conditions for high treatment efficiency of paper-recycling wastewater under the experimental conditions were reached by numerical optimization of the coagulant doses and pH, at 1,550 mg/l alum, 1,314 mg/l PACl and pH 9.5, where reductions of 80.02% in COD and 83.23% in turbidity, and an SVI of 140 ml/g, were obtained.
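    In a single coded factor, a fitted response surface reduces to a quadratic y = b0 + b1·x + b2·x², whose stationary point sits at x* = −b1/(2·b2); the numerical optimization above finds the analogous stationary point of the full multi-factor model. A tiny sketch with hypothetical coefficients (not those fitted in the study):

```python
def quadratic_optimum(b1, b2):
    """Stationary point of a one-factor quadratic response
    y = b0 + b1*x + b2*x**2 (coded units); a maximum when b2 < 0."""
    return -b1 / (2.0 * b2)

# Hypothetical fitted coefficients for, say, COD removal vs coded alum dose
x_star = quadratic_optimum(4.2, -3.5)   # = 0.6 in coded units
```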

  6. Optimisation of immobilisation conditions for chick pea β-galactosidase (CpGAL) to alkylamine glass using response surface methodology and its applications in lactose hydrolysis.

    PubMed

    Kishore, Devesh; Kayastha, Arvind M

    2012-10-01

    Response surface methodology was advantageously used to optimally immobilise a β-galactosidase from chick pea onto alkylamine glass using Box-Behnken experimental design, resulting in an overall 91% immobilisation efficiency. Analysis of variance was performed to determine the adequacy and significance of the quadratic model. Immobilised enzyme showed a shift in the optimum pH; however, optimum temperature remained unaffected. Thermal denaturation kinetics demonstrated significant improvement in thermal stability of the enzyme after immobilisation. Galactose competitively inhibits the enzyme in both soluble and immobilised conditions. Lactose in milk whey was hydrolysed at comparatively higher rate than that of milk. Immobilised enzyme showed excellent reusability with retention of more than 82% enzymatic activity after 15 uses. The immobilised enzyme was found to be fairly stable in both dry and wet conditions for three months with retention of more than 80% residual activity.

  7. Cognitive status in the oldest old and centenarians: a condition crucial for quality of life methodologically difficult to assess.

    PubMed

    Arosio, Beatrice; Ostan, Rita; Mari, Daniela; Damanti, Sarah; Ronchetti, Francesco; Arcudi, Sara; Scurti, Maria; Franceschi, Claudio; Monti, Daniela

    2017-03-09

    Human life expectancy and the number of the oldest old are rapidly increasing worldwide. Advanced age is the main risk factor for dementia, which represents one of the major causes of disability/dependency among older people, with a strong impact on their families/caregivers. Centenarians have reached the extreme limits of human life, escaping or delaying the major age-related diseases. Thus, these extraordinary individuals embody the best model with which to answer the crucial question of whether cognitive decline and dementia are progressive and unavoidable occurrences of increasing age. Although a growing body of data underlines the importance of cognitive function for quality of life and survival in old age, studies on centenarians have paid more attention to their physical condition than to the assessment of their actual cognitive abilities. Accordingly, this work aims to summarize available data on the prevalence of dementia in centenarians and to critically address topics that can have a relevant impact on the cognitive assessment/status of the oldest old: (i) the lack of standardized tools for cognitive assessment; (ii) criteria and thresholds to establish the presence of dementia; (iii) the influence of birth cohort and education; (iv) the role of depression or a positive attitude towards life; and (v) gender differences.

  8. New Methodology for Evaluating Optimal Pricing for Primary Regulation of Deregulated Power Systems under Steady State Condition

    NASA Astrophysics Data System (ADS)

    Satyaramesh, P. V.; RadhaKrishna, C.

    2013-06-01

    A generalized pricing structure for the procurement of power under the frequency ancillary service is developed in this paper. It is a frequency-linked price model suitable for a deregulated market environment. The model takes into consideration the governor characteristics and frequency characteristics of generators as additional parameters in the load flow method. The main objective of the new approach proposed in this paper is to establish a bidding price structure for frequency regulation services in competitive ancillary electricity markets under steady-state conditions. A large body of literature is available on calculating frequency deviations with respect to load changes using dynamic simulation methods. In this paper, however, the model computes the frequency deviations for additional power requirements under steady state while considering the power system network topology. An attempt is also made to develop an optimal bidding price structure for frequency-regulated systems. It gives a signal to traders or bidders that the power demand can be assessed more accurately, much closer to real time, and helps participants bid more accurate quantities on the day-ahead market. Recent trends in the frequency-linked price models existing in Indian power systems, and the issues requiring attention, are also dealt with in this paper. Test calculations have been performed on a 30-bus system, and the paper also explains the adaptability of this model to a practical Indian power system. The results presented are analyzed and useful conclusions are drawn.

  9. A new, simple method for the production of meat-curing pigment under optimised conditions using response surface methodology.

    PubMed

    Soltanizadeh, Nafiseh; Kadivar, Mahdi

    2012-12-01

    The production of cured meat pigment using nitrite and ascorbate in acidic conditions was evaluated. HCl, ascorbate and nitrite concentrations were optimised at three levels using the response surface method (RSM). The effects of the process variables on the nitrosoheme yield, the wavelength of maximum absorbance (λ(max)), and the L*, a* and b* values were evaluated. The response surface equations indicate that the variables exerted a significant effect on all dependent factors. The optimum combination for the reaction was HCl = -0.8, ascorbate = 0.46 and nitrite = 1.00 (as coded values) for conversion of 1 mM hemin to nitrosoheme, by which a pigment yield of 100%, similar to the predicted value of 99.5%, was obtained. Likewise, the other parameters were not significantly different from the predicted values, as the λ(max), L*, a* and b* values were 558 nm, 47.03, 45.17 and 17.20, respectively. The structure of the pigment was identified using FTIR and ESI/MS.
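    Coded RSM levels such as HCl = -0.8 map back to actual concentrations via actual = center + coded × step, where the center and step come from the experimental design. A sketch with hypothetical center points and step sizes (the actual design ranges are not given in the abstract):

```python
def decode(coded, center, half_range):
    """Convert an RSM coded level back to an actual value:
    actual = center + coded * half_range (the design step)."""
    return center + coded * half_range

# Hypothetical design: nitrite centered at 2.0 mM with a 1.0 mM step,
# so the optimum coded level of 1.00 would map to 3.0 mM
nitrite_mm = decode(1.00, 2.0, 1.0)
```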

  10. Reactions of transplanted neurocentral synchondroses to different conditions of mechanical stress. A methodological study on the rat.

    PubMed Central

    Rönning, O; Kylämarkula, S

    1979-01-01

    In order to elucidate the reactions of neurocentral synchondroses to different forces, the first cervical vertebra of 10- or 25-day-old rats was transplanted into sex-matched litter mates. Some vertebrae were transplanted whole; in some, only the ventral part with its synchondroses was transplanted; and in others, the lumen was furnished with an expanding sponge or a spring. The transplantation was done subcutaneously and, in the case of the fragments, intracerebrally as well. The synchondroses of the vertebrae transplanted at 10 days did not differ very much from those of the host 5, 10 or 15 days after the operation, whereas in the vertebrae transplanted at 25 days the synchondroses underwent synostosis earlier than in situ. The synchondroses of the transplanted fragments, and especially of those placed intracerebrally, remained open longer than those in the whole vertebral transplants; the sponge and the spring also delayed closure. In the synchondroses transplanted at 25 days there was a strong reduction in alcian blue staining, whereas in the spring-loaded synchondroses the stainability persisted longer, possibly as an adaptation to the tensile force. It seems that the inherent potential of the neurocentral synchondroses to obliterate at a certain time can be altered by changing the biomechanical conditions. PMID:489467

  11. Methodological considerations of electron spin resonance spin trapping techniques for measuring reactive oxygen species generated from metal oxide nanomaterials

    NASA Astrophysics Data System (ADS)

    Jeong, Min Sook; Yu, Kyeong-Nam; Chung, Hyun Hoon; Park, Soo Jin; Lee, Ah Young; Song, Mi Ryoung; Cho, Myung-Haing; Kim, Jun Sung

    2016-05-01

    Qualitative and quantitative analyses of reactive oxygen species (ROS) generated on the surfaces of nanomaterials are important for understanding their toxicity and toxic mechanisms, which are in turn beneficial for manufacturing more biocompatible nanomaterials in many industrial fields. Electron spin resonance (ESR) is a useful tool for detecting ROS formation. However, using this technique without first considering the physicochemical properties of nanomaterials and proper conditions of the spin trapping agent (such as incubation time) may lead to misinterpretation of the resulting data. In this report, we suggest methodological considerations for ESR as pertains to magnetism, sample preparation and proper incubation time with spin trapping agents. Based on our results, each spin trapping agent should be given the proper incubation time. For nanomaterials having magnetic properties, it is useful to remove these nanomaterials via centrifugation after reacting with spin trapping agents. Sonication for the purpose of sample dispersion and sample light exposure should be controlled during ESR in order to enhance the obtained ROS signal. This report will allow researchers to better design ESR spin trapping applications involving nanomaterials.

  12. Biodegradation of cyanide by a new isolated strain under alkaline conditions and optimization by response surface methodology (RSM)

    PubMed Central

    2014-01-01

    Background Biodegradation of free cyanide from industrial wastewaters has been proven as a viable and robust method for treatment of wastewaters containing cyanide. Results Cyanide-degrading bacteria were isolated from a wastewater treatment plant for coke-oven-gas condensate by the enrichment culture technique. Five strains were able to use cyanide as the sole nitrogen source under alkaline conditions and, among them, one strain (C2) was selected for further studies on the basis of its higher efficiency of cyanide degradation. The bacterium was able to tolerate free cyanide at concentrations of up to 500 ppm, which makes it a potentially good candidate for the biological treatment of cyanide-contaminated residues. Cyanide degradation corresponded with growth and reached a maximum level of 96% during the exponential phase. The highest growth rate (1.23 × 10^8) was obtained on day 4 of the incubation time. Both glucose and fructose were suitable carbon sources for cyanotrophic growth. No growth was detected in media with cyanide as the sole carbon source. Four control factors, including pH, temperature, agitation speed and glucose concentration, were optimized according to a central composite design in response surface methodology. Cyanide degradation was optimum at 34.2°C, pH 10.3 and a glucose concentration of 0.44 g/l. Conclusions Bacterial species degrade cyanide into less toxic products as they are able to use the cyanide as a nitrogen source, forming ammonia and carbon dioxide as end products. The alkaliphilic bacterial strains screened in this study evidently showed the potential to possess degradative activities that can be harnessed to remediate cyanide wastes. PMID:24921051
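
    A central composite design like the four-factor one used here combines 2^k factorial points, 2k axial points, and replicated center runs. The sketch below builds such a design in coded units; the six center runs and the rotatability criterion alpha = (2^k)^(1/4) are standard textbook choices assumed for illustration, not taken from the paper.

```python
import itertools

def central_composite_design(k, n_center=6):
    """Rotatable CCD in coded units: 2^k factorial + 2k axial + n_center center points."""
    alpha = (2 ** k) ** 0.25                       # rotatability criterion
    factorial = [list(p) for p in itertools.product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for sign in (-alpha, alpha):
            point = [0.0] * k
            point[i] = sign
            axial.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

# Four factors: pH, temperature, agitation speed, glucose concentration.
design = central_composite_design(4)               # 16 + 8 + 6 = 30 runs
```

    Each coded row is then mapped onto the real factor ranges before running the experiments, and a second-order polynomial is fitted to the measured responses.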

  13. Optimization of Reflux Conditions for Total Flavonoid and Total Phenolic Extraction and Enhanced Antioxidant Capacity in Pandan (Pandanus amaryllifolius Roxb.) Using Response Surface Methodology

    PubMed Central

    Ghasemzadeh, Ali; Jaafar, Hawa Z. E.

    2014-01-01

    Response surface methodology was applied to optimize the conditions for reflux extraction of Pandan (Pandanus amaryllifolius Roxb.) in order to achieve a high content of total flavonoids (TF), total phenolics (TP), and high antioxidant capacity (AC) in the extracts. A central composite experimental design with three factors and three levels was employed to consider the effects of the operation parameters, including the methanol concentration (MC, 40%–80%), extraction temperature (ET, 40–70°C), and liquid-to-solid ratio (LS ratio, 20–40 mL/g), on the properties of the extracts. Response surface plots showed that increasing these operation parameters significantly increased the responses. The TF content and AC were maximized when the extraction conditions (MC, ET, and LS ratio) were 78.8%, 69.5°C, and 32.4 mL/g, respectively, whereas the TP content was optimal when these variables were 75.1%, 70°C, and 31.8 mL/g, respectively. Under these optimum conditions, the experimental TF content, TP content, and AC were 1.78 mg/g DW, 6.601 mg/g DW, and 87.38%, respectively. The optimized model was validated by a comparison of the predicted and experimental values. The experimental values were found to be in agreement with the predicted values, indicating the suitability of the model for optimizing the conditions for the reflux extraction of Pandan. PMID:25147852

  14. Towards uniform accelerometry analysis: a standardization methodology to minimize measurement bias due to systematic accelerometer wear-time variation.

    PubMed

    Katapally, Tarun R; Muhajarine, Nazeem

    2014-05-01

    Accelerometers are predominantly used to objectively measure the entire range of activity intensities: sedentary behaviour (SED), light physical activity (LPA) and moderate to vigorous physical activity (MVPA). However, studies consistently report results without accounting for systematic accelerometer wear-time variation (within and between participants), jeopardizing the validity of these results. This study describes the development of a standardization methodology to understand and minimize measurement bias due to wear-time variation. Accelerometry is generally conducted over seven consecutive days, with participants' data commonly considered 'valid' only if wear-time is at least 10 hours/day. However, even within 'valid' data, there could be systematic wear-time variation. To explore this variation, accelerometer data from the Smart Cities, Healthy Kids study (www.smartcitieshealthykids.com) were analyzed descriptively and with repeated measures multivariate analysis of variance (MANOVA). Subsequently, a standardization method was developed, where case-specific observed wear-time is controlled to an analyst-specified time period. Next, case-specific accelerometer data are interpolated to this controlled wear-time to produce standardized variables. To understand discrepancies owing to wear-time variation, all analyses were conducted pre- and post-standardization. Descriptive analyses revealed systematic wear-time variation, both between and within participants. Pre- and post-standardized descriptive analyses of SED, LPA and MVPA revealed a persistent and often significant trend of wear-time's influence on activity. SED was consistently higher on weekdays before standardization; however, this trend was reversed post-standardization. Even though MVPA was significantly higher on weekdays both pre- and post-standardization, the magnitude of this difference decreased post-standardization. Multivariable analyses with standardized SED, LPA and MVPA as outcome
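
    The standardization step described above (controlling observed wear-time to an analyst-specified period and interpolating activity to it) can be sketched as a proportional rescaling. The linear interpolation and the 600-minute target below are illustrative assumptions, not the study's exact procedure.

```python
def standardize_wear_time(minutes_by_intensity, observed_wear_min, target_wear_min=600.0):
    """Rescale per-intensity minutes from the observed wear-time to a controlled wear-time."""
    if observed_wear_min <= 0:
        raise ValueError("observed wear-time must be positive")
    scale = target_wear_min / observed_wear_min
    return {intensity: minutes * scale
            for intensity, minutes in minutes_by_intensity.items()}

# One 'valid' day: 720 min of wear split across intensities.
day = {"SED": 480.0, "LPA": 180.0, "MVPA": 60.0}
standardized = standardize_wear_time(day, observed_wear_min=720.0)
```

    After rescaling, days with different wear-times become comparable, which is the point of conducting the analyses pre- and post-standardization.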

  15. Improvement of electrical blood hematocrit measurements under various plasma conditions using a novel hematocrit estimation parameter.

    PubMed

    Kim, Myounggon; Kim, Ayoung; Kim, Sohee; Yang, Sung

    2012-05-15

    This paper presents an electrical method for the measurement of hematocrit (HCT) using a novel HCT estimation parameter. In the case of electrical HCT measurements in particular, the measurement error generally increases with changes in the electrical conditions of the plasma, such as conductivity and osmolality. This is because the electrical properties of blood are a function not only of HCT but also of the electrical conditions of the plasma. In an attempt to reduce these measurement errors, we herein propose a novel HCT estimation parameter that simultaneously reflects both the changes in the volume of red blood cells (RBCs) and the electrical conditions of the plasma. In order to characterize the proposed method under various electrical conditions of plasma, we prepared twelve blood samples: four kinds of plasma conditions (hypotonic, isotonic and two kinds of hypertonic conditions) at three different HCT levels. Using linear regression analysis, we confirmed that the proposed parameter was highly correlated with reference HCT (HCT(ref.)) values measured by microcentrifugation. Thus, the HCT measurement error was less than 4%, despite considerable variations in the conductivity and osmolality of the plasma, at an HCT(ref.) of 20%. Multiple linear regression analysis showed that the proposed HCT estimation parameter also yielded a lower measurement error (1%) than another parameter previously used for the same purpose. These preliminary results suggest that the proposed method could be used for accurate, fast, easy, and reproducible HCT measurements in medical procedures.
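
    The linear-regression check described above can be sketched as an ordinary least-squares fit of a candidate estimation parameter against reference HCT values; the data points below are made up for illustration, not the paper's measurements.

```python
def fit_line(x, y):
    """Ordinary least squares for y = a + b*x; returns (a, b, r), r being the correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r = sxy / (sxx * syy) ** 0.5
    return a, b, r

# Hypothetical estimation-parameter values vs. microcentrifuge reference HCT (%).
param = [0.21, 0.29, 0.41]
hct_ref = [20.0, 30.0, 40.0]
a, b, r = fit_line(param, hct_ref)
```

    A parameter that tracks HCT well should give r close to 1 regardless of plasma conductivity and osmolality, which is what the paper's proposed parameter achieves.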

  16. The accurate measurement of fear memory in Pavlovian conditioning: Resolving the baseline issue.

    PubMed

    Jacobs, Nathan S; Cushman, Jesse D; Fanselow, Michael S

    2010-07-15

    Fear conditioning has become an indispensable behavioral task in an increasingly vast array of research disciplines. Yet one unresolved issue is how conditional fear to an explicit cue interacts with and is potentially confounded by fear prior to tone presentation, referred to as baseline fear. After tone-shock pairings, we experimentally manipulated baseline fear by presenting unpaired shocks in the testing chamber and then analyzed the accuracy of common methods for reporting tone fear. Our findings indicate that baseline fear and tone fear tend to interact, where freezing to the tone increases as baseline fear increases. However, the form of interaction is not linear across all conditions and none of the commonly used reporting methods were consistently able to eliminate the confounding effects of baseline fear. We propose a methodological solution in which baseline fear is reduced to very low levels by first extinguishing fear to the training context and then pre-exposing to the testing context.

  17. Measuring the payback of research activities: a feasible ex-post evaluation methodology in epidemiology and public health.

    PubMed

    Aymerich, Marta; Carrion, Carme; Gallo, Pedro; Garcia, Maria; López-Bermejo, Abel; Quesada, Miquel; Ramos, Rafel

    2012-08-01

    Most ex-post evaluations of research funding programs are based on bibliometric methods and, although this approach has been widely used, it only examines one facet of the project's impact, that is, scientific productivity. More comprehensive models of payback assessment of research activities are designed for large-scale projects with extensive funding. The purpose of this study was to design and implement a methodology for the ex-post evaluation of small-scale projects that takes into account both the fulfillment of projects' stated objectives and other wider benefits to society as payback measures. We used a two-phase ex-post approach to appraise impact for 173 small-scale projects funded in 2007 and 2008 by a Spanish network center for research in epidemiology and public health. In the internal phase we used a questionnaire to query the principal investigator (PI) on the outcomes as well as the actual and potential impact of each project; in the external phase we sent a second questionnaire to external reviewers with the aim of assessing (by peer review) the performance of each individual project. Overall, 43% of the projects were rated as having completed their objectives "totally", and 40% "considerably". The research activities funded were reported by PIs as socially beneficial, with the greatest impact on research capacity (50% of payback to society) and on knowledge translation (above 11%). The proposed method showed a good discriminating ability that makes it possible to measure, reliably, the extent to which a project's objectives were met as well as the degree to which the project contributed to enhancing the group's scientific performance and its social payback.

  18. Optimization and Validation of High-Performance Chromatographic Condition for Simultaneous Determination of Adapalene and Benzoyl Peroxide by Response Surface Methodology

    PubMed Central

    Huang, Yaw-Bin; Wu, Pao-Chu

    2015-01-01

    The aim of this study was to develop a simple and reliable high-performance liquid chromatographic (HPLC) method for the simultaneous analysis of adapalene and benzoyl peroxide in a pharmaceutical formulation by response surface methodology (RSM). An optimized mobile phase composed of acetonitrile, tetrahydrofuran and water containing 0.1% acetic acid at a ratio of 25:50:25 by volume was successfully predicted by using RSM. An isocratic separation was achieved using this condition. Furthermore, the analytical method was validated in terms of specificity, linearity, accuracy and precision in a range of 80% to 120% of the expected concentration. Finally, the method was successfully applied to the analysis of a commercial product. PMID:25793581

  19. Optimization of Extraction Condition of Bee Pollen Using Response Surface Methodology: Correlation between Anti-Melanogenesis, Antioxidant Activity, and Phenolic Content.

    PubMed

    Kim, Seon Beom; Jo, Yang Hee; Liu, Qing; Ahn, Jong Hoon; Hong, In Pyo; Han, Sang Mi; Hwang, Bang Yeon; Lee, Mi Kyeong

    2015-11-02

    Bee pollen is flower pollen with nectar and salivary substances of bees, and it is rich in essential components. Bee pollen showed antioxidant and tyrosinase inhibitory activity in our assay system. To maximize the antioxidant and tyrosinase inhibitory activity of bee pollen, extraction conditions, such as extraction solvent, extraction time, and extraction temperature, were optimized using response surface methodology. Regression analysis showed a good fit of this model and yielded a second-order polynomial regression for tyrosinase inhibition and antioxidant activity. Among the extraction variables, the extraction solvent greatly affected the activity. The optimal conditions were determined as an EtOAc concentration in MeOH of 69.6%, a temperature of 10.0 °C, and an extraction time of 24.2 h; the tyrosinase inhibitory and antioxidant activities under these optimal conditions were found to be 57.9% and 49.3%, respectively. Further analysis showed a close correlation between the activities and phenolic content, which suggests that phenolic compounds are the active constituents of bee pollen for tyrosinase inhibition and antioxidant activity. Taken together, these results provide useful information about bee pollen as a cosmetic therapeutic to reduce oxidative stress and hyperpigmentation.

  20. Harman Measurements for Thermoelectric Materials and Modules under Non-Adiabatic Conditions

    NASA Astrophysics Data System (ADS)

    Roh, Im-Jun; Lee, Yun Goo; Kang, Min-Su; Lee, Jae-Uk; Baek, Seung-Hyub; Kim, Seong Keun; Ju, Byeong-Kwon; Hyun, Dow-Bin; Kim, Jin-Sang; Kwon, Beomjin

    2016-12-01

    Accuracy of the Harman measurement largely depends on the heat transfer between the sample and its surroundings, the so-called parasitic thermal effects (PTEs). As with material evaluations, measurements of thermoelectric modules (TEMs) are also affected by the PTEs, especially under atmospheric conditions. Here, we study correction methods for Harman measurements with systematically varied samples (both bulk materials and TEMs) under various conditions. Among several PTEs, the heat transfer via the electric wires is critical. Thus, we estimate the thermal conductance of the electric wires and correct the measured properties for a given sample shape and measuring temperature. The PTEs are responsible for the underestimation of the TEM properties, especially under atmospheric conditions (10–35%). This study will be useful for accurately characterizing the thermoelectric properties of materials and modules.
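
    Under ideal adiabatic conditions the Harman method extracts ZT from the ratio of the steady-state (DC) voltage to the ohmic (AC) voltage. The helper below shows only that textbook relation; the parasitic-loss corrections studied in the paper would modify it, and the voltage values are illustrative.

```python
def harman_zt(v_dc, v_ohmic):
    """Ideal (adiabatic) Harman relation: ZT = V_dc / V_ohmic - 1.

    Parasitic thermal effects (e.g. heat leak through the electric wires)
    reduce the measured DC voltage, so an uncorrected value underestimates ZT.
    """
    return v_dc / v_ohmic - 1.0

zt = harman_zt(v_dc=1.8e-3, v_ohmic=1.0e-3)   # illustrative voltages in volts
```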

  2. Condition Assessment Methodology for Spillways

    DTIC Science & Technology

    2008-06-01

    [Indexed excerpt only: the record's abstract text is a fragment of a condition-rating checklist for spillway lifting equipment, covering a mobile structure that supports a shared lifting device (including a gantry crane); inspection items such as track alignment, elevation and spacing (gauge) are rated from "according to specifications" down to "out of specification but no noticeable wear of track; crane can still lift gate and travel (without noise and vibration)".]

  3. Dynamic measurement of physical conditions in daily life by body area network sensing system

    NASA Astrophysics Data System (ADS)

    Takayama, S.; Tanaka, T.; Takahashi, N.; Matsuda, Y.; Kariya, K.

    2010-07-01

    This paper describes a measurement system for dynamically monitoring physical condition in daily life. A measurement system for physical condition in motion must be wearable and wirelessly connected. The body area network sensing system (BANSS) is such a system. BANSS is constructed from a host system and plural sensing nodes. Each sensing node is constructed with sensors, an analogue/digital converter (ADC), a peripheral interface component (PIC), memory and a near-field communication device (NFCD). The NFCD in this system is Zigbee, which is well suited to constructing a wireless network system easily. BANSS is not only a system for measuring physical parameters: it also reports the current physical condition and advises the user on keeping a suitable physical strength. As an application of BANSS, a system for managing heart rate during walking is shown. Using this system, users can exercise at a constant physical strength.
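
    A host system that keeps walking exercise at a constant intensity needs a target heart-rate band. One common choice (an assumption here, not stated in the paper) is the Karvonen formula based on heart-rate reserve, plus simple pacing advice around the target.

```python
def karvonen_target(hr_rest, hr_max, intensity):
    """Karvonen formula: target HR = HR_rest + intensity * (HR_max - HR_rest)."""
    return hr_rest + intensity * (hr_max - hr_rest)

def advise(hr_now, target, tolerance=5.0):
    """Illustrative advice logic a BANSS-style host might apply to walking pace."""
    if hr_now > target + tolerance:
        return "slow down"
    if hr_now < target - tolerance:
        return "speed up"
    return "keep pace"

target = karvonen_target(hr_rest=60.0, hr_max=180.0, intensity=0.5)   # 120 bpm
```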

  4. Experiential self-referential and selfless processing in mindfulness and mental health: Conceptual model and implicit measurement methodology.

    PubMed

    Hadash, Yuval; Plonsker, Reut; Vago, David R; Bernstein, Amit

    2016-07-01

    We propose that Experiential Self-Referential Processing (ESRP), the cognitive association of present moment subjective experience (e.g., sensations, emotions, thoughts) with the self, underlies various forms of maladaptation. We theorize that mindfulness contributes to mental health by engendering Experiential Selfless Processing (ESLP), processing present moment subjective experience without self-referentiality. To help advance understanding of these processes we aimed to develop an implicit, behavioral measure of ESRP and ESLP of fear, to experimentally validate this measure, and to test the relations between ESRP and ESLP of fear, mindfulness, and key psychobehavioral processes underlying (mal)adaptation. One hundred thirty-eight adults were randomized to 1 of 3 conditions: control, meta-awareness with identification, or meta-awareness with disidentification. We then measured ESRP and ESLP of fear by experimentally eliciting a subjective experience of fear while concurrently measuring each participant's cognitive association between self and fear by means of a Single Category Implicit Association Test; we refer to this measurement as the Single Experience & Self Implicit Association Test (SES-IAT). We found preliminary experimental and correlational evidence suggesting the fear SES-IAT measures ESLP of fear and 2 forms of ESRP: identification with fear and negative self-referential evaluation of fear. Furthermore, we found evidence that ESRP and ESLP are associated with meta-awareness (a core process of mindfulness), as well as key psychobehavioral processes underlying (mal)adaptation. These findings indicate that the cognitive association of self with experience (i.e., ESRP) may be an important substrate of the sense of self, and an important determinant of mental health.

  5. Stable carbon isotopic ratio measurement of polycyclic aromatic hydrocarbons as a tool for source identification and apportionment--a review of analytical methodologies.

    PubMed

    Buczyńska, A J; Geypens, B; Van Grieken, R; De Wael, K

    2013-02-15

    The measurement of the ratio of stable isotopes of carbon ((13)C/(12)C expressed as a δ(13)C) in the individual components of a sample may be used as a means to identify the origin of these components. This article reviews the approaches and reports on the successes and failures of source identification and apportionment of Polycyclic Aromatic Hydrocarbons (PAHs) with the use of compound-specific isotope analysis (CSIA). One of the conditions for a precise and accurate analysis of isotope ratios with the use of GC-C-IRMS is the need for well separated peaks, with no co-elutions, and reduced unresolved complex mixture (UCM). Additionally, special care needs to be taken for an investigation of possible isotope fractionation effects introduced during the analytical treatment of samples. With the above-mentioned problems in mind, this review discusses in detail and compares current laboratory methodologies, mainly in the extraction and subsequent clean-up techniques used for environmental samples (air particulate matter, soil and sediments). Sampling strategies, the use of isotopic internal standards and the ranges for precision and accuracy are also reported and discussed.
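
    The δ(13)C notation used throughout is the per-mil deviation of the sample's (13)C/(12)C ratio from a standard; a minimal helper follows, where the VPDB ratio is the commonly tabulated value and the example ratio is illustrative rather than a measured PAH value.

```python
VPDB_13C_12C = 0.011180   # commonly tabulated 13C/12C ratio of the VPDB standard

def delta13c_permil(r_sample, r_standard=VPDB_13C_12C):
    """delta13C (per mil) = (R_sample / R_standard - 1) * 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# An illustrative sample depleted by 26 per mil relative to VPDB:
r = VPDB_13C_12C * (1.0 - 26.0 / 1000.0)
```

    Source apportionment then compares such per-compound δ(13)C values against the signatures of candidate emission sources.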

  6. von Neumann measurement-related matrices and the nullity condition for quantum correlation

    NASA Astrophysics Data System (ADS)

    Zhao, MingJing; Ma, Teng; Zhang, TingGui; Fei, Shao-Ming

    2016-12-01

    We study von Neumann measurement-related matrices and the nullity condition of quantum correlation. We investigate the properties of these matrices that are related to a von Neumann measurement. It is shown that these (m² - 1) × (m² - 1) matrices are idempotent and have rank m - 1. These properties give rise to necessary conditions for the nullity of quantum correlations in bipartite systems. Finally, as an example we discuss quantum correlation in Bell diagonal states.

  7. Measuring Value Added in Higher Education: A Proposed Methodology for Developing a Performance Indicator Based on the Economic Value Added to Graduates

    ERIC Educational Resources Information Center

    Rodgers, Timothy

    2007-01-01

    The 2003 UK higher education White Paper suggested that the sector needed to re-examine the potential of the value added concept. This paper describes a possible methodology for developing a performance indicator based on the economic value added to graduates. The paper examines how an entry-quality-adjusted measure of a graduate's…

  8. Evaluation of optimum conditions for pachyman encapsulated in poly(d,l-lactic acid) nanospheres by response surface methodology and results of a related in vitro study

    PubMed Central

    Zheng, Sisi; Luo, Li; Bo, Ruonan; Liu, Zhenguang; Xing, Jie; Niu, Yale; Hu, Yuanliang; Liu, Jiaguo; Wang, Deyun

    2016-01-01

    This study aimed to optimize the preparation conditions of pachyman (PHY)-loaded poly(d,l-lactic acid) (PLA) (PHYP) nanospheres by response surface methodology, explore their characteristics, and assess their effects on splenic lymphocytes. Double emulsion solvent evaporation was used to synthesize PHYP nanospheres, and the optimal preparation conditions were identified as a concentration of poloxamer 188 (F68) (w/v) of 0.33%, a concentration of PLA of 30 mg/mL, and a ratio of PLA to drug (w/w) of 10.25:1 required to reach the highest encapsulation efficiency, which was calculated to be 59.10%. PHYP had a spherical shape with a smooth surface and uniform size and an evident effect of sustained release and relative stability. Splenic lymphocytes are crucial and multifunctional cells in the immune system, and their immunological properties could be enhanced significantly by PHYP treatment. This study confirmed that PHY encapsulated in PLA nanospheres had comparatively steady properties and exerted obvious immune enhancement. PMID:27729787
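
    An encapsulation efficiency such as the 59.10% reported above is conventionally computed from the drug mass balance. The formula below is the generic definition, not necessarily the exact assay used in this study, and the masses are illustrative.

```python
def encapsulation_efficiency(total_drug_mg, free_drug_mg):
    """EE (%) = (encapsulated / total) * 100 = (total - free) / total * 100."""
    if total_drug_mg <= 0:
        raise ValueError("total drug mass must be positive")
    return 100.0 * (total_drug_mg - free_drug_mg) / total_drug_mg

# Illustrative masses: 10 mg loaded, 4.09 mg remaining free in the supernatant.
ee = encapsulation_efficiency(total_drug_mg=10.0, free_drug_mg=4.09)
```

    In RSM studies like this one, EE is the response variable that the fitted model maximizes over the formulation factors.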

  9. Optimization of conditions for Cu(II) adsorption on D151 resin from aqueous solutions using response surface methodology and its mechanism study.

    PubMed

    Zhang, Hao; Xiong, Chunhua; Liu, Fang; Zheng, Xuming; Jiang, Jianxiong; Zheng, Qunxiong; Yao, Caiping

    2014-01-01

    An experimental study on the removal of Cu(II) from aqueous solutions by D151 resin was carried out in a batch system. The response surface methodology (RSM)-guided optimization indicated that the optimal adsorption conditions are a temperature of 35 °C, pH of 5.38, and initial Cu(II) concentration of 0.36 mg/mL, and the adsorption capacity predicted by the model reached 328.3 mg/g. At the optimum adsorption conditions, the adsorption capacity for Cu(II) obtained from actual experiments was 321.6 mg/g, in close agreement with the predicted value. The adsorption isotherm data fitted the Langmuir model well, and the correlation coefficient was evaluated. The calculated thermodynamic parameters (ΔG, ΔS, and ΔH) confirmed that the adsorption process was endothermic and spontaneous in nature. The desorption study revealed that Cu(II) can be effectively eluted by 1 mol/l HCl solution, and the recovery was 100%. Moreover, characterization was undertaken by infrared (IR) spectroscopy.
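
    The Langmuir fit reported above models monolayer adsorption, with uptake saturating at a maximum capacity. A sketch of the isotherm follows; the affinity constant and concentration are illustrative values, not the paper's fitted constants.

```python
def langmuir_uptake(q_max, k_l, c_eq):
    """Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C)."""
    return q_max * k_l * c_eq / (1.0 + k_l * c_eq)

# Uptake approaches q_max (here the predicted 328.3 mg/g) as concentration grows.
q = langmuir_uptake(q_max=328.3, k_l=15.0, c_eq=5.0)
```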

  10. Optimization on condition of epigallocatechin-3-gallate (EGCG) nanoliposomes by response surface methodology and cellular uptake studies in Caco-2 cells

    NASA Astrophysics Data System (ADS)

    Luo, Xiaobo; Guan, Rongfa; Chen, Xiaoqiang; Tao, Miao; Ma, Jieqing; Zhao, Jin

    2014-06-01

    The major component in green tea polyphenols, epigallocatechin-3-gallate (EGCG), has been demonstrated to prevent carcinogenesis. To improve the effectiveness of EGCG, liposomes were used as a carrier in this study. The reverse-phase evaporation method, combined with response surface methodology, is a simple, rapid, and effective approach to liposome preparation and optimization. The optimal preparation conditions were as follows: phosphatidylcholine-to-cholesterol ratio of 4.00, EGCG concentration of 4.88 mg/mL, Tween 80 concentration of 1.08 mg/mL, and rotary evaporation temperature of 34.51°C. Under these conditions, the experimental encapsulation efficiency and size of the EGCG nanoliposomes were 85.79% ± 1.65% and 180 nm ± 4 nm, which were close to the predicted values. The malondialdehyde value and the in vitro release test indicated that the prepared EGCG nanoliposomes were stable and suitable for more widespread application. Furthermore, compared with free EGCG, encapsulation of EGCG enhanced its inhibitory effect on tumor cell viability at higher concentrations.

  11. Thermally assisted mechanical dewatering (TAMD) of suspensions of fine particles: analysis of the influence of the operating conditions using the response surface methodology.

    PubMed

    Mahmoud, Akrama; Fernandez, Aurora; Chituchi, Toma-Mihai; Arlabosse, Patricia

    2008-08-01

    Thermally assisted mechanical dewatering (TAMD) is a new process for energy-efficient liquid/solids separation which enhances the efficiency of conventional devices. The main idea of this process is to supply a flow of heat during mechanical dewatering to favour the reduction of the liquid content. This is not a new idea, but the proposed combination, especially the chosen operating conditions (temperature <100 degrees C and pressure <3000 kPa), constitutes an original approach and a significant energy saving, since the liquid is kept in the liquid state. Response surface methodology was used to evaluate the effects of the processing parameters of TAMD on the final dry solids content, which is a fundamental dewatering parameter and an excellent indicator of the extent of TAMD. In this study, a two-factor central composite rotatable design was used to establish the optimum conditions for the TAMD of suspensions of fine particles. Significant regression models, describing changes in final dry solids content with respect to the independent variables, were established with regression coefficients (usually called determination coefficients), R(2), greater than 80%. Experiments were carried out on a laboratory filtration/compression cell, first on different compressible materials: synthetic mineral suspensions such as talc and synthetic organic suspensions such as cellulose, and then on industrial materials, such as a bentonite sludge provided by the Soletanche Bachy Company. Experiments showed that the extent of TAMD for a given material depends strongly on its physical and chemical properties but also on the processing parameters.

  12. Modelling and optimising of physicochemical features of walnut-oil beverage emulsions by implementation of response surface methodology: effect of preparation conditions on emulsion stability.

    PubMed

    Homayoonfal, Mina; Khodaiyan, Faramarz; Mousavi, Mohammad

    2015-05-01

    The major purpose of this study is to apply response surface methodology to model and optimise processing conditions for the preparation of beverage emulsions with maximum emulsion stability and viscosity and minimum particle size, turbidity loss rate, size index and peroxide value changes. A three-factor, five-level central composite design was conducted to estimate the effects of three independent variables: ultrasonic time (UT, 5-15 min), walnut-oil content (WO, 4-10% (w/w)) and Span 80 content (S80, 0.55-0.8). The results demonstrated that the empirical models fitted the experimental data satisfactorily (p < 0.0001). Evaluation of the responses by analysis of variance indicated high coefficients of determination. The overall optimum preparation conditions were a UT of 14.630 min, WO content of 8.238% (w/w), and S80 content of 0.782% (w/w). Under this optimum region, the responses were found to be 219.198, 99.184, 0.008, 0.008, 2.43 and 16.65 for particle size, emulsion stability, turbidity loss rate, size index, viscosity and peroxide value changes, respectively.

  13. Objective measures for predicting speech intelligibility in noisy conditions based on new band-importance functions.

    PubMed

    Ma, Jianfen; Hu, Yi; Loizou, Philipos C

    2009-05-01

    The articulation index (AI), speech-transmission index (STI), and coherence-based intelligibility metrics have been evaluated primarily in steady-state noisy conditions and have not been tested extensively in fluctuating noise conditions. The aim of the present work is to evaluate the performance of new speech-based STI measures, modified coherence-based measures, and AI-based measures operating on short-term (30 ms) intervals in realistic noisy conditions. Much emphasis is placed on the design of new band-importance weighting functions which can be used in situations wherein speech is corrupted by fluctuating maskers. The proposed measures were evaluated with intelligibility scores obtained by normal-hearing listeners in 72 noisy conditions involving noise-suppressed speech (consonants and sentences) corrupted by four different maskers (car, babble, train, and street interferences). Of all the measures considered, the modified coherence-based measures and speech-based STI measures incorporating signal-specific band-importance functions yielded the highest correlations (r=0.89-0.94). The modified coherence measure, in particular, that only included vowel/consonant transitions and weak consonant information yielded the highest correlation (r=0.94) with sentence recognition scores. The results from this study clearly suggest that the traditional AI and STI indices could benefit from the use of the proposed signal- and segment-dependent band-importance functions.
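
    The band-importance weighting at the core of these metrics reduces, in the simplest AI formulation, to an importance-weighted average of clipped band SNRs. The sketch below uses that textbook form with uniform weights as a stand-in for the signal- and segment-specific importance functions proposed in the paper.

```python
def articulation_index(band_snr_db, band_importance):
    """Simplified AI: importance-weighted mean of band SNRs clipped to
    [-15, 15] dB and mapped linearly onto [0, 1]."""
    assert abs(sum(band_importance) - 1.0) < 1e-9, "importance weights must sum to 1"
    ai = 0.0
    for snr_db, weight in zip(band_snr_db, band_importance):
        clipped = max(-15.0, min(15.0, snr_db))
        ai += weight * (clipped + 15.0) / 30.0
    return ai

snrs = [20.0, 5.0, -20.0, 0.0]      # per-band SNRs in dB (illustrative)
weights = [0.25] * 4                # uniform stand-in for a band-importance function
score = articulation_index(snrs, weights)
```

    Replacing the uniform weights with weights concentrated on perceptually informative bands (e.g. vowel/consonant transitions) is what raises the correlation with listener scores.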

  14. Doctoral training in statistics, measurement, and methodology in psychology: replication and extension of Aiken, West, Sechrest, and Reno's (1990) survey of PhD programs in North America.

    PubMed

    Aiken, Leona S; West, Stephen G; Millsap, Roger E

    2008-01-01

    In a survey of all PhD programs in psychology in the United States and Canada, the authors documented the quantitative methodology curriculum (statistics, measurement, and research design) to examine the extent to which innovations in quantitative methodology have diffused into the training of PhDs in psychology. In all, 201 psychology PhD programs (86%) participated. This survey replicated and extended a previous survey (L. S. Aiken, S. G. West, L. B. Sechrest, & R. R. Reno, 1990), permitting examination of curriculum development. Most training supported laboratory and not field research. The median of 1.6 years of training in statistics and measurement was mainly devoted to the modally 1-year introductory statistics course, leaving little room for advanced study. Curricular enhancements were noted in statistics and to a minor degree in measurement. Additional coverage of both fundamental and innovative quantitative methodology is needed. The research design curriculum has largely stagnated, a cause for great concern. Elite programs showed no overall advantage in quantitative training. Forces that support curricular innovation are characterized. Human capital challenges to quantitative training, including recruiting and supporting young quantitative faculty, are discussed. Steps must be taken to bring innovations in quantitative methodology into the curriculum of PhD programs in psychology.

  15. Solar collector test procedures: Development of a method to refer measured efficiencies to standardized test conditions

    NASA Astrophysics Data System (ADS)

    Thomas, W. C.

    1984-02-01

    An analytical procedure has been developed for referring collector efficiency measurements, obtained under different test conditions, to a common, or standard, set of conditions. The procedure applies to flat-plate liquid-type collectors of conventional tube-in-sheet design. The basic Hottel-Whillier-Bliss theory is used with appropriate extensions to account for serpentine flow configurations and glazing materials with high infrared transmittance. The procedure includes a systematic method for deriving two invariant collector parameters directly from ASHRAE Standard 93-77 test results. The two parameters selected are the plate absorptance and back loss coefficient. A set of standard conditions is recommended which corresponds to favorable test conditions.

  16. Measuring reward with the conditioned place preference (CPP) paradigm: update of the last decade.

    PubMed

    Tzschentke, Thomas M

    2007-09-01

    Conditioned place preference (CPP) continues to be one of the most popular models to study the motivational effects of drugs and non-drug treatments in experimental animals. This is obvious from a steady year-to-year increase in the number of publications reporting the use of this model. Since the compilation of the preceding review in 1998, more than 1000 new studies using place conditioning have been published, and the aim of the present review is to provide an overview of these recent publications. There are a number of trends and developments that are obvious in the literature of the last decade. First, as more and more knockout and transgenic animals become available, place conditioning is increasingly used to assess the motivational effects of drugs or non-drug rewards in genetically modified animals. Second, there is a still small but growing literature on the use of place conditioning to study the motivational aspects of pain, a field of pre-clinical research that has so far received little attention, because of the lack of appropriate animal models. Third, place conditioning continues to be widely used to study tolerance and sensitization to the rewarding effects of drugs induced by pre-treatment regimens. Fourth, extinction/reinstatement procedures in place conditioning are becoming increasingly popular. This interesting approach is thought to model certain aspects of relapse to addictive behavior and has previously almost exclusively been studied in drug self-administration paradigms. It has now also become established in the place conditioning literature and provides an additional and technically easy approach to this important phenomenon. The enormous number of studies to be covered in this review prevented in-depth discussion of many methodological, pharmacological or neurobiological aspects; to a large extent, the presentation of data had to be limited to a short and condensed summary of the most relevant findings.

  17. A conditional likelihood approach for regression analysis using biomarkers measured with batch-specific error.

    PubMed

    Wang, Ming; Flanders, W Dana; Bostick, Roberd M; Long, Qi

    2012-12-20

    Measurement error is common in epidemiological and biomedical studies. When biomarkers are measured in batches or groups, measurement error is potentially correlated within each batch or group. In regression analysis, most existing methods are not applicable in the presence of batch-specific measurement error in predictors. We propose a robust conditional likelihood approach to account for batch-specific error in predictors when batch effect is additive and the predominant source of error, which requires no assumptions on the distribution of measurement error. Although a regression model with batch as a categorical covariate yields the same parameter estimates as the proposed conditional likelihood approach for linear regression, this result does not hold in general for all generalized linear models, in particular, logistic regression. Our simulation studies show that the conditional likelihood approach achieves better finite sample performance than the regression calibration approach or a naive approach without adjustment for measurement error. In the case of logistic regression, our proposed approach is shown to also outperform the regression approach with batch as a categorical covariate. In addition, we also examine a 'hybrid' approach combining the conditional likelihood method and the regression calibration method, which is shown in simulations to achieve good performance in the presence of both batch-specific and measurement-specific errors. We illustrate our method by using data from a colorectal adenoma study.
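
    For linear regression, the record above notes that entering batch as a categorical covariate recovers the same estimates as the conditional likelihood approach. A small simulation sketch (hypothetical data and parameters, not the authors' code) illustrates why the naive fit is attenuated while the batch-adjusted fit recovers the slope:

```python
import numpy as np

rng = np.random.default_rng(0)
n_batches, per_batch = 20, 15
batch = np.repeat(np.arange(n_batches), per_batch)
x_true = rng.normal(size=batch.size)
batch_shift = rng.normal(scale=2.0, size=n_batches)  # additive batch-specific error
x_obs = x_true + batch_shift[batch]                  # biomarker measured with batch error
y = 1.0 + 0.5 * x_true + rng.normal(scale=0.1, size=batch.size)

# Naive OLS on the error-prone predictor (slope attenuated toward zero)
X_naive = np.column_stack([np.ones_like(x_obs), x_obs])
slope_naive = np.linalg.lstsq(X_naive, y, rcond=None)[0][1]

# OLS with batch as a categorical covariate: one dummy per batch absorbs
# the additive shift, so within-batch variation identifies the true slope
dummies = (batch[:, None] == np.arange(n_batches)).astype(float)
X_batch = np.column_stack([dummies, x_obs])
slope_batch = np.linalg.lstsq(X_batch, y, rcond=None)[0][-1]
```

    As the abstract cautions, this equivalence is specific to linear regression and does not carry over to logistic regression.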

  18. Methodology and measures for preventing unacceptable flow-accelerated corrosion thinning of pipelines and equipment of NPP power generating units

    NASA Astrophysics Data System (ADS)

    Tomarov, G. V.; Shipkov, A. A.; Lovchev, V. N.; Gutsev, D. F.

    2016-10-01

    Problems of metal flow-accelerated corrosion (FAC) in the pipelines and equipment (EaP) of the condensate-feeding and wet-steam paths of NPP power-generating units (PGU) are examined. Goals, objectives, and main principles of the methodology for the implementation of an integrated program of AO Concern Rosenergoatom for the prevention of unacceptable FAC thinning and for increasing operational flow-accelerated corrosion resistance of NPP EaP are formulated (hereinafter, the Program). A role is determined and potentialities are shown for the use of Russian software packages in the evaluation and prediction of FAC rate upon solving practical problems for the timely detection of unacceptable FAC thinning in the EaP elements of the secondary circuit of NPP PGU. Information is given concerning the structure, properties, and functions of the software systems for plant personnel support in the monitoring and planning of the in-service inspection of FAC thinning of elements of pipelines and equipment of the secondary circuit of NPP PGUs, which have been created and implemented at some Russian NPPs equipped with VVER-1000, VVER-440, and BN-600 reactors. It is noted that one of the most important practical results of software packages for supporting NPP personnel on the issue of flow-accelerated corrosion consists in revealing elements at risk of intense local FAC thinning. Examples are given of successful practice at some Russian NPPs in the use of software systems for supporting personnel in the early detection of secondary-circuit pipeline elements with FAC thinning close to an unacceptable level. Intermediate results of work on the Program are presented, and new tasks set in 2012 as part of the updated Program are identified. The prospects of the developed methods and tools in the scope of the Program measures at the stages of design and construction of NPP PGU are discussed. The main directions of the work on solving the problems of flow-accelerated corrosion are also outlined.

  19. Defining and measuring chronic conditions: imperatives for research, policy, program, and practice.

    PubMed

    Goodman, Richard A; Posner, Samuel F; Huang, Elbert S; Parekh, Anand K; Koh, Howard K

    2013-04-25

    Current trends in US population growth, age distribution, and disease dynamics foretell rises in the prevalence of chronic diseases and other chronic conditions. These trends include the rapidly growing population of older adults, the increasing life expectancy associated with advances in public health and clinical medicine, the persistently high prevalence of some risk factors, and the emerging high prevalence of multiple chronic conditions. Although preventing and mitigating the effect of chronic conditions requires sufficient measurement capacities, such measurement has been constrained by lack of consistency in definitions and diagnostic classification schemes and by heterogeneity in data systems and methods of data collection. We outline a conceptual model for improving understanding of and standardizing approaches to defining, identifying, and using information about chronic conditions in the United States. We illustrate this model's operation by applying a standard classification scheme for chronic conditions to 5 national-level data systems. Although the literature does not support a single uniform definition for chronic disease, recurrent themes include the non-self-limited nature, the association with persistent and recurring health problems, and a duration measured in months and years, not days and weeks (Thrall). So far, many different approaches have been used to measure the prevalence and consequences of chronic diseases and health conditions in children, resulting in a wide variability of prevalence estimates that cannot be readily compared (van der Lee et al.).

  20. Photovoltaic module energy rating methodology development

    SciTech Connect

    Kroposki, B.; Myers, D.; Emery, K.; Mrig, L.; Whitaker, C.; Newmiller, J.

    1996-05-01

    A consensus-based methodology to calculate the energy output of a PV module will be described in this paper. The methodology develops a simple measure of PV module performance that provides for a realistic estimate of how a module will perform in specific applications. The approach makes use of the weather data profiles that describe conditions throughout the United States and emphasizes performance differences between various module types. An industry-representative Technical Review Committee has been assembled to provide feedback and guidance on the strawman and final approach used in developing the methodology.
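
    An energy rating of this kind reduces, in its simplest form, to summing modeled module power over a representative weather profile. The sketch below uses a deliberately simplified linear power model with a hypothetical STC rating and temperature coefficient; it is a sketch of the general idea, not the committee's actual methodology:

```python
def module_power(g_wm2, t_cell_c, p_stc_w=300.0, gamma=-0.004):
    """Very simplified PV power model: output scales linearly with
    irradiance, with a power temperature coefficient (gamma, per degC)
    applied relative to STC (1000 W/m2, 25 degC cell temperature)."""
    return p_stc_w * (g_wm2 / 1000.0) * (1.0 + gamma * (t_cell_c - 25.0))

# Hypothetical one-day hourly profile: (irradiance W/m2, cell temperature degC)
profile = [(0, 15), (200, 20), (600, 35), (900, 45), (1000, 50),
           (800, 48), (400, 35), (100, 25), (0, 18)]

# One hour per sample, so summed power in W gives energy in Wh
energy_wh = sum(module_power(g, t) for g, t in profile)
```

    Weather profiles for different US climate zones would simply swap in different `profile` data, which is how such a rating emphasizes performance differences between module types.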

  1. PLANE-INTEGRATED OPEN-PATH FOURIER TRANSFORM INFRARED SPECTROMETRY METHODOLOGY FOR ANAEROBIC SWINE LAGOON EMISSION MEASUREMENTS

    EPA Science Inventory

    Emissions of ammonia and methane from an anaerobic lagoon at a swine animal feeding operation were evaluated five times over a period of two years. The plane-integrated (PI) open-path Fourier transform infrared spectrometry (OP-FTIR) methodology was used to transect the plume at ...

  2. Knowledge Value Added (KVA) Methodology as a Tool for Measuring the Utilization of Knowledge Assets Aboard Marine Corps Installations

    DTIC Science & Technology

    2008-06-01

    Submitted in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE IN INFORMATION TECHNOLOGY MANAGEMENT, Naval Postgraduate School. Cited related theses include: ...Value Added Methodologies (Masters Thesis, Information Technology Management Department, Naval Postgraduate School, Jun 2003) and Bourazanis..., Management Techniques and Their Impact on the Marine Corps in a Navy Marine Corps Intranet Environment (Masters Thesis, Information Technology Management Department).

  3. A New Methodology for Open Pit Slope Design in Karst-Prone Ground Conditions Based on Integrated Stochastic-Limit Equilibrium Analysis

    NASA Astrophysics Data System (ADS)

    Zhang, Ke; Cao, Ping; Ma, Guowei; Fan, Wenchen; Meng, Jingjing; Li, Kaihui

    2016-07-01

    Using the Chengmenshan Copper Mine as a case study, a new methodology for open pit slope design in karst-prone ground conditions is presented based on integrated stochastic-limit equilibrium analysis. The numerical modeling and optimization design procedure comprises drill core data collection, karst cave stochastic model generation, SLIDE simulation, and bisection method optimization. Borehole investigations are performed, and the statistical result shows that the length of the karst cave fits a negative exponential distribution model, but the length of carbonatite does not exactly follow any standard distribution. The inverse transform method and acceptance-rejection method are used to reproduce the length of the karst cave and carbonatite, respectively. A code for karst cave stochastic model generation, named KCSMG, is developed. The stability of the rock slope with the karst cave stochastic model is analyzed by combining the KCSMG code and the SLIDE program. This approach is then applied to study the effect of the karst cave on the stability of the open pit slope, and a procedure to optimize the open pit slope angle is presented.
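
    The negative exponential length model mentioned above can be sampled with the inverse transform method. A minimal sketch, assuming a hypothetical 2.5 m mean cave length (the mine's fitted parameter is not given in the record):

```python
import numpy as np

def sample_cave_lengths(mean_length, n, rng):
    """Inverse-transform sampling for a negative exponential length model:
    F(x) = 1 - exp(-x / mean)  =>  x = -mean * ln(1 - U), U ~ Uniform(0, 1)."""
    u = rng.random(n)
    return -mean_length * np.log1p(-u)  # log1p(-u) = ln(1 - u), numerically stable

rng = np.random.default_rng(42)
lengths = sample_cave_lengths(2.5, 100_000, rng)  # hypothetical 2.5 m mean
```

    For the carbonatite lengths, which fit no standard distribution, the record's acceptance-rejection method would instead draw candidates from a proposal density and keep each with probability proportional to the empirical density.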

  4. Optimal Conditions for Continuous Immobilization of Pseudozyma hubeiensis (Strain HB85A) Lipase by Adsorption in a Packed-Bed Reactor by Response Surface Methodology

    PubMed Central

    Bussamara, Roberta; Dall'Agnol, Luciane; Schrank, Augusto; Fernandes, Kátia Flávia; Vainstein, Marilene Henning

    2012-01-01

    This study aimed to develop an optimal continuous process for lipase immobilization in a bed reactor in order to investigate the possibility of large-scale production. An extracellular lipase of Pseudozyma hubeiensis (strain HB85A) was immobilized by adsorption onto a polystyrene-divinylbenzene support. Furthermore, response surface methodology (RSM) was employed to optimize enzyme immobilization and evaluate the optimum temperature and pH for free and immobilized enzyme. The optimal immobilization conditions observed were 150 min incubation time, pH 4.76, and an enzyme/support ratio of 1282 U/g support. Optimal activity temperature for free and immobilized enzyme was found to be 68°C and 52°C, respectively. Optimal activity pH for free and immobilized lipase was pH 4.6 and 6.0, respectively. Lipase immobilization resulted in improved enzyme stability in the presence of nonionic detergents, at high temperatures, at acidic and neutral pH, and at high concentrations of organic solvents such as 2-propanol, methanol, and acetone. PMID:22315670

  5. Optimization of hydrolysis conditions for the production of glucomanno-oligosaccharides from konjac using β-mannanase by response surface methodology.

    PubMed

    Chen, Junfan; Liu, Desheng; Shi, Bo; Wang, Hai; Cheng, Yongqiang; Zhang, Wenjing

    2013-03-01

    Glucomanno-oligosaccharides (GMO), usually produced from hydrolysis of konjac tubers with a high content of glucomannan, have a positive effect on Bifidobacterium as well as a variety of other physiological activities. Response surface methodology (RSM) was employed to optimize the hydrolysis time, hydrolysis temperature, pH and enzyme to substrate ratio (E/S) to obtain a high GMO yield from konjac tubers. From the single-factor experiments, it was concluded that the change in the direct reducing sugar (DRS) is consistent with total reducing sugar (TRS) but contrary to the degree of polymerization (DP). DRS was used as an indicator of the content of GMO in the RSM study. The optimum RSM operating conditions were: reaction time of 3.4 h, reaction temperature of 41.0°C, pH of 7.1 and E/S of 0.49. The results suggested that the enzymatic hydrolysis was enhanced by temperature, pH and incubation time. Model validation showed good agreement between experimental results and the predicted responses.
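
    The RSM step described here amounts to fitting a second-order polynomial to the measured responses and locating its stationary point. A sketch on synthetic data (two coded factors standing in for, e.g., time and temperature; all values hypothetical, with a planted optimum at (0.4, -0.2)):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical coded factors on [-1, 1] and a synthetic response surface
t = rng.uniform(-1, 1, 60)
T = rng.uniform(-1, 1, 60)
y = 10 - 3*(t - 0.4)**2 - 2*(T + 0.2)**2 + rng.normal(scale=0.05, size=60)

# Fit the second-order RSM model: b0 + b1*t + b2*T + b11*t^2 + b22*T^2 + b12*t*T
X = np.column_stack([np.ones_like(t), t, T, t**2, T**2, t*T])
b = np.linalg.lstsq(X, y, rcond=None)[0]

# Stationary point: solve the gradient system of the fitted quadratic
A = np.array([[2*b[3], b[5]],
              [b[5], 2*b[4]]])
opt = np.linalg.solve(A, -np.array([b[1], b[2]]))  # recovers ~(0.4, -0.2)
```

    In practice the design points come from a central composite or similar design rather than uniform random sampling, but the fitting and optimization steps are the same.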

  6. Quantification of furanic compounds in coated deep-fried products simulating normal preparation and consumption: optimisation of HS-SPME analytical conditions by response surface methodology.

    PubMed

    Pérez-Palacios, T; Petisca, C; Melo, A; Ferreira, I M P L V O

    2012-12-01

    The validation of a method for the simultaneous quantification of furanic compounds in coated deep-fried samples processed and handled as usually consumed is presented. The deep-fried food was ground using a device that simulates mastication and immediately analysed by headspace solid phase microextraction coupled to gas chromatography-mass spectrometry. Parameters affecting the efficiency of the HS-SPME procedure were selected by response surface methodology, using a 2³ full-factorial central composite design. Optimal conditions were achieved using 2 g of sample, 3 g of NaCl and 40 min of absorption time at 37°C. Consistency between predicted and experimental values was observed and quality parameters of the method were established. As a result, furan, 2-furfural, furfuryl alcohol and 2-pentylfuran were, for the first time, simultaneously detected and quantified (5.59, 0.27, 10.48 and 1.77 μg g⁻¹ sample, respectively) in coated deep-fried fish, contributing to a better understanding of the amounts of these compounds in food.

  7. Methodology for Using 3-Dimensional Sonography to Measure Fetal Adrenal Gland Volumes in Pregnant Women With and Without Early Life Stress.

    PubMed

    Kim, Deborah; Epperson, C Neill; Ewing, Grace; Appleby, Dina; Sammel, Mary D; Wang, Eileen

    2016-09-01

    Fetal adrenal gland volumes on 3-dimensional sonography have been studied as potential predictors of preterm birth. However, no consistent methodology has been published. This article describes the methodology used in a study that is evaluating the effects of maternal early life stress on fetal adrenal growth to allow other researchers to compare methodologies across studies. Fetal volumetric data were obtained in 36 women at 20 to 22 and 28 to 30 weeks' gestation. Two independent examiners measured multiple images of a single fetal adrenal gland from each sonogram. Intra- and inter-rater consistency was examined. In addition, fetal adrenal volumes between male and female fetuses were reported. The intra- and inter-rater reliability was satisfactory when the mean of 3 measurements from each rater was used. At 20 weeks' gestation, male fetuses had larger average adjusted adrenal volumes than female fetuses (mean, 0.897 versus 0.638; P = .004). At 28 weeks' gestation, the fetal weight was more influential in determining values for adjusted fetal adrenal volume (0.672 for male fetuses versus 0.526 for female fetuses; P = .034). This article presents a methodology for assessing fetal adrenal volume using 3-dimensional sonography that can be used by other researchers to provide more consistency across studies.

  8. Effects of acetic acid injection and operating conditions on NO emission in a vortexing fluidized bed combustor using response surface methodology

    SciTech Connect

    Fuping Qian; Chiensong Chyang; Weishen Yen

    2009-07-15

    The effects of acetic acid injection and operating conditions on NO emission were investigated in a pilot scale vortexing fluidized bed combustor (VFBC), an integration of circular freeboard and a rectangular combustion chamber. Operating conditions, such as the stoichiometric oxygen in the combustion chamber, the bed temperature and the injecting location of acetic acid, were determined by means of response surface methodology (RSM), which enables the examination of parameters with a moderate number of experiments. In RSM, NO emission concentration after acetic acid injection and NO removal percentage at the exit of the VFBC are used as the objective function. The results show that the bed temperature has a more important effect on the NO emission than the injecting location of acetic acid and the stoichiometric oxygen in the combustion chamber. Meanwhile, the injecting location of acetic acid and the stoichiometric oxygen in the combustion chamber have a more important effect on the NO removal percentage than the bed temperature. NO emission can be decreased by injecting the acetic acid into the combustion chamber, and NO emission decreases with the height of the acetic acid injecting location above the distributor. On the other hand, NO removal percentage increases with the height of the acetic acid injecting location, and NO emission increases with the stoichiometric oxygen in the combustion chamber and the bed temperature. NO removal percentage increases with the stoichiometric oxygen, and increases first, then decreases with the bed temperature. Also, a higher NO removal percentage could be obtained at 850°C. 26 refs., 12 figs., 8 tabs.

  9. Methodological Adaptations for Reliable Measurement of Radium and Radon Isotopes in Hydrothermal Fluids of Extreme Chemical Diversity in Yellowstone National Park, Wyoming, USA

    NASA Astrophysics Data System (ADS)

    Role, A.; Sims, K. W. W.; Scott, S. R.; Lane-Smith, D. R.

    2015-12-01

    To quantitatively model fluid residence times, water-rock-gas interactions, and fluid flow rates in the Yellowstone (YS) hydrothermal system we are measuring short-lived isotopes of Ra (228Ra, 226Ra, 224Ra, 223Ra) and Rn (222Rn and 220Rn) in hydrothermal fluids and gases. While these isotopes have been used successfully in investigations of water residence times, mixing, and groundwater discharge in oceanic, coastal and estuarine environments, the well-established techniques for measuring Ra and Rn isotopes were developed for seawater and dilute groundwaters, which have near neutral pH, moderate temperatures, and a limited range of chemical composition. Unfortunately, these techniques, as originally developed, are not suitable for the extreme range of compositions found in YS waters, which have pH ranging from <1 to 10, Eh from -0.208 to 0.700 V, water temperatures from ambient to 93°C, and high dissolved CO2 concentrations. Here we report on our refinements of these Ra and Rn methods for the extreme conditions found in YS. Our methodologies are now enabling us to quantitatively isolate Ra from fluids that cover a large range of chemical compositions and conduct in-situ Rn isotope measurements that accommodate variable temperatures and high CO2 (Lane-Smith and Sims, 2013, Acta Geophys. 61). These Ra and Rn measurements are now allowing us to apply simple models to quantify hot spring water residence times and aquifer-to-surface travel times. (224Ra/223Ra) calculations provide water-rock reaction zone to discharge zone travel-time estimates of 4 to 14 days for Yellowstone hot springs, and (224Ra/228Ra) gives shallow aquifer to surface travel times of 2 to 7 days. Further development of more sophisticated models that take into account water-rock-gas reactions and water mixing (shallow groundwater, surface run-off, etc.) will allow us to estimate the timescales of these processes more accurately and thus provide a heretofore-unknown time component to the YS hydrothermal system.
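
    The (224Ra/223Ra) transit-time estimates rest on the decay of an activity ratio between two short-lived isotopes with different half-lives. A minimal sketch of the underlying calculation, assuming a known initial ratio at the reaction zone (the example ratios are hypothetical; the half-lives are the standard values of ~3.66 d for 224Ra and ~11.43 d for 223Ra):

```python
import math

HALF_LIFE_RA224 = 3.66   # days
HALF_LIFE_RA223 = 11.43  # days

def apparent_age_days(ratio_initial, ratio_measured):
    """Apparent transit time from decay of a (224Ra/223Ra) activity ratio:
    R(t) = R0 * exp(-(lam224 - lam223) * t), so
    t = ln(R0 / R) / (lam224 - lam223)."""
    lam224 = math.log(2) / HALF_LIFE_RA224
    lam223 = math.log(2) / HALF_LIFE_RA223
    return math.log(ratio_initial / ratio_measured) / (lam224 - lam223)
```

    Because 224Ra decays faster than 223Ra, the ratio falls with time; a measured ratio at half its assumed initial value corresponds to roughly five days of transit under this simple closed-system model.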

  10. Humidity-conditioned gravimetric method to measure the water content of hydrogel contact lens materials.

    PubMed

    Galas, S L; Enns, J B

    1993-07-01

    A method to determine the humidity-conditioned gravimetric water content of hydrogel contact lens materials has been developed, in which errors due to blotting have been eliminated by conditioning the lens in a series of relative humidity (RH) environments before measuring the water content gravimetrically, and then extrapolating the water content to 100% RH. This method has been used to determine the water contents of representative materials from each of the four FDA lens groups, which were compared with their labeled values, as well as with values obtained from refractive index measurements. The deviation of the water content of soft contact lenses as measured by refractive index from that obtained gravimetrically increased as the water content decreased. The humidity-conditioned gravimetric method to determine water content of hydrophilic contact lenses is being proposed as an International Organization for Standardization (ISO) standard, as an improvement over the gravimetric and refractive index methods.
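
    The extrapolation step of the humidity-conditioned method can be sketched as a least-squares line through the conditioned water-content measurements, evaluated at 100% RH. All data points below are hypothetical, chosen only to show the calculation:

```python
import numpy as np

# Hypothetical conditioning series: (relative humidity %, measured water content %)
rh = np.array([75.0, 85.0, 93.0, 97.0])
wc = np.array([52.1, 54.0, 55.6, 56.3])

# Least-squares line through the conditioning points, extrapolated to 100% RH
slope, intercept = np.polyfit(rh, wc, 1)
wc_at_100 = slope * 100.0 + intercept
```

    Whether a linear or some other fit is appropriate near saturation is a judgment the method itself must justify; the sketch assumes linearity.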

  11. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    PubMed Central

    Egert, Amanda M.; Klotz, James L.; McLeod, Kyle R.; Harmon, David L.

    2014-01-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P < 0.01) between hours and thirds (8-h periods) of the day. There were no differences between days for most variables. The variance of the second 8-h period of the day was less than (P < 0.01) the first for area and less than the third for amplitude, frequency, duration, and area (P < 0.05). These data demonstrated that the second 8-h period of the day was the least variable for many measures of motility and would provide the best opportunity for testing differences in motility due to treatments. In Experiment 2, the steers (n = 8) were pair-fed the basal diet of Experiment 1 and dosed with endophyte-free (E−) or endophyte-infected (E+; 0 or 10 μg ergovaline + ergovalinine/kg BW, respectively) tall fescue seed before feeding for 15 d. Rumen motility was measured for 8 h beginning 8 h after feeding for the first 14 d of seed dosing. Blood samples were taken on d 1, 7, and 15, and rumen content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8-h period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of rumen contents.

  12. Development of a methodology to measure the effect of ergot alkaloids on forestomach motility using real-time wireless telemetry

    NASA Astrophysics Data System (ADS)

    Egert, Amanda; Klotz, James; McLeod, Kyle; Harmon, David

    2014-10-01

    The objectives of these experiments were to characterize rumen motility patterns of cattle fed once daily using a real-time wireless telemetry system, determine when to measure rumen motility with this system, and determine the effect of ruminal dosing of ergot alkaloids on rumen motility. Ruminally cannulated Holstein steers (n = 8) were fed a basal diet of alfalfa cubes once daily. Rumen motility was measured by monitoring real-time pressure changes within the rumen using wireless telemetry and pressure transducers. Experiment 1 consisted of three 24-h rumen pressure collections beginning immediately after feeding. Data were recorded, stored, and analyzed using iox2 software and the rhythmic analyzer. All motility variables differed (P < 0.01) between hours and thirds (8-h periods) of the day. There were no differences between days for most variables. The variance of the second 8-h period of the day was less than (P < 0.01) the first for area and less than the third for amplitude, frequency, duration, and area (P < 0.05). These data demonstrated that the second 8-h period of the day was the least variable for many measures of motility and would provide the best opportunity for testing differences in motility due to treatments. In Exp. 2, the steers (n = 8) were pair-fed the basal diet of Exp. 1 and dosed with endophyte-free (E-) or endophyte-infected (E+; 0 or 10 μg ergovaline + ergovalinine/kg BW, respectively) tall fescue seed before feeding for 15 d. Rumen motility was measured for 8 h beginning 8 h after feeding for the first 14 d of seed dosing. Blood samples were taken on d 1, 7, and 15, and rumen content samples were taken on d 15. Baseline (P = 0.06) and peak (P = 0.04) pressure were lower for E+ steers. Water intake tended (P = 0.10) to be less for E+ steers the first 8-h period after feeding. The E+ seed treatment at this dosage under thermoneutral conditions did not significantly affect rumen motility, ruminal fill, or dry matter of rumen contents.

  13. Measuring environmental impact by real time laser differential displacement technique in simulated climate conditions

    NASA Astrophysics Data System (ADS)

    Tornari, Vivi; Bernikola, Eirini; Tsigarida, Nota; Hatzigiannakis, Kostas; Andrianakis, Michalis; Leissner, Johanna

    2015-06-01

    Environmental impact on artworks has always been a major issue for the preservation of Cultural Heritage. With climate change, a slow but steady temperature increase affects relative humidity, which fluctuates as materials attempt to maintain moisture balance. During repeated equilibration cycles, fatigue accumulates, endangering structural integrity prior to fracture. Assessing the risk imposed by these fluctuations allows preventive action to be taken, avoiding interventive restoration after fracture. A methodology is presented employing full-field interferometry with surface-probing illumination, based on direct real-time recording of images of delicate hygroscopic surfaces as they deform in dimensional response to relative humidity (RH) changes. The methodology aims to provide an early-stage risk-indicator tool enabling preventive measures directly through surface readings. The presented study, aiming to experimentally highlight structural acclimatisation phenomena and to verify assumed RH safety-range standards on the basis of the newly introduced concept of a deformation threshold value, is described and demonstrated with indicative results.

  14. [Methodological description of accelerometry for measuring physical activity in the 1993 and 2004 Pelotas (Brazil) birth cohorts].

    PubMed

    Knuth, Alan Goularte; Assunção, Maria Cecília F; Gonçalves, Helen; Menezes, Ana Maria B; Santos, Iná S; Barros, Aluísio J D; Matijasevich, Alicia; Ramires, Virgílio Viana; Silva, Inácio Crochemore Mohnsamb da; Hallal, Pedro Curi

    2013-03-01

    The aim of this study was to characterize the methodology of data collection on physical activity using accelerometry in two birth cohorts (2004 and 1993) in Pelotas, Rio Grande do Sul State, Brazil, at the 6-7 and 18-year follow-up visits, respectively. During visits to the study headquarters for a health evaluation, cohort subjects received the accelerometer to be worn on the wrist for 5 to 8 days, after which the device was retrieved at their homes. Genea and GENEActiv triaxial accelerometers, which record acceleration in gravity (g) units, were employed. Accelerometry data were collected from 3,331 children (93.7% of those included in follow-up) and 3,816 adolescents (99% of those in follow-up). The study characterizes the data collection methodology in more than 7,000 individuals and discusses issues in its implementation. It thus provides a methodological framework aimed at helping to plan future population-based studies with the use of such technology and to improve understanding of physical activity in the context of epidemiological studies.
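
    Raw wrist-worn triaxial data of this kind, expressed in gravity (g) units, are commonly summarized with the ENMO metric (Euclidean norm minus one); whether these cohorts used this exact metric is an assumption of this sketch, shown only to illustrate how g-unit signals are typically reduced:

```python
import numpy as np

def enmo(acc_xyz_g):
    """Euclidean Norm Minus One: vector magnitude of the triaxial signal
    (in g) minus the 1 g of gravity, with negative values truncated to zero."""
    acc = np.asarray(acc_xyz_g, dtype=float)
    magnitude = np.linalg.norm(acc, axis=1)
    return np.maximum(magnitude - 1.0, 0.0)

# Hypothetical samples: at rest (pure gravity) vs. moving
samples = np.array([[0.0, 0.0, 1.0],    # resting: magnitude 1 g
                    [0.6, 0.8, 1.0],    # moving: magnitude sqrt(2) g
                    [0.0, 0.0, 0.5]])   # sub-1 g magnitude: truncated to 0
values = enmo(samples)
```

    Per-epoch averages of such a metric are what typically feed into population-level physical activity summaries.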

  15. Statistical Measurements of Contact Conditions of 478 Transport-airplane Landings During Routine Daytime Operations

    NASA Technical Reports Server (NTRS)

    Silsby, Norman S

    1955-01-01

    Statistical measurements of contact conditions have been obtained, by means of a special photographic technique, of 478 landings of present-day transport airplanes made during routine daylight operations in clear air at the Washington National Airport. From the measurements, sinking speeds, rolling velocities, bank angles, and horizontal speeds at the instant before contact have been evaluated, and a limited statistical analysis of the results is presented.

  16. Methodological study investigating long term laser Doppler measured cerebral blood flow changes in a permanently occluded rat stroke model.

    PubMed

    Eve, David J; Musso, James; Park, Dong-Hyuk; Oliveira, Cathy; Pollock, Kenny; Hope, Andrew; Baradez, Marc-Olivier; Sinden, John D; Sanberg, Paul R

    2009-05-30

    Cerebral blood flow is impaired during middle cerebral artery occlusion in the rat model of stroke. However, the long term effects on cerebral blood flow following occlusion have received little attention. We examined cerebral blood flow in both sides at multiple time points following middle cerebral artery occlusion of the rat. The bilateral cerebral blood flow in young male Sprague Dawley rats was measured at the time of occlusion, as well as 4, 10 and 16 weeks after occlusion. Under the present experimental conditions, the difference between the left and right sides' cerebral blood flow appeared to switch direction in an oscillatory fashion over time in the sham-treated group, whereas the occluded animals consistently showed left-side dominance. One group of rats was intraparenchymally transplanted with a human neural stem cell line (CTX0E03 cells) known to have benefit in stroke models. Cerebral blood flow in the lesioned side of the cell-treated group was observed to be improved compared to the untreated rats and to demonstrate a similar oscillatory nature to that observed in sham-treated animals. These findings suggest that repeated bilateral monitoring of cerebral blood flow over time can demonstrate the effects of stem cell transplantation as efficiently as functional tests in an animal stroke model.

  17. Advanced analytical methodologies for measuring healthy ageing and its determinants, using factor analysis and machine learning techniques: the ATHLOS project.

    PubMed

    Félix Caballero, Francisco; Soulis, George; Engchuan, Worrawat; Sánchez-Niubó, Albert; Arndt, Holger; Ayuso-Mateos, José Luis; Haro, Josep Maria; Chatterji, Somnath; Panagiotakos, Demosthenes B

    2017-03-10

    One of the most challenging tasks for scientists involved in the study of ageing is the development of a measure to quantify health status across populations and over time. In the present study, a Bayesian multilevel Item Response Theory approach is used to create a health score that can be compared across different waves in a longitudinal study, using anchor items and items that vary across waves. The same approach can be applied to compare health scores across different longitudinal studies, using items that vary across studies. Data from the English Longitudinal Study of Ageing (ELSA) are employed. Mixed-effects multilevel regression and machine learning methods were used to identify relationships between socio-demographics and the health score created. The metric of health was created for 17,886 subjects (54.6% women) participating in at least one of the first six ELSA waves and correlated well with known conditions that affect health. Future efforts will implement this approach in a harmonised data set comprising several longitudinal studies of ageing. This will enable valid comparisons between clinical and community-dwelling populations and help to generate norms that could be useful in day-to-day clinical practice.

  18. Advanced analytical methodologies for measuring healthy ageing and its determinants, using factor analysis and machine learning techniques: the ATHLOS project

    PubMed Central

    Félix Caballero, Francisco; Soulis, George; Engchuan, Worrawat; Sánchez-Niubó, Albert; Arndt, Holger; Ayuso-Mateos, José Luis; Haro, Josep Maria; Chatterji, Somnath; Panagiotakos, Demosthenes B.

    2017-01-01

    One of the most challenging tasks for scientists involved in the study of ageing is the development of a measure to quantify health status across populations and over time. In the present study, a Bayesian multilevel Item Response Theory approach is used to create a health score that can be compared across different waves in a longitudinal study, using anchor items and items that vary across waves. The same approach can be applied to compare health scores across different longitudinal studies, using items that vary across studies. Data from the English Longitudinal Study of Ageing (ELSA) are employed. Mixed-effects multilevel regression and machine learning methods were used to identify relationships between socio-demographics and the health score created. The metric of health was created for 17,886 subjects (54.6% women) participating in at least one of the first six ELSA waves and correlated well with known conditions that affect health. Future efforts will implement this approach in a harmonised data set comprising several longitudinal studies of ageing. This will enable valid comparisons between clinical and community-dwelling populations and help to generate norms that could be useful in day-to-day clinical practice. PMID:28281663

  19. Laser vibrometry vibration measurements on vehicle cabins in running conditions: helicopter mock-up application

    NASA Astrophysics Data System (ADS)

    Revel, Gian Marco; Castellini, Paolo; Chiariotti, Paolo; Tomasini, Enrico Primo; Cenedese, Fausto; Perazzolo, Alessandro

    2011-10-01

    The present work deals with the analysis of the problems and potential of laser vibrometer measurements inside vehicle cabins in running conditions, with particular reference to helicopters, where interior vibro-acoustic issues are very important. This paper describes the results of a systematic measurement campaign performed on an Agusta A109MKII mock-up. The aim is to evaluate the applicability of the scanning laser Doppler vibrometer (SLDV) for tests in simulated flying conditions and to understand how the performance of the technique is affected when the laser head is placed inside the cabin, thus being subjected to interfering inputs. First, a brief description of the test cases performed and the measuring set-ups used is given. Comparative tests between the SLDV and accelerometers are presented, analyzing the achievable performance for the specific application. Results obtained measuring with the SLDV placed inside the helicopter cabin during operative excitation conditions are compared with those obtained with the laser lying outside the mock-up, the latter being considered as "reference measurements." Finally, in order to give an estimate of the uncertainty level on measured signals, a study linking the admissible percentage of noise content in vibrometer signals to laser head vibration levels is introduced.

  20. ANNUS MIRABILIS. PHYSICS OF OUR DAYS: Development of quantum measurement methods (Methodological notes on part of Einstein's scientific legacy)

    NASA Astrophysics Data System (ADS)

    Braginsky, Vladimir B.

    2005-06-01

    Historical development of indirect quantum measurements is briefly reviewed. Using several examples, considerable resources are shown to exist for increasing the sensitivity of various types of quantum measurements.

  1. Reliability of the Kinetic Measures under Different Heel Conditions during Normal Walking

    ERIC Educational Resources Information Center

    Liu, Yuanlong; Wang, Yong Tai

    2004-01-01

    The purpose of this study was to determine and compare the reliability of three-dimensional ground reaction forces and impulses in walking under 3 different heel-shoe conditions. These results suggest that changing the height of the heels mainly affects the reliability of the ground reaction force and impulse measures in the medial-lateral dimension and not…

  2. Measuring various sizes of H-reflex while monitoring the stimulus condition.

    PubMed

    Hiraoka, Koichi

    2002-11-01

    The purpose of this study was to assess the usefulness of a new technique that measures various sizes of the soleus H-reflex while monitoring the stimulus condition. Eight healthy volunteers participated in this experiment. In the new technique, an above-motor-threshold conditioning stimulus was given to the tibial nerve 10-12 ms after a below-motor-threshold test stimulus. The conditioning stimulus evoked a direct M-wave, which was followed by a test-stimulus-evoked H-reflex. This reflex was followed by a conditioning-stimulus-evoked H-reflex. The amount of voluntary-contraction-induced facilitation of the H-reflex was similar for both the new technique and the conventional technique, in which an above-motor-threshold test stimulus was given without a conditioning stimulus. Using the new technique, we found that the amount of facilitation increased linearly with the size of the test H-reflex. This technique allows us to evoke various sizes of H-reflex while monitoring the stimulus condition, and is useful for measuring H-reflexes during voluntary movement.

  3. Measurement of Survival Time in Brachionus Rotifers: Synchronization of Maternal Conditions.

    PubMed

    Kaneko, Gen; Yoshinaga, Tatsuki; Gribble, Kristin E; Welch, David M; Ushio, Hideki

    2016-07-22

    Rotifers are microscopic cosmopolitan zooplankton used as models in ecotoxicological and aging studies due to their several advantages such as short lifespan, ease of culture, and parthenogenesis that enables clonal culture. However, caution is required when measuring their survival time as it is affected by maternal age and maternal feeding conditions. Here we provide a protocol for powerful and reproducible measurement of the survival time in Brachionus rotifers following a careful synchronization of culture conditions over several generations. Empirically, poor synchronization results in early mortality and a gradual decrease in survival rate, thus resulting in weak statistical power. Indeed, under such conditions, calorie restriction (CR) failed to significantly extend the lifespan of B. plicatilis although CR-induced longevity has been demonstrated with well-synchronized rotifer samples in past and present studies. This protocol is probably useful for other invertebrate models, including the fruitfly Drosophila melanogaster and the nematode Caenorhabditis elegans, because maternal age effects have also been reported in these species.

  4. Systematic measurements of opacity dependence on temperature, density, and atomic number at stellar interior conditions

    NASA Astrophysics Data System (ADS)

    Bailey, James; Nagayama, T.; Loisel, G. P.; Rochau, G. A.; Blancard, C.; Colgan, J.; Cosse, Ph.; Faussurier, G.; Fontes, C. J.; Golovkin, I.; Hansen, S. B.; Iglesias, C. A.; Kilcrease, D. P.; Macfarlane, J. J.; Mancini, R. C.; Nahar, S. N.; Orban, C.; Pradhan, A. K.; Sherrill, M.; Wilson, B. G.; Pain, J. C.; Gilleron, F.

    2016-10-01

    Model predictions for iron opacity are notably different from measurements performed at conditions similar to the boundary between the solar radiation and convection zone. New measurements at the Sandia Z facility with chromium, iron, and nickel are providing a systematic study of how opacity changes with temperature, density, and atomic number. These measurements help further evaluate possibilities for experiment errors and help constrain hypotheses for opacity model refinements. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.

  5. Reliability Centered Maintenance - Methodologies

    NASA Technical Reports Server (NTRS)

    Kammerer, Catherine C.

    2009-01-01

    Journal article about Reliability Centered Maintenance (RCM) methodologies used by United Space Alliance, LLC (USA) in support of the Space Shuttle Program at Kennedy Space Center. The USA Reliability Centered Maintenance program differs from traditional RCM programs because various methodologies are utilized to take advantage of their respective strengths for each application. Based on operational experience, USA has customized the traditional RCM methodology into a streamlined lean logic path and has implemented the use of statistical tools to drive the process. USA RCM has integrated many of the L6S tools into both RCM methodologies. The tools utilized in the Measure, Analyze, and Improve phases of a Lean Six Sigma project lend themselves to application in the RCM process. All USA RCM methodologies meet the requirements defined in SAE JA 1011, Evaluation Criteria for Reliability-Centered Maintenance (RCM) Processes. The proposed article explores these methodologies.

  6. Investigation of Seepage Meter Measurements in Steady Flow and Wave Conditions.

    PubMed

    Russoniello, Christopher J; Michael, Holly A

    2015-01-01

    Water exchange between surface water and groundwater can modulate or generate ecologically important fluxes of solutes across the sediment-water interface. Seepage meters can directly measure fluid flux, but mechanical resistance and surface water dynamics may lead to inaccurate measurements. Tank experiments were conducted to determine the effects of mechanical resistance on measurement efficiency and the occurrence of directional asymmetry that could lead to erroneous net flux measurements. Seepage meter efficiency was high (averaging 93%) and consistent for inflow and outflow under steady flow conditions. Wave effects on seepage meter measurements were investigated in a wave flume. Seepage meter net flux measurements averaged 0.08 cm/h, greater than the expected net-zero flux but significantly less than the theoretical wave-driven unidirectional discharge or recharge. Calculations of unidirectional flux from pressure measurements (Darcy flux) and theory matched well for a ratio of wavelength to water depth less than 5, but not when this ratio was greater. Both were higher than seepage meter measurements of unidirectional flux made with one-way valves. Discharge averaged 23% greater than recharge in both seepage meter measurements and Darcy calculations of unidirectional flux. Removal of the collection bag reduced this net discharge. The presence of a seepage meter reduced the amplitude of pressure signals at the bed and resulted in a nearly uniform pressure distribution beneath the seepage meter. These results show that seepage meters may provide accurate measurements of both discharge and recharge under steady flow conditions, and they illustrate the potential measurement errors associated with dynamic wave environments.
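    The Darcy-flux calculation mentioned above follows directly from Darcy's law, q = K·(Δh/L). A minimal sketch, with a hypothetical hydraulic conductivity and head difference rather than values from the flume experiments:

```python
def darcy_flux(k_cm_per_h: float, head_diff_cm: float, length_cm: float) -> float:
    """Darcy flux q = K * (dh/dL) across a sediment column.

    k_cm_per_h : hydraulic conductivity (hypothetical value in the example)
    """
    return k_cm_per_h * head_diff_cm / length_cm

# Hypothetical sandy sediment: K = 36 cm/h, a 0.05 cm head difference
# across a 10 cm column gives q ~ 0.18 cm/h.
print(darcy_flux(36.0, 0.05, 10.0))
```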

  7. Methodological Issues in Mobile Computer-Supported Collaborative Learning (mCSCL): What Methods, What to Measure and When to Measure?

    ERIC Educational Resources Information Center

    Song, Yanjie

    2014-01-01

    This study aims to investigate (1) methods utilized in mobile computer-supported collaborative learning (mCSCL) research which focuses on studying, learning and collaboration mediated by mobile devices; (2) whether these methods have examined mCSCL effectively; (3) when the methods are administered; and (4) what methodological issues exist in…

  8. Quality-of-Life Measures in Children With Neurological Conditions: Pediatric Neuro-QOL

    PubMed Central

    Lai, Jin-Shei; Nowinski, Cindy; Victorson, David; Bode, Rita; Podrabsky, Tracy; McKinney, Natalie; Straube, Don; Holmes, Gregory L.; McDonald, Craig M.; Henricson, Erik; Abresch, R. Ted; Moy, Claudia S.; Cella, David

    2013-01-01

    Background A comprehensive, reliable, and valid measurement system is needed to monitor changes in children with neurological conditions who experience lifelong functional limitations. Objective This article describes the development and psychometric properties of the pediatric version of the Quality of Life in Neurological Disorders (Neuro-QOL) measurement system. Methods The pediatric Neuro-QOL consists of generic and targeted measures. Literature review, focus groups, individual interviews, cognitive interviews of children and consensus meetings were used to identify and finalize relevant domains and item content. Testing was conducted on 1018 children aged 10 to 17 years drawn from the US general population for generic measures and 171 similarly aged children with muscular dystrophy or epilepsy for targeted measures. Dimensionality was evaluated using factor analytic methods. For unidimensional domains, item parameters were estimated using item response theory models. Measures with acceptable fit indices were calibrated as item banks; those without acceptable fit indices were treated as summary scales. Results Ten measures were developed: 8 generic or targeted banks (anxiety, depression, anger, interaction with peers, fatigue, pain, applied cognition, and stigma) and 2 generic scales (upper and lower extremity function). The banks reliably (r > 0.90) measured 63.2% to 100% of the children tested. Conclusions The pediatric Neuro-QOL is a comprehensive measurement system with acceptable psychometric properties that could be used in computerized adaptive testing. The next step is to validate these measures in various clinical populations. PMID:21788436
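    The item response theory calibration mentioned above can be illustrated with a minimal two-parameter logistic (2PL) sketch; the item parameters below are hypothetical, and the actual Neuro-QOL banks use polytomous IRT models of this general family:

```python
import math

def item_prob_2pl(theta: float, a: float, b: float) -> float:
    """Two-parameter logistic IRT model: probability of endorsing an item
    given latent trait theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# At theta == b the endorsement probability is exactly 0.5,
# regardless of discrimination.
print(item_prob_2pl(0.0, 1.7, 0.0))  # -> 0.5
```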

  9. Removal Rates of Aqueous Cr(VI) by Zero-Valent Iron Measured Under Flow Conditions

    SciTech Connect

    Kaplan, D.I.

    2002-05-10

    Studies were undertaken to measure the rate of Cr(VI) removal from the aqueous phase by zero-valent iron, Fe(0), under flow conditions. The intent of this work was to generate removal rate coefficients that would be applicable to the Reactive Well Technology, a groundwater remediation technology that replaces the sand in a filter pack of a conventional well with a reactive material, such as Fe(0). The pseudo-first-order rate coefficients measured under flow conditions were comparable to those previously measured under batch conditions that had significantly greater ratios of solution volume to Fe(0) surface area. Between the range of 20 and 100 weight percent Fe(0), there was little measurable change in the reaction kinetics. Thus, it may be possible to include sand into the reactive filter packs in the event it is necessary to increase filter pack porosity or to decrease the accumulation of secondary reaction products that may lead to filter pack plugging. Background water chemistry had only marginal effects on reaction rate coefficients. The reaction rates measured in this study indicated that an Fe(0) filter pack could be used to lower Cr(VI) concentrations by several orders of magnitude in a once-through mode of operation of the Reactive Well Technology.
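    The pseudo-first-order kinetics referred to above imply C(t) = C0·exp(-k·t). A minimal sketch, using a hypothetical rate coefficient rather than the coefficients measured in the study:

```python
import math

def cr_vi_remaining(c0_mg_l: float, k_per_h: float, t_h: float) -> float:
    """Pseudo-first-order removal: C(t) = C0 * exp(-k * t).

    c0_mg_l : initial Cr(VI) concentration (mg/L)
    k_per_h : first-order rate coefficient (1/h); illustrative only,
              not a value from the study
    t_h     : contact time (h)
    """
    return c0_mg_l * math.exp(-k_per_h * t_h)

# With a hypothetical k = 2.3 h^-1, lowering the concentration by three
# orders of magnitude takes t = ln(1000)/k hours of contact.
t_needed = math.log(1000) / 2.3
print(round(t_needed, 1), "h")  # -> 3.0 h
```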

  10. Direct and High-Resolution Measurements of Retardation and Transport in Whole Rock Samples under Unsaturated Conditions

    NASA Astrophysics Data System (ADS)

    Hu, Q.; Wang, J.

    2001-12-01

    Evaluation of chemical sorption and transport is very important in investigations of contaminant remediation, risk assessment, and waste disposal (e.g., the potential high-level nuclear waste repository at Yucca Mountain, Nevada). Characterization of transport parameters for whole rock samples has typically been performed in batch systems with arbitrary grain sizes and a high water/rock ratio. Measurement of these parameters under conditions more representative of fractured rocks in situ provides a better understanding of the processes occurring there. The effective Kd approach has been commonly employed to quantify the extent of contaminant-medium-fluid interactions. Unrepresentative Kd values will lead to unrealistic assessments of contaminant transport. Experimentally determined Kd values are predominantly obtained from batch experiments under saturated and well-mixed conditions. Batch-sorption experiments can be problematic because: (1) saturated conditions with large water/rock ratios are not representative of the in situ vadose condition; (2) crushed rock samples are used, with the sample size (in the range of microns to sub-millimeters) chosen more or less arbitrarily and mainly for experimental convenience; and (3) for weakly sorbing contaminants, a batch-sorption approach can yield variable and even negative Kd values, because of the inherent methodology of calculating Kd values by subtracting two large numbers (i.e., the initial and final aqueous concentrations). In this work, we use an unsaturated transport-sorption approach to quantify the sorption behavior of contaminants and evaluate the applicability of the conventional batch-sorption approach in unsaturated rock. Transient experiments are designed to investigate water imbibition and chemical transport into the rock sample (with size in the centimeter range) by contacting one end of a sample with water containing chemical tracers. Capillary-driven imbibition transports chemicals farther away

  11. A wearable device for measuring eye dynamics in real-world conditions.

    PubMed

    Knopp, Simon; Bones, Philip; Weddell, Stephen; Innes, Carrie; Jones, Richard

    2013-01-01

    Drowsiness and lapses of responsiveness have the potential to cause fatalities in many occupations. One subsystem of a prototype device that aims to detect these lapses as they occur is described. A head-mounted camera measures several features of the eye that are known to correlate with drowsiness. The system was tested with eight combinations of eye colour, ambient lighting, and eye glasses to simulate typical real-world input conditions. A task was completed for each set of conditions to simulate a range of eye movements: saccades, tracking, and eye closure. Our image processing software correctly classified 99.3% of video frames as open/closed/partly closed, and the error rate was not affected by the combinations of input conditions. Most errors occurred during eyelid movement. The accuracy of the pupil localisation was also not influenced by input conditions, with the possible exception of one subject's glasses.

  12. Performance degradation of a typical twin engine commuter type aircraft in measured natural icing conditions

    NASA Technical Reports Server (NTRS)

    Ranaudo, R. J.; Mikkelsen, K. L.; Mcknight, R. C.; Perkins, P. J., Jr.

    1984-01-01

    The performance of an aircraft in various measured icing conditions was investigated. Icing parameters such as liquid water content, temperature, and cloud droplet sizes and distributions were measured continuously while in icing. Flight data were reduced to provide plots of the aircraft drag polars and lift curves (CL vs. alpha) for the measured 'iced' condition as referenced to the uniced aircraft. These data were also reduced to provide plots of thrust horsepower required vs. single engine power available to show how icing affects engine-out capability. It is found that performance degradation is primarily influenced by the amount and shape of the accumulated ice. Glaze icing caused the greatest aerodynamic performance penalties in terms of increased drag and reduction in lift, while aerodynamic penalties due to rime icing were significantly lower. Previously announced in STAR as N84-13173.

  13. Performance degradation of a typical twin engine commuter type aircraft in measured natural icing conditions

    NASA Technical Reports Server (NTRS)

    Ranaudo, R. J.; Mikkelsen, K. L.; Mcknight, R. C.; Perkins, P. J., Jr.

    1984-01-01

    The performance of an aircraft in various measured icing conditions was investigated. Icing parameters such as liquid water content, temperature, cloud droplet sizes and distributions were measured continuously while in icing. Flight data were reduced to provide plots of the aircraft drag polars and lift curves (CL vs. alpha) for the measured "iced" condition as referenced to the uniced aircraft. These data were also reduced to provide plots of thrust horsepower required vs. single engine power available to show how icing affects engine out capability. It is found that performance degradation is primarily influenced by the amount and shape of the accumulated ice. Glaze icing caused the greatest aerodynamic performance penalties in terms of increased drag and reduction in lift while aerodynamic penalties due to rime icing were significantly lower.

  14. [Measurement of chronic conditions in a single person as a mortality predictor].

    PubMed

    Rius, Cristina; Pérez, Glòria

    2006-12-01

    The presence of multiple chronic diseases in a single individual has become an increasing public health problem for two reasons: population aging and the growing prevalence of chronic conditions in the elderly. This article aims to review the various measures of chronic conditions used in different morbidity studies and to provide an example of their application. We present definitions and characteristics of distinct morbidity measures, as well as their advantages and disadvantages, and provide an example of their calculation using real data. The presence of multiple chronic diseases in a single individual can be measured in multiple ways. Thus, morbidity can be expressed as multi-morbidity, co-morbidity, or as a co-morbidity index. Researchers have to select the best option according to the research objectives, study design, information resources, and the main outcome variable selected.

  15. Measurements of Electrical and Thermal Conductivity of Iron Under Earth's Core Conditions

    NASA Astrophysics Data System (ADS)

    Ohta, K.; Kuwayama, Y.; Shimizu, K.; Yagi, T.; Hirose, K.; Ohishi, Y.

    2014-12-01

    Secular cooling of the Earth's core induces the convection of the conductive liquid outer core, which generates the geomagnetic field, and the growth of the solid inner core. Since iron is the primary component of the Earth's core, the electrical and thermal conductivity of iron in both solid and liquid states are key pieces of information for estimating the transport properties of the core. We performed electrical and thermal conductivity measurements on iron under core conditions in a laser-heated diamond anvil cell. Our electrical conductivity measurements on iron clearly show resistivity saturation phenomena in iron under high pressure and high temperature conditions, as predicted by a recent laboratory-based model for the core conductivity (Gomi et al., 2013). Direct measurements of the thermal diffusivity of iron have also been performed at high pressures using the pulsed light heating thermoreflectance technique, which enables us to confirm the validity of the Wiedemann-Franz law for a transition metal under high pressure.
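    The Wiedemann-Franz law invoked above relates the two conductivities as κ = L0·σ·T, with L0 the Lorenz number. A minimal sketch with illustrative, not measured, values:

```python
# Wiedemann-Franz check: kappa = L0 * sigma * T
L0 = 2.44e-8  # Sommerfeld value of the Lorenz number, W·Ω·K^-2

def thermal_conductivity(sigma_s_per_m: float, temp_k: float) -> float:
    """Electronic thermal conductivity implied by the Wiedemann-Franz law."""
    return L0 * sigma_s_per_m * temp_k

# Illustrative (not measured) values: sigma ~ 1e6 S/m at T ~ 4000 K
print(thermal_conductivity(1.0e6, 4000.0))  # ≈ 97.6 W/(m·K)
```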

  16. Use of acoustic velocity methodology and remote sensing techniques to measure unsteady flow on the lower Yazoo River in Mississippi

    USGS Publications Warehouse

    Turnipseed, D. Phil; Cooper, Lance M.; Davis, Angela A.

    1998-01-01

    Methodologies have been developed for computing continuous discharge during varied, non-uniform low and medium flows on the Yazoo River at the U.S. Geological Survey streamgage below Steele Bayou near Long Lake, Mississippi, using acoustic signal processing and conventional streamgaging techniques. Procedures were also developed to compute discharges during future high-flow events when the stream reach is subject to bi-directional and reverse flow caused by rising stages on the Mississippi River, using a combination of acoustic equipment and remote sensing technology. A description of the study area is presented. Selected results of these methods are presented for the period from March through September 1997.

  17. Estimation of the Joint Patient Condition Occurrence Frequencies from Operation Iraqi Freedom and Operation Enduring Freedom. Volume I: Development of Methodology

    DTIC Science & Technology

    2011-03-28

    Zouris, James; D'Souza, Edwin; Elkins, Trevor; Walker, Jay; Wing, Vern; Brown, Carrie

  18. Methodological Gravitism

    ERIC Educational Resources Information Center

    Zaman, Muhammad

    2011-01-01

    In this paper the author presents the case of the exchange marriage system to delineate a model of methodological gravitism. Such a model is not a deviation from or an alteration to existing qualitative research approaches. The author adopted a culturally specific methodology to investigate spouse selection in line with the Grounded Theory Method. This…

  19. Methodology for assessing the probability of corrosion in concrete structures on the basis of half-cell potential and concrete resistivity measurements.

    PubMed

    Sadowski, Lukasz

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. The potential Ecorr and concrete resistivity ρ at each point of the applied grid were measured. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures.
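    A combined classification of the kind proposed above can be sketched as follows; the potential thresholds loosely follow the commonly cited ASTM C876 bands (vs. Cu/CuSO4), and the resistivity threshold is illustrative; neither is taken from the paper:

```python
def corrosion_risk(e_corr_mv: float, resistivity_kohm_cm: float) -> str:
    """Combine half-cell potential and concrete resistivity into a
    qualitative corrosion-risk class (illustrative thresholds only)."""
    if e_corr_mv < -350:
        potential_risk = "high"
    elif e_corr_mv < -200:
        potential_risk = "uncertain"
    else:
        potential_risk = "low"
    # Low resistivity eases ionic current flow and raises the corrosion rate.
    resistivity_risk = "high" if resistivity_kohm_cm < 10 else "low"
    if potential_risk == "high" and resistivity_risk == "high":
        return "high"
    if potential_risk == "low" and resistivity_risk == "low":
        return "low"
    return "uncertain"

print(corrosion_risk(-420.0, 7.5))   # -> high
print(corrosion_risk(-150.0, 25.0))  # -> low
```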

  20. Methodology for Assessing the Probability of Corrosion in Concrete Structures on the Basis of Half-Cell Potential and Concrete Resistivity Measurements

    PubMed Central

    2013-01-01

    In recent years, the corrosion of steel reinforcement has become a major problem in the construction industry. Therefore, much attention has been given to developing methods of predicting the service life of reinforced concrete structures. The progress of corrosion cannot be visually assessed until a crack or a delamination appears. The corrosion process can be tracked using several electrochemical techniques. Most commonly the half-cell potential measurement technique is used for this purpose. However, it is generally accepted that it should be supplemented with other techniques. Hence, a methodology for assessing the probability of corrosion in concrete slabs by means of a combination of two methods, that is, the half-cell potential method and the concrete resistivity method, is proposed. An assessment of the probability of corrosion in reinforced concrete structures carried out using the proposed methodology is presented. 200 mm thick, 750 mm × 750 mm reinforced concrete slab specimens were investigated. Potential Ecorr and concrete resistivity ρ at each point of the applied grid were measured. The experimental results indicate that the proposed methodology can be successfully used to assess the probability of corrosion in concrete structures. PMID:23766706

  1. Towards the automatic identification of cloudiness condition by means of solar global irradiance measurements

    NASA Astrophysics Data System (ADS)

    Sanchez, G.; Serrano, A.; Cancillo, M. L.

    2010-09-01

    This study focuses on the design of an automatic algorithm for classifying the cloudiness condition based only on global irradiance measurements. Clouds are a major modulating factor for the Earth radiation budget. They attenuate the solar radiation and control the terrestrial radiation participating in the energy balance. Generally, cloudiness is a limiting factor for the solar radiation reaching the ground, contributing strongly to the Earth albedo. Additionally, it is the main cause of the high variability shown by the downward irradiance measured at ground level. Because cloudiness is a major source of attenuation and high-frequency variability in the solar radiation available for energy purposes in solar power plants, the characterization of the cloudiness condition is of great interest. This importance is even higher in Southern Europe, where very high irradiation values are reached during long periods of the year. Thus, several indexes have been proposed in the literature for characterizing the cloudiness condition of the sky. Among these indexes, those exclusively involving global irradiance are of special interest, since this variable is the most widely available measurement in most radiometric stations. Taking this into account, this study proposes an automatic algorithm for classifying the cloudiness condition of the sky into three categories: cloud-free, partially cloudy and overcast. For that aim, solar global irradiance was measured by a Kipp & Zonen CMP11 pyranometer installed on the terrace of the Physics building on the Badajoz campus of the University of Extremadura (Spain). Measurements were recorded on a one-minute basis for a study period extending from 23 November 2009 to 31 March 2010. The algorithm is based on the clearness index kt, which is calculated as the ratio between the solar global downward irradiance measured at the ground and the solar downward irradiance at the top of the atmosphere. Since partially cloudy conditions
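    The clearness index kt defined above, and a three-way classification of the sort proposed, can be sketched as follows; the category thresholds are illustrative assumptions, not the ones derived in the study, and the Earth-Sun distance correction is omitted:

```python
import math

SOLAR_CONSTANT = 1361.0  # W/m^2, approximate

def clearness_index(ghi_w_m2: float, zenith_deg: float) -> float:
    """kt = measured global horizontal irradiance divided by the
    extraterrestrial irradiance on a horizontal plane."""
    g0 = SOLAR_CONSTANT * math.cos(math.radians(zenith_deg))
    return ghi_w_m2 / g0

def sky_condition(kt: float) -> str:
    """Illustrative thresholds, not the ones derived in the study."""
    if kt >= 0.7:
        return "cloud-free"
    if kt >= 0.3:
        return "partially cloudy"
    return "overcast"

kt = clearness_index(850.0, 30.0)
print(round(kt, 2), sky_condition(kt))  # -> 0.72 cloud-free
```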

  2. Local mean age measurements for heating, cooling, and isothermal supply air conditions

    SciTech Connect

    Han, H.; Kuehn, T.H.; Kim, Y.

    1999-07-01

    The objective of this paper is to investigate the effect on room ventilation of thermal buoyancy caused by temperature differences between surfaces and the supply air. Spatial distributions of local mean age were obtained in a half-scale environmental chamber under well-controlled temperature conditions simulating isothermal ventilation, cooling, and heating. Air was supplied and returned through slots in the ceiling. Sulfur hexafluoride (SF{sub 6}) tracer gas concentration was measured by an electron capture gas chromatograph. Tracer gas concentration was measured at various points in the chamber versus time after a pulse injection was applied in the supply air duct. The maximum local mean age (LMA) was obtained near the center of a large recirculation zone for isothermal conditions. The results for cooling conditions showed a relatively uniform LMA distribution in the space compared to the isothermal conditions, as the room air was well mixed by the cold downdraft from the supply. However, there was a large variation in local air change indices in the space for the heating condition because of stable thermal stratification. Warm supply air could not penetrate into the lower half of the space but short-circuited to the exhaust duct. The model results in the present study can be converted to full-scale situations using similitude and can be used for validating computational fluid dynamics codes.
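
    The local mean age (LMA) at a measurement point follows from the pulse-response tracer concentration as its normalized first moment, LMA = ∫t·C(t)dt / ∫C(t)dt. A minimal sketch of that computation (the standard definition; the paper's exact data processing is not specified here):

```python
import numpy as np

def local_mean_age(times, concentrations):
    """Local mean age of air at a point from a pulse tracer-gas injection,
    estimated as the first moment of the concentration response:
    LMA = integral(t * C) / integral(C)."""
    t = np.asarray(times, dtype=float)
    c = np.asarray(concentrations, dtype=float)
    return np.trapz(t * c, t) / np.trapz(c, t)
```

    For a perfectly mixed room the concentration decays exponentially with the nominal time constant, and the estimate recovers that time constant.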

  3. Measurements of Elastic and Inelastic Properties under Simulated Earth's Mantle Conditions in Large Volume Apparatus

    NASA Astrophysics Data System (ADS)

    Mueller, H. J.

    2012-12-01

    The interpretation of highly resolved seismic data from Earth's deep interior requires measurements of the physical properties of Earth's materials under experimentally simulated mantle conditions. More than a decade ago, seismic tomography clearly showed that subducted crustal material can reach the core-mantle boundary under specific circumstances. That means there is no longer room for the assumption that deep mantle rocks might be much less complex than the deep crustal rocks known from exhumation processes. Considering this, geophysical high-pressure research faces the challenge of increasing pressure and sample volume at the same time, in order to perform in situ experiments with representative complex samples. High-performance multi-anvil devices using novel materials are the most promising technique for this exciting task. Recent large-volume presses provide sample volumes 3 to 7 orders of magnitude bigger than diamond anvil cells, far beyond transition-zone conditions. The sample size of several cubic millimeters allows elastic wave frequencies in the low to medium MHz range. Together with the small and even adjustable temperature gradients over the whole sample, this technique makes anisotropy and grain-boundary effects in complex systems accessible, in principle, for measurements of elastic and inelastic properties. The measurement of both elastic wave velocities is also unrestricted for opaque and encapsulated samples. The application of triple-mode transducers and the data-transfer-function technique for ultrasonic interferometry reduces the time for saving the data during the experiment to about a minute or less. That makes real transient measurements under non-equilibrium conditions possible. A further benefit is that both elastic wave velocities are measured exactly simultaneously. Ultrasonic interferometry necessarily requires in situ measurement of sample deformation by X-radiography. Time-resolved X-radiography makes in situ falling-sphere viscosimetry and even the

  4. Testing methodologies

    SciTech Connect

    Bender, M.A.

    1990-01-01

    Several methodologies are available for screening human populations for exposure to ionizing radiation. Of these, aberration frequency determined in peripheral blood lymphocytes is the best developed. Individual exposures to large doses can easily be quantitated, and population exposures at occupational levels can be detected. However, determining exposures at the very low doses anticipated from a low-level radioactive waste disposal site is more problematic. Aberrations occur spontaneously, without known cause. Exposure to radiation induces no new or novel types, but only increases their frequency. The limitations of chromosomal aberration dosimetry for detecting low-level radiation exposures lie mainly in the statistical "signal-to-noise" problem, the distribution of aberrations among cells and among individuals, and the possible induction of aberrations by other environmental, occupational, or medical exposures. However, certain features of the human peripheral lymphocyte-chromosomal aberration system make it useful in screening for certain types of exposures. Future technical developments may make chromosomal aberration dosimetry more useful for low-level radiation exposures. Other methods, measuring gene mutations or even minute changes at the DNA level, while presently less well-developed techniques, may eventually become even more practical and sensitive assays for human radiation exposure. 15 refs.

  5. Development of a measurement system for the online inspection of microstructured surfaces in harsh industrial conditions

    NASA Astrophysics Data System (ADS)

    Mueller, Thomas; Langmann, Benjamin; Reithmeier, Eduard

    2014-05-01

    Microscopic imaging techniques are usually applied for the inspection of microstructured surfaces. These techniques require clean measurement conditions. Soiling, e.g. dust or splashing liquids, can disturb the measurement process or even damage instruments. Since such soiling occurs in the majority of manufacturing processes, microscopic inspection usually must be carried out in a separate laboratory. We present a measurement system which allows for microscopic inspection and 3D reconstruction of microstructured surfaces in harsh industrial conditions. The measurement system also enables precise positioning, e.g. of a grinding wheel, with an accuracy of 5 μm. The main component of the measurement system is a CCD camera with a high-magnification telecentric lens. By means of this camera it is possible to measure even structures with dimensions in the range of 30 to 50 μm. The camera and the lens are integrated into a waterproof and dustproof enclosure. The inspection window of the enclosure has an air curtain which serves as a splash guard. Workpiece illumination is crucial in order to obtain good measurement results. The measuring system includes high-power LEDs which are integrated in a waterproof enclosure. The measurement system also includes a laser with a specially designed lens system to form an extremely narrow light section on the workpiece surface; a line width of 25 μm is achieved. This line and the camera with the high-magnification telecentric lens are used to perform a laser triangulation of the microstructured surface. This paper describes the system as well as the development and evaluation of the software for the automatic positioning of the workpiece and the automatic three-dimensional surface analysis.
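
    The laser-triangulation principle the system relies on can be illustrated with the basic height relation: a change in surface height displaces the projected laser line laterally in the image, and with a telecentric lens the pixel shift maps linearly to world units. The parameter values below are hypothetical, not the system's calibration:

```python
import math

def height_from_line_shift(pixel_shift, pixels_per_mm, triangulation_angle_deg):
    """Basic laser-triangulation relation: a surface height step of h displaces
    the projected line by h * tan(angle) on the surface, so
    height = lateral_shift / tan(angle). Sketch of the principle only; a real
    system calibration (lens distortion, line-center extraction) is more involved."""
    shift_mm = pixel_shift / pixels_per_mm
    return shift_mm / math.tan(math.radians(triangulation_angle_deg))
```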

  6. Universal health outcome measures for older persons with multiple chronic conditions

    PubMed Central

    2012-01-01

    Older adults with multiple chronic conditions (MCC) require considerable health services and complex care. As health status is affected along multiple dimensions by the persistence and progression of diseases and courses of treatments, well-validated universal outcome measures across diseases are needed for research, clinical care and administrative purposes. An expert panel meeting held by the National Institute on Aging (NIA) in September 2011 recommends that older persons with MCC complete a brief initial composite measure that includes general health, pain, fatigue, and physical, mental health and social role function, along with gait speed measurement. Suitable composite measures include the Short-form 8 (SF-8) and 36 (SF-36) and the Patient Reported Outcomes Measurement Information System 29-item Health Profile (PROMIS-29). Based on responses to items within the initial measure, short follow-on measures should be selectively targeted to the following areas: symptom burden, depression, anxiety and daily activities. Persons unable to walk a short distance for gait speed should be assessed using a physical function scale. Remaining gaps to be considered for measure development include disease burden, cognitive function and caregiver burden. Routine outcome assessment of MCC patients could facilitate system-based care improvement and clinical effectiveness research. PMID:23194184

  7. Reference place conditioning procedure with cocaine: increased sensitivity for measuring associatively motivated choice behavior in rats.

    PubMed

    Reichel, Carmela M; Wilkinson, Jamie L; Bevins, Rick A

    2010-07-01

    Place conditioning is widely used to study the conditioned rewarding effects of drugs. In the standard version, one reward (cocaine) is compared with no reward (saline). A modified variant of this task, the 'reference-conditioning' procedure, compares two potentially rewarding stimuli (a high vs. a low cocaine dose). There has been little research on the utility of this procedure. Experiment 1 used the standard protocol, with saline administered before confinement to the reference compartment of a place conditioning chamber. On alternate days, saline, 2.5, 5, 7.5, 10, or 20 mg/kg cocaine was administered before confinement to the opposite compartment. In experiments 2 and 3, reference-compartment saline was replaced with 5 and 7.5 mg/kg cocaine, respectively. Relative to saline, 7.5-20 mg/kg cocaine had comparable conditioned rewarding effects (i.e. a similar increase in time spent in the paired compartment). When cocaine replaced saline, there was competition at doses lower than 7.5 mg/kg. Rats that received 7.5 versus 2.5 mg/kg spent similar time in each compartment, indicating competition. Competition was not seen with 5 versus 20 mg/kg; preference was for the 20 mg/kg compartment. Experiment 4 showed that the competition at 2.5 mg/kg was not due to reward sensitization. The reference-conditioning procedure thus increases the sensitivity for measuring associatively motivated choice behavior.

  8. Measurement of stress distributions in truck tyre contact patch in real rolling conditions

    NASA Astrophysics Data System (ADS)

    Anghelache, Gabriel; Moisescu, Raluca

    2012-12-01

    Stress distributions in three orthogonal directions have been measured across the contact patch of truck tyres using a complex measuring system containing a transducer assembly with 30 sensing elements placed in the road surface. The measurements were performed in a straight line, in real rolling conditions. Software applications for calibration, data acquisition, and data processing were developed. The influence of changes in inflation pressure and rolling speed on the shapes and sizes of the truck tyre contact patch is shown. The shapes and magnitudes of the normal, longitudinal, and lateral stress distributions, measured at low speed, are presented and discussed. The effect of wheel toe-in and camber on the stress distribution results was observed. The paper highlights the impact of the longitudinal tread ribs on the shear stress distributions. The ratios of stress distributions in the truck tyre contact patch are computed and discussed.

  9. System for Measuring Conditional Amplitude, Phase, or Time Distributions of Pulsating Phenomena

    PubMed Central

    Van Brunt, Richard J.; Cernyar, Eric W.

    1992-01-01

    A detailed description is given of an electronic stochastic analyzer for use with direct “real-time” measurements of the conditional distributions needed for a complete stochastic characterization of pulsating phenomena that can be represented as random point processes. The measurement system described here is designed to reveal and quantify effects of pulse-to-pulse or phase-to-phase memory propagation. The unraveling of memory effects is required so that the physical basis for observed statistical properties of pulsating phenomena can be understood. The individual unique circuit components that comprise the system and the combinations of these components for various measurements, are thoroughly documented. The system has been applied to the measurement of pulsating partial discharges generated by applying alternating or constant voltage to a discharge gap. Examples are shown of data obtained for conditional and unconditional amplitude, time interval, and phase-of-occurrence distributions of partial-discharge pulses. The results unequivocally show the existence of significant memory effects as indicated, for example, by the observations that the most probable amplitudes and phases-of-occurrence of discharge pulses depend on the amplitudes and/or phases of the preceding pulses. Sources of error and fundamental limitations of the present measurement approach are analyzed. Possible extensions of the method are also discussed. PMID:28053450

  10. Structure light telecentric stereoscopic vision 3D measurement system based on Scheimpflug condition

    NASA Astrophysics Data System (ADS)

    Mei, Qing; Gao, Jian; Lin, Hui; Chen, Yun; Yunbo, He; Wang, Wei; Zhang, Guanjin; Chen, Xin

    2016-11-01

    We designed a new three-dimensional (3D) measurement system for micro components: a structure light telecentric stereoscopic vision 3D measurement system based on the Scheimpflug condition. This system creatively combines the telecentric imaging model and the Scheimpflug condition on the basis of structure light stereoscopic vision, offering a wide measurement range, high accuracy, fast speed, and low cost. The system measurement range is 20 mm×13 mm×6 mm, the lateral resolution is 20 μm, and the practical vertical resolution reaches 2.6 μm, which is close to the theoretical value of 2 μm and well satisfies the 3D measurement needs of micro components such as semiconductor devices, photoelectronic elements, and micro-electromechanical systems. In this paper, we first introduce the principle and structure of the system and then present the system calibration and 3D reconstruction. We then present an experiment performed for the 3D reconstruction of the surface topography of a wafer, followed by a discussion. Finally, the conclusions are presented.

  11. Note: Electrical resolution during conductive atomic force microscopy measurements under different environmental conditions and contact forces

    SciTech Connect

    Lanza, M.; Porti, M.; Nafria, M.; Aymerich, X.; Whittaker, E.; Hamilton, B.

    2010-10-15

    Conductive atomic force microscopy experiments on gate dielectrics in air, nitrogen, and UHV have been compared to evaluate the impact of the environment on topography and electrical measurements. In current images, an increase of the lateral resolution and a reduction of the conductivity were observed in N{sub 2} and, especially, in UHV (where current depends also on the contact force). Both effects were related to the reduction/elimination of the water layer between the tip and the sample in N{sub 2}/UHV. Therefore, since current measurements are very sensitive to environmental conditions, these factors must be taken into consideration when comparisons between several experiments are performed.

  12. Electrode size and boundary condition independent measurement of the effective piezoelectric coefficient of thin films

    SciTech Connect

    Stewart, M.; Lepadatu, S.; McCartney, L. N.; Cain, M. G.; Wright, L.; Crain, J.; Newns, D. M.; Martyna, G. J.

    2015-02-01

    The determination of the piezoelectric coefficient of thin films using interferometry is hindered by bending contributions. Using finite element analysis (FEA) simulations, we show that the Lefki and Dormans approximations using either single or double-beam measurements cannot be used with finite top electrode sizes. We introduce a novel method for characterising piezoelectric thin films which uses a differential measurement over the discontinuity at the electrode edge as an internal reference, thereby eliminating bending contributions. This step height is shown to be electrode size and boundary condition independent. An analytical expression is derived which gives good agreement with FEA predictions of the step height.

  13. Method and apparatus for measuring coupled flow, transport, and reaction processes under liquid unsaturated flow conditions

    DOEpatents

    McGrail, Bernard P.; Martin, Paul F.; Lindenmeier, Clark W.

    1999-01-01

    The present invention is a method and apparatus for measuring coupled flow, transport and reaction processes under liquid unsaturated flow conditions. The method and apparatus of the present invention permit distinguishing individual precipitation events and their effect on dissolution behavior isolated to the specific event. The present invention is especially useful for dynamically measuring hydraulic parameters when a chemical reaction occurs between a particulate material and either liquid or gas (e.g. air) or both, causing precipitation that changes the pore structure of the test material.

  14. Single cell stiffness measurement at various humidity conditions by nanomanipulation of a nano-needle.

    PubMed

    Shen, Yajing; Nakajima, Masahiro; Yang, Zhan; Tajima, Hirotaka; Najdovski, Zoran; Homma, Michio; Fukuda, Toshio

    2013-04-12

    This paper presents a method for single cell stiffness measurement based on a nano-needle and nanomanipulation. The nano-needle with a buffering beam was fabricated from an atomic force microscope cantilever by the focused ion beam etching technique. Wild type yeast cells (W303) were prepared and placed on the sample stage inside an environmental scanning electron microscope (ESEM) chamber. The nanomanipulator actuated the nano-needle to press against a single yeast cell. As a result, the deformation of the cell and nano-needle was observed by the ESEM system in real-time. Finally, the stiffness of the single cell was determined based on this deformation information. To reveal the relationship between the cell stiffness and the environmental humidity conditions, the cell stiffness was measured at three different humidity conditions, i.e. 40, 70 and 100%, respectively. The results show that the stiffness of a single cell is reduced with increasing humidity.
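
    One way to read the measurement principle: the buffering beam (of known stiffness) and the cell deform under the same contact force, so the cell stiffness follows from the two deformations observed in the ESEM images. A sketch under that series-spring assumption (the paper's exact analysis may include further corrections):

```python
def cell_stiffness(beam_stiffness, beam_deflection, cell_deformation):
    """Estimate single-cell stiffness from a nano-needle indentation.
    The buffering beam and the cell act as springs in series, so the same
    force passes through both: F = k_beam * d_beam = k_cell * d_cell.
    Inputs in SI units (N/m and m); returns stiffness in N/m."""
    force = beam_stiffness * beam_deflection   # contact force, N
    return force / cell_deformation            # cell stiffness, N/m
```

    For example, a 2 N/m beam deflected by 0.5 μm while the cell deforms by 1.0 μm implies a cell stiffness of 1 N/m.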

  15. Single cell stiffness measurement at various humidity conditions by nanomanipulation of a nano-needle

    NASA Astrophysics Data System (ADS)

    Shen, Yajing; Nakajima, Masahiro; Yang, Zhan; Tajima, Hirotaka; Najdovski, Zoran; Homma, Michio; Fukuda, Toshio

    2013-04-01

    This paper presents a method for single cell stiffness measurement based on a nano-needle and nanomanipulation. The nano-needle with a buffering beam was fabricated from an atomic force microscope cantilever by the focused ion beam etching technique. Wild type yeast cells (W303) were prepared and placed on the sample stage inside an environmental scanning electron microscope (ESEM) chamber. The nanomanipulator actuated the nano-needle to press against a single yeast cell. As a result, the deformation of the cell and nano-needle was observed by the ESEM system in real-time. Finally, the stiffness of the single cell was determined based on this deformation information. To reveal the relationship between the cell stiffness and the environmental humidity conditions, the cell stiffness was measured at three different humidity conditions, i.e. 40, 70 and 100%, respectively. The results show that the stiffness of a single cell is reduced with increasing humidity.

  16. Measuring enzyme activities under standardized in vivo-like conditions for systems biology.

    PubMed

    van Eunen, Karen; Bouwman, Jildau; Daran-Lapujade, Pascale; Postmus, Jarne; Canelas, André B; Mensonides, Femke I C; Orij, Rick; Tuzun, Isil; van den Brink, Joost; Smits, Gertien J; van Gulik, Walter M; Brul, Stanley; Heijnen, Joseph J; de Winde, Johannes H; de Mattos, M Joost Teixeira; Kettner, Carsten; Nielsen, Jens; Westerhoff, Hans V; Bakker, Barbara M

    2010-02-01

    Realistic quantitative models require data from many laboratories. Therefore, standardization of experimental systems and assay conditions is crucial. Moreover, standards should be representative of the in vivo conditions. However, most often, enzyme-kinetic parameters are measured under assay conditions that yield the maximum activity of each enzyme. In practice, this means that the kinetic parameters of different enzymes are measured in different buffers, at different pH values, with different ionic strengths, etc. In a joint effort of the Dutch Vertical Genomics Consortium, the European Yeast Systems Biology Network and the Standards for Reporting Enzymology Data Commission, we have developed a single assay medium for determining enzyme-kinetic parameters in yeast. The medium is as close as possible to the in vivo situation for the yeast Saccharomyces cerevisiae, and at the same time is experimentally feasible. The in vivo conditions were estimated for S. cerevisiae strain CEN.PK113-7D grown in aerobic glucose-limited chemostat cultures at an extracellular pH of 5.0 and a specific growth rate of 0.1 h(-1). The cytosolic pH and concentrations of calcium, sodium, potassium, phosphorus, sulfur and magnesium were determined. On the basis of these data and literature data, we propose a defined in vivo-like medium containing 300 mM potassium, 50 mM phosphate, 245 mM glutamate, 20 mM sodium, 2 mM free magnesium and 0.5 mM calcium, at a pH of 6.8. The V(max) values of the glycolytic and fermentative enzymes of S. cerevisiae were measured in the new medium. For some enzymes, the results deviated conspicuously from those of assays done under enzyme-specific, optimal conditions.

  17. Measurement of exhaust emissions from two J-58 engines at simulated supersonic cruise flight conditions

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1976-01-01

    Emissions of total oxides of nitrogen, unburned hydrocarbons, carbon monoxide, and carbon dioxide from two J-58 afterburning turbojet engines at simulated high-altitude flight conditions are reported. Test conditions included flight speeds from Mach 2 to 3 at altitudes from 16 to 23 km. For each flight condition, exhaust measurements were made for four or five power levels from maximum power without afterburning through maximum afterburning. The data show that exhaust emissions vary with flight speed, altitude, power level, and radial position across the exhaust. Oxides of nitrogen (NOX) emissions decreased with increasing altitude, and increased with increasing flight speed. NOX emission indices with afterburning were less than half the value without afterburning. Carbon monoxide and hydrocarbon emissions increased with increasing altitude, and decreased with increasing flight speed. Emissions of these species were substantially higher with afterburning than without.

  18. Zero-valent iron removal rates of aqueous Cr(VI) measured under flow conditions

    SciTech Connect

    Kaplan, Daniel I.; Gilmore, Tyler J.

    2004-06-30

    The rates of Cr(VI) removal from the aqueous phase by zero-valent iron, Fe(0), were measured under flow conditions. The intent of this work was to generate removal rate coefficients applicable to the Reactive Well Technology, a groundwater remediation technology that replaces the sand in the filter pack of a conventional well with a reactive material, such as Fe(0).

  19. Laser spectroscopic real time measurements of methanogenic activity under simulated Martian subsurface analog conditions

    NASA Astrophysics Data System (ADS)

    Schirmack, Janosch; Böhm, Michael; Brauer, Chris; Löhmannsröben, Hans-Gerd; de Vera, Jean-Pierre; Möhlmann, Diedrich; Wagner, Dirk

    2014-08-01

    On Earth, chemolithoautotrophic and anaerobic microorganisms such as methanogenic archaea are regarded as model organisms for possible subsurface life on Mars. For this reason, the methanogenic strain Methanosarcina soligelidi (formerly called Methanosarcina spec. SMA-21), isolated from permafrost-affected soil in northeast Siberia, has been tested under Martian thermo-physical conditions. In previous studies under simulated Martian conditions, high survival rates of these microorganisms were observed. In our study we present a method to measure methane production as a first attempt to study the metabolic activity of methanogenic archaea under simulated conditions approaching those of Mars-like environments. Determining methanogenic activity requires a measurement technique capable of recording the produced methane concentration with high precision and high temporal resolution. Although there are several methods to detect methane, only a few fulfill all the requirements for work within simulated extraterrestrial environments. We chose laser spectroscopy, a non-destructive technique that measures the methane concentration without sample extraction and can be run continuously. In our simulation, we detected methane production at temperatures down to -5 °C, which would be found on Mars either temporarily in the shallow subsurface or continually in the deep subsurface. The pressure of 50 kPa used in our experiments corresponds to the expected pressure in the Martian near subsurface. Our new device proved to be fully functional, and the results indicate that the possible existence of methanogenic archaea in Martian subsurface habitats cannot be ruled out.

  20. Laboratory emissivity measurements of the plagioclase solid solution series under varying environmental conditions

    NASA Astrophysics Data System (ADS)

    Donaldson Hanna, K. L.; Thomas, I. R.; Bowles, N. E.; Greenhagen, B. T.; Pieters, C. M.; Mustard, J. F.; Jackson, C. R. M.; Wyatt, M. B.

    2012-11-01

    New laboratory thermal infrared emissivity measurements of the plagioclase solid solution series over the 1700-400 cm-1 (6-25 μm) spectral range are presented. Thermal infrared (TIR) spectral changes for fine-particulate samples (0-25 μm) are characterized for the first time under different laboratory environmental conditions: ambient (terrestrial-like), half-vacuum (Mars-like), vacuum, and vacuum with cooled chamber (lunar-like). Under all environmental conditions the Christiansen Feature (CF) is observed to vary in a systematic way, with the Na-rich end-member (albite) having a CF position at the highest wave number (shortest wavelength) and the Ca-rich end-member (anorthite) having a CF position at the lowest wave number (longest wavelength). As pressure decreases to <10-3 mbar, four observations are made: (1) the CF position shifts to higher wave numbers, (2) the spectral contrast of the CF increases relative to the reststrahlen bands (RB), (3) the spectral contrast of the RB in the ~1200-900 cm-1 spectral range decreases while the spectral contrast of the RB in the ~800-400 cm-1 spectral range either increases or remains the same, and (4) the transparency feature (TF) disappears. A relationship between the wavelength position of the CF measured under simulated lunar conditions and plagioclase composition (An#) is developed. Although its exact form may evolve with additional data, this linear relationship should be applied to current and future TIR data sets of the Moon. Our new spectral measurements demonstrate how sensitive thermal infrared emissivity spectra of plagioclase feldspars are to the environmental conditions under which they are measured and provide important constraints for interpreting current and future thermal infrared data sets.
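
    The CF-composition relationship is linear in form. A sketch of fitting and then inverting such a relation (the coefficients and data below are synthetic placeholders, not the published calibration):

```python
import numpy as np

def fit_cf_an_relation(an_numbers, cf_positions):
    """Least-squares linear fit CF = a * An# + b, the form of the
    composition relationship developed in the study. The coefficients are
    whatever the input data imply, not the published values."""
    a, b = np.polyfit(an_numbers, cf_positions, 1)
    return a, b

def predict_an(cf_position, a, b):
    """Invert the fit to estimate plagioclase composition (An#) from a
    measured CF wavelength position, e.g. from lunar TIR data."""
    return (cf_position - b) / a
```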

  1. Assessing Granger non-causality using nonparametric measure of conditional independence.

    PubMed

    Seth, Sohan; Príncipe, José C

    2012-01-01

    In recent years, Granger causality has become a popular method in a variety of research areas including engineering, neuroscience, and economics. However, despite its simplicity and wide applicability, linear Granger causality is an insufficient tool for analyzing exotic stochastic processes, such as processes involving non-linear dynamics or causality in higher-order statistics. In order to analyze such processes more reliably, a different approach to Granger causality has become increasingly popular. This new approach employs conditional independence as a tool to discover Granger non-causality, without any assumption on the underlying stochastic process. This paper discusses the concept of discovering Granger non-causality using measures of conditional independence, and proposes a novel measure of conditional independence. In brief, the proposed approach estimates the conditional distribution function through a kernel-based least-squares regression approach. The paper also explores the strengths and weaknesses of the proposed method compared to other available methods, and provides a detailed comparison using a variety of synthetic data sets.
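
    The building block described above is a kernel-based least-squares regression of the indicator 1{Y <= y} on X, which yields an estimate of the conditional distribution function F(y | x). A minimal sketch, assuming a Gaussian kernel and plain ridge regularization (the paper's exact estimator may differ):

```python
import numpy as np

def conditional_cdf_estimate(X, Y, x_query, y_grid, bandwidth=1.0, ridge=1e-3):
    """Kernel ridge regression estimate of F(y | x) = E[1{Y <= y} | X = x]:
    regress the indicator 1{Y <= y} on X for each y in y_grid, then evaluate
    the fitted function at x_query. Returns one value per grid point."""
    X = np.asarray(X, float).reshape(len(X), -1)
    Y = np.asarray(Y, float)
    n = len(X)
    # Gaussian (RBF) kernel matrix over the training inputs
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2 * bandwidth ** 2))
    # Kernel vector between the query point and the training inputs
    k_query = np.exp(-np.sum((X - np.asarray(x_query, float)) ** 2, axis=-1)
                     / (2 * bandwidth ** 2))
    # One ridge-regression solve per y-grid column of indicator targets
    targets = (Y[:, None] <= np.asarray(y_grid, float)[None, :]).astype(float)
    alpha = np.linalg.solve(K + ridge * n * np.eye(n), targets)
    return np.clip(k_query @ alpha, 0.0, 1.0)
```

    A conditional-independence measure would then compare such estimates with and without the candidate causal variable included in X.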

  2. Ground-truthing evoked potential measurements against behavioral conditioning in the goldfish, Carassius auratus

    NASA Astrophysics Data System (ADS)

    Hill, Randy J.; Mann, David A.

    2005-04-01

    Auditory evoked potentials (AEPs) have become commonly used to measure hearing thresholds in fish. However, it is uncertain how well AEP thresholds match behavioral hearing thresholds, and what effect variability in electrode placement has on AEPs. In the first experiment, the effect of electrode placement on AEPs was determined by simultaneously recording AEPs from four locations on each of 12 goldfish, Carassius auratus. In the second experiment, the hearing sensitivity of 12 goldfish was measured using both classical conditioning and AEPs in the same setup. For behavioral conditioning, the fish were trained to reduce their respiration rate in response to a 5 s sound presentation paired with a brief shock. A modified staircase method was used in which 20 reversals were completed for each frequency, and threshold levels were determined by averaging the last 12 reversals. Once the behavioral audiogram was completed, the AEP measurements were made without moving the fish. The recording electrode was located subdermally over the medulla, and was inserted prior to classical conditioning to minimize handling of the animal. The same sound stimuli (pulsed tones) were presented and the resultant evoked potentials were recorded for 1000-6000 averages. AEP input-output functions were then compared to the behavioral audiogram to compare techniques for estimating behavioral thresholds from AEP data.
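
    The staircase threshold rule described above (20 reversals per frequency, average of the last 12) can be sketched as follows; the reversal-detection details are assumptions, since the abstract does not spell them out:

```python
def staircase_threshold(levels, n_reversals=20, n_average=12):
    """Estimate a threshold from a staircase track of presented levels:
    detect reversals (points where the step direction changes), stop after
    n_reversals of them, and average the levels at the last n_average
    reversals. Sketch only; a real procedure also manages step sizes."""
    reversals = []
    for i in range(1, len(levels) - 1):
        prev_step = levels[i] - levels[i - 1]
        next_step = levels[i + 1] - levels[i]
        if prev_step * next_step < 0:      # direction changed: a reversal
            reversals.append(levels[i])
        if len(reversals) == n_reversals:
            break
    last = reversals[-n_average:]
    return sum(last) / len(last)
```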

  3. The uncertainty of mass discharge measurements using pumping methods under simplified conditions.

    PubMed

    Chen, Xiaosong; Brooks, Michael C; Wood, A Lynn

    2014-01-01

    Mass discharge measurements at contaminated sites have been used to assist with site management decisions, and can be divided into two broad categories: point-scale measurement techniques and pumping methods. Pumping methods can be sub-divided based on the pumping procedures used into sequential, concurrent, and tandem circulating well categories. Recent work has investigated the uncertainty of point measurement methods, and to a lesser extent, pumping methods. However, the focus of this study was a direct comparison of uncertainty between the various pumping method approaches that have been used, as well as a comparison of uncertainty between pumping and point measurement methods. Mass discharge measurement error was investigated using a Monte Carlo modeling analysis as a function of the contaminant plume position and width, and as a function of the pumping conditions used in the different pumping tests. Results indicated that for the conditions investigated, uncertainty in mass discharge estimates based on pumping methods was 1.3 to 16 times less than point measurement method uncertainty, and that a sequential pumping approach resulted in 5 to 12 times less uncertainty than the concurrent pumping or tandem circulating well approaches. Uncertainty was also investigated as a function of the plume width relative to well spacing. For a given well spacing, uncertainty decreased for all methods as the plume width increased, and comparable levels of uncertainty between point measurement and pumping methods were obtained when three wells were distributed across the plume. A hybrid pumping technique in which alternate wells were pumped concurrently in two separate campaigns yielded similar uncertainty to the sequential pumping approach. This suggests that the hybrid approach can be used to capitalize on the advantages of sequential pumping yet minimize the overall test duration.
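
    A toy version of the Monte Carlo error analysis for a point-measurement transect, assuming a rectangular unit-concentration plume with a random center position (the study's model is considerably richer, and also varies pumping conditions):

```python
import random

def mass_discharge_error(well_xs, spacing, plume_width, darcy_flux,
                         transect_length, n_trials=10000, seed=0):
    """Monte Carlo sketch of mass-discharge estimation error: a rectangular
    plume of unit concentration and random center is sampled at the well
    locations, and the estimate sum(C_i) * q * spacing is compared with the
    true discharge C * q * plume_width. Returns the relative errors."""
    rng = random.Random(seed)
    true_md = darcy_flux * plume_width              # unit concentration
    errors = []
    for _ in range(n_trials):
        center = rng.uniform(0.0, transect_length)
        half = plume_width / 2.0
        est = sum(darcy_flux * spacing
                  for x in well_xs if abs(x - center) <= half)
        errors.append((est - true_md) / true_md)
    return errors
```

    Even this toy model reproduces one of the qualitative findings: with wells densely covering the transect the error stays small except when the plume extends past the transect ends.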

  4. Measurements of the Centimeter-Wave Properties of Gases under Simulated Conditions for Deep Jovian Atmospheres

    NASA Astrophysics Data System (ADS)

    Karpowicz, Bryan M.; Steffes, P. G.; Hanley, T. R.

    2008-09-01

In recent years, an improved laboratory measurement system was developed at Georgia Tech to measure opacities of trace constituents in a hydrogen/helium atmosphere at wavelengths from 1.1 to 20 cm, pressures up to 12 bars, and temperatures from 185 to 550 K. However, the Juno mission will include a microwave radiometer instrument (MWR) capable of sensing centimeter-wavelength emission from the very deep atmosphere of Jupiter at pressures exceeding 100 bars (Janssen et al., 2005, Icarus 171, 447-453). In order to accurately retrieve the abundances of microwave-absorbing constituents such as water vapor and ammonia from measurements of the centimeter-wave emission from these deep layers, precise knowledge of the absorptive properties of these gases under deep atmospheric conditions is necessary. To date, only a very limited number of measurements have been made of the microwave absorption of water vapor or ammonia at such high pressures, and none of these measurements were conducted at wavelengths greater than 3.3 cm. The system developed enables the measurement of water vapor and ammonia opacity in a hydrogen/helium atmosphere at wavelengths from 5 to 22 cm, pressures up to 100 bars, and temperatures from 295 to 600 K, corresponding to the deep atmosphere of Jupiter. The first high-pressure measurements of the 5-22 cm opacity and refractivity of water vapor broadened by hydrogen and helium under Jovian conditions will be presented. This work was supported by NASA Contract NNM06AA75C from the Marshall Space Flight Center supporting the Juno Mission Science Team, under Subcontract 699054X from the Southwest Research Institute.

  5. Brief inhalation method to measure cerebral oxygen extraction fraction with PET: Accuracy determination under pathologic conditions

    SciTech Connect

Altman, D.I.; Lich, L.L.; Powers, W.J.

    1991-09-01

The initial validation of the brief inhalation method to measure cerebral oxygen extraction fraction (OEF) with positron emission tomography (PET) was performed in non-human primates with predominantly normal cerebral oxygen metabolism (CMRO2). Sensitivity analysis by computer simulation, however, indicated that this method may be subject to increasing error as CMRO2 decreases. The accuracy of the method under pathologic conditions of reduced CMRO2 has not been determined. Since reduced CMRO2 values are observed frequently in newborn infants and in regions of ischemia and infarction in adults, we determined the accuracy of the brief inhalation method in non-human primates by comparing OEF measured with PET to OEF measured by arteriovenous oxygen difference (A-VO2) under pathologic conditions of reduced CMRO2 (0.27-2.68 ml 100g-1 min-1). A regression equation of OEF (PET) = 1.07 × OEF (A-VO2) + 0.017 (r = 0.99, n = 12) was obtained. The absolute error in oxygen extraction measured with PET was small (mean 0.03 ± 0.04, range -0.03 to 0.12) and was independent of cerebral blood flow, cerebral blood volume, CMRO2, and OEF. The percent error was higher (19 ± 37%), particularly when OEF was below 0.15. These data indicate that the brief inhalation method can be used for measurement of cerebral oxygen extraction and cerebral oxygen metabolism under pathologic conditions of reduced cerebral oxygen metabolism, with these limitations borne in mind.
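
    The reported regression and error statistics can be reproduced in miniature. The sketch below uses hypothetical paired OEF values (generated to resemble the reported fit; not the study's data) to fit the PET-versus-A-VO2 line and show why percent error inflates at low OEF:

    ```python
    import numpy as np

    # Hypothetical paired OEF measurements (illustration only, not the study data):
    # PET values generated from the reported fit plus small mock scatter.
    oef_avo2 = np.array([0.10, 0.20, 0.30, 0.40, 0.50, 0.60])
    scatter = np.array([0.01, -0.02, 0.015, -0.01, 0.02, -0.005])
    oef_pet = 1.07 * oef_avo2 + 0.017 + scatter

    # Least-squares line: OEF(PET) = a * OEF(A-VO2) + b
    a, b = np.polyfit(oef_avo2, oef_pet, 1)

    abs_err = oef_pet - oef_avo2              # absolute error vs. reference
    pct_err = 100.0 * abs_err / oef_avo2      # dividing by a small OEF inflates %
    print(f"slope = {a:.2f}, intercept = {b:.3f}")
    print(f"largest % error at OEF = {oef_avo2[np.argmax(np.abs(pct_err))]}")
    ```

    With a near-unity slope and a small positive intercept, the same absolute error translates into a much larger percent error at the low end of the OEF range, matching the abstract's caveat about OEF below 0.15.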

  6. ZERO-VALENT IRON REMOVAL RATES OF AQUEOUS Cr(VI) MEASURED UNDER FLOW CONDITIONS

    SciTech Connect

    Kaplan, Daniel I.; Gilmore, Tyler J.

    2004-06-01

The rate of Cr(VI) removal from the aqueous phase by zero-valent iron, Fe(0), was measured under flow conditions. The intent of this work was to generate removal rate coefficients applicable to the Reactive Well Technology, a groundwater remediation technology that replaces the sand in the filter pack of a conventional well with a reactive material such as Fe(0). Dissolved Cr(VI) concentration, dissolved O2 concentration, and Eh data indicated that Cr(VI) removal from the aqueous phase was mass-transfer limited. All pseudo-first-order regression fits to the data were significant (P≤0.05); however, they did not capture many of the salient aspects of the data, including that the removal rate often decreased as contact time increased. As such, the application of these rate coefficients to predict long-term Cr(VI) removal was compromised. The rate coefficients measured under flow conditions were comparable to those measured previously under batch conditions with significantly greater solution:solid ratios. Over the range of 20 to 100 wt-% Fe(0) in the column, there was little measurable change in the reaction kinetics. Thus, it may be possible to include sand in the reactive filter packs in the event it is necessary to increase filter pack porosity or to decrease the accumulation of secondary reaction products that may lead to filter pack plugging. Background water chemistry (0.2 M NaHCO3, distilled water, and a carbonate-dominated groundwater) had only marginal, if any, effects on reaction rate coefficients. The reaction rates measured in this study indicated that an Fe(0) filter pack could be used to lower Cr(VI) concentrations by several orders of magnitude in a once-through mode of operation of the Reactive Well Technology.
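
    The abstract's caveat, that a single pseudo-first-order coefficient misses a removal rate that slows with contact time, can be illustrated with synthetic data (hypothetical two-rate decay; not the study's measurements):

    ```python
    import numpy as np

    # Synthetic Cr(VI) profile whose removal slows with contact time:
    # fast early uptake plus a slower late component (illustration only).
    t = np.linspace(0.0, 60.0, 13)                       # contact time (min)
    c0 = 1.0                                             # inlet Cr(VI) (mg/L)
    c = c0 * (0.7 * np.exp(-0.20 * t) + 0.3 * np.exp(-0.02 * t))

    # Single pseudo-first-order fit: ln(C/C0) = -k * t + const
    slope, intercept = np.polyfit(t, np.log(c / c0), 1)
    k_fit = -slope

    # Apparent rate coefficients over early and late windows differ markedly,
    # which is why one k cannot represent the whole removal history.
    k_early = -np.polyfit(t[:5], np.log(c[:5] / c0), 1)[0]
    k_late = -np.polyfit(t[-5:], np.log(c[-5:] / c0), 1)[0]
    print(f"overall k = {k_fit:.3f}/min, early k = {k_early:.3f}, late k = {k_late:.3f}")
    ```

    The single fitted k lands between the early and late apparent rates, so extrapolating it forward over-predicts long-term removal, consistent with the compromised long-term predictions noted above.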

  7. Investigation of Rayleigh-Taylor turbulence and mixing using direct numerical simulation with experimentally-measured initial conditions. I. Comparison to experimental data

    SciTech Connect

    Mueschke, N; Schilling, O

    2008-07-23

A 1152 × 760 × 1280 direct numerical simulation (DNS) using initial conditions, geometry, and physical parameters chosen to approximate those of a transitional, small Atwood number Rayleigh-Taylor mixing experiment [Mueschke, Andrews and Schilling, J. Fluid Mech. 567, 27 (2006)] is presented. The density and velocity fluctuations measured just off of the splitter plate in this buoyantly unstable water channel experiment were parameterized to provide physically-realistic, anisotropic initial conditions for the DNS. The methodology for parameterizing the measured data and numerically implementing the resulting perturbation spectra in the simulation is discussed in detail. The DNS model of the experiment is then validated by comparing quantities from the simulation to experimental measurements. In particular, large-scale quantities (such as the bubble front penetration h_b and the mixing layer growth parameter α_b), higher-order statistics (such as velocity variances and the molecular mixing parameter θ), and vertical velocity and density variance spectra from the DNS are shown to be in favorable agreement with the experimental data. Differences between the quantities obtained from the DNS and from experimental measurements are related to limitations in the dynamic range of scales resolved in the simulation and other idealizations of the simulation model. This work demonstrates that a parameterization of experimentally-measured initial conditions can yield simulation data that quantitatively agrees well with experimentally-measured low- and higher-order statistics in a Rayleigh-Taylor mixing layer. This study also provides the resolution and initial conditions implementation requirements needed to simulate a physical Rayleigh-Taylor mixing experiment. In Part II [Mueschke and Schilling, Phys. Fluids (2008)], other quantities not measured in the experiment are obtained from the DNS and discussed, such as the integral- and Taylor-scale Reynolds numbers.
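
    One common way to implement a measured perturbation spectrum as a simulation initial condition is to assign random phases to the spectral amplitudes and inverse-transform to physical space. The 1D sketch below uses a hypothetical band-limited spectrum (not the experiment's measured data) purely to illustrate that pattern:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Hypothetical 1D interface perturbation synthesized from an amplitude
    # spectrum via random phases + inverse FFT (illustration only).
    N = 256                                        # grid points across the interface
    L = 0.1                                        # domain width (m)
    k = np.fft.rfftfreq(N, d=L / N) * 2 * np.pi    # angular wavenumbers (rad/m)

    # Band-limited amplitude spectrum peaked at intermediate wavenumbers.
    A = np.where((k > 200) & (k < 2000), 1.0 / (1.0 + (k / 800.0) ** 2), 0.0)

    phases = rng.uniform(0.0, 2.0 * np.pi, size=k.size)
    h_hat = A * np.exp(1j * phases)
    h_hat[0] = 0.0                                 # enforce a zero-mean perturbation
    h = np.fft.irfft(h_hat, n=N)                   # interface perturbation h(x)
    h *= 1e-4 / np.sqrt(np.mean(h ** 2))           # rescale rms amplitude to 0.1 mm

    print(f"rms = {np.sqrt(np.mean(h**2)):.2e} m, mean = {np.mean(h):.1e}")
    ```

    In the actual DNS methodology the spectral amplitudes come from measured density and velocity fluctuation spectra rather than an assumed analytic form, and the construction is multi-dimensional and anisotropic.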

  8. A methodology to determine margins by EPID measurements of patient setup variation and motion as applied to immobilization devices

    SciTech Connect

    Prisciandaro, Joann I.; Frechette, Christina M.; Herman, Michael G.; Brown, Paul D.; Garces, Yolanda I.; Foote, Robert L.

    2004-11-01

Assessment of clinic- and site-specific margins is essential for the effective use of three-dimensional and intensity modulated radiation therapy. An electronic portal imaging device (EPID) based methodology is introduced which allows individual and population-based CTV-to-PTV margins to be determined and compared with traditional margins prescribed during treatment. This method was applied to a patient cohort receiving external beam head and neck radiotherapy under an IRB-approved protocol. Although the full study involved the use of an EPID-based method to assess the impact of (1) simulation technique, (2) immobilization, and (3) surgical intervention on inter- and intrafraction variations of individual and population-based CTV-to-PTV margins, the focus of this paper is on the technique. As an illustration, the methodology is utilized to examine the influence of two immobilization devices, the UON™ thermoplastic mask and the Type-S™ head/neck shoulder immobilization system, on margins. Daily through-port images were acquired for selected fields for each patient with an EPID. To analyze these images, simulation films or digitally reconstructed radiographs (DRRs) were imported into the EPID software. Up to five anatomical landmarks were identified and outlined by the clinician, and up to three of these structures were matched for each reference image. Once the individual-based errors were quantified, the patient results were grouped into populations by matched anatomical structures and immobilization device. The variation within each subgroup was quantified by calculating the systematic and random errors (Σ_sub and σ_sub). Individual patient margins were approximated as 1.65 times the individual-based random error and ranged from 1.1 to 6.3 mm (A-P) and 1.1 to 12.3 mm (S-I) for fields matched on skull and cervical structures, and 1.7 to 10.2 mm (L-R) and 2.0 to 13.8 mm (S-I) for supraclavicular fields. Population-based margins
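
    The error decomposition used here follows a standard pattern: pool each patient's daily displacements, take the spread of patient means as the population systematic error and the RMS of per-patient standard deviations as the population random error, then form individual margins as 1.65 times the individual random error, as stated above. A minimal sketch with made-up displacement values (hypothetical patients and numbers; not the study's data):

    ```python
    import numpy as np

    # Hypothetical daily setup displacements (mm) for three patients along one
    # axis, e.g. measured from EPID images -- for illustration only.
    shifts = {
        "pt1": np.array([1.2, 0.8, -0.5, 1.0, 0.3]),
        "pt2": np.array([-0.4, -1.1, 0.2, -0.9, -0.6]),
        "pt3": np.array([2.0, 1.5, 2.4, 1.1, 1.8]),
    }

    means = np.array([s.mean() for s in shifts.values()])
    sds = np.array([s.std(ddof=1) for s in shifts.values()])

    Sigma = means.std(ddof=1)            # population systematic error (spread of means)
    sigma = np.sqrt(np.mean(sds ** 2))   # population random error (RMS of patient SDs)

    # Individual margin as in the abstract: 1.65 x the patient's random error.
    margins = 1.65 * sds
    for pid, m in zip(shifts, margins):
        print(f"{pid}: margin = {m:.1f} mm")
    print(f"Sigma = {Sigma:.2f} mm, sigma = {sigma:.2f} mm")
    ```

    The same computation is repeated per axis (A-P, S-I, L-R) and per matched structure to produce the axis-specific margin ranges quoted in the abstract.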

  9. Measurement of acoustic dissipation in an experimental combustor under representative conditions

    NASA Astrophysics Data System (ADS)

    Webster, Samuel; Hardi, Justin; Oschwald, Michael

    2017-03-01

The present paper is concerned with the experimental estimation of acoustic dissipation under conditions representative of those in a rocket engine combustion chamber. Specifically, the influence of operating and injection conditions on acoustic dissipation is considered. Two experimental and analytical techniques are applied to measure and then compare dissipation rates of the first longitudinal and first transverse acoustic modes in an experimental combustion chamber. Comparison between non-combustion and combustion tests showed that combustion chamber damping of the first transverse mode was far greater under combustion conditions. A lesser difference between non-combustion and combustion tests was found for the first longitudinal mode, although the damping rates during combustion tests were still higher. A strong relationship between primary injection velocity and dissipation rate was observed, with lower injection velocities leading to decreased damping rates of the first transverse mode. Furthermore, an increased film cooling injection rate decreased the dissipation rate. The significant influence of representative conditions, specifically injection conditions, on dissipation rate has strong implications both for combustor design and for experimental approaches aimed at quantifying dissipation in rocket combustion chambers.
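
    One common way to quantify modal damping is to fit an exponential decay rate to the envelope of a ringing pressure signal after an excitation ends. The sketch below does this for a synthetic damped mode (hypothetical frequency, decay rate, and sampling values; not the paper's measurement chain):

    ```python
    import numpy as np

    # Hypothetical damped-mode pressure signal: estimate the dissipation (decay)
    # rate from the envelope of a decaying oscillation (illustration only).
    fs = 10000.0                          # sample rate (Hz)
    t = np.arange(0.0, 0.2, 1.0 / fs)
    f_mode = 500.0                        # mode frequency (Hz)
    alpha_true = 40.0                     # decay rate (1/s)
    p = np.exp(-alpha_true * t) * np.cos(2.0 * np.pi * f_mode * t)

    # Envelope from cycle maxima: one peak per oscillation period.
    period = int(fs / f_mode)
    peaks = np.array([p[i:i + period].max()
                      for i in range(0, len(p) - period, period)])
    t_pk = np.arange(len(peaks)) * period / fs

    # Exponential fit to the envelope: ln(peak) = -alpha * t + const
    alpha_est = -np.polyfit(t_pk, np.log(peaks), 1)[0]
    print(f"estimated decay rate: {alpha_est:.1f} 1/s (true {alpha_true})")
    ```

    Comparing decay rates estimated this way across operating points (combustion vs. non-combustion, different injection velocities and film cooling rates) is how the influence of conditions on dissipation can be quantified.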