Science.gov

Sample records for quantified results show

  1. Quantifying causal emergence shows that macro can beat micro

    PubMed Central

    Hoel, Erik P.; Albantakis, Larissa; Tononi, Giulio

    2013-01-01

    Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system’s mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system’s possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence—the gain in EI when moving from a micro to a macro level of analysis. PMID:24248356
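
    A minimal sketch of the effective information (EI) calculation described above, for a small discrete system given by a transition probability matrix (TPM): EI is the mutual information between causes and effects when the current state is set to the maximum-entropy (uniform) distribution. The example TPM and coarse-graining are illustrative constructions, not taken from the paper, but they reproduce the qualitative result that a macro description can have higher EI than the micro description it supervenes on.

      import numpy as np

      def effective_information(tpm):
          """EI of a system with tpm[i, j] = P(next state j | current state i):
          mutual information between a uniform (max-entropy) intervention
          distribution over current states and the resulting future states."""
          tpm = np.asarray(tpm, dtype=float)
          n = tpm.shape[0]
          p_cause = np.full(n, 1.0 / n)        # uniform interventional distribution
          p_effect = p_cause @ tpm             # resulting distribution over future states
          ei = 0.0
          for i in range(n):
              for j in range(n):
                  joint = p_cause[i] * tpm[i, j]
                  if joint > 0:
                      ei += joint * np.log2(joint / (p_cause[i] * p_effect[j]))
          return ei

      # Illustrative micro system: states 0-2 map randomly among themselves, state 3 is fixed.
      micro = np.array([[1/3, 1/3, 1/3, 0.0],
                        [1/3, 1/3, 1/3, 0.0],
                        [1/3, 1/3, 1/3, 0.0],
                        [0.0, 0.0, 0.0, 1.0]])
      # Macro coarse-graining: {0, 1, 2} -> A, {3} -> B gives a deterministic 2-state system.
      macro = np.array([[1.0, 0.0],
                        [0.0, 1.0]])
      print(effective_information(micro))   # ~0.81 bits
      print(effective_information(macro))   # 1.00 bit -> macro EI exceeds micro EI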

  2. Quantifying disability: data, methods and results.

    PubMed Central

    Murray, C. J.; Lopez, A. D.

    1994-01-01

    Conventional methods for collecting, analysing and disseminating data and information on disability in populations have relied on cross-sectional censuses and surveys which measure prevalence in a given period. While this may be relevant for defining the extent and demographic pattern of disabilities in a population, and thus indicating the need for rehabilitative services, prevention requires detailed information on the underlying diseases and injuries that cause disabilities. The Global Burden of Disease methodology described in this paper provides a mechanism for quantifying the health consequences of the years of life lived with disabilities by first estimating the age-sex-specific incidence rates of underlying conditions, and then mapping these to a single disability index which collectively reflects the probability of progressing to a disability, the duration of life lived with the disability, and the approximate severity of the disability in terms of activity restriction. Detailed estimates of the number of disability-adjusted life years (DALYs) lived are provided in this paper, for eight geographical regions. The results should be useful to those concerned with planning health services for the disabled and, more particularly, with determining policies to prevent the underlying conditions which give rise to serious disabling sequelae. PMID:8062403
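
    As a rough illustration of the incidence-based approach sketched above, the core arithmetic multiplies incident cases by the expected duration of disability and a severity (disability) weight to obtain years lived with disability (YLD); DALYs add years of life lost to premature mortality. The figures below are invented for illustration, and the discounting and age-weighting used in the original Global Burden of Disease calculations are omitted.

      # Toy YLD calculation (illustrative numbers only, not GBD estimates).
      conditions = [
          # (name, incident cases per year, mean duration of disability in years, disability weight 0..1)
          ("condition A", 12_000, 4.0, 0.20),
          ("condition B", 3_500, 25.0, 0.43),
      ]

      total_yld = 0.0
      for name, incidence, duration, weight in conditions:
          yld = incidence * duration * weight
          total_yld += yld
          print(f"{name}: {yld:,.0f} years lived with disability")

      print(f"total YLD: {total_yld:,.0f}")
      # DALY = YLL (years of life lost) + YLD; YLL would be added analogously.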

  3. Different methods to quantify Listeria monocytogenes biofilms cells showed different profile in their viability

    PubMed Central

    Winkelströter, Lizziane Kretli; Martinis, Elaine C.P. De

    2015-01-01

    Listeria monocytogenes is a foodborne pathogen able to adhere and to form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by culture method, microscopy and Quantitative Real Time Polymerase Chain Reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate count, microscopic count with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient to kill sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective to kill attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, the use of CTC counts was the best method to count viable cells in biofilms. PMID:26221112

  4. Different methods to quantify Listeria monocytogenes biofilms cells showed different profile in their viability.

    PubMed

    Winkelströter, Lizziane Kretli; De Martinis, Elaine C P

    2015-03-01

    Listeria monocytogenes is a foodborne pathogen able to adhere and to form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by culture method, microscopy and Quantitative Real Time Polymerase Chain Reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate count, microscopic count with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient to kill sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective to kill attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, the use of CTC counts was the best method to count viable cells in biofilms. PMID:26221112

  5. 14. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF INADEQUATE TAMPING. THE SIZE OF THE GRANITE AGGREGATE USED IN THE DAM'S CONCRETE IS CLEARLY SHOWN. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  6. 13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  7. Quantifying IOHDR brachytherapy underdosage resulting from an incomplete scatter environment

    SciTech Connect

    Raina, Sanjay; Avadhani, Jaiteerth S.; Oh, Moonseong; Malhotra, Harish K.; Jaggernauth, Wainwright; Kuettel, Michael R.; Podgorsak, Matthew B. E-mail: matthew.podgorsak@roswellpark.org

    2005-04-01

    Purpose: Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. Dosimetric errors from this assumption are negligible. However, in intraoperative high-dose-rate brachytherapy (IOHDR) where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in underdosage during treatment. This study was carried out to investigate the magnitude of this underdosage. Methods: IOHDR treatment geometries were simulated using a solid water phantom beneath an applicator with varying amounts of bolus material on the top and sides of the applicator to account for missing tissue. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm²), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Ionization measurements were made with a liquid-filled ionization chamber linear array with a dedicated electrometer and data acquisition system. Results: Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. Furthermore, for a given prescription depth, measurements in planes parallel to an applicator at distances up to 4.0 cm from the applicator plane showed that the dose delivery error is equal in magnitude throughout the target volume. Conclusion: This study demonstrates the magnitude of underdosage in IOHDR treatments delivered in a geometry that may not result in a full scatter environment around the applicator. This implies that the target volume and, specifically, the prescription depth (tumor bed) may get a dose significantly less than prescribed. It might be clinically relevant to correct for this inaccuracy.

  8. Quantifying Fine Root Carbon Inputs To Soil: Results From Combining Radiocarbon And Traditional Methodologies.

    NASA Astrophysics Data System (ADS)

    Gaudinski, J. B.; Trumbore, S. E.; Dawson, T.; Torn, M.; Pregitzer, K.; Joslin, J. D.

    2002-12-01

    Estimates of high belowground net primary productivity (50% or more) in forest ecosystems are often based on assumptions that almost all fine roots (< 2 mm in diameter) live and die within one year. Recent radiocarbon (14C) measurements of fine root cellulose in three eastern temperate forests of the United States show that at least a portion of fine roots are living for more than 8 years (Gaudinski et al. 2001) and that fine root lifespans likely vary as a function of both diameter and position on the root branch system. New data from investigations under way in several different temperate forests further support the idea of large variations in root lifespans with radiocarbon-derived ages ranging from approximately one year to several years. In forests where both mini-rhizotron and 14C lifespan estimates have been made, the two techniques agree well when the 14C sampling is made on the same types of roots viewed by mini-rhizotron cameras (i.e. first and second order roots; the most distal and newest roots on the root branching system), and the 14C signature of new root growth is known. We have quantified the signature of new tree roots by taking advantage of locally-elevated 14C at Oak Ridge Tennessee, which shows that carbon making up new roots was photosynthesized approximately 1.5 years prior to new root growth. Position on the root branching system shows a correlation with age, with ages up to 7 years for 4th order roots of red maple. The method by which roots are sampled also affects the 14C-estimated age, with total fine root population, sampled via soil cores, showing longer lifespans relative to roots sampled by position on the root branch system (when similar diameter classes are compared). Overall, the implication of our studies is that assumptions of turnover times of 1 year result in underestimates of the true lifespan of a large portion of fine root biomass in temperate forests. This suggests that future calculations of belowground net primary productivity should take variation in fine root lifespan into account. Reference: Gaudinski JB, Trumbore SE, Davidson EA, Cook A, Richter D (2001) The age of fine-root carbon in three forests of the eastern United States measured by radiocarbon, Oecologia 129:420-429.

  9. Quantifying the Variability in Damage of Structures as a Result of Geohazards

    NASA Astrophysics Data System (ADS)

    Latchman, S.; Simic, M.

    2012-04-01

    Uncertainty is ever present in catastrophe modelling and has recently become a popular topic of discussion in insurance media. Each element of a catastrophe model has associated uncertainties, whether they be aleatory, epistemic or other. One method of quantifying the uncertainty specific to each peril is to estimate the variation in damage for a given intensity of peril. For example, the proportion of total cost to repair a structure resulting from an earthquake in the regions of the affected area with peak ground acceleration of 0.65g may range from 10% to 100%. This variation in damage for a given intensity needs to be quantified by catastrophe models. Using insurance claims data, we investigate how damage varies for a given peril (e.g. earthquake, tropical cyclone, inland flood) as a function of peril intensity. Probability distributions (including those with a fat tail, i.e. with large probability of high damage) are fitted to the claims data to test a number of peril-specific hypotheses, for example that a very large earthquake will cause less variation in losses than a mid-sized earthquake. We also compare the relationship between damage variability and peril intensity for a number of different geohazards. For example, we compare the uncertainty bands for large earthquakes with large hurricanes in an attempt to assess whether loss estimates are more uncertain for hurricanes, say, compared to earthquakes. The results of this study represent advances in the appreciation of uncertainty in catastrophe models and of how losses to a notional portfolio and notional event could vary according to the empirical probability distributions found.
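
    A minimal sketch of the kind of analysis described above: fit a fat-tailed distribution to damage ratios within a peril-intensity bin and compare the spread across bins. The data here are synthetic stand-ins (real claims data are not public), and the lognormal choice and parameters are assumptions for illustration.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Synthetic damage ratios (repair cost / total value) at two intensity bins.
      damage_moderate = np.clip(rng.lognormal(mean=np.log(0.10), sigma=0.9, size=500), 0, 1)
      damage_severe = np.clip(rng.lognormal(mean=np.log(0.45), sigma=0.5, size=500), 0, 1)

      for label, sample in [("moderate shaking", damage_moderate), ("severe shaking", damage_severe)]:
          shape, loc, scale = stats.lognorm.fit(sample, floc=0)   # fit a fat-tailed distribution
          cv = sample.std() / sample.mean()                       # coefficient of variation as a spread measure
          print(f"{label}: sigma={shape:.2f}, median={scale:.2f}, CV={cv:.2f}")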

  10. Quantifying the offensive sequences that result in goals in elite futsal matches.

    PubMed

    Sarmento, Hugo; Bradley, Paul; Anguera, M Teresa; Polido, Tiago; Resende, Rui; Campaniço, Jorge

    2016-04-01

    The aim of this study was to quantify the type of offensive sequences that result in goals in elite futsal. Thirty competitive games in the Spanish Primera Division de Sala were analysed using computerised notation analysis for patterns of play that resulted in goals. More goals were scored in positional attack (42%) and from set pieces (27%) compared to other activities. The number of defence to offense "transitions" (n = 45) and the start of offensive plays due to the rules of the game (n = 45) were the most common type of sequences that resulted in goals compared to other patterns of play. The central offensive zonal areas were the most common for shots on goal, with 73% of all goals scored from these areas of the pitch compared to defensive and wide zones. The foot was the main part of the body involved in scoring (n = 114). T-pattern analysis of offensive sequences revealed regular patterns of play, which are common in goal scoring opportunities in futsal and are typical movement patterns in this sport. The data demonstrate common offensive sequences and movement patterns related to goals in elite futsal and this could provide important information for the development of physical and technical training drills that replicate important game situations. PMID:26183125

  11. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study-comparing removal of viruses and bacterial indicators in MBR and conventional plants-it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small volume grab samples were collected for direct-plating analyses for bacterial indicators and coliphage. After filtration, the viruses were eluted from the filter and further concentrated. The final concentrated sample volume (FCSV) was used for enteric virus analysis by use of two methods-cell culture and a molecular method, polymerase chain reaction (PCR). Quantitative PCR (qPCR) for DNA viruses and quantitative reverse-transcriptase PCR (qRT-PCR) for RNA viruses were used in this study. To support data interpretations, the assay limit of detection (ALOD) was set for each virus assay and used to determine sample reporting limits (SRLs). For qPCR and qRT-PCR the ALOD was an estimated value because it was not established according to established method detection limit procedures. The SRLs were different for each sample because effective sample volumes (the volume of the original sample that was actually used in each analysis) were different for each sample. Effective sample volumes were much less than the original sample volumes because of reductions from processing steps and (or) from when dilutions were made to minimize the effects from PCR-inhibiting substances. Codes were used to further qualify the virus data and indicate the level of uncertainty associated with each measurement. Quality-control samples were used to support data interpretations. 
Field and laboratory blanks for bacteria, coliphage, and enteric viruses were all below detection, indicating that it was unlikely that samples were contaminated from equipment or processing procedures. The absolute value log differences (AVLDs) between concurrent replicate pairs were calculated to identify the variability associated with each measurement. For bacterial indicators and coliphage, the AVLD results indicated that concentrations <10 colony-forming units or plaque-forming units per 100 mL can differ between replicates by as much as 1 log, whereas higher concentrations can differ by as much as 0.3 log. The AVLD results for viruses indicated that differences between replicates can be as great as 1.2 log genomic copies per liter, regardless of the concentration of virus. Relatively large differences in molecular results for viruses between replicate pairs were likely due to lack of precision for samples with small effective volumes. Concentrations of E. coli, fecal coliforms, enterococci, and somatic and F-specific coliphage in post-secondary and post-tertiary samples in conventional plants were higher than those in post-MBR samples. In post-MBR and post-secondary samples, concentrations of somatic coliphage were higher than F-specific coliphage. In post-disinfection samples from two MBR plants (the third MBR plant had operational issues) and the ultraviolet conventional plant, concentrations for all bacterial indicators and coliphage were near or below detection; from the chlorine conventional plant, concentrations in post-disinfection samples were in the single or double digits. All of the plants met the National Pollutant Discharge Elimination System required effluent limits established for fecal coliforms. Norovirus GII and hepatitis A virus were not detected in any samples, and rotavirus was detected in one sample but could not be quantified. Adenovirus was found in 100 percent, enterovirus in over one-half, and norovirus GI in about one-half of post-preliminary wastewater samples. Adenovirus and enterovirus were detected throughout the treatment processes, and norovirus GI was detected less often than the other two enteric viruses. Culturable viruses were detected in post-preliminary samples and in only two post-treatment samples from the plant with operational issues.
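
    The absolute value log difference (AVLD) used above to assess replicate variability is simple to reproduce; a short sketch with hypothetical replicate concentrations follows.

      import math

      def avld(c1, c2):
          """Absolute value of the log10 difference between two replicate concentrations."""
          return abs(math.log10(c1) - math.log10(c2))

      # Hypothetical replicate pairs (e.g., CFU/100 mL for bacteria or genomic copies/L for viruses).
      for c1, c2 in [(8, 3), (450, 260), (1.2e3, 2.1e3)]:
          print(f"{c1:g} vs {c2:g}: AVLD = {avld(c1, c2):.2f} log units")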

  12. Comb-Push Ultrasound Shear Elastography of Breast Masses: Initial Results Show Promise

    PubMed Central

    Song, Pengfei; Fazzio, Robert T.; Pruthi, Sandhya; Whaley, Dana H.; Chen, Shigao; Fatemi, Mostafa

    2015-01-01

    Purpose or Objective To evaluate the performance of Comb-push Ultrasound Shear Elastography (CUSE) for classification of breast masses. Materials and Methods CUSE is an ultrasound-based quantitative two-dimensional shear wave elasticity imaging technique, which utilizes multiple laterally distributed acoustic radiation force (ARF) beams to simultaneously excite the tissue and induce shear waves. Female patients who were categorized as having suspicious breast masses underwent CUSE evaluations prior to biopsy. An elasticity estimate within the breast mass was obtained from the CUSE shear wave speed map. Elasticity estimates of various types of benign and malignant masses were compared with biopsy results. Results Fifty-four female patients with suspicious breast masses from our ongoing study are presented. Our cohort included 31 malignant and 23 benign breast masses. Our results indicate that the mean shear wave speed was significantly higher in malignant masses (6 ± 1.58 m/s) in comparison to benign masses (3.65 ± 1.36 m/s). Therefore, the stiffness of the mass quantified by the Young's modulus is significantly higher in malignant masses. According to the receiver operating characteristic curve (ROC), the optimal cut-off value of 83 kPa yields 87.10% sensitivity, 82.61% specificity, and 0.88 for the area under the curve (AUC). Conclusion CUSE has the potential for clinical utility as a quantitative diagnostic imaging tool adjunct to B-mode ultrasound for differentiation of malignant and benign breast masses. PMID:25774978
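
    A back-of-the-envelope check relating the reported shear wave speeds to the 83 kPa cut-off: shear wave elastography commonly assumes E ≈ 3ρc² for nearly incompressible soft tissue with density ρ ≈ 1000 kg/m³. Those assumptions are mine, not stated in the record, but under them the cut-off corresponds to a speed lying between the benign and malignant group means.

      RHO = 1000.0  # assumed tissue density, kg/m^3

      def youngs_modulus_kpa(c):
          # E ≈ 3 * mu = 3 * rho * c^2 for nearly incompressible soft tissue
          return 3.0 * RHO * c ** 2 / 1000.0

      for c in (3.65, 5.26, 6.0):   # benign mean, approximate cut-off speed, malignant mean
          print(f"c = {c:.2f} m/s -> E ≈ {youngs_modulus_kpa(c):.0f} kPa")
      # 5.26 m/s gives roughly 83 kPa, i.e. the ROC cut-off lies between the two group means.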

  13. Showing Value in Newborn Screening: Challenges in Quantifying the Effectiveness and Cost-Effectiveness of Early Detection of Phenylketonuria and Cystic Fibrosis

    PubMed Central

    Grosse, Scott D.

    2015-01-01

    Decision makers sometimes request information on the cost savings, cost-effectiveness, or cost-benefit of public health programs. In practice, quantifying the health and economic benefits of population-level screening programs such as newborn screening (NBS) is challenging. It requires that one specify the frequencies of health outcomes and events, such as hospitalizations, for a cohort of children with a given condition under two different scenarios—with or without NBS. Such analyses also assume that everything else, including treatments, is the same between groups. Lack of comparable data for representative screened and unscreened cohorts that are exposed to the same treatments following diagnosis can result in either under- or over-statement of differences. Accordingly, the benefits of early detection may be understated or overstated. This paper illustrates these common problems through a review of past economic evaluations of screening for two historically significant conditions, phenylketonuria and cystic fibrosis. In both examples qualitative judgments about the value of prompt identification and early treatment to an affected child were more influential than specific numerical estimates of lives or costs saved. PMID:26702401

  14. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2007-01-01

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered

  15. Gun Shows and Gun Violence: Fatally Flawed Study Yields Misleading Results

    PubMed Central

    Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A.

    2010-01-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled "The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas" outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors' prior research. The study should not be used as evidence in formulating gun policy. PMID:20724672

  16. Preliminary Results In Quantifying The Climatic Impact Forcing Factors Around 3 Ma Ago

    NASA Astrophysics Data System (ADS)

    Fluteau, F.; Ramstein, G.; Duringer, P.; Schuster, M.; Tiercelin, J. J.

    What exactly is the control of climate changes on the development of the Hominids? Is it possible to quantify such changes? And which are the forcing factors that create these changes? We use here a General Circulation Model to investigate the climate sensitivity of 3 different forcing factors: the uplift of the East African Rift, the extent (more than twenty times the present-day surface) of the Chad Lake and, ultimately, with a coupled ocean-atmosphere GCM, the effect of Indonesian throughflow changes. To achieve these goals, we need a multidisciplinary group to assess the evolution of the Rift and the extent of the Lake. We prescribe these different boundary conditions to the GCM and use a biome model to assess the vegetation changes. In this presentation we will only focus on the impacts of the Rift uplift and the Chad Lake on atmospheric circulation and the monsoon, and on their environmental consequences in terms of vegetation changes.

  17. Quantifying the effects of root reinforcing on slope stability: results of the first tests with a new shearing device

    NASA Astrophysics Data System (ADS)

    Rickli, Christian; Graf, Frank

    2013-04-01

    The role of vegetation in preventing shallow soil mass movements such as shallow landslides and soil erosion is generally well recognized and, correspondingly, soil bioengineering on steep slopes has been widely used in practice. However, the precise effectiveness of vegetation regarding slope stability is still difficult to determine. A recently designed inclinable shearing device for large scale vegetated soil samples allows quantitative evaluation of the additional shear strength provided by roots of specific plant species. In the following we describe the results of a first series of shear strength experiments with this apparatus focusing on root reinforcement of White Alder (Alnus incana) and Silver Birch (Betula pendula) in large soil block samples (500 x 500 x 400 mm). The specimens with partly saturated soil of a maximum grain size of 10 mm were slowly sheared at an inclination of 35° with low normal stresses of 3.2 kPa accounting for natural conditions on a typical slope prone to mass movements. Measurements during the experiments involved shear stress, shear displacement and normal displacement, all recorded with high accuracy. In addition, dry weights of sprout and roots were measured to quantify plant growth of the planted specimens. The results with the new apparatus indicate a considerable reinforcement of the soil due to plant roots, i.e. maximum shear stresses of the vegetated specimens were substantially higher compared to non-vegetated soil and the additional strength was a function of species and growth. Soil samples with seedlings planted five months prior to the test yielded an important increase in maximum shear stress of 250% for White Alder and 240% for Silver Birch compared to non-vegetated soil. The results of a second test series with 12-month-old plants showed even clearer enhancements in maximum shear stress (390% for Alder and 230% for Birch). Overall the results of this first series of shear strength experiments with the new apparatus using planted and unplanted soil specimens confirm the importance of plants in soil stabilisation. Furthermore, they demonstrate the suitability of the apparatus to quantify the additional strength of specific vegetation as a function of species and growth under clearly defined conditions in the laboratory.

  18. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests to measure performance characteristics rather than coating physical properties. These evaluation results are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: cyclic (scab) corrosion and the gravelometer (chip) test. An experimental design was done to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and reproducible than the current evaluations. Coating characteristics that previously had no way to be expressed can now be described and measured. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, the primer, and even to the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, observing cure, defect formation or elimination over time, etc.

  19. Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening

    Cancer.gov

    Thirteen-year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination.

  20. Results From Mars Show Electrostatic Charging of the Mars Pathfinder Sojourner Rover

    NASA Technical Reports Server (NTRS)

    Kolecki, Joseph C.; Siebert, Mark W.

    1998-01-01

    Indirect evidence (dust accumulation) has been obtained indicating that the Mars Pathfinder rover, Sojourner, experienced electrostatic charging on Mars. Lander camera images of the Sojourner rover provide distinctive evidence of dust accumulation on rover wheels during traverses, turns, and crabbing maneuvers. The sol 22 (22nd Martian "day" after Pathfinder landed) end-of-day image clearly shows fine red dust concentrated around the wheel edges with additional accumulation in the wheel hubs. A sol 41 image of the rover near the rock "Wedge" (see the next image) shows a more uniform coating of dust on the wheel drive surfaces with accumulation in the hubs similar to that in the previous image. In the sol 41 image, note particularly the loss of black-white contrast on the Wheel Abrasion Experiment strips (center wheel). This loss of contrast was also seen when dust accumulated on test wheels in the laboratory. We believe that this accumulation occurred because the Martian surface dust consists of clay-sized particles, similar to those detected by Viking, which have become electrically charged. By adhering to the wheels, the charged dust carries a net nonzero charge to the rover, raising its electrical potential relative to its surroundings. Similar charging behavior was routinely observed in an experimental facility at the NASA Lewis Research Center, where a Sojourner wheel was driven in a simulated Martian surface environment. There, as the wheel moved and accumulated dust (see the following image), electrical potentials in excess of 100 V (relative to the chamber ground) were detected by a capacitively coupled electrostatic probe located 4 mm from the wheel surface. The measured wheel capacitance was approximately 80 picofarads (pF), and the calculated charge, 8 x 10^-9 coulombs (C). Voltage differences of 100 V and greater are believed sufficient to produce Paschen electrical discharge in the Martian atmosphere. With an accumulated net charge of 8 x 10^-9 C, and average arc time of 1 msec, arcs can also occur with estimated arc currents approaching 10 milliamperes (mA). Discharges of this magnitude could interfere with the operation of sensitive electrical or electronic elements and logic circuits. Sojourner rover wheel tested in laboratory before launch to Mars. Before launch, we believed that the dust would become triboelectrically charged as it was moved about and compacted by the rover wheels. In all cases observed in the laboratory, the test wheel charged positively, and the wheel tracks charged negatively. Dust samples removed from the laboratory wheel averaged a few to tens of micrometers in size (clay size). Coarser grains were left behind in the wheel track. On Mars, grain size estimates of 2 to 10 μm were derived for the Martian surface materials from the Viking Gas Exchange Experiment. These size estimates approximately match the laboratory samples. Our tentative conclusion for the Sojourner observations is that fine clay-sized particles acquired an electrostatic charge during rover traverses and adhered to the rover wheels, carrying electrical charge to the rover. Since the Sojourner rover carried no instruments to measure this mission's onboard electrical charge, confirmatory measurements from future rover missions on Mars are desirable so that the physical and electrical properties of the Martian surface dust can be characterized. Sojourner was protected by discharge points, and Faraday cages were placed around sensitive electronics. 
    But larger systems than Sojourner are being contemplated for missions to the Martian surface in the foreseeable future. The design of such systems will require a detailed knowledge of how they will interact with their environment. Validated environmental interaction models and guidelines for the Martian surface must be developed so that design engineers can test new ideas prior to cutting hardware. These models and guidelines cannot be validated without actual flight data. Electrical charging of vehicles and, one day, astronauts moving across the Martian surface is therefore an important design consideration.
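
    The charge figure quoted above follows directly from the measured capacitance and potential; a one-line check:

      # Q = C * V with the values reported above
      capacitance = 80e-12   # 80 pF, measured wheel capacitance
      potential = 100.0      # V, potential relative to chamber ground
      print(f"Q = {capacitance * potential:.1e} C")   # 8.0e-09 C, matching the quoted 8 x 10^-9 C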

  1. Aortic emboli show surprising size dependent predilection for cerebral arteries: Results from computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Carr, Ian; Schwartz, Robert; Shadden, Shawn

    2012-11-01

    Cardiac emboli can have devastating consequences if they enter the cerebral circulation, and are the most common cause of embolic stroke. Little is known about the relationships of embolic origin/density/size to cerebral events, as these relationships are difficult to observe. To better understand stroke risk from cardiac and aortic emboli, we developed a computational model to track emboli from the heart to the brain. Patient-specific models of the human aorta and arteries to the brain were derived from CT angiography from 10 MHIF patients. Blood flow was modeled by the Navier-Stokes equations using pulsatile inflow at the aortic valve, and physiologic Windkessel models at the outlets. Particulate was injected at the aortic valve and tracked using modified Maxey-Riley equations with a wall collision model. Results demonstrate aortic emboli that entered the cerebral circulation through the carotid or vertebral arteries were localized to specific locations of the proximal aorta. The percentage of released particles embolic to the brain markedly increased with particle size from 0 to ~1-1.5 mm in all patients. Larger particulate became less likely to traverse the cerebral vessels. These findings are consistent with sparse literature based on transesophageal echo measurements. This work was supported in part by the National Science Foundation, award number 1157041.
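
    A minimal sketch of Lagrangian particle tracking of the kind described above, reduced to a drag-only particle in a prescribed 2D velocity field. The study used the full modified Maxey-Riley equations with a wall collision model in patient-specific 3D CFD solutions; the toy flow field and response time below are assumptions for illustration only.

      import numpy as np
      from scipy.integrate import solve_ivp

      def fluid_velocity(t, x):
          # Simple analytic swirling flow standing in for a CFD velocity field.
          return np.array([-x[1], x[0]])

      TAU = 0.05  # assumed particle response time (s), set by particle size and density ratio

      def rhs(t, state):
          pos, vel = state[:2], state[2:]
          drag = (fluid_velocity(t, pos) - vel) / TAU   # Stokes drag relaxes particle toward fluid velocity
          return np.concatenate([vel, drag])

      sol = solve_ivp(rhs, (0.0, 2.0), y0=[1.0, 0.0, 0.0, 0.0], max_step=0.01)
      print("final particle position:", sol.y[:2, -1])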

  2. Quantifying geological processes on Mars-Results of the high resolution stereo camera (HRSC) on Mars express

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; de Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K.-D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-07-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale are the unique strength of the HRSC instrument. The analysis of these data products enabled quantifying geological processes such as effusion rates of lava flows, tectonic deformation, discharge of water in channels, formation timescales of deltas, geometry of sedimentary deposits as well as estimating the age of geological units by crater size-frequency distribution measurements. Both the quantification of geological processes and the age determination allow constraining the evolution of Martian geologic activity in space and time. A second major contribution of HRSC is the discovery of episodicity in the intensity of geological processes on Mars. This has been revealed by comparative age dating of volcanic, fluvial, glacial, and lacustrine deposits. Volcanic processes on Mars have been active over more than 4 Gyr, with peak phases in all three geologic epochs, generally ceasing towards the Amazonian. Fluvial and lacustrine activity phases span the time from the Noachian until the Amazonian, but detailed studies show that they have been interrupted by multiple and long-lasting phases of quiescence. Glacial activity also shows discrete phases of enhanced intensity that may correlate with periods of increased spin-axis obliquity. The episodicity of geological processes like volcanism, erosion, and glaciation on Mars reflects close correlation between surface processes and endogenic activity as well as orbit variations and changing climate conditions.

  3. Quantifying microwear on experimental Mistassini quartzite scrapers: preliminary results of exploratory research using LSCM and scale-sensitive fractal analysis.

    PubMed

    Stemp, W James; Lerner, Harry J; Kristant, Elaine H

    2013-01-01

    Although previous use-wear studies involving quartz and quartzite have been undertaken by archaeologists, these are comparatively few in number. Moreover, there has been relatively little effort to quantify use-wear on stone tools made from quartzite. The purpose of this article is to determine the effectiveness of a measurement system, laser scanning confocal microscopy (LSCM), to document the surface roughness or texture of experimental Mistassini quartzite scrapers used on two different contact materials (fresh and dry deer hide). As in previous studies using LSCM on chert, flint, and obsidian, this exploratory study incorporates a mathematical algorithm that permits the discrimination of surface roughness based on comparisons at multiple scales. Specifically, we employ measures of relative area (RelA) coupled with the F-test to discriminate used from unused stone tool surfaces, as well as surfaces of quartzite scrapers used on dry and fresh deer hide. Our results further demonstrate the effect of raw material variation on use-wear formation and its documentation using LSCM and RelA. PMID:22688593
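
    A rough sketch of the relative area (RelA) idea described above: triangulate a height map at several sampling scales and take the ratio of 3D surface area to projected area; used and unused surfaces can then be compared scale by scale (the study applies an F-test for that comparison). The synthetic surfaces and the simple one-way comparison below are illustrative assumptions, not the published scale-sensitive fractal analysis protocol.

      import numpy as np
      from scipy import stats

      def relative_area(z, dx, step):
          """3D surface area / projected area of height map z (lateral spacing dx),
          evaluated at scale step*dx by subsampling and triangulating each cell."""
          zs = z[::step, ::step]
          d = dx * step
          area = 0.0
          for i in range(zs.shape[0] - 1):
              for j in range(zs.shape[1] - 1):
                  p00, p10 = zs[i, j], zs[i + 1, j]
                  p01, p11 = zs[i, j + 1], zs[i + 1, j + 1]
                  a1 = 0.5 * np.linalg.norm(np.cross([d, 0, p10 - p00], [0, d, p01 - p00]))
                  a2 = 0.5 * np.linalg.norm(np.cross([-d, 0, p01 - p11], [0, -d, p10 - p11]))
                  area += a1 + a2
          projected = d * d * (zs.shape[0] - 1) * (zs.shape[1] - 1)
          return area / projected

      rng = np.random.default_rng(1)
      unused = rng.normal(0, 0.8, (64, 64))   # synthetic rougher (unused) surface
      used = rng.normal(0, 0.3, (64, 64))     # synthetic smoother (used) surface

      rel_unused = [relative_area(unused, dx=1.0, step=s) for s in (1, 2, 4, 8)]
      rel_used = [relative_area(used, dx=1.0, step=s) for s in (1, 2, 4, 8)]
      f_stat, p = stats.f_oneway(rel_unused, rel_used)    # illustrative comparison only
      print(rel_unused, rel_used, round(f_stat, 2), round(p, 4))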

  4. Quantifying Reemission Of Mercury From Terrestrial And Aquatic Systems Using Stable Isotopes: Results From The Experimental Lakes Area METAALICUS Study

    NASA Astrophysics Data System (ADS)

    Lindberg, S. E.; Southworth, G.; Peterson, M.; Hintelmann, H.; Graydon, J.; St. Louis, V.; Amyot, M.; Krabbenhoft, D.

    2003-12-01

    This study represents the first attempt to directly quantify the re-emission of deposited Hg. This is crucial for understanding the sources of Hg emitted from natural surfaces as being of either geological origin or through re-emission of recently deposited Hg. Three stable Hg isotopes are being added experimentally to a headwater lake, wetlands, and its watershed in a whole-ecosystem manipulation study at the Experimental Lakes Area in Canada. Our overall objective is to determine the link between atmospheric deposition and Hg in fish, but numerous aspects of the biogeochemical cycling of Hg are being addressed during METAALICUS (Mercury Experiment to Assess Atmospheric Loading in Canada and the U.S.), including Hg re-emission. Pilot studies in 1999-2000 applied enriched 200Hg to isolated upland and wetland plots, and to lake enclosures. Fluxes were measured with dynamic chambers for several months. The 200Hg spike was quickly detected in ground-level air (e.g. 5 ng/m3) suggesting rapid initial volatilization of the new Hg. Initial 200Hg fluxes exceeded ambient Hg fluxes, but emissions of 200Hg decreased within 3 months to non-detects; about 5% of the applied 200Hg spike was emitted from uplands and about 10% from wetlands. The 200Hg spike (representing new deposition) was generally more readily volatilized than was ambient (old) Hg in both sites. Mercury evasion to the atmosphere from a lake enclosure was also measured and compared with the flux estimated from measured dissolved gaseous mercury (DGM). The introduction of the tracer spike was followed by increased concentrations of DGM and higher fluxes to the atmosphere. In some cases, the observed and calculated fluxes were similar; however, it was common for the observed flux to exceed the calculated flux significantly under some conditions, suggesting that DGM concentration alone in the water column is a poor predictor of gaseous mercury evasion. A substantially larger fraction of the newly deposited Hg was re-emitted from the lake than from wetlands or from upland soils. The whole-ecosystem manipulation is now underway at ELA Lake 658. Addition of 200Hg (to uplands), 202Hg (lake), and 199Hg (wetlands) commenced in 2001 and was completed in June 2003. These data are now being analyzed, and appear to support the behavior seen in the pilot studies; final results will be presented.

  5. Seeking to quantify the ferromagnetic-to-antiferromagnetic interface coupling resulting in exchange bias with various thin-film conformations

    SciTech Connect

    Hsiao, C. H.; Wang, S.; Ouyang, H.; Desautels, R. D.; Lierop, J. van; Lin, K. W.

    2014-08-07

    Ni₃Fe/(Ni, Fe)O thin films with bilayer and nanocrystallite dispersion morphologies are prepared with a dual ion beam deposition technique permitting precise control of nanocrystallite growth, composition, and admixtures. A bilayer morphology provides a Ni₃Fe-to-NiO interface, while the dispersion films have different mixtures of Ni₃Fe, NiO, and FeO nanocrystallites. Using detailed analyses of high resolution transmission electron microscopy images with Multislice simulations, the nanocrystallites' structures and phases are determined, and the intermixing between the Ni₃Fe, NiO, and FeO interfaces is quantified. From field-cooled hysteresis loops, the exchange bias loop shift from spin interactions at the interfaces is determined. With similar interfacial molar ratios of FM-to-AF, we find the exchange bias field essentially unchanged. However, when the interfacial ratio of FM to AF was FM rich, the exchange bias field increased. Since the FM/AF interface contact areas in the nanocrystallite dispersion films are larger than those of the bilayer film, and the nanocrystallite dispersions exhibit larger FM-to-AF interfacial contributions to the magnetism, we attribute the changes in the exchange bias to be from increases in the interfacial segments that suffer defects (such as vacancies and bond distortions), which also affect the coercive fields.

  6. Quantifying Terrestrial Carbon Trends in the Conterminous U.S. - Overall Approach and Results From the Appalachian Forests

    NASA Astrophysics Data System (ADS)

    Liu, J.; Liu, S.; Loveland, T. R.

    2004-12-01

    Estimating dynamic terrestrial ecosystem carbon (C) sources and sinks over large areas is crucial for C management, but it is complicated due to the variations of climate, soil, vegetation, and disturbances. The scaling of C sources and sinks from field to regional level has been challenging. As part of the U.S. Carbon Trends Project, we simulated the forest ecosystem C sequestration of the Appalachians region for the period of 1972 to 2000 using the General Ensemble biogeochemical Modeling System (GEMS). Land cover change was detected using sequential Landsat imagery within seventy-two sample blocks across the region. GEMS used the 60-meter resolution land cover change maps to capture stand-replacing events and used forest inventory data to estimate non-stand-replacing events that were not captured in those maps. GEMS also used Monte Carlo approaches to deal with spatial scaling issues such as initialization of forest age and soil properties. Ensemble simulations were performed to incorporate the uncertainties of input data. Simulated results show that from 1972 to 2000 the net primary productivity (NPP), net ecosystem productivity (NEP) and net biome productivity (NBP) ranged from 481 to 731 (average 595), 97 to 329 (average 173), and 50 to 308 (average 136) g C m-2 yr-1, respectively. The inter-annual variability was mostly driven by climate. The inter-ecoregion variability showed the impacts from soil and land cover change. Model tests revealed that without dynamic land cover change the annual C sink strength for this region would be overestimated by about 20 to 70 percent of the normal-condition C sink. This overestimate was close to the amount of C removed by forest harvesting in the normal condition. Detailed C budgets for the year 2000 were also analyzed. Within a total 148,000 km2 forested area, the average forest ecosystem C density was 161 Mg C ha-1, of which 81 Mg C ha-1 was in biomass and 80 Mg C ha-1 was in litter and soil. The total C stock of the Appalachian forests was estimated to be 2,375 Tg C, including 1,191 Tg C in living biomass and 1,184 Tg C in litter and soil. The total net C sink of the forest ecosystem in 2000 was 13 Tg C y-1.

  7. Clean Colon Software Program (CCSP), Proposal of a standardized Method to quantify Colon Cleansing During Colonoscopy: Preliminary Results

    PubMed Central

    Rosa-Rizzotto, Erik; Dupuis, Adrian; Guido, Ennio; Caroli, Diego; Monica, Fabio; Canova, Daniele; Cervellin, Erica; Marin, Renato; Trovato, Cristina; Crosta, Cristiano; Cocchio, Silvia; Baldo, Vincenzo; De Lazzari, Franca

    2015-01-01

    Background and study aims: Neoplastic lesions can be missed during colonoscopy, especially when cleansing is inadequate. Bowel preparation scales have significant limitations and no objective and standardized method currently exists to establish colon cleanliness during colonoscopy. The aims of our study are to create a software algorithm that is able to analyze bowel cleansing during colonoscopies and to compare it to a validated bowel preparation scale. Patients and methods: A software application (the Clean Colon Software Program, CCSP) was developed. Fifty colonoscopies were carried out and video-recorded. Each video was divided into 3 segments: cecum-hepatic flexure (1st Segment), hepatic flexure-descending colon (2nd Segment) and rectosigmoid segment (3rd Segment). Each segment was recorded twice, both before and after careful cleansing of the intestinal wall. A score from 0 (dirty) to 3 (clean) was then assigned by CCSP. All the videos were also viewed by four endoscopists and colon cleansing was established using the Boston Bowel Preparation Scale. The interclass correlation coefficient was then calculated between the endoscopists and the software. Results: The cleansing score of the prelavage colonoscopies was 1.56 ± 0.52 and the postlavage one was 2.08 ± 0.59 (a statistically significant increase), showing an approximate 33.3 % improvement in cleansing after lavage. The right colon segment prelavage (0.99 ± 0.69) was dirtier than the left colon segment prelavage (2.07 ± 0.71). The overall interobserver agreement between the average cleansing score for the 4 endoscopists and the software pre-cleansing was 0.87 (95 % CI, 0.84-0.90) and post-cleansing was 0.86 (95 % CI, 0.83-0.89). Conclusions: The software is able to discriminate clean from non-clean colon tracts with high significance and is comparable to endoscopist evaluation. PMID:26528508

  8. Quantifying the effect of crops surface albedo variability on GHG budgets in a life cycle assessment approach: methodology and results.

    NASA Astrophysics Data System (ADS)

    Ferlicoq, Morgan; Ceschia, Eric; Brut, Aurore; Tallec, Tiphaine

    2013-04-01

    We tested a new method to estimate the radiative forcing of several crops at the annual and rotation scales, using local measurement data from two ICOS experimental sites. We used jointly 1) the radiative forcing caused by greenhouse gas (GHG) net emissions, calculated by using a Life Cycle Analysis (LCA) approach and in situ measurements (Ceschia et al. 2010), and 2) the radiative forcing caused by rapid changes in surface albedo typical of those ecosystems and resulting from management and crop phenology. The carbon and GHG budgets (GHGB) of 2 crop sites with contrasting management located in South West France (Auradé and Lamasquère sites) were estimated over a complete rotation by combining a classical LCA approach with on-site flux measurements. At both sites, carbon inputs (organic fertilisation and seeds), carbon exports (harvest) and net ecosystem production (NEP), measured with the eddy covariance technique, were calculated. The variability of the different terms and their relative contributions to the net ecosystem carbon budget (NECB) were analysed for all site-years, and the effect of management on NECB was assessed. To account for GHG fluxes that were not directly measured on site, we estimated the emissions caused by field operations (EFO) for each site using emission factors from the literature. The EFO were added to the NECB to calculate the total GHGB for a range of cropping systems and management regimes. N2O emissions were calculated following the IPCC (2007) guidelines, and CH4 emissions were assumed to be negligible compared to other contributions to the net GHGB. Additionally, albedo was calculated continuously using the shortwave incident and reflected radiation measurements in the field (0.3-3 µm) from CNR1 sensors. Mean annual differences in albedo and deduced radiative forcing from a reference value were then compared for all site-years. Mean annual differences in radiative forcing were then converted into g C equivalent m-2 in order to add this effect to the GHG budget (Muñoz et al. 2010). Increasing the length of the vegetative period is considered as one of the main levers for improving the NECB of crop ecosystems. Therefore, we also tested the effect of adding intermediate crops or maintaining crop voluntary re-growth on both the NECB and the radiative forcing caused by the changes in mean annual surface albedo. We showed that the NEP was improved and, as a consequence, the NECB and GHGB too. Intermediate crops also increased the mean annual surface albedo and therefore caused a negative radiative forcing (cooling effect) expressed in g C equivalent m-2 (sink). The use of an intermediate crop could in some cases switch the crop from a positive NEP (source) to a negative one (sink) and the change in radiative forcing (up to -110 g C-eq m-2 yr-1) could overwhelm the NEP term.
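
    A minimal sketch of the albedo term described above: the local shortwave radiative forcing produced by a change in mean annual surface albedo. The incoming-radiation and albedo values are illustrative assumptions, and the further conversion of this forcing into g C equivalent m-2 (Muñoz et al. 2010) is not reproduced here.

      SW_IN_MEAN = 160.0        # assumed mean annual incoming shortwave radiation, W m-2
      albedo_reference = 0.17   # assumed bare-soil albedo between cash crops
      albedo_with_cover = 0.22  # assumed albedo with an intermediate crop present

      delta_albedo = albedo_with_cover - albedo_reference
      forcing = -SW_IN_MEAN * delta_albedo    # negative value = cooling at the surface
      print(f"delta albedo = {delta_albedo:+.2f} -> local shortwave forcing = {forcing:+.1f} W m-2")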

  9. Quantifying Quality

    ERIC Educational Resources Information Center

    Kazlauskas, Edward J.; Bennion, Bruce

    1977-01-01

    Speeches and tutorials of the ASIS Workshop "Quantifying Quality" are summarized. Topics include quantitative methods for measuring performance; queueing theory in libraries; data base value analysis; performance standards for libraries; use of Statistical Package for the Social Sciences in decision making; designing optimal information access…

  10. Airborne Laser Swath Mapping: Results of Field Tests Conducted to Quantify the Effects of Different Ground Covers

    NASA Astrophysics Data System (ADS)

    Carter, W. E.; Shrestha, R. L.; Tuell, G.; Bloomquist, D.; Sartori, M.; Raabe, E.

    2001-12-01

    Most scientific and engineering applications of Airborne Laser Swath Mapping (ALSM) require precisions and/or repeatabilities (relative accuracies) of several decimeters in the horizontal coordinates and a few to several centimeters in the vertical coordinates of the point measurements, or ultimately of surface features derived from the point measurements. Manufacturers generally use components consistent with this level of performance and laboratory calibration and testing results indicate that instrumental errors are within these bounds. However, field observations include additional sources of error that can vary significantly from project to project. Comparisons of results from an ALSM system operated by the University of Florida (Optech Model 1210) and ground survey values, on a point-by-point basis, and as profiles cut from Digital Elevation Models, consistently yield RMS differences of 30 to 50 cm in horizontal coordinates, and 4 to 8 cm in the vertical coordinates, for points on smooth bare surfaces such as pavements, roofs, and sand beaches. These numbers increase in steep or rugged terrain, and in areas covered with vegetation. Results from recent projects will be presented that illustrate the effects of different ground covers, including grass, row crops, marsh grasses, coastal mangroves, open pine and dense mixed forests. Examples illustrating the use of laser intensity values, multiple stops per pulse, and filtering algorithms, to minimize the degradation caused by ground cover, will also be presented.

  11. Genomic and Enzymatic Results Show Bacillus cellulosilyticus Uses a Novel Set of LPXTA Carbohydrases to Hydrolyze Polysaccharides

    PubMed Central

    Mead, David; Drinkwater, Colleen; Brumm, Phillip J.

    2013-01-01

    Background Alkaliphilic Bacillus species are intrinsically interesting due to the bioenergetic problems posed by growth at high pH and high salt. Three alkaline cellulases have been cloned, sequenced and expressed from Bacillus cellulosilyticus N-4 (Bcell) making it an excellent target for genomic sequencing and mining of biomass-degrading enzymes. Methodology/Principal Findings The genome of Bcell is a single chromosome of 4.7 Mb with no plasmids present and three large phage insertions. The most unusual feature of the genome is the presence of 23 LPXTA membrane anchor proteins; 17 of these are annotated as involved in polysaccharide degradation. These two values are significantly higher than seen in any other Bacillus species. This high number of membrane anchor proteins is seen only in pathogenic Gram-positive organisms such as Listeria monocytogenes or Staphylococcus aureus. Bcell also possesses four sortase D subfamily 4 enzymes that incorporate LPXTA-bearing proteins into the cell wall; three of these are closely related to each other and unique to Bcell. Cell fractionation and enzymatic assay of Bcell cultures show that the majority of polysaccharide degradation is associated with the cell wall LPXTA-enzymes, an unusual feature in Gram-positive aerobes. Genomic analysis and growth studies both strongly argue against Bcell being a truly cellulolytic organism, in spite of its name. Preliminary results suggest that fungal mycelia may be the natural substrate for this organism. Conclusions/Significance Bacillus cellulosilyticus N-4, in spite of its name, does not possess any of the genes necessary for crystalline cellulose degradation, demonstrating the risk of classifying microorganisms without the benefit of genomic analysis. Bcell is the first Gram-positive aerobic organism shown to use predominantly cell-bound, non-cellulosomal enzymes for polysaccharide degradation. The LPXTA-sortase system utilized by Bcell may have applications both in anchoring cellulases and other biomass-degrading enzymes to Bcell itself and in anchoring proteins in other Gram-positive organisms. PMID:23593409

  12. Development and application of methods to quantify spatial and temporal hyperpolarized 3He MRI ventilation dynamics: preliminary results in chronic obstructive pulmonary disease

    NASA Astrophysics Data System (ADS)

    Kirby, Miranda; Wheatley, Andrew; McCormack, David G.; Parraga, Grace

    2010-03-01

    Hyperpolarized helium-3 (3He) magnetic resonance imaging (MRI) has emerged as a non-invasive research method for quantifying lung structural and functional changes, enabling direct visualization in vivo at high spatial and temporal resolution. Here we describe the development of methods for quantifying ventilation dynamics in response to salbutamol in Chronic Obstructive Pulmonary Disease (COPD). A whole-body 3.0 Tesla Excite 12.0 MRI system was used to obtain multi-slice coronal images acquired immediately after subjects inhaled hyperpolarized 3He gas. Ventilated volume (VV), ventilation defect volume (VDV) and thoracic cavity volume (TCV) were recorded following segmentation of 3He and 1H images, respectively, and used to calculate percent ventilated volume (PVV) and ventilation defect percent (VDP). Manual segmentation and Otsu thresholding were significantly correlated for VV (r=.82, p=.001), VDV (r=.87, p=.0002), PVV (r=.85, p=.0005), and VDP (r=.85, p=.0005). The level of agreement between these segmentation methods was also evaluated using Bland-Altman analysis and this showed that manual segmentation was consistently higher for VV (Mean=.22 L, SD=.05) and consistently lower for VDV (Mean=-.13, SD=.05) measurements than Otsu thresholding. To automate the quantification of newly ventilated pixels (NVp) post-bronchodilator, we used translation, rotation, and scaling transformations to register pre- and post-salbutamol images. There was a significant correlation between NVp and VDV (r=-.94, p=.005) and between percent newly ventilated pixels (PNVp) and VDP (r=-.89, p=.02), but not for VV or PVV. Evaluation of 3He MRI ventilation dynamics using Otsu thresholding and landmark-based image registration provides a way to regionally quantify functional changes in COPD subjects after treatment with beta-agonist bronchodilators, a common COPD and asthma therapy.
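
    A minimal sketch of the Otsu-thresholding step described above, assuming scikit-image is available. The voxel volume, array shapes and the normalization of PVV and VDP by the thoracic cavity volume are illustrative assumptions, not details taken from the abstract.

      import numpy as np
      from skimage.filters import threshold_otsu

      def ventilation_metrics(he3_slice, tcv_mask, voxel_volume_l=1e-4):
          """Segment a 3He slice by Otsu thresholding and compute PVV/VDP (%)."""
          thresh = threshold_otsu(he3_slice[tcv_mask])       # data-driven cut-off
          ventilated = (he3_slice > thresh) & tcv_mask       # ventilated voxels
          vv = ventilated.sum() * voxel_volume_l             # ventilated volume (L)
          tcv = tcv_mask.sum() * voxel_volume_l              # thoracic cavity volume (L)
          vdv = tcv - vv                                     # ventilation defect volume (L)
          return 100.0 * vv / tcv, 100.0 * vdv / tcv

      # Toy example with synthetic signal: a poorly ventilated strip on the left.
      rng = np.random.default_rng(0)
      img = rng.normal(100.0, 10.0, (64, 64))
      img[:, :20] = rng.normal(20.0, 5.0, (64, 20))
      mask = np.ones((64, 64), dtype=bool)
      pvv, vdp = ventilation_metrics(img, mask)
      print(f"PVV = {pvv:.1f}%, VDP = {vdp:.1f}%")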

  13. QUANTIFYING FOREST ABOVEGROUND CARBON POOLS AND FLUXES USING MULTI-TEMPORAL LIDAR A report on field monitoring, remote sensing MMV, GIS integration, and modeling results for forestry field validation test to quantify aboveground tree biomass and carbon

    SciTech Connect

    Lee Spangler; Lee A. Vierling; Eva K. Strand; Andrew T. Hudak; Jan U.H. Eitel; Sebastian Martinuzzi

    2012-04-01

    Sound policy recommendations relating to the role of forest management in mitigating atmospheric carbon dioxide (CO2) depend upon establishing accurate methodologies for quantifying forest carbon pools for large tracts of land that can be dynamically updated over time. Light Detection and Ranging (LiDAR) remote sensing is a promising technology for achieving accurate estimates of aboveground biomass and thereby carbon pools; however, not much is known about the accuracy of estimating biomass change and carbon flux from repeat LiDAR acquisitions containing different data sampling characteristics. In this study, discrete return airborne LiDAR data were collected in 2003 and 2009 across approximately 20,000 hectares (ha) of an actively managed, mixed conifer forest landscape in northern Idaho, USA. Forest inventory plots, established via a random stratified sampling design, were sampled in 2003 and 2009. The Random Forest machine learning algorithm was used to establish statistical relationships between inventory data and forest structural metrics derived from the LiDAR acquisitions. Aboveground biomass maps were created for the study area based on statistical relationships developed at the plot level. Over this 6-year period, we found that the mean increase in biomass due to forest growth across the non-harvested portions of the study area was 4.8 metric tons/hectare (Mg/ha). In these non-harvested areas, we found a significant difference in biomass increase among forest successional stages, with a higher biomass increase in mature and old forest compared to stand initiation and young forest. Approximately 20% of the landscape had been disturbed by harvest activities during the six-year time period, representing a biomass loss of >70 Mg/ha in these areas. During the study period, these harvest activities outweighed growth at the landscape scale, resulting in an overall loss in aboveground carbon at this site. The 30-fold increase in sampling density between the 2003 and 2009 acquisitions did not affect the biomass estimates. Overall, LiDAR data coupled with field reference data offer a powerful method for calculating pools and changes in aboveground carbon in forested systems. The results of our study suggest that multitemporal LiDAR-based approaches are likely to be useful for high-quality estimates of aboveground carbon change in conifer forest systems.
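
    A hedged sketch of the plot-level modeling step described above, using scikit-learn's RandomForestRegressor; the LiDAR metrics, their ranges and the synthetic biomass response are invented for illustration and are not values from the study.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(42)

      # Hypothetical training data: per-plot LiDAR structural metrics
      # (e.g., mean height, max height, canopy cover) and field biomass (Mg/ha).
      X = rng.uniform([2.0, 5.0, 0.2], [25.0, 40.0, 0.95], size=(80, 3))
      y = 5.0 * X[:, 0] + 2.0 * X[:, 1] + 50.0 * X[:, 2] + rng.normal(0.0, 10.0, 80)

      model = RandomForestRegressor(n_estimators=500, oob_score=True, random_state=0)
      model.fit(X, y)
      print("Out-of-bag R^2:", round(model.oob_score_, 2))

      # Apply the plot-level model to gridded LiDAR metrics to map biomass; maps
      # from two acquisitions can then be differenced to estimate biomass change.
      grid_metrics = rng.uniform([2.0, 5.0, 0.2], [25.0, 40.0, 0.95], size=(1000, 3))
      biomass_map = model.predict(grid_metrics)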

  14. Quantifying Electron Delocalization in Electrides.

    PubMed

    Janesko, Benjamin G; Scalmani, Giovanni; Frisch, Michael J

    2016-01-12

    Electrides are ionic solids whose anions are electrons confined to crystal voids. We show that our electron delocalization range function EDR(r;d), which quantifies the extent to which an electron at point r in a calculated wave function delocalizes over distance d, provides useful insights into electrides. The EDR quantifies the characteristic delocalization length of electride electrons and provides a chemically intuitive real-space picture of the electrons' distribution. It also gives a potential diagnostic for whether a given formula unit will form a solid electride at ambient pressure, quantifies the effects of electron-electron correlation on confined electrons' interactions, and highlights analogies between covalent bonding and the interaction of interstitial quasi-atoms in high-pressure electrides. These results motivate adding the EDR to the toolbox of theoretical methods applied to electrides. PMID:26652208

  15. Presentation Showing Results of a Hydrogeochemical Investigation of the Standard Mine Vicinity, Upper Elk Creek Basin, Colorado

    USGS Publications Warehouse

    Manning, Andrew H.; Verplanck, Philip L.; Mast, M. Alisa; Wanty, Richard B.

    2008-01-01

    PREFACE This Open-File Report consists of a presentation given in Crested Butte, Colorado on December 13, 2007 to the Standard Mine Advisory Group. The presentation was paired with another presentation given by the Colorado Division of Reclamation, Mining, and Safety on the physical features and geology of the Standard Mine. The presentation in this Open-File Report summarizes the results and conclusions of a hydrogeochemical investigation of the Standard Mine performed by the U.S. Geological Survey (Manning and others, in press). The purpose of the investigation was to aid the U.S. Environmental Protection Agency in evaluating remediation options for the Standard Mine site. Additional details and supporting data related to the information in this presentation can be found in Manning and others (in press).

  16. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle.

    PubMed

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008
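
    A rough illustration of the analysis pipeline described above (standardized EPDs, k-means clustering with k = 3, and a first linear discriminant used as a multivariate index), written with scikit-learn; the EPD matrix is simulated and the exact procedures of the study may differ.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.cluster import KMeans
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

      rng = np.random.default_rng(1)
      # Hypothetical EPD matrix: animals x traits (BW, W120, W210, W365, W450, LMA, BF, RF)
      epd = rng.normal(size=(300, 8))

      z = StandardScaler().fit_transform(epd)                   # zero mean, unit variance
      clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)

      # LDA on the k-means clusters; the first discriminant (LD1) can serve as a
      # multivariate index for ranking animals.
      lda = LinearDiscriminantAnalysis(n_components=2).fit(z, clusters)
      ld1 = lda.transform(z)[:, 0]
      ranking = np.argsort(ld1)[::-1]                           # highest LD1 scores first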

  17. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle

    PubMed Central

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008

  18. QUANTIFYING SPICULES

    SciTech Connect

    Pereira, Tiago M. D.; De Pontieu, Bart; Carlsson, Mats

    2012-11-01

    Understanding the dynamic solar chromosphere is fundamental in solar physics. Spicules are an important feature of the chromosphere, connecting the photosphere to the corona, potentially mediating the transfer of energy and mass. The aim of this work is to study the properties of spicules over different regions of the Sun. Our goal is to investigate if there is more than one type of spicule, and how spicules behave in the quiet Sun, coronal holes, and active regions. We make use of high cadence and high spatial resolution Ca II H observations taken by Hinode/Solar Optical Telescope. Making use of a semi-automated detection algorithm, we self-consistently track and measure the properties of 519 spicules over different regions. We find clear evidence of two types of spicules. Type I spicules show a rise and fall and have typical lifetimes of 150-400 s and maximum ascending velocities of 15-40 km s⁻¹, while type II spicules have shorter lifetimes of 50-150 s, faster velocities of 30-110 km s⁻¹, and are not seen to fall down, but rather fade at around their maximum length. Type II spicules are the most common, seen in the quiet Sun and coronal holes. Type I spicules are seen mostly in active regions. There are regional differences between quiet-Sun and coronal hole spicules, likely attributable to the different field configurations. The properties of type II spicules are consistent with published results of rapid blueshifted events (RBEs), supporting the hypothesis that RBEs are their disk counterparts. For type I spicules we find the relations between their properties to be consistent with a magnetoacoustic shock wave driver, and with dynamic fibrils as their disk counterpart. The driver of type II spicules remains unclear from limb observations.

  19. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty-quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
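
    For readers unfamiliar with matrix-factorization gap filling, the sketch below shows plain PMF on a masked trait matrix using simple regularized gradient updates. It omits the Bayesian hierarchy, taxonomic information and Gibbs-sampler uncertainty estimates that distinguish BHPMF, so it only illustrates the underlying idea.

      import numpy as np

      def pmf_fill(X, rank=2, lr=0.01, reg=0.1, epochs=2000, seed=0):
          """Fill missing entries (NaN) of a matrix by low-rank factorization."""
          rng = np.random.default_rng(seed)
          mask = ~np.isnan(X)
          Xf = np.where(mask, X, 0.0)
          n, m = X.shape
          U = 0.1 * rng.normal(size=(n, rank))
          V = 0.1 * rng.normal(size=(m, rank))
          for _ in range(epochs):
              E = mask * (Xf - U @ V.T)       # residuals on observed entries only
              U += lr * (E @ V - reg * U)     # gradient step on U
              V += lr * (E.T @ U - reg * V)   # gradient step on V
          return U @ V.T                      # dense reconstruction fills the gaps

      # Toy species x trait matrix with missing values
      X = np.array([[1.0, np.nan, 3.0],
                    [1.1, 2.1, np.nan],
                    [np.nan, 1.9, 2.9]])
      print(np.round(pmf_fill(X), 2))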

  20. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best

  1. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  2. Two heteronuclear dipolar results at the price of one: Quantifying Na/P contacts in phosphosilicate glasses and biomimetic hydroxy-apatite

    NASA Astrophysics Data System (ADS)

    Stevensson, Baltzar; Mathew, Renny; Yu, Yang; Edén, Mattias

    2015-02-01

    The analysis of S{I} recoupling experiments applied to amorphous solids yields a heteronuclear second moment M2 (S-I) that represents the effective through-space dipolar interaction between the detected S spins and the neighboring I-spin species. We show that both M2 (S-I) and M2 (I-S) values are readily accessible from a sole S{I} or I{S} experiment, which may involve either S or I detection, and is naturally selected as the most favorable option under the given experimental conditions. For the common case where I has half-integer spin, an I{S} REDOR implementation is preferred to the S{I} REAPDOR counterpart. We verify the procedure by 23Na{31P} REDOR and 31P{23Na} REAPDOR NMR applied to Na2O-CaO-SiO2-P2O5 glasses and biomimetic hydroxyapatite, where the M2 (P-Na) values directly determined by REAPDOR agree very well with those derived from the corresponding M2 (Na-P) results measured by REDOR. Moreover, we show that dipolar second moments are readily extracted from the REAPDOR NMR protocol by a straightforward numerical fitting of the initial dephasing data, in direct analogy with the well-established procedure to determine M2 (S-I) values from REDOR NMR experiments applied to amorphous materials; this avoids the problems with time-consuming numerically exact simulations whose accuracy is limited for describing the dynamics of a priori unknown multi-spin systems in disordered structures.
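
    As an illustration of extracting a dipolar second moment from initial dephasing data, the sketch below fits the commonly used initial-regime approximation dS/S0 = 4/(3*pi^2) * M2 * t^2 (with t the total recoupling time) using SciPy; the approximation, the data points and the units are assumptions made for illustration and are not taken from the abstract.

      import numpy as np
      from scipy.optimize import curve_fit

      def redor_initial(t, m2):
          # Initial-regime approximation, valid only for small dephasing fractions.
          return 4.0 / (3.0 * np.pi ** 2) * m2 * t ** 2

      # Hypothetical dephasing data: total recoupling time (ms) and dS/S0
      t = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
      ds_s0 = np.array([0.01, 0.04, 0.09, 0.15, 0.24])

      popt, _ = curve_fit(redor_initial, t, ds_s0)
      print("Fitted M2:", round(float(popt[0]), 3), "ms^-2")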

  3. Two heteronuclear dipolar results at the price of one: quantifying Na/P contacts in phosphosilicate glasses and biomimetic hydroxy-apatite.

    PubMed

    Stevensson, Baltzar; Mathew, Renny; Yu, Yang; Edén, Mattias

    2015-02-01

    The analysis of S{I} recoupling experiments applied to amorphous solids yields a heteronuclear second moment M(2)(S-I) that represents the effective through-space dipolar interaction between the detected S spins and the neighboring I-spin species. We show that both M(2)(S-I) and M(2)(I-S) values are readily accessible from a sole S{I} or I{S} experiment, which may involve either S or I detection, and is naturally selected as the most favorable option under the given experimental conditions. For the common case where I has half-integer spin, an I{S} REDOR implementation is preferred to the S{I} REAPDOR counterpart. We verify the procedure by (23)Na{(31)P} REDOR and (31)P{(23)Na} REAPDOR NMR applied to Na(2)O-CaO-SiO(2)-P(2)O(5) glasses and biomimetic hydroxyapatite, where the M(2)(P-Na) values directly determined by REAPDOR agree very well with those derived from the corresponding M(2)(Na-P) results measured by REDOR. Moreover, we show that dipolar second moments are readily extracted from the REAPDOR NMR protocol by a straightforward numerical fitting of the initial dephasing data, in direct analogy with the well-established procedure to determine M(2)(S-I) values from REDOR NMR experiments applied to amorphous materials; this avoids the problems with time-consuming numerically exact simulations whose accuracy is limited for describing the dynamics of a priori unknown multi-spin systems in disordered structures. PMID:25557863

  4. Reanalysis of mGWAS results and in vitro validation show that lactate dehydrogenase interacts with branched-chain amino acid metabolism.

    PubMed

    Heemskerk, Mattijs M; van Harmelen, Vanessa Ja; van Dijk, Ko Willems; van Klinken, Jan Bert

    2016-01-01

    The assignment of causative genes to noncoding variants identified in genome-wide association studies (GWASs) is challenging. We show how combination of knowledge from gene and pathway databases and chromatin interaction data leads to reinterpretation of published quantitative trait loci for blood metabolites. We describe a previously unidentified link between the rs2403254 locus, which is associated with the ratio of 3-methyl-2-oxobutanoate and alpha-hydroxyisovalerate levels, and the distal LDHA gene. We confirmed that lactate dehydrogenase can catalyze the conversion between these metabolites in vitro, suggesting that it has a role in branched-chain amino acid metabolism. Examining datasets from the ENCODE project we found evidence that the locus and LDHA promoter physically interact, showing that LDHA expression is likely under control of distal regulatory elements. Importantly, this discovery demonstrates that bioinformatic workflows for data integration can have a vital role in the interpretation of GWAS results. PMID:26014429

  5. Analysis of conservative tracer measurement results using the Frechet distribution at planted horizontal subsurface flow constructed wetlands filled with coarse gravel and showing the effect of clogging processes.

    PubMed

    Dittrich, Ernő; Klincsik, Mihály

    2015-11-01

    A mathematical process, developed in Maple environment, has been successful in decreasing the error of measurement results and in the precise calculation of the moments of corrected tracer functions. It was proved that with this process, the measured tracer results of horizontal subsurface flow constructed wetlands filled with coarse gravel (HSFCW-C) can be fitted more accurately than with the conventionally used distribution functions (Gaussian, Lognormal, Fick (Inverse Gaussian) and Gamma). This statement is true only for the planted HSFCW-Cs. The analysis of unplanted HSFCW-Cs needs more research. The result of the analysis shows that the conventional solutions (completely stirred series tank reactor (CSTR) model and convection-dispersion transport (CDT) model) cannot describe these types of transport processes with sufficient accuracy. These outcomes can help in developing better process descriptions of very difficult transport processes in HSFCW-Cs. Furthermore, a new mathematical process can be developed for the calculation of real hydraulic residence time (HRT) and dispersion coefficient values. The presented method can be generalized to other kinds of hydraulic environments. PMID:26126688
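
    A simple sketch of how residence-time moments, and hence the mean hydraulic residence time (HRT), can be computed from a measured tracer curve; the breakthrough curve below is synthetic, and the error-correction and Frechet-fitting steps of the study are not reproduced.

      import numpy as np

      # Hypothetical tracer breakthrough curve: time (h) and outlet concentration
      t = np.linspace(0.1, 200.0, 400)
      c = np.exp(-((np.log(t) - 3.5) ** 2) / 0.5)    # skewed, Frechet-like pulse
      dt = t[1] - t[0]

      # The first normalized moment gives the mean HRT; the second central
      # moment reflects dispersion in the wetland.
      m0 = np.sum(c) * dt
      hrt = np.sum(t * c) * dt / m0
      variance = np.sum((t - hrt) ** 2 * c) * dt / m0
      print(f"HRT ~ {hrt:.1f} h, variance ~ {variance:.1f} h^2")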

  6. T-cell lines from 2 patients with adenosine deaminase (ADA) deficiency showed the restoration of ADA activity resulted from the reversion of an inherited mutation.

    PubMed

    Ariga, T; Oda, N; Yamaguchi, K; Kawamura, N; Kikuta, H; Taniuchi, S; Kobayashi, Y; Terada, K; Ikeda, H; Hershfield, M S; Kobayashi, K; Sakiyama, Y

    2001-05-01

    Inherited deficiency of adenosine deaminase (ADA) results in one of the autosomal recessive forms of severe combined immunodeficiency. This report discusses 2 patients with ADA deficiency from different families, in whom a possible reverse mutation had occurred. The novel mutations were identified in the ADA gene from the patients, and both their parents were revealed to be carriers. Unexpectedly, established patient T-cell lines, not B-cell lines, showed half-normal levels of ADA enzyme activity. Reevaluation of the mutations in these T-cell lines indicated that one of the inherited ADA gene mutations was reverted in both patients. At least one of the patients seemed to possess the revertant cells in vivo; however, the mutant cells might have overcome the revertant after receiving ADA enzyme replacement therapy. These findings may have significant implications regarding the prospects for stem cell gene therapy for ADA deficiency. PMID:11313286

  7. Thermosensory reversal effect quantified.

    PubMed

    Bergmann Tiest, Wouter M; Kappers, Astrid M L

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by measuring discrimination thresholds for subjective coldness at different ambient temperatures using stimuli of different thicknesses. The reversal point was found to be at 34 degrees C, somewhat above skin temperature. At this reversal point, discrimination is quite impossible. At room temperature, subjects were able to discriminate between stimuli of different thickness based on subjective coldness, showing that the sense of touch, unlike vision, can penetrate solid objects. Furthermore, somewhat surprisingly, at ambient temperatures well below normal room temperature, discrimination is worse than at room temperature. PMID:17306203

  8. Simple instruments used in monitoring ionospheric perturbations and some observational results showing the ionospheric responses to the perturbations mainly from the lower atmosphere

    NASA Astrophysics Data System (ADS)

    Xiao, Zuo; Hao, Yongqiang; Zhang, Donghe; Xiao, Sai-Guan; Huang, Weiquan

    Ionospheric disturbances such as SIDs and acoustic gravity waves on different scales are well known and commonly discussed topics. Some simple ground equipment was designed and used for continuously monitoring the effects of these disturbances, especially SWF and SFD. Besides SIDs, they also clearly reflect acoustic gravity waves on different scales and Spread-F, and these data are an important supplement to the traditional ionosonde records. This is of significance for understanding the physical essentials of ionospheric disturbances and for applications in SID warning. In this paper, the design of the instruments is given and results are discussed in detail. Some case studies are introduced as examples, which show very clearly not only the immediate effects of solar flares, but also ionospheric responses to large-scale gravity waves from the lower atmosphere, such as typhoons, great earthquakes and volcanic eruptions. In particular, the results showed that acoustic gravity waves play a significant role in seeding ionospheric Spread-F. These examples give evidence that lower atmospheric activities strongly influence the ionosphere.

  9. How often do German children and adolescents show signs of common mental health problems? Results from different methodological approaches - a cross-sectional study

    PubMed Central

    2014-01-01

    Background Child and adolescent mental health problems are ubiquitous and burdensome. Their impact on functional disability, the high rates of accompanying medical illnesses and the potential to last until adulthood make them a major public health issue. While methodological factors cause variability of the results from epidemiological studies, there is a lack of prevalence rates of mental health problems in children and adolescents according to ICD-10 criteria from nationally representative samples. International findings suggest only a small proportion of children with function-impairing mental health problems receive treatment, but information about the health care situation of children and adolescents is scarce. The aim of this epidemiological study was a) to classify symptoms of common mental health problems according to ICD-10 criteria in order to compare the statistical and clinical case definition strategies using a single set of data and b) to report ICD-10 codes from health insurance claims data. Methods a) Based on a clinical expert rating, questionnaire items were mapped on ICD-10 criteria; data from the Mental Health Module (BELLA study) were analyzed for relevant ICD-10 and cut-off criteria; b) Claims data were analyzed for relevant ICD-10 codes. Results According to parent report 7.5% (n = 208) met the ICD-10 criteria of a mild depressive episode and 11% (n = 305) showed symptoms of depression according to cut-off score; Anxiety is reported in 5.6% (n = 156) and 11.6% (n = 323), conduct disorder in 15.2% (n = 373) and 14.6% (n = 357). Self-reported symptoms in 11- to 17-year-olds resulted in 15% (n = 279) reporting signs of a mild depression according to ICD-10 criteria (vs. 16.7% (n = 307) based on cut-off) and 10.9% (n = 201) reported symptoms of anxiety (vs. 15.4% (n = 283)). Results from routine data identify 0.9% (n = 1,196) with a depression diagnosis, 3.1% (n = 6,729) with anxiety and 1.4% (n = 3,100) with conduct disorder in outpatient health care. Conclusions Statistical and clinical case definition strategies show moderate concordance in depression and conduct disorder in a German national sample. Comparatively, lower rates of children and adolescents with diagnosed mental health problems in the outpatient health care setting support the assumptions that a small number of children and adolescents in need of treatment receive it. PMID:24597565

  10. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects.

    PubMed

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás; Koskela, Antti; Tuukkanen, Juha; Ohlsson, Claes; Rozell, Björn; Eriksson, Maria

    2015-08-01

    Hutchinson-Gilford progeria syndrome (HGPS) is a rare premature aging disorder that is most commonly caused by a de novo point mutation in exon 11 of the LMNA gene, c.1824C>T, which results in an increased production of a truncated form of lamin A known as progerin. In this study, we used a mouse model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant progerin splicing give hope to patients who are affected by HGPS. PMID:25877214

  11. Magnetic Sphincter Augmentation for Gastroesophageal Reflux at 5 Years: Final Results of a Pilot Study Show Long-Term Acid Reduction and Symptom Improvement

    PubMed Central

    Saino, Greta; Bonavina, Luigi; Lipham, John C.; Dunn, Daniel

    2015-01-01

    Abstract Background: As previously reported, the magnetic sphincter augmentation device (MSAD) preserves gastric anatomy and results in less severe side effects than traditional antireflux surgery. The final 5-year results of a pilot study are reported here. Patients and Methods: A prospective, multicenter study evaluated safety and efficacy of the MSAD for 5 years. Prior to MSAD placement, patients had abnormal esophageal acid and symptoms poorly controlled by proton pump inhibitors (PPIs). Patients served as their own control, which allowed comparison between baseline and postoperative measurements to determine individual treatment effect. At 5 years, gastroesophageal reflux disease (GERD)-Health Related Quality of Life (HRQL) questionnaire score, esophageal pH, PPI use, and complications were evaluated. Results: Between February 2007 and October 2008, 44 patients (26 males) had an MSAD implanted by laparoscopy, and 33 patients were followed up at 5 years. Mean total percentage of time with pH <4 was 11.9% at baseline and 4.6% at 5 years, a statistically significant reduction. Conclusions: This study shows the relative safety and efficacy of magnetic sphincter augmentation for GERD. PMID:26437027

  12. Quantifiable Lateral Flow Assay Test Strips

    NASA Technical Reports Server (NTRS)

    2003-01-01

    As easy to read as a home pregnancy test, three Quantifiable Lateral Flow Assay (QLFA) strips used to test water for E. coli show different results. The brightly glowing control line on the far right of each strip indicates that all three tests ran successfully. But the glowing test line on the middle left and bottom strips reveal their samples were contaminated with E. coli bacteria at two different concentrations. The color intensity correlates with concentration of contamination.

  13. SENS-IS, a 3D reconstituted epidermis based model for quantifying chemical sensitization potency: Reproducibility and predictivity results from an inter-laboratory study.

    PubMed

    Cottrez, Françoise; Boitel, Elodie; Ourlin, Jean-Claude; Peiffer, Jean-Luc; Fabre, Isabelle; Henaoui, Imène-Sarah; Mari, Bernard; Vallauri, Ambre; Paquet, Agnes; Barbry, Pascal; Auriault, Claude; Aeby, Pierre; Groux, Hervé

    2016-04-01

    The SENS-IS test protocol for the in vitro detection of sensitizers is based on a reconstructed human skin model (Episkin) as the test system and on the analysis of the expression of a large panel of genes. Its excellent performance was initially demonstrated with a limited set of test chemicals. Further studies (described here) were organized to confirm these preliminary results and to obtain a detailed statistical analysis of the predictive capacity of the assay. A ring-study was thus organized and performed within three laboratories, using a test set of 19 blind coded chemicals. Data analysis indicated that the assay is robust, easily transferable and offers high predictivity and excellent within- and between-laboratories reproducibility. To further evaluate the predictivity of the test protocol according to Cooper statistics a comprehensive test set of 150 chemicals was then analyzed. Again, data analysis confirmed the excellent capacity of the SENS-IS assay for predicting both hazard and potency characteristics, confirming that this assay should be considered as a serious alternative to the available in vivo sensitization tests. PMID:26795242

  14. Prospects of an alternative treatment against Trypanosoma cruzi based on abietic acid derivatives show promising results in Balb/c mouse model.

    PubMed

    Olmo, F; Guardia, J J; Marin, C; Messouri, I; Rosales, M J; Urbanová, K; Chayboun, I; Chahboun, R; Alvarez-Manzaneda, E J; Sánchez-Moreno, M

    2015-01-01

    Chagas disease, caused by the protozoan parasite Trypanosoma cruzi, is an example of extended parasitaemia with unmet medical needs. Current treatments based on old-featured benznidazole (Bz) and nifurtimox are expensive and do not fulfil the criteria of effectiveness and lack of toxicity expected of modern drugs. In this work, a group of abietic acid derivatives that are chemically stable and well characterised were introduced as candidates for the treatment of Chagas disease. In vitro and in vivo assays were performed in order to test the effectiveness of these compounds. Finally, those which showed the best activity underwent additional studies in order to elucidate the possible mechanism of action. In vitro results indicated that some compounds have low toxicity (i.e., >150 μM against Vero cells) combined with high efficacy (i.e., <20 μM) against some forms of T. cruzi. Further in vivo studies on mouse models confirmed the expectations of improvements in infected mice. In vivo tests on the acute phase gave parasitaemia inhibition values higher than those of Bz, and a remarkable decrease in the reactivation of parasitaemia was found in the chronic phase after immunosuppression of the mice treated with one of the compounds. The morphological alterations found in parasites treated with our derivatives confirmed extensive damage; energetic metabolism disturbances were also registered by (1)H NMR. The demonstrated in vivo activity and low toxicity, together with the use of affordable starting products and the lack of synthetic complexity, put these abietic acid derivatives in a remarkable position toward the development of an anti-Chagasic agent. PMID:25462275

  15. Quantifying and Reducing the Uncertainties in Future Projections of Droughts and Heat Waves for North America that Result from the Diversity of Models in CMIP5

    NASA Astrophysics Data System (ADS)

    Herrera-Estrada, J. E.; Sheffield, J.

    2014-12-01

    There are many sources of uncertainty regarding the future projections of our climate, including the multiple possible Representative Concentration Pathways (RCPs), the variety of climate models used, and the initial and boundary conditions with which they are run. Moreover, it has been shown that the internal variability of the climate system can sometimes be of the same order of magnitude as the climate change signal or even larger for some variables. Nonetheless, in order to help inform stakeholders in water resources and agriculture in North America when developing adaptation strategies, particularly for extreme events such as droughts and heat waves, it is necessary to study the plausible range of changes that the region might experience during the 21st century. We aim to understand and reduce the uncertainties associated with this range of possible scenarios by focusing on the diversity of climate models involved in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Data output from various CMIP5 models is compared against near surface climate and land-surface hydrological data from the North American Land Data Assimilation System (NLDAS)-2 to evaluate how well each climate model represents the land-surface processes associated with droughts and heat waves during the overlapping historical period (1979-2005). These processes include the representation of precipitation and radiation and their partitioning at the land surface, land-atmosphere interactions, and the propagation of signals of these extreme events through the land surface. The ability of the CMIP5 models to reproduce these important physical processes for regions of North America is used to inform a multi-model ensemble in which models that represent the processes relevant to droughts and heat waves better are given more importance. Furthermore, the future projections are clustered to identify possible dependencies in behavior across models. The results indicate a wide range in performance for the historical runs with some models hampered by poor interannual variability in summer precipitation and near surface air temperature, whilst others partition too much precipitation into evapotranspiration with implications for drought and heat wave development.
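
    A toy illustration of the skill-weighted multi-model ensemble idea described above; the skill scores and projected changes are invented, and the weighting scheme actually used in the study may differ.

      import numpy as np

      # Hypothetical historical-skill scores (e.g., derived from comparison with
      # NLDAS-2) for five CMIP5 models, and their projected changes in a drought metric.
      skill = np.array([0.9, 0.4, 0.7, 0.2, 0.6])
      projection = np.array([-12.0, -30.0, -18.0, -45.0, -20.0])   # % change

      weights = skill / skill.sum()
      weighted_mean = np.sum(weights * projection)
      print(f"Weighted mean: {weighted_mean:.1f}%, unweighted mean: {projection.mean():.1f}%")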

  16. Modeling upward brine migration through faults as a result of CO2 storage in the Northeast German Basin shows negligible salinization in shallow aquifers

    NASA Astrophysics Data System (ADS)

    Kuehn, M.; Tillner, E.; Kempka, T.; Nakaten, B.

    2012-12-01

    The geological storage of CO2 in deep saline formations may cause salinization of shallower freshwater resources by upward flow of displaced brine from the storage formation into potable groundwater. In this regard, permeable faults or fractures can serve as potential leakage pathways for upward brine migration. The present study uses a regional-scale 3D model based on real structural data of a prospective CO2 storage site in Northeastern Germany to determine the impact of compartmentalization and fault permeability on upward brine migration as a result of pressure elevation by CO2 injection. To evaluate the degree of salinization in the shallower aquifers, different fault leakage scenarios were carried out using a newly developed workflow in which the model grid from the software package Petrel applied for pre-processing is transferred to the reservoir simulator TOUGH2-MP/ECO2N. A discrete fault description is achieved by using virtual elements. A static 3D geological model of the CO2 storage site with an areal size of 40 km x 40 km and a thickness of 766 m was implemented. Subsequently, large-scale numerical multi-phase multi-component (CO2, NaCl, H2O) flow simulations were carried out on a high performance computing system. The prospective storage site, located in the Northeast German Basin, is part of an anticline structure characterized by a saline multi-layer aquifer system. The NE and SW boundaries of the study area are confined by the Fuerstenwalde Gubener and the Lausitzer Abbruch fault zones represented by four discrete faults in the model. Two formations of the Middle Bunter were chosen to assess brine migration through faults triggered by an annual injection rate of 1.7 Mt CO2 into the lowermost formation over a time span of 20 years. In addition to varying fault permeabilities, different boundary conditions were applied to evaluate the effects of reservoir compartmentalization. Simulation results show that the highest pressurization within the storage formation, with a relative pressure increase of up to 150% after 20 years of injection, is caused by strong compartmentalization effects if closed boundaries and closed faults are assumed. The CO2 plume is considerably smaller compared to those that develop when laterally open boundaries are applied. Laterally open boundaries and highly permeable faults lead to the strongest pressure dissipation and cause the CO2 plume to come up almost 3 km closer to the fault. Closed model boundaries in the lower aquifers and four highly permeable faults (> 1,000 mD) lead to the highest salinities in the uppermost Stuttgart formation, with an average salinity increase of 0.24% (407 mg/l) after 20 years of injection. Smaller salinity changes in the uppermost aquifers are observed with closed boundaries in the lower aquifers and only one major fault open for brine flow. Here, fault permeability, unexpectedly, does not significantly influence salinization in the uppermost Stuttgart formation. Salinity increases by 0.04% (75 mg/l) for a fault permeability of 1,000 mD and by at least 0.06% (96 mg/l) for a fault permeability of 10,000 mD by the end of injection. Taking into account the modeling results, shallow aquifer salinization is not expected to be of concern for the investigated study area in the Northeastern German Basin.

  17. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the

  18. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the…

  19. Quantifying Proportional Variability

    PubMed Central

    Heath, Joel P.; Borowski, Peter

    2013-01-01

    Real quantities can undergo such a wide variety of dynamics that the mean is often a meaningless reference point for measuring variability. Despite their widespread application, techniques like the Coefficient of Variation are not truly proportional and exhibit pathological properties. The non-parametric measure Proportional Variability (PV) [1] resolves these issues and provides a robust way to summarize and compare variation in quantities exhibiting diverse dynamical behaviour. Instead of being based on deviation from an average value, variation is simply quantified by comparing the numbers to each other, requiring no assumptions about central tendency or underlying statistical distributions. While PV has been introduced before and has already been applied in various contexts to population dynamics, here we present a deeper analysis of this new measure, derive analytical expressions for the PV of several general distributions and present new comparisons with the Coefficient of Variation, demonstrating cases in which PV is the more favorable measure. We show that PV provides an easily interpretable approach for measuring and comparing variation that can be generally applied throughout the sciences, from contexts ranging from stock market stability to climate variation. PMID:24386334
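
    A small sketch of computing Proportional Variability from pairwise comparisons. The formulation used here (the mean over all pairs of 1 - min/max, for strictly positive values) is an assumption based on the published PV measure and may differ in detail from the cited work.

      import numpy as np
      from itertools import combinations

      def proportional_variability(z):
          """Average pairwise proportional difference; assumes positive values."""
          z = np.asarray(z, dtype=float)
          d = [1.0 - min(a, b) / max(a, b) for a, b in combinations(z, 2)]
          return float(np.mean(d))

      stable = [100, 102, 98, 101, 99]
      volatile = [10, 200, 35, 150, 5]
      print(proportional_variability(stable), proportional_variability(volatile))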

  20. Two yeast acid phosphatase structural genes are the result of a tandem duplication and show different degrees of homology in their promoter and coding sequences.

    PubMed Central

    Meyhack, B; Bajwa, W; Rudolph, H; Hinnen, A

    1982-01-01

    We have cloned the structural genes for a regulated (PHO5) and a constitutive (PHO3) acid phosphatase from yeast by transformation and complementation of a yeast pho3, pho5 double mutant. Both genes are located on a 5.1-kb BamHI fragment. The cloned genes were identified on the basis of genetic evidence and by hybrid selection of mRNA coupled with in vitro translation and immunoprecipitation. Subcloning of partial Sau3A digests and functional in vivo analysis by transformation together with DNA sequence analysis showed that the two genes are oriented in the order (5') PHO5, PHO3 (3'). While the nucleotide sequences of the two coding regions are quite similar, the putative promoter regions show a lower degree of sequence homology. Partly divergent promoter sequences may explain the different regulation of the two genes. PMID:6329697

  1. The Relevance of External Quality Assessment for Molecular Testing for ALK Positive Non-Small Cell Lung Cancer: Results from Two Pilot Rounds Show Room for Optimization

    PubMed Central

    Tembuyser, Lien; Tack, Véronique; Zwaenepoel, Karen; Pauwels, Patrick; Miller, Keith; Bubendorf, Lukas; Kerr, Keith; Schuuring, Ed; Thunnissen, Erik; Dequeker, Elisabeth M. C.

    2014-01-01

    Background and Purpose Molecular profiling should be performed on all advanced non-small cell lung cancer with non-squamous histology to allow treatment selection. Currently, this should include EGFR mutation testing and testing for ALK rearrangements. ROS1 is another emerging target. ALK rearrangement status is a critical biomarker to predict response to tyrosine kinase inhibitors such as crizotinib. To promote high quality testing in non-small cell lung cancer, the European Society of Pathology has introduced an external quality assessment scheme. This article summarizes the results of the first two pilot rounds organized in 2012-2013. Materials and Methods Tissue microarray slides consisting of cell-lines and resection specimens were distributed with the request for routine ALK testing using IHC or FISH. Participation in ALK FISH testing included the interpretation of four digital FISH images. Results Data from 173 different laboratories was obtained. Results demonstrate decreased error rates in the second round for both ALK FISH and ALK IHC, although the error rates were still high and the need for external quality assessment in laboratories performing ALK testing is evident. Error rates obtained by FISH were lower than by IHC. The lowest error rates were observed for the interpretation of digital FISH images. Conclusion There was a large variety in FISH enumeration practices. Based on the results from this study, recommendations for the methodology, analysis, interpretation and result reporting were issued. External quality assessment is a crucial element to improve the quality of molecular testing. PMID:25386659

  2. Map Showing Earthquake Shaking and Tsunami Hazard in Guadeloupe and Dominica, as a Result of an M8.0 Earthquake on the Lesser Antilles Megathrust

    USGS Multimedia Gallery

    Earthquake shaking (on land) and tsunami (ocean) hazard in Guadeloupe and Dominica, as a result of an M8.0 earthquake on the Lesser Antilles megathrust adjacent to Guadeloupe. Colors on land represent scenario earthquake shaking intensities calculated in USGS ShakeMap software (Wald et al. 20...

  3. Quantifying the Adaptive Cycle

    PubMed Central

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  4. Genetic susceptibility to childhood acute lymphoblastic leukemia shows protection in Malay boys: results from the Malaysia-Singapore ALL Study Group.

    PubMed

    Yeoh, Allen Eng-Juh; Lu, Yi; Chan, Jason Yong-Sheng; Chan, Yiong Huak; Ariffin, Hany; Kham, Shirley Kow-Yin; Quah, Thuan Chong

    2010-03-01

    To study genetic epidemiology of childhood acute lymphoblastic leukemia (ALL) in the Chinese and Malays, we investigated 10 polymorphisms encoding carcinogen- or folate-metabolism and transport. Sex-adjusted analysis showed NQO1 609CT significantly protects against ALL, whilst MTHFR 677CT confers marginal protection. Interestingly, we observed that NQO1 609CT and MTHFR 1298 C-allele have greater genetic impact in boys than in girls. The combination of SLC19A1 80GA heterozygosity and 3'-TYMS -6bp/-6bp homozygous deletion is associated with reduced ALL risk in Malay boys. Our study has suggested the importance of gender and race in modulating ALL susceptibility via the folate metabolic pathway. PMID:19651439

  5. Quantifying ice sheet flow characteristics

    NASA Astrophysics Data System (ADS)

    Balcerak, Ernie

    2012-02-01

    Advances have been made in describing ice sheet motion, but in situ rheology (characteristics that affect the flow) of the ice has been hard to measure in the field. Gillet-Chaulet et al. show that they can measure ice rheology and strain rates in situ using a phase-sensitive radar. They used the technique on the Greenland ice sheet to quantify the rheology there. The researchers were able to achieve sufficient resolution to measure a flow phenomenon known as the Raymond effect, in which the ice sheet shows horizontal variations of the vertical strain rate pattern, sometimes creating anticlines in radar-detected stratigraphic layers that are known as Raymond arches. This effect is due to a highly viscous plug of nearly stagnant ice under an ice ridge. The study is, the researchers believe, the first direct confirmation of the Raymond effect. Their results suggest that laboratory ice studies do not capture the full range of ice flow that exists in nature, so additional field studies are needed. (Geophysical Research Letters, doi:10.1029/2011GL049843, 2011)

  6. Mathematical modelling in Matlab of the experimental results shows the electrochemical potential difference - temperature of the WC coatings immersed in a NaCl solution

    NASA Astrophysics Data System (ADS)

    Benea, M. L.; Benea, O. D.

    2016-02-01

    The method used for assessing the corrosion behaviour of the WC coatings deposited by plasma spraying on a martensitic stainless steel substrate consists in measuring the electrochemical potential of the coating, and respectively that of the substrate, immersed in a NaCl solution as corrosive agent. The mathematical processing of the experimental results in Matlab allowed us to establish correlations between the electrochemical potential of the coating and the solution temperature; the relationship is well described by curves whose equations were obtained by fourth-order interpolation.
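
    The original processing was done in Matlab; the sketch below shows an equivalent fourth-order polynomial fit in Python with NumPy, using invented temperature and potential values purely for illustration.

      import numpy as np

      # Hypothetical measurements: solution temperature (deg C) and electrochemical
      # potential (mV) of the WC coating in the NaCl solution.
      temperature = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
      potential = np.array([-310.0, -322.0, -331.0, -338.0, -342.0, -344.0])

      coeffs = np.polyfit(temperature, potential, deg=4)   # 4th-order fit
      curve = np.poly1d(coeffs)
      print("Potential at 45 deg C:", round(float(curve(45.0)), 1), "mV")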

  7. Transitioning from preclinical to clinical chemopreventive assessments of lyophilized black raspberries: interim results show berries modulate markers of oxidative stress in Barrett's esophagus patients.

    PubMed

    Kresty, Laura A; Frankel, Wendy L; Hammond, Cynthia D; Baird, Maureen E; Mele, Jennifer M; Stoner, Gary D; Fromkes, John J

    2006-01-01

    Increased fruit and vegetable consumption is associated with decreased risk of a number of cancers of epithelial origin, including esophageal cancer. Dietary administration of lyophilized black raspberries (LBRs) has significantly inhibited chemically induced oral, esophageal, and colon carcinogenesis in animal models. Likewise, berry extracts added to cell cultures significantly inhibited cancer-associated processes. Positive results in preclinical studies have supported further investigation of berries and berry extracts in high-risk human cohorts, including patients with existing premalignancy or patients at risk for cancer recurrence. We are currently conducting a 6-mo chemopreventive pilot study administering 32 or 45 g (female and male, respectively) of LBRs to patients with Barrett's esophagus (BE), a premalignant esophageal condition in which the normal stratified squamous epithelium changes to a metaplastic columnar-lined epithelium. BE's importance lies in the fact that it confers a 30- to 40-fold increased risk for the development of esophageal adenocarcinoma, a rapidly increasing and extremely deadly malignancy. This is a report on interim findings from 10 patients. To date, the results support that daily consumption of LBRs promotes reductions in the urinary excretion of two markers of oxidative stress, 8-epi-prostaglandin F2alpha (8-Iso-PGF2) and, to a lesser more-variable extent, 8-hydroxy-2'-deoxyguanosine (8-OHdG), among patients with BE. PMID:16800781

  8. A second generation cervico-vaginal lavage device shows similar performance as its preceding version with respect to DNA yield and HPV DNA results

    PubMed Central

    2013-01-01

    Background Attendance rates of cervical screening programs can be increased by offering HPV self-sampling to non-attendees. Acceptability, DNA yield, lavage volumes and choice of hrHPV test can influence effectiveness of the self-sampling procedures and could therefore play a role in recruiting non-attendees. To increase user-friendliness, a frequently used lavage sampler was modified. In this study, we compared this second generation lavage device with the first generation device within similar birth cohorts. Methods Within a large self-sampling cohort-study among non-responders of the Dutch cervical screening program, a subset of 2,644 women received a second generation self-sampling lavage device, while 11,977 women, matched for age and ZIP-code, received the first generation model. The second generation device was different in shape, color, lavage volume, and packaging, in comparison to its first generation model. Cochran's test was used to compare both devices for hrHPV positivity rate and response rate. To correct for possible heterogeneity between age and ZIP codes in both groups, the Breslow-Day test of homogeneity was used. A T-test was utilized to compare DNA yields of the obtained material in both groups. Results Median DNA yields were 90.4 μg/ml (95% CI 83.2-97.5) and 91.1 μg/ml (95% CI 77.8-104.4, p= 0.726), and hrHPV positivity rates were 8.2% and 6.9% (p= 0.419) per sample self-collected by the second- and the first-generation device, respectively. In addition, response rates were comparable for the two models (35.4% versus 34.4%, p= 0.654). Conclusions Replacing the first generation self-sampling device by an ergonomically improved, second generation device resulted in equal DNA yields, comparable hrHPV positivity rates and similar response rates. Therefore, it can be concluded that the clinical performance of the first and second generation models is similar. Moreover, participation of non-attendees in cervical cancer screening is probably not predominantly determined by the type of self-collection device. PMID:23639287

  9. Value of Fused 18F-Choline-PET/MRI to Evaluate Prostate Cancer Relapse in Patients Showing Biochemical Recurrence after EBRT: Preliminary Results

    PubMed Central

    Piccardo, Arnoldo; Paparo, Francesco; Picazzo, Riccardo; Naseri, Mehrdad; Ricci, Paolo; Marziano, Andrea; Bacigalupo, Lorenzo; Biscaldi, Ennio; Rollandi, Gian Andrea; Grillo-Ruggieri, Filippo; Farsad, Mohsen

    2014-01-01

    Purpose. We compared the accuracy of 18F-Choline-PET/MRI with that of multiparametric MRI (mMRI), 18F-Choline-PET/CT, 18F-Fluoride-PET/CT, and contrast-enhanced CT (CeCT) in detecting relapse in patients with suspected relapse of prostate cancer (PC) after external beam radiotherapy (EBRT). We assessed the association between standard uptake value (SUV) and apparent diffusion coefficient (ADC). Methods. We evaluated 21 patients with biochemical relapse after EBRT. Patients underwent 18F-Choline-PET/contrast-enhanced (Ce)CT, 18F-Fluoride-PET/CT, and mMRI. Imaging coregistration of PET and mMRI was performed. Results. 18F-Choline-PET/MRI was positive in 18/21 patients, with a detection rate (DR) of 86%. DRs of 18F-Choline-PET/CT, CeCT, and mMRI were 76%, 43%, and 81%, respectively. In terms of DR the only significant difference was between 18F-Choline-PET/MRI and CeCT. On lesion-based analysis, the accuracy of 18F-Choline-PET/MRI, 18F-Choline-PET/CT, CeCT, and mMRI was 99%, 95%, 70%, and 85%, respectively. Accuracy, sensitivity, and NPV of 18F-Choline-PET/MRI were significantly higher than those of both mMRI and CeCT. On whole-body assessment of bone metastases, the sensitivity of 18F-Choline-PET/CT and 18F-Fluoride-PET/CT was significantly higher than that of CeCT. Regarding local and lymph node relapse, we found a significant inverse correlation between ADC and SUV-max. Conclusion. 18F-Choline-PET/MRI is a promising technique in detecting PC relapse. PMID:24877053

  10. Catalysis: Quantifying charge transfer

    NASA Astrophysics Data System (ADS)

    James, Trevor E.; Campbell, Charles T.

    2016-02-01

    Improving the design of catalytic materials for clean energy production requires a better understanding of their electronic properties, which remains experimentally challenging. Researchers now quantify the number of electrons transferred from metal nanoparticles to an oxide support as a function of particle size.

  11. Resolution of quantifier scope ambiguities.

    PubMed

    Kurtzman, H S; MacDonald, M C

    1993-09-01

    Various processing principles have been suggested to be governing the resolution of quantifier scope ambiguities in sentences such as Every kid climbed a tree. This paper investigates structural principles, that is, those which refer to the syntactic or semantic positions of the quantified phrases. To test these principles, the preferred interpretations for three grammatical constructions were determined in a task in which participants made speeded judgments of whether a sentence following a doubly quantified sentence was a reasonable discourse continuation of the quantified sentence. The observed preferences cannot be explained by any single structural principle, but point instead to the interaction of several principles. Contrary to many proposals, there is little or no effect of a principle that assigns scope according to the linear order of the phrases. The interaction of principles suggests that alternative interpretations of the ambiguity may be initially considered in parallel, followed by selection of the single interpretation that best satisfies the principles. These results are discussed in relation to theories of ambiguity resolution at other levels of linguistic representation. PMID:8269698

  12. Processing of Numerical and Proportional Quantifiers.

    PubMed

    Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus

    2015-09-01

    Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using numerical (at least seven, at least thirteen, at most seven, at most thirteen) or proportional (many, few) quantifiers. The visual displays were composed of systematically varied proportions of yellow and blue circles. The results demonstrated that numerical estimation and numerical reference information are fundamental in encoding the meaning of quantifiers in terms of response times and acceptability judgments. However, a difference emerges in the comparison strategies when a fixed external reference numerosity (seven or thirteen) is used for numerical quantifiers, whereas an internal numerical criterion is invoked for proportional quantifiers. Moreover, for both quantifier types, quantifier semantics and its polarity (positive vs. negative) biased the response direction (accept/reject). Overall, our results indicate that quantifier comprehension involves core numerical and lexical semantic properties, demonstrating integrated processing of language and numbers. PMID:25631283

  13. Quantifying magma segregation in dykes

    NASA Astrophysics Data System (ADS)

    Yamato, P.; Duretz, T.; May, D. A.; Tartèse, R.

    2015-10-01

    The dynamics of magma flow is highly affected by the presence of a crystalline load. During magma ascent, it has been demonstrated that crystal-melt segregation constitutes a viable mechanism for magmatic differentiation. Moreover, crystal-melt segregation during magma transport has important implications not only in terms of magma rheology, but also in terms of differentiation of the continental crust. However, the influences of the crystal volume percentage (φ), of crystal geometry, size and density on crystal-melt segregation are still not well constrained. To address these issues, we performed a parametric study using 2D direct numerical simulations, which model the ascent of a crystal-bearing magma in a vertical dyke. Using these models, we have characterised the amount of segregation as a function of different physical properties including φ, the density contrast between crystals and the melt phase (Δρ), the size of the crystals (Ac) and their aspect ratio (R). Results show that small values of R do not affect the segregation. In this case, the amount of segregation depends upon four parameters. Segregation is highest when Δρ and Ac are large, and lowest for a large pressure gradient (Pd) and/or large values of dyke width (Wd). These four parameters can be combined into a single one, the S number, which can be used to quantify the amount of segregation occurring during magma ascent. Based on systematic numerical modelling and dimensional analysis, we provide a first-order scaling law which allows quantification of the segregation for an arbitrary S number and φ, encompassing a wide range of typical parameters encountered in terrestrial magmatic systems. Although developed in a simplified system, this study has strong implications regarding our understanding of crystal segregation processes during magma transport. Our first-order scaling law allows immediate determination of the amount of crystal-melt segregation occurring in any given magmatic dyke system.

  14. Quantifying ubiquitin signaling.

    PubMed

    Ordureau, Alban; Münch, Christian; Harper, J Wade

    2015-05-21

    Ubiquitin (UB)-driven signaling systems permeate biology, and are often integrated with other types of post-translational modifications (PTMs), including phosphorylation. Flux through such pathways is dictated by the fractional stoichiometry of distinct modifications and protein assemblies as well as the spatial organization of pathway components. Yet, we rarely understand the dynamics and stoichiometry of rate-limiting intermediates along a reaction trajectory. Here, we review how quantitative proteomic tools and enrichment strategies are being used to quantify UB-dependent signaling systems, and to integrate UB signaling with regulatory phosphorylation events, illustrated with the PINK1/PARKIN pathway. A key feature of ubiquitylation is that the identity of UB chain linkage types can control downstream processes. We also describe how proteomic and enzymological tools can be used to identify and quantify UB chain synthesis and linkage preferences. The emergence of sophisticated quantitative proteomic approaches will set a new standard for elucidating biochemical mechanisms of UB-driven signaling systems. PMID:26000850

  15. Quantifying water diffusion in secondary organic material

    NASA Astrophysics Data System (ADS)

    Price, Hannah; Murray, Benjamin; Mattsson, Johan; O'Sullivan, Daniel; Wilson, Theodore; Zhang, Yue; Martin, Scot

    2014-05-01

    Recent research suggests that some secondary organic aerosol (SOA) is highly viscous under certain atmospheric conditions. This may have important consequences for equilibration timescales, SOA growth, heterogeneous chemistry and ice nucleation. In order to quantify these effects, knowledge of the diffusion coefficients of relevant gas species within aerosol particles is vital. In this work, a Raman isotope tracer method is used to quantify water diffusion coefficients over a range of atmospherically relevant humidity and temperature conditions. D2O is observed as it diffuses from the gas phase into a disk of aqueous solution, without the disk changing in size or viscosity. An analytical solution of Fick's second law is then used with a fitting procedure to determine water diffusion coefficients in reference materials for method validation. The technique is then extended to compounds of atmospheric relevance and α-pinene secondary organic material. We produce water diffusion coefficients from 20 to 80% RH at 23.5 °C for sucrose, levoglucosan, M5AS and MgSO4. For levoglucosan we show that under conditions where a particle bounces, water diffusion in aqueous solutions can be fast (a fraction of a second for a 100 nm radius). For sucrose solutions, we also show that the Stokes-Einstein relation breaks down at high viscosity and cannot be used to predict water diffusion timescales with accuracy. In addition, we also quantify water diffusion coefficients in α-pinene SOM from 20-80% RH and over temperatures from 6 to -30 °C. Our results suggest that, at 6 °C, water diffusion in α-pinene SOA is not kinetically limited on the second timescale, even at 20% RH. As temperatures decrease, however, diffusion slows and may become an increasingly limiting factor for atmospheric processes. A parameterization for the diffusion coefficient of water in α-pinene secondary organic material, as a function of relative humidity and temperature, is presented. The implications for atmospheric processes such as ice nucleation and heterogeneous chemistry in the mid- and upper-troposphere will be discussed.
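
    As a rough illustration of the fitting step described above (and not the authors' analysis code), the sketch below fits a water diffusion coefficient to a synthetic isotope-uptake profile using an analytical solution of Fick's second law for a semi-infinite medium with a constant surface concentration; the disk geometry, the Raman detection step, and all numbers (depths, time, D) are simplifying assumptions.

      # Minimal sketch: fit D to a depth profile via Fick's second law
      # (semi-infinite slab, constant surface concentration). Illustrative only.
      import numpy as np
      from scipy.special import erfc
      from scipy.optimize import curve_fit

      def fick_profile(x, D, t=600.0, c_surface=1.0):
          """Normalized D2O concentration at depth x (m) after time t (s)."""
          return c_surface * erfc(x / (2.0 * np.sqrt(D * t)))

      # Synthetic "measured" profile for a slow, viscous material (assumed values).
      depth = np.linspace(0, 50e-6, 25)            # 0-50 micrometres
      D_true = 1e-12                               # m^2 s^-1 (assumed)
      rng = np.random.default_rng(0)
      signal = fick_profile(depth, D_true) + rng.normal(0, 0.02, depth.size)

      D_fit, _ = curve_fit(lambda x, D: fick_profile(x, D), depth, signal,
                           p0=[1e-13], bounds=(1e-16, 1e-8))
      print(f"fitted D = {D_fit[0]:.2e} m^2/s")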

  16. Quantifying positional information during early embryonic development

    NASA Astrophysics Data System (ADS)

    Dubuis, Julien

    During the development of multicellular organisms, cells acquire information about their position in the embryo in response to morphogens whose concentrations vary along the anteroposterior axis. In this thesis, we provide an information-theoretic definition of positional information and demonstrate how it can be quantified from experimental data. We start by setting up the mathematical framework and qualitatively discuss which features of expression patterns can contribute to positional information. Then, using the four major gap genes of Drosophila (Hunchback, Krüppel, Giant, and Knirps) as an example, we focus on the experimental standards that need to be met to accurately compute positional information from immunofluorescence stainings. We show that immunofluorescence makes it possible to extract not only very accurate mean profiles but also statistical noise and noise correlations from gene expression profile distributions. We use this analysis to extract gap gene profile dynamics with 1-2 min precision and to quantify their profile reproducibility. Finally, we describe how to quantify positional information, in bits, from the experimental gap gene profiles previously generated. Our results show that any individual gene carries nearly two bits of information and that, taken together, these four gap genes carry enough information to define a cell's location along the anteroposterior axis of the embryo with an error bar of half the intercellular distance. This precision is nearly constant along the length of the embryo and nearly enough for each cell to have a unique identity. We argue that this constancy is a signature of optimality in the transmission of information from primary morphogen inputs to the output of the gap gene network.
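
    The sketch below illustrates, in heavily simplified form, how positional information in bits can be estimated from noisy expression profiles: a synthetic sigmoidal "gap gene" profile with Gaussian noise stands in for the measured Drosophila data, and the mutual information between position and expression is computed from binned histograms. The profile shape, noise level, and binning are assumptions of this example, not values from the thesis.

      # Minimal sketch: positional information I(g; x) = H(g) - <H(g|x)>, in bits,
      # estimated by histogram binning on synthetic profiles. Illustrative only.
      import numpy as np

      positions = np.linspace(0, 1, 100)                              # fractional egg length
      mean_profile = 1.0 / (1.0 + np.exp((positions - 0.5) / 0.05))   # sigmoid "gap gene"
      noise_sd = 0.05                                                 # assumed expression noise

      n_embryos = 2000
      rng = np.random.default_rng(1)
      samples = mean_profile + rng.normal(0, noise_sd, (n_embryos, positions.size))

      bins = np.linspace(-0.2, 1.2, 60)
      # H(g): entropy of expression pooled over all positions (uniform prior on x)
      p_g, _ = np.histogram(samples, bins=bins)
      p_g = p_g / p_g.sum()
      H_g = -np.sum(p_g[p_g > 0] * np.log2(p_g[p_g > 0]))

      # <H(g|x)>: average conditional entropy over positions
      H_g_given_x = 0.0
      for i in range(positions.size):
          p, _ = np.histogram(samples[:, i], bins=bins)
          p = p / p.sum()
          H_g_given_x += -np.sum(p[p > 0] * np.log2(p[p > 0])) / positions.size

      print(f"positional information ~ {H_g - H_g_given_x:.2f} bits")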

  17. Normalized wavelet packets quantifiers for condition monitoring

    NASA Astrophysics Data System (ADS)

    Feng, Yanhui; Schlindwein, Fernando S.

    2009-04-01

    Normalized wavelet packets quantifiers are proposed and studied as a new tool for condition monitoring. The new quantifiers construct a complete quantitative time-frequency analysis: the Wavelet packets relative energy measures the normalized energy of the wavelet packets node; the Total wavelet packets entropy measures how the normalized energies of the wavelet packets nodes are distributed in the frequency domain; the Wavelet packets node entropy describes the uncertainty of the normalized coefficients of the wavelet packets node. Unlike the feature extraction methods directly using the amplitude of wavelet coefficients, the new quantifiers are derived from probability distributions and are more robust in diagnostic applications. By applying these quantifiers to Acoustic Emission signals from faulty bearings of rotating machines, our study shows that both localized defects and advanced contamination faults can be successfully detected and diagnosed if the appropriate quantifier is chosen. The Bayesian classifier is used to quantitatively analyse and evaluate the performance of the proposed quantifiers. We also show that reducing the Daubechies wavelet order or the length of the segment will deteriorate the performance of the quantifiers. A two-dimensional diagnostic scheme can also help to improve the diagnostic performance but the improvements are only significant when using lower wavelet orders.
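
    A minimal sketch of the three quantifiers, assuming a PyWavelets-based implementation on a synthetic vibration-like signal (the wavelet, decomposition level, and test signal are illustrative choices, not the paper's settings):

      # Minimal sketch: wavelet-packet relative energy, total wavelet-packet
      # entropy, and per-node coefficient entropy on a synthetic signal.
      import numpy as np
      import pywt

      fs = 10_000
      t = np.arange(0, 1.0, 1.0 / fs)
      # Synthetic "bearing" signal: low-frequency tone + repetitive bursts + noise
      signal = (np.sin(2 * np.pi * 120 * t)
                + np.sin(2 * np.pi * 3000 * t) * (np.sin(2 * np.pi * 35 * t) > 0.99)
                + 0.3 * np.random.default_rng(2).normal(size=t.size))

      level = 4
      wp = pywt.WaveletPacket(data=signal, wavelet='db8', mode='symmetric', maxlevel=level)
      nodes = wp.get_level(level, order='freq')

      energies = np.array([np.sum(node.data ** 2) for node in nodes])
      relative_energy = energies / energies.sum()                           # quantifier 1

      total_entropy = -np.sum(relative_energy * np.log(relative_energy + 1e-12))  # quantifier 2

      def node_entropy(node):                                               # quantifier 3
          p = node.data ** 2
          p = p / p.sum()
          return -np.sum(p[p > 0] * np.log(p[p > 0]))

      print("relative energies :", np.round(relative_energy, 3))
      print("total WP entropy  :", round(total_entropy, 3))
      print("entropy of node 0 :", round(node_entropy(nodes[0]), 3))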

  18. On quantifying insect movements

    SciTech Connect

    Wiens, J.A.; Crist, T.O.; Milne, B.T.

    1993-08-01

    We elaborate on methods described by Turchin, Odendaal and Rausher for quantifying insect movement pathways. We note the need to scale measurement resolution to the study insects and the questions being asked, and we discuss the use of surveying instrumentation for recording sequential positions of individuals on pathways. We itemize several measures that may be used to characterize movement pathways and illustrate these by comparisons among several Eleodes beetles occurring in shortgrass steppe. The fractal dimension of pathways may provide insights not available from absolute measures of pathway configuration. Finally, we describe a renormalization procedure that may be used to remove sequential interdependence among locations of moving individuals while preserving the basic attributes of the pathway.
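
    A minimal sketch, on an invented random-walk pathway, of a few of the measures one might itemize for a digitized pathway (gross path length, net displacement, straightness, and mean cosine of turning angles); the surveying, fractal-dimension, and renormalization steps of the paper are not reproduced here.

      # Minimal sketch: basic movement-pathway statistics on synthetic coordinates.
      import numpy as np

      rng = np.random.default_rng(8)
      steps = rng.exponential(0.3, 200)                       # step lengths, m (assumed)
      headings = np.cumsum(rng.normal(0, 0.6, 200))           # correlated random headings
      xy = np.cumsum(np.column_stack([steps * np.cos(headings),
                                      steps * np.sin(headings)]), axis=0)

      seg = np.diff(xy, axis=0)
      seg_len = np.hypot(seg[:, 0], seg[:, 1])
      path_length = seg_len.sum()
      net_displacement = np.hypot(*(xy[-1] - xy[0]))
      seg_headings = np.arctan2(seg[:, 1], seg[:, 0])
      turning_angles = np.diff(seg_headings)

      print(f"path length      : {path_length:.1f} m")
      print(f"net displacement : {net_displacement:.1f} m")
      print(f"straightness     : {net_displacement / path_length:.2f}")
      print(f"mean cos(turn)   : {np.cos(turning_angles).mean():.2f}")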

  19. A new index quantifying the precipitation extremes

    NASA Astrophysics Data System (ADS)

    Busuioc, Aristita; Baciu, Madalina; Stoica, Cerasela

    2015-04-01

    Events of extreme precipitation have a great impact on society. They are associated with flooding, erosion and landslides. Various indices have been proposed to quantify these extreme events, and they are mainly related to daily precipitation amounts, which are usually available for long periods in many places over the world. The climate signal related to changes in the characteristics of precipitation extremes differs over various regions and depends on the season and the index used to quantify the precipitation extremes. Climate model simulations and empirical evidence suggest that warmer climates, due to increased water vapour, lead to more intense precipitation events, even when the total annual precipitation is slightly reduced. It has been suggested that there is a shift in the nature of precipitation events towards more intense and less frequent rains, and that increases in heavy rains are expected to occur in most places, even when the mean precipitation is not increasing. This conclusion was also proved for the Romanian territory in a recent study, showing a significant increasing trend of the rain shower frequency in the warm season over the entire country, despite no significant changes in the seasonal amount and the daily extremes. The shower events counted in that paper refer to all convective rains, including torrential ones giving high rainfall amounts in a very short time. The problem is to find an appropriate index to quantify such events in terms of their highest intensity in order to extract the maximum climate signal. In the present paper, a new index is proposed to quantify the maximum precipitation intensity in an extreme precipitation event, which can be directly related to the torrential rain intensity. This index is tested at nine Romanian stations (representing various physical-geographical conditions) and is based on the continuous rainfall records derived from the graphical registrations (pluviograms) available at the National Meteorological Administration in Romania. These types of records contain the rainfall intensity (mm/minute) over the various intervals for which it remains constant. The maximum intensity for each continuous rain over the May-August interval has been calculated for each year. The corresponding time series over the 1951-2008 period have been analysed in terms of their long-term trends and shifts in the mean; the results have been compared to those resulting from other rainfall indices based on daily and hourly data computed over the same interval, such as: total rainfall amount, maximum daily amount, contribution of total hourly amounts exceeding 10 mm/day, contribution of daily amounts exceeding the 90th percentile, and the 90th, 99th and 99.9th percentiles of 1-hour data. The results show that the proposed index exhibits a coherent and stronger climate signal (significant increase) for all analysed stations compared with the other indices associated with precipitation extremes, which show either no significant change or a weaker signal. This finding shows that the proposed index is the most appropriate to quantify the climate change signal of the precipitation extremes. We consider that this index is more naturally connected to the maximum intensity of a real rainfall event.
The research presented in this study was funded by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) through the research project CLIMHYDEX, "Changes in climate extremes and associated impact in hydrological events in Romania", code PNII-ID-2011-2-0073 (http://climhydex.meteoromania.ro)
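
    The sketch below illustrates the computation of an index of this kind under an assumed data layout: from records giving the constant intensity of each segment of a rain event, take the maximum within-event intensity over May-August of each year, then test for a monotonic trend (Kendall's tau is used here as a simple stand-in for the paper's trend and shift analysis). The station data, record format, and synthetic drift are invented.

      # Minimal sketch: annual maximum within-event rainfall intensity + trend test.
      import numpy as np
      import pandas as pd
      from scipy.stats import kendalltau

      # Hypothetical records: one row per constant-intensity segment of a rain event.
      rng = np.random.default_rng(3)
      years = np.repeat(np.arange(1951, 2009), 40)
      records = pd.DataFrame({
          "year": years,
          "month": rng.integers(5, 9, years.size),                  # May-August
          "intensity_mm_per_min": rng.gamma(2.0, 0.15, years.size)
                                  * (1 + 0.004 * (years - 1951)),   # weak upward drift
      })

      warm = records[records["month"].between(5, 8)]
      annual_max = warm.groupby("year")["intensity_mm_per_min"].max()

      tau, p = kendalltau(annual_max.index, annual_max.values)
      print(f"Kendall tau = {tau:.2f}, p = {p:.3f}")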

  20. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  1. Quantifier Comprehension in Corticobasal Degeneration

    ERIC Educational Resources Information Center

    McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray

    2006-01-01

    In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.

  3. Quantifying traffic exposure.

    PubMed

    Pratt, Gregory C; Parson, Kris; Shinoda, Naomi; Lindgren, Paula; Dunlap, Sara; Yawn, Barbara; Wollan, Peter; Johnson, Jean

    2014-01-01

    Living near traffic adversely affects health outcomes. Traffic exposure metrics include distance to high-traffic roads, traffic volume on nearby roads, traffic within buffer distances, measured pollutant concentrations, land-use regression estimates of pollution concentrations, and others. We used Geographic Information System software to explore a new approach using traffic count data and a kernel density calculation to generate a traffic density surface with a resolution of 50 m. The density value in each cell reflects all the traffic on all the roads within the distance specified in the kernel density algorithm. The effect of a given roadway on the raster cell value depends on the amount of traffic on the road segment, its distance from the raster cell, and the form of the algorithm. We used a Gaussian algorithm in which traffic influence became insignificant beyond 300 m. This metric integrates the deleterious effects of traffic rather than focusing on one pollutant. The density surface can be used to impute exposure at any point, and it can be used to quantify integrated exposure along a global positioning system route. The traffic density calculation compares favorably with other metrics for assessing traffic exposure and can be used in a variety of applications. PMID:24045427
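
    A minimal sketch of the kernel density idea, not the GIS workflow itself: road segments are reduced to points carrying traffic counts on a 50 m raster, and a Gaussian filter whose influence is negligible beyond roughly 300 m spreads each count over neighbouring cells. Coordinates, counts, and the sigma value are invented for illustration.

      # Minimal sketch: Gaussian kernel-density traffic surface on a 50 m raster.
      import numpy as np
      from scipy.ndimage import gaussian_filter

      cell = 50.0                                   # raster resolution, metres
      nx = ny = 200                                 # 10 km x 10 km study area
      surface = np.zeros((ny, nx))

      # Hypothetical road points: (x_m, y_m, vehicles_per_day)
      road_points = [(2500, 5000, 30000), (2550, 5000, 30000),
                     (7000, 1000, 8000), (7000, 1050, 8000)]
      for x, y, aadt in road_points:
          surface[int(y // cell), int(x // cell)] += aadt

      sigma_m = 100.0                               # ~3*sigma = 300 m effective reach
      density = gaussian_filter(surface, sigma=sigma_m / cell)

      # Impute exposure at an arbitrary address (x, y) by reading its raster cell.
      x_home, y_home = 2600.0, 5100.0
      print("traffic density at home cell:", density[int(y_home // cell), int(x_home // cell)])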

  4. Quantifying Loopy Network Architectures

    PubMed Central

    Katifori, Eleni; Magnasco, Marcelo O.

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs. PMID:22701593

  5. Quantifying T Lymphocyte Turnover

    PubMed Central

    De Boer, Rob J.; Perelson, Alan S.

    2013-01-01

    Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2′-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4+ and CD8+ T cell pools in mice and men. PMID:23313150

  6. Children's Developing Intuitions About the Truth Conditions and Implications of Novel Generics Versus Quantified Statements.

    PubMed

    Brandone, Amanda C; Gelman, Susan A; Hedglen, Jenna

    2015-05-01

    Generic statements express generalizations about categories and present a unique semantic profile that is distinct from quantified statements. This paper reports two studies examining the development of children's intuitions about the semantics of generics and how they differ from statements quantified by all, most, and some. Results reveal that, like adults, preschoolers (a) recognize that generics have flexible truth conditions and are capable of representing a wide range of prevalence levels; and (b) interpret novel generics as having near-universal prevalence implications. Results further show that by age 4, children are beginning to differentiate the meaning of generics and quantified statements; however, even 7- to 11-year-olds are not adultlike in their intuitions about the meaning of most-quantified statements. Overall, these studies suggest that by preschool, children interpret generics in much the same way that adults do; however, mastery of the semantics of quantified statements follows a more protracted course. PMID:25297340

  7. Quantifying actin wave modulation on periodic topography

    NASA Astrophysics Data System (ADS)

    Guven, Can; Driscoll, Meghan; Sun, Xiaoyu; Parker, Joshua; Fourkas, John; Carlsson, Anders; Losert, Wolfgang

    2014-03-01

    Actin is the essential builder of the cell cytoskeleton, whose dynamics are responsible for generating the necessary forces for the formation of protrusions. By exposing amoeboid cells to periodic topographical cues, we show that actin can be directionally guided via inducing preferential polymerization waves. To quantify the dynamics of these actin waves and their interaction with the substrate, we modify a technique from computer vision called ``optical flow.'' We obtain vectors that represent the apparent actin flow and cluster these vectors to obtain patches of newly polymerized actin, which represent actin waves. Using this technique, we compare experimental results, including speed distribution of waves and distance from the wave centroid to the closest ridge, with actin polymerization simulations. We hypothesize the modulation of the activity of nucleation promotion factors on ridges (elevated regions of the surface) as a potential mechanism for the wave-substrate coupling. Funded by NIH grant R01GM085574.

  8. Terahertz spectroscopy for quantifying refined oil mixtures.

    PubMed

    Li, Yi-nan; Li, Jian; Zeng, Zhou-mo; Li, Jie; Tian, Zhen; Wang, Wei-kui

    2012-08-20

    In this paper, the absorption coefficient spectra of samples prepared as mixtures of gasoline and diesel in different proportions are obtained by terahertz time-domain spectroscopy. To quantify the components of refined oil mixtures, a method is proposed to evaluate the best frequency band for regression analysis. With the data in this frequency band, two-variable (dualistic) linear regression fitting is used to determine the volume fractions of gasoline and diesel in the mixture based on the Beer-Lambert law. The minimum R-squared of the regression fits is 0.99967, and the mean error of the fitted volume fraction of 97# gasoline is 4.3%. Results show that refined oil mixtures can be quantitatively analyzed through absorption coefficient spectra at terahertz frequencies, indicating promising applications in the storage and transportation of refined oil. PMID:22907017
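
    A minimal sketch of the quantification step under the Beer-Lambert assumption (the mixture's absorption coefficient is the volume-fraction-weighted sum of the pure-component spectra), with synthetic spectra standing in for measured THz data; the band-selection step of the paper is omitted.

      # Minimal sketch: recover gasoline/diesel volume fractions by linear
      # least squares on absorption-coefficient spectra (Beer-Lambert mixing).
      import numpy as np

      freq = np.linspace(0.2, 1.2, 120)                       # THz band (assumed)
      alpha_gasoline = 5 + 12 * freq + 2 * np.sin(6 * freq)   # fake pure spectra
      alpha_diesel   = 8 + 20 * freq

      true_frac = 0.35                                         # gasoline volume fraction
      rng = np.random.default_rng(4)
      alpha_mix = (true_frac * alpha_gasoline + (1 - true_frac) * alpha_diesel
                   + rng.normal(0, 0.2, freq.size))

      # Design matrix: one column per component; solve for the two fractions.
      A = np.column_stack([alpha_gasoline, alpha_diesel])
      fracs, *_ = np.linalg.lstsq(A, alpha_mix, rcond=None)
      fracs = fracs / fracs.sum()                              # enforce fractions summing to 1
      print(f"estimated gasoline fraction: {fracs[0]:.3f}")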

  9. Quantifying errors in trace species transport modeling

    PubMed Central

    Prather, Michael J.; Zhu, Xin; Strahan, Susan E.; Steenrod, Stephen D.; Rodriguez, Jose M.

    2008-01-01

    One expectation when computationally solving an Earth system model is that a correct answer exists, that with adequate physical approximations and numerical methods our solutions will converge to that single answer. With such hubris, we performed a controlled numerical test of the atmospheric transport of CO2 using 2 models known for accurate transport of trace species. Resulting differences were unexpectedly large, indicating that in some cases, scientific conclusions may err because of lack of knowledge of the numerical errors in tracer transport models. By doubling the resolution, thereby reducing numerical error, both models show some convergence to the same answer. Now, under realistic conditions, we identify a practical approach for finding the correct answer and thus quantifying the advection error. PMID:19066224

  10. Methods to quantify intermittent exercises.

    PubMed

    Desgorces, François-Denis; Sénégas, Xavier; Garcia, Judith; Decker, Leslie; Noirez, Philippe

    2007-08-01

    The purpose of this study was to quantify intermittent training sessions using different types of exercise. Strength, sprint, and endurance sessions were performed until exhaustion. These sessions were quantified by the product of duration and heart rate (HR) (i.e., training impulse (TRIMP) and HR-zone methods), by the product of duration and rate of perceived exertion (RPE-based method), and by a new method (work endurance recovery (WER)). The WER method aims to determine the level of exercise-induced physiological stress using the ratio of cumulated work to endurance limit, which is associated with the Napierian logarithm of the work-to-recovery ratio. Each session's effects were assessed using blood lactate, delayed onset muscle soreness (DOMS), RPE, and HR. Because sessions were performed until exhaustion, it was assumed that each session would have a similar training load (TL) and that there would be low interindividual variability. Each method was used to compare the TL quantifications. The endurance session induced the highest HR response (p < 0.001), the sprint session the highest blood lactate increase (p < 0.001), and the strength session higher DOMS than the sprint session (p = 0.007). TLs were similar after WER calculations, whereas the HR- and RPE-based methods showed differences between endurance and sprint TL (p < 0.001), and between endurance and strength TL (p < 0.001 and p < 0.01, respectively). The TLs from WER were correlated with those of the HR-based methods for endurance exercise, for which HR is known to accurately reflect the exercise-induced physiological stress (r = 0.63 and r = 0.64, p < 0.05). In addition, the TL from WER presented low interindividual variability, whereas a marked variability was observed in the TLs of the HR- and RPE-based methods. As opposed to the latter two methods, WER can quantify varied intermittent exercises and makes it possible to compare athletes' TL. Furthermore, WER can also assist in comparing athlete responses to training programs. PMID:17622291

  11. Quantifying phylogenetically structured environmental variation.

    PubMed

    Desdevises, Yves; Legendre, Pierre; Azouzi, Lamia; Morand, Serge

    2003-11-01

    Comparative analysis methods control for the variation linked to phylogeny before attempting to correlate the remaining variation of a trait to present-day conditions (i.e., ecology and/or environment). A portion of the phylogenetic variation of the trait may be related to ecology, however; this portion is called "phylogenetic niche conservatism." We propose a method of variation partitioning that allows users to quantify this portion of the variation, called the "phylogenetically structured environmental variation." The new method is applied to published data to study, in a phylogenetic framework, the link between body mass and population density in 79 species of mammals. The results suggest that an important part of the variation of mammal body mass is related to the common influence of phylogeny and population density. PMID:14686540

  12. Quantifying network heterogeneity.

    PubMed

    Estrada, Ernesto

    2010-12-01

    Although degree distributions give some insight into how heterogeneous a network is, they fail to give a unique quantitative characterization of network heterogeneity. This is particularly the case when several different distributions fit the same network, when the number of data points is very scarce due to network size, or when we have to compare two networks with completely different degree distributions. Here we propose a unique characterization of network heterogeneity based on the difference of functions of node degrees for all pairs of linked nodes. We show that this heterogeneity index can be expressed as a quadratic form of the Laplacian matrix of the network, which allows a spectral representation of network heterogeneity. We give bounds for this index, which is equal to zero for any regular network and equal to one only for star graphs. Using it we study random networks, showing that those generated by the Erdős-Rényi algorithm have zero heterogeneity, and those generated by the preferential attachment method of Barabási and Albert display only 11% of the heterogeneity of a star graph. We finally study 52 real-world networks and find that they display a large variety of heterogeneities. We also show that a classification system based on degree distributions does not reflect the heterogeneity properties of real-world networks. PMID:21230700
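
    A minimal sketch of an index of this kind: sum, over every edge, the squared difference of a function of the endpoint degrees (here 1/sqrt(k)) and normalize so that regular graphs score 0 and a star graph scores 1. Treat the exact functional form and normalization as assumptions of this sketch rather than the paper's definition.

      # Minimal sketch: edge-based degree-heterogeneity index on example graphs.
      import math
      import networkx as nx

      def heterogeneity(G):
          s = sum((1 / math.sqrt(G.degree(u)) - 1 / math.sqrt(G.degree(v))) ** 2
                  for u, v in G.edges())
          n = G.number_of_nodes()
          return s / (n - 2 * math.sqrt(n - 1))      # assumed star-graph normalization

      print("ring :", round(heterogeneity(nx.cycle_graph(100)), 3))                    # regular -> 0
      print("star :", round(heterogeneity(nx.star_graph(99)), 3))                      # -> 1
      print("ER   :", round(heterogeneity(nx.erdos_renyi_graph(100, 0.05, seed=5)), 3))
      print("BA   :", round(heterogeneity(nx.barabasi_albert_graph(100, 2, seed=5)), 3))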

  13. Quantifying the quiet epidemic

    PubMed Central

    2014-01-01

    During the late 20th century numerical rating scales became central to the diagnosis of dementia and helped transform attitudes about its causes and prevalence. Concentrating largely on the development and use of the Blessed Dementia Scale, I argue that rating scales served professional ends during the 1960s and 1970s. They helped old age psychiatrists establish jurisdiction over conditions such as dementia and present their field as a vital component of the welfare state, where they argued that ‘reliable modes of diagnosis’ were vital to the allocation of resources. I show how these arguments appealed to politicians, funding bodies and patient groups, who agreed that dementia was a distinct disease and claimed research on its causes and prevention should be designated ‘top priority’. But I also show that worries about the replacement of clinical acumen with technical and depersonalized methods, which could conceivably be applied by anyone, led psychiatrists to stress that rating scales had their limits and could be used only by trained experts. PMID:25866448

  14. Quantifying nonisothermal subsurface soil water evaporation

    NASA Astrophysics Data System (ADS)

    Deol, Pukhraj; Heitman, Josh; Amoozegar, Aziz; Ren, Tusheng; Horton, Robert

    2012-11-01

    Accurate quantification of energy and mass transfer during soil water evaporation is critical for improving understanding of the hydrologic cycle and for many environmental, agricultural, and engineering applications. Drying of soil under radiation boundary conditions results in formation of a dry surface layer (DSL), which is accompanied by a shift in the position of the latent heat sink from the surface to the subsurface. Detailed investigation of evaporative dynamics within this active near-surface zone has mostly been limited to modeling, with few measurements available to test models. Soil column studies were conducted to quantify nonisothermal subsurface evaporation profiles using a sensible heat balance (SHB) approach. Eleven-needle heat pulse probes were used to measure soil temperature and thermal property distributions at the millimeter scale in the near-surface soil. Depth-integrated SHB evaporation rates were compared with mass balance evaporation estimates under controlled laboratory conditions. The results show that the SHB method effectively measured total subsurface evaporation rates with only 0.01-0.03 mm h-1difference from mass balance estimates. The SHB approach also quantified millimeter-scale nonisothermal subsurface evaporation profiles over a drying event, which has not been previously possible. Thickness of the DSL was also examined using measured soil thermal conductivity distributions near the drying surface. Estimates of the DSL thickness were consistent with observed evaporation profile distributions from SHB. Estimated thickness of the DSL was further used to compute diffusive vapor flux. The diffusive vapor flux also closely matched both mass balance evaporation rates and subsurface evaporation rates estimated from SHB.

  15. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow

  16. Quantifying Dictyostelium discoideum Aggregation

    NASA Astrophysics Data System (ADS)

    McCann, Colin; Kriebel, Paul; Parent, Carole; Losert, Wolfgang

    2008-03-01

    Upon nutrient deprivation, the social amoebae Dictyostelium discoideum enter a developmental program causing them to aggregate into multicellular organisms. During this process cells sense and secrete chemical signals, often moving in a head-to-tail fashion called a `stream' as they assemble into larger entities. We measure Dictyostelium speed, shape, and directionality, both inside and outside of streams, and develop methods to distinguish group dynamics from behavior of individual cells. We observe an overall increase in speed during aggregation and a decrease in speed fluctuations once a cell joins a stream. Initial results indicate that when cells are in close proximity the trailing cells migrate specifically toward the backs of leading cells.

  17. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébeault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491

  18. Quantifying periodicity in omics data

    PubMed Central

    Amariei, Cornelia; Tomita, Masaru; Murray, Douglas B.

    2014-01-01

    Oscillations play a significant role in biological systems, with many examples in the fast, ultradian, circadian, circalunar, and yearly time domains. However, determining periodicity in such data can be problematic. There are a number of computational methods to identify the periodic components in large datasets, such as signal-to-noise based Fourier decomposition, Fisher's g-test and autocorrelation. However, the available methods assume a sinusoidal model and do not attempt to quantify the waveform shape and the presence of multiple periodicities, which provide vital clues in determining the underlying dynamics. Here, we developed a Fourier based measure that generates a de-noised waveform from multiple significant frequencies. This waveform is then correlated with the raw data from the respiratory oscillation found in yeast, to provide oscillation statistics including waveform metrics and multi-periods. The method is compared and contrasted to commonly used statistics. Moreover, we show the utility of the program in the analysis of noisy datasets and other high-throughput analyses, such as metabolomics and flow cytometry, respectively. PMID:25364747
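
    A minimal sketch of the general idea (not the published tool): build a de-noised waveform from the dominant Fourier components of a noisy oscillatory series, correlate it with the raw data, and read off the periods retained. The significance threshold and the synthetic test signal are assumptions of this example.

      # Minimal sketch: Fourier-based de-noised waveform and retained periods.
      import numpy as np

      dt = 0.5                                        # sampling interval, minutes
      t = np.arange(0, 600, dt)
      rng = np.random.default_rng(6)
      raw = (np.sin(2 * np.pi * t / 40)               # 40-min respiratory-like cycle
             + 0.4 * np.sin(2 * np.pi * t / 13)       # faster secondary period
             + 0.8 * rng.normal(size=t.size))

      spec = np.fft.rfft(raw - raw.mean())
      freqs = np.fft.rfftfreq(t.size, d=dt)
      power = np.abs(spec) ** 2

      keep = power > 10 * np.median(power)            # crude significance threshold
      denoised = np.fft.irfft(np.where(keep, spec, 0), n=t.size)

      r = np.corrcoef(raw - raw.mean(), denoised)[0, 1]
      periods = 1 / freqs[keep & (freqs > 0)]
      print("retained periods (min):", np.round(periods, 1))
      print("correlation with raw  :", round(r, 2))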

  19. Quantifying the seismicity on Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Turcotte, Donald L.; Rundle, John B.

    2013-07-01

    We quantify the seismicity on the island of Taiwan using the frequency-magnitude statistics of earthquakes since 1900. A break in Gutenberg-Richter scaling for large earthquakes in global seismicity has been observed; this break is also observed in our Taiwan study. The seismic data from the Central Weather Bureau Seismic Network are in good agreement with the Gutenberg-Richter relation taking b ≈ 1 when M < 7. For large earthquakes, M ≥ 7, the seismic data fit Gutenberg-Richter scaling with b ≈ 1.5. If the Gutenberg-Richter scaling for M < 7 earthquakes is extrapolated to larger earthquakes, we would expect an M > 8 earthquake in the study region about every 25 yr. However, our analysis shows a lower frequency of occurrence of large earthquakes, so that the expected recurrence interval of M > 8 earthquakes is about 200 yr. The level of seismicity for smaller earthquakes on Taiwan is about 12 times greater than in Southern California, and the possibility of an M ≥ 9 earthquake north or south of Taiwan cannot be ruled out. In light of the Fukushima, Japan nuclear disaster, we also discuss the implications of our study for the three operating nuclear power plants on the coast of Taiwan.
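
    The sketch below illustrates the scaling arithmetic behind recurrence estimates of this kind on a synthetic catalogue (not the Central Weather Bureau data): the b-value is estimated with the Aki maximum-likelihood formula and the rate of rare large events is extrapolated with and without a steeper b above M 7. The completeness magnitude, catalogue size, and rates are invented.

      # Minimal sketch: Gutenberg-Richter b-value estimate and recurrence extrapolation.
      import numpy as np

      rng = np.random.default_rng(7)
      Mc = 4.0                                             # completeness magnitude (assumed)
      n_events, years = 20000, 100.0                       # invented catalogue size and span
      # G-R magnitudes above Mc: exponential with rate b*ln(10), here b = 1
      mags = Mc + rng.exponential(1.0 / (1.0 * np.log(10)), n_events)

      b_hat = np.log10(np.e) / (mags.mean() - Mc)          # Aki (1965) estimator
      rate_Mc = n_events / years                           # events/yr with M >= Mc
      rate_7 = rate_Mc * 10 ** (-b_hat * (7 - Mc))         # events/yr with M >= 7

      rate_8_same_b = rate_7 * 10 ** (-b_hat * 1.0)        # extrapolate with b ~ 1
      rate_8_break  = rate_7 * 10 ** (-1.5 * 1.0)          # steeper b ~ 1.5 above M 7
      print(f"b estimate: {b_hat:.2f}")
      print(f"M>=8 recurrence: {1 / rate_8_same_b:.0f} yr (b unchanged), "
            f"{1 / rate_8_break:.0f} yr (b = 1.5 above M 7)")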

  20. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  1. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    The ESO Very Large Telescope Interferometer, which allows astronomers to scrutinise objects with a precision equivalent to that of a 130-m telescope, is proving itself an unequalled success every day. One of the latest instruments installed, AMBER, has led to a flurry of scientific results, an anthology of which is being published this week as special features in the research journal Astronomy & Astrophysics. [ESO PR Photo 06a/07: The AMBER Instrument] "With its unique capabilities, the VLT Interferometer (VLTI) has created itself a niche in which it provides answers to many astronomical questions, from the shape of stars, to discs around stars, to the surroundings of the supermassive black holes in active galaxies," says Jorge Melnick (ESO), the VLT Project Scientist. The VLTI has already led to 55 scientific papers and is in fact producing more than half of the interferometric results worldwide. "With the capability of AMBER to combine up to three of the 8.2-m VLT Unit Telescopes, we can really achieve what nobody else can do," added Fabien Malbet, from the LAOG (France), the AMBER Project Scientist. Eleven articles will appear this week in Astronomy & Astrophysics' special AMBER section. Three of them describe the unique instrument, while the other eight reveal completely new results about the early and late stages in the life of stars. [ESO PR Photo 06b/07: The Inner Winds of Eta Carinae] The first results presented in this issue cover various fields of stellar and circumstellar physics. Two papers deal with very young solar-like stars, offering new information about the geometry of the surrounding discs and associated outflowing winds. Other articles are devoted to the study of hot active stars of particular interest: Alpha Arae, Kappa Canis Majoris, and CPD -57°2874. They provide new, precise information about their rotating gas envelopes. An important new result concerns the enigmatic object Eta Carinae. Using AMBER with its high spatial and spectral resolution, it was possible to zoom into the very heart of this very massive star. In this innermost region, the observations are dominated by the extremely dense stellar wind that totally obscures the underlying central star. The AMBER observations show that this dense stellar wind is not spherically symmetric, but exhibits a clearly elongated structure. Overall, the AMBER observations confirm that the extremely high mass loss of Eta Carinae's massive central star is non-spherical and much stronger along the poles than in the equatorial plane. This is in agreement with theoretical models that predict such an enhanced polar mass loss in the case of rapidly rotating stars. [ESO PR Photo 06c/07: RS Ophiuchi in Outburst] Several papers from this special feature focus on the later stages in a star's life. One looks at the binary system Gamma 2 Velorum, which contains the closest example of a Wolf-Rayet star. A single AMBER observation allowed the astronomers to separate the spectra of the two components, offering new insights into the modeling of Wolf-Rayet stars, and also made it possible to measure the separation between the two stars. This led to a new determination of the distance of the system, showing that previous estimates were incorrect. The observations also revealed information on the region where the winds from the two stars collide.
The famous binary system RS Ophiuchi, an example of a recurrent nova, was observed just 5 days after it was discovered to be in outburst on 12 February 2006, an event that had been expected for 21 years. AMBER was able to detect the extension of the expanding nova emission. These observations show a complex geometry and kinematics, far from the simple interpretation of a spherical fireball in expansion. AMBER has detected a high-velocity jet probably perpendicular to the orbital plane of the binary system, and allowed a precise and careful study of the wind and the shockwave coming from the nova. The stream of results from the VLTI and AMBER

  2. Quantifying renewable groundwater stress with GRACE

    NASA Astrophysics Data System (ADS)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min-Hui; Reager, John T.; Famiglietti, James S.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-07-01

    Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions.

  3. Quantifying renewable groundwater stress with GRACE

    PubMed Central

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Reager, John T.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-01-01

    Abstract Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human‐dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE‐based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions. PMID:26900185

  4. Quantifying stressors among Iowa farmers.

    PubMed

    Freeman, S A; Schwab, C V; Jiang, Q

    2008-10-01

    In order to identify events/activities that are particularly stressful for farmers/ranchers, a farm stress survey based on the proportionate scaling method was mailed to a stratified random sample of 3000 Iowa farmers by the USDA National Agricultural Statistics Service. The participants were asked to compare 62 life events and farm activities to a marriage (assigned a baseline rating of 50), decide whether each was less stressful or more stressful, and then assign a stress rating between 1 and 100. As expected, the most stressful events were the death of a spouse or child. Other high-stress events were disabling injuries, foreclosure on a mortgage, divorce, machinery breakdown during harvest, and loss of crop to weather. Mean stress ratings varied by age, marital status, and type of farming enterprise. Farmers between the ages of 40-59 and 60-79 had the most items with high stress levels. Females had more high-stress items than males. Divorced farmers had fewer high-stress items than other respondents. Farmers whose primary focus was raising horses had more high-stress items than those with other farm types. Significant outcomes of this study go beyond the specific mean stress ratings of the events and activities. The results indicate that farm stressors can be quantified using the proportionate scaling method and that the impact of a stressor is based not just on the event but is also dependent on the characteristics of the farmer (e.g., age, gender, marital status, etc.). PMID:19044170

  5. The Art Show

    ERIC Educational Resources Information Center

    Scolarici, Alicia

    2004-01-01

    This article describes what once was thought to be impossible--a formal art show extravaganza at an elementary school with 1,000 students, a Department of Defense Dependent School (DODDS) located overseas, on RAF Lakenheath, England. The dream of this event involved the transformation of the school cafeteria into an elegant art show

  6. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2014-07-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with the large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (Coefficient of Variation of 12% for standards, 4% for ambient samples), and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution and Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, road-side, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per air volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in southeast US. However, the greater heterogeneity in the intrinsic DTT activity (per PM mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.
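
    For orientation, the sketch below shows the generic assay arithmetic (not the instrument-control software): DTT activity is the rate at which DTT is consumed, taken as the slope of remaining DTT versus incubation time, optionally normalized by sampled air volume or PM mass. All readings and normalization values are invented.

      # Minimal sketch: DTT consumption rate from time-course readings.
      import numpy as np
      from scipy.stats import linregress

      time_min = np.array([0, 10, 20, 30, 40])                 # incubation times
      dtt_nmol = np.array([100.0, 92.5, 85.8, 78.6, 71.9])     # DTT remaining (assumed)

      slope, intercept, r, p, se = linregress(time_min, dtt_nmol)
      rate = -slope                                            # nmol DTT consumed per minute
      print(f"DTT activity: {rate:.2f} nmol/min (r^2 = {r**2:.3f})")

      air_volume_m3 = 10.0                                     # hypothetical sampled air volume
      pm_mass_ug = 150.0                                       # hypothetical PM2.5 mass on filter
      print(f"volume-normalized: {rate / air_volume_m3:.3f} nmol min^-1 m^-3")
      print(f"mass-normalized  : {rate / pm_mass_ug * 1000:.2f} pmol min^-1 ug^-1")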

  7. Unary quantifiers, transitive closure, and relations of large degree

    NASA Astrophysics Data System (ADS)

    Libkin, Leonid; Wong, Limsoon

    This paper studies expressivity bounds for extensions of first-order logic with counting and unary quantifiers in the presence of relations of large degree. There are several motivations for this work. First, it is known that first-order logic with counting quantifiers captures uniform TC0 over ordered structures. Thus, proving expressivity bounds for first-order logic with counting can be seen as an attempt to show TC0 ⊊ DLOG using techniques of descriptive complexity. Second, the presence of auxiliary built-in relations (e.g., order, successor) is known to make a big impact on expressivity results in finite-model theory and database theory. Our goal is to extend techniques from the "pure" setting to that of auxiliary relations.

  8. Quantifying foot deformation using finite helical angle.

    PubMed

    Pothrat, Claude; Goislard de Monsabert, Benjamin; Vigouroux, Laurent; Viehweger, Elke; Berton, Eric; Rao, Guillaume

    2015-10-15

    Foot intrinsic motion originates from the combination of numerous joint motions, giving this segment a high adaptive ability. Existing foot kinematic models are mostly focused on analyzing small-scale foot bone-to-bone motions, which require both complex experimental methodology and complex interpretative work to assess the global foot functionality. This study proposes a method to assess the total foot deformation by calculating a helical angle from the relative motions of the rearfoot and the forefoot. This method required a limited number of retro-reflective markers placed on the foot and was tested for five different movements (walking, forefoot impact running, heel impact running, 90° cutting, and 180° U-turn) and 12 participants. Over-time intraclass correlation coefficients were calculated to quantify the helical angle pattern repeatability for each movement. Our results indicated that the method was suitable to identify the different motions, as different amplitudes of helical angle were observed according to the flexibility required in each movement. Moreover, the results showed that the repeatability could be used to identify the mastering of each motion, as this repeatability was high for well-mastered movements. Together with existing methods, this new protocol could be applied to fully assess foot function in sport or clinical contexts. PMID:26319503
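
    As a hedged illustration of the core computation (the authors' marker set and segment definitions are not reproduced here), the finite helical angle can be taken as the rotation angle of the forefoot relative to the rearfoot; the Python sketch below assumes the two segment rotation matrices have already been built from marker triads.

      import numpy as np
      from scipy.spatial.transform import Rotation as R

      def helical_angle_deg(R_rearfoot, R_forefoot):
          """Rotation angle (deg) of the forefoot relative to the rearfoot,
          i.e. the magnitude of the finite helical (screw) rotation."""
          R_rel = R.from_matrix(R_forefoot @ R_rearfoot.T)
          return np.degrees(np.linalg.norm(R_rel.as_rotvec()))

      # Hypothetical segment orientations for one video frame.
      R_rear = R.from_euler("xyz", [5, -2, 1], degrees=True).as_matrix()
      R_fore = R.from_euler("xyz", [12, 3, -4], degrees=True).as_matrix()
      print(f"finite helical angle: {helical_angle_deg(R_rear, R_fore):.1f} deg")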

  9. A Holographic Road Show.

    ERIC Educational Resources Information Center

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  10. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2015-01-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 15% for positive control, 4% for ambient samples) and reasonably low limit of detection (0.31 nmol min⁻¹). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution & Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, roadside, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. The correlation may also suggest a mechanistic explanation (oxidative stress) for observed PM2.5 mass-health associations. The heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.

  11. Quantifying coherence of Gaussian states

    NASA Astrophysics Data System (ADS)

    Xu, Jianwei

    2016-03-01

    Coherence arises from the superposition principle and plays a key role in quantum mechanics. Recently, Baumgratz et al. [T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401] established a rigorous framework for quantifying the coherence of finite-dimensional quantum states. In this work we provide a framework for quantifying the coherence of Gaussian states and explicitly give a coherence measure for Gaussian states based on the relative entropy.
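
    For context, the finite-dimensional framework of Baumgratz et al. cited above uses the relative entropy of coherence; a sketch of that defining formula (with S the von Neumann entropy, \Delta the full dephasing map in the chosen reference basis, and \mathcal{I} the set of incoherent states) is

      C_{\mathrm{RE}}(\rho) = \min_{\delta \in \mathcal{I}} S(\rho \| \delta) = S(\Delta(\rho)) - S(\rho), \qquad \Delta(\rho) = \sum_i \langle i | \rho | i \rangle \, | i \rangle \langle i | .

    The Gaussian measure introduced in the abstract restricts both the analyzed states and the reference set to Gaussian states, so this finite-dimensional closed form is quoted here only as background and should not be assumed to carry over unchanged.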

  12. Talk Show Science.

    ERIC Educational Resources Information Center

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  13. Obesity in show cats.

    PubMed

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation, at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. It warrants firm discussions with breeders and cat show judges to come to different interpretations of the standards in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes for obesity and requires early nutritional intervention to prevent obese conditions. PMID:24612018

  14. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. Rhode Island's high school graduating class became

  15. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  16. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  17. Show Them the Money.

    ERIC Educational Resources Information Center

    Levine, Elliott

    2002-01-01

    Strategies to help garner community support in school technology growth or maintenance include the following: (1) consider a "current state of technology" report; (2) forget five-year plans; (3) develop annual technology reports; (4) look to your website; (5) seek constructive opportunities to share technology, and (6) show off best examples at

  18. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
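
    As a generic illustration of the pure Monte Carlo (PMC) style of mixing calculation, and not the SIRS or SIAR algorithms themselves, the sketch below samples mixing fractions from a Dirichlet prior, draws source compositions from assumed δ15N/δ18O distributions, and keeps the fraction sets whose predicted mixture falls near a measured sample; all numbers are hypothetical.

      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical mean (d15N, d18O) and standard deviation for three sources.
      source_means = np.array([[2.0, 1.0], [9.0, 4.0], [20.0, 15.0]])
      source_sds = np.array([[1.0, 0.8], [1.5, 1.0], [2.0, 2.5]])
      sample = np.array([8.0, 5.0])   # measured mixture (permil)
      tolerance = 0.5                 # acceptance window (permil)

      accepted = []
      for _ in range(100_000):
          fractions = rng.dirichlet(np.ones(len(source_means)))   # uninformative prior
          sources = rng.normal(source_means, source_sds)          # one draw per source
          predicted = fractions @ sources
          if np.all(np.abs(predicted - sample) < tolerance):
              accepted.append(fractions)

      accepted = np.array(accepted)
      print("accepted draws:", len(accepted))
      print("posterior mean mixing fractions:", accepted.mean(axis=0).round(2))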

  19. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  20. Quantifying uncertainty in stable isotope mixing models

    DOE PAGESBeta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  1. Obesity in show dogs.

    PubMed

    Corbee, R J

    2012-08-11

    Obesity is an important disease with a growing incidence. Because obesity is related to several other diseases, and decreases life span, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain breeds is often suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, we investigated 1379 dogs of 128 different breeds by determining their body condition score (BCS). Overall, 18.6% of the show dogs had a BCS > 5, and 1.1% of the show dogs had a BCS > 7. There were significant differences between breeds, which could be correlated to the breed standards. It warrants firm discussions with breeders and judges in order to come to different interpretations of the standards to prevent overweight conditions from being the standard of beauty. PMID:22882163

  2. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
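
    A minimal sketch of the Bayesian side of such a calculation, assuming independent components with go/no-go data, flat Beta(1,1) priors, and a simple series layout; none of the system structure, priors, or data here are taken from the paper.

      import numpy as np

      rng = np.random.default_rng(1)

      # Hypothetical go/no-go data: (number of tests, number of failures) per component.
      components = {"A": (50, 0), "B": (30, 1), "C": (40, 2)}

      n_draws = 100_000
      system_reliability = np.ones(n_draws)
      for n_tests, n_fail in components.values():
          # Beta(1 + successes, 1 + failures) posterior for each component reliability.
          r = rng.beta(1 + n_tests - n_fail, 1 + n_fail, size=n_draws)
          system_reliability *= r   # series logic: every component must work

      lo, med, hi = np.percentile(system_reliability, [5, 50, 95])
      print(f"system reliability: median {med:.3f}, 90% interval [{lo:.3f}, {hi:.3f}]")

    Components with zero observed failures (like "A" here) are exactly the cases the abstract flags as making results sensitive to how such sparse data are handled.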

  3. Show Me the Carbonates

    NASA Astrophysics Data System (ADS)

    Martel, L. M. V.

    2003-10-01

    The Martian surface dust is 2 to 5 weight % carbonate minerals. Joshua Bandfield, Timothy Glotch, and Philip Christensen (Arizona State University) reported the result after examining Mars Global Surveyor Thermal Emission Spectrometer (TES) data from 21 high-albedo, dusty surfaces on Mars located between 30 degrees S and 15 degrees N. Trace amounts of carbonates are widely distributed in the silicate-rich dust, but no evidence has been found in the TES data for widespread deposits of exposed carbonate rock. The small amount of detected carbonate is more consistent with the idea that Mars has long been cold and mostly dry rather than a place formerly warm and wet with a thick carbon dioxide atmosphere, and especially favorable for life.

  4. Use of the Concept of Equivalent Biologically Effective Dose (BED) to Quantify the Contribution of Hyperthermia to Local Tumor Control in Radiohyperthermia Cervical Cancer Trials, and Comparison With Radiochemotherapy Results

    SciTech Connect

    Plataniotis, George A.; Dale, Roger G.

    2009-04-01

    Purpose: To express the magnitude of contribution of hyperthermia to local tumor control in radiohyperthermia (RT/HT) cervical cancer trials, in terms of the radiation-equivalent biologically effective dose (BED) and to explore the potential of the combined modalities in the treatment of this neoplasm. Materials and Methods: Local control rates of both arms of each study (RT vs. RT+HT) reported from randomized controlled trials (RCT) on concurrent RT/HT for cervical cancer were reviewed. By comparing the two tumor control probabilities (TCPs) from each study, we calculated the HT-related log cell-kill and then expressed it in terms of the number of 2 Gy fraction equivalents, for a range of tumor volumes and radiosensitivities. We have compared the contribution of each modality and made some exploratory calculations on the TCPs that might be expected from a combined trimodality treatment (RT+CT+HT). Results: The HT-equivalent number of 2-Gy fractions ranges from 0.6 to 4.8 depending on radiosensitivity. Opportunities for clinically detectable improvement by the addition of HT are only available in tumors with an alpha value in the approximate range of 0.22-0.28 Gy⁻¹. A combined treatment (RT+CT+HT) is not expected to improve prognosis in radioresistant tumors. Conclusion: The most significant improvements in TCP, which may result from the combination of RT/CT/HT for locally advanced cervical carcinomas, are likely to be limited only to those patients with tumors of relatively low-intermediate radiosensitivity.
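
    One way to express the conversion described above, assuming only the standard linear-quadratic (LQ) model (the symbols are generic LQ quantities, not values extracted from the trials): a schedule of n fractions of dose d has a biologically effective dose, and an extra log10 cell kill L attributed to hyperthermia corresponds to an equivalent number of 2 Gy fractions n_eq, given by

      \mathrm{BED} = n d \left(1 + \frac{d}{\alpha/\beta}\right), \qquad
      n_{\mathrm{eq}} = \frac{L \ln 10}{2\alpha \left(1 + \frac{2}{\alpha/\beta}\right)} .

    For instance, with an assumed alpha = 0.25 Gy⁻¹ and alpha/beta = 10 Gy, an extra half-decade of cell kill (L = 0.5) corresponds to roughly 1.9 equivalent 2 Gy fractions, which sits inside the 0.6-4.8 range quoted in the abstract.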

  5. Quantifying potential recharge in mantled sinkholes using ERT.

    PubMed

    Schwartz, Benjamin F; Schreiber, Madeline E

    2009-01-01

    Potential recharge through thick soils in mantled sinkholes was quantified using differential electrical resistivity tomography (ERT). Conversion of time series two-dimensional (2D) ERT profiles into 2D volumetric water content profiles using a numerically optimized form of Archie's law allowed us to monitor temporal changes in water content in soil profiles up to 9 m in depth. Combining Penman-Monteith daily potential evapotranspiration (PET) and daily precipitation data with potential recharge calculations for three sinkhole transects indicates that potential recharge occurred only during brief intervals over the study period and ranged from 19% to 31% of cumulative precipitation. Spatial analysis of ERT-derived water content showed that infiltration occurred both on sinkhole flanks and in sinkhole bottoms. Results also demonstrate that mantled sinkholes can act as regions of both rapid and slow recharge. Rapid recharge is likely the result of flow through macropores (such as root casts and thin gravel layers), while slow recharge is the result of unsaturated flow through fine-grained sediments. In addition to developing a new method for quantifying potential recharge at the field scale in unsaturated conditions, we show that mantled sinkholes are an important component of storage in a karst system. PMID:18823398
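
    A minimal sketch of the resistivity-to-water-content step, assuming a generic form of Archie's law; the petrophysical parameters (a, m, n, porosity, pore-water resistivity) are illustrative assumptions, not the study's numerically optimized values.

      import numpy as np

      def water_content(rho_bulk, rho_water=20.0, porosity=0.45, a=1.0, m=1.5, n=2.0):
          """Volumetric water content from bulk resistivity via Archie's law:
          rho_bulk = a * rho_water * porosity**(-m) * saturation**(-n)."""
          saturation = (a * rho_water / (rho_bulk * porosity**m)) ** (1.0 / n)
          return porosity * np.clip(saturation, 0.0, 1.0)

      # Hypothetical inverted ERT resistivities (ohm-m) down a soil profile;
      # differencing water content between repeat surveys gives the storage
      # change that, with PET and precipitation, constrains potential recharge.
      rho = np.array([400.0, 250.0, 120.0, 80.0])
      print(water_content(rho).round(3))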

  6. TRK Inhibitor Shows Early Promise.

    PubMed

    2016-01-01

    Results from a phase I study show that the TRK inhibitor LOXO-101 is well tolerated and effective, with patients whose tumors bear NTRK fusions responding well and durably to this targeted therapy. PMID:26603524

  7. Tracking and Quantifying Objects and Non-Cohesive Substances

    ERIC Educational Resources Information Center

    van Marle, Kristy; Wynn, Karen

    2011-01-01

    The present study tested infants' ability to assess and compare quantities of a food substance. Contrary to previous findings, the results suggest that by 10 months of age infants can quantify non-cohesive substances, and that this ability is different in important ways from their ability to quantify discrete objects: (1) In contrast to even much

  8. Gaussian intrinsic entanglement: An entanglement quantifier based on secret correlations

    NASA Astrophysics Data System (ADS)

    Mišta, Ladislav; Tatham, Richard

    2015-06-01

    Intrinsic entanglement (IE) is a quantity which aims at quantifying bipartite entanglement carried by a quantum state as an optimal amount of the intrinsic information that can be extracted from the state by measurement. We investigate in detail the properties of a Gaussian version of IE, the so-called Gaussian intrinsic entanglement (GIE). We show explicitly how GIE simplifies to the mutual information of a distribution of outcomes of measurements on a conditional state obtained by a measurement on a purifying subsystem of the analyzed state, which is first minimized over all measurements on the purifying subsystem and then maximized over all measurements on the conditional state. By constructing for any separable Gaussian state a purification and a measurement on the purifying subsystem which projects the purification onto a product state, we prove that GIE vanishes on all Gaussian separable states. Via realization of quantum operations by teleportation, we further show that GIE is nonincreasing under Gaussian local trace-preserving operations and classical communication. For pure Gaussian states and a reduction of the continuous-variable GHZ state, we calculate GIE analytically and we show that it is always equal to the Gaussian Rényi-2 entanglement. We also extend the analysis of IE to a non-Gaussian case by deriving an analytical lower bound on IE for a particular form of the non-Gaussian continuous-variable Werner state. Our results indicate that mapping of entanglement onto intrinsic information is capable of transmitting also quantitative properties of entanglement and that this property can be used for introduction of a quantifier of Gaussian entanglement which is a compromise between computable and physically meaningful entanglement quantifiers.

  9. PARAMETERS FOR QUANTIFYING BEAM HALO

    SciTech Connect

    C.K. ALLEN; T.P. WANGLER

    2001-06-01

    Two different parameters for the quantitative description of beam halo are introduced, both based on moments of the particle distribution. One parameter is a measure of spatial halo formation and has been defined previously by Wangler and Crandall [3], termed the profile parameter. The second parameter relies on kinematic invariants to quantify halo formation in phase space; we call it the halo parameter. The profile parameter can be computed from experimental beam profile data. The halo parameter provides a theoretically more complete description of halo in phase space, but is difficult to obtain experimentally.

  10. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-01

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short' - prior-24-hour recall obtained in the afternoon and 'long' - previous-day recall obtained in the morning) with four prompts ('forward' - distant to recent, 'meal name' - breakfast, etc., 'open' - no instructions, and 'reverse' - recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure - report rate - and reporting-error-sensitive measures - correspondence rate and inflation ratio - were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio - but not report rate - showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID:26865356

  11. Quantifying mixing using equilibrium reactions

    SciTech Connect

    Wheat, Philip M.; Posner, Jonathan D.

    2009-03-15

    A method of quantifying equilibrium reactions in a microchannel using a fluorometric reaction of Fluo-4 and Ca²⁺ ions is presented. Under the proper conditions, equilibrium reactions can be used to quantify fluid mixing without the challenges associated with constituent mixing measures such as limited imaging spatial resolution and viewing angle coupled with three-dimensional structure. Quantitative measurements of CaCl and calcium-indicating fluorescent dye Fluo-4 mixing are measured in Y-shaped microchannels. Reactant and product concentration distributions are modeled using Green's function solutions and a numerical solution to the advection-diffusion equation. Equilibrium reactions provide for an unambiguous, quantitative measure of mixing when the reactant concentrations are greater than 100 times their dissociation constant and the diffusivities are equal. At lower concentrations and for dissimilar diffusivities, the area averaged fluorescence signal reaches a maximum before the species have interdiffused, suggesting that reactant concentrations and diffusivities must be carefully selected to provide unambiguous, quantitative mixing measures. Fluorometric equilibrium reactions work over a wide range of pH and background concentrations such that they can be used for a wide variety of fluid mixing measures including industrial or microscale flows.

  12. Quantifying torso deformity in scoliosis

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter O.; Kumar, Anish; Durdle, Nelson G.; Raso, V. James

    2006-03-01

    Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.

  13. Quantifying entanglement with witness operators

    SciTech Connect

    Brandao, Fernando G.S.L.

    2005-08-15

    We present a unifying approach to the quantification of entanglement based on entanglement witnesses, which includes several already established entanglement measures such as the negativity, the concurrence, and the robustness of entanglement. We then introduce an infinite family of new entanglement quantifiers, having as its limits the best separable approximation measure and the generalized robustness. Gaussian states, states with symmetry, states constrained to super-selection rules, and states composed of indistinguishable particles are studied under the view of the witnessed entanglement. We derive new bounds to the fidelity of teleportation d_min, for the distillable entanglement E_D and for the entanglement of formation. A particular measure, the PPT-generalized robustness, stands out due to its easy calculability and provides sharper bounds to d_min and E_D than the negativity in most of the states. We illustrate our approach studying thermodynamical properties of entanglement in the Heisenberg XXX and dimerized models.

  14. Tagging SNP haplotype analysis of the secretory PLA2-V gene, PLA2G5, shows strong association with LDL and oxLDL levels, suggesting functional distinction from sPLA2-IIA: results from the UDACS study.

    PubMed

    Wootton, Peter T E; Arora, Nupur L; Drenos, Fotios; Thompson, Simon R; Cooper, Jackie A; Stephens, Jeffrey W; Hurel, Steven J; Hurt-Camejo, Eva; Wiklund, Olov; Humphries, Steve E; Talmud, Philippa J

    2007-06-15

    Animal and human studies suggest that both secretory PLA2 (sPLA2)-V and sPLA2-IIA (encoded, respectively, by the neighbouring PLA2G5 and PLA2G2A genes) contribute to atherogenesis. Elevated plasma sPLA2-IIA predicts coronary heart disease (CHD) risk, but no mass assay for sPLA2-V is available. We previously reported that tagging single nucleotide polymorphism (tSNP) haplotypes of PLA2G2A are strongly associated with sPLA2-IIA mass, but not lipid levels. Here, we use tSNPs of the sPLA2-V gene to investigate the association of PLA2G5 with CHD risk markers. Seven PLA2G5 tSNPs genotypes, explaining >92% of the locus genetic variability, were determined in 519 patients with Type II diabetes (in whom PLA2G2A tSNP data was available), and defined seven common haplotypes (frequencies >5%). PLA2G5 and PLA2G2A tSNPs showed linkage disequilibrium (LD). Compared to the common PLA2G5 haplotype, H1 (frequency 34.9%), haplotypes H2-7 were associated with overall higher plasma LDL (P < 0.00004) and total cholesterol (P < 0.00003) levels yet lower oxLDL/LDL (P = 0.006) and sPLA2-IIA mass (P = 0.04), probably reflecting LD with PLA2G2A. Intronic tSNP (rs11573248), unlikely itself to be functional, distinguished H1 from LDL-raising haplotypes and may mark a functional site. In conclusion, PLA2G5 tSNP haplotypes demonstrate an association with total and LDL cholesterol and oxLDL/LDL, not seen with PLA2G2A, thus confirming distinct functional roles for these two sPLA2s. PMID:17545304

  15. Quantifying Evaporation in a Permeable Pavement System

    EPA Science Inventory

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  16. Quantifying MLI Thermal Conduction in Cryogenic Applications from Experimental Data

    NASA Astrophysics Data System (ADS)

    Ross, R. G., Jr.

    2015-12-01

    Multilayer Insulation (MLI) uses stacks of low-emittance metalized sheets combined with low-conduction spacer features to greatly reduce the heat transfer to cryogenic applications from higher temperature surrounds. However, as the hot-side temperature decreases from room temperature to cryogenic temperatures, the level of radiant heat transfer drops as the fourth power of the temperature, while the heat transfer by conduction only falls off linearly. This results in cryogenic MLI being dominated by conduction, a quantity that is extremely sensitive to MLI blanket construction and very poorly quantified in the literature. To develop useful quantitative data on cryogenic blanket conduction, multilayer nonlinear heat transfer models are used to analyze extensive heat transfer data measured by Lockheed Palo Alto on their cryogenic dewar MLI and measured by JPL on their spacecraft MLI. The data-fitting aspect of the modeling allows the radiative and conductive thermal properties of the tested blankets to be explicitly quantified. Results are presented showing that MLI conductance varies by a factor of 600 between spacecraft MLI and Lockheed's best cryogenic MLI.
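
    A sketch of the data-fitting idea, assuming a simple two-term blanket model (an effective radiative emittance plus a linear conductive coefficient) fit to hypothetical hot-side temperature versus heat-flux points; none of the numbers are from the Lockheed or JPL data sets.

      import numpy as np
      from scipy.optimize import curve_fit

      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def mli_flux(T_hot, e_eff, k_lin, T_cold=80.0):
          """Heat flux through the blanket: radiative T^4 term plus a linear
          conductive term, with an assumed 80 K cold side."""
          return e_eff * SIGMA * (T_hot**4 - T_cold**4) + k_lin * (T_hot - T_cold)

      T_hot = np.array([120.0, 160.0, 200.0, 250.0, 300.0])   # K
      q_meas = np.array([0.11, 0.27, 0.51, 1.00, 1.80])       # W/m^2 (hypothetical)

      (e_eff, k_lin), _ = curve_fit(mli_flux, T_hot, q_meas, p0=[0.01, 1e-3])
      print(f"effective emittance ~ {e_eff:.4f}, conduction ~ {k_lin:.2e} W m^-2 K^-1")
      # At cryogenic hot-side temperatures the linear term dominates, which is
      # why blanket-to-blanket conduction differences matter so much there.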

  17. Quantifying agitation in sedated ICU patients using digital imaging.

    PubMed

    Chase, J Geoffrey; Agogue, Franck; Starfinger, Christina; Lam, ZhuHui; Shaw, Geoffrey M; Rudge, Andrew D; Sirisena, Harsha

    2004-11-01

    Agitation is a significant problem in the Intensive Care Unit (ICU), affecting 71% of sedated adult patients during 58% of ICU patient-days. Subjective, scale-based assessment methods focused primarily on excessive patient motion are currently used to assess the level of patient agitation, but are limited in their accuracy and resolution. This research quantifies this approach by developing an objective agitation measurement from patient motion that is sensed using digital video image processing. A fuzzy inference system (FIS) is developed to classify levels of motion that correlate with observed patient agitation, while accounting for motion due to medical staff working on the patient. Clinical tests for five ICU patients have been performed to verify the validity of this approach in comparison to agitation graded by nursing staff using the Riker Sedation-Agitation Scale (SAS). All trials were performed in the Christchurch Hospital Department of Intensive Care, with ethics approval from the Canterbury Ethics Committee. Results show good correlation with medical staff assessment with no false positive results during calm periods. Clinically, this initial agitation measurement method promises the ability to consistently and objectively quantify patient agitation to enable better management of sedation and agitation through optimised drug delivery, leading to reduced length of stay and improved outcome. PMID:15451162
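
    The sensing step can be illustrated with a simple frame-differencing motion index; this is a stand-in for the kind of scalar signal a fuzzy inference system would then map onto agitation levels, not the authors' actual FIS, and the thresholds and video below are assumptions.

      import numpy as np

      def motion_index(frames):
          """Mean absolute intensity change between consecutive grayscale frames,
          one value per frame transition (a crude whole-image motion measure)."""
          frames = np.asarray(frames, dtype=float)
          return np.abs(np.diff(frames, axis=0)).mean(axis=(1, 2))

      def crude_agitation_label(index, calm=2.0, agitated=8.0):
          """Map the motion index onto coarse labels; the thresholds stand in
          for the membership functions of a fuzzy inference system."""
          return np.where(index < calm, "calm",
                          np.where(index < agitated, "restless", "agitated"))

      # Hypothetical 8-bit video: 5 frames of 64x64 pixels with increasing motion.
      rng = np.random.default_rng(2)
      base = rng.integers(0, 256, size=(64, 64))
      frames = [base + rng.integers(-k, k + 1, size=base.shape) for k in (1, 1, 5, 15, 30)]
      idx = motion_index(frames)
      print(idx.round(2), crude_agitation_label(idx))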

  18. A Study of Quantifiers in Mandarin Chinese.

    ERIC Educational Resources Information Center

    Lu, John H-T.

    1980-01-01

    Studies, using Mandarin Chinese as a test case: (1) the interaction of syntax and semantics when quantifiers and negatives co-occur; (2) the linear interpretation of quantifiers when the universal and existential quantifiers co-occur; (3) the logical relationship between them; and (4) the basic word order of existential sentences involving

  19. Quantifying macromolecular conformational transition pathways

    NASA Astrophysics Data System (ADS)

    Seyler, Sean; Kumar, Avishek; Thorpe, Michael; Beckstein, Oliver

    2015-03-01

    Diverse classes of proteins function through large-scale conformational changes that are challenging for computer simulations. A range of fast path-sampling techniques have been used to generate transitions, but it has been difficult to compare paths from (and assess the relative strengths of) different methods. We introduce a comprehensive method (pathway similarity analysis, PSA) for quantitatively characterizing and comparing macromolecular pathways. The Hausdorff and Fréchet metrics (known from computational geometry) are used to quantify the degree of similarity between polygonal curves in configuration space. A strength of PSA is its use of the full information available from the 3N-dimensional configuration space trajectory without requiring additional specific knowledge about the system. We compare a sample of eleven different methods for the closed-to-open transitions of the apo enzyme adenylate kinase (AdK) and also apply PSA to an ensemble of 400 AdK trajectories produced by dynamic importance sampling MD and the Geometrical Pathways algorithm. We discuss the method's potential to enhance our understanding of transition path sampling methods, validate them, and help guide future research toward deeper physical insights into conformational transitions.
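
    The pairwise comparison at the heart of such an analysis can be illustrated with the symmetric Hausdorff distance between two trajectories treated as point sets in configuration space; the sketch below uses random stand-in paths in a toy 9-dimensional space, not AdK data, and is not the full PSA workflow.

      import numpy as np
      from scipy.spatial.distance import directed_hausdorff

      def hausdorff(path_a, path_b):
          """Symmetric Hausdorff distance between two trajectories, each an
          (n_frames, n_dof) array of configuration-space points."""
          d_ab = directed_hausdorff(path_a, path_b)[0]
          d_ba = directed_hausdorff(path_b, path_a)[0]
          return max(d_ab, d_ba)

      # Hypothetical transition paths: 100 frames in a 9-dimensional toy space
      # (a real protein path would have 3N coordinates per frame).
      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 100)[:, None]
      path1 = t * np.ones((1, 9)) + 0.05 * rng.standard_normal((100, 9))
      path2 = t**2 * np.ones((1, 9)) + 0.05 * rng.standard_normal((100, 9))
      print(f"Hausdorff distance: {hausdorff(path1, path2):.3f}")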

  20. Quantifying the Shape of Aging

    PubMed Central

    Wrycza, Tomasz F.; Missov, Trifon I.; Baudisch, Annette

    2015-01-01

    In Biodemography, aging is typically measured and compared based on aging rates. We argue that this approach may be misleading, because it confounds the time aspect with the mere change aspect of aging. To disentangle these aspects, here we utilize a time-standardized framework and, instead of aging rates, suggest the shape of aging as a novel and valuable alternative concept for comparative aging research. The concept of shape captures the direction and degree of change in the force of mortality over age, which, on a demographic level, reflects aging. We 1) provide a list of shape properties that are desirable from a theoretical perspective, 2) suggest several demographically meaningful and non-parametric candidate measures to quantify shape, and 3) evaluate performance of these measures based on the list of properties as well as based on an illustrative analysis of a simple dataset. The shape measures suggested here aim to provide a general means to classify aging patterns independent of any particular mortality model and independent of any species-specific time-scale. Thereby they support systematic comparative aging research across different species or between populations of the same species under different conditions and constitute an extension of the toolbox available to comparative research in Biodemography. PMID:25803427

  1. Quantifying the vitamin D economy.

    PubMed

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy. PMID:26024057

  2. Anadarko shows unique problems, economics

    SciTech Connect

    Drisdale, J.K.

    1982-08-09

    Focuses on the drilling of the deep Springer Sands in the Cyril basin 65 miles southwest of Oklahoma City. Anadarko is a high cost, unstable price area, where reserve risk offers some quantifiable downside. Increasing costs of drilling and the threat of lower gas prices have driven down leasehold costs to 40-50% of 1981 costs. Suggests that independents should attempt to improve technology and reduce costs; and combat government regulations and excessive taxes that significantly affect exploration and production.

  3. Quantifying cell behaviors during embryonic wound healing

    NASA Astrophysics Data System (ADS)

    Mashburn, David; Ma, Xiaoyan; Crews, Sarah; Lynch, Holley; McCleery, W. Tyler; Hutson, M. Shane

    2011-03-01

    During embryogenesis, internal forces induce motions in cells leading to widespread motion in tissues. We previously developed laser hole-drilling as a consistent, repeatable way to probe such epithelial mechanics. The initial recoil (less than 30s) gives information about physical properties (elasticity, force) of cells surrounding the wound, but the long-term healing process (tens of minutes) shows how cells adjust their behavior in response to stimuli. To study this biofeedback in many cells through time, we developed tools to quantify statistics of individual cells. By combining watershed segmentation with a powerful and efficient user interaction system, we overcome problems that arise in any automatic segmentation from poor image quality. We analyzed cell area, perimeter, aspect ratio, and orientation relative to wound for a wide variety of laser cuts in dorsal closure. We quantified statistics for different regions as well, i.e. cells near to and distant from the wound. Regional differences give a distribution of wound-induced changes, whose spatial localization provides clues into the physical/chemical signals that modulate the wound healing response. Supported by the Human Frontier Science Program (RGP0021/2007 C).
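
    A compact sketch of the segmentation-plus-statistics pipeline using scikit-image's standard marker-based watershed recipe; the image is a synthetic pair of touching blobs, not embryo data, and the interactive correction layer described above is omitted.

      import numpy as np
      from scipy import ndimage as ndi
      from skimage.draw import disk
      from skimage.feature import peak_local_max
      from skimage.measure import regionprops
      from skimage.segmentation import watershed

      # Synthetic binary image with two touching "cells".
      image = np.zeros((80, 80), dtype=bool)
      image[disk((30, 30), 18)] = True
      image[disk((30, 52), 14)] = True

      # Marker-based watershed on the distance transform.
      distance = ndi.distance_transform_edt(image)
      peaks = peak_local_max(distance, labels=image.astype(int), min_distance=10)
      markers = np.zeros(image.shape, dtype=int)
      markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
      cells = watershed(-distance, markers, mask=image)

      # Per-cell statistics of the kind tracked through wound healing.
      for region in regionprops(cells):
          aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
          print(f"cell {region.label}: area={region.area}, "
                f"perimeter={region.perimeter:.1f}, aspect={aspect:.2f}, "
                f"orientation={np.degrees(region.orientation):.1f} deg")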

  4. Quantifying Emergent Behavior of Autonomous Robots

    NASA Astrophysics Data System (ADS)

    Martius, Georg; Olbrich, Eckehard

    2015-10-01

    Quantifying behaviors of robots which were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states one has to deal with the fact that their values will depend on the resolution with which the system's states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy in resolution dependent and resolution independent parts and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information using the algorithm by Kraskov et al. (2004), which is based on nearest-neighbor statistics, because the latter allows less control of the scale dependencies. Using our algorithm we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
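
    A minimal sketch of how resolution enters such an estimate: discretize a continuous time series at bin width eps and compute the mutual information between length-k past and future words by plug-in counting. This is only a simple illustration of the resolution dependence, not the correlation-integral estimator proposed above; the signal and parameters are hypothetical.

      import numpy as np
      from collections import Counter

      def predictive_information(x, eps, k=2):
          """Plug-in mutual information (nats) between the previous k samples and
          the next k samples of x, after discretizing x with bin width eps."""
          symbols = np.floor(np.asarray(x) / eps).astype(int)
          pasts, futures = [], []
          for t in range(k, len(symbols) - k + 1):
              pasts.append(tuple(symbols[t - k:t]))
              futures.append(tuple(symbols[t:t + k]))
          n = len(pasts)
          p_joint = Counter(zip(pasts, futures))
          p_past, p_future = Counter(pasts), Counter(futures)
          return sum(c / n * np.log(c * n / (p_past[a] * p_future[b]))
                     for (a, b), c in p_joint.items())

      # Hypothetical noisy oscillation: structure persists, so finer resolution
      # (smaller eps) reveals more predictive information up to the noise floor.
      rng = np.random.default_rng(4)
      x = np.sin(0.3 * np.arange(5000)) + 0.1 * rng.standard_normal(5000)
      for eps in (1.0, 0.5, 0.25):
          print(f"eps={eps}: I_pred ~ {predictive_information(x, eps):.2f} nats")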

  5. Quantifying capital goods for waste incineration

    SciTech Connect

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-06-15

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2-3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used, amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted.

  6. Quantifying of bactericide properties of medicinal plants

    PubMed Central

    Ács, András; Gölöncsér, Flóra; Barabás, Anikó

    2011-01-01

    Extended research has been carried out to clarify the ecological role of plant secondary metabolites (SMs). Although their primary ecological function is self-defense, bioactive compounds have long been used in alternative medicine or in biological control of pests. Several members of the family Labiatae are known to have strong antimicrobial capacity. For testing and quantifying antibacterial activity, most often standard microbial protocols are used, assessing inhibitory activity on a selected strain. In this study, the applicability of a microbial ecotoxtest was evaluated to quantify the aggregate bactericide capacity of Labiatae species, based on the bioluminescence inhibition of the bacterium Vibrio fischeri. Striking differences were found amongst herbs, reaching even 10-fold toxicity. Glechoma hederacea L. proved to be the most toxic, with the EC50 of 0.4073 g dried plant/l. LC50 values generated by the standard bioassay seem to be a good indicator of the bactericide property of herbs. Traditional use of the selected herbs shows a good correlation with bioactivity expressed as bioluminescence inhibition, leading to the conclusion that the Vibrio fischeri bioassay can be a good indicator of the overall antibacterial capacity of herbs, at least on a screening level. PMID:21502819
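
    The EC50 quoted above is the sort of value obtained by fitting a dose-response curve to luminescence-inhibition data; the sketch below fits a two-parameter log-logistic curve to made-up concentrations and inhibition fractions, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def log_logistic(conc, ec50, hill):
          """Fraction of Vibrio fischeri luminescence inhibited at a given
          extract concentration (g dried plant / L)."""
          return 1.0 / (1.0 + (ec50 / conc) ** hill)

      # Hypothetical dilution series of a plant extract.
      conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6, 3.2])        # g/L
      inhibition = np.array([0.05, 0.12, 0.30, 0.52, 0.71, 0.88, 0.95])

      (ec50, hill), _ = curve_fit(log_logistic, conc, inhibition, p0=[0.4, 1.0])
      print(f"EC50 ~ {ec50:.2f} g dried plant/L, Hill slope ~ {hill:.2f}")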

  7. Quantifying utricular stimulation during natural behavior

    PubMed Central

    Rivera, Angela R. V.; Davis, Julian; Grant, Wally; Blob, Richard W.; Peterson, Ellengene; Neiman, Alexander B.; Rowe, Michael

    2012-01-01

    The use of natural stimuli in neurophysiological studies has led to significant insights into the encoding strategies used by sensory neurons. To investigate these encoding strategies in vestibular receptors and neurons, we have developed a method for calculating the stimuli delivered to a vestibular organ, the utricle, during natural (unrestrained) behaviors, using the turtle as our experimental preparation. High-speed digital video sequences are used to calculate the dynamic gravito-inertial (GI) vector acting on the head during behavior. X-ray computed tomography (CT) scans are used to determine the orientation of the otoconial layer (OL) of the utricle within the head, and the calculated GI vectors are then rotated into the plane of the OL. Thus, the method allows us to quantify the spatio-temporal structure of stimuli to the OL during natural behaviors. In the future, these waveforms can be used as stimuli in neurophysiological experiments to understand how natural signals are encoded by vestibular receptors and neurons. We provide one example of the method which shows that turtle feeding behaviors can stimulate the utricle at frequencies higher than those typically used in vestibular studies. This method can be adapted to other species, to other vestibular end organs, and to other methods of quantifying head movements. PMID:22753360
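
    A sketch of the gravito-inertial calculation under simplifying assumptions: head position tracked in a world frame, acceleration obtained by double numerical differentiation, and an assumed fixed rotation taking world coordinates into the plane of the otoconial layer (identity here as a placeholder for the CT-derived orientation). The trajectory is invented, not the turtle data.

      import numpy as np

      G = np.array([0.0, 0.0, -9.81])   # gravity in the world frame (m/s^2)

      def gravito_inertial(head_pos, dt):
          """GI vector in the world frame: gravity minus head acceleration,
          with acceleration from double numerical differentiation of position."""
          vel = np.gradient(head_pos, dt, axis=0)
          acc = np.gradient(vel, dt, axis=0)
          return G - acc

      # Hypothetical 250 Hz trajectory of a brief feeding strike along x.
      dt = 1.0 / 250.0
      t = np.arange(0.0, 0.2, dt)
      head_pos = np.column_stack([0.01 * np.sin(2 * np.pi * 5 * t),
                                  np.zeros_like(t), np.zeros_like(t)])

      # Assumed rotation into the otoconial-layer plane (identity placeholder).
      R_world_to_OL = np.eye(3)
      gi_in_OL = gravito_inertial(head_pos, dt) @ R_world_to_OL.T
      print(f"peak in-plane GI magnitude: {np.abs(gi_in_OL[:, :2]).max():.2f} m/s^2")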

  8. Evaluation of two methods for quantifying passeriform lice.

    PubMed

    Koop, Jennifer A H; Clayton, Dale H

    2013-06-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer's timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238-302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  9. Evaluation of two methods for quantifying passeriform lice

    PubMed Central

    Koop, Jennifer A. H.; Clayton, Dale H.

    2013-01-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer's timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238-302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  10. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez; Sheri Raborn, CPA; Tom Baker

    2008-03-31

    National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's energy demands relating to electrical power consumption, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at our venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to service its community.

  11. Career and Technical Education: Show Us the Buck, We'll Show You the Bang!

    ERIC Educational Resources Information Center

    Whetstone, Ryan

    2011-01-01

    Adult and CTE programs in California have been cut by about 60 percent over the past three years. A number of school districts have summarily eliminated these programs to preserve funding for other educational endeavors. The author says part of the problem has been the community's inability to communicate quantifiable results. One of the hottest

  12. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.

  13. New Drug Shows Mixed Results Against Early Alzheimer's

    MedlinePLUS

  15. Quantifying the isotopic continental effect

    NASA Astrophysics Data System (ADS)

    Winnick, Matthew J.; Chamberlain, C. Page; Caves, Jeremy K.; Welker, Jeffrey M.

    2014-11-01

    Since the establishment of the IAEA-WMO precipitation-monitoring network in 1961, it has been observed that isotope ratios in precipitation (δ2H and δ18O) generally decrease from coastal to inland locations, an observation described as the 'continental effect.' While discussed frequently in the literature, there have been few attempts to quantify the variables controlling this effect despite the fact that isotopic gradients over continents can vary by orders of magnitude. In a number of studies, traditional Rayleigh fractionation has proven inadequate in describing the global variability of isotopic gradients due to its simplified treatment of moisture transport and its lack of moisture recycling processes. In this study, we use a one-dimensional idealized model of water vapor transport along a storm track to investigate the dominant variables controlling isotopic gradients in precipitation across terrestrial environments. We find that the sensitivity of these gradients to progressive rainout is controlled by a combination of the amount of evapotranspiration and the ratio of transport by advection to transport by eddy diffusion, with these variables becoming increasingly important with decreasing length scales of specific humidity. A comparison of modeled gradients with global precipitation isotope data indicates that these variables can account for the majority of variability in observed isotopic gradients between coastal and inland locations. Furthermore, the dependence of the 'continental effect' on moisture recycling allows for the quantification of evapotranspiration fluxes from measured isotopic gradients, with implications for both paleoclimate reconstructions and large-scale monitoring efforts in the context of global warming and a changing hydrologic cycle.
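
    For reference, the classic Rayleigh fractionation baseline that the study's transport model is compared against can be written in a few lines; the fractionation factor and initial vapor composition below are illustrative values, not the paper's parameters.

    ```python
    import numpy as np

    def rayleigh_delta(delta0_permil, f_remaining, alpha):
        """Isotopic composition of the remaining vapor under Rayleigh distillation.

        delta0_permil : initial delta value (per mil)
        f_remaining   : fraction of the original vapor still present, in (0, 1]
        alpha         : liquid-vapor equilibrium fractionation factor
        """
        return (delta0_permil + 1000.0) * f_remaining ** (alpha - 1.0) - 1000.0

    # Illustrative only: delta-18O of ocean-derived vapor and a fractionation
    # factor typical of condensation near 20 degC.
    f = np.linspace(1.0, 0.2, 5)
    print(rayleigh_delta(-12.0, f, alpha=1.0098))
    ```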

  16. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the deformation behavior was found to depend strongly on the character of the nearby microstructure.

  17. Computed tomography to quantify tooth abrasion

    NASA Astrophysics Data System (ADS)

    Kofmehl, Lukas; Schulz, Georg; Deyhle, Hans; Filippi, Andreas; Hotz, Gerhard; Berndt-Dagassan, Dorothea; Kramis, Simon; Beckmann, Felix; Müller, Bert

    2010-09-01

    Cone-beam computed tomography, also termed digital volume tomography, has become a standard technique in dentistry, allowing for fast 3D jaw imaging including denture at moderate spatial resolution. More detailed X-ray images of restricted volumes for post-mortem studies in dental anthropology are obtained by means of micro computed tomography. The present study evaluates the impact of pipe-smoking wear on tooth morphology by comparing the abraded tooth with its contra-lateral counterpart. A set of 60 teeth, loose or anchored in the jaw, from 12 dentitions has been analyzed. After the two contra-lateral teeth were scanned, one dataset was mirrored before the two datasets were registered using affine and rigid registration algorithms. Rigid registration provides three translational and three rotational parameters to maximize the overlap of two rigid bodies. For the affine registration, three scaling factors are incorporated. Within the present investigation, affine and rigid registrations yield comparable values. The restriction to the six parameters of the rigid registration is therefore not a limitation. The differences in size and shape between the tooth and its contra-lateral counterpart generally amount to only a few percent of the non-abraded volume, validating that the contra-lateral tooth is a reasonable approximation for quantifying, for example, the volume loss resulting from long-term clay pipe smoking. Therefore, this approach allows quantifying the impact of the pipe abrasion on the internal tooth morphology, including root canal, dentin, and enamel volumes.
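
    Rigid registration of the kind described (three translations plus three rotations) can be sketched with the Kabsch algorithm on matched points; this is a generic illustration, not the registration pipeline used in the study.

    ```python
    import numpy as np

    def rigid_register(source, target):
        """Best-fit rotation R and translation t mapping `source` onto `target`.

        Both inputs are (N, 3) arrays of corresponding points (Kabsch algorithm).
        """
        src_c = source - source.mean(axis=0)
        tgt_c = target - target.mean(axis=0)
        H = src_c.T @ tgt_c
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = target.mean(axis=0) - R @ source.mean(axis=0)
        return R, t

    # Self-check: recover a known rotation and translation.
    rng = np.random.default_rng(1)
    pts = rng.random((50, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0, 0.0, 1.0]])
    moved = pts @ R_true.T + np.array([0.5, -0.2, 1.0])
    R_est, t_est = rigid_register(pts, moved)
    print(np.allclose(R_est, R_true), np.round(t_est, 3))
    ```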

  18. Quantifying radionuclide signatures from a γ-γ coincidence system.

    PubMed

    Britton, Richard; Jackson, Mark J; Davies, Ashley V

    2015-11-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method (it relies only on evaluated nuclear data and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. PMID:26254208
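
    The basic bookkeeping behind a coincidence-signature library can be sketched as the cascade emission probability multiplied by the full-energy-peak efficiency of each detected gamma. The efficiency curve and cascade values below are placeholders, and true-coincidence summing corrections, conversion electrons, and angular correlations are deliberately omitted; this is not the authors' software.

    ```python
    def peak_efficiency(energy_kev, a=0.2, b=-0.7):
        """Illustrative full-energy-peak efficiency model: eps = a * (E / 1000)**b."""
        return a * (energy_kev / 1000.0) ** b

    def coincidence_probability(cascade, efficiency=peak_efficiency):
        """Probability of detecting both gammas of a two-step cascade in coincidence.

        `cascade` holds the cascade emission probability and the two gamma energies;
        the numbers used below are placeholders rather than evaluated ENSDF data.
        """
        e1, e2 = cascade["energies_kev"]
        return cascade["emission_probability"] * efficiency(e1) * efficiency(e2)

    example_cascade = {"emission_probability": 0.85, "energies_kev": (1173.2, 1332.5)}
    print(f"P(coincidence) = {coincidence_probability(example_cascade):.4f}")
    ```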

  19. A Generalizable Methodology for Quantifying User Satisfaction

    NASA Astrophysics Data System (ADS)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times, rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
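
    The survival-analysis core of such a methodology can be illustrated with a hand-rolled Kaplan-Meier estimator over session times; the session data below are synthetic and the censoring fraction is an arbitrary assumption.

    ```python
    import numpy as np

    def kaplan_meier(durations, observed):
        """Kaplan-Meier survival curve.

        durations : array of session lengths
        observed  : 1 if the session ended (event), 0 if censored
        Returns (event_times, survival_probability).
        """
        durations = np.asarray(durations, dtype=float)
        observed = np.asarray(observed, dtype=int)
        times = np.unique(durations[observed == 1])
        surv, s = [], 1.0
        for t in times:
            at_risk = np.sum(durations >= t)
            events = np.sum((durations == t) & (observed == 1))
            s *= 1.0 - events / at_risk
            surv.append(s)
        return times, np.array(surv)

    # Synthetic session times (minutes); roughly 10% of sessions are censored.
    rng = np.random.default_rng(2)
    sessions = rng.exponential(scale=30.0, size=200)
    ended = rng.random(200) < 0.9
    t, s = kaplan_meier(sessions, ended)
    print(t[:5], np.round(s[:5], 3))
    ```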

  20. Quantifying a cellular automata simulation of electric vehicles

    NASA Astrophysics Data System (ADS)

    Hill, Graeme; Bell, Margaret; Blythe, Phil

    2014-12-01

    Within this work the Nagel-Schreckenberg (NS) cellular automaton is used to simulate a basic cyclic road network. Results from SwitchEV, a real-world electric vehicle trial which has collected more than two years of detailed electric vehicle data, are used to quantify the results of the NS automaton, demonstrating similar power consumption behavior to that observed in the experimental results. In particular the efficiency of the electric vehicles reduces as the vehicle density increases, due in part to the reduced efficiency of EVs at low speeds, but also due to the energy consumption inherent in changing speeds. Further work shows the results of introducing a spatially restricted speed limit. In general it can be seen that induced congestion from spatially transient events propagates back through the road network and alters the energy and efficiency profile of the simulated vehicles, both before and after the speed restriction. Vehicles upstream from the restriction show a reduced energy usage and an increased efficiency, and vehicles downstream show an initial large increase in energy usage as they accelerate away from the speed restriction.
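
    The NS update rules themselves (accelerate, brake to the gap ahead, random slowdown, move) are compact enough to show directly. The road length, vehicle count, and dawdling probability below are illustrative, and the electric-vehicle energy bookkeeping calibrated against the SwitchEV data is not reproduced here.

    ```python
    import numpy as np

    def ns_step(positions, speeds, road_length, v_max=5, p_slow=0.3, rng=None):
        """One parallel update of the Nagel-Schreckenberg automaton on a ring road."""
        if rng is None:
            rng = np.random.default_rng()
        order = np.argsort(positions)
        pos, vel = positions[order], speeds[order]
        gaps = (np.roll(pos, -1) - pos - 1) % road_length            # empty cells to the car ahead
        vel = np.minimum(vel + 1, v_max)                             # 1. accelerate
        vel = np.minimum(vel, gaps)                                  # 2. brake to avoid collision
        vel = np.maximum(vel - (rng.random(len(pos)) < p_slow), 0)   # 3. random slowdown
        pos = (pos + vel) % road_length                              # 4. move
        return pos, vel

    rng = np.random.default_rng(3)
    road, n_cars = 100, 30
    positions = np.sort(rng.choice(road, size=n_cars, replace=False))
    speeds = np.zeros(n_cars, dtype=int)
    for _ in range(200):
        positions, speeds = ns_step(positions, speeds, road, rng=rng)
    print("mean speed after 200 steps:", speeds.mean())
    ```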

  1. Quantifying asymmetry of quantum states using entanglement

    NASA Astrophysics Data System (ADS)

    Toloui, Borzu

    2013-03-01

    For open systems, symmetric dynamics do not always lead to conservation laws. We show that, for a dynamic symmetry associated with a compact Lie group, one can derive new selection rules from entanglement theory. These selection rules apply to both closed and open systems as well as reversible and irreversible time evolutions. Our approach is based on an embedding of the system's Hilbert space into a tensor product of two Hilbert spaces allowing for the symmetric dynamics to be simulated with local operations. The entanglement of the embedded states determines which transformations are forbidden because of the symmetry. In fact, every bipartite entanglement monotone can be used to quantify the asymmetry of the initial states. Moreover, where the dynamics is reversible, each of these monotones becomes a new conserved quantity. This research has been supported by the Institute for Quantum Information Science (IQIS) at the University of Calgary, Alberta Innovates, NSERC, General Dynamics Canada, and MITACS.

  2. Stimfit: quantifying electrophysiological data with Python

    PubMed Central

    Guzman, Segundo J.; Schlögl, Alois; Schmidt-Hieber, Christoph

    2013-01-01

    Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals. PMID:24600389

  3. Towards quantifying complexity with quantum mechanics

    NASA Astrophysics Data System (ADS)

    Tan, Ryan; Terno, Daniel R.; Thompson, Jayne; Vedral, Vlatko; Gu, Mile

    2014-09-01

    While we have intuitive notions of structure and complexity, the formalization of this intuition is non-trivial. The statistical complexity is a popular candidate. It is based on the idea that the complexity of a process can be quantified by the complexity of its simplest mathematical model, that is, the model that requires the least past information for optimal future prediction. Here we review how such models, known as ε-machines, can be further simplified through quantum logic, and explore the resulting consequences for understanding complexity. In particular, we propose a new measure of complexity based on quantum ε-machines. We apply this to a simple system undergoing constant thermalization. The resulting quantum measure of complexity aligns more closely with our intuition of how complexity should behave.

  4. Quantifying Drosophila food intake: comparative analysis of current methodology.

    PubMed

    Deshpande, Sonali A; Carvalho, Gil B; Amador, Ariadna; Phillips, Angela M; Hoxha, Sany; Lizotte, Keith J; Ja, William W

    2014-05-01

    Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the capillary feeder (CAFE), food labeling with a radioactive tracer or colorimetric dye and observations of proboscis extension (PE). We show that the CAFE and radioisotope labeling provide the most consistent results, have the highest sensitivity and can resolve differences in feeding that dye labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of methods for measuring food intake will greatly advance Drosophila studies of nutrition, behavior and disease. PMID:24681694

  5. Quantifying Drosophila food intake: comparative analysis of current methodology

    PubMed Central

    Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.

    2014-01-01

    Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694

  6. Measuring political polarization: Twitter shows the two sides of Venezuela

    NASA Astrophysics Data System (ADS)

    Morales, A. J.; Borondo, J.; Losada, J. C.; Benito, R. M.

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.
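
    As a hedged illustration of the kind of index described, the sketch below scores an opinion sample by combining the balance between the two camps with the separation of their mean opinions; the functional form is an assumption consistent with the abstract's definition of perfect polarization, not the published index.

    ```python
    import numpy as np

    def polarization_index(opinions):
        """Simple polarization score in [0, 1] for opinions scaled to [-1, 1].

        Returns ~1 for two equal-sized camps at opposite extremes and ~0 for
        consensus. Illustrative form only, not the paper's exact definition.
        """
        opinions = np.asarray(opinions, dtype=float)
        pos, neg = opinions[opinions > 0], opinions[opinions < 0]
        if len(pos) == 0 or len(neg) == 0:
            return 0.0
        balance = 1.0 - abs(len(pos) - len(neg)) / len(opinions)   # 1 if camps equal-sized
        separation = (pos.mean() - neg.mean()) / 2.0               # in (0, 1]
        return balance * separation

    print(polarization_index(np.concatenate([np.full(500, -0.9), np.full(500, 0.9)])))  # ~0.9
    print(polarization_index(np.random.default_rng(4).uniform(-0.1, 0.1, 1000)))        # near 0
    ```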

  7. Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko

    2006-01-01

    While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy for these errors is outlined.
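
    The a priori database retrieval described above can be sketched as a Bayesian weighted average, with each database profile weighted by how closely its simulated brightness temperatures match the observation. The two-channel database, sensitivities, and error value below are toy assumptions, not TRMM-derived quantities.

    ```python
    import numpy as np

    def bayesian_rain_retrieval(tb_obs, tb_database, rain_database, obs_error=2.0):
        """Expected rain rate given observed brightness temperatures.

        tb_obs        : (n_channels,) observed brightness temperatures (K)
        tb_database   : (n_profiles, n_channels) simulated Tb for a priori profiles
        rain_database : (n_profiles,) rain rate of each a priori profile (mm/h)
        obs_error     : assumed Gaussian observation/model error (K), illustrative
        """
        resid = tb_database - tb_obs
        weights = np.exp(-0.5 * np.sum((resid / obs_error) ** 2, axis=1))
        weights /= weights.sum()
        return np.sum(weights * rain_database)

    # Toy database: rain rate shifts the Tb of two fictitious channels linearly.
    rng = np.random.default_rng(5)
    rain = rng.gamma(shape=1.5, scale=2.0, size=5000)
    tb_db = np.column_stack([260.0 - 3.0 * rain, 180.0 + 5.0 * rain])
    tb_db += rng.normal(0.0, 1.0, tb_db.shape)
    print(bayesian_rain_retrieval(np.array([250.0, 195.0]), tb_db, rain))
    ```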

  8. Quantifying strain variability in modeling growth of Listeria monocytogenes.

    PubMed

    Aryani, D C; den Besten, H M W; Hazeleger, W C; Zwietering, M H

    2015-09-01

    Prediction of microbial growth kinetics can differ from the actual behavior of the target microorganisms. In the present study, the impact of strain variability on maximum specific growth rate (μmax; h-1) was quantified using twenty Listeria monocytogenes strains. The μmax was determined as a function of four different variables, namely pH, water activity (aw)/NaCl concentration [NaCl], undissociated lactic acid concentration ([HA]), and temperature (T). The strain variability was compared to biological and experimental variabilities to determine their importance. The experiment was done in duplicate at the same time to quantify experimental variability and reproduced at least twice on different experimental days to quantify biological (reproduction) variability. For all variables, experimental variability was clearly lower than biological variability and strain variability; and remarkably, biological variability was similar to strain variability. Strain variability in cardinal growth parameters, namely pHmin, [NaCl]max, [HA]max, and Tmin was further investigated by fitting secondary growth models to the μmax data, including a modified secondary pH model. The fitting results showed that L. monocytogenes had an average pHmin of 4.5 (5-95% prediction interval (PI) 4.4-4.7), [NaCl]max of 2.0 mM (PI 1.8-2.1), [HA]max of 5.1 mM (PI 4.2-5.9), and Tmin of -2.2 °C (PI (-3.3)-(-1.1)). The strain variability in cardinal growth parameters was benchmarked to available literature data, showing that the effect of strain variability explained around 1/3 or less of the variability found in literature. The cardinal growth parameters and their prediction intervals were used as input to illustrate the effect of strain variability on the growth of L. monocytogenes in food products with various characteristics, resulting in a 2-4 log CFU/ml (g) difference in growth prediction between the most and least robust strains, depending on the type of food product. This underlines the importance of obtaining quantitative knowledge on variability factors to realistically predict microbial growth kinetics. PMID:26011600
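
    Secondary models of this type are often built from multiplicative cardinal-parameter terms (the gamma concept). The sketch below combines common textbook gamma terms with the cardinal values reported above; the exact model forms fitted by the authors, including their modified pH model, may differ, and mu_opt and T_opt are illustrative assumptions.

    ```python
    def gamma_temperature(T, T_min=-2.2, T_opt=37.0):
        """Square-root-type temperature term, normalized to 1 at T_opt."""
        return max(0.0, ((T - T_min) / (T_opt - T_min)) ** 2)

    def gamma_ph(pH, pH_min=4.5):
        """One common pH inhibition term (several published forms exist)."""
        return max(0.0, 1.0 - 10.0 ** (pH_min - pH))

    def gamma_linear(x, x_max):
        """Generic linear inhibition term, used here for NaCl and undissociated lactic acid."""
        return max(0.0, 1.0 - x / x_max)

    def mu_max(T, pH, nacl, ha, mu_opt=1.0, nacl_max=2.0, ha_max=5.1):
        """Illustrative gamma-concept prediction of the maximum specific growth rate."""
        return (mu_opt * gamma_temperature(T) * gamma_ph(pH)
                * gamma_linear(nacl, nacl_max) * gamma_linear(ha, ha_max))

    print(round(mu_max(T=10.0, pH=6.0, nacl=0.5, ha=1.0), 3))
    ```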

  9. Quantifying the magnetic nature of light emission.

    PubMed

    Taminiau, Tim H; Karaveli, Sinan; van Hulst, Niek F; Zia, Rashid

    2012-01-01

    Tremendous advances in the study of magnetic light-matter interactions have recently been achieved using man-made nanostructures that exhibit and exploit an optical magnetic response. However, naturally occurring emitters can also exhibit magnetic resonances in the form of optical-frequency magnetic-dipole transitions. Here we quantify the magnetic nature of light emission using energy- and momentum-resolved spectroscopy, and leverage a pair of spectrally close electric- and magnetic-dipole transitions in trivalent europium to probe vacuum fluctuations in the electric and magnetic fields at the nanometre scale. These results reveal a new tool for nano-optics: an atomic-size quantum emitter that interacts with the magnetic component of light. PMID:22864572

  10. Quantifying anatomical shape variations in neurological disorders.

    PubMed

    Singh, Nikhil; Fletcher, P Thomas; Preston, J Samuel; King, Richard D; Marron, J S; Weiner, Michael W; Joshi, Sarang

    2014-04-01

    We develop a multivariate analysis of brain anatomy to identify the relevant shape deformation patterns and quantify the shape changes that explain corresponding variations in clinical neuropsychological measures. We use kernel Partial Least Squares (PLS) and formulate a regression model in the tangent space of the manifold of diffeomorphisms characterized by deformation momenta. The scalar deformation momenta completely encode the diffeomorphic changes in anatomical shape. In this model, the clinical measures are the response variables, while the anatomical variability is treated as the independent variable. To better understand the "shape-clinical response" relationship, we also control for demographic confounders, such as age, gender, and years of education in our regression model. We evaluate the proposed methodology on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline structural MR imaging data and neuropsychological evaluation test scores. We demonstrate the ability of our model to quantify the anatomical deformations in units of clinical response. Our results also demonstrate that the proposed method is generic and generates reliable shape deformations both in terms of the extracted patterns and the amount of shape changes. We found that while the hippocampus and amygdala emerge as mainly responsible for changes in test scores for global measures of dementia and memory function, they are not a determinant factor for executive function. Another critical finding was the appearance of thalamus and putamen as most important regions that relate to executive function. These resulting anatomical regions were consistent with very high confidence irrespective of the size of the population used in the study. This data-driven global analysis of brain anatomy was able to reach similar conclusions as other studies in Alzheimer's disease based on predefined ROIs, together with the identification of other new patterns of deformation. The proposed methodology thus holds promise for discovering new patterns of shape changes in the human brain that could add to our understanding of disease progression in neurological disorders. PMID:24667299

  11. Quantifying spatial heterogeneity from images

    NASA Astrophysics Data System (ADS)

    Pomerantz, Andrew E.; Song, Yi-Qiao

    2008-12-01

    Visualization techniques are extremely useful for characterizing natural materials with complex spatial structure. Although many powerful imaging modalities exist, simple display of the images often does not convey the underlying spatial structure. Instead, quantitative image analysis can extract the most important features of the imaged object in a manner that is easier to comprehend and to compare from sample to sample. This paper describes the formulation of the heterogeneity spectrum to show the extent of spatial heterogeneity as a function of length scale for all length scales to which a particular measurement is sensitive. This technique is especially relevant for describing materials that simultaneously present spatial heterogeneity at multiple length scales. In this paper, the heterogeneity spectrum is applied for the first time to images from optical microscopy. The spectrum is measured for thin section images of complex carbonate rock cores showing heterogeneity at several length scales in the range 10-10,000 µm.
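
    One generic way to turn an image into a heterogeneity-versus-length-scale curve is to measure, for each window size, the spread of local window means; the sketch below follows that idea on a synthetic image and is not necessarily the exact formulation used in the paper.

    ```python
    import numpy as np

    def heterogeneity_spectrum(image, window_sizes):
        """Standard deviation of local window means as a function of window size.

        A homogeneous image gives values near zero at every length scale; structure
        at a given scale shows up as elevated variability around that window size.
        """
        image = np.asarray(image, dtype=float)
        spectrum = []
        for w in window_sizes:
            ny, nx = image.shape[0] // w, image.shape[1] // w
            blocks = image[:ny * w, :nx * w].reshape(ny, w, nx, w)
            spectrum.append(blocks.mean(axis=(1, 3)).std())
        return np.array(spectrum)

    # Synthetic image: fine-scale noise plus a coarse banded structure.
    rng = np.random.default_rng(6)
    img = rng.normal(0.0, 1.0, (256, 256)) + np.sin(np.arange(256) / 20.0)[None, :]
    print(np.round(heterogeneity_spectrum(img, window_sizes=[2, 4, 8, 16, 32]), 3))
    ```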

  12. Processing of Numerical and Proportional Quantifiers

    ERIC Educational Resources Information Center

    Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus

    2015-01-01

    Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using

  13. Deaf Learners' Knowledge of English Universal Quantifiers

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Porter, Jeffrey E.; Fonzi, Judith

    2008-01-01

    Deaf and hearing students' knowledge of English sentences containing universal quantifiers was compared through their performance on a 50-item, multiple-picture task that required students to decide whether each of five pictures represented a possible meaning of a target sentence. The task assessed fundamental knowledge of quantifier sentences,

  15. Scalar Quantifiers: Logic, Acquisition, and Processing

    ERIC Educational Resources Information Center

    Geurts, Bart; Katsos, Napoleon; Cummins, Chris; Moons, Jonas; Noordman, Leo

    2010-01-01

    Superlative quantifiers ("at least 3", "at most 3") and comparative quantifiers ("more than 2", "fewer than 4") are traditionally taken to be interdefinable: the received view is that "at least n" and "at most n" are equivalent to "more than n-1" and "fewer than n+1", respectively. Notwithstanding the prima facie plausibility of this claim, Geurts…

  16. 10. INTERIOR VIEW SHOWING MOUNTINGS FROM TUNING DEVICE. VIEW SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. INTERIOR VIEW SHOWING MOUNTINGS FROM TUNING DEVICE. VIEW SHOWS COPPER SHEETING ON WALLS. - Chollas Heights Naval Radio Transmitting Facility, Helix House, 6410 Zero Road, San Diego, San Diego County, CA

  17. Quantifying Coral Reef Ecosystem Services

    EPA Science Inventory

    Coral reefs have been declining during the last four decades as a result of both local and global anthropogenic stresses. Numerous research efforts to elucidate the nature, causes, magnitude, and potential remedies for the decline have led to the widely held belief that the recov...

  18. Quantifying the value of information

    SciTech Connect

    Riis, T.

    1999-06-01

    Every oil and gas company frequently makes decisions in situations where the result is not directly measurable in terms of impact on costs and revenue. This article presents the concept of Value of Information and discusses how this approach can assist in the decision process, using a simple example and a more realistic case.
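
    A standard way to make this concrete is the expected value of perfect information (EVPI): the expected payoff if the uncertain state could be learned before deciding, minus the best expected payoff without that knowledge. The two-state drilling payoffs below are made-up numbers for illustration only.

    ```python
    import numpy as np

    # States: reservoir "good" or "poor"; actions: drill or walk away (payoffs in $M, fictitious).
    p_state = np.array([0.3, 0.7])                   # P(good), P(poor)
    payoff = np.array([[120.0, -40.0],               # drill
                       [0.0,     0.0]])              # walk away

    value_without_info = (payoff @ p_state).max()            # choose the best action now
    value_with_perfect_info = payoff.max(axis=0) @ p_state   # choose after learning the state
    evpi = value_with_perfect_info - value_without_info

    print(f"without info: {value_without_info:.1f}  "
          f"with perfect info: {value_with_perfect_info:.1f}  EVPI: {evpi:.1f}")
    ```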

  19. 15. Detail showing lower chord pinconnected to vertical member, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN

  20. Dust as interstellar catalyst. I. Quantifying the chemical desorption process

    NASA Astrophysics Data System (ADS)

    Minissale, M.; Dulieu, F.; Cazaux, S.; Hocuk, S.

    2016-01-01

    Context. The presence of dust in the interstellar medium has profound consequences on the chemical composition of regions where stars are forming. Recent observations show that many species formed onto dust are populating the gas phase, especially in cold environments where UV- and cosmic-ray-induced photons do not account for such processes. Aims: The aim of this paper is to understand and quantify the process that releases solid species into the gas phase, the so-called chemical desorption process, so that an explicit formula can be derived that can be included in astrochemical models. Methods: We present a collection of experimental results of more than ten reactive systems. For each reaction, different substrates such as oxidized graphite and compact amorphous water ice were used. We derived a formula for reproducing the efficiencies of the chemical desorption process that considers the equipartition of the energy of newly formed products, followed by classical bounce on the surface. In part II of this study we extend these results to astrophysical conditions. Results: The equipartition of energy correctly describes the chemical desorption process on bare surfaces. On icy surfaces, the chemical desorption process is much less efficient, and a better description of the interaction with the surface is still needed. Conclusions: We show that the mechanism that directly transforms solid species into gas phase species is efficient for many reactions.

  1. Quantifying Diet for Nutrigenomic Studies

    PubMed Central

    Tucker, Katherine L.; Smith, Caren E.; Lai, Chao-Qiang; Ordovas, Jose M.

    2015-01-01

    The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nutrition advice. However, although considerable resources have gone into improving technology for measurement of the genome and biological systems, dietary intake assessment remains inadequate. Each of the methods currently used has limitations that may be exaggerated in the context of gene x nutrient interaction in large multiethnic studies. Because of the specificity of most gene x nutrient interactions, valid data are needed for nutrient intakes at the individual level. Most statistical adjustment efforts are designed to improve estimates of nutrient intake distributions in populations and are unlikely to solve this problem. An improved method of direct measurement of individual usual dietary intake that is unbiased across populations is urgently needed. PMID:23642200

  2. Quantifying gyrotropy in magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Swisdak, M.

    2016-01-01

    A new scalar measure of the gyrotropy of a pressure tensor is defined. Previously suggested measures are shown to be incomplete by means of examples for which they give unphysical results. To demonstrate its usefulness as an indicator of magnetic topology, the new measure is calculated for electron data taken from numerical simulations of magnetic reconnection, shown to peak at separatrices and X points, and compared to the other measures. The new diagnostic has potential uses in analyzing spacecraft observations, and so a method for calculating it from measurements performed in an arbitrary coordinate system is derived.
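
    The abstract does not reproduce the new measure's formula. As a hedged illustration, the sketch below computes an agyrotropy-type scalar from the rotational invariants of the pressure tensor and the field direction; the exact normalization is an assumption here and should be checked against the paper before use.

    ```python
    import numpy as np

    def agyrotropy(P, b_hat):
        """Scalar agyrotropy built from pressure-tensor invariants (assumed form).

        P     : (3, 3) pressure tensor
        b_hat : vector along the magnetic field
        Returns 0 for a perfectly gyrotropic tensor; larger values indicate agyrotropy.
        """
        P = np.asarray(P, dtype=float)
        b = np.asarray(b_hat, dtype=float)
        b = b / np.linalg.norm(b)
        p_par = b @ P @ b
        i1 = np.trace(P)
        i2 = (P[0, 0] * P[1, 1] + P[0, 0] * P[2, 2] + P[1, 1] * P[2, 2]
              - P[0, 1] ** 2 - P[0, 2] ** 2 - P[1, 2] ** 2)
        q = 1.0 - 4.0 * i2 / ((i1 - p_par) * (i1 + 3.0 * p_par))
        return float(np.sqrt(max(q, 0.0)))

    # A gyrotropic tensor (distinct parallel and perpendicular pressures) scores 0.
    print(agyrotropy(np.diag([2.0, 1.0, 1.0]), b_hat=[1.0, 0.0, 0.0]))
    ```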

  3. Common ecology quantifies human insurgency.

    PubMed

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour. PMID:20016600
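
    The power-law size distributions mentioned above are commonly fitted by maximum likelihood; for a continuous power law with a known lower cutoff the estimator is alpha = 1 + n / sum(ln(x_i / x_min)). The event sizes below are synthetic, drawn by inverse-CDF sampling, not conflict data.

    ```python
    import numpy as np

    def powerlaw_mle_alpha(sizes, x_min):
        """Maximum-likelihood exponent of a continuous power law P(x) ~ x**(-alpha), x >= x_min."""
        x = np.asarray(sizes, dtype=float)
        x = x[x >= x_min]
        return 1.0 + len(x) / np.sum(np.log(x / x_min))

    # Synthetic event sizes from a power law with alpha = 2.5 (inverse-CDF sampling).
    rng = np.random.default_rng(7)
    alpha_true, x_min = 2.5, 1.0
    samples = x_min * (1.0 - rng.random(50_000)) ** (-1.0 / (alpha_true - 1.0))
    print(round(powerlaw_mle_alpha(samples, x_min), 3))   # close to 2.5
    ```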

  4. Quantifying Barrier Island Recovery Following a Hurricane

    NASA Astrophysics Data System (ADS)

    Hammond, B.; Houser, C.

    2014-12-01

    Barrier islands are dynamic landscapes that are believed to minimize storm impact to mainland communities and also provide important ecological services in the coastal environment. The protection afforded by the island and the services it provides, however, depend on island resiliency in the face of accelerated sea level rise, which is in turn dependent on the rate of island recovery following storm events that may also change in both frequency and magnitude in the future. These changes in frequency may affect even large dunes and their resiliency, resulting in the island transitioning from a high to a low elevation. Previous research has shown that the condition of the foredune depends on the recovery of the nearshore and beach profile and the ability of vegetation to capture aeolian-transported sediment. An inability of the foredune to recover may result in mainland susceptibility to storm energy, inability for ecosystems to recover and thrive, and sediment budget instability. In this study, LiDAR data is used to quantify the rates of dune recovery at Fire Island, NY, the Outer Banks, NC, Santa Rosa Island, FL, and Matagorda Island, TX. Preliminary results indicate foredune recovery varies significantly both alongshore and in the cross-shore, suggesting that barrier island response and recovery to storm events cannot be considered from a strictly two-dimensional (cross-shore) perspective.

  5. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  6. Quantifying tetrodotoxin levels in the California newt using a non-destructive sampling method.

    PubMed

    Bucciarelli, Gary M; Li, Amy; Zimmer, Richard K; Kats, Lee B; Green, David B

    2014-03-01

    Toxic or noxious substances often serve as a means of chemical defense for numerous taxa. However, such compounds may also facilitate ecological or evolutionary processes. The neurotoxin, tetrodotoxin (TTX), which is found in newts of the genus Taricha, acts as a selection pressure upon predatory garter snakes, is a chemical cue to conspecific larvae, which elicits antipredator behavior, and may also affect macroinvertebrate foraging behavior. To understand selection patterns and how potential variation might affect ecological and evolutionary processes, it is necessary to quantify TTX levels within individuals and populations. To do so has often required that animals be destructively sampled or removed from breeding habitats and brought into the laboratory. Here we demonstrate a non-destructive method of sampling adult Taricha that obviates the need to capture and collect individuals. We also show that embryos from oviposited California newt (Taricha torosa) egg masses can be individually sampled and TTX quantified from embryos. We employed three different extraction techniques to isolate TTX. Using a custom fabricated high performance liquid chromatography (HPLC) system we quantified recovery of TTX. We found that a newly developed micro-extraction technique significantly improved recovery compared to previously used methods. Results also indicate our improvements to the HPLC method have high repeatability and increased sensitivity, with a detection limit of 48 pg (0.15 pmol) TTX. The quantified amounts of TTX in adult newts suggest fine geographic variation in toxin levels between sampling localities isolated by as little as 3 km. PMID:24467994

  7. Quantifying the value of redundant measurements at GRUAN sites

    NASA Astrophysics Data System (ADS)

    Madonna, F.; Rosoldi, M.; Güldner, J.; Haefele, A.; Kivi, R.; Cadeddu, M. P.; Sisterson, D.; Pappalardo, G.

    2014-06-01

    The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of atmospheric water vapor provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) Stations in 2010-2012. Results show that the random uncertainties for radiosonde, frost-point hygrometer, Global Positioning System, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of the Integrated Water Vapor (IWV) content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy and therefore the highest potential to reduce random uncertainty of IWV time series estimated by radiosondes. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty resulted from conditioning of Raman lidar measurements with microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapor measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
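
    The entropy and mutual-information quantities used above can be estimated from histograms of paired time series. The sketch below uses synthetic integrated-water-vapor series with arbitrary noise levels and bin counts; it illustrates the calculation only, not the GRUAN processing.

    ```python
    import numpy as np

    def entropy_bits(x, bins=20):
        """Shannon entropy (bits) of a series, estimated from a histogram."""
        p, _ = np.histogram(x, bins=bins)
        p = p / p.sum()
        p = p[p > 0]
        return -np.sum(p * np.log2(p))

    def mutual_information_bits(x, y, bins=20):
        """Histogram estimate of I(X;Y) = H(X) + H(Y) - H(X,Y) in bits."""
        pxy, _, _ = np.histogram2d(x, y, bins=bins)
        pxy = pxy / pxy.sum()
        px, py = pxy.sum(axis=1), pxy.sum(axis=0)
        h_xy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
        h_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
        h_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
        return h_x + h_y - h_xy

    # Two synthetic IWV series from instruments observing the same column (kg m-2).
    rng = np.random.default_rng(8)
    truth = rng.gamma(shape=4.0, scale=5.0, size=5000)
    radiosonde = truth + rng.normal(0.0, 1.0, truth.size)
    radiometer = truth + rng.normal(0.0, 0.8, truth.size)
    print("H(radiosonde)             =", round(entropy_bits(radiosonde), 2), "bits")
    print("I(radiosonde; radiometer) =", round(mutual_information_bits(radiosonde, radiometer), 2), "bits")
    ```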

  8. Quantifying Tsunami Impact on Structures

    NASA Astrophysics Data System (ADS)

    Yalciner, A. C.; Kanoglu, U.; Titov, V.; Gonzalez, F.; Synolakis, C. E.

    2004-12-01

    Tsunami impact is usually assessed through inundation simulations and maps which provide estimates of coastal flooding zones based on "credible worst case" scenarios. Earlier maps relied on one-dimensional computations, but two-dimensional computations are now employed routinely. In some cases, the maps do not represent flooding from any particular scenario event, but present an inundation line that reflects the worst inundation at this particular location among a range of scenario events. Current practice in tsunami resistant design relies on estimates of tsunami impact forces derived from empirical relationships that have been borrowed from riverine flooding calculations, which involve only inundation elevations. We examine this practice critically. Recent computational advances allow for calculation of additional parameters from scenario events such as the detailed distributions of tsunami currents and fluid accelerations, and this suggests that alternative and more comprehensive expressions for calculating tsunami impact and tsunami forces should be examined. We do so, using model output for multiple inundation simulations of Seaside, Oregon, as part of a pilot project to develop probabilistic tsunami hazard assessment methodologies for incorporation into FEMA Flood Insurance Rate Maps. We consider three different methods, compare the results with existing methodology for estimating forces and impact, and discuss the implications of these methodologies for probabilistic tsunami hazard assessment.

  9. Quantifying error distributions in crowding.

    PubMed

    Hanus, Deborah; Vul, Edward

    2013-01-01

    When multiple objects are in close proximity, observers have difficulty identifying them individually. Two classes of theories aim to account for this crowding phenomenon: spatial pooling and spatial substitution. Variations of these accounts predict different patterns of errors in crowded displays. Here we aim to characterize the kinds of errors that people make during crowding by comparing a number of error models across three experiments in which we manipulate flanker spacing, display eccentricity, and precueing duration. We find that both spatial intrusions and individual letter confusions play a considerable role in errors. Moreover, we find no evidence that a naïve pooling model that predicts errors based on a nonadditive combination of target and flankers explains errors better than an independent intrusion model (indeed, in our data, an independent intrusion model is slightly, but significantly, better). Finally, we find that manipulating trial difficulty in any way (spacing, eccentricity, or precueing) produces homogeneous changes in error distributions. Together, these results provide quantitative baselines for predictive models of crowding errors, suggest that pooling and spatial substitution models are difficult to tease apart, and imply that manipulations of crowding all influence a common mechanism that impacts subject performance. PMID:23525133

  10. The missing metric: quantifying contributions of reviewers.

    PubMed

    Cantor, Maurício; Gero, Shane

    2015-02-01

    The number of contributing reviewers often outnumbers the authors of publications. This has led to apathy towards reviewing and the conclusion that the peer-review system is broken. Given the trade-offs between submitting and reviewing manuscripts, reviewers and authors naturally want visibility for their efforts. While study after study has called for revolutionizing publication practices, the current paradigm does not recognize reviewers' time and expertise. We propose the R-index as a simple way to quantify scientists' contributions as reviewers. We modelled its performance using simulations based on real data to show that early-mid career scientists, who complete high-quality reviews of longer manuscripts within their field, can perform as well as leading scientists reviewing only for high-impact journals. By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage. Moreover, the R-index will allow editors to exploit scores to manage and improve their review team, and for journals to promote high average scores as signals of a practical and efficient service to authors. Peer-review is a pervasive necessity across disciplines and the simple utility of this missing metric will credit a valuable aspect of academic productivity without having to revolutionize the current peer-review system. PMID:26064609

  11. The missing metric: quantifying contributions of reviewers

    PubMed Central

    Cantor, Maurício; Gero, Shane

    2015-01-01

    The number of contributing reviewers often outnumbers the authors of publications. This has led to apathy towards reviewing and the conclusion that the peer-review system is broken. Given the trade-offs between submitting and reviewing manuscripts, reviewers and authors naturally want visibility for their efforts. While study after study has called for revolutionizing publication practices, the current paradigm does not recognize reviewers' time and expertise. We propose the R-index as a simple way to quantify scientists' contributions as reviewers. We modelled its performance using simulations based on real data to show that early–mid career scientists, who complete high-quality reviews of longer manuscripts within their field, can perform as well as leading scientists reviewing only for high-impact journals. By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage. Moreover, the R-index will allow editors to exploit scores to manage and improve their review team, and for journals to promote high average scores as signals of a practical and efficient service to authors. Peer-review is a pervasive necessity across disciplines and the simple utility of this missing metric will credit a valuable aspect of academic productivity without having to revolutionize the current peer-review system. PMID:26064609

  12. Quantifying biomechanical motion using Procrustes motion analysis.

    PubMed

    Adams, Dean C; Cerney, Melinda M

    2007-01-01

    The ability to quantify and compare the movements of organisms is a central focus of many studies in biology, anthropology, biomechanics, and ergonomics. However, while the importance of functional motion analysis has long been acknowledged, quantitative methods for identifying differences in motion have not been widely developed. In this article, we present an approach to the functional analysis of motion and quantification of motion types. Our approach, Procrustes Motion Analysis (PMA) can be used to distinguish differences in cyclical, repeated, or goal-directed motions. PMA exploits the fact that any motion can be represented by an ordered sequence of postures exhibited throughout the course of a motion. Changes in posture from time step to time step form a trajectory through a multivariate data space, representing a specific motion. By evaluating the size, shape, and orientation of these motion trajectories, it is possible to examine variation in motion type within and among groups or even with respect to continuous variables. This represents a significant analytical advance over current approaches. Using simulated and digitized data representing cyclical, repeated and goal-directed motions, we show that PMA correctly identifies distinct motion tasks in these data sets. PMID:16448654
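
    The core idea, representing a motion as an ordered sequence of postures and comparing trajectories after superimposition, can be sketched with SciPy's Procrustes routine. The two synthetic reaching motions below and the use of a single whole-trajectory superimposition are illustrative assumptions, not the published PMA implementation.

    ```python
    import numpy as np
    from scipy.spatial import procrustes

    def motion_trajectory(postures):
        """Stack an ordered sequence of postures into a trajectory matrix.

        postures: (n_timesteps, n_landmarks, n_dims); each row of the result is one
        posture flattened into a single point in a high-dimensional shape space.
        """
        postures = np.asarray(postures, dtype=float)
        return postures.reshape(postures.shape[0], -1)

    # Two synthetic motions: the second is a rotated, rescaled, slightly noisy copy.
    rng = np.random.default_rng(9)
    t = np.linspace(0.0, 1.0, 25)
    motion_a = np.stack(
        [np.column_stack([t * k, np.sin(np.pi * t) * k]) for k in (1.0, 0.8, 1.2)], axis=1)
    theta = 0.4
    rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
    motion_b = 1.5 * motion_a @ rot.T + rng.normal(0.0, 0.01, motion_a.shape)

    # 'disparity' is the residual difference after removing translation, scale, and rotation.
    _, _, disparity = procrustes(motion_trajectory(motion_a), motion_trajectory(motion_b))
    print(round(disparity, 4))   # small: the two motions are essentially the same
    ```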

  13. Quantifying capital goods for waste incineration.

    PubMed

    Brogaard, L K; Riber, C; Christensen, T H

    2013-06-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted. PMID:23561797

  14. Asteroid Geophysics and Quantifying the Impact Hazard

    NASA Technical Reports Server (NTRS)

    Sears, D.; Wooden, D. H.; Korycanksy, D. G.

    2015-01-01

    Probably the major challenge in understanding, quantifying, and mitigating the effects of an impact on Earth is understanding the nature of the impactor. Of the roughly 25 meteorite craters on the Earth that have associated meteorites, all but one were produced by iron meteorites; only one was produced by a stony meteorite. Equally important, even meteorites of a given chemical class produce a wide variety of behavior in the atmosphere. This is because they show considerable diversity in their mechanical properties, which have a profound influence on the behavior of meteorites during atmospheric passage. Some stony meteorites are weak and do not reach the surface, or reach the surface as thousands of relatively harmless pieces. Some stony meteorites roll into a maximum drag configuration and are strong enough to remain intact, so a large single object reaches the surface. Others have high concentrations of water that may facilitate disruption. However, while meteorite falls and meteorites provide invaluable information on the physical nature of the objects entering the atmosphere, there are many unknowns concerning size and scale that can only be determined from the pre-atmospheric properties of the asteroids. Their internal structure, their thermal properties, and their internal strength and composition will all play a role in determining the behavior of the object as it passes through the atmosphere, whether it produces an airblast and at what height, and the nature of the impact and amount and distribution of ejecta.

  15. Quantifying Climatological Ranges and Anomalies for Pacific Coral Reef Ecosystems

    PubMed Central

    Gove, Jamison M.; Williams, Gareth J.; McManus, Margaret A.; Heron, Scott F.; Sandin, Stuart A.; Vetter, Oliver J.; Foley, David G.

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic–biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations; Jarvis, Howland, Baker, Palmyra and Kingman are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939

  16. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    PubMed

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations; Jarvis, Howland, Baker, Palmyra and Kingman are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939

  17. Quantifying Effective Flow and Transport Properties in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Heidari, P.; Li, L.

    2012-12-01

    Spatial heterogeneity, the spatial variation in physical and chemical properties, exists at almost all scales and is an intrinsic property of natural porous media. It is important to understand and quantify how small-scale spatial variations determine large-scale "effective" properties in order to predict fluid flow and transport behavior in the natural subsurface. In this work, we aim to systematically understand and quantify the role of the spatial distribution of sand grains of different sizes in determining effective dispersivity and effective permeability using quasi-2D flow-cell experiments and numerical simulations. Two-dimensional flow cells (20 cm by 20 cm) were packed with the same total amount of fine and coarse sands but with different spatial patterns. The homogeneous case has completely mixed fine and coarse sands. The four-zone case distributes the fine sand in four identical square zones within the coarse sand matrix. The one-square case has all the fine sand in one square block. With the one-square pattern, two more experiments were designed in order to examine the effect of grain size contrast on effective permeability and dispersivity. Effective permeability was calculated based on both experimental and modeling results. Tracer tests were run for all cases. Advection-dispersion equations were solved to match breakthrough data and to obtain average dispersivity. We also used Continuous Time Random Walk (CTRW) to quantify the non-Fickian transport behavior for each case. For the three cases with the same grain size contrast, the results show that the effective permeability does not differ significantly. The effective dispersivity is the smallest for the homogeneous case (0.05 cm) and largest for the four-zone case (0.27 cm). With the same pattern, the dispersivity value is the largest with the highest size contrast (0.28 cm), which is higher than that for the lowest-contrast case by a factor of 2. The non-Fickian behavior was quantified by the β value within the CTRW framework. Fickian transport will result in β values larger than 2, while deviation from 2 indicates the extent of non-Fickian behavior. Among the three cases with the same grain size contrast, the β value is closest to 2 in the homogeneous case (1.95), and smallest in the four-zone case (1.89). In the one-square case, with the highest size contrast, the β value was 1.57, indicating an increasing extent of non-Fickian behavior with higher size contrast. This study is one step toward understanding how small-scale spatial variation in physical properties affects large-scale flow and transport behavior. This step is important in predicting subsurface transport processes that are relevant to earth sciences, environmental engineering, and petroleum engineering.
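
    As a minimal illustration of how an average dispersivity can be extracted from breakthrough data, the sketch below fits the leading term of the one-dimensional advection-dispersion solution for continuous injection to a synthetic breakthrough curve and converts the fitted dispersion coefficient into a dispersivity. All data and parameter values are hypothetical, the 20 cm length is taken from the flow-cell dimension quoted above, and the CTRW fitting itself is not reproduced here (NumPy and SciPy assumed).

        import numpy as np
        from scipy.special import erfc
        from scipy.optimize import curve_fit

        L = 20.0  # column length, cm (flow-cell dimension quoted in the abstract)

        def ade_breakthrough(t, v, D):
            """Leading term of the 1-D ADE solution for continuous injection."""
            return 0.5 * erfc((L - v * t) / (2.0 * np.sqrt(D * t)))

        # hypothetical breakthrough data: relative concentration vs time (min)
        rng = np.random.default_rng(7)
        t_obs = np.linspace(5, 400, 40)
        c_obs = ade_breakthrough(t_obs, 0.08, 0.02) + rng.normal(0, 0.01, t_obs.size)

        (v_fit, D_fit), _ = curve_fit(ade_breakthrough, t_obs, c_obs, p0=[0.1, 0.01])
        alpha = D_fit / v_fit   # effective dispersivity (cm)
        print(f"velocity {v_fit:.3f} cm/min, dispersivity {alpha:.3f} cm")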

  18. Quantifying the infectivity of human immunodeficiency virus.

    PubMed Central

    Layne, S P; Spouge, J L; Dembo, M

    1989-01-01

    We have developed a mathematical model that quantifies lymphocyte infection by human immunodeficiency virus (HIV) and lymphocyte protection by blocking agents such as soluble CD4. We use this model to suggest standardized parameters for quantifying viral infectivity and to suggest techniques for calculating these parameters from well-mixed infectivity assays. We discuss the implications of the model for our understanding of the infectious process and virulence of HIV in vivo. PMID:2734313

  19. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  20. Hey Teacher, Your Personality's Showing!

    ERIC Educational Resources Information Center

    Paulsen, James R.

    1977-01-01

    A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)

  1. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact

  2. Portable XRF Technology to Quantify Pb in Bone In Vivo

    PubMed Central

    Specht, Aaron James; Weisskopf, Marc; Nie, Linda Huiling

    2014-01-01

    Lead is a ubiquitous toxicant. Bone lead has been established as an important biomarker for cumulative lead exposures and has been correlated with adverse health effects on many systems in the body. K-shell X-ray fluorescence (KXRF) is the standard method for measuring bone lead, but this approach has many difficulties that have limited the widespread use of this exposure assessment method. With recent advancements in X-ray fluorescence (XRF) technology, we have developed a portable system that can quantify lead in bone in vivo within 3 minutes. Our study investigated improvements to the system, four calibration methods, and system validation for in vivo measurements. Our main results show that the detection limit of the system is 2.9 ppm with 2 mm soft tissue thickness, the best calibration method for in vivo measurement is background subtraction, and there is strong correlation between KXRF and portable LXRF bone lead results. Our results indicate that the technology is ready to be used in large human population studies to investigate adverse health effects of lead exposure. The portability of the system and fast measurement time should allow for this technology to greatly advance the research on lead exposure and public/environmental health. PMID:26317033

  3. Critical issues in quantifying line edge roughness

    NASA Astrophysics Data System (ADS)

    Nikitin, A.; Sicignano, A.; Yeremin, D.; Sandy, M.; Goldburt, T.

    2005-05-01

    The problem of quantifying LER in the semiconductor industry has become critical with sub-100nm node manufacturing. However, routine methods for LER measurement that meet the needs of industry have not been reported. Even LER itself has not been defined unambiguously, and the length of the photoresist structure over which LER is measured has not been standardized. Meanwhile, demands for precision in LER calculations have been put forward without accounting for the statistical nature of this parameter. In addition, the algorithms used for feature edge localization when performing LER measurements frequently have free parameters, which makes LER estimation ambiguous and prevents comparison of LER values for the same feature. In particular, without taking into account the influence of signal noise in the SEM video, the LER measurements obtained will have contributions from both the measured feature and the measuring tool (SEM); measurements done in this manner yield LER values that exceed the true LER. Moreover, when measured objects have aspect ratios exceeding three, it is not clear which position along the cross-sectional height of the object (bottom, top, or some intermediate level) the measured values correspond to. The above issues make the interpretation of obtained results very difficult and significantly reduce the reliability and value of the LER measurement results present in the referenced literature. Nanometrology has developed a new concept for LER measurements that is free of many of the disadvantages mentioned above. It is based on the definition of LER as "a standard deviation of the factual edge position on SEM scan lines from an approximated straight line". Nanometrology's use of a patented algorithm for edge localization of 3D objects results in the measurement of the bottom CD of photoresist structures. Our algorithms do not have free parameters. These algorithms have been incorporated into a CD measurement software package called CD-LER.
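
    The quoted definition lends itself to a very small computation: fit a straight line to the per-scan-line edge positions and report the standard deviation of the residuals. The sketch below does exactly that on synthetic edge data; it is an illustration of the definition, not the CD-LER package, and it does not include any correction for SEM noise (NumPy assumed).

        import numpy as np

        def line_edge_roughness(edge_positions, scan_coords):
            """LER as the standard deviation of measured edge positions about a
            straight line fitted to them (one edge position per SEM scan line)."""
            slope, intercept = np.polyfit(scan_coords, edge_positions, 1)
            residuals = edge_positions - (slope * scan_coords + intercept)
            return residuals.std(ddof=1)

        # hypothetical edge positions (nm) along 100 scan lines
        rng = np.random.default_rng(1)
        x = np.arange(100) * 2.0                        # scan-line coordinate, nm
        edge = 0.01 * x + rng.normal(0, 2.5, x.size)    # slightly tilted, noisy edge
        print(f"LER (1-sigma): {line_edge_roughness(edge, x):.2f} nm")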

  4. Quantifying capital goods for biological treatment of organic waste.

    PubMed

    Brogaard, Line K; Petersen, Per H; Nielsen, Peter D; Christensen, Thomas H

    2015-02-01

    Materials and energy used for construction of anaerobic digestion (AD) and windrow composting plants were quantified in detail. The two technologies were quantified in collaboration with consultants and producers of the parts used to construct the plants. The composting plants were quantified based on the different sizes for the three different types of waste (garden and park waste, food waste and sludge from wastewater treatment) in amounts of 10,000 or 50,000 tonnes per year. The AD plant was quantified for a capacity of 80,000 tonnes per year. Concrete and steel for the tanks were the main materials for the AD plant. For the composting plants, gravel and concrete slabs for the pavement were used in large amounts. To frame the quantification, environmental impact assessments (EIAs) showed that the steel used for tanks at the AD plant and the concrete slabs at the composting plants made the highest contribution to Global Warming. The total impact on Global Warming from the capital goods compared to the operation reported in the literature on the AD plant showed an insignificant contribution of 1-2%. For the composting plants, the capital goods accounted for 10-22% of the total impact on Global Warming from composting. PMID:25595291

  5. Identifying and quantifying urban recharge: a review

    NASA Astrophysics Data System (ADS)

    Lerner, David N.

    2002-02-01

    The sources of and pathways for groundwater recharge in urban areas are more numerous and complex than in rural environments. Buildings, roads, and other surface infrastructure combine with man-made drainage networks to change the pathways for precipitation. Some direct recharge is lost, but additional recharge can occur from storm drainage systems. Large amounts of water are imported into most cities for supply, distributed through underground pipes, and collected again in sewers or septic tanks. The leaks from these pipe networks often provide substantial recharge. Sources of recharge in urban areas are identified through piezometry, chemical signatures, and water balances. All three approaches have problems. Recharge is quantified either by individual components (direct recharge, water-mains leakage, septic tanks, etc.) or holistically. Working with individual components requires large amounts of data, much of which is uncertain and is likely to lead to large uncertainties in the final result. Recommended holistic approaches include the use of groundwater modelling and solute balances, where various types of data are integrated. Urban recharge remains an under-researched topic, with few high-quality case studies reported in the literature.

  6. Quantifying the Complexity of Flaring Active Regions

    NASA Technical Reports Server (NTRS)

    Stark, B.; Hagyard, M. J.

    1997-01-01

    While solar physicists have a better understanding of the important role magnetic fields play in the solar heating mechanism, it is still not possible to predict whether or when an active region will flare. In recent decades, qualitative studies of the changes in active region morphology have shown that there is generally an increase in the complexity of the spatial configuration of a solar active region leading up to a flare event. In this study, we quantify the spatial structure of the region using the differential Box-Counting Method (DBC) of fractal analysis. We analyze data from NASA/Marshall Space Flight Center's vector magnetograph for two flaring active regions: AR 6089 from June 10, 1990, which produced one M1.7 flare, and AR 6659 from June 8, 9, and 10, 1991, a data set that includes one C5.7 flare and two M-class flares (M6.4 and M3.2). (AR 6659 produced several other flares.) Several magnetic parameters are studied, including the transverse and longitudinal magnetic field components (Bt and Bl), the total field (Bmag), and the magnetic shear, which describes the non-potentiality of the field. Results are presented for the time series of magnetograms in relation to the timing of flare events.
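
    For reference, a bare-bones version of the differential box-counting estimate is sketched below for a generic 2-D field. It follows the commonly used formulation (box height scaled to the intensity range, counts summed per box size, dimension from the log-log slope); the grid sizes and the random placeholder field are illustrative assumptions, and this is not the authors' implementation (NumPy assumed).

        import numpy as np

        def dbc_fractal_dimension(img, box_sizes=(2, 4, 8, 16, 32)):
            """Differential box-counting estimate of the fractal dimension of a
            2-D field (e.g. a magnetic-field map) treated as a gray-level surface."""
            img = np.asarray(img, dtype=float)
            M = min(img.shape)
            G = img.max() - img.min() + 1e-12
            counts = []
            for s in box_sizes:
                h = s * G / M                           # box height in intensity units
                n_r = 0
                for i in range(0, M - s + 1, s):
                    for j in range(0, M - s + 1, s):
                        block = img[i:i + s, j:j + s]
                        n_r += int(np.ceil((block.max() - img.min()) / h)
                                   - np.ceil((block.min() - img.min()) / h)) + 1
                counts.append(n_r)
            # slope of log N(s) against log(1/s) gives the dimension estimate
            slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
            return slope

        field = np.random.default_rng(2).random((128, 128))   # placeholder for magnetogram data
        print(f"estimated fractal dimension: {dbc_fractal_dimension(field):.2f}")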

  7. Quantifying ant activity using vibration measurements.

    PubMed

    Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C S; Evans, Theodore A

    2014-01-01

    Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually, however there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal owing to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467
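
    A generic version of the wavelet separation idea can be sketched as follows: decompose the vibration record, treat the coarse approximation as a slowly varying substrate-like component, and take the residual as the fast, impulsive activity-like component. This is only an illustration of the principle, not the authors' method; it assumes the PyWavelets (pywt) package is available, and the wavelet choice, decomposition level, and synthetic trace are arbitrary.

        import numpy as np
        import pywt

        def separate_components(signal, wavelet="db4", level=6):
            """Generic wavelet split of a vibration record into a slowly varying
            (substrate-like) part and a fast, impulsive (activity-like) part."""
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            # keep only the approximation -> smooth substrate-like response
            smooth = pywt.waverec([coeffs[0]] + [np.zeros_like(c) for c in coeffs[1:]],
                                  wavelet)[: len(signal)]
            impulsive = signal - smooth                 # residual carries the fast transients
            return smooth, impulsive

        # hypothetical accelerometer trace: slow drift plus sparse impulses plus noise
        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 5000)
        trace = 0.2 * np.sin(2 * np.pi * 0.3 * t) + rng.normal(0, 0.01, t.size)
        trace[::500] += 0.5                             # impulses standing in for ant activity
        substrate, activity = separate_components(trace)
        print(f"RMS of activity-like component: {np.sqrt(np.mean(activity**2)):.3f}")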

  8. Quantifying truncation errors in effective field theory

    NASA Astrophysics Data System (ADS)

    Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

    2015-08-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.

  9. Quantifying Access Disparities in Response Plans

    PubMed Central

    Indrakanti, Saratchandra; Mikler, Armin R.; O’Neill, Martin; Tiwari, Chetan

    2016-01-01

    Effective response planning and preparedness are critical to the health and well-being of communities in the face of biological emergencies. Response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points (PODs) within a region, overall population density, and estimated traffic demands. However, the plan may fail to serve particular vulnerable subpopulations, resulting in access disparities during emergency response. For a response plan to be effective, sufficient mitigation resources must be made accessible to target populations within short, federally-mandated time frames. A major challenge in response plan design is to establish a balance between the allocation of available resources and the provision of equal access to PODs for all individuals in a given geographic region. Limitations on the availability, granularity, and currency of data to identify vulnerable populations further complicate the planning process. To address these challenges and limitations, data-driven methods to quantify vulnerabilities in the context of response plans have been developed and are explored in this article. PMID:26771551

  10. Quantifying Wrinkle Features of Thin Membrane Structures

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.

    2004-01-01

    For future micro-systems utilizing membrane-based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

  11. Quantifying Ant Activity Using Vibration Measurements

    PubMed Central

    Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C. S.; Evans, Theodore A.

    2014-01-01

    Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually, however there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal owing to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467

  12. Quantifying selection in immune receptor repertoires

    PubMed Central

    Elhanati, Yuval; Murugan, Anand; Callan, Curtis G.; Mora, Thierry; Walczak, Aleksandra M.

    2014-01-01

    The efficient recognition of pathogens by the adaptive immune system relies on the diversity of receptors displayed at the surface of immune cells. T-cell receptor diversity results from an initial random DNA editing process, called VDJ recombination, followed by functional selection of cells according to the interaction of their surface receptors with self and foreign antigenic peptides. Using high-throughput sequence data from the β-chain of human T-cell receptors, we infer factors that quantify the overall effect of selection on the elements of receptor sequence composition: the V and J gene choice and the length and amino acid composition of the variable region. We find a significant correlation between biases induced by VDJ recombination and our inferred selection factors together with a reduction of diversity during selection. Both effects suggest that natural selection acting on the recombination process has anticipated the selection pressures experienced during somatic evolution. The inferred selection factors differ little between donors or between naive and memory repertoires. The number of sequences shared between donors is well-predicted by our model, indicating a stochastic origin of such public sequences. Our approach is based on a probabilistic maximum likelihood method, which is necessary to disentangle the effects of selection from biases inherent in the recombination process. PMID:24941953

  13. Quantifying Energy Intake Changes during Obesity Pharmacotherapy

    PubMed Central

    Göbel, Britta; Sanghvi, Arjun; Hall, Kevin D.

    2014-01-01

    Objective: Despite the fact that most obesity drugs primarily work by reducing metabolizable energy intake, elucidation of the time course of energy intake changes during long-term obesity pharmacotherapy has been prevented by the limitations of self-report methods of measuring energy intake. Methods: We used a validated mathematical model of human metabolism to provide the first quantification of metabolizable energy intake changes during long-term obesity pharmacotherapy using body weight data from randomized, placebo-controlled trials that evaluated 14 different drugs or drug combinations. Results: Changes in metabolizable energy intake during obesity pharmacotherapy were reasonably well-described by an exponential pattern comprising three simple parameters, with early large changes in metabolizable energy intake followed by a slow transition to a smaller persistent drug effect. Conclusions: Repeated body weight measurements along with a mathematical model of human metabolism can be used to quantify changes in metabolizable energy intake during obesity pharmacotherapy. The calculated metabolizable energy intake changes followed an exponential time course, and therefore different drugs can be evaluated and compared using a common mathematical framework. PMID:24961931
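
    One plausible three-parameter exponential of the kind described, delta_EI(t) = d_ss + (d_0 - d_ss) * exp(-t / tau), can be fitted to intake-change estimates as sketched below. The functional form is a reasonable reading of the abstract rather than the paper's exact parameterization, and the drug-minus-placebo values are invented for illustration (NumPy and SciPy assumed).

        import numpy as np
        from scipy.optimize import curve_fit

        def delta_intake(t, d0, dss, tau):
            """Three-parameter exponential: early change d0 relaxing toward a
            persistent drug effect dss with time constant tau (days)."""
            return dss + (d0 - dss) * np.exp(-t / tau)

        # hypothetical drug-minus-placebo intake changes (kcal/d) at trial time points
        t_obs = np.array([14, 28, 56, 112, 182, 273, 365], dtype=float)
        d_obs = np.array([-650, -520, -380, -260, -210, -190, -185], dtype=float)

        (d0, dss, tau), _ = curve_fit(delta_intake, t_obs, d_obs, p0=[-700, -200, 60])
        print(f"initial change {d0:.0f} kcal/d, persistent effect {dss:.0f} kcal/d, tau {tau:.0f} d")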

  14. Quantifying dynamical spillover in co-evolving multiplex networks.

    PubMed

    Vijayaraghavan, Vikram S; Noël, Pierre-André; Maoz, Zeev; D'Souza, Raissa M

    2015-01-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of "dynamical spillover" showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways. PMID:26459949
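
    The sketch below illustrates the general idea of a joint edge-state Markov chain for two layers and an independent-layers null: encode each dyad's pair of edge states as one of four joint states, estimate the transition matrix from consecutive snapshots, and compare it with the product of the marginal per-layer chains. This is a simplified illustration, not the authors' estimator; the data are synthetic and the function names are made up (NumPy assumed).

        import numpy as np

        def multiplex_chain(layer_a, layer_b):
            """Joint 4-state edge-dynamics chain for a two-layer network, plus a
            null chain that treats the layers as independent.

            layer_a, layer_b : binary arrays of shape (n_snapshots, n_dyads), where
            entry [t, d] says whether dyad d has an edge in that layer at time t.
            """
            state = 2 * layer_a + layer_b                   # encode (a, b) as 0..3
            counts = np.zeros((4, 4))
            for t in range(state.shape[0] - 1):
                for s0, s1 in zip(state[t], state[t + 1]):
                    counts[s0, s1] += 1
            P = counts / counts.sum(axis=1, keepdims=True)  # observed joint chain

            # marginal per-layer transition counts, then an independent-layers null
            ca = np.zeros((2, 2)); cb = np.zeros((2, 2))
            for i in range(4):
                for j in range(4):
                    ca[i // 2, j // 2] += counts[i, j]
                    cb[i % 2, j % 2] += counts[i, j]
            pa = ca / ca.sum(axis=1, keepdims=True)
            pb = cb / cb.sum(axis=1, keepdims=True)
            null = np.array([[pa[i // 2, j // 2] * pb[i % 2, j % 2] for j in range(4)]
                             for i in range(4)])
            return P, null

        # hypothetical snapshots: 50 time steps, 200 dyads, layer b partly copies layer a
        rng = np.random.default_rng(4)
        a = rng.integers(0, 2, size=(50, 200))
        b = np.where(rng.random(a.shape) < 0.7, a, rng.integers(0, 2, size=a.shape))
        P, null = multiplex_chain(a, b)
        print("observed minus independent-null transition probabilities:")
        print(np.round(P - null, 3))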

  15. Methods for quantifying Staphylococcus aureus in indoor air.

    PubMed

    Chang, C-W; Wang, L-J

    2015-02-01

    Staphylococcus aureus has been detected in indoor air and linked to human infection. Quantifying S. aureus by efficient sampling methods followed by appropriate sample storage treatments is essential to characterize the exposure risk of humans. This laboratory study evaluated the effects of sampler type (all-glass impinger (AGI-30), BioSampler, and Andersen one-stage sampler (Andersen 1-STG)), collection fluid (deionized water (DW), phosphate-buffered saline (PBS), and Tween mixture (TM)), and sampling time (3-60 min) on cell recovery. Effects of storage settings on bacterial concentration were also assessed over 48 h. Results showed BioSampler performed better than Andersen 1-STG and AGI-30 (P < 0.05) and TM was superior to PBS and DW (P < 0.05). An increase in sampling time negatively affected the recoveries of cells in PBS of BioSampler and AGI-30 (P < 0.05), whereas cell recoveries in TM were increased at sampling of 6-15 min compared with 3 min. Concentrations of cells collected in PBS were decreased with storage time at 4 and 23 °C (P < 0.05), while cells stored in TM showed stable concentrations at 4 °C (P > 0.05) and increased cell counts at 23 °C (P < 0.05). Overall, sampling by BioSampler with TM followed by sample transportation and storage at 4 °C is recommended. PMID:24773454

  16. Quantifying dynamical spillover in co-evolving multiplex networks

    PubMed Central

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D’Souza, Raissa M.

    2015-01-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways. PMID:26459949

  17. Quantifying dynamical spillover in co-evolving multiplex networks

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D'Souza, Raissa M.

    2015-10-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways.

  18. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and from 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.
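
    Under the stated assumption that the true emissivity is roughly constant over each site, a systematic difference can be read as a dataset's bias relative to the multi-product mean and the random error as the within-dataset scatter. The sketch below computes these two summaries for hypothetical monthly retrievals from three products; the product names, bias and noise levels are invented, and this is not the study's actual analysis (NumPy assumed).

        import numpy as np

        def retrieval_error_summary(retrievals):
            """Summarise differences among emissivity retrieval datasets over a
            period when the true surface emissivity is assumed constant.

            retrievals : dict mapping dataset name -> 1-D array of retrieved
            emissivities (same channel, same site, one value per month).
            """
            means = {k: np.mean(v) for k, v in retrievals.items()}
            grand = np.mean(list(means.values()))
            systematic = {k: m - grand for k, m in means.items()}     # dataset bias
            random_err = {k: np.std(v, ddof=1) for k, v in retrievals.items()}
            return systematic, random_err

        # hypothetical 36 monthly retrievals at one desert site from three products
        rng = np.random.default_rng(5)
        data = {name: 0.93 + bias + rng.normal(0, sd, 36)
                for name, bias, sd in [("SSM/I", 0.000, 0.008),
                                       ("TMI", 0.015, 0.006),
                                       ("AMSR-E", -0.010, 0.010)]}
        sys_err, rnd_err = retrieval_error_summary(data)
        print(sys_err, rnd_err)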

  19. Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2012-01-01

    Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging from 1% to 4% (3-12 K) over desert and from 1% to 7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  20. Quantifying uncertainty in the phylogenetics of Australian numeral systems.

    PubMed

    Zhou, Kevin; Bowern, Claire

    2015-09-22

    Researchers have long been interested in the evolution of culture and the ways in which change in cultural systems can be reconstructed and tracked. Within the realm of language, these questions are increasingly investigated with Bayesian phylogenetic methods. However, such work in cultural phylogenetics could be improved by more explicit quantification of reconstruction and transition probabilities. We apply such methods to numerals in the languages of Australia. As a large phylogeny with almost universal 'low-limit' systems, Australian languages are ideal for investigating numeral change over time. We reconstruct the most likely extent of the system at the root and use that information to explore the ways numerals evolve. We show that these systems do not increment serially, but most commonly vary their upper limits between 3 and 5. While there is evidence for rapid system elaboration beyond the lower limits, languages lose numerals as well as gain them. We investigate the ways larger numerals build on smaller bases, and show that there is a general tendency to both gain and replace 4 by combining 2 + 2 (rather than inventing a new unanalysable word 'four'). We develop a series of methods for quantifying and visualizing the results. PMID:26378214

  1. Quantifying Annual Aboveground Net Primary Production in the Intermountain West

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As part of a larger project, methods were developed to quantify current year growth on grasses, forbs, and shrubs. Annual aboveground net primary production (ANPP) data are needed for this project to calibrate results from computer simulation models and remote-sensing data. Measuring annual ANPP of ...

  2. Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers

    NASA Astrophysics Data System (ADS)

    Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo

    2009-03-01

    We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover's Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of the Jensen-Shannon divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare's work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
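
    A minimal version of such quantifiers can be computed from word counts alone: the normalised Shannon entropy of the word-frequency distribution and a statistical complexity formed by weighting that entropy with the Jensen-Shannon divergence from the uniform distribution (normalised by its maximum value). The sketch below follows that common construction; it is an assumption about the exact quantifiers used, and the toy text is for illustration only (NumPy assumed).

        import numpy as np
        from collections import Counter

        def shannon(p):
            p = p[p > 0]
            return -np.sum(p * np.log2(p))

        def complexity_quantifiers(text):
            """Normalised entropy H and a Jensen-Shannon-based statistical
            complexity C = Q_JS(P, uniform) * H for a text's word-frequency PDF."""
            words = text.lower().split()
            counts = np.array(list(Counter(words).values()), dtype=float)
            p = counts / counts.sum()
            n = p.size
            u = np.full(n, 1.0 / n)                        # uniform reference distribution
            h = shannon(p) / np.log2(n)                    # normalised entropy
            m = 0.5 * (p + u)
            jsd = shannon(m) - 0.5 * shannon(p) - 0.5 * shannon(u)
            # maximum possible divergence against the uniform PDF, for normalisation
            q0 = -0.5 * ((n + 1) / n * np.log2(n + 1) + np.log2(n) - 2 * np.log2(2 * n))
            return h, (jsd / q0) * h

        h, c = complexity_quantifiers("to be or not to be that is the question")
        print(f"normalised entropy {h:.3f}, statistical complexity {c:.3f}")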

  3. Arches showing UV flaring activity

    NASA Technical Reports Server (NTRS)

    Fontenla, J. M.

    1988-01-01

    The UVSP data obtained during the previous maximum of the activity cycle show the frequent appearance of flaring events in the UV. In many cases these flaring events are characterized by at least two footpoints which show compact, impulsive, non-simultaneous brightenings, and a fainter but clearly observed arch develops between the footpoints. These arches and footpoints are observed in lines corresponding to different temperatures, such as Lyman alpha, N V, and C IV, and when observed above the limb they display large Doppler shifts at some stages. The size of the arches can be larger than 20 arcsec.

  4. Quantifying Sentiment and Influence in Blogspaces

    SciTech Connect

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

    The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as a sort of informal collection of mass sentiment and opinion. An obvious topic of interest might be to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics, poses some serious challenges when any meaningful analysis is attempted. Namely, the fact that largely anyone with access to the Internet can author their own blog raises the serious issue of credibility: should some blogs be considered to be more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on almost a constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific end goal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.

  5. Quantifying consistent individual differences in habitat selection.

    PubMed

    Leclerc, Martin; Vander Wal, Eric; Zedrosser, Andreas; Swenson, Jon E; Kindberg, Jonas; Pelletier, Fanie

    2016-03-01

    Habitat selection is a fundamental behaviour that links individuals to the resources required for survival and reproduction. Although natural selection acts on an individual's phenotype, research on habitat selection often pools inter-individual patterns to provide inferences on the population scale. Here, we expanded a traditional approach of quantifying habitat selection at the individual level to explore the potential for consistent individual differences in habitat selection. We used random coefficients in resource selection functions (RSFs) and repeatability estimates to test for variability in habitat selection. We applied our method to a detailed dataset of GPS relocations of brown bears (Ursus arctos) taken over a period of 6 years, and assessed whether they displayed repeatable individual differences in habitat selection toward two habitat types: bogs and recent timber-harvest cut blocks. In our analyses, we controlled for the availability of habitat, i.e. the functional response in habitat selection. Repeatability estimates of habitat selection toward bogs and cut blocks were 0.304 and 0.420, respectively. Therefore, 30.4% and 42.0% of the population-scale habitat selection variability for bogs and cut blocks, respectively, was due to differences among individuals, suggesting that consistent individual variation in habitat selection exists in brown bears. Using simulations, we posit that repeatability values of habitat selection are not related to the value and significance of β estimates in RSFs. Although individual differences in habitat selection could be the result of several non-exclusive factors, our results illustrate the evolutionary potential of habitat selection. PMID:26597548
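
    The variance-partitioning logic behind a repeatability estimate can be illustrated with a simple one-way ANOVA intraclass correlation: the among-individual variance divided by the total variance of an individual-level selection estimate. The sketch below applies this to invented per-bear, per-year selection coefficients; it is a didactic simplification of the random-coefficient RSF approach, not the authors' model (NumPy assumed).

        import numpy as np

        def repeatability(values, individual_ids):
            """One-way ANOVA estimate of repeatability (intraclass correlation):
            the share of total variance attributable to consistent differences
            among individuals."""
            values = np.asarray(values, float)
            ids = np.asarray(individual_ids)
            groups = [values[ids == g] for g in np.unique(ids)]
            k = len(groups)
            n = np.array([len(g) for g in groups])
            grand = values.mean()
            ss_among = np.sum(n * (np.array([g.mean() for g in groups]) - grand) ** 2)
            ss_within = np.sum([np.sum((g - g.mean()) ** 2) for g in groups])
            ms_among = ss_among / (k - 1)
            ms_within = ss_within / (values.size - k)
            n0 = (values.size - np.sum(n ** 2) / values.size) / (k - 1)  # effective group size
            var_among = (ms_among - ms_within) / n0
            return var_among / (var_among + ms_within)

        # hypothetical yearly selection coefficients for bogs, 20 bears x 6 years
        rng = np.random.default_rng(6)
        bear_effect = rng.normal(0, 0.6, 20)
        ids = np.repeat(np.arange(20), 6)
        coef = -0.5 + bear_effect[ids] + rng.normal(0, 0.8, ids.size)
        print(f"repeatability of habitat selection: {repeatability(coef, ids):.2f}")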

  6. Interpolating Quantifier-Free Presburger Arithmetic

    NASA Astrophysics Data System (ADS)

    Kroening, Daniel; Leroux, Jérôme; Rümmer, Philipp

    Craig interpolation has become a key ingredient in many symbolic model checkers, serving as an approximative replacement for expensive quantifier elimination. In this paper, we focus on an interpolating decision procedure for the full quantifier-free fragment of Presburger Arithmetic, i.e., linear arithmetic over the integers, a theory which is a good fit for the analysis of software systems. In contrast to earlier procedures based on quantifier elimination and the Omega test, our approach uses integer linear programming techniques: relaxation of interpolation problems to the rationals, and a complete branch-and-bound rule tailored to efficient interpolation. Equations are handled via a dedicated polynomial-time sub-procedure. We have fully implemented our procedure on top of the SMT-solver OpenSMT and present an extensive experimental evaluation.

  7. Quantifying the reheating temperature of the universe

    NASA Astrophysics Data System (ADS)

    Mazumdar, Anupam; Zaldívar, Bryan

    2014-09-01

    The aim of this paper is to determine an exact definition of the reheat temperature for a generic perturbative decay of the inflaton. In order to estimate the reheat temperature, there are two important conditions one needs to satisfy: (a) the decay products of the inflaton must dominate the energy density of the universe, i.e. the universe becomes completely radiation dominated, and (b) the decay products of the inflaton have attained local thermodynamical equilibrium. For some choices of parameters, the latter is a more stringent condition, such that the decay products may thermalise much after the beginning of radiation domination. Consequently, we find that the reheat temperature can be much lower than the standard-lore estimate. In this paper we describe under what conditions our universe could have efficient or inefficient thermalisation, and quantify the reheat temperature for both scenarios. This result has an immediate impact on many applications which rely on the thermal history of the universe, in particular the gravitino abundance. Three thermalisation regimes are distinguished:
    - Instant thermalisation: the inflaton decay products instantly thermalise upon decay.
    - Efficient thermalisation: the inflaton decay products thermalise right at the instant when the radiation epoch starts dominating the universe.
    - Delayed thermalisation: the inflaton decay products thermalise deep inside the radiation dominated epoch, after the transition from inflaton-to-radiation domination has occurred.
    This paper is organised as follows. In Section 2 we set the stage and write down the relevant equations for our analysis. The standard lore about the reheating epoch is briefly reviewed in Section 3. Section 4 is devoted to presenting our analysis, in which we study the conditions under which the plasma attains thermalisation. Later on, in Section 5 we discuss the concept of reheat temperature so as to properly capture the issues of thermalisation. Finally, we conclude in Section 6.
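
    For orientation, the standard-lore estimate that the paper re-examines follows from equating the inflaton decay rate with the Hubble rate at the end of reheating and assuming instant thermalisation of the decay products; with g_* relativistic degrees of freedom and the reduced Planck mass M_Pl, it reads (in our notation, not necessarily that of the paper):

        \Gamma_\phi \simeq H(T_{\rm rh})
        \quad\Longrightarrow\quad
        T_{\rm rh} \simeq \left(\frac{90}{\pi^{2} g_{*}}\right)^{1/4} \sqrt{\Gamma_\phi \, M_{\rm Pl}}

    The paper's point is that when thermalisation is delayed, this expression overestimates the temperature of the thermal bath at the onset of the radiation era.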

  8. Quantified Histopathology of the Keratoconic Cornea

    PubMed Central

    Mathew, Jessica H.; Goosey, John D.; Bergmanson, Jan P. G.

    2011-01-01

    Purpose: The present study systematically investigated and quantified histopathological changes in a series of keratoconic (Kc) corneas utilizing a physiologically formulated fixative to not further distort the already distorted diseased corneas. Methods: Twelve surgically removed Kc corneal buttons were immediately preserved and processed for light and transmission electron microscopy using an established corneal protocol. Measurements were taken from the central cone and peripheral regions of the host button. The sample size examined ranged in length from 390 to 2608 µm centrally and from 439 to 2242 µm peripherally. Results: The average corneal thickness was 437 µm centrally and 559 µm peripherally. Epithelial thickness varied centrally from 14 to 92 µm and peripherally from 30 to 91 µm. A marked thickening of the epithelial basement membrane was noted in 58% of corneas. Centrally, the anterior limiting lamina (ALL) was thinned or lost over 60% of the area examined, while the peripheral cornea was also affected, but to a lesser extent. Histopathologically, the posterior cornea remained undisturbed by the disease. Anteriorly in the stroma, an increased number of cells and tissue debris were encountered and some of these cells were clearly not keratocytes. Conclusions: It is concluded that Kc pathology, at least initially, has a distinct anterior focus involving the epithelium, ALL and anterior stroma. The epithelium had lost its cellular uniformity and was compromised by the loss of or damage to the ALL. The activity of the hitherto unreported recruited stromal cells may be to break down and remove ALL and anterior stromal lamellae, leading to the overall thinning that accompanies this disease. PMID:21623252

  9. Quantifying and Communicating Uncertainty in Preclinical Human Dose-Prediction

    PubMed Central

    Sundqvist, M; Lundahl, A; Någård, MB; Bredberg, U; Gennemark, P

    2015-01-01

    Human dose-prediction is fundamental for ranking lead-optimization compounds in drug discovery and to inform design of early clinical trials. This tutorial describes how uncertainty in such predictions can be quantified and efficiently communicated to facilitate decision-making. Using three drug-discovery case studies, we show how several uncertain pieces of input information can be integrated into one single uncomplicated plot with key predictions, including their uncertainties, for many compounds or for many scenarios, or both. PMID:26225248

  10. ENVITEC shows off air technologies

    SciTech Connect

    McIlvaine, R.W.

    1995-08-01

    The ENVITEC International Trade Fair for Environmental Protection and Waste Management Technologies, held in June in Duesseldorf, Germany, is the largest air pollution exhibition in the world and may be the largest environmental technology show overall. Visitors saw thousands of environmental solutions from 1,318 companies representing 29 countries and occupying roughly 43,000 square meters of exhibit space. Many innovations were displayed under the category "thermal treatment of air pollutants." New technologies include the following: regenerative thermal oxidizers; wet systems for removing pollutants; biological scrubbers; electrostatic precipitators; selective adsorption systems; activated-coke adsorbers; optimization of scrubber systems; and air pollution monitors.

  11. Graphene Oxides Show Angiogenic Properties.

    PubMed

    Mukherjee, Sudip; Sriram, Pavithra; Barui, Ayan Kumar; Nethi, Susheel Kumar; Veeriah, Vimal; Chatterjee, Suvro; Suresh, Kattimuttathu Ittara; Patra, Chitta Ranjan

    2015-08-01

    Angiogenesis, a process resulting in the formation of new capillaries from the pre-existing vasculature, plays a vital role in the development of therapeutic approaches for cancer, atherosclerosis, wound healing, and cardiovascular diseases. In this report, the synthesis, characterization, and angiogenic properties of graphene oxide (GO) and reduced graphene oxide (rGO) have been demonstrated, observed through several in vitro and in vivo angiogenesis assays. The results here demonstrate that the intracellular formation of reactive oxygen species and reactive nitrogen species as well as activation of phospho-eNOS and phospho-Akt might be the plausible mechanisms for GO- and rGO-induced angiogenesis. The results altogether suggest possibilities for the development of an alternative angiogenic therapeutic approach for the treatment of cardiovascular-related diseases where angiogenesis plays a significant role. PMID:26033847

  12. Quantifying thresholds in state space to advance our understanding of emergent behavior

    NASA Astrophysics Data System (ADS)

    Lintz, H. E.; Graham, C. B.

    2011-12-01

    Thresholds are common across diverse systems and scales and often represent emergent, complex behavior. While thresholds are a widely accepted concept, most empirical methods focus on their detection in time. Although threshold detection is useful, it does not quantify the direct drivers of the threshold response. Causal understanding of thresholds detected empirically requires their investigation in a multi-factor domain containing the direct drivers (often referred to as state space). Here, we present a new approach that quantifies threshold strength from response surfaces modeled in state space. We illustrate how this method can be used to study and better understand mechanisms that drive thresholds resulting from interactions among multiple factors. In particular, we examine stream threshold response to storm precipitation and antecedent wetness and ask how climate and catchment factors modulate local interactions that determine threshold strength. We pair data from the basin outlet of USGS gauging stations within 1 kilometer of meteorological stations with data from the nearest met-station. Non-parametric multiplicative regression (NPMR) is used to build response surfaces of flow with respect to antecedent wetness indices and storm precipitation. We quantify threshold strength using a threshold strength index applied to response surfaces that are built for each gauging station. We show how the approach can be used to study and better understand mechanisms that drive multi-factor thresholds resulting from interactions across scales. We find that catchment characteristics modulate the domain of interaction (between storm precipitation and antecedent wetness) that exhibits the strongest thresholds in runoff. We argue that our method and results can advance mechanistic understanding of hydrologic thresholds in stream response across catchments. Finally, we also argue that the relative strength of multi-factor thresholds exhibited by a system or across systems should be quantified and compared in state space. In so doing, we can enhance our understanding of threshold behavior across systems and disciplines.

  13. Quantifying biased response of axon to chemical gradient steepness in a microfluidic device.

    PubMed

    Xiao, Rong-Rong; Wang, Lei; Zhang, Lin; Liu, Yu-Ning; Yu, Xiao-Lei; Huang, Wei-Hua

    2014-12-01

    Axons are very sensitive to molecular gradients and can discriminate extremely small differences in gradient steepness. Microfluidic devices capable of generating chemical gradients and adjusting their steepness could be used to quantify the sensitivity of axonal response. Here, we present a versatile and robust microfluidic device that can generate substrate-bound molecular gradients with evenly varying steepness on a single chip to precisely quantify axonal response. In this device, two solutions are perfused into a central channel via two inlets while partially flowing into two peripheral channels through interconnecting grooves, which gradually decrease the fluid velocity along the central channel. Molecular gradients with evenly and gradually decreased steepness can therefore be generated with a high resolution that is less than 0.05%/mm. In addition, the overall distribution range and resolution of the gradient steepness can be highly and flexibly controlled by adjusting various parameters of the device. Using this device, we quantified the hippocampal axonal response to substrate-bound laminin and ephrin-A5 gradients with varying steepnesses. Our results provided more detailed information on how and to what extent different steepnesses guide hippocampal neuron development during the initial outgrowth. Furthermore, our results show that axons can sensitively respond to very shallow laminin and ephrin-A5 gradients, which could effectively initiate biased differentiation of hippocampal neurons in the steepness range investigated in this study. PMID:25381866

  14. What Do Blood Tests Show?

    MedlinePLUS

    ... might be a sign of a disorder or disease. Other factors—such as diet, menstrual cycle, physical activity level, alcohol intake, and medicines (both prescription and over the counter)—also can cause abnormal results. Your ... problem. Many diseases and medical problems can't be diagnosed with ...

  15. Erythropoietic protoporphyria showing solar purpura.

    PubMed

    Torinuki, W; Miura, T

    1983-01-01

    An 11-year-old girl with erythropoietic protoporphyria is described. She was admitted to our hospital complaining of swelling and purpura on her arms resulting from overexposure to solar radiation. An elevated level of protoporphyrin in the red blood cells and feces was detected by thin-layer chromatography and fluorescent scanning analysis. PMID:6642040

  16. Improved Estimates Show Large Circumpolar Stocks of Permafrost Carbon While Quantifying Substantial Uncertainty Ranges and Identifying Remaining Data Gaps

    NASA Astrophysics Data System (ADS)

    Hugelius, G.; Strauss, J.; Zubrzycki, S.; Harden, J. W.; Schuur, E. A. G.; Ping, C. L.; Schirrmeister, L.; Grosse, G.; Michaelson, G. J.; Koven, C. D.; O'Donnell, J. A.; Elberling, B.; Mishra, U.; Camill, P.; Yu, Z.; Palmtag, J.; Kuhry, P.

    2014-12-01

    Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but stock estimates are poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of the permafrost SOC pool, including quantitative uncertainty estimates, in the 0-3 m depth range in soils as well as for deeper sediments (> 3 m) in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. The revised estimates are based on significantly larger databases compared to previous studies. Compared to previous studies, the number of individual sites/pedons has increased by a factor ×8-11 for 1-3 m soils, a factor ×8 for deltaic alluvium and a factor ×5 for Yedoma region deposits. A total estimated mean storage for the permafrost region of ca. 1300-1400 Pg with an uncertainty range of 1050-1650 Pg encompasses the revised estimates. Of this, ≤900 Pg is perennially frozen. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are also substantial differences in individual components. There is evidence of substantial remaining regional data-gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for >3 m depth deposits in deltas and the Yedoma region.

  17. Improved estimates show large circumpolar stocks of permafrost carbon while quantifying substantial uncertainty ranges and identifying remaining data gaps

    NASA Astrophysics Data System (ADS)

    Hugelius, G.; Strauss, J.; Zubrzycki, S.; Harden, J. W.; Schuur, E. A. G.; Ping, C. L.; Schirrmeister, L.; Grosse, G.; Michaelson, G. J.; Koven, C. D.; O'Donnell, J. A.; Elberling, B.; Mishra, U.; Camill, P.; Yu, Z.; Palmtag, J.; Kuhry, P.

    2014-03-01

    Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but stock estimates are poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of the permafrost SOC pool, including quantitative uncertainty estimates, in the 0-3 m depth range in soils as well as for deeper sediments (>3 m) in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. The revised estimates are based on significantly larger databases compared to previous studies. Compared to previous studies, the number of individual sites/pedons has increased by a factor of 8-11 for soils in the 1-3 m depth range, a factor of 8 for deltaic alluvium and a factor of 5 for Yedoma region deposits. Upscaled based on regional soil maps, estimated permafrost region SOC stocks are 217 ± 15 and 472 ± 34 Pg for the 0-0.3 m and 0-1 m soil depths, respectively (95% confidence intervals). Depending on the regional subdivision used to upscale 1-3 m soils (following physiography or continents), estimated 0-3 m SOC storage is 1034 ± 183 Pg or 1104 ± 133 Pg. Of this, 34 ± 16 Pg C is stored in thin soils of the High Arctic. Based on generalised calculations, storage of SOC in deep deltaic alluvium (>3 m to ~60 m depth) of major Arctic rivers is estimated at 91 ± 39 Pg (of which 69 ± 34 Pg is in permafrost). In the Yedoma region, estimated >3 m SOC stocks are 178 +140/-146 Pg, of which 74 +54/-57 Pg is stored in intact, frozen Yedoma (late Pleistocene ice- and organic-rich silty sediments) with the remainder in refrozen thermokarst deposits (16/84th percentiles of bootstrapped estimates). A total estimated mean storage for the permafrost region of ca. 1300-1370 Pg with an uncertainty range of 930-1690 Pg encompasses the combined revised estimates. Of this, ~819-836 Pg is perennially frozen. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are also substantial differences in individual components. There is evidence of remaining regional data-gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for >3 m depth deposits in deltas and the Yedoma region.

  18. Quantifying uncertainty sources in hydrological climate impact projections

    NASA Astrophysics Data System (ADS)

    Bosshard, T.; Kotlarski, S.; Carambia, M.; Görgen, K.; Krahe, P.; Zappa, M.; Schär, C.

    2012-04-01

    Impact modeling systems, consisting of an emission scenario, global and regional climate models, statistical post-processing methods and hydrological models, are commonly used to assess hydrological climate impacts. Uncertainties associated with the projected impacts arise from each element of the modeling chain. While propagating through the modeling chain, the uncertainties from various modeling steps might interact. Interactions mean that the uncertainty of projected climate impacts by an ensemble of, e.g., multiple hydrological models, depends on the preceding modeling steps. In order to quantify such interactions, one needs to generate an ensemble of projections that varies different elements of the impact modeling chain simultaneously. In this study, we conducted a modeling experiment in the Alpine Rhine catchment using an ensemble of 9 climate model chains (CMs) from the ENSEMBLES project (www.ensembles-eu.org), 2 statistical post-processing (PP) methods and 2 hydrological models (HMs). We address changes in the annual cycle of runoff and of different runoff quantiles for the period 2021-2050 relative to 1961-1990. Based on this database of 36 different modeling chains, we tried to answer two questions: (1) how large is the total uncertainty of the projections, and (2) how much do the three modeling chain elements (CMs, PP methods, HMs) and interactions between them contribute to the total uncertainty estimated in (1)? The results show that most of the projections agree on an increase of runoff in winter (+15.6 [range +5.5 to +40.7] %) and a decrease in summer (-13.8 [range -26.0 to +3.9] %). However, there is large uncertainty in the magnitude of the changes. We used an ANalysis Of VAriance (ANOVA) model to quantify the contributions of various uncertainty sources to the total uncertainty of the ensemble. We found that CMs are the most important source of uncertainty for changes in the annual cycle of runoff during most parts of the year, and over a large quantile range. We also found that interactions might be as important as CMs during winter and spring and for extreme runoff quantiles. This indicates that it is crucial to vary multiple impact modeling chain elements simultaneously in order to assess the full uncertainty of hydrological climate impacts. Concerning the design of future impact studies, our results indicate that one should invest more into having a balanced sampling of all possible uncertainty sources rather than increase the sample size of just one particular source. Furthermore, the employed ANOVA model for the decomposition of the total uncertainty is flexible and could be adapted to modeling experiments that include other uncertainty sources such as emission scenarios or land-use changes.
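
    As a rough illustration of the variance decomposition described above, the sketch below partitions the spread of a balanced CM x PP x HM ensemble into main effects and a lumped interaction/residual term. It is a minimal stand-in, not the authors' ANOVA model, and the 9 x 2 x 2 array of projected runoff changes is purely hypothetical.

```python
# Minimal sketch of a fixed-effects ANOVA decomposition for a balanced ensemble of
# projected changes, indexed as climate model chain (CM) x post-processing (PP) x
# hydrological model (HM). The input numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
change = rng.normal(loc=15.0, scale=8.0, size=(9, 2, 2))   # % runoff change, CM x PP x HM

grand = change.mean()
cm_eff = change.mean(axis=(1, 2)) - grand     # main effect of each climate model chain
pp_eff = change.mean(axis=(0, 2)) - grand     # main effect of each post-processing method
hm_eff = change.mean(axis=(0, 1)) - grand     # main effect of each hydrological model

var_cm = np.mean(cm_eff**2)
var_pp = np.mean(pp_eff**2)
var_hm = np.mean(hm_eff**2)
var_tot = np.mean((change - grand)**2)
var_inter = var_tot - var_cm - var_pp - var_hm   # all interactions + residual
                                                 # (non-negative in a balanced design)

for name, v in [("CM", var_cm), ("PP", var_pp), ("HM", var_hm), ("interactions", var_inter)]:
    print(f"{name:12s}: {100 * v / var_tot:5.1f} % of total variance")
```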

  19. Quantifying solute transport processes: are chemically "conservative" tracers electrically conservative?

    USGS Publications Warehouse

    Singha, Kamini; Li, Li; Day-Lewis, Frederick D.; Regberg, Aaron B.

    2012-01-01

    The concept of a nonreactive or conservative tracer, commonly invoked in investigations of solute transport, requires additional study in the context of electrical geophysical monitoring. Tracers that are commonly considered conservative may undergo reactive processes, such as ion exchange, thus changing the aqueous composition of the system. As a result, the measured electrical conductivity may reflect not only solute transport but also reactive processes. We have evaluated the impacts of ion exchange reactions, rate-limited mass transfer, and surface conduction on quantifying tracer mass, mean arrival time, and temporal variance in laboratory-scale column experiments. Numerical examples showed that (1) ion exchange can lead to resistivity-estimated tracer mass, velocity, and dispersivity that may be inaccurate; (2) mass transfer leads to an overestimate in the mobile tracer mass and an underestimate in velocity when using electrical methods; and (3) surface conductance does not notably affect estimated moments when high-concentration tracers are used, although this phenomenon may be important at low concentrations or in sediments with high and/or spatially variable cation-exchange capacity. In all cases, colocated groundwater concentration measurements are of high importance for interpreting geophysical data with respect to the controlling transport processes of interest.
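
    For readers unfamiliar with the moments being compared, the hedged sketch below computes the zeroth temporal moment (a tracer-mass proxy), the mean arrival time and the temporal variance from a single breakthrough curve; the Gaussian-shaped concentration history is invented for illustration only.

```python
# Temporal moments of a hypothetical tracer breakthrough curve: mass proxy,
# mean arrival time and temporal variance.
import numpy as np

t = np.linspace(0, 48, 500)                        # hours, hypothetical sampling times
c = np.exp(-(t - 12.0)**2 / (2 * 3.0**2))          # hypothetical concentration history

m0 = np.trapz(c, t)                                # zeroth moment ~ tracer mass
mean_arrival = np.trapz(t * c, t) / m0             # first normalized moment
variance = np.trapz((t - mean_arrival)**2 * c, t) / m0   # second central moment

print(f"mass proxy    : {m0:.2f}")
print(f"mean arrival  : {mean_arrival:.2f} h")
print(f"temporal var. : {variance:.2f} h^2")
```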

  20. Convective activity quantified by sub-gridscale fluxes

    NASA Astrophysics Data System (ADS)

    Hantel, M.; Hamelbeck, F.

    A dynamic quantity to measure the actual strength of convection is the sub-gridscale transport of the equivalent temperature T_e = T + c_p^-1 L q; we refer to the corresponding correlation overline(ω' T_e'), with ω = dp/dt, as the convective flux. Vertical profiles of the convective flux within atmospheric columns are computed with a diagnostic model (DIAMOD) from the observed gridscale budgets by using analysed fields of ECMWF. Their high quality makes DIAMOD sufficiently accurate despite the strong internal compensation in the gridscale budget terms. The boundary value for the vertical integration is the latent plus sensible heat flux across the Earth's surface. We show that the maximum convective flux in a column is proportional to the mean vertical slope of the gridscale budget averaged over the troposphere. Results for 144 columns (100 km/12 h each) over Europe for a case of deep convection south of the Alps in September 1995 (the South Ticino case) are presented. There is areal precipitation of up to 45 mm/12 h. The areal convective flux exceeds 1000 W/m2 around 600 hPa in some columns. Maxima of precipitation and convective flux do not exactly coincide. This is not inconsistent with the notion that the convective flux (estimated with DIAMOD or an equivalent approach) is the proper dynamic measure to quantify the convective process.
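
    A minimal sketch of the quantity defined above: the sub-gridscale covariance of ω = dp/dt with the equivalent temperature T_e = T + L q / c_p inside one grid column. All sample values are synthetic, and the final conversion to W/m2 uses the conventional -(c_p/g) factor for eddy fluxes in pressure coordinates rather than anything specific to DIAMOD.

```python
# Toy computation of the convective flux: covariance of omega' with T_e' at one level
# of one grid column. Synthetic sub-grid samples are constructed so that rising air
# (omega < 0) is slightly warmer and moister, as in active convection.
import numpy as np

L = 2.5e6        # J/kg, latent heat of vaporization
cp = 1004.0      # J/(kg K), specific heat at constant pressure
g = 9.81         # m/s^2
n = 10_000

rng = np.random.default_rng(1)
omega = rng.normal(0.0, 0.5, size=n)                     # Pa/s, sub-grid vertical motion
T = 280.0 - 2.0 * omega + rng.normal(0.0, 0.5, size=n)   # K, warm anomalies in updrafts
q = 8e-3 - 2e-3 * omega + rng.normal(0.0, 5e-4, size=n)  # kg/kg, moist anomalies in updrafts

Te = T + (L / cp) * q                                    # equivalent temperature
flux = np.mean((omega - omega.mean()) * (Te - Te.mean()))   # overline(omega' Te'), Pa K/s
flux_Wm2 = -(cp / g) * flux    # conventional conversion to an upward energy flux, W/m^2
print(f"convective flux ~ {flux_Wm2:.0f} W/m^2")
```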

  1. Quantifying Kink Mode Dissipation Using Radial Eigenmode Measurements*

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Navratil, G. A.; Maurer, D. A.; Mauel, M. E.; Pedersen, T. S.

    2006-10-01

    Understanding the magnitude and source of plasma dissipation that governs resistive wall mode rotational stabilization is crucial for the extrapolation of current experimental results to future burning plasma regimes of operation. To date, methods to determine the magnitude of dissipation affecting kink mode dynamics have been through the measurement of the complex damping rate of the mode using MHD spectroscopic techniques [1,2], or by detailed profile measurements of momentum loss as the kink mode evolves in time [3]. Here we present an alternate method to quantify the magnitude of dissipation using measurements of the poloidal magnetic field fluctuations associated with the kink's radial eigenfunction. A twenty element, high spatial resolution Hall sensor array was used to measure the kink mode perturbed poloidal fields. Comparison of the relative phase shift of these fluctuations as a function of minor radius with calculations of the expected structure of the kink-RWM eigensystem shows a sensitive dependence upon the magnitude of dissipation, allowing its quantitative characterization. Estimates of the magnitude of dissipation using these phase shift measurements are in good agreement with previous MHD spectroscopy measurements [1]. *Supported by U.S. DOE Grant DE-FG02-86ER53222 1. M.E. Mauel, et al., Nucl. Fusion, 45, 285 (2005) 2. H. Reimerdes, et al., PRL, 93, 135002 (2004) 3. W. Zhu, et al., PRL, 96, 225002 (2006)

  2. Quantifying singlet fission in novel organic materials using nonlinear optics

    NASA Astrophysics Data System (ADS)

    Busby, Erik; Xia, Jianlong; Yaffe, Omer; Kumar, Bharat; Berkelbach, Timothy; Wu, Qin; Miller, John; Nuckolls, Colin; Zhu, Xiaoyang; Reichman, David; Campos, Luis; Sfeir, Matthew Y.

    2014-10-01

    Singlet fission is a form of multiple exciton generation in which two triplet excitons are produced from the decay of a photoexcited singlet exciton. In a small number of organic materials, most notably pentacene, this conversion process has been shown to occur with unity quantum yield on sub-ps timescales. However, a poorly understood mechanism for fission along with strict energy and geometry requirements have so far limited the observation of this process to a few classes of organic materials, with only a subset of these (most notably the polyacenes) showing both efficient fission and long-lived triplets. Here, we utilize novel organic materials to investigate how the efficiency of the fission process depends on the coupling and the energetic driving force between chromophores in both intra- and intermolecular singlet fission materials. We demonstrate how the triplet yield can be accurately quantified using a combination of traditional transient spectroscopies and recently developed excited state saturable absorption techniques. These results allow us to gain mechanistic insight into the fission process and suggest general strategies for generating new materials that can undergo efficient fission.

  3. Statistical physics approach to quantifying differences in myelinated nerve fibers

    NASA Astrophysics Data System (ADS)

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-03-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.
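
    As a toy illustration of the first of the two winning features, the sketch below paints hypothetical circular axon cross-sections into a binary mask and reports the fraction of occupied axon area; the paper's "effective local density" is a modified density measure and is not reproduced here.

```python
# Fraction of image area occupied by axons, computed on a synthetic binary mask.
import numpy as np

rng = np.random.default_rng(3)
mask = np.zeros((512, 512), dtype=bool)
ys, xs = rng.integers(20, 492, size=(2, 150))        # hypothetical axon centres
yy, xx = np.mgrid[0:512, 0:512]
for y, x in zip(ys, xs):
    mask |= (yy - y)**2 + (xx - x)**2 <= 6**2        # paint circular axon cross-sections

occupied_fraction = mask.mean()                      # fraction of occupied axon area
print(f"fraction of occupied axon area: {occupied_fraction:.3f}")
```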

  4. Statistical physics approach to quantifying differences in myelinated nerve fibers.

    PubMed

    Comin, César H; Santos, João R; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L; Gabrielli, Andrea; Costa, Luciano da F; Stanley, H Eugene

    2014-01-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146

  5. Statistical physics approach to quantifying differences in myelinated nerve fibers

    PubMed Central

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-01-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146

  6. ShowMe3D

    SciTech Connect

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  7. "Show me" bioethics and politics.

    PubMed

    Christopher, Myra J

    2007-10-01

    Missouri, the "Show Me State," has become the epicenter of several important national public policy debates, including abortion rights, the right to choose and refuse medical treatment, and, most recently, early stem cell research. In this environment, the Center for Practical Bioethics (formerly, Midwest Bioethics Center) emerged and grew. The Center's role in these "cultural wars" is not to advocate for a particular position but to provide well researched and objective information, perspective, and advocacy for the ethical justification of policy positions; and to serve as a neutral convener and provider of a public forum for discussion. In this article, the Center's work on early stem cell research is a case study through which to argue that not only the Center, but also the field of bioethics has a critical role in the politics of public health policy. PMID:17926217

  8. ShowMe3D

    Energy Science and Technology Software Center (ESTSC)

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  9. Phoenix Scoop Inverted Showing Rasp

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on Sol 49, or the 49th Martian day of the mission (July 14, 2008), shows the silver colored rasp protruding from NASA's Phoenix Mars Lander's Robotic Arm scoop. The scoop is inverted and the rasp is pointing up.

    Shown with its forks pointing toward the ground is the thermal and electrical conductivity probe, at the lower right. The Robotic Arm Camera is pointed toward the ground.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  10. Quantifying Item Dependency by Fisher's Z.

    ERIC Educational Resources Information Center

    Shen, Linjun

    Three aspects of the usual approach to assessing local item dependency, Yen's "Q" (H. Huynh, H. Michaels, and S. Ferrara, 1995), deserve further investigation. Pearson correlation coefficients do not distribute normally when the coefficients are large, and thus cannot quantify the dependency well. In the second place, the accuracy of item response…

  11. Quantifying the Thermal Fatigue of CPV Modules

    SciTech Connect

    Bosco, N.; Kurtz, S.

    2011-02-01

    A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger (ΔT) thermal cycles experienced for a location. High frequency data (<1/min) may be required to most accurately employ this method.
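
    The abstract does not give the damage model, so the sketch below uses a generic Coffin-Manson relation with Miner's rule as a hedged stand-in: each thermal cycle of amplitude ΔT contributes 1/N_f(ΔT) to the accumulated die-attach damage. Both the cycle amplitudes and the material constants are hypothetical.

```python
# Accumulated thermal-fatigue damage from a series of temperature cycles, using a
# Coffin-Manson style relation and Miner's rule. All numbers are illustrative.
import numpy as np

delta_T = np.array([22.0, 35.0, 41.0, 18.0, 50.0])   # K, hypothetical cycle amplitudes
A, m = 1.0e6, 2.5                                     # hypothetical material constants

cycles_to_failure = A * delta_T**(-m)                 # Coffin-Manson: N_f = A * dT^-m
damage = np.sum(1.0 / cycles_to_failure)              # Miner's rule: sum of 1/N_f
print(f"accumulated damage fraction: {damage:.2e}")
```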

  12. Quantifying cellular alignment on anisotropic biomaterial platforms

    PubMed Central

    Nectow, Alexander R.; Kilmer, Misha E.; Kaplan, David L.

    2014-01-01

    How do we quantify cellular alignment? Cellular alignment is an important technique used to study and promote tissue regeneration in vitro and in vivo. Indeed, regenerative outcomes are often strongly correlated with the efficacy of alignment, making quantitative, automated assessment an important goal for the field of tissue engineering. There currently exist various classes of algorithms, which effectively address the problem of quantifying individual cellular alignments using Fourier methods, kernel methods, and elliptical approximation; however, these algorithms often yield population distributions and are limited by their inability to yield a scalar metric quantifying the efficacy of alignment. The current work builds on these classes of algorithms by adapting the signal processing methods previously used by our group to study the alignment of cellular processes. We use an automated, ellipse-fitting algorithm to approximate cell body alignment with respect to a silk biomaterial scaffold, followed by the application of the normalized cumulative periodogram criterion to produce a scalar value quantifying alignment. The proposed work offers a generalized method for assessing cellular alignment in complex, two-dimensional environments. This method may also offer a novel alternative for assessing the alignment of cell types with polarity, such as fibroblasts, endothelial cells, and mesenchymal stem cells, as well as nuclei. PMID:23520051

  13. Cross-Linguistic Relations between Quantifiers and Numerals in Language Acquisition: Evidence from Japanese

    ERIC Educational Resources Information Center

    Barner, David; Libenson, Amanda; Cheung, Pierina; Takasaki, Mayu

    2009-01-01

    A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children's comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no

  14. Quantifying population structure on short timescales.

    PubMed

    Raeymaekers, Joost A M; Lens, Luc; Van den Broeck, Frederik; Van Dongen, Stefan; Volckaert, Filip A M

    2012-07-01

    Quantifying the contribution of the various processes that influence population genetic structure is important, but difficult. One of the reasons is that no single measure appropriately quantifies all aspects of genetic structure. An increasing number of studies is analysing population structure using the statistic D, which measures genetic differentiation, next to G(ST), which quantifies the standardized variance in allele frequencies among populations. Few studies have evaluated which statistic is most appropriate in particular situations. In this study, we evaluated which index is more suitable in quantifying postglacial divergence between three-spined stickleback (Gasterosteus aculeatus) populations from Western Europe. Population structure on this short timescale (10,000 generations) is probably shaped by colonization history, followed by migration and drift. Using microsatellite markers and anticipating that D and G(ST) might have different capacities to reveal these processes, we evaluated population structure at two levels: (i) between lowland and upland populations, aiming to infer historical processes; and (ii) among upland populations, aiming to quantify contemporary processes. In the first case, only D revealed clear clusters of populations, putatively indicative of population ancestry. In the second case, only G(ST) was indicative for the balance between migration and drift. Simulations of colonization and subsequent divergence in a hierarchical stepping stone model confirmed this discrepancy, which becomes particularly strong for markers with moderate to high mutation rates. We conclude that on short timescales, and across strong clines in population size and connectivity, D is useful to infer colonization history, whereas G(ST) is sensitive to more recent demographic events. PMID:22646231
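
    For one hypothetical microsatellite locus and two subpopulations, the sketch below computes the two statistics being contrasted, G_ST and Jost's D, from allele frequencies; the frequencies are invented and the sample-size corrections used in practice are omitted.

```python
# G_ST and Jost's D from allele frequencies at a single locus.
import numpy as np

# rows = subpopulations, columns = alleles (frequencies sum to 1 per row)
p = np.array([[0.70, 0.20, 0.10],
              [0.30, 0.50, 0.20]])
n = p.shape[0]                                   # number of subpopulations

Hs = np.mean(1.0 - np.sum(p**2, axis=1))         # mean within-subpopulation heterozygosity
p_bar = p.mean(axis=0)
Ht = 1.0 - np.sum(p_bar**2)                      # total heterozygosity

Gst = (Ht - Hs) / Ht
D = ((Ht - Hs) / (1.0 - Hs)) * (n / (n - 1.0))   # Jost's D
print(f"G_ST = {Gst:.3f}, Jost's D = {D:.3f}")
```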

  15. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  16. Entropy generation method to quantify thermal comfort

    NASA Technical Reports Server (NTRS)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

    This paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is being utilized as a "Computational Environmental Chamber" to conduct a series of simulations to examine the human thermal responses to different environmental conditions. The output from the simulation, which includes the human thermal responses and the input environmental conditions, is fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression based Predicted Mean Vote (PMV) values. The PMV values are generated by inserting the air temperatures and vapor pressures used in the computer simulation into the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One of the practical applications of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.
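
    A highly simplified, hedged illustration of the underlying idea (not the paper's OTCI formulation): heat leaving the body at skin temperature and being absorbed by a cooler environment generates entropy at a rate that grows as the two temperatures diverge.

```python
# Entropy generation rate for heat flowing from the body to the environment.
# The heat rate and temperatures are hypothetical.
def entropy_generation_rate(q_watts, t_skin_k, t_env_k):
    """Entropy generation rate (W/K) for heat q flowing from skin to environment."""
    return q_watts * (1.0 / t_env_k - 1.0 / t_skin_k)

for t_env_c in (18.0, 24.0, 30.0):
    s_gen = entropy_generation_rate(q_watts=100.0, t_skin_k=307.0,
                                    t_env_k=273.15 + t_env_c)
    print(f"T_env = {t_env_c:4.1f} C -> S_gen ~ {s_gen * 1000:.1f} mW/K")
```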

  17. Quantifiable diagnosis of muscular dystrophies and neurogenic atrophies through network analysis

    PubMed Central

    2013-01-01

    Background The diagnosis of neuromuscular diseases is strongly based on the histological characterization of muscle biopsies. However, this morphological analysis is mostly a subjective process and difficult to quantify. We have tested if network science can provide a novel framework to extract useful information from muscle biopsies, developing a novel method that analyzes muscle samples in an objective, automated, fast and precise manner. Methods Our database consisted of 102 muscle biopsy images from 70 individuals (including controls, patients with neurogenic atrophies and patients with muscular dystrophies). We used this to develop a new method, Neuromuscular DIseases Computerized Image Analysis (NDICIA), that uses network science analysis to capture the defining signature of muscle biopsy images. NDICIA characterizes muscle tissues by representing each image as a network, with fibers serving as nodes and fiber contacts as links. Results After a training phase with control and pathological biopsies, NDICIA was able to quantify the degree of pathology of each sample. We validated our method by comparing NDICIA quantification of the severity of muscular dystrophies with a pathologist's evaluation of the degree of pathology, resulting in a strong correlation (R = 0.900, P < 0.00001). Importantly, our approach can be used to quantify new images without the need for prior training. Therefore, we show that network science analysis captures the useful information contained in muscle biopsies, helping the diagnosis of muscular dystrophies and neurogenic atrophies. Conclusions Our novel network analysis approach will serve as a valuable tool for assessing the etiology of muscular dystrophies or neurogenic atrophies, and has the potential to quantify treatment outcomes in preclinical and clinical trials. PMID:23514382
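
    The sketch below is a toy stand-in for the representation step, not NDICIA itself: fibers become nodes, touching fibers become links, and a few simple graph features are extracted. The contact list is hypothetical and the networkx library is assumed to be available.

```python
# Represent a segmented biopsy image as a network (fibers = nodes, contacts = links)
# and compute simple graph features of the kind a classifier could use.
import networkx as nx

# hypothetical adjacency: pairs of fiber IDs whose boundaries are in contact
contacts = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (2, 5)]
G = nx.Graph(contacts)

features = {
    "n_fibers": G.number_of_nodes(),
    "mean_contacts_per_fiber": sum(dict(G.degree()).values()) / G.number_of_nodes(),
    "clustering": nx.average_clustering(G),
}
print(features)
```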

  18. DOE: Quantifying the Value of Hydropower in the Electric Grid

    SciTech Connect

    2012-12-31

    The report summarizes research to Quantify the Value of Hydropower in the Electric Grid. This 3-year DOE study focused on defining the value of hydropower assets in a changing electric grid. Methods are described for valuation and planning of pumped storage and conventional hydropower. The project team conducted plant case studies, electric system modeling, market analysis, cost data gathering, and evaluations of operating strategies and constraints. Five other reports detailing these research results are available at the project website, www.epri.com/hydrogrid. With increasing deployment of wind and solar renewable generation, many owners, operators, and developers of hydropower have recognized the opportunity to provide more flexibility and ancillary services to the electric grid. To quantify the value of services, this study focused on the Western Electric Coordinating Council region. A security-constrained, unit commitment and economic dispatch model was used to quantify the role of hydropower for several future energy scenarios up to 2020. This hourly production simulation considered transmission requirements to deliver energy, including future expansion plans. Both energy and ancillary service values were considered. Addressing specifically the quantification of pumped storage value, no single value stream dominated predicted plant contributions in various energy futures. Modeling confirmed that service value depends greatly on location and on competition with other available grid support resources. In this summary, ten different value streams related to hydropower are described. These fell into three categories: operational improvements, new technologies, and electricity market opportunities. Of these ten, the study was able to quantify a monetary value in six by applying both present day and future scenarios for operating the electric grid. This study confirmed that hydropower resources across the United States contribute significantly to operation of the grid in terms of energy, capacity, and ancillary services. Many potential improvements to existing hydropower plants were found to be cost-effective. Pumped storage is the most likely form of large new hydro asset expansions in the U.S.; however, justifying investments in new pumped storage plants remains very challenging with current electricity market economics. Even over a wide range of possible energy futures, up to 2020, no energy future was found to bring quantifiable revenues sufficient to cover estimated costs of plant construction. Value streams not quantified in this study may provide a different cost-benefit balance and an economic tipping point for hydro. Future studies are essential in the quest to quantify the full potential value. Additional research should consider the value of services provided by advanced storage hydropower and pumped storage at smaller time steps for integration of variable renewable resources, and should include all possible value streams such as capacity value and portfolio benefits, i.e., reducing cycling on traditional generation.

  19. Quantifying Bedrock Fracture Densities and their Influence on Hillslope Stability

    NASA Astrophysics Data System (ADS)

    Burbank, D. W.; Clarke, B. A.

    2010-12-01

    Bedrock fractures and rock-mass strength play a primary role in governing landscape morphology and the efficiency of surface processes. Quantifying bedrock characteristics at hillslope scales, however, has proven difficult. Here, we present a new field-based method for quantifying bedrock fracture densities within the shallow subsurface based on seismic refraction surveys. We then relate the density and depth of bedrock fractures to the stability of threshold slopes and the magnitude of bedrock landslides. We argue that tectonic forces produce uniform and pervasive bedrock fracturing with depth, whereas geomorphic processes produce strong fracture gradients focused within the shallow subsurface. Additionally, we argue that hillslope strength and stability are functions of both the intact rock strength and the density of bedrock fractures, such that for a given intact rock strength, a threshold fracture-density exists that distinguishes between stable and unstable rock masses at a given slope angle. We examine variations in subsurface fracture patterns in both Fiordland and the western Southern Alps of New Zealand in order to improve constraints on the influence of bedrock properties in governing the rates and patterns of bedrock landslides and the morphology of threshold landscapes. Our shallow seismic analysis reveals that, in the western Southern Alps, tectonic forces have pervasively fractured intrinsically weak rock to the verge of instability, such that the entire rock mass is susceptible to failure and landslides can potentially extend to great depths. In Fiordland, conversely, tectonic fracturing of the strong intact rock has produced fracture densities less than the regional stability threshold, such that bedrock from depth is relatively stable and less likely to fail by landsliding. Therefore, in Fiordland, bedrock failure generally occurs only when geomorphic fracturing further reduces the rock-mass strength by increasing the fracture density within the shallow subsurface. This dependence on geomorphic fracturing results in bedrock landslides that are generally limited to depths within this geomorphically weakened zone. Despite the differences in lithology and the depths of dense bedrock fracturing, hillslopes within both these regions display strikingly similar threshold gradients of ~32°, suggesting that they are characterized by equivalent hillslope-scale surface strengths. We show that the equivalent surface strengths and threshold hillslope gradients are achieved through differential surface fracturing of the contrasting rock types. Overall, the magnitude of bedrock landslides is strongly influenced by the depth of intense, destabilizing fractures produced by either pervasive tectonic processes or near-surface geomorphic processes, whereas threshold surface gradients are modulated by the strength of fractured rock at Earth's surface.

  20. Quantifying anisotropy in stratified and rotating turbulent flows

    NASA Astrophysics Data System (ADS)

    Liechtenstein, Lukas; Schneider, Kai; Godeferd, Fabien; Farge, Marie; Cambon, Claude

    2006-11-01

    We study freely decaying homogeneous anisotropic turbulent flows, submitted to either rotation or stratification, similar to those encountered in geophysical flows. We solve the three-dimensional Navier-Stokes equations with Boussinesq hypothesis by direct numerical simulation, using a pseudo-spectral method at resolution 512^3. We propose new diagnostics to characterize and quantify the anisotropy of these flows, which are based on three-dimensional orthogonal vector-valued wavelet decomposition. We thus show the energy distribution in terms of both scale and direction for each component of the velocity vector and quantify the flow anisotropy. We also apply the coherent vortex extraction algorithm, based on the nonlinear filtering of the wavelet coefficients of the vorticity field, to different anisotropic flows, yielding a strong data compression.

  1. Quantifying mean velocity and turbulence in experimental flash flood bores

    NASA Astrophysics Data System (ADS)

    Polito, P. J.; Johnson, J. P.

    2010-12-01

    Flash flooding is arguably the dominant hydrological phenomenon over one third of Earth's subaerial surface. While flash floods can result from events such as dam breaks and glacial outburst events, the dominant source is convective precipitation. In arid to sub-humid regions where these flows dominate in-channel flow, receiving more than half of the mean annual precipitation in a single event is not uncommon. Flash flood hydrographs can be characterized by three components: a flood bore, a narrow rising limb, and a drawn-out falling limb. Previous work shows that suspended sediment concentrations are highest in flash flood bores, rather than scaling monotonically with basal shear stress. As a step in understanding how flash flood sediment transport depends on flow hydraulics, we present results from preliminary flume experiments designed to quantify flow velocities and turbulence within experimental flood bores. Using a large outdoor flume that is 37.8 m long, 1.54 m wide, and 0.8 m deep, we are able to produce reproducible flood hydrographs analogous to flash floods with flood bores. We measure instantaneous velocities with acoustic Doppler velocimeters (ADVs) and an acoustic Doppler profiler. We can produce bore heights up to 20 cm with peak bore velocities up to 1.5 m/s. Preliminary results confirm published correlations between bore height and bore velocity. We find that bore velocities over a wetted concrete bed are up to 15% higher than over a dry bed. The next phase of our experiments will focus on the downstream evolution of turbulence within individual flood bores. We will ultimately examine how increased suspended sediment concentrations affect turbulence within the bore and the bore's ability to mobilize bedload. Our findings will be used to calibrate sediment transport models to more accurately predict bedload and suspended sediment flux in flash flood-dominated channels.
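
    The published bore height-velocity correlations referred to above are, to leading order, the classical shallow-water bore relation; the hedged sketch below evaluates that relation for a bore of depth h2 advancing into still water of depth h1 over a wetted bed, with both depths chosen arbitrarily rather than taken from the flume runs.

```python
# Stoker bore celerity for a bore advancing into still water: c = sqrt(g*h2*(h1+h2)/(2*h1)).
# Depths are hypothetical; the relation does not apply to a dry bed (h1 = 0).
import numpy as np

g = 9.81                                   # m/s^2
h1 = 0.05                                  # m, assumed wetted-bed depth ahead of the bore
h2 = np.array([0.05, 0.10, 0.15, 0.20])    # m, bore depths up to the ~20 cm produced

c = np.sqrt(g * h2 * (h1 + h2) / (2.0 * h1))
for depth, speed in zip(h2, c):
    print(f"h2 = {depth:.2f} m -> c ~ {speed:.2f} m/s")
```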

  2. Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation

    PubMed Central

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types. PMID:20640044

  3. Quantifying thermodynamics of collagen thermal denaturation by second harmonic generation imaging

    NASA Astrophysics Data System (ADS)

    Hovhannisyan, Vladimir A.; Su, Ping-Jung; Lin, Sung-Jan; Dong, Chen-Yuan

    2009-06-01

    Time-lapse second harmonic generation (SHG) microscopy was applied for the extraction of thermodynamic parameters of collagen thermal denaturation. We found that at sufficiently high temperatures, temporal dependence of SHG intensity from the isothermal treatment of chicken dermal collagen was single exponential and can be modeled by the Arrhenius equation. Activation energy and the frequency factor of chicken dermal collagen thermal denaturation were determined using temporal decays of SHG intensity at different temperatures. Our results show that time-lapse, high temperature SHG imaging can be used to quantify kinetic properties of collagen thermal denaturation within a microscopic volume of 1 nl.
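
    A minimal sketch of the parameter extraction described above: given single-exponential decay rates of the SHG signal at several temperatures, a linear fit of ln k against 1/T yields the activation energy and frequency factor via the Arrhenius equation. The rate constants below are hypothetical, not the paper's data.

```python
# Arrhenius analysis: k = A * exp(-Ea / (R T)), so ln k = ln A - (Ea/R) * (1/T).
import numpy as np

R = 8.314                                       # J/(mol K)
T = np.array([330.0, 335.0, 340.0, 345.0])      # K, hypothetical isothermal treatments
k = np.array([2.1e-4, 5.8e-4, 1.5e-3, 3.9e-3])  # 1/s, hypothetical SHG decay rates

slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)   # linear fit of ln k vs 1/T
Ea = -slope * R                                         # activation energy
A = np.exp(intercept)                                   # frequency factor
print(f"Ea ~ {Ea / 1e3:.0f} kJ/mol, A ~ {A:.2e} 1/s")
```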

  4. Development and application of a method for quantifying factors affecting chloramine decay in service reservoirs.

    PubMed

    Sathasivan, Arumugam; Krishna, K C Bal; Fisher, Ian

    2010-08-01

    Service reservoirs play an important role in maintaining water quality in distribution systems. Several factors affect the reservoir water quality, including bulk water reactions, stratification, sediment accumulation and wall reactions. It is generally thought that biofilm and sediments can harbour microorganisms, especially in chloraminated reservoirs, but their impact on disinfectant loss has not been quantified. Hence, debate exists as to the extent of the problem. To quantify the impact, the reservoir acceleration factor (F(Ra)) is defined. This factor represents the acceleration of chloramine decay arising from all causes, including changes in retention time, assuming that the reservoir is completely mixed. Such an approach quantifies the impact of factors, other than chemical reactions, in the bulk water. Data from three full-scale chloraminated service reservoirs in distribution systems of Sydney, Australia, were analysed to demonstrate the generality of the method. Results showed that in two large service reservoirs (404 x 10(3) m(3) and 82 x 10(3) m(3)) there was minimal impact from biofilm/sediment. However, in a small reservoir (3 x 10(3) m(3)), the biofilm/sediment had significant impact. In both small and large reservoirs, the effect of stratification was significant. PMID:20621323

  5. Quantifying uncertainties in geothermal energy exploration

    NASA Astrophysics Data System (ADS)

    Vogt, C.; Mottaghy, D.; Rath, V.; Wolf, A.; Pechnig, R.; Clauser, C.

    2009-04-01

    An increased use of geothermal energy requires reduction of cost and risk. Information on rock properties in the subsurface is essential for planning of projects for geothermal energy use. Based on a stochastic approach, the uncertainties in the rock properties at a given location, and in a target parameter such as temperature or flow rate, are quantified. This way, not only average values and error estimates of the target parameter can be obtained, but also its spatial distribution. Based on this information, the risk within a geothermal project can be estimated better. As a result, cost may be reduced or estimated with less uncertainty. The approach employed is based on the algorithm of "Sequential Gaussian Simulation" (sgsim): First, the geometry of a geothermal reservoir model is discretized on some grid. Then the algorithm follows a random path through the model, and each grid node is assigned certain values for the required rock properties. These values take into account (a) assumed property distributions; (b) the correlation length; (c) primary data, such as borehole measurements; (d) secondary data, such as seismic data. A first realization is finished when the entire model is initialized. In order to obtain a distribution for the target parameter, more realizations need to be created by following other random paths. Each of these realizations is equally likely with respect to the real situation, which is defined by the measured data. Sgsim is implemented as a module of the in-house mass and heat flow simulator shemat_suite. This way, the generated realizations are directly used as input for mass and heat flow simulations. Thus, no time and effort is wasted on format conversion. As a demonstration of this method, an exploration scenario is simulated for a projected geothermal district heat use in The Hague, Netherlands. Multiple realizations are generated using the sgsim algorithm for the distribution of thermal conductivity within the geothermal reservoir at depth. Applying the mass and heat flow simulator shemat_suite, the error with respect to predicted temperature at the target location for a production well is reduced from about 25 % to 4 %.
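
    The sketch below conveys the stochastic idea in a deliberately simplified form (a Cholesky-based Gaussian field on a 1-D profile rather than the sequential Gaussian simulation and shemat_suite forward model): many equally likely thermal-conductivity realizations are drawn, a trivial 1-D conductive forward model is run for each, and the spread of predicted temperature at the target depth is reported. All parameter values are assumptions.

```python
# Simplified stochastic workflow: correlated conductivity realizations -> forward model
# -> distribution of the target parameter (temperature at depth).
import numpy as np

rng = np.random.default_rng(42)
z = np.linspace(0.0, 2000.0, 201)             # m, depth grid
dz = z[1] - z[0]
corr_len = 300.0                               # m, assumed correlation length
cov = np.exp(-np.abs(z[:, None] - z[None, :]) / corr_len)   # exponential covariance
Lchol = np.linalg.cholesky(cov + 1e-10 * np.eye(z.size))

q = 0.07                                       # W/m^2, assumed basal heat flow
T_surface = 10.0                               # deg C, assumed surface temperature
temps_at_target = []
for _ in range(200):                           # 200 equally likely realizations
    log_lam = np.log(2.5) + 0.2 * (Lchol @ rng.standard_normal(z.size))
    lam = np.exp(log_lam)                      # W/(m K), lognormal conductivity field
    T = T_surface + np.cumsum(q / lam) * dz    # Fourier's law integrated downward
    temps_at_target.append(T[-1])

temps = np.asarray(temps_at_target)
print(f"T(2 km): {temps.mean():.1f} +/- {temps.std():.1f} deg C")
```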

  6. Quantifying Stock Return Distributions in Financial Markets

    PubMed Central

    Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
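
    As a hedged illustration of the tail analysis described above, the sketch below builds a synthetic heavy-tailed price series, forms logarithmic returns at one time scale and estimates the tail exponent with a Hill estimator; the Dow Jones dataset itself is not reproduced and the synthetic parameters are arbitrary.

```python
# Log returns at a given time scale and a Hill estimate of the tail exponent,
# computed on a synthetic heavy-tailed price series.
import numpy as np

rng = np.random.default_rng(7)
increments = rng.standard_t(df=3, size=200_000) * 1e-4     # synthetic log-price increments
prices = 100.0 * np.exp(np.cumsum(increments))

dt = 300                                          # time scale in "seconds" (samples)
log_ret = np.diff(np.log(prices[::dt]))           # log returns at scale dt

abs_ret = np.sort(np.abs(log_ret))[::-1]          # order statistics, largest first
k = max(10, int(0.05 * abs_ret.size))             # use the top 5% as the tail
hill_alpha = 1.0 / np.mean(np.log(abs_ret[:k] / abs_ret[k]))
print(f"estimated tail exponent at dt = {dt}: {hill_alpha:.2f}")
```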

  7. Quantifying Stock Return Distributions in Financial Markets.

    PubMed

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593

  8. Quantifying spatial correlations of general quantum dynamics

    NASA Astrophysics Data System (ADS)

    Rivas, Ángel; Müller, Markus

    2015-06-01

    Understanding the role of correlations in quantum systems is both a fundamental challenge as well as of high practical relevance for the control of multi-particle quantum systems. Whereas a lot of research has been devoted to study the various types of correlations that can be present in the states of quantum systems, in this work we introduce a general and rigorous method to quantify the amount of correlations in the dynamics of quantum systems. Using a resource-theoretical approach, we introduce a suitable quantifier and characterize the properties of correlated dynamics. Furthermore, we benchmark our method by applying it to the paradigmatic case of two atoms weakly coupled to the electromagnetic radiation field, and illustrate its potential use to detect and assess spatial noise correlations in quantum computing architectures.

  9. Quantifying the CV: Adapting an Impact Assessment Model to Astronomy

    NASA Astrophysics Data System (ADS)

    Bohmier, K. A.

    2015-04-01

    We present the process and results of applying the Becker Model to the curriculum vitae of a Yale University astronomy professor. As background, in July 2013, the Becker Medical Library at Washington Univ. in St. Louis held a workshop for librarians on the Becker Model, a framework developed by research assessment librarians for quantifying medical researchers' individual and group outputs. Following the workshop, the model was analyzed for content to adapt it to the physical sciences.

  10. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    SciTech Connect

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S), etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend the earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-Q-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93, CF+94, Cr95, KSW97].

  11. The Arizona Sun Corridor: Quantifying climatic implications of megapolitan development

    NASA Astrophysics Data System (ADS)

    Georgescu, M.; Moustaoui, M.; Mahalov, A.

    2010-12-01

    The local and regional-scale hydro-climatic impacts of land use and land cover change (LULCC) that result from urbanization require attention in light of future urban growth projections and related concerns for environmental sustainability. This is an especially serious issue over the southwestern U.S. where mounting pressure on the area’s natural desert environment and increasingly limited resources (e.g. water) exists, and is likely to worsen, due to unrelenting sprawl and associated urbanization. While previous modeling results have shown the degree to which the built environment has contributed to the region’s warming summertime climate, we use projections of future landscape change over the rapidly urbanizing Arizona Sun Corridor - an anticipated stretch of urban expanse that includes current metro Phoenix and Tucson - as surface boundary conditions to conduct high-resolution (order of 1-km) numerical simulations, over the seasonal timescale, to quantify the climatic effect of this relentlessly growing and increasingly vulnerable region. We use the latest version of the WRF modeling system to take advantage of several new capabilities, including a newly implemented nesting method used to refine the vertical mesh, and a comprehensive multi-story urban canopy scheme. We quantify the impact of projected (circa 2050) Sun Corridor megapolitan area on further development of the urban heat island (UHI), assess changes in the surface energy budget, with important implications for the near surface temperature and stability, and discuss modeled impacts on regional rainfall. Lastly, simulated effects are compared with projected warming due to increasing greenhouse gases (the GCMs from which these results are obtained currently do not take into account effects of urbanizing regions) and quantify the degree to which LULCC over the Arizona Sun Corridor will exacerbate regional anthropogenic climate change. A number of potential mitigation strategies are discussed (including effects of renewable energy), the simulated impact on anthropogenic heat production is quantified, and the degree to which future warming may be offset is estimated.

  12. Quantifying the sources of error in measurements of urine activity

    SciTech Connect

    Mozley, P.D.; Kim, H.J.; McElgin, W.

    1994-05-01

    Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered on 159 keV and images that were made simultaneously with the photopeak centered on 126 keV. The decay- and attenuation-corrected geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results, which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39 ± 5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images by an average of 14%.
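
    The study's own corrections are more detailed, but the textbook conjugate-view formula below shows the basic calculation: the geometric mean of anterior and posterior counts is corrected for attenuation through the patient thickness and divided by a system calibration factor. All numbers are hypothetical.

```python
# Conjugate-view activity estimate: A = sqrt(I_ant * I_post) * exp(mu * T / 2) / cal.
import numpy as np

counts_ant = 41_500.0     # counts in the anterior view (hypothetical)
counts_post = 36_800.0    # counts in the posterior view (hypothetical)
mu = 0.15                 # 1/cm, assumed effective linear attenuation coefficient for I-123
thickness = 22.0          # cm, patient thickness from the transmission scan (hypothetical)
cal = 2000.0              # counts per MBq, hypothetical system calibration factor

geometric_mean = np.sqrt(counts_ant * counts_post)
activity = geometric_mean * np.exp(mu * thickness / 2.0) / cal   # MBq
print(f"estimated bladder activity ~ {activity:.1f} MBq")
```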

  13. Quantifying thermal modifications on laser welded skin tissue

    NASA Astrophysics Data System (ADS)

    Tabakoglu, Hasim Ö.; Gülsoy, Murat

    2011-02-01

    Laser tissue welding is a potential medical treatment method, especially for closing cuts made during any kind of surgery. Photothermal effects of the laser on tissue should be quantified in order to determine optimal dosimetry parameters. Polarized light and phase contrast techniques reveal information about the extent of thermal change that occurs in tissue during laser welding. Changes in collagen structure can be detected in skin samples stained with hematoxylin and eosin. In this study, three different near infrared laser wavelengths (809 nm, 980 nm and 1070 nm) were compared for skin welding efficiency. 1 cm long cuts were treated with spot-by-spot laser application on Wistar rats' dorsal skin, in vivo. In all laser applications, 0.5 W of optical power was delivered to the tissue, 5 s continuously, resulting in 79.61 J/cm2 energy density (15.92 W/cm2 power density) for each spot. The 1st, 4th, 7th, 14th, and 21st days of the recovery period were determined as control days, and skin samples needed for histology were removed on these particular days. The stained samples were examined under a light microscope. Images were taken with a CCD camera and examined with imaging software. The 809 nm laser was found to be capable of creating strong full-thickness closure, but thermal damage was evident. The thermal damage from 980 nm laser welding was found to be more tolerable. The results showed that 1070 nm laser welding produced noticeably stronger bonds with minimal scar formation.

  14. Progress toward quantifying landscape-scale movement patterns of the glassy-winged sharpshooter and its natural enemies using a novel mark-capture technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Here we present the results of the first year of our research targeted at quantifying the landscape-level movement patterns of GWSS and its natural enemies. We showed that protein markers can be rapidly acquired and retained on insects for several weeks after marking directly in the field. Specifica...

  15. Quantifying the Impact of Scenic Environments on Health

    PubMed Central

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-01-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of scenicness for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing. PMID:26603464

  16. Quantifying the Impact of Scenic Environments on Health

    NASA Astrophysics Data System (ADS)

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-11-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of “scenicness” for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing.

  17. Quantifying the Impact of Scenic Environments on Health.

    PubMed

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-01-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of "scenicness" for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing. PMID:26603464

  18. Quantifying variances in comparative RNA secondary structure prediction

    PubMed Central

    2013-01-01

    Background With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the reliability score reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions Our predictive accuracy improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for and variances in poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself. PMID:23634662

  19. Quantifying the underlying landscape and paths of cancer.

    PubMed

    Li, Chunhe; Wang, Jin

    2014-11-01

    Cancer is a disease regulated by the underlying gene networks. The emergence of normal and cancer states, as well as the transformation between them, can be thought of as a result of the gene network interactions and associated changes. We developed a global potential landscape and path framework to quantify cancer and associated processes. We constructed a cancer gene regulatory network based on the experimental evidence and uncovered the underlying landscape. The resulting tristable landscape characterizes important biological states: normal, cancer and apoptosis. The landscape topography, in terms of barrier heights between stable state attractors, quantifies the global stability of the cancer network system. We propose two mechanisms of cancerization: one is through changes in landscape topography caused by changes in the regulation strengths of the gene network; the other is through fluctuations that help the system cross the critical barrier at fixed landscape topography. The kinetic paths from the least action principle quantify the transition processes among the normal, cancer and apoptosis states. The kinetic rates provide a quantification of transition speeds among the normal, cancer and apoptosis attractors. Through a global sensitivity analysis of the gene network parameters on the landscape topography, we uncovered some key gene regulations determining the transitions between cancer and normal states. This can be used to guide the design of new anti-cancer tactics, through a cocktail strategy of targeting multiple key regulation links simultaneously, to prevent cancer occurrence or to transform the early cancer state back to the normal state. PMID:25232051
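
    For readers unfamiliar with the landscape language used above, such constructions conventionally start from the steady-state probability distribution of the gene-expression state; a minimal statement of that convention (not the authors' full formalism) is

        U(\mathbf{x}) = -\ln P_{ss}(\mathbf{x}), \qquad
        \Delta U = U(\mathbf{x}_{\mathrm{barrier}}) - U(\mathbf{x}_{\mathrm{attractor}}),

    so that the normal, cancer and apoptosis states appear as low-potential attractors, and the barrier height between them serves as the measure of global stability referred to in the abstract; the kinetic paths are then the trajectories minimizing the corresponding action.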

  20. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    ERIC Educational Resources Information Center

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows…

  1. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    ERIC Educational Resources Information Center

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows

  2. Quantifying tire, rim, and vehicle effects on ride quality

    SciTech Connect

    Kenny, T.M.

    1989-01-01

    In this paper, factors influencing vehicle ride discomfort are analyzed to separate those related to tires, rims, vehicles and other sources. Raw data are presented as vertical first-harmonic accelerations and are transformed into quantitative Ride Discomfort numbers using an empirical model developed by NASA. The results indicate each factor's quantitative contribution to ride discomfort. Ride Discomfort numbers are compared against subjective data. Results are reported both as a ride quality number and as a test-set rejection parameter. The relationship between a combination of known factors and subjective feelings of ride quality is discussed. An explanation is proposed that accounts for previously ambiguous results in terms of quantifiable sources of vehicle ride discomfort.

  3. Oxygen-Enhanced MRI Accurately Identifies, Quantifies, and Maps Tumor Hypoxia in Preclinical Cancer Models.

    PubMed

    O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2016-02-15

    There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic. Cancer Res; 76(4); 787-95. ©2015 AACR. PMID:26659574
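
    The Oxy-R fraction defined above is simple to compute once a ΔR1 map and a perfusion mask are available. A minimal sketch, with the function name, threshold and example data as assumptions rather than the authors' implementation:

        import numpy as np

        def oxy_r_fraction(delta_r1_map, perfused_mask, threshold=0.0):
            """Fraction of perfused tumour voxels refractory to oxygen challenge.

            Voxels whose change in longitudinal relaxation rate (delta R1) during the
            oxygen challenge does not exceed `threshold` are counted as Oxy-R.
            Restricting the calculation to perfused voxels reflects the combination
            with a perfusion measurement described in the abstract.
            """
            perfused_values = delta_r1_map[perfused_mask]
            return float(np.mean(perfused_values <= threshold))

        # Hypothetical 64x64 delta R1 map (s^-1) with all voxels marked perfused
        dr1 = np.random.normal(loc=0.02, scale=0.03, size=(64, 64))
        mask = np.ones_like(dr1, dtype=bool)
        print(oxy_r_fraction(dr1, mask))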

  4. Quantifying Ammonia Emissions from High Elevation Grassland and Forest Soils

    NASA Astrophysics Data System (ADS)

    Stratton, J. J.; Levin, E. J.; Ham, J. M.; Collett, J. L.; Borch, T.

    2010-12-01

    Extensive evidence has shown that Rocky Mountain National Park (RMNP) has undergone ecosystem changes due to excessive nitrogen (N) deposition. Previously, the Rocky Mountain Atmospheric Nitrogen and Sulfur (RoMANS) study was conducted to identify the species of N that deposit in RMNP. Results from the RoMANS study were used to identify contributions to N deposition in RMNP, showing that local sources provided 33% of the wet-deposited ammonia in RMNP during the summer period. Given the uncertainty about the types of local sources and their influence on N in RMNP, the major goal of this study is to determine the amount of ammonia released from native grassland and forest soils. Intact soil cores were collected from native grassland and forest soils near RMNP on June 28th, July 20th, and August 9th 2010 and monitored in a laboratory chamber study for seven days. The samples were collected on the morning of the sampling dates to limit artifacts such as temperature variations. Ammonia gas released from the cores was collected in an acid trap and analyzed using ion chromatography. Results showed that the ammonia gas released, averaged (n = 18) over seven days, was 1.71 and 0.677 mg NH3/m2 soil/day for grassland and forest soils, respectively. Not all of the 36 soil cores investigated lost quantifiable amounts of ammonia. The results are small in comparison to other non-local sources (e.g. animal feeding operations, fertilizer, etc.), but further studies are needed to determine their significance as a local source. Seasonal trends were visible, with emissions on June 28th higher than on both the July 20th and August 9th sampling dates. Grassland soil emissions were higher than forest soil emissions for all three sampling dates, and water loss from the soil cores did not strongly correlate with ammonia emission. Studies are also being conducted to understand the fate of wet-deposited N on native grassland and forest soils.

  5. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance

    PubMed Central

    Chevereau, Guillaume; Dravecká, Marta; Batur, Tugce; Guvenek, Aysegul; Ayhan, Dilay Hazal; Toprak, Erdal; Bollenbach, Tobias

    2015-01-01

    The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the “morbidostat”, a device for evolving bacteria in well-controlled drug environments. Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations—an almost paradoxical behavior since this drug causes DNA damage and increases the mutation rate. Overall, we identified novel quantitative characteristics of the evolutionary landscape that provide the conceptual foundation for predicting the dynamics of drug resistance evolution. PMID:26581035

  6. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance.

    PubMed

    Chevereau, Guillaume; Dravecká, Marta; Batur, Tugce; Guvenek, Aysegul; Ayhan, Dilay Hazal; Toprak, Erdal; Bollenbach, Tobias

    2015-11-01

    The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the "morbidostat", a device for evolving bacteria in well-controlled drug environments. Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations-an almost paradoxical behavior since this drug causes DNA damage and increases the mutation rate. Overall, we identified novel quantitative characteristics of the evolutionary landscape that provide the conceptual foundation for predicting the dynamics of drug resistance evolution. PMID:26581035

  7. Toward quantifying the deep Atlantic carbon storage increase during the last glaciation

    NASA Astrophysics Data System (ADS)

    Yu, J.; Menviel, L.; Jin, Z.

    2014-12-01

    Ice core records show that atmospheric CO2 concentrations during peak glacial time were ~30% lower than the levels during interglacial periods. The terrestrial biosphere carbon stock was likely reduced during glacials. Increased carbon storage in the deep ocean is thought to play an important role in lowering glacial atmospheric CO2. However, it has been challenging to quantify carbon storage changes in the deep ocean using existing proxy data. Here, we present deepwater carbonate ion reconstructions for a few locations in the deep Atlantic. These data allow us to estimate the minimum carbon storage increase in the deep Atlantic Ocean during the last glaciation. Our results show that, despite its relatively small volume, the deep Atlantic Ocean may contribute significantly to atmospheric CO2 variations at major climate transitions. Furthermore, our results suggest a strong coupling of ocean circulation and the carbon cycle in the deep Atlantic during the last glaciation.

  8. A vector space method to quantify agreement in qualitative data

    PubMed Central

    McFarlane, Delano J.; Ancker, Jessica S.; Kukafka, Rita

    2008-01-01

    Interrater agreement in qualitative research is rarely quantified. We present a new method for assessing interrater agreement in the coding of focus group transcripts, based on vector space methods. We also demonstrate similarities between this vector method and two previously published interrater agreement methods. Using these methods, we showed that interrater agreement for the qualitative data was quite low, attributable in part to the subjective nature of the codes and in part to the very large number of possible codes. These methods of assessing inter-rater agreement have the potential to be useful in determining and improving reliability of qualitative codings. PMID:18999026
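
    One simple instance of such a vector space measure is the cosine similarity between two raters' code-frequency vectors; the sketch below illustrates the idea and is not necessarily the authors' exact formulation.

        import numpy as np

        def cosine_agreement(counts_rater_a, counts_rater_b):
            """Cosine similarity between two raters' code-frequency vectors.

            Each vector holds, for one transcript, how often a rater applied each
            code in the codebook: 1.0 indicates identical relative use of codes,
            0.0 indicates no overlap.
            """
            a = np.asarray(counts_rater_a, dtype=float)
            b = np.asarray(counts_rater_b, dtype=float)
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

        # Hypothetical codebook of five codes applied to one focus-group transcript
        print(cosine_agreement([3, 0, 2, 1, 0], [2, 1, 2, 0, 0]))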

  9. Precise thermal NDE for quantifying structural damage

    SciTech Connect

    Del Grande, N.K.; Durbin, P.F.

    1995-09-18

    The authors demonstrated a fast, wide-area, precise thermal NDE imaging system to quantify aircraft corrosion damage, such as percent metal loss, above a threshold of 5% with 3% overall uncertainties. The DBIR precise thermal imaging and detection method has been used successfully to characterize defect types, and their respective depths, in aircraft skins and in multi-layered composite materials used for wing patches, doublers and stiffeners. This precise thermal NDE inspection tool has long-term potential benefits for evaluating the structural integrity of airframes, pipelines and waste containers. They proved the feasibility of using the DBIR thermal NDE imaging system to inspect concrete and asphalt-concrete bridge decks. As a logical extension to the successful feasibility study, they plan to inspect a concrete bridge deck from a moving vehicle to quantify the volumetric damage within the deck and the percentage of the deck which has subsurface delaminations. Potential near-term benefits are in-service monitoring from a moving vehicle to inspect the structural integrity of the bridge deck. This would help prioritize the repair schedule for a reported 200,000 bridge decks in the US that need substantive repairs. Potential long-term benefits are affordable and reliable rehabilitation of bridge decks.

  10. Quantifying meta-correlations in financial markets

    NASA Astrophysics Data System (ADS)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications to risk management, portfolio optimization, and to the increased stability of financial markets.
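
    The meta-correlation defined above can be estimated directly from a table of daily closing prices. A minimal sketch, with the window length and equal index weighting as illustrative choices rather than the authors' exact procedure:

        import numpy as np
        import pandas as pd

        def meta_correlation(prices: pd.DataFrame, window: int = 22) -> float:
            """Correlation of rolling mean pairwise correlation with index return.

            For each rolling window the mean off-diagonal Pearson correlation of the
            constituents' daily returns is computed, along with the mean return of an
            equal-weighted index over the same window; the two series are then
            correlated to give the meta-correlation.
            """
            returns = prices.pct_change().dropna()
            mean_corrs, index_rets = [], []
            for end in range(window, len(returns) + 1):
                chunk = returns.iloc[end - window:end]
                corr = chunk.corr().values
                off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]
                mean_corrs.append(off_diag.mean())
                index_rets.append(chunk.mean(axis=1).mean())
            return float(np.corrcoef(mean_corrs, index_rets)[0, 1])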

  11. Quantifying chemical reactions by using mixing analysis.

    PubMed

    Jurado, Anna; Vázquez-Suñé, Enric; Carrera, Jesús; Tubau, Isabel; Pujades, Estanislao

    2015-01-01

    This work is motivated by the need for a sound understanding of the chemical processes that affect the organic pollutants in an urban aquifer. We propose an approach to quantify such processes using mixing calculations. The methodology consists of the following steps: (1) identification of the recharge sources (end-members) and selection of the species (conservative and non-conservative) to be used, (2) identification of the chemical processes and (3) evaluation of mixing ratios including the chemical processes. This methodology has been applied in the Besòs River Delta (NE Barcelona, Spain), where the River Besòs is the main aquifer recharge source. A total of 51 groundwater samples were collected from July 2007 to May 2010 during four field campaigns. Three river end-members were necessary to explain the temporal variability of the River Besòs: one end-member corresponds to wet periods (W1) and two to dry periods (D1 and D2). This methodology has proved to be useful not only to compute the mixing ratios but also to quantify processes such as calcite and magnesite dissolution, aerobic respiration and denitrification occurring at each observation point. PMID:25280248
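
    The mixing-ratio step described above amounts to solving a small linear system in which each recharge end-member contributes a fraction of every conservative species. A least-squares sketch under simplifying assumptions (no non-negativity constraint; concentrations are hypothetical):

        import numpy as np

        def mixing_ratios(sample, end_members):
            """Least-squares mixing ratios of recharge end-members for one sample.

            `sample` holds conservative-species concentrations; each column of
            `end_members` holds the same species in one end-member (e.g. the W1, D1
            and D2 river waters of the abstract). A sum-to-one row enforces that the
            ratios describe a complete mixture.
            """
            n_members = end_members.shape[1]
            a_matrix = np.vstack([end_members, np.ones(n_members)])
            b_vector = np.append(sample, 1.0)
            ratios, *_ = np.linalg.lstsq(a_matrix, b_vector, rcond=None)
            return ratios

        # Hypothetical Cl and SO4 concentrations (mg/L): three end-members, one sample
        ends = np.array([[120.0, 80.0, 60.0],
                         [45.0, 90.0, 30.0]])
        print(mixing_ratios(np.array([95.0, 55.0]), ends))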

  12. Quantifying near-surface water exchange to assess hydrometeorological models

    NASA Astrophysics Data System (ADS)

    Parent, Annie-Claude; Anctil, François; Morais, Anne

    2013-04-01

    Modelling water exchange in the lower atmosphere-crop-soil system with hydrometeorological models allows estimation of actual evapotranspiration (ETa), a complex but critical value for numerous hydrological purposes, e.g. hydrological modelling and crop irrigation. This poster presents a summary of the hydrometeorological research activity conducted by our research group. The first purpose of this research is to quantify ETa and drainage of a rainfed potato crop located in South-Eastern Canada. Then, the outputs of the hydrometeorological models under study are compared with the observed turbulent fluxes. Afterwards, the sensitivity of the hydrometeorological models to different inputs is assessed for an environment under a changing climate. ETa was measured with micrometeorological instrumentation (CSAT3, Campbell SCI Inc.; Li7500, LiCor Inc.) and the eddy covariance technique. Near-surface soil heat flux and soil water content at different layers from 10 cm to 100 cm were also measured. Other parameters required by the hydrometeorological models were observed using standard meteorological instrumentation: shortwave and longwave solar radiation, wind speed, air temperature, atmospheric pressure and precipitation. The cumulative ETa during the growing season (123 days) was 331.5 mm, with a daily maximum of 6.5 mm at full coverage; precipitation was 350.6 mm, which is rather small compared with the historical mean (563.3 mm). This experiment allowed calculation of crop coefficients that vary over the growing season for a rainfed potato crop. Land surface schemes such as CLASS (Canadian Land Surface Scheme) and c-ISBA (a Canadian version of the model Interaction Sol-Biosphère-Atmosphère) are 1-D physical hydrometeorological models that produce turbulent fluxes (including ETa) for a given crop. The schemes' performances were assessed for both the energy and water balance, based on the resulting turbulent fluxes and the given observations. CLASS was found to overestimate the turbulent fluxes (including ETa), as well as the fluctuations in the soil heat flux, which were higher than those measured. ETa and runoff were overestimated by c-ISBA, while drainage was weaker compared to CLASS. On the whole, CLASS modelled drainage better. Further work includes: (1) comparing observations and results from CLASS to the French model SURFEX (Surface Externalisée), which uses the ISBA scheme, and (2) assessing the sensitivity of CLASS to different meteorological inputs (i.e. 6 regional climate models) in producing a consistent ETa, in a context of climate change.

  13. Quantifying proteinuria in hypertensive disorders of pregnancy.

    PubMed

    Amin, Sapna V; Illipilla, Sireesha; Hebbar, Shripad; Rai, Lavanya; Kumar, Pratap; Pai, Muralidhar V

    2014-01-01

    Background. Progressive proteinuria indicates worsening of the condition in hypertensive disorders of pregnancy and hence its quantification guides the clinician in decision making and treatment planning. Objective. To evaluate the efficacy of spot dipstick analysis and urinary protein-creatinine ratio (UPCR) in hypertensive disease of pregnancy for predicting 24-hour proteinuria. Subjects and Methods. A total of 102 patients fulfilling the inclusion criteria were evaluated with a preadmission urine dipstick test and UPCR performed on a spot voided sample. After admission, the entire 24-hour urine sample was collected and analysed for daily protein excretion. Dipstick estimation and UPCR were compared to the 24-hour results. Results. Seventy-eight patients (76.5%) had significant proteinuria of more than 300 mg/24 h. The dipstick method showed 59% sensitivity and 67% specificity for prediction of significant proteinuria. The area under the curve for UPCR was 0.89 (95% CI: 0.83 to 0.95, P < 0.001), showing 82% sensitivity and a 12.5% false positive rate for a cutoff value of 0.45. Higher cutoff values (1.46 and 1.83) predicted heavy proteinuria (2 g and 3 g/24 h, resp.). Conclusion. This study suggests that the random urinary protein : creatinine ratio is a reliable investigation compared to the dipstick method to assess proteinuria in hypertensive pregnant women. However, clinical laboratories should standardize the reference values for their setup. PMID:25302114

  14. Quantifying Proteinuria in Hypertensive Disorders of Pregnancy

    PubMed Central

    Amin, Sapna V.; Illipilla, Sireesha; Rai, Lavanya; Kumar, Pratap; Pai, Muralidhar V.

    2014-01-01

    Background. Progressive proteinuria indicates worsening of the condition in hypertensive disorders of pregnancy and hence its quantification guides the clinician in decision making and treatment planning. Objective. To evaluate the efficacy of spot dipstick analysis and urinary protein-creatinine ratio (UPCR) in hypertensive disease of pregnancy for predicting 24-hour proteinuria. Subjects and Methods. A total of 102 patients fulfilling the inclusion criteria were evaluated with a preadmission urine dipstick test and UPCR performed on a spot voided sample. After admission, the entire 24-hour urine sample was collected and analysed for daily protein excretion. Dipstick estimation and UPCR were compared to the 24-hour results. Results. Seventy-eight patients (76.5%) had significant proteinuria of more than 300 mg/24 h. The dipstick method showed 59% sensitivity and 67% specificity for prediction of significant proteinuria. The area under the curve for UPCR was 0.89 (95% CI: 0.83 to 0.95, P < 0.001), showing 82% sensitivity and a 12.5% false positive rate for a cutoff value of 0.45. Higher cutoff values (1.46 and 1.83) predicted heavy proteinuria (2 g and 3 g/24 h, resp.). Conclusion. This study suggests that the random urinary protein : creatinine ratio is a reliable investigation compared to the dipstick method to assess proteinuria in hypertensive pregnant women. However, clinical laboratories should standardize the reference values for their setup. PMID:25302114

  15. Obtaining Laws Through Quantifying Experiments: Justifications of Pre-service Physics Teachers in the Case of Electric Current, Voltage and Resistance

    NASA Astrophysics Data System (ADS)

    Mäntylä, Terhi; Hämäläinen, Ari

    2015-07-01

    The language of physics is mathematics, and physics ideas, laws and models describing phenomena are usually represented in mathematical form. Therefore, an understanding of how to navigate between phenomena and the models representing them in mathematical form is important for a physics teacher so that the teacher can make physics understandable to students. Here, the focus is on "experimental mathematization": how laws are established through quantifying experiments. A sequence from qualitative experiments to mathematical formulations through quantifying experiments on electric current, voltage and resistance in pre-service physics teachers' laboratory reports is examined. The way students reason about and justify the mathematical formulation of the measurement results, and how they combine the treatment and presentation of empirical data with their justifications, is analyzed. The results show that pre-service physics teachers understand the basic idea of how quantifying experiments establish the quantities and laws but are not able to argue for it in a justified manner.
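
    The kind of "experimental mathematization" analysed above can be made concrete with a single quantifying experiment: measuring voltage against current for a resistor and justifying the law U = RI from the fit. The data below are invented for illustration.

        import numpy as np

        # Hypothetical current-voltage readings from a quantifying experiment
        current_a = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
        voltage_v = np.array([0.52, 1.01, 1.49, 2.03, 2.51])

        # Fitting a straight line "mathematizes" the measurements: the slope is the
        # resistance in U = R * I, and a near-zero intercept supports the linear law.
        resistance, intercept = np.polyfit(current_a, voltage_v, 1)
        print(f"R = {resistance:.1f} ohm, intercept = {intercept:.3f} V")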

  16. Effect of soil structure on the growth of bacteria in soil quantified using CARD-FISH

    NASA Astrophysics Data System (ADS)

    Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Otten, Wilfred

    2014-05-01

    It has been reported that compaction of soil due to use of heavy machinery has resulted in the reduction of crop yield. Compaction affects the physical properties of soil such as bulk density, soil strength and porosity. This causes an alteration in the soil structure which limits the mobility of nutrients, water and air infiltration and root penetration in soil. Several studies have been conducted to explore the effect of soil compaction on plant growth and development. However, there is scant information on the effect of soil compaction on the microbial community and its activities in soil. Understanding the effect of soil compaction on microbial community is essential as microbial activities are very sensitive to abrupt environmental changes in soil. Therefore, the aim of this work was to investigate the effect of soil structure on growth of bacteria in soil. The bulk density of soil was used as a soil physical parameter to quantify the effect of soil compaction. To detect and quantify bacteria in soil the method of catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH) was used. This technique results in high intensity fluorescent signals which make it easy to quantify bacteria against high levels of autofluorescence emitted by soil particles and organic matter. In this study, bacterial strains Pseudomonas fluorescens SBW25 and Bacillus subtilis DSM10 were used. Soils of aggregate size 2-1mm were packed at five different bulk densities in polyethylene rings (4.25 cm3).The soil rings were sampled at four different days. Results showed that the total number of bacteria counts was reduced significantly (P

  17. Crisis of Japanese vascular flora shown by quantifying extinction risks for 1618 taxa.

    PubMed

    Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

    2014-01-01

    Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994-1995 and in 2003-2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction. PMID:24922311

  18. Crisis of Japanese Vascular Flora Shown By Quantifying Extinction Risks for 1618 Taxa

    PubMed Central

    Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

    2014-01-01

    Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994-1995 and in 2003-2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction. PMID:24922311

  19. Quantifying Lead-Time Bias in Risk-Factor Studies of Cancer through Simulation

    PubMed Central

    Jansen, Rick J.; Alexander, Bruce H.; Anderson, Kristin E.; Church, Timothy R.

    2013-01-01

    Purpose Lead-time is inherent in early detection and creates bias in observational studies of screening efficacy, but its potential to bias effect estimates in risk-factor studies is not always recognized. We describe a form of this bias that conventional analyses cannot address and develop a model to quantify it. Methods Surveillance Epidemiology and End Results (SEER) data form the basis for estimates of age-specific preclinical incidence and log-normal distributions describe the preclinical duration distribution. Simulations assume a joint null hypothesis of no effect of either the risk factor or screening on the preclinical incidence of cancer, and then quantify the bias as the risk-factor odds ratio (OR) from this null study. This bias can be used as a factor to adjust observed OR in the actual study. Results Results showed that for this particular study design, as average preclinical duration increased, the bias in the total-physical-activity OR monotonically increased from 1% to 22% above the null, but the smoking OR monotonically decreased from 1% above the null to 5% below the null. Conclusion The finding of nontrivial bias in fixed risk-factor effect estimates demonstrates the importance of quantitatively evaluating it in susceptible studies. PMID:23988688

  20. How to quantify conduits in wood?

    PubMed Central

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology. PMID:23507674

  1. Quantifying Power Grid Risk from Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. This electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive relative risk maps by location by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.
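
    Once a surface electric field time series is available, the per-node GIC computation referred to above is commonly approximated with two site-specific network coefficients. The sketch below uses that common engineering approximation; the coefficients and field values are placeholders, not results from this study.

        import numpy as np

        def gic_time_series(e_north, e_east, a_coeff, b_coeff):
            """Geomagnetically induced current (A) at one substation node.

            Uses the widely used approximation GIC(t) = a*E_north(t) + b*E_east(t),
            where the coefficients (A per V/km) encode the surrounding network
            topology and resistances.
            """
            return a_coeff * np.asarray(e_north) + b_coeff * np.asarray(e_east)

        # Hypothetical one-minute electric field values in V/km
        print(gic_time_series([0.2, 1.5, 3.0], [0.1, 0.8, 2.2], a_coeff=45.0, b_coeff=-12.0))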

  2. Quantifying International Travel Flows Using Flickr

    PubMed Central

    Barchiesi, Daniele; Moat, Helen Susannah; Alis, Christian; Bishop, Steven; Preis, Tobias

    2015-01-01

    Online social media platforms are opening up new opportunities to analyse human behaviour on an unprecedented scale. In some cases, the fast, cheap measurements of human behaviour gained from these platforms may offer an alternative to gathering such measurements using traditional, time consuming and expensive surveys. Here, we use geotagged photographs uploaded to the photo-sharing website Flickr to quantify international travel flows, by extracting the location of users and inferring trajectories to track their movement across time. We find that Flickr based estimates of the number of visitors to the United Kingdom significantly correlate with the official estimates released by the UK Office for National Statistics, for 28 countries for which official estimates are calculated. Our findings underline the potential for indicators of key aspects of human behaviour, such as mobility, to be generated from data attached to the vast volumes of photographs posted online. PMID:26147500
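
    The trajectory inference described above can be reduced, in its simplest form, to sorting each user's geotagged photographs by time and counting changes of country. The sketch below is a simplified reading of that procedure, not the authors' pipeline.

        from collections import Counter

        def travel_flows(photos):
            """Count country-to-country transitions implied by users' photo streams.

            `photos` is an iterable of (user_id, timestamp, country) tuples, e.g.
            extracted from geotagged Flickr metadata. Photos are sorted per user by
            time and each change of country counts as one trip.
            """
            by_user = {}
            for user, timestamp, country in photos:
                by_user.setdefault(user, []).append((timestamp, country))

            flows = Counter()
            for seq in by_user.values():
                seq.sort()
                for (_, origin), (_, destination) in zip(seq, seq[1:]):
                    if origin != destination:
                        flows[(origin, destination)] += 1
            return flows

        demo = [(1, "2014-05-01", "FR"), (1, "2014-05-09", "GB"), (2, "2014-06-02", "DE")]
        print(travel_flows(demo))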

  3. Quantifying the global fractionation of polychlorinated biphenyls.

    PubMed

    Wania, Frank; Su, Yushan

    2004-05-01

    Due to the wide range of their physical-chemical properties, polychlorinated biphenyls (PCBs) have played an important role in the derivation of the global fractionation hypothesis, which predicts changes in the composition of persistent organic pollutant mixtures with latitude. Recent historical emission estimates, the derivation of an internally consistent property data set, in combination with a zonally averaged global fate and transport model, allow a quantitative investigation of the compositional shifts PCBs experience as a function of environmental compartment, latitude and time. Model simulations reproduce the higher relative abundance of lighter PCB congeners with increasing latitude, observed in air and soil, and quantify the relative importance of partitioning, persistence and emissions in establishing PCB patterns. Compositional variations consistent with global fractionation, as well as inverted concentration profiles with higher levels in the Arctic than at lower latitudes, are consistent with only minor fractions of the global PCB inventory being transferred northward. PMID:15151387

  4. Quantifying morphogenesis in plants in 4D.

    PubMed

    Bassel, George W; Smith, Richard S

    2016-02-01

    Plant development occurs in 3D space over time (4D). Recent advances in image acquisition and computational analysis are now enabling development to be visualized and quantified in its entirety at the cellular level. The simultaneous quantification of reporter abundance and 3D cell shape change enables links between signaling processes and organ morphogenesis to be accomplished organ-wide and at single cell resolution. Current work to integrate this quantitative 3D image data with computational models is enabling causal relationships between gene expression and organ morphogenesis to be uncovered. Further technical advances in imaging and image analysis will enable this approach to be applied to a greater diversity of plant organs and will become a key tool to address many questions in plant development. PMID:26748353

  5. Crowdsourcing for quantifying transcripts: An exploratory study.

    PubMed

    Azzam, Tarek; Harman, Elena

    2016-02-01

    This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or a set of tasks. In this study multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts, and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed method tool to offer a supplemental perspective on transcribed interviews. PMID:26519690

  6. QUANTIFYING SPECTRAL FEATURES OF TYPE Ia SUPERNOVAE

    SciTech Connect

    Wagers, A.; Wang, L.; Asztalos, S.

    2010-03-10

    We introduce a new technique to quantify highly structured spectra for which the definition of continua or spectral features in the observed flux spectra is difficult. The method employs wavelet transformations to decompose the observed spectra into different scales. A procedure is formulated to define the strength of spectral features so that the measured spectral indices are independent of the flux levels and are insensitive to the definition of the continuum and to reddening. This technique is applied to Type Ia supernova (SN) spectra, where correlations are revealed between luminosity and spectral features. The current technique may allow for luminosity corrections based on spectral features in the use of Type Ia SNe as a cosmological probe.
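
    The scale decomposition described above can be prototyped with an off-the-shelf discrete wavelet transform; normalising each scale's energy by the total makes the indices independent of the overall flux level, which is the property the abstract emphasises. The wavelet family, decomposition level and normalisation below are illustrative choices, not the authors' exact definition.

        import numpy as np
        import pywt

        def wavelet_scale_indices(flux, wavelet="db4", level=4):
            """Relative energy of each wavelet detail scale of a spectrum."""
            coeffs = pywt.wavedec(np.asarray(flux, dtype=float), wavelet, level=level)
            detail_energy = np.array([np.sum(c ** 2) for c in coeffs[1:]])
            return detail_energy / detail_energy.sum()

        # Doubling the flux leaves the indices unchanged (flux-level independence)
        spectrum = np.random.rand(1024)
        print(np.allclose(wavelet_scale_indices(spectrum),
                          wavelet_scale_indices(2.0 * spectrum)))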

  7. STAR Trial Shows Lower Toxicities from Raloxifene

    Cancer.gov

    Initial results in 2006 of the NCI-sponsored Study of Tamoxifen and Raloxifene (STAR) showed that a common osteoporosis drug, raloxifene, prevented breast cancer to the same degree, but with fewer serious side-effects, than the drug tamoxifen that had bee

  8. Olaparib shows promise in multiple tumor types.

    PubMed

    2013-07-01

    A phase II study of the PARP inhibitor olaparib (AstraZeneca) for cancer patients with inherited BRCA1 and BRCA2 gene mutations confirmed earlier results showing clinical benefit for advanced breast and ovarian cancers, and demonstrated evidence of effectiveness against pancreatic and prostate cancers. PMID:23847380

  9. Quantifying touch feel perception: tribological aspects

    NASA Astrophysics Data System (ADS)

    Liu, X.; Yue, Z.; Cai, Z.; Chetwynd, D. G.; Smith, S. T.

    2008-08-01

    We report a new investigation into how surface topography and friction affect human touch-feel perception. In contrast with previous work based on micro-scale mapping of surface mechanical and tribological properties, this investigation focuses on the direct measurement of the friction generated when a fingertip is stroked on a test specimen. A special friction apparatus was built for the in situ testing, based on a linear flexure mechanism with both contact force and frictional force measured simultaneously. Ten specimens, already independently assessed in a 'perception clinic', with materials including natural wood, leather, engineered plastics and metal were tested and the results compared with the perceived rankings. Because surface geometrical features are suspected to play a significant role in perception, a second set of samples, all of one material, were prepared and tested in order to minimize the influence of properties such as hardness and thermal conductivity. To minimize subjective effects, all specimens were also tested in a roller-on-block configuration based upon the same friction apparatus, with the roller materials being steel, brass and rubber. This paper reports the detailed design and instrumentation of the friction apparatus, the experimental set-up and the friction test results. Attempts have been made to correlate the measured properties and the perceived feelings for both roughness and friction. The results show that the measured roughness and friction coefficient both have a strong correlation with the rough-smooth and grippy-slippery feelings.
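
    Correlating the friction measurements with the perception-clinic rankings is naturally done with a rank correlation. A sketch with invented numbers (a negative coefficient is expected here because rank 1 denotes the specimen perceived as most grippy):

        import numpy as np
        from scipy.stats import spearmanr

        # Hypothetical mean friction coefficients for ten specimens and their
        # perception-clinic rank (1 = perceived as most grippy)
        friction = np.array([0.62, 0.35, 0.48, 0.71, 0.29, 0.55, 0.40, 0.66, 0.33, 0.51])
        perceived_rank = np.array([2, 9, 6, 1, 10, 4, 7, 3, 8, 5])

        rho, p_value = spearmanr(friction, perceived_rank)
        print(f"Spearman rho = {rho:.2f}, p = {p_value:.4f}")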

  10. Quantifying uncertainty in observational rainfall datasets

    NASA Astrophysics Data System (ADS)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernández-Díaz et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalagnoumou et al. (2013) on southern Africa. There are also a further three papers that the authors know of under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalagnoumou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and blending methods used to combine satellite and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference), with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets, which demonstrates that some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show in east Africa that the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations and also that, where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1 Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Diédhiou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834 Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K.
Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. DOI:10.1175/JCLI-D-12-00703.1 Kim, J., D. E. Waliser, C. A. Mattmann, C. E. Goodale, A. F. Hart, P. A. Zimdars, D. J. Crichton, C. Jones, G. Nikulin, B. Hewitson, C. Jack, C. Lennard, and A. Favre (2013) Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors. Clim. Dyn. 42:1189-1202. DOI: 10.1007/s00382-013-1751-7 Laprise, R., L. Hernández-Díaz, K. Tete, L. Sushama, L. Šeparović, A. Martynov, K. Winger, and M. Valin (2013) Climate projections over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 41:3219-3246. DOI:10.1007/s00382-012-1651-2 Mariotti, L., I. Diallo, E. Coppola, and F. Giorgi (2014) Seasonal and intraseasonal changes of African monsoon climates in 21st century CORDEX projections. Climatic Change, 1-13. DOI: 10.1007/s10584-014-1097-0 Nikulin, G., C. Jones, F. Giorgi, G. Asrar, M. Büchner, R. Cerezo-Mota, O. Bøssing Christensen, M. Déqué, J. Fernandez, A. Hänsler, E. van Meijgaard, P. Samuelsson, M. Bamba Sylla, and L. Sushama (2012) Precipitation Climatology in an Ensemble of CORDEX-Africa Regional Climate Simulations. J. Climate, 25, 6057-6078. 10.1175/JCLI-D-11-00375.1 Panitz, H.-J., A. Dosio, M. Büchner, D. Lüthi, and K. Keuler (2013) COSMO-CLM (CCLM) climate simulations over CORDEX Africa domain: analysis of the ERA-Interim driven simulations at 0.44 degree and 0.22 degree resolution. Clim. Dyn., DOI:10.1007/s00382-013-1834-5 Sylla, M. B., F. Giorgi, E. Coppola, and L. Mariotti (2012) Uncertainties in daily rainfall over Africa: assessment of gridded observation products and evaluation of a regional climate model simulation. Int. J. Climatol., 33:1805-1817. DOI: 10.1002/joc.3551 Tramblay, Y., D. Ruelland, S. Somot, R. Bouaicha, and E. Servat (2013) High-resolution Med-CORDEX regional climate model simulations for hydrological impact studies: a first evaluation of the ALADIN-Climate model in Morocco. Hydrol. Earth Syst. Sci. Discuss., 10, 5687-5737. DOI:10.5194/hessd-10-5687-2013

  11. 3D Wind: Quantifying wind speed and turbulence intensity

    NASA Astrophysics Data System (ADS)

    Barthelmie, R. J.; Pryor, S. C.; Wang, H.; Crippa, P.

    2013-12-01

    Integrating measurements and modeling of wind characteristics for wind resource assessment and wind farm control is increasingly challenging as the scales of wind farms increase. Even offshore or in relatively homogeneous landscapes, there are significant gradients of both wind speed and turbulence intensity on scales that typify large wind farms. Our project is, therefore, focused on (i) improving methods to integrate remote sensing and in situ measurements with model simulations to produce a 3-dimensional view of the flow field on wind farm scales and (ii) investigating important controls on the spatiotemporal variability of flow fields within the coastal zone. The instrument suite deployed during the field experiments includes 3-D sonic and cup anemometers deployed on meteorological masts and buoys, anemometers deployed on tethersondes and an Unmanned Aerial Vehicle, multiple vertically-pointing continuous-wave lidars and scanning Doppler lidars. We also integrate data from satellite-borne instrumentation - specifically synthetic aperture radar and scatterometers - and output from the Weather Research and Forecast (WRF) model. Spatial wind fields and vertical profiles of wind speed from WRF and from the full in situ observational suite exhibited excellent agreement in a proof-of-principle experiment conducted in north Indiana, particularly during convective conditions, but showed some discrepancies during the breakdown of the nocturnal stable layer. Our second experiment in May 2013 focused on triangulating a volume above an area of coastal water extending from the port in Cleveland out to an offshore water intake crib (about 5 km) and back to the coast, and includes extremely high resolution WRF simulations designed to characterize the coastal zone. Vertically pointing continuous-wave lidars were operated at each apex of the triangle, while the scanning Doppler lidar scanned out across the water over a 90 degree azimuth angle. Preliminary results pertaining to objective (i) indicate there is good agreement between wind and turbulence profiles to 200 m as measured by the vertically pointing lidars, with the expected modification of the offshore profiles relative to the profiles at the coastline. However, these profiles do not always fully agree with wind speed and direction profiles measured by the scanning Doppler lidar. Further investigation is required to elucidate these results and to analyze whether these discrepancies occur during particular atmospheric conditions. Preliminary results regarding controls on flow in the coastal zone (i.e. objective ii) include clear evidence that the wind profile to 200 m was modified due to swell during unstable conditions, even under moderate to high wind speed conditions. The measurement campaigns will be described in detail, with a view to evaluating optimal strategies for offshore measurement campaigns and in the context of quantifying wind and turbulence in a 3D volume.

  12. Quantifying the statistical complexity of low-frequency fluctuations in semiconductor lasers with optical feedback

    SciTech Connect

    Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.; Garcia-Ojalvo, J.

    2010-07-15

    Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon's entropy and the Martin, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser's pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser's dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
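
    For readers wanting to reproduce the ordinal-pattern analysis described above, a minimal sketch is given below. It is not the authors' code: the pattern length (D = 4), the use of natural logarithms, the particular normalisation of the Martin-Plastino-Rosso complexity, and the synthetic interdropout intervals are all assumptions standing in for the published choices and measured data.

      # Sketch of ordinal-pattern entropy and MPR statistical complexity for a
      # series of interdropout intervals (synthetic data; assumed D = 4).
      from itertools import permutations
      import numpy as np

      def ordinal_distribution(intervals, D=4):
          counts = {p: 0 for p in permutations(range(D))}
          for i in range(len(intervals) - D + 1):
              counts[tuple(np.argsort(intervals[i:i + D]))] += 1
          p = np.array(list(counts.values()), dtype=float)
          return p / p.sum()

      def entropy(p):
          p = p[p > 0]
          return -np.sum(p * np.log(p))

      def mpr_complexity(p):
          # normalised Jensen-Shannon disequilibrium times normalised entropy
          n = len(p)
          u = np.full(n, 1.0 / n)
          h_norm = entropy(p) / np.log(n)
          js = entropy(0.5 * (p + u)) - 0.5 * entropy(p) - 0.5 * entropy(u)
          js_max = -0.5 * ((n + 1) / n * np.log(n + 1) - 2 * np.log(2 * n) + np.log(n))
          return h_norm * js / js_max

      intervals = np.random.exponential(1.0, 5000)   # placeholder interdropout intervals
      p = ordinal_distribution(intervals)
      print(entropy(p) / np.log(len(p)), mpr_complexity(p))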

  13. A mass-balance model to separate and quantify colloidal and solute redistributions in soil

    USGS Publications Warehouse

    Bern, C.R.; Chadwick, O.A.; Hartshorn, A.S.; Khomo, L.M.; Chorover, J.

    2011-01-01

    Studies of weathering and pedogenesis have long used calculations based upon low solubility index elements to determine mass gains and losses in open systems. One of the questions currently unanswered in these settings is the degree to which mass is transferred in solution (solutes) versus suspension (colloids). Here we show that differential mobility of the low solubility, high field strength (HFS) elements Ti and Zr can trace colloidal redistribution, and we present a model for distinguishing between mass transfer in suspension and solution. The model is tested on a well-differentiated granitic catena located in Kruger National Park, South Africa. Ti and Zr ratios from parent material, soil and colloidal material are substituted into a mixing equation to quantify colloidal movement. The results show zones of both colloid removal and augmentation along the catena. Colloidal losses of 110 kg m-2 (-5% relative to parent material) are calculated for one eluviated soil profile. A downslope illuviated profile has gained 169 kg m-2 (10%) of colloidal material. Elemental losses by mobilization in true solution are ubiquitous across the catena, even in zones of colloidal accumulation, and range from 1418 kg m-2 (-46%) for an eluviated profile to 195 kg m-2 (-23%) at the bottom of the catena. Quantification of simultaneous mass transfers in solution and suspension provides greater specificity on processes within soils and across hillslopes. Additionally, because colloids include both HFS and other elements, the ability to quantify their redistribution has implications for standard calculations of soil mass balances using such index elements.
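
    The mixing equation itself is not reproduced in the record. Purely as an illustration, the sketch below estimates the colloid-derived mass fraction of a soil sample by treating its Ti and Zr concentrations as a linear mixture of parent-material and colloid end-members; all concentration values are hypothetical.

      # Hypothetical two-end-member mixing sketch: estimate the mass fraction of
      # colloid-derived material from Ti and Zr concentrations (illustrative values).
      import numpy as np

      parent  = np.array([3000.0, 200.0])   # Ti, Zr in parent material (mg/kg)
      colloid = np.array([9000.0, 900.0])   # Ti, Zr in colloidal material (mg/kg)
      soil    = np.array([4200.0, 340.0])   # Ti, Zr measured in a soil horizon

      # soil ~ f*colloid + (1 - f)*parent  ->  least-squares estimate of f
      A = (colloid - parent).reshape(-1, 1)
      b = soil - parent
      f = np.linalg.lstsq(A, b, rcond=None)[0][0]
      print(f"estimated colloid mass fraction: {f:.2f}")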

  14. Quantifying nonverbal communicative behavior in face-to-face human dialogues

    NASA Astrophysics Data System (ADS)

    Skhiri, Mustapha; Cerrato, Loredana

    2002-11-01

    The study reported here is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the extent of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans have been analyzed, with attention focused on those nonverbal behaviors that serve the very relevant functions of regulating the conversational flow (i.e., turn taking) and producing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved during the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and this measure can be implemented in animated talking heads.

  15. Can we quantify local groundwater recharge using electrical resistivity tomography?

    NASA Astrophysics Data System (ADS)

    Noell, U.; Günther, T.; Ganz, C.; Lamparter, A.

    2012-04-01

    Electrical resistivity tomography (ERT) has become a common tool to observe flow processes within the saturated/unsaturated zones. While it is still doubtful whether the method can reliably yield quantitative results, its qualitative success has been shown in "numerous" examples. Quantifying the rate at which rainfall reaches the groundwater table remains a problematic venture, owing to a combination of several physical and mathematical obstacles that can lead to large errors. In 2007 an infiltration experiment was performed and observed using 3D array ERT. The site is located close to Hannover, Germany, on a well studied sandy soil. The groundwater table at this site was at a depth of about 1.3 m. The inversion results of the ERT data yield reliable-looking images of the infiltration process. Later experiments nearby, using tracer fluid and combined TDR and resistivity measurements in the subsurface, strongly supported the assumption that the resistivity images indeed depict the water distribution during infiltration reliably. The quantitative interpretation shows that two days after infiltration about 40% of the water has reached the groundwater. However, the question remains how reliable this quantitative interpretation actually is. The first obstacle: the inversion of the ERT data gives one possible resistivity distribution within the subsurface that can explain the data. It is not necessarily the right one, and the result depends on the error model, the inversion parameters and the method. For these measurements we assumed the same error (3%) for every quadrupole and applied the Gauss-Newton method with minimum-length constraints in order to reduce the smoothing to a minimum (very small lambda). Numerical experiments showed little smoothing using this approach, and smoothing must be suppressed if preferential flow is to be seen. The inversion showed artefacts of minor amplitude compared with other inversion parameter settings. The second obstacle: the petrophysical function that relates the resistivity changes to water content changes is doubtful. This relationship was constructed in two ways: first, by comparing in situ measured water contents with the ERT inversion results; second, by laboratory measurements of soil samples taken at different depths. The results of these two methods vary; moreover, heterogeneity in the subsurface may cause an even greater variability of this relationship. For the calculation an "average" function was applied. The third obstacle: the pore water conductivity may change during the infiltration due to exchange of pore water. This effect is neglected for this experiment on account of the very similar resistivity of the original pore water and the infiltrated water. This effect, however, is of great importance if saline water is used for infiltration experiments. It will also hamper the quantitative interpretation if dissolution and precipitation processes within the soil during the infiltration are expected. The fourth obstacle: the function relating resistivity and water content has a disadvantageous shape. Unfortunately, at high water contents only very little change in resistivity is observed when the water content increases or decreases; the function is steep at small and medium water contents but very flat at high water contents. We conclude from the combination of these four obstacles that quantitative interpretation of recharge with ERT is possible only in fortunate cases. ERT can enable us to actually measure recharge processes. However, if the conditions are not favourable, the interpretation of the ERT data will only permit a conclusion as to whether recharge is occurring; the quantitative value will remain doubtful unless additional measurements are taken that narrow the uncertainties. In particular, combined TDR/resistivity measurements with the same probe are helpful for obtaining information about the mixing of the pore water.
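
    The petrophysical function actually used in the study is not given in the record. Purely to illustrate the second and fourth obstacles, the sketch below converts bulk resistivities to volumetric water contents with Archie's law; the porosity, pore-water resistivity and Archie exponents are assumed values, not the calibrated ones.

      # Illustrative resistivity-to-water-content conversion via Archie's law
      # (assumed parameters, not those calibrated in the study).
      import numpy as np

      def water_content(rho_bulk, rho_water=20.0, phi=0.35, a=1.0, m=1.5, n=2.0):
          sw = (a * rho_water / (phi**m * rho_bulk))**(1.0 / n)   # water saturation
          return phi * np.clip(sw, 0.0, 1.0)                      # volumetric water content

      rho = np.array([400.0, 250.0, 150.0, 100.0])   # example inverted resistivities (ohm m)
      print(water_content(rho))
      # Note the flattening at high water contents: large saturation changes produce
      # only small resistivity changes, which is the fourth obstacle discussed above.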

  16. Mapping the Galactic Halo. VIII. Quantifying Substructure

    NASA Astrophysics Data System (ADS)

    Starkenburg, Else; Helmi, Amina; Morrison, Heather L.; Harding, Paul; van Woerden, Hugo; Mateo, Mario; Olszewski, Edward W.; Sivarani, Thirupathi; Norris, John E.; Freeman, Kenneth C.; Shectman, Stephen A.; Dohm-Palmer, R. C.; Frey, Lucy; Oravetz, Dan

    2009-06-01

    We have measured the amount of kinematic substructure in the Galactic halo using the final data set from the Spaghetti project, a pencil-beam high-latitude sky survey. Our sample contains 101 photometrically selected and spectroscopically confirmed giants with accurate distance, radial velocity, and metallicity information. We have developed a new clustering estimator: the "4distance" measure, which when applied to our data set leads to the identification of one group and seven pairs of clumped stars. The group, with six members, can confidently be matched to tidal debris of the Sagittarius dwarf galaxy. Two pairs match the properties of known Virgo structures. Using models of the disruption of Sagittarius in Galactic potentials with different degrees of dark halo flattening, we show that this favors a spherical or prolate halo shape, as demonstrated by Newberg et al. using the Sloan Digital Sky Survey data. One additional pair can be linked to older Sagittarius debris. We find that 20% of the stars in the Spaghetti data set are in substructures. From comparison with random data sets, we derive a very conservative lower limit of 10% to the amount of substructure in the halo. However, comparison to numerical simulations shows that our results are also consistent with a halo entirely built up from disrupted satellites, provided that the dominating features are relatively broad due to early merging or relatively heavy progenitor satellites.

  17. Quantifying deformation in crystalline magma using EBSD

    NASA Astrophysics Data System (ADS)

    Kendrick, J. E.; Lavallee, Y.; Mariani, E.; Cordonnier, B.; Heap, M. J.; Gaunt, H. E.; Hess, K.; Flaws, A.; Dingwell, D. B.

    2011-12-01

    It is a common phenomenon for volcanoes to rapidly switch from effusive to explosive eruption, aided by the brittle failure of magma at high temperature. Rheology plays an integral part in this transition and is highly dependent upon composition, porosity and crystallinity. This study aims to characterise the mechanical contribution of crystals in magma of known deformation history. High-temperature (950 °C) uniaxial deformation of andesitic crystal-bearing magma from Volcán de Colima results in an evolution of porosity, permeability and crystal size distribution. Cylindrical samples with different crystallinities and porosities but identical chemical composition were uniaxially deformed at constant applied stresses of 12 or 24 MPa and to total strains of 20 or 30%. The samples displayed a significant range of measured strain rates (10^-5 to 10^-3 s^-1) at a given temperature and applied stress. Acoustic emission monitoring during deformation recorded the timing of fracturing, showing that the ductile-brittle transition occurs at different stress/strain values for each sample. As-collected and experimentally deformed samples were analysed using electron backscatter diffraction, showing that crystallographic alignment is highly dependent on strain, while fracture nucleation and propagation are controlled primarily by stress (higher strain rate in our experiments). At low strain rates, groundmass controls ductile deformation and sporadic fractures occur in phenocrysts, while at high strain rates fractures dominantly initiate in phenocrysts and propagate through the groundmass. Crystal plasticity resulting from the experiments, evidenced by misorientation of the crystal lattice, indicates that, although models of suspension rheology exist, they do not encompass the full complexity of crystal-bearing magmas.

  18. Quantifiers More or Less Quantify On-Line: ERP Evidence for Partial Incremental Interpretation

    ERIC Educational Resources Information Center

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge ("Farmers grow crops/worms as their primary source of income"), Experiment 1 found larger N400s for atypical ("worms") than typical objects

  19. Quantifying Subsidence in the 1999-2000 Arctic Winter Vortex

    NASA Technical Reports Server (NTRS)

    Greenblatt, Jeffery B.; Jost, Hans-juerg; Loewenstein, Max; Podolske, James R.; Bui, T. Paul; Elkins, James W.; Moore, Fred L.; Ray, Eric A.; Sen, Bhaswar; Margitan, James J.; Hipskind, R. Stephen (Technical Monitor)

    2000-01-01

    Quantifying the subsidence of the polar winter stratospheric vortex is essential to the analysis of ozone depletion, as chemical destruction often occurs against a large, altitude-dependent background ozone concentration. Using N2O measurements made during SOLVE on a variety of platforms (ER-2, in-situ balloon and remote balloon), the 1999-2000 Arctic winter subsidence is determined from N2O-potential temperature correlations along several N2O isopleths. The subsidence rates are compared to those determined in other winters, and comparison is also made with results from the SLIMCAT stratospheric chemical transport model.

  20. Quantifying the motion of Kager's fat pad.

    PubMed

    Ghazzawi, Ahmad; Theobald, Peter; Pugh, Neil; Byrne, Carl; Nokes, Len

    2009-11-01

    Kager's fat pad is located in Kager's triangle between the Achilles tendon, the superior cortex of the calcaneus, and the flexor hallucis longus (FHL) muscle and tendon. Its biomechanical functions are not yet established, but recent studies suggest it performs important biomechanical roles, as it is lined by a synovial membrane and its retrocalcaneal protruding wedge can be observed moving into the bursal space during ankle plantarflexion. Such features have prompted hypotheses that the protruding wedge assists in the lubrication of the Achilles tendon subtendinous area, distributes stress at the Achilles enthesis, and removes debris from within the retrocalcaneal bursa. This study examined the influence of FHL activity and Achilles tendon load on the protruding wedge sliding distance, using both dynamic ultrasound imaging and surface electromyography. Intervolunteer results showed sliding distance was independent of FHL activity. This study has shown the protruding wedge to slide on average 60% further into the retrocalcaneal bursa when the Achilles tendon was loaded than when it was unloaded, consistently reaching the distal extremity. Sliding distance was dependent on a change in the Achilles tendon insertion angle. Our results support a number of hypothesized biomechanical functions of the protruding wedge including: lubrication of the subtendinous region; reduction of pressure change within the Achilles tendon enthesis organ; and removal of debris from within the retrocalcaneal bursa. PMID:19396861

  1. Quantifying the BICEP2-Planck tension over gravitational waves.

    PubMed

    Smith, Kendrick M; Dvorkin, Cora; Boyle, Latham; Turok, Neil; Halpern, Mark; Hinshaw, Gary; Gold, Ben

    2014-07-18

    The recent BICEP2 measurement of B-mode polarization in the cosmic microwave background (r = 0.2, +0.07/-0.05), a possible indication of primordial gravity waves, appears to be in tension with the upper limit from WMAP (r < 0.13 at 95% C.L.) and Planck (r < 0.11 at 95% C.L.). We carefully quantify the level of tension and show that it is very significant (around 0.1% unlikely) when the observed deficit of large-scale temperature power is taken into account. We show that measurements of TE and EE power spectra in the near future will discriminate between the hypotheses that this tension is either a statistical fluke or a sign of new physics. We also discuss extensions of the standard cosmological model that relieve the tension and some novel ways to constrain them. PMID:25083631

  2. Quantifying Different Tactile Sensations Evoked by Cutaneous Electrical Stimulation Using Electroencephalography Features.

    PubMed

    Zhang, Dingguo; Xu, Fei; Xu, Heng; Shull, Peter B; Zhu, Xiangyang

    2016-03-01

    Psychophysical tests and standardized questionnaires are often used to analyze tactile sensation based on subjective judgment in conventional studies. In contrast with the subjective evaluation, a novel method based on electroencephalography (EEG) is proposed to explore the possibility of quantifying tactile sensation in an objective way. The proposed experiments adopt cutaneous electrical stimulation to generate two kinds of sensations (vibration and pressure) with three grades (low/medium/strong) on eight subjects. Event-related potentials (ERPs) and event-related synchronization/desynchronization (ERS/ERD) are extracted from EEG, which are used as evaluation indexes to distinguish between vibration and pressure, and also to discriminate sensation grades. Results show that a five-phase P1-N1-P2-N2-P3 deflection is induced in the EEG. Using amplitudes of the later ERP components (N2 and P3), vibration and pressure sensations can be discriminated on both individual and grand-averaged ERPs. The grand-averaged ERPs can distinguish the three sensation grades, but there is no significant difference on individuals. In addition, ERS/ERD features of the mu rhythm (8-13 Hz) are adopted. Vibration and pressure sensations can be discriminated on grand-averaged ERS/ERD, but only some individuals show a significant difference. The grand-averaged results show that most sensation grades can be differentiated, and most pairwise comparisons show a significant difference on individuals. The work suggests that ERP- and ERS/ERD-based EEG features may have potential to quantify tactile sensations for medical diagnosis or engineering applications. PMID:26762865

  3. Quantifying Nitrogen Loss From Flooded Hawaiian Taro Fields

    NASA Astrophysics Data System (ADS)

    Deenik, J. L.; Penton, C. R.; Bruland, G. L.; Popp, B. N.; Engstrom, P.; Mueller, J. A.; Tiedje, J.

    2010-12-01

    In 2004 a field fertilization experiment showed that approximately 80% of the fertilizer nitrogen (N) added to flooded Hawaiian taro (Colocasia esculenta) fields could not be accounted for using classic N balance calculations. To quantify N loss through denitrification and anaerobic ammonium oxidation (anammox) pathways in these taro systems we utilized a slurry-based isotope pairing technique (IPT). Measured nitrification rates and porewater N profiles were also used to model ammonium and nitrate fluxes through the top 10 cm of soil. Quantitative PCR of nitrogen cycling functional genes was used to correlate porewater N dynamics with potential microbial activity. Rates of denitrification calculated using porewater profiles were compared to those obtained using the slurry method. Potential denitrification rates of surficial sediments obtained with the slurry method were found to drastically overestimate the calculated in-situ rates. The largest discrepancies were present in fields greater than one month after initial fertilization, reflecting a microbial community poised to denitrify the initial N pulse. Potential surficial nitrification rates varied between 1.3% of the slurry-measured denitrification potential in a heavily-fertilized site to 100% in an unfertilized site. Compared to the use of urea, fish bone meal fertilizer use resulted in decreased N loss through denitrification in the surface sediment, according to both porewater modeling and IPT measurements. In addition, sub-surface porewater profiles point to root-mediated coupled nitrification/denitrification as a potential N loss pathway that is not captured in surface-based incubations. Profile-based surface plus subsurface coupled nitrification/denitrification estimates were between 1.1 and 12.7 times denitrification estimates from the surface only. These results suggest that the use of a ‘classic’ isotope pairing technique that employs 15NO3- in fertilized agricultural systems can lead to a drastic overestimation of in-situ denitrification rates and that root-associated subsurface coupled nitrification/denitrification may be a major N loss pathway in these flooded agricultural systems.

  4. Quantifying Biofilm in Porous Media Using Rock Physics Models

    NASA Astrophysics Data System (ADS)

    Alhadhrami, F. M.; Jaiswal, P.; Atekwana, E. A.

    2012-12-01

    Biofilm formation and growth in porous rocks can change material properties such as porosity and permeability, which in turn will impact fluid flow. Finding a non-intrusive method to quantify biofilms and their byproducts in rocks is key to understanding and modeling bioclogging in porous media. Previous geophysical investigations have documented that seismic techniques are sensitive to biofilm growth. These studies pointed to the fact that microbial growth and biofilm formation induce heterogeneity in the seismic properties. Currently there are no rock physics models to explain these observations and to provide quantitative interpretation of the seismic data. Our objective is to develop a new class of rock physics model that incorporates microbial processes and their effect on seismic properties. Using the assumption that biofilms can grow within pore spaces or as a layer coating the mineral grains, P-wave (Vp) and S-wave (Vs) velocity models were constructed using travel-time and waveform tomography techniques. We used generic rock physics schematics to represent our rock system numerically. We simulated the arrival times as well as waveforms by treating biofilms either as fluid (filling pore spaces) or as part of the matrix (coating sand grains). The preliminary results showed that there is a 1% change in Vp and a 3% change in Vs when biofilms are represented as discrete structures in pore spaces. On the other hand, a 30% change in Vp and a 100% change in Vs was observed when biofilm was represented as part of the matrix coating sand grains. Therefore, Vp and Vs changes are more rapid when biofilm grows as a grain-coating phase. The significant change in Vs associated with biofilms suggests that shear velocity can be used as a diagnostic tool for imaging zones of bioclogging in the subsurface. The results obtained from this study have significant implications for the study of the rheological properties of biofilms in geological media. Other applications include assessing biofilms used as barriers in CO2 sequestration studies as well as assisting in evaluating microbial enhanced oil recovery (MEOR) methods, where microorganisms are used to plug highly porous rocks for efficient oil production.
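
    The record does not specify the rock physics formulation used. The sketch below merely illustrates, with the simple Wyllie time-average equation and assumed velocities, how treating biofilm as the pore fluid versus as part of the grain-coating matrix leads to different effective P-wave velocities.

      # Illustrative contrast between "biofilm as pore fluid" and "biofilm as part
      # of the matrix" using the Wyllie time-average equation (assumed values).
      def wyllie_vp(phi, v_fluid, v_matrix):
          return 1.0 / (phi / v_fluid + (1.0 - phi) / v_matrix)   # effective Vp (m/s)

      phi, v_water, v_quartz, v_biofilm = 0.35, 1500.0, 5500.0, 1600.0

      clean             = wyllie_vp(phi, v_water, v_quartz)
      biofilm_in_pores  = wyllie_vp(phi, v_biofilm, v_quartz)   # biofilm treated as the pore fluid
      frac_coating      = 0.10                                  # assumed biofilm fraction of the solid frame
      v_matrix_coated   = 1.0 / (frac_coating / v_biofilm + (1 - frac_coating) / v_quartz)
      biofilm_as_matrix = wyllie_vp(phi, v_water, v_matrix_coated)
      print(clean, biofilm_in_pores, biofilm_as_matrix)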

  5. Quantifying alosine prey in the diets of marine piscivores in the Gulf of Maine.

    PubMed

    McDermott, S P; Bransome, N C; Sutton, S E; Smith, B E; Link, J S; Miller, T J

    2015-06-01

    The objectives of this work were to quantify the spatial and temporal distribution of the occurrence of anadromous fishes (alewife Alosa pseudoharengus, blueback herring Alosa aestivalis and American shad Alosa sapidissima) in the stomachs of demersal fishes in coastal waters of the north-west Atlantic Ocean. Results show that anadromous fishes were detectable and quantifiable in the diets of common marine piscivores for every season sampled. Even though anadromous fishes were not the most abundant prey, they accounted for c. 5-10% of the diet by mass for several marine piscivores. Statistical comparisons of these data with fish diet data from a broad-scale survey of the north-west Atlantic Ocean indicate that the frequency of this trophic interaction was significantly higher within spatially and temporally focused sampling areas of this study than in the broad-scale survey. Odds ratios of anadromous predation were as much as 460 times higher in the targeted sampling as compared with the broad-scale sampling. Analyses indicate that anadromous prey consumption was more concentrated in the near-coastal waters compared with consumption of a similar, but more widely distributed species, the Atlantic herring Clupea harengus. In the context of ecosystem-based fisheries management, the results suggest that even low-frequency feeding events may be locally important, and should be incorporated into ecosystem models. PMID:25943427

  6. Quantifying the Robustness of the English Sibilant Fricative Contrast in Children

    PubMed Central

    Reidy, Patrick F.; Beckman, Mary E.; Edwards, Jan

    2015-01-01

    Purpose Four measures of children's developing robustness of phonological contrast were compared to see how they correlated with age, vocabulary size, and adult listeners' correctness ratings. Method Word-initial sibilant fricative productions from eighty-one 2- to 5-year-old children and 20 adults were phonetically transcribed and acoustically analyzed. Four measures of robustness of contrast were calculated for each speaker on the basis of the centroid frequency measured from each fricative token. Productions that were transcribed as correct from different children were then used as stimuli in a perception experiment in which adult listeners rated the goodness of each production. Results Results showed that the degree of category overlap, quantified as the percentage of a child's productions whose category could be correctly predicted from the output of a mixed-effects logistic regression model, was the measure that correlated best with listeners' goodness judgments. Conclusions Even when children's productions have been transcribed as correct, adult listeners are sensitive to within-category variation quantified by the child's degree of category overlap. Further research is needed to explore the relationship between the age of a child and adults' sensitivity to different types of within-category variation in children's speech. PMID:25766040
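
    A simplified sketch of the best-performing measure (degree of category overlap) is given below. The study fitted a mixed-effects logistic regression; here an ordinary logistic regression on synthetic centroid frequencies is used purely to show how the percentage of correctly predicted categories is obtained.

      # Simplified "category overlap" illustration: share of /s/ and /sh/ tokens whose
      # category is correctly predicted from centroid frequency (synthetic data; the
      # study used a mixed-effects logistic regression).
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      centroid_s  = rng.normal(7500, 900, 40)    # synthetic /s/ centroid frequencies (Hz)
      centroid_sh = rng.normal(4500, 900, 40)    # synthetic /sh/ centroid frequencies (Hz)

      X = np.concatenate([centroid_s, centroid_sh]).reshape(-1, 1)
      y = np.array([1] * 40 + [0] * 40)          # 1 = /s/, 0 = /sh/

      model = LogisticRegression().fit(X, y)
      print(f"category separability: {model.score(X, y) * 100:.1f}% correctly predicted")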

  7. Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?

    PubMed Central

    Delhey, Kaspar; Peters, Anne

    2008-01-01

    Background Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and, since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits are more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales, precluding the use of coefficients of variation. Methodology/Principal Findings Based on physiological models of avian colour vision we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. Increased variability of known sexually-selected/quality-indicating colours in the studied species provides support for the predictions borne from sexual selection theory, but the lack of increased overall variability in males or dimorphic colours in general indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766

  8. Quantifying and Mapping Global Data Poverty.

    PubMed

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinfomatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  9. Quantifying and Mapping Global Data Poverty

    PubMed Central

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this ‘proof of concept’ study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinfomatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  10. Quantifying Acute Myocardial Injury Using Ratiometric Fluorometry

    PubMed Central

    Ranji, Mahsa; Matsubara, Muneaki; Leshnower, Bradley G.; Hinmon, Robin H.; Jaggard, Dwight L.; Chance, Britton; Gorman, Robert C.

    2011-01-01

    Early reperfusion is the best therapy for myocardial infarction (MI). Effectiveness, however, varies significantly between patients and has implications for long-term prognosis and treatment. A technique to assess the extent of myocardial salvage after reperfusion therapy would allow high-risk patients to be identified in the early post-MI period. Mitochondrial dysfunction is associated with cell death following myocardial reperfusion and can be quantified by fluorometry. Therefore, we hypothesized that variations in the fluorescence of mitochondrial nicotinamide adenine dinucleotide (NADH) and flavoprotein (FP) can be used acutely to predict the degree of myocardial injury. Thirteen rabbits had coronary occlusion for 30 min followed by 3 h of reperfusion. To produce a spectrum of infarct sizes, six animals were infused with cyclosporine A prior to ischemia. Using a specially designed fluorometric probe, NADH and FP fluorescence were measured in the ischemic area. Changes in NADH and FP fluorescence, as early as 15 min after reperfusion, correlated with postmortem assessment of infarct size (r = 0.695, p < 0.01). This correlation strengthened with time (r = 0.827, p < 0.001 after 180 min). Clinical application of catheter-based myocardial fluorometry may provide a minimally invasive technique for assessing the early response to reperfusion therapy. PMID:19272908

  11. Quantifying the transmission potential of pandemic influenza

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Nishiura, Hiroshi

    2008-03-01

    This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data are readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. the epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation methods, which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering a firm understanding of the definitions of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information for appropriately estimating the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions to suggest potential future methodological improvements.
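
    One widely used relation of this kind (due to Wallinga and Lipsitch, and not necessarily the exact formulation reviewed in the article) gives R0 = 1/M(-r), where r is the intrinsic growth rate and M is the moment-generating function of the generation-time distribution; for a gamma-distributed generation time this has the closed form sketched below, with illustrative numbers.

      # R0 from the intrinsic growth rate r and a gamma-distributed generation time,
      # via R0 = 1/M(-r); parameter values are illustrative only.
      def r0_from_growth_rate(r, mean_gt, sd_gt):
          shape = (mean_gt / sd_gt) ** 2       # gamma shape k
          scale = sd_gt ** 2 / mean_gt         # gamma scale theta
          return (1.0 + r * scale) ** shape    # = 1 / M(-r) for a gamma distribution

      print(r0_from_growth_rate(r=0.2, mean_gt=3.0, sd_gt=1.5))   # r per day, generation time in days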

  12. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
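
    The thresholding step lends itself to a very short sketch; the synthetic image and the threshold value below are placeholders, not the camera data or calibration from the study.

      # Residue cover as the fraction of pixels whose fluorescence exceeds a threshold
      # (synthetic 8-bit image; threshold chosen arbitrarily).
      import numpy as np

      rng = np.random.default_rng(1)
      soil    = rng.integers(0, 40, size=(240, 320))      # dark soil background
      residue = rng.integers(90, 256, size=(240, 320))    # brightly fluorescing residue
      mask    = rng.random((240, 320)) < 0.3              # "true" cover of 30%
      image   = np.where(mask, residue, soil).astype(np.uint8)

      threshold = 60
      cover = np.mean(image > threshold)
      print(f"estimated residue cover: {cover * 100:.1f}%")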

  13. Polymer Microlenses for Quantifying Cell Sheet Mechanics

    NASA Astrophysics Data System (ADS)

    Miquelard-Garnier, Guillaume; Zimberlin, Jessica; Wadsworth, Patricia; Crosby, Alfred

    2009-03-01

    Mechanical interactions between individual cells and their substrate have been studied extensively over the past decade; however, our understanding of how these interactions change as cells interact with neighboring cells in the development of a cell sheet, or early stage tissue, is less developed. We present a recently developed experimental technique for quantifying the mechanics of confluent cell sheets (Zimberlin J.A., et al., Cell Motility and the Cytoskeleton, 65, 9, 762). Living cells are cultured on a thin film of polystyrene [PS], which is attached to a patterned substrate of crosslinked poly(dimethyl siloxane) microwells. As the cell sheet grows, cells apply sufficient force to buckle the PS film over individual microwells to form a microlens array. The curvature for each microlens is measured by confocal microscopy and can be related to the strain and stress applied by the cell sheet. We demonstrate that this technique can be used to decouple mechanical contributions of intercellular junctions and focal adhesions while also providing insight into the important materials properties and length scales that govern cell sheet responses.

  14. Quantifying traction stresses in adherent cells.

    PubMed

    Kraning-Rush, Casey M; Carey, Shawn P; Califano, Joseph P; Reinhart-King, Cynthia A

    2012-01-01

    Contractile force generation plays a critical role in cell adhesion, migration, and extracellular matrix reorganization in both 2D and 3D environments. Characterization of cellular forces has led to a greater understanding of cell migration, cellular mechanosensing, tissue formation, and disease progression. Methods to characterize cellular traction stresses now date back over 30 years, and they have matured from qualitative comparisons of cell-mediated substrate movements to high-resolution, highly quantitative measures of cellular force. Here, we will provide an overview of common methods used to measure forces in both 2D and 3D microenvironments. Specific focus will be placed on traction force microscopy, which measures the force exerted by cells on 2D planar substrates, and the use of confocal reflectance microscopy, which can be used to quantify collagen fibril compaction as a metric for 3D traction forces. In addition to providing experimental methods to analyze cellular forces, we discuss the application of these techniques to a large range of biomedical problems and some of the significant challenges that still remain in this field. PMID:22482948

  15. How to quantify energy landscapes of solids.

    PubMed

    Oganov, Artem R; Valle, Mario

    2009-03-14

    We explore whether the topology of energy landscapes in chemical systems obeys any rules and what these rules are. To answer this and related questions we use several tools: (i) Reduced energy surface and its density of states, (ii) descriptor of structure called fingerprint function, which can be represented as a one-dimensional function or a vector in abstract multidimensional space, (iii) definition of a "distance" between two structures enabling quantification of energy landscapes, (iv) definition of a degree of order of a structure, and (v) definitions of the quasi-entropy quantifying structural diversity. Our approach can be used for rationalizing large databases of crystal structures and for tuning computational algorithms for structure prediction. It enables quantitative and intuitive representations of energy landscapes and reappraisal of some of the traditional chemical notions and rules. Our analysis confirms the expectations that low-energy minima are clustered in compact regions of configuration space ("funnels") and that chemical systems tend to have very few funnels, sometimes only one. This analysis can be applied to the physical properties of solids, opening new ways of discovering structure-property relations. We quantitatively demonstrate that crystals tend to adopt one of the few simplest structures consistent with their chemistry, providing a thermodynamic justification of Pauling's fifth rule. PMID:19292538

  16. Automated Counting of Particles To Quantify Cleanliness

    NASA Technical Reports Server (NTRS)

    Rhode, James

    2005-01-01

    A machine vision system, similar to systems used in microbiological laboratories to count cultured microbes, has been proposed for quantifying the cleanliness of nominally precisely cleaned hardware by counting residual contaminant particles. The system would include a microscope equipped with an electronic camera and circuitry to digitize the camera output, a personal computer programmed with machine-vision and interface software, and digital storage media. A filter pad, through which had been aspirated solvent from rinsing the hardware in question, would be placed on the microscope stage. A high-resolution image of the filter pad would be recorded. The computer would analyze the image and present a histogram of sizes of particles on the filter. On the basis of the histogram and a measure of the desired level of cleanliness, the hardware would be accepted or rejected. If the hardware were accepted, the image would be saved, along with other information, as a quality record. If the hardware were rejected, the histogram and ancillary information would be recorded for analysis of trends. The software would perceive particles that are too large or too numerous to meet a specified particle-distribution profile. Anomalous particles or fibrous material would be flagged for inspection.
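
    A minimal sketch of the proposed counting step is shown below, using a global threshold, connected-component labeling and a particle-size histogram on a synthetic image; the acceptance criterion and all numbers are assumptions, not a real cleanliness specification.

      # Automated particle counting on a synthetic filter-pad image: threshold,
      # label connected components, histogram particle sizes, accept/reject.
      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(2)
      image = rng.normal(30, 5, size=(512, 512))          # synthetic background
      for _ in range(40):                                 # paint synthetic particles
          r, c = rng.integers(10, 502, size=2)
          image[r - 3:r + 3, c - 3:c + 3] += 100

      binary = image > 80
      labels, n_particles = ndimage.label(binary)
      sizes = ndimage.sum(binary, labels, index=list(range(1, n_particles + 1)))  # areas in pixels

      hist, _ = np.histogram(sizes, bins=[0, 10, 50, 200, np.inf])
      accepted = hist[-1] == 0 and n_particles < 100      # assumed cleanliness criterion
      print(n_particles, hist, "ACCEPT" if accepted else "REJECT")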

  17. Quantifying Climate Risks for Urban Environments

    NASA Astrophysics Data System (ADS)

    Hayhoe, K.; Stoner, A. K.; Dickson, L.

    2013-12-01

    High-density urban areas are both uniquely vulnerable and uniquely able to adapt to climate change. Enabling this potential requires identifying the vulnerabilities, however, and these depend strongly on location: the challenges climate change poses for a southern coastal city such as Miami, for example, have little in common with those facing a northern inland city such as Chicago. By combining local knowledge with climate science, risk assessment, engineering analysis, and adaptation planning, it is possible to develop relevant climate information that feeds directly into vulnerability assessment and long-term planning. Key steps include: developing climate projections tagged to long-term weather stations within the city itself that reflect local characteristics; mining local knowledge to identify existing vulnerabilities to, and costs of, weather and climate extremes; understanding how future projections can be integrated into the planning process; and identifying ways in which the city may adapt. Using examples from our work in the cities of Boston, Chicago, and Mobile we illustrate the practical application of this approach to quantify the impacts of climate change on these cities and identify robust adaptation options as diverse as reducing the urban heat island effect, protecting essential infrastructure, changing design standards and building codes, developing robust emergency management plans, and rebuilding storm sewer systems.

  18. Choosing appropriate techniques for quantifying groundwater recharge

    USGS Publications Warehouse

    Scanlon, B.R.; Healy, R.W.; Cook, P.G.

    2002-01-01

    Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of the recharge estimates. Typical study goals include water-resource evaluation, which requires information on recharge over large spatial scales and on decadal time scales; and evaluation of aquifer vulnerability to contamination, which requires detailed information on spatial variability and preferential flow. The range of recharge rates that can be estimated using different approaches should be matched to expected recharge rates at a site. The reliability of recharge estimates using different techniques is variable. Techniques based on surface-water and unsaturated-zone data provide estimates of potential recharge, whereas those based on groundwater data generally provide estimates of actual recharge. Uncertainties in each approach to estimating recharge underscore the need for application of multiple techniques to increase reliability of recharge estimates.

  19. Data Used in Quantified Reliability Models

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux to developing quantitative risk and reliability models; without the data there is no quantification. The means to find and identify reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have available data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data were derived, and interpreting the information and details associated with the data, is as important as the data itself.
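
    As a toy illustration of why the data source matters, the sketch below quantifies the same small fault tree with basic-event probabilities drawn from two hypothetical datasets; the gate structure and all numbers are made up.

      # Toy fault-tree quantification: the same tree evaluated with failure
      # probabilities from two hypothetical data sources (all values made up).
      def and_gate(*p):                 # independent basic events
          out = 1.0
          for x in p:
              out *= x
          return out

      def or_gate(*p):
          out = 1.0
          for x in p:
              out *= (1.0 - x)
          return 1.0 - out

      datasets = {
          "internal report":   {"pump": 1e-3, "valve": 5e-4, "sensor": 2e-4},
          "external handbook": {"pump": 5e-3, "valve": 1e-3, "sensor": 1e-3},
      }
      for name, p in datasets.items():
          top = or_gate(and_gate(p["pump"], p["valve"]), p["sensor"])
          print(f"{name}: top-event probability = {top:.2e}")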

  20. Optical metabolic imaging quantifies heterogeneous cell populations

    PubMed Central

    Walsh, Alex J.; Skala, Melissa C.

    2015-01-01

    The genetic and phenotypic heterogeneity of cancers can contribute to tumor aggressiveness, invasion, and resistance to therapy. Fluorescence imaging occupies a unique niche to investigate tumor heterogeneity due to its high resolution and molecular specificity. Here, heterogeneous populations are identified and quantified by combined optical metabolic imaging and subpopulation analysis (OMI-SPA). OMI probes the fluorescence intensities and lifetimes of metabolic enzymes in cells to provide images of cellular metabolism, and SPA models cell populations as mixed Gaussian distributions to identify cell subpopulations. In this study, OMI-SPA is characterized by simulation experiments and validated with cell experiments. To generate heterogeneous populations, two breast cancer cell lines, SKBr3 and MDA-MB-231, were co-cultured at varying proportions. OMI-SPA correctly identifies two populations with minimal mean and proportion error using the optical redox ratio (fluorescence intensity of NAD(P)H divided by the intensity of FAD), mean NAD(P)H fluorescence lifetime, and OMI index. Simulation experiments characterized the relationships between sample size, data standard deviation, and subpopulation mean separation distance required for OMI-SPA to identify subpopulations. PMID:25780745
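
    The subpopulation-analysis step can be sketched as fitting a two-component Gaussian mixture to single-cell redox ratios; the values below are synthetic stand-ins for OMI measurements, and scikit-learn's GaussianMixture stands in for whatever fitting routine the authors used.

      # SPA sketch: fit a mixture of Gaussians to single-cell optical redox ratios
      # and report subpopulation means and proportions (synthetic data).
      import numpy as np
      from sklearn.mixture import GaussianMixture

      rng = np.random.default_rng(3)
      line_a = rng.normal(1.2, 0.15, 300)    # synthetic redox ratios, cell line A
      line_b = rng.normal(0.7, 0.15, 700)    # synthetic redox ratios, cell line B
      redox  = np.concatenate([line_a, line_b]).reshape(-1, 1)

      gmm = GaussianMixture(n_components=2, random_state=0).fit(redox)
      for mean, weight in zip(gmm.means_.ravel(), gmm.weights_):
          print(f"subpopulation mean = {mean:.2f}, proportion = {weight:.2f}")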

  1. The Physics of Equestrian Show Jumping

    NASA Astrophysics Data System (ADS)

    Stinner, Art

    2014-04-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this one, but when I searched the Internet for information and looked at YouTube presentations, I could only find simplistic references to Newton's laws and the conservation of mechanical energy principle. Nowhere could I find detailed calculations. On the other hand, there were several biomechanical articles with empirical reports of the results of kinetic and dynamic investigations of show jumping using high-speed digital cameras and force plates. They summarize their results in tables that give information about the motion of a horse jumping over high fences (1.40 m) and the magnitudes of the forces encountered when landing. However, they do not describe the physics of these results.
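
    One of the simple calculations alluded to above can be sketched with projectile kinematics: the vertical takeoff speed and flight time needed to raise the horse's centre of mass enough to clear a 1.40 m fence. The 0.9 m rise of the centre of mass is an assumption (the centre of mass starts well above the ground), so the numbers are indicative only.

      # Back-of-the-envelope show-jumping kinematics (assumed 0.9 m rise of the
      # centre of mass to clear a 1.40 m fence).
      import math

      g = 9.81                                   # m/s^2
      h = 0.9                                    # assumed rise of the centre of mass (m)
      v_vertical = math.sqrt(2 * g * h)          # vertical takeoff speed
      t_flight   = 2 * v_vertical / g            # time aloft for a symmetric trajectory
      print(f"vertical takeoff speed ~ {v_vertical:.1f} m/s, time of flight ~ {t_flight:.2f} s")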

  2. SANTA: Quantifying the Functional Content of Molecular Networks

    PubMed Central

    Cornish, Alex J.; Markowetz, Florian

    2014-01-01

    Linking networks of molecular interactions to cellular functions and phenotypes is a key goal in systems biology. Here, we adapt concepts of spatial statistics to assess the functional content of molecular networks. Based on the guilt-by-association principle, our approach (called SANTA) quantifies the strength of association between a gene set and a network, and functionally annotates molecular networks like other enrichment methods annotate lists of genes. As a general association measure, SANTA can (i) functionally annotate experimentally derived networks using a collection of curated gene sets and (ii) annotate experimentally derived gene sets using a collection of curated networks, as well as (iii) prioritize genes for follow-up analyses. We exemplify the efficacy of SANTA in several case studies using the S. cerevisiae genetic interaction network and genome-wide RNAi screens in cancer cell lines. Our theory, simulations, and applications show that SANTA provides a principled statistical way to quantify the association between molecular networks and cellular functions and phenotypes. SANTA is available from http://bioconductor.org/packages/release/bioc/html/SANTA.html. PMID:25210953

  3. Model for quantifying photoelastic fringe degradation by imperfect retroreflective backings.

    PubMed

    Woolard, D; Hinders, M

    2000-05-01

    In any automated algorithm for interpreting photoelastic fringe patterns it is necessary to understand and quantify sources of error in the measurement system. We have been considering how the various components of the coating affect the photoelastic measurement, because this source of error has received fairly little attention in the literature. Because the reflective backing is not a perfect retroreflector, it does not preserve the polarization of light and thereby introduces noise into the measurement that depends on the angle of obliqueness and roughness of the reflective surface. This is of particular concern in resolving the stress tensor through the combination of thermoelasticity and photoelasticity where the components are sensitive to errors in the principal angle and difference of the principal stresses. We have developed a physical model that accounts for this and other sources of measurement error to be introduced in a systematic way so that the individual effects on the fringe patterns can be quantified. Simulations show altered photoelastic fringes when backing roughness and oblique incident angles are incorporated into the model. PMID:18345104

  4. Quantifying individual performance in Cricket A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in international matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally the batsmen and bowlers are rated on their batting or bowling average respectively. However, in a game like Cricket the manner in which one scores the runs or claims a wicket is always important. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player's average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the quality of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players for their contribution to team performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally we generate a network of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket: Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player's performance, which in turn paves the way for a balanced team selection for international matches.
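
    As an illustration of the network approach (not necessarily the exact algorithm used in the paper), the sketch below builds a directed, weighted batsman-to-bowler dismissal network and rates bowlers with PageRank; the dismissal counts and anonymised batsmen are made up.

      # Illustrative SNA rating of bowlers: edges batsman -> bowler weighted by the
      # number of dismissals (counts are made up); PageRank as an example centrality.
      import networkx as nx

      dismissals = [
          ("batsman_A", "Muralitharan", 8),
          ("batsman_A", "Warne", 5),
          ("batsman_B", "Muralitharan", 6),
          ("batsman_C", "Muralitharan", 4),
          ("batsman_C", "Warne", 2),
      ]
      G = nx.DiGraph()
      for batsman, bowler, n in dismissals:
          G.add_edge(batsman, bowler, weight=n)

      scores = nx.pagerank(G, weight="weight")
      for bowler in ("Muralitharan", "Warne"):
          print(bowler, round(scores[bowler], 3))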

  5. Beyond immunity: quantifying the effects of host anti-parasite behavior on parasite transmission.

    PubMed

    Daly, Elizabeth W; Johnson, Pieter T J

    2011-04-01

    A host's first line of defense in response to the threat of parasitic infection is behavior, yet the efficacy of anti-parasite behaviors in reducing infection are rarely quantified relative to immunological defense mechanisms. Larval amphibians developing in aquatic habitats are at risk of infection from a diverse assemblage of pathogens, some of which cause substantial morbidity and mortality, suggesting that behavioral avoidance and resistance could be significant defensive strategies. To quantify the importance of anti-parasite behaviors in reducing infection, we exposed larval Pacific chorus frogs (Pseudacris regilla) to pathogenic trematodes (Ribeiroia and Echinostoma) in one of two experimental conditions: behaviorally active (unmanipulated) or behaviorally impaired (anesthetized). By quantifying both the number of successful and unsuccessful parasites, we show that host behavior reduces infection prevalence and intensity for both parasites. Anesthetized hosts were 20-39% more likely to become infected and, when infected, supported 2.8-fold more parasitic cysts. Echinostoma had a 60% lower infection success relative to the more deadly Ribeiroia and was also more vulnerable to behaviorally mediated reductions in transmission. For Ribeiroia, increases in host mass enhanced infection success, consistent with epidemiological theory, but this relationship was eroded among active hosts. Our results underscore the importance of host behavior in mitigating disease risk and suggest that, in some systems, anti-parasite behaviors can be as or more effective than immune-mediated defenses in reducing infection. Considering the severe pathologies induced by these and other pathogens of amphibians, we emphasize the value of a broader understanding of anti-parasite behaviors and how co-occurring stressors affect them. PMID:20857146

  6. Quantifying magma mixing with the Shannon entropy: Application to simulations and experiments

    NASA Astrophysics Data System (ADS)

    Perugini, D.; De Campos, C. P.; Petrelli, M.; Morgavi, D.; Vetere, F. P.; Dingwell, D. B.

    2015-11-01

    We introduce a new quantity to petrology, the Shannon entropy, as a tool for quantifying mixing as well as the rate of production of hybrid compositions in the mixing system. The Shannon entropy approach is applied to time-series numerical simulations and high-temperature experiments performed with natural melts. We note that in both cases the Shannon entropy increases linearly during the initial stages of mixing and then saturates toward constant values. Furthermore, chemical elements with different mobilities display different rates of increase of the Shannon entropy. This indicates that the hybrid composition for the different elements is attained at different times, generating a wide range of spatio-compositional domains which further increase the apparent complexity of the mixing process. Results from the application of the Shannon entropy analysis are compared with the concept of Relaxation of Concentration Variance (RCV), a measure recently introduced in petrology to quantify chemical exchanges during magma mixing. We derive a linear expression relating the change of concentration variance during mixing and the Shannon entropy. We show that the combined use of Shannon entropy and RCV provides the most complete information about the space and time complexity of magma mixing. As a consequence, detailed information about this fundamental petrogenetic and volcanic process can be gathered. In particular, the Shannon entropy can be used as a complement to the RCV method to quantify the mobility of chemical elements in magma mixing systems, to obtain information about the rate of production of compositional heterogeneities, and to derive empirical relationships linking the rate of chemical exchanges between interacting magmas and mixing time.
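
    As a rough illustration of how the Shannon entropy tracks mixing, the sketch below computes the entropy of the spatial distribution of one end-member over the cells of a toy 1-D domain. The cell values and the cell count are invented and the histogram-over-cells formulation is a simplifying assumption, not the authors' implementation; the point is that the entropy rises as the end-member is dispersed and saturates at log2 of the number of cells once the domain is homogenised.

    ```python
    import numpy as np

    def mixing_entropy(mass_of_A_per_cell):
        """Shannon entropy (bits) of the spatial distribution of end-member A over
        the cells of the mixing domain; log2(n_cells) means A is perfectly spread."""
        p = np.asarray(mass_of_A_per_cell, dtype=float)
        p = p / p.sum()
        p = p[p > 0]                       # 0*log(0) contributes nothing
        return -np.sum(p * np.log2(p))

    # Toy 1-D domain of 8 cells: end-member A starts concentrated in two cells and
    # is progressively dispersed (purely illustrative numbers).
    snapshots = [
        [4, 4, 0, 0, 0, 0, 0, 0],          # unmixed
        [3, 3, 1, 1, 0, 0, 0, 0],
        [2, 2, 1, 1, 1, 1, 0, 0],
        [1, 1, 1, 1, 1, 1, 1, 1],          # fully homogenised
    ]
    for t, snap in enumerate(snapshots):
        print(t, round(mixing_entropy(snap), 3))   # rises towards log2(8) = 3
    ```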

  7. Quantifying uncertainty in LCA-modelling of waste management systems

    SciTech Connect

    Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H.

    2012-12-15

    Highlights: Uncertainty in LCA-modelling of waste management is significant. Model, scenario and parameter uncertainties contribute. Sequential procedure for quantifying uncertainty is proposed. Application of procedure is illustrated by a case-study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
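
    A minimal sketch of Steps 2 and 3 (uncertainty propagation and contribution analysis) is given below, assuming a deliberately simplified waste-LCA model with three invented parameters; the distributions, emission factors and the use of squared rank correlations as a quick contribution measure are illustrative assumptions, not the paper's framework.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(1)
    N = 10_000

    # Hypothetical waste-LCA parameters (means and standard deviations are invented):
    # electricity recovered per tonne, emission factor of the avoided grid mix,
    # diesel used for collection.
    elec_recovered = rng.normal(500.0, 50.0, N)    # kWh per tonne of waste
    grid_factor    = rng.normal(0.40, 0.08, N)     # kg CO2-eq per kWh avoided
    diesel_use     = rng.normal(4.0, 1.0, N)       # L per tonne collected
    diesel_factor  = 2.7                           # kg CO2-eq per L (held fixed here)

    # Step 2: propagate the input uncertainties through a toy LCA model
    # (net impact = collection burden - avoided burden).
    impact = diesel_use * diesel_factor - elec_recovered * grid_factor
    print(f"mean = {impact.mean():.1f} kg CO2-eq/t, sd = {impact.std():.1f}")

    # Step 3: rough contribution analysis via squared rank correlations.
    for name, x in [("elec_recovered", elec_recovered),
                    ("grid_factor", grid_factor),
                    ("diesel_use", diesel_use)]:
        rho, _ = spearmanr(x, impact)
        print(f"{name}: ~{100 * rho**2:.0f}% of output variance")
    ```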

  8. The processing of polar quantifiers, and numerosity perception.

    PubMed

    Deschamps, Isabelle; Agmon, Galit; Loewenstein, Yonatan; Grodzinsky, Yosef

    2015-10-01

    We investigated the course of language processing in the context of a verification task that required numerical estimation and comparison. Participants listened to sentences with complex quantifiers that contrasted in Polarity, a logical property (e.g., more-than-half, less-than-half), and then performed speeded verification on visual scenarios that displayed a proportion between 2 discrete quantities. We varied systematically not only the sentences, but also the visual materials, in order to study their effect on the verification process. Next, we used the same visual scenarios with analogous non-verbal probes that featured arithmetical inequality symbols (<, >). This manipulation enabled us to measure not only Polarity effects, but also, to compare the effect of different probe types (linguistic, non-linguistic) on processing. Like many previous studies, our results demonstrate that perceptual difficulty affects error rate and reaction time in keeping with Weber's Law. Interestingly, these performance parameters are also affected by the Polarity of the quantifiers used, despite the fact that sentences had the exact same meaning, sentence structure, number of words, syllables, and temporal structure. Moreover, an analogous contrast between the non-linguistic probes (<, >) had no effect on performance. Finally, we observed no interaction between performance parameters governed by Weber's Law and those affected by Polarity. We consider 4 possible accounts of the results (syntactic, semantic, pragmatic, frequency-based), and discuss their relative merit. PMID:26142825

  9. Quantifying VOC emissions for the strategic petroleum reserve.

    SciTech Connect

    Knowlton, Robert G.; Lord, David L.

    2013-06-01

    A very important aspect of the Department of Energy's (DOE's) Strategic Petroleum Reserve (SPR) program is regulatory compliance. One of the regulatory compliance issues deals with limiting the amount of volatile organic compounds (VOCs) that are emitted into the atmosphere from brine wastes when they are discharged to brine holding ponds. The US Environmental Protection Agency (USEPA) has set limits on the amount of VOCs that can be discharged to the atmosphere. Several attempts have been made to quantify the VOC emissions associated with the brine ponds, going back to the late 1970s. There are potential issues associated with each of these quantification efforts. Two efforts were made to quantify VOC emissions by analyzing the VOC content of brine samples obtained from wells. Efforts to measure air concentrations were mentioned in historical reports, but no data have been located to confirm these assertions. A modeling effort was also performed to quantify the VOC emissions. More recently, in 2011-2013, additional brine sampling has been performed to update the VOC emissions estimate. An analysis of the statistical confidence in these results is presented here. Arguably, there are uncertainties associated with each of these efforts. The analysis herein indicates that the upper confidence limit in VOC emissions based on the recent brine sampling is very close to the 0.42 ton/MMB limit used historically on the project. Refining this estimate would require considerable investment in additional sampling, analysis, and monitoring. An analysis of the VOC emissions at each site suggests that additional discharges could be made while staying within current regulatory limits.
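
    For illustration, an upper confidence limit of the kind mentioned above can be computed from a set of per-sample emission estimates with a one-sided t-interval; the values below are invented and the t-based interval is an assumption about the statistical model, not the project's actual analysis.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical per-sample VOC emission estimates in ton/MMB (invented values;
    # the real study used recent brine sampling data).
    x = np.array([0.31, 0.38, 0.35, 0.44, 0.29, 0.41, 0.36, 0.39])

    n, mean, s = len(x), x.mean(), x.std(ddof=1)
    # One-sided 95% upper confidence limit on the mean emission rate.
    ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * s / np.sqrt(n)

    print(f"mean = {mean:.3f} ton/MMB, 95% UCL = {ucl95:.3f} ton/MMB")
    print("exceeds 0.42 ton/MMB limit" if ucl95 > 0.42 else "within 0.42 ton/MMB limit")
    ```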

  10. Quantifying the Behavioural Relevance of Hippocampal Neurogenesis

    PubMed Central

    Lazic, Stanley E.; Fuss, Johannes; Gass, Peter

    2014-01-01

    Few studies that examine the neurogenesis–behaviour relationship formally establish covariation between neurogenesis and behaviour or rule out competing explanations. The behavioural relevance of neurogenesis might therefore be overestimated if other mechanisms account for some, or even all, of the experimental effects. A systematic review of the literature was conducted and the data reanalysed using causal mediation analysis, which can estimate the behavioural contribution of new hippocampal neurons separately from other mechanisms that might be operating. Results from eleven eligible individual studies were then combined in a meta-analysis to increase precision (representing data from 215 animals) and showed that neurogenesis made a negligible contribution to behaviour (standardised effect = 0.15; 95% CI = −0.04 to 0.34; p = 0.128); other mechanisms accounted for the majority of experimental effects (standardised effect = 1.06; 95% CI = 0.74 to 1.38; p = 1.7×10^-11). PMID:25426717

  11. Quantifying compositional impacts of ambient aerosol on cloud droplet formation

    NASA Astrophysics Data System (ADS)

    Lance, Sara

    It has been historically assumed that most of the uncertainty associated with the aerosol indirect effect on climate can be attributed to the unpredictability of updrafts. In Chapter 1, we analyze the sensitivity of cloud droplet number density to realistic variations in aerosol chemical properties and to variable updraft velocities using a 1-dimensional cloud parcel model in three important environmental cases (continental, polluted and remote marine). The results suggest that aerosol chemical variability may be as important to the aerosol indirect effect as the effect of unresolved cloud dynamics, especially in polluted environments. We next used a continuous flow streamwise thermal gradient Cloud Condensation Nuclei counter (CCNc) to study the water-uptake properties of the ambient aerosol, by exposing an aerosol sample to a controlled water vapor supersaturation and counting the resulting number of droplets. In Chapter 2, we modeled and experimentally characterized the heat transfer properties and droplet growth within the CCNc. Chapter 3 describes results from the MIRAGE field campaign, in which the CCNc and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA) were deployed at a ground-based site in March 2006. Size-resolved CCN activation spectra and growth factor distributions of the ambient aerosol in Mexico City were obtained, and an analytical technique was developed to quantify a probability distribution of solute volume fractions for the CCN, in addition to the aerosol mixing-state. The CCN were shown to be much less CCN-active than ammonium sulfate, with water uptake properties more consistent with low molecular weight organic compounds. The pollution outflow from Mexico City was shown to have CCN with an even lower fraction of soluble material. "Chemical Closure" was attained for the CCN by comparing the inferred solute volume fraction with that from direct chemical measurements. A clear diurnal pattern was observed for the CCN solute volume fraction, showing that measurable aging of the aerosol population occurs during the day, on the timescale of a few hours. The mixing state of the aerosol, also showing a consistent diurnal pattern, clearly correlates with a chemical tracer for local combustion sources. Chapter 4 describes results from the GoMACCS field study, in which the CCNc was subsequently deployed on an airborne field campaign in Houston, Texas during August-September 2006. GoMACCS tested our ability to predict CCN for highly polluted conditions with limited chemical information. Assuming the particles were composed purely of ammonium sulfate, CCN closure was obtained with a 10% overprediction bias on average for CCN concentrations ranging from less than 100 cm-3 to over 10,000 cm-3, but with on average 50% variability. Assuming measured concentrations of organics to be internally mixed and insoluble tended to reduce the overprediction bias for less polluted conditions, but led to an underprediction bias in the most polluted conditions. A likely explanation is that the high organic concentrations in the polluted environments depress the surface tension of the droplets, thereby enabling activation at lower soluble fractions.

  12. Species determination - Can we detect and quantify meat adulteration?

    PubMed

    Ballin, Nicolai Z; Vogensen, Finn K; Karlsson, Anders H

    2009-10-01

    Proper labelling of meat products is important to support fair trade and to enable consumers to make informed choices. However, it has been shown that labelling of species, expressed as weight/weight (w/w), on meat product labels was incorrect in more than 20% of cases. Enforcement of labelling regulations requires reliable analytical methods. Analytical methods are often based on protein or DNA measurements, which are not directly comparable to labelled meat expressed as w/w. This review discusses a wide range of analytical methods with focus on their ability to quantify and their limits of detection (LOD). In particular, problems associated with converting quantitative DNA-based results to meat content (w/w) are discussed. The hope is to make researchers aware of the problems of expressing DNA results as meat content (w/w) in order to find better alternatives. One alternative is to express DNA results as genome/genome equivalents. PMID:20416768

  13. Quantifying Flow Resistance of Mountain Streams Using the HHT Approach

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Fu, X.

    2014-12-01

    This study quantifies the flow resistance of mountain streams with gravel beds and pronounced bed forms. The motivation is to follow the earlier idea (Robert, 1990) that the bed surface can be divided into micro-scale and macro-scale roughness. We processed field data of longitudinal bed profiles of the Longxi River, Sichuan Province, China, using the Hilbert-Huang Transform (HHT). Each longitudinal profile was decomposed into a set of curves with different frequencies of spatial fluctuation, and the corresponding spectrogram was obtained. We assumed that certain high- and low-frequency curves correspond to the micro- and macro-scale roughness of the stream bed, respectively. From the spectrogram we specified the characteristic height and length of the macro-scale bed forms that account for form roughness, and estimated the bed form roughness as proportional to the ratio of height to length multiplied by the height (Yang et al., 2005). We took the parameter Sp, defined as the sinuosity of the highest-frequency curve, as the measure of micro-scale roughness, and accounted for bed material size by using the product of d50/R and Sp, where d50 is the median sediment size and R is the hydraulic radius. The macro- and micro-scale roughness parameters were then merged nonlinearly to evaluate the flow resistance caused by the interplaying friction and form drag forces. Validation results show that the coefficient of determination reaches 0.84 in the case of the Longxi River. Future studies will focus on verification against more field data as well as on the combination of skin friction and form drag. Key words: flow resistance; roughness; HHT; spectrogram; form drag. References: Robert, A. (1990), Boundary roughness in coarse-grained channels, Prog. Phys. Geogr., 14(1), 42-69. Yang, S.-Q., S.-K. Tan, and S.-Y. Lim (2005), Flow resistance and bed form geometry in a wide alluvial channel, Water Resour. Res., 41, W09419.

  14. Quantifying spore viability of the honey bee pathogen Nosema apis using flow cytometry.

    PubMed

    Peng, Yan; Lee-Pullen, Tracey F; Heel, Kathy; Millar, A Harvey; Baer, Boris

    2014-05-01

    Honey bees are hosts to more than 80 different parasites, some of them being highly virulent and responsible for substantial losses in managed honey bee populations. The study of honey bee pathogens and their interactions with the bees' immune system has therefore become a research area of major interest. Here we developed a fast, accurate and reliable method to quantify the viability of spores of the honey bee gut parasite Nosema apis. To verify this method, a dilution series with 0, 25, 50, 75, and 100% live N. apis was made and SYTO 16 and Propidium Iodide (n = 35) were used to distinguish dead from live spores. The viability of spores in each sample was determined by flow cytometry and compared with the current method based on fluorescence microscopy. Results show that N. apis viability counts using flow cytometry produced very similar results when compared with fluorescence microscopy. However, we found that fluorescence microscopy underestimates N. apis viability in samples with higher percentages of viable spores, the latter typically being what is found in biological samples. A series of experiments were conducted to confirm that flow cytometry allows the use of additional fluorescent dyes such as SYBR 14 and SYTOX Red (used in combination with SYTO 16 or Propidium Iodide) to distinguish dead from live spores. We also show that spore viability quantification with flow cytometry can be undertaken using substantially lower dye concentrations than fluorescence microscopy. In conclusion, our data show flow cytometry to be a fast, reliable method to quantify N. apis spore viabilities, which has a number of advantages compared with existing methods. PMID:24339267

  15. Quantifying Riverscape Connectivity with Graph Theory

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp, but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries, dams are actually being removed. However, this is not the case in the developing world, where hydropower is seen as a key element of low-emissions power security. For example, several dam projects are envisaged in Himalayan catchments in the next 2 decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the connectivity structure of the Gangetic riverscape with fluvial remote sensing. Our study reach extends from the heavily dammed headwaters of the Bhagirathi, Mandakini and Alaknanda rivers which form the source of the Ganga to Allahabad ~900 km downstream on the main stem. We use Landsat-8 imagery as the baseline dataset. Channel width along the Ganga (i.e. Ganges) is often several kilometres. Therefore, the pan-sharpened 15 m pixels of Landsat-8 are in fact capable of resolving inner channel features for over 80% of the channel length, thus allowing a riverscape approach to be adopted. We examine the following connectivity metrics: size distribution of connected components, betweenness centrality and the integrated index of connectivity. A geographic perspective is added by mapping local (25 km-scale) values for these metrics in order to examine spatial patterns of connectivity. This approach allows us to map impacts of dam construction and has the potential to inform policy decisions in the area as well as open up new avenues of investigation.
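
    A minimal sketch of two of these metrics on a toy river network is shown below; the reach names, topology and the dammed edge are invented for illustration and do not correspond to the Gangetic study reach.

    ```python
    import networkx as nx

    # Toy river network: nodes are channel reaches, edges are flow connections.
    G = nx.Graph()
    G.add_edges_from([
        ("headwater_A", "confluence_1"), ("headwater_B", "confluence_1"),
        ("confluence_1", "mainstem_1"), ("mainstem_1", "mainstem_2"),
        ("headwater_C", "mainstem_2"), ("mainstem_2", "mainstem_3"),
    ])

    def summarise(graph, label):
        # Size distribution of connected components and the most "between" reach.
        comps = sorted((len(c) for c in nx.connected_components(graph)), reverse=True)
        bc = nx.betweenness_centrality(graph)
        hub = max(bc, key=bc.get)
        print(f"{label}: component sizes {comps}, most central reach = {hub}")

    summarise(G, "undammed")

    # A dam on the confluence_1 -> mainstem_1 link fragments the network in two.
    G_dammed = G.copy()
    G_dammed.remove_edge("confluence_1", "mainstem_1")
    summarise(G_dammed, "dammed")
    ```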

  16. A new model for quantifying climate episodes

    NASA Astrophysics Data System (ADS)

    Biondi, Franco; Kozubowski, Tomasz J.; Panorska, Anna K.

    2005-07-01

    When long records of climate (precipitation, temperature, stream runoff, etc.) are available, either from instrumental observations or from proxy records, the objective evaluation and comparison of climatic episodes becomes necessary. Such episodes can be quantified in terms of duration (the number of time intervals, e.g. years, the process remains continuously above or below a reference level) and magnitude (the sum of all series values for a given duration). The joint distribution of duration and magnitude is represented here by a stochastic model called BEG, for bivariate distribution with exponential and geometric marginals. The model is based on the theory of random sums, and its mathematical derivation confirms and extends previous empirical findings. Probability statements that can be obtained from the model are illustrated by applying it to a 2300-year dendroclimatic reconstruction of water-year precipitation for the eastern Sierra Nevada-western Great Basin. Using the Dust Bowl drought period as an example, the chance of a longer or greater drought is 8%. Conditional probabilities are much higher, i.e. a drought of that magnitude has a 62% chance of lasting for 11 years or longer, and a drought that lasts 11 years has a 46% chance of having an equal or greater magnitude. In addition, because of the bivariate model, we can estimate a 6% chance of witnessing a drought that is both longer and greater. Additional examples of model application are also provided. This type of information provides a way to place any climatic episode in a temporal perspective, and such numerical statements help with reaching science-based management and policy decisions.
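
    The flavour of such probability statements can be reproduced with a simple Monte Carlo stand-in for the BEG model: durations drawn from a geometric distribution and magnitudes as sums of exponential yearly deficits. The parameters and thresholds below are illustrative assumptions, not the fitted values from the dendroclimatic reconstruction.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    N = 50_000

    # Illustrative parameters: a drought ends each year with probability p_end,
    # and each drought year adds an Exponential(scale) deficit.
    p_end, scale = 0.2, 1.0

    durations = rng.geometric(p_end, N)                      # years below the reference level
    magnitudes = np.array([rng.exponential(scale, d).sum() for d in durations])

    # Joint exceedance probabilities, e.g. for an episode at least 11 years long and
    # at least 15 units in cumulative deficit (thresholds are illustrative).
    d0, m0 = 11, 15.0
    print("P(D >= d0)            ", (durations >= d0).mean())
    print("P(M >= m0 | D >= d0)  ", (magnitudes[durations >= d0] >= m0).mean())
    print("P(D >= d0 and M >= m0)", ((durations >= d0) & (magnitudes >= m0)).mean())
    ```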

  17. Quantifying Collective Attention from Tweet Stream

    PubMed Central

    Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

    2013-01-01

    Online social media are increasingly facilitating our social interactions, thereby making available a massive digital fossil of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of collective attention on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or tweets. Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attention, including one related to the Tohoku-oki earthquake. Retweet networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era. PMID:23637913
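
    A minimal sketch of the burst measure is given below: the Jensen-Shannon divergence between a baseline (circadian) tweet-rate profile and the profile observed in the current window. The hourly counts are invented, and using hour-of-day histograms is a simplifying assumption rather than the authors' exact windowing.

    ```python
    import numpy as np
    from scipy.spatial.distance import jensenshannon

    def collective_attention(baseline_counts, current_counts):
        """Jensen-Shannon divergence between the regular (circadian) tweet-rate profile
        and the profile in the current window; larger values mean a stronger burst."""
        p = np.asarray(baseline_counts, float); p = p / p.sum()
        q = np.asarray(current_counts, float);  q = q / q.sum()
        return jensenshannon(p, q, base=2) ** 2   # squared distance = divergence

    # Hourly tweet counts (illustrative): an ordinary day vs. a day with an
    # event-driven burst around hours 13-16.
    ordinary = [20, 15, 10, 8, 8, 10, 20, 40, 60, 70, 75, 80,
                85, 80, 78, 75, 70, 72, 80, 90, 85, 70, 50, 30]
    event    = [20, 15, 10, 8, 8, 10, 20, 40, 60, 70, 75, 80,
                85, 300, 450, 380, 120, 90, 80, 90, 85, 70, 50, 30]

    print(round(collective_attention(ordinary, ordinary), 4))  # ~0: no attention spike
    print(round(collective_attention(ordinary, event), 4))     # > 0: collective attention
    ```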

  18. Quantifying human vitamin kinetics using AMS

    SciTech Connect

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population is available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  19. Quantifying landscape resilience using vegetation indices

    NASA Astrophysics Data System (ADS)

    Eddy, I. M. S.; Gergel, S. E.

    2014-12-01

    Landscape resilience refers to the ability of systems to adapt to and recover from disturbance. In pastoral landscapes, degradation can be measured in terms of increased desertification and/or shrub encroachment. In many countries across Central Asia, the use and resilience of pastoral systems has changed markedly over the past 25 years, influenced by centralized Soviet governance, private property rights and, recently, communal resource governance. In Kyrgyzstan, recent governance reforms were a response to the increasing degradation of pastures attributed to livestock overgrazing. Our goal is to examine and map the landscape-level factors that influence overgrazing throughout successive governance periods. Here, we map and examine some of the spatial factors influencing landscape resilience in agro-pastoral systems in the Kyrgyz Republic, where pastures occupy >50% of the country's area. We ask three questions: 1) which mechanisms of pasture degradation (desertification vs. shrub encroachment) are detectable using remote sensing vegetation indices?; 2) are these degraded pastures associated with landscape features that influence herder mobility and accessibility (e.g., terrain, distance to other pastures)?; and 3) have these patterns changed through successive governance periods? Using a chronosequence of Landsat imagery (1999-2014), NDVI and other VIs were used to identify trends in pasture condition during the growing season. Least-cost path distances as well as graph theoretic indices were derived from topographic factors to assess landscape connectivity (from villages to pastures and among pastures). Fieldwork was used to assess the feasibility and accuracy of this approach using the most recent imagery. Previous research concluded that low herder mobility hindered pasture use; thus we expect the distance from pasture to village to be an important predictor of pasture condition. This research will quantify the magnitude of pastoral degradation and test assumptions regarding sustainable pastoral management. As grazing is the most extensive land use on Earth, understanding the broad-scale factors that influence the resilience of pastoral systems is an important issue globally.
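
    For reference, the vegetation index at the core of this analysis can be computed per pixel as shown below; the reflectance values are invented and serve only to illustrate how a sustained NDVI decline between image dates would be read as a degradation signal.

    ```python
    import numpy as np

    def ndvi(nir, red):
        """Normalised Difference Vegetation Index from near-infrared and red reflectance."""
        nir, red = np.asarray(nir, float), np.asarray(red, float)
        return (nir - red) / (nir + red + 1e-9)     # small epsilon avoids divide-by-zero

    # Illustrative per-pixel reflectances for one pasture in two years; a sustained
    # drop in growing-season NDVI would flag possible degradation (desertification),
    # while localised increases might instead indicate shrub encroachment.
    ndvi_1999 = ndvi(nir=[0.42, 0.45, 0.40], red=[0.12, 0.10, 0.13])
    ndvi_2014 = ndvi(nir=[0.30, 0.32, 0.29], red=[0.15, 0.14, 0.16])
    print(np.round(ndvi_1999, 2), np.round(ndvi_2014, 2))
    ```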

  20. Quantifying the effect of environment stability on the transcription factor repertoire of marine microbes

    PubMed Central

    2011-01-01

    Background DNA-binding transcription factors (TFs) regulate cellular functions in prokaryotes, often in response to environmental stimuli. Thus, the environment exerts constant selective pressure on the TF gene content of microbial communities. Recently a study on marine Synechococcus strains detected differences in their genomic TF content related to environmental adaptation, but so far the effect of environmental parameters on the content of TFs in bacterial communities has not been systematically investigated. Results We quantified the effect of environment stability on the transcription factor repertoire of marine pelagic microbes from the Global Ocean Sampling (GOS) metagenome using interpolated physico-chemical parameters and multivariate statistics. Thirty-five percent of the difference in relative TF abundances between samples could be explained by environment stability. Six percent was attributable to spatial distance but none to a combination of both spatial distance and stability. Some individual TFs showed a stronger relationship to environment stability and space than the total TF pool. Conclusions Environmental stability appears to have a clearly detectable effect on TF gene content in bacterioplanktonic communities described by the GOS metagenome. Interpolated environmental parameters were shown to compare well to in situ measurements and were essential for quantifying the effect of the environment on the TF content. It is demonstrated that comprehensive and well-structured contextual data will strongly enhance our ability to interpret the functional potential of microbes from metagenomic data. PMID:22587903

  1. An integrated method for quantifying root architecture of field-grown maize

    PubMed Central

    Wu, Jie; Guo, Yan

    2014-01-01

    Background and Aims A number of techniques have recently been developed for studying the root system architecture (RSA) of seedlings grown in various media. In contrast, methods for sampling and analysis of the RSA of field-grown plants, particularly for details of the lateral root components, are generally inadequate. Methods An integrated methodology was developed that includes a custom-made root-core sampling system for extracting intact root systems of individual maize plants, a combination of proprietary software and a novel program used for collecting individual RSA information, and software for visualizing the measured individual nodal root architecture. Key Results Example experiments show that large root cores can be sampled, and topological and geometrical structure of field-grown maize root systems can be quantified and reconstructed using this method. Second- and higher order laterals are found to contribute substantially to total root number and length. The length of laterals of distinct orders varies significantly. Abundant higher order laterals can arise from a single first-order lateral, and they concentrate in the proximal axile branching zone. Conclusions The new method allows more meaningful sampling than conventional methods because of its easily opened, wide corer and sampling machinery, and effective analysis of RSA using the software. This provides a novel technique for quantifying RSA of field-grown maize and also provides a unique evaluation of the contribution of lateral roots. The method also offers valuable potential for parameterization of root architectural models. PMID:24532646

  2. Quantifying the threat of extinction from Muller's ratchet in the diploid Amazon molly (Poecilia formosa)

    PubMed Central

    2008-01-01

    Background The Amazon molly (Poecilia formosa) is a small unisexual fish that has been suspected of being threatened by extinction from the stochastic accumulation of slightly deleterious mutations that is caused by Muller's ratchet in non-recombining populations. However, no detailed quantification of the extent of this threat is available. Results Here we quantify genomic decay in this fish by using a simple model of Muller's ratchet with the most realistic parameter combinations available employing the evolution@home global computing system. We also describe simple extensions of the standard model of Muller's ratchet that allow us to deal with selfing diploids, triploids and mitotic recombination. We show that Muller's ratchet creates a threat of extinction for the Amazon molly for many biologically realistic parameter combinations. In most cases, extinction is expected to occur within a time frame that is less than previous estimates of the age of the species, leading to a genomic decay paradox. Conclusion How then does the Amazon molly survive? Several biological processes could individually or in combination solve this genomic decay paradox, including paternal leakage of undamaged DNA from sexual sister species, compensatory mutations and many others. More research is needed to quantify the contribution of these potential solutions towards the survival of the Amazon molly and other (ancient) asexual species. PMID:18366680

  3. VA-Index: Quantifying Assortativity Patterns in Networks with Multidimensional Nodal Attributes

    PubMed Central

    Pelechrinis, Konstantinos; Wei, Dong

    2016-01-01

    Network connections have been shown to be correlated with structural or external attributes of the network vertices in a variety of cases. Given the prevalence of this phenomenon, network scientists have developed metrics to quantify its extent. In particular, the assortativity coefficient is used to capture the level of correlation between a single-dimensional attribute (categorical or scalar) of the network nodes and the observed connections, i.e., the edges. Nevertheless, in many cases a multi-dimensional, i.e., vector, feature of the nodes is of interest. Such attributes can describe complex behavioral patterns (e.g., mobility) of the network entities. To date, little attention has been given to this setting, and there has not been a general and formal treatment of this problem. In this study we develop a metric, the vector assortativity index (VA-index for short), based on network randomization and (empirical) statistical hypothesis testing, that is able to quantify the assortativity patterns of a network with respect to a vector attribute. Our extensive experimental results on synthetic network data show that the VA-index outperforms a baseline extension of the assortativity coefficient, which has been used in the literature to cope with similar cases. Furthermore, the VA-index can be calibrated (in terms of parameters) fairly easily, while its benefits increase with the (co-)variance of the vector elements, where the baseline systematically over(under)estimates the true mixing patterns of the network. PMID:26816262
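
    The randomization-plus-hypothesis-testing idea can be sketched as below. This is not the VA-index itself, only a simplified stand-in: it scores assortativity as the mean distance between attribute vectors across edges and compares it with a null distribution obtained by shuffling the vectors over nodes; the graph and attributes are toy data.

    ```python
    import numpy as np
    import networkx as nx

    rng = np.random.default_rng(0)

    # Toy graph with a 2-D attribute vector per node (values are illustrative).
    G = nx.karate_club_graph()
    attrs = {n: rng.normal(size=2) + (0.0 if G.nodes[n]["club"] == "Mr. Hi" else 2.0)
             for n in G}

    def mean_edge_distance(graph, attributes):
        """Average Euclidean distance between attribute vectors of connected nodes;
        small values indicate assortative mixing on the vector attribute."""
        return np.mean([np.linalg.norm(attributes[u] - attributes[v])
                        for u, v in graph.edges])

    observed = mean_edge_distance(G, attrs)

    # Null distribution: shuffle attribute vectors over nodes, topology kept fixed.
    nodes = list(G)
    null = []
    for _ in range(1000):
        perm = rng.permutation(nodes)
        shuffled = {n: attrs[p] for n, p in zip(nodes, perm)}
        null.append(mean_edge_distance(G, shuffled))

    p_value = np.mean(np.array(null) <= observed)   # one-sided: observed unusually small?
    print(f"observed = {observed:.3f}, p = {p_value:.3f}")
    ```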

  4. Quantifying the Short-Term Costs of Conservation Interventions for Fishers at Lake Alaotra, Madagascar.

    PubMed

    Wallace, Andrea P C; Milner-Gulland, E J; Jones, Julia P G; Bunnefeld, Nils; Young, Richard; Nicholson, Emily

    2015-01-01

    Artisanal fisheries are a key source of food and income for millions of people, but if poorly managed, fishing can have declining returns as well as impacts on biodiversity. Management interventions such as spatial and temporal closures can improve fishery sustainability and reduce environmental degradation, but may carry substantial short-term costs for fishers. The Lake Alaotra wetland in Madagascar supports a commercially important artisanal fishery and provides habitat for a Critically Endangered primate and other endemic wildlife of conservation importance. Using detailed data from more than 1,600 fisher catches, we used linear mixed effects models to explore and quantify relationships between catch weight, effort, and spatial and temporal restrictions to identify drivers of fisher behaviour and quantify the potential effect of fishing restrictions on catch. We found that restricted area interventions and fishery closures would generate direct short-term costs through reduced catch and income, and these costs vary between groups of fishers using different gear. Our results show that conservation interventions can have uneven impacts on local people with different fishing strategies. This information can be used to formulate management strategies that minimise the adverse impacts of interventions, increase local support and compliance, and therefore maximise conservation effectiveness. PMID:26107284

  5. Quantifying Land Use Impacts on Biodiversity: Combining Species-Area Models and Vulnerability Indicators.

    PubMed

    Chaudhary, Abhishek; Verones, Francesca; de Baan, Laura; Hellweg, Stefanie

    2015-08-18

    Habitat degradation and subsequent biodiversity damage often take place far from the place of consumption because of globalization and the increasing level of international trade. Informing consumers and policy makers about the biodiversity impacts "hidden" in the life cycle of imported products is an important step toward achieving sustainable consumption patterns. Spatially explicit methods are needed in life cycle assessment to accurately quantify biodiversity impacts of products and processes. We use the Countryside species-area relationship (SAR) to quantify regional species loss due to land occupation and transformation for five taxa and six land use types in 804 terrestrial ecoregions. Further, we calculate vulnerability scores for each ecoregion based on the fraction of each species' geographic range (endemic richness) hosted by the ecoregion and the IUCN-assigned threat level of each species. Vulnerability scores are multiplied with SAR-predicted regional species loss to estimate potential global extinctions per unit of land use. As a case study, we assess the land use biodiversity impacts of 1 kg of bioethanol produced using six different feedstocks in different parts of the world. Results show that the regions with the highest biodiversity impacts differed markedly when the vulnerability of species was included. PMID:26197362
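
    A stylised version of the impact calculation is sketched below: a countryside SAR predicts regional species loss from affinity-weighted land-use areas, and the loss is scaled by a vulnerability score to approximate global extinction equivalents. All numbers (areas, affinities, z-exponent, vulnerability) are invented placeholders, not the paper's characterisation factors.

    ```python
    import numpy as np

    def countryside_sar_loss(S_org, A_org, areas_by_landuse, affinities, z=0.25):
        """Regional species loss predicted by a countryside SAR: species respond to the
        affinity-weighted area of each land-use type rather than to natural habitat only."""
        A_eff = sum(h * a for h, a in zip(affinities, areas_by_landuse))
        return S_org * (1.0 - (A_eff / A_org) ** z)

    # Hypothetical ecoregion: 1000 km2 and 120 bird species originally; 60% natural
    # habitat, 30% annual crops (low affinity), 10% agroforestry (higher affinity).
    S_loss = countryside_sar_loss(
        S_org=120, A_org=1000.0,
        areas_by_landuse=[600.0, 300.0, 100.0],
        affinities=[1.0, 0.1, 0.5],
    )

    # Vulnerability weighting: scale regional loss by an (illustrative) score reflecting
    # range endemism and threat level to approximate global extinction equivalents.
    vulnerability = 0.02
    print(f"regional loss ~ {S_loss:.1f} species; "
          f"global equivalents ~ {S_loss * vulnerability:.2f}")
    ```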

  6. Comprehensive analysis of individual pulp fiber bonds quantifies the mechanisms of fiber bonding in paper

    NASA Astrophysics Data System (ADS)

    Hirn, Ulrich; Schennach, Robert

    2015-05-01

    The process of papermaking requires substantial amounts of energy and wood consumption, which contributes to larger environmental costs. In order to optimize the production of papermaking to suit its many applications in material science and engineering, a quantitative understanding of bonding forces between the individual pulp fibers is of importance. Here we show the first approach to quantify the bonding energies contributed by the individual bonding mechanisms. We calculated the impact on the bonding energy of the following mechanisms necessary for paper formation: mechanical interlocking, interdiffusion, capillary bridges, hydrogen bonding, Van der Waals forces, and Coulomb forces. Experimental results quantify the area in molecular contact necessary for bonding. Atomic force microscopy experiments derive the impact of mechanical interlocking. Capillary bridges also contribute to the bond. A model based on the crystal structure of cellulose leads to values for the chemical bonds. In contrast to the general belief, which favors hydrogen bonding, Van der Waals bonds play the most important role according to our model. Comparison with experimentally derived bond energies supports the presented model. This study characterizes bond formation between pulp fibers, leading to insight that could potentially be used to optimize the papermaking process while reducing energy and wood consumption.

  7. Quantifying temporal bone morphology of great apes and humans: an approach using geometric morphometrics

    PubMed Central

    Lockwood, Charles A; Lynch, John M; Kimbel, William H

    2002-01-01

    The hominid temporal bone offers a complex array of morphology that is linked to several different functional systems. Its frequent preservation in the fossil record gives the temporal bone added significance in the study of human evolution, but its morphology has proven difficult to quantify. In this study we use techniques of 3D geometric morphometrics to quantify differences among humans and great apes and discuss the results in a phylogenetic context. Twenty-three landmarks on the ectocranial surface of the temporal bone provide a high level of anatomical detail. Generalized Procrustes analysis (GPA) is used to register (adjust for position, orientation and scale) landmark data from 405 adults representing Homo, Pan, Gorilla and Pongo. Principal components analysis of residuals from the GPA shows that the major source of variation is between humans and apes. Human characteristics such as a coronally orientated petrous axis, a deep mandibular fossa, a projecting mastoid process, and reduced lateral extension of the tympanic element strongly impact the analysis. In phenetic cluster analyses, gorillas and orangutans group together with respect to chimpanzees, and all apes group together with respect to humans. Thus, the analysis contradicts depictions of African apes as a single morphotype. Gorillas and orangutans lack the extensive preglenoid surface of chimpanzees, and their mastoid processes are less medially inflected. These and other characters shared by gorillas and orangutans are probably primitive for the African hominid clade. PMID:12489757
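
    The registration and ordination steps can be sketched as follows, assuming randomly generated landmark configurations in place of the real temporal-bone data: a simple generalised Procrustes alignment (translation, scaling and rotation onto a running mean shape) followed by a principal components decomposition of the aligned residuals. This is a minimal stand-in, not the authors' software pipeline.

    ```python
    import numpy as np

    def align(X, ref):
        """Ordinary Procrustes: translate, scale and rotate landmark set X (k x 3)
        onto the reference configuration."""
        Xc, Rc = X - X.mean(0), ref - ref.mean(0)
        Xc, Rc = Xc / np.linalg.norm(Xc), Rc / np.linalg.norm(Rc)
        U, _, Vt = np.linalg.svd(Xc.T @ Rc)      # orthogonal Procrustes rotation
        return Xc @ U @ Vt

    def gpa(configs, n_iter=10):
        """Simple generalised Procrustes analysis: iteratively align all
        configurations to their running mean shape."""
        ref = configs[0]
        for _ in range(n_iter):
            aligned = np.array([align(X, ref) for X in configs])
            ref = aligned.mean(0)
        return aligned

    # Toy data: 40 specimens x 23 landmarks x 3 coordinates standing in for the
    # real ectocranial temporal-bone landmarks.
    rng = np.random.default_rng(0)
    configs = rng.normal(size=(40, 23, 3)) + rng.normal(size=(23, 3))

    aligned = gpa(list(configs))
    residuals = (aligned - aligned.mean(0)).reshape(len(aligned), -1)

    # PCA of the Procrustes residuals: principal components of shape variation.
    _, s, _ = np.linalg.svd(residuals - residuals.mean(0), full_matrices=False)
    explained = s**2 / np.sum(s**2)
    print("variance explained by first 3 shape PCs:", np.round(explained[:3], 3))
    ```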

  8. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification because it is capable of handling small samples and is more computationally efficient compared with other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including the individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.

  9. Quantifying the Short-Term Costs of Conservation Interventions for Fishers at Lake Alaotra, Madagascar

    PubMed Central

    Wallace, Andrea P. C.; Milner-Gulland, E. J.; Jones, Julia P. G.; Bunnefeld, Nils; Young, Richard; Nicholson, Emily

    2015-01-01

    Artisanal fisheries are a key source of food and income for millions of people, but if poorly managed, fishing can have declining returns as well as impacts on biodiversity. Management interventions such as spatial and temporal closures can improve fishery sustainability and reduce environmental degradation, but may carry substantial short-term costs for fishers. The Lake Alaotra wetland in Madagascar supports a commercially important artisanal fishery and provides habitat for a Critically Endangered primate and other endemic wildlife of conservation importance. Using detailed data from more than 1,600 fisher catches, we used linear mixed effects models to explore and quantify relationships between catch weight, effort, and spatial and temporal restrictions to identify drivers of fisher behaviour and quantify the potential effect of fishing restrictions on catch. We found that restricted area interventions and fishery closures would generate direct short-term costs through reduced catch and income, and these costs vary between groups of fishers using different gear. Our results show that conservation interventions can have uneven impacts on local people with different fishing strategies. This information can be used to formulate management strategies that minimise the adverse impacts of interventions, increase local support and compliance, and therefore maximise conservation effectiveness. PMID:26107284

  10. Quantifying the universality of the stellar initial mass function in old star clusters

    NASA Astrophysics Data System (ADS)

    Leigh, Nathan; Umbreit, Stefan; Sills, Alison; Knigge, Christian; de Marchi, Guido; Glebbeek, Evert; Sarajedini, Ata

    2012-05-01

    We present a new technique to quantify cluster-to-cluster variations in the observed present-day stellar mass functions of a large sample of star clusters. Our method quantifies these differences as a function of both the stellar mass and the total cluster mass, and offers the advantage that it is insensitive to the precise functional form of the mass function. We applied our technique to data taken from the Advanced Camera for Surveys (ACS) Survey for Globular Clusters, from which we obtained completeness-corrected stellar mass functions in the mass range 0.25-0.75 M⊙ for a sample of 27 clusters. The results of our observational analysis were then compared to Monte Carlo simulations for globular cluster evolution spanning a range of initial mass functions, total number of stars, concentrations and virial radii. We show that the present-day mass functions of the clusters in our sample can be reproduced by assuming a universal initial mass function for all clusters, and that the cluster-to-cluster differences are consistent with what is expected from two-body relaxation. A more complete exploration of the initial cluster conditions will be needed in future studies to better constrain the precise functional form of the initial mass function. This study is a first step towards using our technique to constrain the dynamical histories of a large sample of old Galactic star clusters and, by extension, star formation in the early Universe.

  11. Comprehensive analysis of individual pulp fiber bonds quantifies the mechanisms of fiber bonding in paper.

    PubMed

    Hirn, Ulrich; Schennach, Robert

    2015-01-01

    The process of papermaking requires substantial amounts of energy and wood consumption, which contributes to larger environmental costs. In order to optimize the production of papermaking to suit its many applications in material science and engineering, a quantitative understanding of bonding forces between the individual pulp fibers is of importance. Here we show the first approach to quantify the bonding energies contributed by the individual bonding mechanisms. We calculated the impact on the bonding energy of the following mechanisms necessary for paper formation: mechanical interlocking, interdiffusion, capillary bridges, hydrogen bonding, Van der Waals forces, and Coulomb forces. Experimental results quantify the area in molecular contact necessary for bonding. Atomic force microscopy experiments derive the impact of mechanical interlocking. Capillary bridges also contribute to the bond. A model based on the crystal structure of cellulose leads to values for the chemical bonds. In contrast to the general belief, which favors hydrogen bonding, Van der Waals bonds play the most important role according to our model. Comparison with experimentally derived bond energies supports the presented model. This study characterizes bond formation between pulp fibers, leading to insight that could potentially be used to optimize the papermaking process while reducing energy and wood consumption. PMID:26000898

  12. Quantifying Recent Changes in Earth's Radiation Budget

    NASA Astrophysics Data System (ADS)

    Loeb, N. G.; Kato, S.; Lyman, J. M.; Johnson, G. C.; Doelling, D.; Wong, T.; Allan, R.; Soden, B. J.; Stephens, G. L.

    2011-12-01

    The radiative energy balance between the solar or shortwave (SW) radiation absorbed by Earth and the thermal infrared or longwave (LW) radiation emitted back to space is fundamental to climate. An increase in the net radiative flux into the system (e.g., due to external forcing) is primarily stored as heat in the ocean, and can resurface at a later time to affect weather and climate on a global scale. The associated changes in components of the Earth-atmosphere system, such as clouds, the surface and the atmosphere, further alter the radiative balance, leading to further changes in weather and climate. Observations from instruments aboard Aqua and other satellites clearly show large interannual and decadal variability in the Earth's radiation budget associated with the major modes of climate variability (e.g., ENSO, NAO, etc.). We present results from CERES regarding variations in the net radiation imbalance of the planet during the past decade, comparing them with independent estimates of ocean heating rates derived from in-situ observations of ocean heat content. We combine these two data sets to calculate that during the past decade Earth has been accumulating energy at the rate 0.54 ± 0.43 Wm-2, suggesting that while Earth's surface has not warmed significantly during the 2000s, energy is continuing to accumulate in the sub-surface ocean. Our observations do not support previous claims of "missing energy" in the system. We exploit data from other instruments such as MODIS, AIRS, CALIPSO and CloudSat to examine how clouds and atmospheric temperature/humidity vary both at regional and global scales during ENSO events. Finally, we present a revised representation of the global mean Earth radiation budget derived from gridded monthly mean TOA and surface radiative fluxes (EBAF-TOA and EBAF-SFC) that are based on a radiative assimilation analysis of observations from Aqua, Terra, geostationary satellites, CALIPSO and CloudSat.

  13. Quantifying the Magnitude of Anomalous Solar Absorption

    SciTech Connect

    Ackerman, Thomas P.; Flynn, Donna M.; Marchand, Roger T.

    2003-05-16

    The data set from ARESE II, sponsored by the Atmospheric Radiation Measurement Program, provides a unique opportunity to understand solar absorption in the atmosphere because of the combination of three sets of broadband solar radiometers mounted on the Twin Otter aircraft and the ground based instruments at the ARM Southern Great Plains facility. In this study, we analyze the measurements taken on two clear sky days and three cloudy days and model the solar radiative transfer in each case with two different models. On the two clear days, the calculated and measured column absorptions agree to better than 10 Wm-2, which is about 10% of the total column absorption. Because both the model fluxes and the individual radiometer measurements are accurate to no better than 10 Wm-2, we conclude that the models and measurements are essentially in agreement. For the three cloudy days, the model calculations agree very well with each other and on two of the three days agree with the measurements to 20 Wm-2 or less out of a total column absorption of more than 200 Wm-2, which is again agreement at better than 10%. On the third day, the model and measurements agree to either 8% or 14% depending on which value of surface albedo is used. Differences exceeding 10% represent a significant absorption difference between model and observations. In addition to the uncertainty in absorption due to surface albedo, we show that including aerosol with an optical depth similar to that found on clear days can reduce the difference between model and measurement by 5% or more. Thus, we conclude that the ARESE II results are incompatible with previous studies reporting extreme anomalous absorption and can be modeled with our current understanding of radiative transfer.

  14. A simple diffusion model showing anomalous scaling

    SciTech Connect

    Rowlands, G.; Sprott, J. C.

    2008-08-15

    A number of iterated maps and one flow, which show chaotic behavior, have been studied numerically and their time evolution expressed in terms of higher-order moments M_m(t). All the cases show anomalous behavior with M_m(t) ≈ t^g(m), with g(m) ≠ αm. A simple analytic treatment is given based on an effective diffusion that is dependent on both space and time. This leads to a form g(m)/m = a - b/m, which is in good agreement with numerical results. This behavior is attributed to the presence of convective motion superimposed on the background diffusion, and hence this behavior is expected in a wide variety of maps and flows.
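
    The way g(m) is extracted from an ensemble can be illustrated with ordinary Brownian walkers, for which g(m) = m/2 exactly; a chaotic map or flow would simply replace the trajectory generator. The sketch below fits the moment scaling exponents by log-log regression; the trajectory count and sampling times are arbitrary choices, not values from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ensemble of ordinary random walks x_i(t); a chaotic map would replace this line.
    steps = rng.normal(size=(2000, 2048))
    x = np.cumsum(steps, axis=1)

    times = np.array([2**k for k in range(4, 12)])

    # Fit M_m(t) ~ t^g(m) by a log-log least-squares slope for each moment order m.
    for m in (1, 2, 3, 4):
        M = np.mean(np.abs(x[:, times - 1]) ** m, axis=0)
        g_m = np.polyfit(np.log(times), np.log(M), 1)[0]
        print(f"m = {m}: g(m) = {g_m:.2f}, g(m)/m = {g_m / m:.2f}")   # ~0.5 for Brownian motion
    ```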

  15. Quantifiable outcomes from corporate and higher education learning collaborations

    NASA Astrophysics Data System (ADS)

    Devine, Thomas G.

    The study investigated the existence of measurable learning outcomes that emerged out of the shared strengths of collaborating sponsors. The study identified quantifiable learning outcomes that confirm corporate, academic and learner participation in learning collaborations. Each of the three hypotheses and the synergy indicator quantitatively and qualitatively confirmed learning outcomes benefiting participants. The academic indicator quantitatively confirmed that learning outcomes attract learners to the institution. The corporate indicator confirmed that learning outcomes include knowledge exchange and enhanced workforce talents for careers in the energy-utility industry. The learner indicator confirmed that learning outcomes provide professional development opportunities for employment. The synergy indicator confirmed that best learning practices in learning collaborations emanate from the sponsors' shared strengths, and that partnerships can be elevated to strategic alliances, going beyond responding to the desires of sponsors to creating learner-centered cultures. The synergy indicator also confirmed the value of organizational processes that elevate sponsors' interactions to shared strengths, creating a learner-centered culture. The study's series of qualitative questions confirmed prior success factors, while verifying the hypothesis results and providing insight not available from quantitative data. The direct beneficiaries of the study are the energy-utility learning-collaboration participants of the study, and the corporations, academic institutions, and learners in the collaboration. The indirect beneficiaries are the stakeholders of future learning collaborations, through improved knowledge of the existence or absence of quantifiable learning outcomes.

  16. Quantifying the auditory saltation illusion: An objective psychophysical methodology

    NASA Astrophysics Data System (ADS)

    Kidd, Joanna C.; Hogben, John H.

    2004-08-01

    Under conditions of rapid presentation, brief acoustic stimuli repeatedly delivered first at one location, then at another, are systematically mislocalized, with stimuli perceived as traveling smoothly between the two locations. This robust illusory motion percept is termed "auditory saltation." Currently, the characteristics and mechanisms of auditory saltation are not well understood. The lack of objective methods capable of quantifying the illusion on an individual basis seems to be a limiting factor for this area of research. In this study, we outline an objective psychophysical task that estimates the interstimulus interval (ISI) at which the saltation illusion is reliably distinguishable from simulated motion. Experiment 1 examined the psychophysical function relating task performance to ISI and addressed the suitability of the task for use with adaptive psychophysical procedures. Experiment 2 directly compared performance on the task with that of another quantification method. The results suggested that this objective approach to the study of auditory saltation overcomes difficulties associated with more subjective methods, and provides a reliable paradigm within which to quantify the temporal parameters of saltation on an individual basis.

  17. Methods for quantifying uncertainty in fast reactor analyses.

    SciTech Connect

    Fanning, T. H.; Fischer, P. F.

    2008-04-07

    Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of uncertainty quantification (UQ) investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.
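
    The third, sampling-based avenue can be illustrated with a minimal Monte Carlo propagation sketch in Python; the model function, input distributions, and numerical values below are placeholders standing in for an existing analysis code, not anything from this program.

```python
import numpy as np

# Minimal Monte Carlo uncertainty-propagation sketch around a black-box model.
# 'model' stands in for an existing analysis code; the inputs and their
# distributions are purely illustrative placeholders.
rng = np.random.default_rng(42)

def model(power_w, flow_kg_s, inlet_temp_k):
    # Placeholder response: a crude coolant outlet-temperature estimate.
    cp = 1250.0  # J/(kg K), notional coolant heat capacity
    return inlet_temp_k + power_w / (flow_kg_s * cp)

n = 10_000
power = rng.normal(2.5e6, 5.0e4, n)   # W
flow = rng.normal(15.0, 0.3, n)       # kg/s
t_in = rng.normal(668.0, 2.0, n)      # K

t_out = model(power, flow, t_in)
lo, hi = np.percentile(t_out, [2.5, 97.5])
print(f"outlet T: mean {t_out.mean():.1f} K, 95% interval [{lo:.1f}, {hi:.1f}] K")
```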

  18. Quantified PIRT and Uncertainty Quantification for Computer Code Validation

    NASA Astrophysics Data System (ADS)

    Luo, Hu

    This study investigates and proposes a systematic method for uncertainty quantification in computer code validation applications. Uncertainty quantification has gained more and more attention in recent years. The U.S. Nuclear Regulatory Commission (NRC) requires the use of realistic best-estimate (BE) computer codes to follow the rigorous Code Scaling, Applicability, and Uncertainty (CSAU) methodology. In CSAU, the Phenomena Identification and Ranking Table (PIRT) was developed to identify important code uncertainty contributors. To support and examine the traditional PIRT with quantified judgments, this study proposes a novel approach, the Quantified PIRT (QPIRT), to identify important code models and parameters for uncertainty quantification. Dimensionless analysis of the code field equations, generating dimensionless (pi) groups from code simulation results, serves as the foundation of the QPIRT. Uncertainty quantification using the DAKOTA code, based on a sampling approach, is proposed in this study. Nonparametric statistical theory identifies the fixed number of code runs needed to assure 95 percent probability and 95 percent confidence in the code uncertainty intervals.
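
    The fixed run count mentioned at the end is conventionally obtained from Wilks' nonparametric tolerance-limit formula; whether this study used exactly that formulation is an assumption on my part, but the standard one-sided 95/95 calculation looks like the sketch below.

```python
import math

def wilks_first_order(coverage=0.95, confidence=0.95):
    # Smallest N such that the largest of N runs bounds the 'coverage'
    # quantile with probability 'confidence':  1 - coverage**N >= confidence.
    return math.ceil(math.log(1.0 - confidence) / math.log(coverage))

def wilks_second_order(coverage=0.95, confidence=0.95):
    # Smallest N when the second-largest of N runs is used as the bound.
    n = 2
    while 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1) < confidence:
        n += 1
    return n

print(wilks_first_order())   # -> 59 code runs for a one-sided 95/95 statement
print(wilks_second_order())  # -> 93 runs when the second-largest value is used
```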

  19. Quantifying relative diver effects in underwater visual censuses.

    PubMed

    Dickens, Luke C; Goatley, Christopher H R; Tanner, Jennifer K; Bellwood, David R

    2011-01-01

    Diver-based Underwater Visual Censuses (UVCs), particularly transect-based surveys, are key tools in the study of coral reef fish ecology. These techniques, however, have inherent problems that make it difficult to collect accurate numerical data. One of these problems is the diver effect (defined as the reaction of fish to a diver). Although widely recognised, its effects have yet to be quantified and the extent of taxonomic variation remains to be determined. We therefore examined relative diver effects on a reef fish assemblage on the Great Barrier Reef. Using common UVC methods, the recorded abundances of seven reef fish groups were significantly affected by the ongoing presence of SCUBA divers. Overall, the diver effect resulted in a 52% decrease in the mean number of individuals recorded, with declines of up to 70% in individual families. Although the diver effect appears to be a significant problem, UVCs remain a useful approach for quantifying spatial and temporal variation in relative fish abundances, especially if using methods that minimise the exposure of fishes to divers. Fixed distance transects using tapes or lines deployed by a second diver (or GPS-calibrated timed swims) would appear to maximise fish counts and minimise diver effects. PMID:21533039

  20. Live Cell Interferometry Quantifies Dynamics of Biomass Partitioning during Cytokinesis

    PubMed Central

    Zangle, Thomas A.; Teitell, Michael A.; Reed, Jason

    2014-01-01

    The equal partitioning of cell mass between daughters is the usual and expected outcome of cytokinesis for self-renewing cells. However, most studies of partitioning during cell division have focused on daughter cell shape symmetry or segregation of chromosomes. Here, we use live cell interferometry (LCI) to quantify the partitioning of daughter cell mass during and following cytokinesis. We use adherent and non-adherent mouse fibroblast and mouse and human lymphocyte cell lines as models and show that, on average, mass asymmetries present at the time of cleavage furrow formation persist through cytokinesis. The addition of multiple cytoskeleton-disrupting agents leads to increased asymmetry in mass partitioning which suggests the absence of active mass partitioning mechanisms after cleavage furrow positioning. PMID:25531652

  1. Quantifying chaotic dynamics from integrate-and-fire processes

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-01

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.
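
    For readers unfamiliar with the IF transformation itself, a minimal sketch is given below: a chaotic signal (here the Lorenz x-variable, an illustrative stand-in rather than the systems used in the record) is accumulated until a threshold is crossed, a spike is emitted, and the accumulator resets; the resulting interspike intervals form the point process analysed for LEs. The threshold value is an arbitrary choice.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

dt = 0.005
t_eval = np.arange(0.0, 200.0, dt)
sol = solve_ivp(lorenz, (0.0, 200.0), [1.0, 1.0, 1.0], t_eval=t_eval, rtol=1e-8)
signal = sol.y[0] - sol.y[0].min() + 1.0   # shift so the integrand is positive

def integrate_and_fire(sig, dt, threshold):
    """Integrate sig until 'threshold' is reached, spike, reset; return ISIs."""
    isis, acc, last_spike = [], 0.0, 0.0
    for i, s in enumerate(sig):
        acc += s * dt
        if acc >= threshold:
            t = i * dt
            isis.append(t - last_spike)
            last_spike, acc = t, 0.0
    return np.array(isis)

# A lower threshold gives a higher firing rate (easier LE estimation).
isis = integrate_and_fire(signal, dt, threshold=5.0)
print(len(isis), isis[:5])
```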

  2. Quantifying Spin Hall Angles from Spin Pumping: Experiments and Theory

    NASA Astrophysics Data System (ADS)

    Mosendz, O.; Pearson, J. E.; Fradin, F. Y.; Bauer, G. E. W.; Bader, S. D.; Hoffmann, A.

    2010-01-01

    Spin Hall effects intermix spin and charge currents even in nonmagnetic materials and, therefore, ultimately may allow the use of spin transport without the need for ferromagnets. We show how spin Hall effects can be quantified by integrating Ni80Fe20|normal metal (N) bilayers into a coplanar waveguide. A dc spin current in N can be generated in a controllable way by spin pumping driven by ferromagnetic resonance. The transverse dc voltage detected along the Ni80Fe20|N bilayer has contributions from both the anisotropic magnetoresistance and the spin Hall effect, which can be distinguished by their symmetries. We developed a theory that accounts for both. In this way, we determine the spin Hall angle quantitatively for Pt, Au, and Mo. This approach can readily be adapted to any conducting material with even very small spin Hall angles.

  3. Gradient approach to quantify the gradation smoothness for output media

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method computed as a function of the 95th percentile of the second derivative (the tone-jump estimator) and the 5th percentile of the first derivative (the tone-clipping estimator). The performance of the proposed model and of a previously suggested method was evaluated psychophysically, and their prediction accuracies were compared with each other. Our model showed a stronger Pearson correlation to the corresponding visual data, with the magnitude of the Pearson correlation reaching up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass, and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
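
    A minimal numerical sketch of the two estimators named above is given below; treating the derivatives as absolute values and leaving their combination into a single smoothness score open are assumptions on my part, since the abstract does not specify them.

```python
import numpy as np

def gradation_estimators(ramp):
    """Tone-jump and tone-clipping estimators for a 1-D gradation ramp
    (e.g., lightness sampled along a printed color-to-color gradation)."""
    d1 = np.abs(np.gradient(ramp))                 # first derivative
    d2 = np.abs(np.gradient(np.gradient(ramp)))    # second derivative
    tone_jump = np.percentile(d2, 95)   # high curvature -> visible tone jumps
    tone_clip = np.percentile(d1, 5)    # near-zero slope -> clipped/flat regions
    return tone_jump, tone_clip

smooth = np.linspace(20.0, 90.0, 256)
jumpy = smooth.copy(); jumpy[128:] += 5.0              # abrupt tone jump
clipped = smooth.copy(); clipped[200:] = clipped[200]  # flat (clipped) tail
for name, ramp in [("smooth", smooth), ("jumpy", jumpy), ("clipped", clipped)]:
    print(name, gradation_estimators(ramp))
```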

  4. Quantifying chaotic dynamics from integrate-and-fire processes

    SciTech Connect

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  5. Manipulating and quantifying temperature-triggered coalescence with microcentrifugation.

    PubMed

    Feng, Huanhuan; Ershov, Dmitry; Krebs, Thomas; Schroen, Karin; Stuart, Martien A Cohen; van der Gucht, Jasper; Sprakel, Joris

    2015-01-01

    In this paper we describe a new approach to quantify the stability and coalescence kinetics of thermally switchable emulsions using an imaging-based microcentrifugation method. We first show that combining synchronized high-speed imaging with microfluidic centrifugation allows the direct measurement of the thermodynamic stability of emulsions, as expressed by the critical disjoining pressure. We apply this to a thermoresponsive emulsion, allowing us to measure the critical disjoining pressure as a function of temperature. The same method, combined with quantitative image analysis, also gives access to droplet-scale details of the coalescence process. We illustrate this by measuring temperature-dependent coalescence rates and by analysing the temperature-induced switching between two distinct microscopic mechanisms by which dense emulsions can destabilise to form a homogeneous oil phase. PMID:25337820

  6. Quantifying Genome Editing Outcomes at Endogenous Loci using SMRT Sequencing

    PubMed Central

    Clark, Joseph; Punjya, Niraj; Sebastiano, Vittorio; Bao, Gang; Porteus, Matthew H

    2014-01-01

    Targeted genome editing with engineered nucleases has transformed the ability to introduce precise sequence modifications at almost any site within the genome. A major obstacle to probing the efficiency and consequences of genome editing is that no existing method enables the frequency of different editing events to be simultaneously measured across a cell population at any endogenous genomic locus. We have developed a novel method for quantifying individual genome editing outcomes at any site of interest using single molecule real time (SMRT) DNA sequencing. We show that this approach can be applied at various loci, using multiple engineered nuclease platforms including TALENs, RNA guided endonucleases (CRISPR/Cas9), and ZFNs, and in different cell lines to identify conditions and strategies in which the desired engineering outcome has occurred. This approach facilitates the evaluation of new gene editing technologies and permits sensitive quantification of editing outcomes in almost every experimental system used. PMID:24685129

  7. Quantifying Age-dependent Extinction from Species Phylogenies

    PubMed Central

    Alexander, Helen K.; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218

  8. Quantifying Age-dependent Extinction from Species Phylogenies.

    PubMed

    Alexander, Helen K; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218

  9. Quantifying uncertainty in discharge measurements: A new approach

    USGS Publications Warehouse

    Kiang, J.E.; Cohn, T.A.; Mason, R.R.

    2009-01-01

    The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty and observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.

  10. Quantifying Permafrost Characteristics with DCR-ERT

    NASA Astrophysics Data System (ADS)

    Schnabel, W.; Trochim, E.; Munk, J.; Kanevskiy, M. Z.; Shur, Y.; Fortier, R.

    2012-12-01

    Geophysical methods are efficient for quantifying permafrost characteristics for Arctic road design and engineering. In the Alaskan Arctic, construction and maintenance of roads requires accounting for permafrost: ground that remains below 0 °C for two or more years. Features such as ice content and temperature are critical for understanding current and future ground conditions for planning, design, and evaluation of engineering applications. This study focused on the proposed Foothills West Transportation Access project corridor, where the purpose is to construct a new all-season road connecting the Dalton Highway to Umiat. Four major areas were chosen that represented a range of conditions including gravel bars, alluvial plains, tussock tundra (both unburned and burned conditions), high- and low-centered ice-wedge polygons, and an active thermokarst feature. Direct-current resistivity using galvanic contact (DCR-ERT) was applied along transects. In conjunction, complementary site data including boreholes, active-layer depths, vegetation descriptions, and site photographs were obtained. The boreholes provided information on soil morphology, ice texture, and gravimetric moisture content. Horizontal and vertical resolutions in the DCR-ERT were varied to determine the presence or absence of ground ice, subsurface heterogeneity, and the depth to groundwater (if present). The four main DCR-ERT configurations used were: 84 electrodes with 2 m spacing; 42 electrodes with 0.5 m spacing; 42 electrodes with 2 m spacing; and 84 electrodes with 1 m spacing. In terms of identifying ground ice characteristics, the higher horizontal resolution DCR-ERT transects with either 42 or 84 electrodes and 0.5 or 1 m spacing were best able to differentiate wedge ice. This evaluation is based on a combination of both borehole stratigraphy and surface characteristics. Simulated apparent resistivity values for permafrost areas varied from a low of 4582 Ω m to a high of 10034 Ω m. Previous studies found permafrost conditions with corresponding resistivity values as low as 5000 Ω m. This work emphasizes the necessity of tailoring the DCR-ERT survey to verified ground ice characteristics.

  11. Quantifying the impacts of global disasters

    NASA Astrophysics Data System (ADS)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major near-field tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, a large but not maximal tsunami in Hawaii, and the largest plausible tsunami in southern California. To support the analysis of global impacts, we begin with the Ports of Los Angeles and Long Beach, which account for >40% of the imports to the United States. We expand from there throughout California for the first level of economic analysis. We are looking to work with Alaska and Hawaii, especially on similar economic issues in ports, over the next year and to expand the analysis to consideration of economic interactions between the regions.

  12. Molecular Marker Approach on Characterizing and Quantifying Charcoal in Environmental Media

    NASA Astrophysics Data System (ADS)

    Kuo, L.; Herbert, B. E.; Louchouarn, P.

    2006-12-01

    Black carbon (BC) is widely distributed in natural environments including soils, sediments, freshwater, seawater and the atmosphere. It is produced mostly from the incomplete combustion of fossil fuels and vegetation. In recent years, increasing attention has been given to BC due to its potential influence in many biogeochemical processes. In the environment, BC exists as a continuum ranging from partly charred plant materials, charcoal residues to highly condensed soot and graphite particles. The heterogeneous nature of black carbon means that BC is always operationally-defined, highlighting the need for standard methods that support data comparisons. Unlike soot and graphite that can be quantified with well-established methods, it is difficult to directly quantify charcoal in geologic media due to its chemical and physical heterogeneity. Most of the available charcoal quantification methods detect unknown fractions of the BC continuum. To specifically identify and quantify charcoal in soils and sediments, we adopted and validated an innovative molecular marker approach that quantifies levoglucosan, a pyrogenic derivative of cellulose, as a proxy of charcoal. Levoglucosan is source-specific, stable, and detectable at low concentrations using gas chromatography-mass spectrometry (GC-MS). In the present study, two different plant species, honey mesquite and cordgrass, were selected as the raw materials to synthesize charcoals. The lab-synthesized charcoals were made under controlled conditions to eliminate the high heterogeneity often found in natural charcoals. The effects of two major combustion factors, temperature and duration, on the yield of levoglucosan were characterized in the lab-synthesized charcoals. Our results showed that significant levoglucosan production in the two types of charcoal was restricted to relatively low combustion temperatures (150-350 °C). The combustion duration did not cause significant differences in the yield of levoglucosan in the two charcoals. Interestingly, the low-temperature charcoals are undetectable by the acid dichromate oxidation method, a popular soot/charcoal analytical approach. Our study demonstrates that levoglucosan can serve as a proxy of low-temperature charcoals that are undetectable using other BC methods. Moreover, our study highlights the limitations of the common BC quantification methods to characterize the entire BC continuum.

  13. Interpreting cortical bone adaptation and load history by quantifying osteon morphotypes in circularly polarized light images.

    PubMed

    Skedros, John G; Mendenhall, Shaun D; Kiser, Casey J; Winet, Howard

    2009-03-01

    Birefringence variations in circularly polarized light (CPL) images of thin plane-parallel sections of cortical bone can be used to quantify regional differences in predominant collagen fiber orientation (CFO). Using CPL images of equine third metacarpals (MC3s), R.B. Martin, V.A. Gibson, S.M. Stover, J.C. Gibeling, and L.V. Griffin (40) described six secondary osteon variants ('morphotypes') and suggested that differences in their regional prevalence affect fatigue resistance and toughness. They devised a numerical osteon morphotype score (MTS) for quantifying regional differences in osteon morphotypes. We have observed that a modification of this score could significantly improve its use for interpreting load history. We hypothesized that our modified osteon MTS would more accurately reveal differences in osteon MTSs between opposing "tension" and "compression" cortices of diaphyses of habitually bent bones. This was tested using CPL images in transverse sections of calcanei from sheep, deer, and horses, and radii from sheep and horses. Equine MC3s and sheep tibiae were examined as controls because they experience comparatively greater load complexity that, because of increased prevalence of torsion/shear, would not require regional mechanical enhancements provided by different osteon morphotypes. Predominant CFO, which can reliably reflect adaptation for a regionally prevalent strain mode, was quantified as mean gray levels from birefringence of entire images (excluding pore spaces) in anterior, posterior, medial, and lateral cortices. Results showed that, in contrast to the original scoring scheme of Martin et al., the modified scheme revealed significant anterior/posterior differences in osteon MTSs in nearly all "tension/compression" bones (p<0.0001), but not in equine MC3s (p=0.30) and sheep tibiae (p=0.35). Among habitually bent bones, sheep radii were the exception; relatively lower osteon populations and the birefringence of the primary bone contributed to this result. Correlating osteon MTSs obtained with the scoring scheme of Martin et al. against CFO data from all regions of each bone invariably yielded weak-to-moderate negative correlations. This contrasts with typically high positive correlations between modified osteon MTSs and regional CFO. These results show that the modified osteon MTS can be a strong correlate of predominant CFO and of the non-uniform strain distribution produced by habitual bending. PMID:19049911

  14. Quantifying moisture transport in cementitious materials using neutron radiography

    NASA Astrophysics Data System (ADS)

    Lucero, Catherine L.

    A portion of the concrete pavements in the US has recently been observed to have premature joint deterioration. This damage is caused in part by the ingress of fluids, like water, salt water, or deicing salts. The ingress of these fluids can damage concrete when they freeze and expand or can react with the cementitious matrix, causing damage. To determine the quality of concrete for assessing potential service life, it is often necessary to measure the rate of fluid ingress, or sorptivity. Neutron imaging is a powerful method for quantifying fluid penetration since it can describe where water has penetrated, how quickly it has penetrated, and the volume of water in the concrete or mortar. Neutrons are sensitive to light atoms such as hydrogen and thus clearly detect water at high spatial and temporal resolution. It can be used to detect small changes in moisture content and is ideal for monitoring wetting and drying in mortar exposed to various fluids. This study aimed at developing a method to accurately estimate moisture content in mortar. The common practice is to image the material dry as a reference before exposing it to fluid and to normalize subsequent images to the reference. The volume of water can then be computed using the Beer-Lambert law. This method can be limiting because it requires exact image alignment between the reference image and all subsequent images. A model of neutron attenuation in a multi-phase cementitious composite was developed to be used in cases where a reference image is not available. The attenuation coefficients for water, un-hydrated cement, and sand were directly calculated from the neutron images. The attenuation coefficient for the hydration products was then back-calculated. The model can estimate the degree of saturation in a mortar with known mixture proportions without using a reference image for calculation. Absorption in mortars exposed to various fluids (i.e., deionized water and calcium chloride solutions) was investigated. It has been found through this study that small pores, namely voids created by chemical shrinkage, gel pores, and capillary pores, ranging from 0.5 nm to 50 µm, fill quickly through capillary action. However, large entrapped and entrained air voids ranging from 0.05 to 1.25 mm remain empty during the initial filling process. In mortar exposed to calcium chloride solution, a decrease in sorptivity was observed due to an increase in viscosity and surface tension of the solution, as proposed by Spragg et al. (2011). This work, however, also noted a decrease in the rate of absorption due to a reaction between the salt and the matrix, which results in the filling of the pores in the concrete. The results from neutron imaging can help in the interpretation of standard absorption tests. ASTM C1585 test results can be further analyzed in several ways that could give an accurate indication of the durability of the concrete. Results can be reported as depth of penetration versus the square root of time rather than mm³ of fluid per mm² of exposed surface area. Since a known fraction of pores are initially filling before reaching the edge of the sample, the actual depth of penetration can be calculated. This work is compared with an 'intrinsic sorptivity' that can be used to interpret mass measurements. Furthermore, the influence of shrinkage-reducing admixtures (SRAs) on drying was studied. Neutron radiographs showed that systems saturated in water remain "wetter" than systems saturated in 5% SRA solution. The SRA in the system reduces the moisture diffusion coefficient due to an increase in viscosity and a decrease in surface tension. Neutron radiography provided spatial information about the drying front that cannot be achieved using other methods.
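
    The reference-normalization step described above is an application of the Beer-Lambert law; the sketch below shows the per-pixel calculation, with the attenuation coefficient of water set to a placeholder value rather than one measured in this work.

```python
import numpy as np

# Beer-Lambert per-pixel water thickness:  I = I_ref * exp(-mu_w * t_w)
#   =>  t_w = -ln(I / I_ref) / mu_w
# I_ref is the dry reference radiograph; mu_w is an effective attenuation
# coefficient for water (the value below is illustrative only).
MU_WATER = 3.5  # 1/cm

def water_thickness_cm(wet_image, dry_image, mu=MU_WATER):
    transmission = np.clip(wet_image / dry_image, 1e-6, 1.0)
    return -np.log(transmission) / mu

# Toy example: a region holding 0.5 mm (0.05 cm) of water
dry = np.full((4, 4), 1000.0)
wet = dry.copy()
wet[1:3, 1:3] *= np.exp(-MU_WATER * 0.05)
print(np.round(water_thickness_cm(wet, dry), 3))
```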

  15. Quantifying uncertainty in earthquake rupture models

    NASA Astrophysics Data System (ADS)

    Page, Morgan T.

    Using dynamic and kinematic models, we analyze the ability of GPS and strong-motion data to recover the rupture history of earthquakes. By analyzing the near-source ground-motion generated by earthquake ruptures through barriers and asperities, we determine that both the prestress and yield stress of a frictional inhomogeneity can be recovered. In addition, we find that models with constraints on rupture velocity have less ground motion than constraint-free, spontaneous dynamic models with equivalent stress drops. This suggests that kinematic models with such constraints overestimate the actual stress heterogeneity of earthquakes. We use GPS data from the well-recorded 2004 Mw6.0 Parkfield Earthquake to further probe uncertainties in kinematic models. We find that the inversion for this data set is poorly resolved at depth and near the edges of the fault. In such an underdetermined inversion, it is possible to obtain spurious structure in poorly resolved areas. We demonstrate that a nonuniform grid with grid spacing matching the local resolution length on the fault outperforms small uniform grids, which generate spurious structure in poorly resolved regions, and large uniform grids, which lose recoverable information in well-resolved areas of the fault. The nonuniform grid correctly averages out large-scale structure in poorly resolved areas while recovering small-scale structure near the surface. In addition to probing model uncertainties in earthquake source models, we also examine the effect of model uncertainty in Probabilistic Seismic Hazard Analysis (PSHA). While methods for incorporating parameter uncertainty of a particular model in PSHA are well-understood, methods for incorporating model uncertainty are more difficult to implement due to the high degree of dependence between different earthquake-recurrence models. We show that the method used by the 2002 Working Group on California Earthquake Probabilities (WGCEP-2002) to combine the probability distributions given by multiple earthquake recurrence models has several adverse effects on their result. In particular, WGCEP-2002 uses a linear combination of the models which ignores model dependence and leads to large uncertainty in the final hazard estimate. In addition to analyzing current statistical problems, we present alternative methods for rigorously incorporating model uncertainty into PSHA.

  16. Quantifying the complexity of human colonic pressure signals using an entropy measure.

    PubMed

    Xu, Fei; Yan, Guozheng; Zhao, Kai; Lu, Li; Wang, Zhiwu; Gao, Jinyang

    2016-02-01

    Studying the complexity of human colonic pressure signals is important in understanding this intricate, evolved, dynamic system. This article presents a method for quantifying the complexity of colonic pressure signals using an entropy measure. As a self-adaptive non-stationary signal analysis algorithm, empirical mode decomposition can decompose a complex pressure signal into a set of intrinsic mode functions (IMFs). Considering that IMF2, IMF3, and IMF4 represent crucial characteristics of colonic motility, a new signal was reconstructed from these three IMFs. Then, the time entropy (TE), power spectral entropy (PSE), and approximate entropy (AE) of the reconstructed signal were calculated. For subjects with constipation and healthy individuals, experimental results showed that the entropies of the reconstructed signals distinguished the two classes. Moreover, the TE, PSE, and AE can be extracted as features for further subject classification. PMID:26043437
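
    Two of the three entropy measures can be sketched directly in Python; the EMD step is omitted here, and the parameter choices (embedding dimension m, tolerance r) are assumptions on my part, since the abstract does not state them.

```python
import numpy as np

def power_spectral_entropy(x):
    # Shannon entropy of the normalized power spectrum of x.
    psd = np.abs(np.fft.rfft(x - np.mean(x))) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log(p)))

def approximate_entropy(x, m=2, r_factor=0.2):
    # Standard ApEn(m, r) with r = r_factor * std(x); self-matches included.
    x = np.asarray(x, dtype=float)
    n, r = len(x), r_factor * np.std(x)

    def phi(mm):
        templates = np.array([x[i:i + mm] for i in range(n - mm + 1)])
        counts = [np.mean(np.max(np.abs(templates - t), axis=1) <= r)
                  for t in templates]
        return np.mean(np.log(counts))

    return float(phi(m) - phi(m + 1))

# Toy stand-in for the signal reconstructed from IMF2 + IMF3 + IMF4
t = np.linspace(0.0, 60.0, 1500)
rng = np.random.default_rng(1)
signal = np.sin(0.5 * t) + 0.5 * np.sin(1.7 * t) + 0.1 * rng.standard_normal(t.size)
print(power_spectral_entropy(signal), approximate_entropy(signal))
```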

  17. Life cycle assessment of urban wastewater systems: Quantifying the relative contribution of sewer systems.

    PubMed

    Risch, Eva; Gutierrez, Oriol; Roux, Philippe; Boutin, Catherine; Corominas, Lluís

    2015-06-15

    This study aims to propose a holistic life cycle assessment (LCA) of urban wastewater systems (UWS) based on a comprehensive inventory including detailed construction and operation of sewer systems and wastewater treatment plants (WWTPs). For the first time, the inventory of sewer infrastructure construction includes piping materials and aggregates, manholes, connections, civil works and road rehabilitation. The operation stage comprises energy consumption in pumping stations together with air emissions of methane and hydrogen sulphide, and water emissions from sewer leaks. Using a real case study, this LCA aims to quantify the contributions of sewer systems to the total environmental impacts of the UWS. The results show that the construction of sewer infrastructures has an environmental impact (on half of the 18 studied impact categories) larger than both the construction and operation of the WWTP. This study highlights the importance of including the construction and operation of sewer systems in the environmental assessment of centralised versus decentralised options for UWS. PMID:25839834

  18. A methodology to quantify the differences between alternative methods of heart rate variability measurement.

    PubMed

    García-González, M A; Fernández-Chimeno, M; Guede-Fernández, F; Ferrer-Mileo, V; Argelagós-Palau, A; Álvarez-Gómez, L; Parrado, E; Moreno, J; Capdevila, L; Ramos-Castro, J

    2016-01-01

    This work proposes a systematic procedure to report the differences between heart rate variability time series obtained from alternative measurements, reporting the spread and mean of the differences as well as the agreement between measuring procedures, and quantifying how stationary, random, and normal the differences between alternative measurements are. A description of the complete automatic procedure to obtain a differences time series (DTS) from two alternative methods, a proposal of a battery of statistical tests, and a set of statistical indicators to better describe the differences in RR interval estimation are also provided. Results show that the spread and agreement depend on the choice of alternative measurements and that the DTS cannot generally be considered a white or a normally distributed process. Nevertheless, in controlled measurements the DTS can be considered a stationary process. PMID:26657196
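
    A minimal sketch of the core of such a procedure is shown below: build the differences time series from two equal-length RR-interval series and apply off-the-shelf normality, whiteness, and stationarity tests. The specific tests, the Bland-Altman-style limits of agreement, and the scipy/statsmodels dependencies are my assumptions, not necessarily the battery used in the paper.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.diagnostic import acorr_ljungbox
from statsmodels.tsa.stattools import adfuller

def describe_differences(rr_a_ms, rr_b_ms):
    """Summarize the differences time series between two RR-interval series (ms)."""
    dts = np.asarray(rr_a_ms, float) - np.asarray(rr_b_ms, float)
    mean, sd = dts.mean(), dts.std(ddof=1)
    return {
        "mean_diff_ms": mean,
        "spread_sd_ms": sd,
        "limits_of_agreement_ms": (mean - 1.96 * sd, mean + 1.96 * sd),
        "normality_p": stats.shapiro(dts).pvalue,                             # normal?
        "whiteness_p": float(acorr_ljungbox(dts, lags=[10])["lb_pvalue"].iloc[0]),  # random?
        "stationarity_p": adfuller(dts)[1],                                   # stationary? (ADF)
    }

rng = np.random.default_rng(3)
rr_ref = 800 + 50 * np.sin(np.linspace(0, 20, 300)) + rng.normal(0, 10, 300)
rr_alt = rr_ref + rng.normal(2, 5, 300)   # alternative device: small bias plus jitter
print(describe_differences(rr_alt, rr_ref))
```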

  19. Quantifying the effect size of changing environmental controls on carbon release from permafrost-affected soils

    NASA Astrophysics Data System (ADS)

    Schaedel, C.; Bader, M. K. F.; Schuur, E. A. G.; Bracho, R. G.; Capek, P.; De Baets, S. L.; Diakova, K.; Ernakovich, J. G.; Hartley, I. P.; Iversen, C. M.; Kane, E. S.; Knoblauch, C.; Lupascu, M.; Natali, S.; Norby, R. J.; O'Donnell, J. A.; Roy Chowdhury, T.; Santruckova, H.; Shaver, G. R.; Sloan, V. L.; Treat, C. C.; Waldrop, M. P.

    2014-12-01

    High-latitude surface air temperatures are rising twice as fast as the global mean, causing permafrost to thaw and thereby exposing large quantities of previously frozen organic carbon (C) to microbial decomposition. Increasing temperatures in high-latitude ecosystems not only increase C emissions from previously frozen C in permafrost but also indirectly affect the C cycle through changes in regional and local hydrology. Warmer temperatures increase thawing of ice-rich permafrost, causing land surface subsidence where soils become waterlogged, anoxic conditions prevail, and C is released in the form of anaerobic CO2 and CH4. Although substrate quality, physical protection, and nutrient availability affect C decomposition, increasing temperatures and changes in surface and sub-surface hydrology are likely the dominant factors affecting the rate and form of C release from permafrost; however, their effect size on C release is poorly quantified. We have compiled a database of 24 incubation studies with soils from the active layer and permafrost from across the entire permafrost zone to quantify (a) the effect size of increasing temperatures and (b) the effect of changing from aerobic to anaerobic soil conditions on C release. Results from two different meta-analyses show that a 10°C increase in temperature increased C release by a factor of two in boreal forest, peatland, and tundra ecosystems. Under aerobic incubation conditions, soils released on average three times more C than under anaerobic conditions, with large variation among the different ecosystems. While peatlands showed similar amounts of C release under aerobic and anaerobic soil conditions, tundra and boreal forest ecosystems released up to 8 times more C under oxic conditions. This pan-arctic synthesis shows that boreal forest and tundra soils will have a larger impact on climate change when newly thawed permafrost C decomposes in an aerobic environment compared to an anaerobic environment, even when accounting for the higher heat-trapping capacity of CH4 over a 100-year timescale.

  20. Cross-linguistic relations between quantifiers and numerals in language acquisition: evidence from Japanese.

    PubMed

    Barner, David; Libenson, Amanda; Cheung, Pierina; Takasaki, Mayu

    2009-08-01

    A study of 104 Japanese-speaking 2- to 5-year-olds tested the relation between numeral and quantifier acquisition. A first study assessed Japanese children's comprehension of quantifiers, numerals, and classifiers. Relative to English-speaking counterparts, Japanese children were delayed in numeral comprehension at 2 years of age but showed no difference at 3 and 4 years of age. Also, Japanese 2-year-olds had better comprehension of quantifiers, indicating that their delay was specific to numerals. A second study examined the speech of Japanese and English caregivers to explore the syntactic cues that might affect integer acquisition. Quantifiers and numerals occurred in similar syntactic positions and overlapped to a greater degree in English than in Japanese. Also, Japanese nouns were often dropped, and both quantifiers and numerals exhibited variable positions relative to the nouns they modified. We conclude that syntactic cues in English facilitate bootstrapping numeral meanings from quantifier meanings and that such cues are weaker in classifier languages such as Japanese. PMID:19162276

  1. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
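
    The distance computation at the heart of PSA can be illustrated with SciPy's directed Hausdorff routine; this is a simplified stand-in (no trajectory superposition, no Fréchet option, no clustering), and the toy paths below are placeholders rather than MD data.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_path_distance(path_a, path_b):
    """Symmetric Hausdorff distance between two paths.

    Each path is an (n_frames, 3N) array: one row per frame holding the
    flattened 3N-dimensional configuration of the system."""
    d_ab = directed_hausdorff(path_a, path_b)[0]
    d_ba = directed_hausdorff(path_b, path_a)[0]
    return max(d_ab, d_ba)

# Toy example: two 50-frame paths in a 6-D configuration space that start
# together and gradually diverge in one coordinate.
frames = np.linspace(0.0, 1.0, 50)[:, None]
path1 = np.hstack([frames, np.zeros((50, 5))])
path2 = np.hstack([frames, 0.3 * frames, np.zeros((50, 4))])
print(hausdorff_path_distance(path1, path2))   # ~0.3, reached at the final frame
```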

  2. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways.

    PubMed

    Seyler, Sean L; Kumar, Avishek; Thorpe, M F; Beckstein, Oliver

    2015-10-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417

  3. Design and Analysis of a Micromechanical Three-Component Force Sensor for Characterizing and Quantifying Surface Roughness

    NASA Astrophysics Data System (ADS)

    Liang, Q.; Wu, W.; Zhang, D.; Wei, B.; Sun, W.; Wang, Y.; Ge, Y.

    2015-10-01

    Roughness, which can represent the trade-off between manufacturing cost and performance of mechanical components, is a critical predictor of cracks, corrosion, and fatigue damage. In order to measure polished or super-finished surfaces, a novel touch probe based on a three-component force sensor for characterizing and quantifying surface roughness is proposed, fabricated using silicon micromachining technology. The sensor design is based on a cross-beam structure, which ensures that the system possesses high sensitivity and low coupling. The results show that the proposed sensor possesses high sensitivity, low coupling error, and a temperature compensation function. The proposed system can be used to investigate micromechanical structures with nanometer accuracy.

  4. Quantifying transmission by stage of infection in the field: the example of SIV-1 and STLV-1 infecting mandrills.

    PubMed

    Roussel, Marion; Pontier, Dominique; Kazanji, Mirdad; Ngoubangoye, Barthélémy; Mahieux, Renaud; Verrier, Delphine; Fouchet, David

    2015-03-01

    The early stage of viral infection is often followed by a marked increase in viral load and is generally considered to be the stage most at risk for pathogen transmission. Most methods quantifying the relative importance of the different stages of infection were developed for studies aimed at measuring HIV transmission in humans. However, they cannot be transposed to animal populations in which less information is available. Here we propose a general method to quantify the importance of the early and late stages of the infection on micro-organism transmission from field studies. The method is based on a state-space dynamical model parameterized using Bayesian inference. It is illustrated by a 28-year dataset of mandrills infected by Simian Immunodeficiency Virus type-1 (SIV-1) and the Simian T-Cell Lymphotropic Virus type-1 (STLV-1). For both viruses we show that transmission is predominant during the early stage of the infection (transmission ratio for SIV-1: 1.16 [0.0009; 18.15] and 9.92 [0.03; 83.8] for STLV-1). However, in terms of basic reproductive number (R0), which quantifies the weight of both stages in the spread of the virus, the results suggest that the epidemics of SIV-1 and STLV-1 are mainly driven by late transmissions in this population. PMID:25296992

  5. Quantifying the Relationship Between Financial News and the Stock Market

    NASA Astrophysics Data System (ADS)

    Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

    2013-12-01

    The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked.

  6. Quantifying bushfire penetration into urban areas in Australia

    NASA Astrophysics Data System (ADS)

    Chen, Keping; McAneney, John

    2004-06-01

    The extent and trajectory of bushfire penetration at the bushland-urban interface are quantified using data from major historical fires in Australia. We find that the maximum distance at which homes are destroyed is typically less than 700 m. The probability of home destruction emerges as a simple linear and decreasing function of distance from the bushland-urban boundary but with a variable slope that presumably depends upon fire regime and human intervention. The collective data suggest that the probability of home destruction at the forest edge is around 60%. Spatial patterns of destroyed homes display significant neighbourhood clustering. Our results provide revealing spatial evidence for estimating fire risk to properties and suggest an ember-attack model.
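
    The figures quoted above admit a simple illustrative parametrization of destruction probability versus distance from the bushland edge; the linear form and exact numbers below are a reading of the abstract (roughly 60% at the edge, effectively zero beyond about 700 m), not the fitted model from the paper, and the slope would vary with fire regime and human intervention as noted.

```python
# Illustrative linear decay consistent with the figures quoted above:
# ~60% destruction probability at the forest edge, falling to zero by ~700 m.
def p_destruction(distance_m, p_edge=0.60, max_distance_m=700.0):
    if distance_m >= max_distance_m:
        return 0.0
    return p_edge * (1.0 - distance_m / max_distance_m)

for d in (0, 100, 350, 700):
    print(f"{d:4d} m -> {p_destruction(d):.2f}")
```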

  7. Quantifying the Behavior of Stock Correlations Under Market Stress

    PubMed Central

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242
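
    The central computation can be sketched as follows: the mean pairwise correlation among constituent returns in each rolling window, set against the normalized index return over the same window. The window length, the use of log returns, and the pandas-based layout are assumptions for illustration; the underlying data (daily DJIA constituent closes) are not included here.

```python
import numpy as np
import pandas as pd

def mean_correlation_vs_index_return(prices: pd.DataFrame, index_prices: pd.Series,
                                     window: int = 60) -> pd.DataFrame:
    """For each rolling window, compute the mean pairwise correlation of the
    constituents' log returns and the normalized index return over that window."""
    rets = np.log(prices).diff().dropna()
    idx_rets = np.log(index_prices).diff().dropna()
    records = []
    for end in range(window, len(rets) + 1):
        chunk = rets.iloc[end - window:end]
        c = chunk.corr().to_numpy()
        mean_corr = c[np.triu_indices_from(c, k=1)].mean()   # off-diagonal mean
        window_ret = idx_rets.iloc[end - window:end].sum()
        records.append((chunk.index[-1], mean_corr, window_ret))
    out = pd.DataFrame(records, columns=["date", "mean_corr", "index_return"])
    out = out.set_index("date")
    out["normalized_return"] = out["index_return"] / out["index_return"].std()
    return out

# Usage sketch: 'prices' holds one column of daily closes per DJIA stock and
# 'index_prices' the DJIA itself, aligned on the same dates (placeholder data).
```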

  8. Quantifying the Behavior of Stock Correlations Under Market Stress

    NASA Astrophysics Data System (ADS)

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-10-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios.

  9. Quantifying the behavior of stock correlations under market stress.

    PubMed

    Preis, Tobias; Kenett, Dror Y; Stanley, H Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242

  10. Quantifying the effects of anagenetic and cladogenetic evolution.

    PubMed

    Bartoszek, Krzysztof

    2014-08-01

    An ongoing debate in evolutionary biology is whether phenotypic change occurs predominantly around the time of speciation or whether it instead accumulates gradually over time. In this work I propose a general framework incorporating both types of change, quantify the effects of speciational change via the correlation between species and attribute the proportion of change to each type. I discuss results of parameter estimation of Hominoid body size in this light. I derive mathematical formulae related to this problem, the probability generating functions of the number of speciation events along a randomly drawn lineage and from the most recent common ancestor of two randomly chosen tip species for a conditioned Yule tree. Additionally I obtain in closed form the variance of the distance from the root to the most recent common ancestor of two randomly chosen tip species. PMID:24933475

  11. Quantifying light exposure patterns in young adult students

    PubMed Central

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2014-01-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects’ estimates of time spent indoors and outdoors. Subjects’ estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires. PMID:25342873

  12. Quantifying light exposure patterns in young adult students.

    PubMed

    Alvarez, Amanda A; Wildsoet, Christine F

    2013-10-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects' estimates of time spent indoors and outdoors. Subjects' estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires. PMID:25342873

  13. Identifying and quantifying interactions in a laboratory swarm

    NASA Astrophysics Data System (ADS)

    Puckett, James; Kelley, Douglas; Ouellette, Nicholas

    2013-03-01

    Emergent collective behavior, such as in flocks of birds or swarms of bees, is exhibited throughout the animal kingdom. Many models have been developed to describe swarming and flocking behavior using systems of self-propelled particles obeying simple rules or interacting via various potentials. However, due to experimental difficulties and constraints, little empirical data exists for characterizing the exact form of the biological interactions. We study laboratory swarms of flying Chironomus riparius midges, using stereoimaging and particle tracking techniques to record three-dimensional trajectories for all the individuals in the swarm. We describe methods to identify and quantify interactions by examining these trajectories, and report results on interaction magnitude, frequency, and mutuality.

  14. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T.; Mili, Ali

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area that monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.
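
    The abstract does not state the EA formula. The sketch below only illustrates how a stakeholder's mean failure cost rate might be combined with the classic steady-state availability; the function names, the combination rule, and the example numbers are assumptions, not the paper's definition.

    ```python
    def availability(mtbf_hours: float, mttr_hours: float) -> float:
        """Classic steady-state availability A = MTBF / (MTBF + MTTR)."""
        return mtbf_hours / (mtbf_hours + mttr_hours)

    def econometric_availability(mfc_per_hour: float, mtbf_hours: float, mttr_hours: float) -> float:
        """Hypothetical EA: expected $/hour lost by one stakeholder due to unavailability.

        mfc_per_hour is that stakeholder's mean failure cost rate; the paper's
        actual EA definition may combine the two quantities differently.
        """
        return mfc_per_hour * (1.0 - availability(mtbf_hours, mttr_hours))

    # Example (placeholder numbers): a stakeholder with a $12,000/h failure cost rate,
    # MTBF of 2,000 h and MTTR of 8 h loses roughly $48 per hour of operation on average.
    loss_rate = econometric_availability(12_000, 2_000, 8)
    ```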

  15. Quantifying the role of forest soil and bedrock in the acid neutralization of surface water in steep hillslopes.

    PubMed

    Asano, Yuko; Uchida, Taro

    2005-02-01

    The role of soil and bedrock in acid neutralizing processes has been difficult to quantify because of hydrological and biogeochemical uncertainties. To quantify those roles, hydrochemical observations were conducted at two hydrologically well-defined, steep granitic hillslopes in the Tanakami Mountains of Japan. These paired hillslopes are similar except for their soils; Fudoji is leached of base cations (base saturation <6%), while Rachidani is covered with fresh soil (base saturation >30%), because the erosion rate is 100-1000 times greater. The results showed that (1) soil solution pH at the soil-bedrock interface at Fudoji (4.3) was significantly lower than that of Rachidani (5.5), (2) the hillslope discharge pH in both hillslopes was similar (6.7-6.8), and (3) at Fudoji, 60% of the base cations leaching from the hillslope were derived from bedrock, whereas only 20% were derived from bedrock in Rachidani. Further, previously published results showed that the stream pH could not be predicted from the acid deposition rate and soil base saturation status. These results demonstrate that bedrock plays an especially important role when the overlying soil has been leached of base cations. These results indicate that while the status of soil acidification is a first-order control on vulnerability to surface water acidification, in some cases such as at Fudoji, subsurface interaction with the bedrock determines the sensitivity of surface water to acidic deposition. PMID:15519722

  16. Quantifying Qualitative Data Using Cognitive Maps

    ERIC Educational Resources Information Center

    Scherp, Hans-Ake

    2013-01-01

    The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate

  17. Quantifying induced effects of subsurface renewable energy storage

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas

    2015-04-01

    New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substance. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. For achieving this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model applications. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry of Education and Research (BMBF).

  18. The SEGUE K Giant Survey. III. Quantifying Galactic Halo Substructure

    NASA Astrophysics Data System (ADS)

    Janesh, William; Morrison, Heather L.; Ma, Zhibo; Rockosi, Constance; Starkenburg, Else; Xue, Xiang Xiang; Rix, Hans-Walter; Harding, Paul; Beers, Timothy C.; Johnson, Jennifer; Lee, Young Sun; Schneider, Donald P.

    2016-01-01

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5-125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey’s Sloan Extension for Galactic Understanding and Exploration project. Using a position-velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (˜33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity.
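
    The "4distance" position-velocity metric is not specified in the abstract, so the sketch below shows only a generic friends-of-friends linking step over a user-supplied feature space standing in for that metric; the feature scaling and linking length are placeholders.

    ```python
    import numpy as np
    from collections import deque

    def friends_of_friends(features: np.ndarray, linking_length: float) -> list:
        """Group points whose pairwise distance falls below a linking length.

        features: (n_stars, n_dims) array, e.g. scaled position-velocity coordinates
        used here as a placeholder for the paper's 4distance metric.
        Returns a list of index sets, one per group.
        """
        n = len(features)
        unvisited = set(range(n))
        groups = []
        while unvisited:
            seed = unvisited.pop()
            group, queue = {seed}, deque([seed])
            while queue:
                i = queue.popleft()
                dists = np.linalg.norm(features - features[i], axis=1)
                friends = {j for j in np.where(dists < linking_length)[0]} & unvisited
                unvisited -= friends
                group |= friends
                queue.extend(friends)
            groups.append(group)
        return groups
    ```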

  19. Quantifying circular-linear associations: hippocampal phase precession.

    PubMed

    Kempter, Richard; Leibold, Christian; Buzsáki, György; Diba, Kamran; Schmidt, Robert

    2012-05-30

    When a rat crosses the place field of a hippocampal pyramidal cell, this cell typically fires a series of spikes. Spike phases, measured with respect to theta oscillations of the local field potential, on average decrease as a function of the spatial distance traveled. This relation between phase and position of spikes might be a neural basis for encoding and is called phase precession. The degree of association between the circular phase variable and the linear spatial variable is commonly quantified, however, through a linear-linear correlation coefficient in which the circular variable is converted to a linear variable by restricting the phase to an arbitrarily chosen range, a choice that may bias the estimated correlation. Here we introduce a new measure to quantify circular-linear associations. This measure leads to a robust estimate of the slope and phase offset of the regression line, and it provides a correlation coefficient for circular-linear data that is a natural analog of Pearson's product-moment correlation coefficient for linear-linear data. Using surrogate data, we show that the new method outperforms the standard linear-linear approach with respect to estimates of the regression line and the correlation, and that the new method is less dependent on noise and sample size. We confirm these findings in a large data set of experimental recordings from hippocampal place cells and theta oscillations, and we discuss remaining problems that are relevant for the analysis and interpretation of phase precession. In summary, we provide a new method for the quantification of circular-linear associations. PMID:22487609
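
    A sketch of the general approach described: fit the slope by maximizing the mean resultant length of the phase residuals, then compute a circular-circular correlation between measured and fitted phases. The slope search range and grid size are assumptions; the paper's estimator may differ in detail.

    ```python
    import numpy as np

    def circular_linear_fit(x, phi, slope_bounds=(-2.0, 2.0), n_grid=4001):
        """Fit phi ~ 2*pi*a*x + phi0 on the circle and return (a, phi0, rho).

        x: linear variable (e.g. position); phi: phase in radians.
        slope_bounds and n_grid are assumed search settings, not from the paper.
        """
        x, phi = np.asarray(x, float), np.asarray(phi, float)
        slopes = np.linspace(*slope_bounds, n_grid)
        # mean resultant length of the phase residuals for each candidate slope
        resid = phi[None, :] - 2 * np.pi * slopes[:, None] * x[None, :]
        R = np.abs(np.exp(1j * resid).mean(axis=1))
        a = slopes[np.argmax(R)]
        phi0 = np.angle(np.exp(1j * (phi - 2 * np.pi * a * x)).mean())
        # circular-circular correlation between measured and fitted phases
        theta = np.mod(2 * np.pi * np.abs(a) * x, 2 * np.pi)
        phi_bar = np.angle(np.exp(1j * phi).mean())
        theta_bar = np.angle(np.exp(1j * theta).mean())
        num = np.sum(np.sin(phi - phi_bar) * np.sin(theta - theta_bar))
        den = np.sqrt(np.sum(np.sin(phi - phi_bar) ** 2) * np.sum(np.sin(theta - theta_bar) ** 2))
        return a, phi0, num / den
    ```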

  20. An organotypic spinal cord slice culture model to quantify neurodegeneration.

    PubMed

    Ravikumar, Madhumitha; Jain, Seema; Miller, Robert H; Capadona, Jeffrey R; Selkirk, Stephen M

    2012-11-15

    Activated microglia cells have been implicated in the neurodegenerative process of Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, and multiple sclerosis; however, the precise roles of microglia in disease progression are unclear. Despite these diseases having been described for more than a century, current FDA-approved therapeutics are symptomatic in nature, with little evidence supporting a neuroprotective effect. Furthermore, identifying novel therapeutics remains challenging due to undetermined etiology, a variable disease course, and the paucity of validated targets. Here, we describe the use of a novel ex vivo spinal cord culture system that offers the ability to screen potential neuroprotective agents, while maintaining the complexity of the in vivo environment. To this end, we treated spinal cord slice cultures with lipopolysaccharide and quantified neuron viability in culture using measurements of axon length and FluoroJadeC intensity. To simulate a microglia-mediated response to cellular debris, antigens, or implanted materials/devices, we supplemented the culture media with increasing densities of microspheres, facilitating microglia-mediated phagocytosis of the particles, which demonstrated a direct correlation between the phagocytic activities of microglia and neuronal health. To validate our model's capacity to accurately depict neuroprotection, cultures were treated with resveratrol, which demonstrated enhanced neuronal health. Our results successfully demonstrate the use of this model to reproducibly quantify the extent of neurodegeneration through the measurement of axon length and FluoroJadeC intensity, and we suggest this model will allow for accurate, high-throughput screening, which could expedite the translation of efficacious therapeutic agents to clinical trials. PMID:22975474

  1. Quantifying the Nonlinear, Anisotropic Material Response of Spinal Ligaments

    NASA Astrophysics Data System (ADS)

    Robertson, Daniel J.

    Spinal ligaments may be a significant source of chronic back pain, yet they are often disregarded by the clinical community due to a lack of information regarding their material response and innervation characteristics. The purpose of this dissertation was to characterize the material response of spinal ligaments and to review their innervation characteristics. Review of relevant literature revealed that all of the major spinal ligaments are innervated. They cause painful sensations when irritated and provide reflexive control of the deep spinal musculature. As such, including the neurologic implications of iatrogenic ligament damage in the evaluation of surgical procedures aimed at relieving back pain will likely result in more effective long-term solutions. The material response of spinal ligaments has not previously been fully quantified due to limitations associated with standard soft tissue testing techniques. The present work presents and validates a novel testing methodology capable of overcoming these limitations. In particular, the anisotropic, inhomogeneous material constitutive properties of the human supraspinous ligament are quantified and methods for determining the response of the other spinal ligaments are presented. In addition, a method for determining the anisotropic, inhomogeneous pre-strain distribution of the spinal ligaments is presented. The multi-axial pre-strain distributions of the human anterior longitudinal ligament, ligamentum flavum and supraspinous ligament were determined using this methodology. Results from this work clearly demonstrate that spinal ligaments are not uniaxial structures, and that finite element models that account for pre-strain and incorporate the ligaments' complex material properties may provide increased fidelity to the in vivo condition.

  2. Quantifying Local Radiation-Induced Lung Damage From Computed Tomography

    SciTech Connect

    Ghobadi, Ghazaleh; Hogeweg, Laurens E.; Brandenburg, Sytze; Langendijk, Johannes A.

    2010-02-01

    Purpose: Optimal implementation of new radiotherapy techniques requires accurate predictive models for normal tissue complications. Since clinically used dose distributions are nonuniform, local tissue damage needs to be measured and related to local tissue dose. In lung, radiation-induced damage results in density changes that have been measured by computed tomography (CT) imaging noninvasively, but not yet on a localized scale. Therefore, the aim of the present study was to develop a method for quantification of local radiation-induced lung tissue damage using CT. Methods and Materials: CT images of the thorax were made 8 and 26 weeks after irradiation of 100%, 75%, 50%, and 25% lung volume of rats. Local lung tissue structure (S_L) was quantified from the local mean and local standard deviation of the CT density in Hounsfield units in 1-mm³ subvolumes. The relation of changes in S_L (ΔS_L) to histologic changes and breathing rate was investigated. Feasibility for clinical application was tested by applying the method to CT images of a patient with non-small-cell lung carcinoma and investigating the local dose-effect relationship of ΔS_L. Results: In rats, a clear dose-response relationship of ΔS_L was observed at different time points after radiation. Furthermore, ΔS_L correlated strongly to histologic endpoints (infiltrates and inflammatory cells) and breathing rate. In the patient, progressive local dose-dependent increases in ΔS_L were observed. Conclusion: We developed a method to quantify local radiation-induced tissue damage in the lung using CT. This method can be used in the development of more accurate predictive models for normal tissue complications.
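
    A minimal sketch of the subvolume statistics described: per-block mean and standard deviation of Hounsfield units over non-overlapping subvolumes. How the two maps are combined into S_L is not given in the abstract, so the function simply returns both.

    ```python
    import numpy as np

    def local_ct_statistics(hu: np.ndarray, block: tuple):
        """Mean and standard deviation of CT density in non-overlapping subvolumes.

        hu: 3-D array of Hounsfield units; block: subvolume size in voxels
        (e.g. the number of voxels spanning 1 mm in each direction).
        The paper combines these local statistics into a structure measure S_L;
        that combination is not specified in the abstract, so both maps are returned.
        """
        # trim so the volume tiles exactly into whole blocks
        shape = tuple((s // b) * b for s, b in zip(hu.shape, block))
        hu = hu[: shape[0], : shape[1], : shape[2]]
        reshaped = hu.reshape(shape[0] // block[0], block[0],
                              shape[1] // block[1], block[1],
                              shape[2] // block[2], block[2])
        local_mean = reshaped.mean(axis=(1, 3, 5))
        local_std = reshaped.std(axis=(1, 3, 5))
        return local_mean, local_std
    ```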

  3. Prostatic ductal adenocarcinoma showing Bcl-2 expression.

    PubMed

    Tulunay, Özden; Orhan, Diclehan; Baltacı, Sümer; Göğüş, Çağatay; Müftüoğlu, Yusuf Z

    2004-09-01

    Prostatic ductal adenocarcinoma represents a rare histological variant of prostatic carcinoma with features of a papillary lesion at cystoscopy. There are conflicts regarding the existence, origin, staging, grading, treatment and clinical behavior of this tumor. The aim of the present study is to examine the expression of Bcl-2 and p53 in prostatic ductal adenocarcinoma and to evaluate its origin by analyzing prostate specific antigen, prostate specific acid phosphatase, cytokeratins, epithelial membrane antigen and carcinoembryonic antigen expressions. The results confirmed the expression of prostate specific antigen and prostate specific acid phosphatase in prostatic ductal adenocarcinoma. The demonstrated expression of Bcl-2 was predominant in the better-differentiated tumor. Bcl-2 expression appears not to be associated with neuroendocrine differentiation as assessed by chromogranin A reactivity. Thus, the first case of a prostatic ductal adenocarcinoma showing Bcl-2 expression is presented. The tumor was negative for p53. PMID:15379952

  4. Lemurs and macaques show similar numerical sensitivity

    PubMed Central

    Jones, Sarah M.; Pearson, John; DeWind, Nicholas K.; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M.

    2013-01-01

    We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order. PMID:24068469
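
    The abstract says Weber fractions were modeled but does not give the model; the sketch below uses a standard ANS discrimination model that is often fit to such data (probability correct as a function of the two numerosities and w), purely as an assumed illustration.

    ```python
    import numpy as np
    from scipy.special import erfc
    from scipy.optimize import minimize_scalar

    def p_correct(n1, n2, w):
        """Standard ANS model: probability of choosing the larger of two numerosities."""
        return 1.0 - 0.5 * erfc(np.abs(n1 - n2) / (np.sqrt(2) * w * np.sqrt(n1**2 + n2**2)))

    def fit_weber_fraction(n1, n2, correct):
        """Maximum-likelihood estimate of w from trial-level choice data (0/1 outcomes)."""
        n1, n2, correct = map(np.asarray, (n1, n2, correct))

        def neg_log_likelihood(w):
            p = np.clip(p_correct(n1, n2, w), 1e-9, 1 - 1e-9)
            return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

        # bounds are an assumed plausible range for w
        return minimize_scalar(neg_log_likelihood, bounds=(0.05, 2.0), method="bounded").x
    ```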

  5. Lemurs and macaques show similar numerical sensitivity.

    PubMed

    Jones, Sarah M; Pearson, John; DeWind, Nicholas K; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M

    2014-05-01

    We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order. PMID:24068469

  6. Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests

    SciTech Connect

    Krummel, J.R.; Su, Haiping; Fox, J.; Yarnasan, S.; Ekasingh, M.

    1995-06-01

    Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

  7. Quantifying the Benefits of Active Debris Removal in a Range of Scenarios

    NASA Astrophysics Data System (ADS)

    White, Adam E.; Lewis, Hugh G.

    2013-08-01

    Long-term space debris modelling studies have suggested that the ≥10 cm low Earth orbit debris population will continue to grow even with the widespread adoption of mitigation measures recommended by the Inter-Agency Space Debris Coordination Committee. However, a number of recent studies have shown that, with additional removal of a small number of debris objects, it is possible to prevent the growth of debris in LEO. These modelling studies were based on assumptions constraining future launch and explosion rates, solar activity and mitigation, amongst others, to a limited number of cases. As a result, the effectiveness of Active Debris Removal (ADR) has only been established and quantified for a narrow range of possible outcomes. Therefore, the potential benefits of ADR, in practice, remain uncertain and there is a need to investigate a wider range of potential future scenarios to help establish ADR requirements. In this paper, we present results of a study to model and quantify the influence of four essential assumptions on the effectiveness of ADR: (1) launch activity, (2) explosion activity, (3) solar activity and (4) compliance with post-mission disposal. Each assumption is given a realistic range based upon historic worst-case data and an optimistic best-case. Using the University of Southampton's Debris Analysis and Monitoring Architecture to the Geosynchronous Environment (DAMAGE) tool, these assumptions were modelled randomly from their permitted range in Monte Carlo projections from 2009 to 2209 of the ≥5 cm LEO debris environment. In addition, two yearly ADR rates were investigated: five and ten objects per year. The results show an increase in the variance of the mean LEO debris population at the 2209 epoch. The uncertainty is such that, in some cases, ADR was not sufficient to prevent the long-term growth of the population, whilst in others ADR is not required to prevent population growth.

  8. Quantifying commuter exposures to volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Kayne, Ashleigh

    Motor-vehicles can be a predominant source of air pollution in cities. Traffic-related air pollution is often unavoidable for people who live in populous areas. Commuters may have high exposures to traffic-related air pollution as they are close to vehicle tailpipes. Volatile organic compounds (VOCs) are one class of air pollutants of concern because exposure to VOCs carries risk for adverse health effects. Specific VOCs of interest for this work include benzene, toluene, ethylbenzene, and xylenes (BTEX), which are often found in gasoline and combustion products. Although methods exist to measure time-integrated personal exposures to BTEX, there are few practical methods to measure a commuter's time-resolved BTEX exposure which could identify peak exposures that could be concealed with a time-integrated measurement. This study evaluated the ability of a photoionization detector (PID) to measure commuters' exposure to BTEX using Tenax TA samples as a reference and quantified the difference in BTEX exposure between cyclists and drivers with windows open and closed. To determine the suitability of two measurement methods (PID and Tenax TA) for use in this study, the precision, linearity, and limits of detection (LODs) for both the PID and Tenax TA measurement methods were determined in the laboratory with standard BTEX calibration gases. Volunteers commuted from their homes to their work places by cycling or driving while wearing a personal exposure backpack containing a collocated PID and Tenax TA sampler. Volunteers completed a survey and indicated if the windows in their vehicle were open or closed. Comparing pairs of exposure data from the Tenax TA and PID sampling methods determined the suitability of the PID to measure the BTEX exposures of commuters. The difference between BTEX exposures of cyclists and drivers with windows open and closed in Fort Collins was determined. Both the PID and Tenax TA measurement methods were precise and linear when evaluated in the laboratory using standard BTEX gases. The LODs for the Tenax TA sampling tubes (determined with a sample volume of 1,000 standard cubic centimeters which is close to the approximate commuter sample volumes collected) were orders of magnitude lower (0.04 to 0.7 parts per billion (ppb) for individual compounds of BTEX) compared to the PIDs' LODs (9.3 to 15 ppb of a BTEX mixture), which makes the Tenax TA sampling method more suitable to measure BTEX concentrations in the sub-parts per billion (ppb) range. PID and Tenax TA data for commuter exposures were inversely related. The concentrations of VOCs measured by the PID were substantially higher than BTEX concentrations measured by collocated Tenax TA samplers. The inverse trend and the large difference in magnitude between PID responses and Tenax TA BTEX measurements indicates the two methods may have been measuring different air pollutants that are negatively correlated. Drivers in Fort Collins, Colorado with closed windows experienced greater time-weighted average BTEX exposures than cyclists (p: 0.04). Commuter BTEX exposures measured in Fort Collins were lower than commuter exposures measured in prior studies that occurred in larger cities (Boston and Copenhagen). Although route and intake may affect a commuter's BTEX dose, these variables are outside of the scope of this study. 
Within the limitations of this study (including: small sample size, small representative area of Fort Collins, and respiration rates not taken into account), it appears health risks associated with traffic-induced BTEX exposures may be reduced by commuting via cycling instead of driving with windows closed and living in a less populous area that has less vehicle traffic. Although the PID did not reliably measure low-level commuter BTEX exposures, the Tenax TA sampling method did. The PID measured BTEX concentrations reliably in a controlled environment, at high concentrations (300-800 ppb), and in the absence of other air pollutants. In environments where there could be multiple chemicals present that may produce a PID signal (such as nitrogen dioxide), Tenax TA samplers may be a better choice for measuring BTEX. Tenax TA measurements were the only suitable method within this study to measure commuter's BTEX exposure in Fort Collins, Colorado.

  9. Gait stability and variability measures show effects of impaired cognition and dual tasking in frail people

    PubMed Central

    2011-01-01

    Background Falls in frail elderly are a common problem with a rising incidence. Gait and postural instability are major risk factors for falling, particularly in geriatric patients. As walking requires attention, cognitive impairments are likely to contribute to an increased fall risk. An objective quantification of gait and balance ability is required to identify persons with a high tendency to fall. Recent studies have shown that stride variability is increased in elderly and under dual task condition and might be more sensitive to detect fall risk than walking speed. In the present study we complemented stride related measures with measures that quantify trunk movement patterns as indicators of dynamic balance ability during walking. The aim of the study was to quantify the effect of impaired cognition and dual tasking on gait variability and stability in geriatric patients. Methods Thirteen elderly with dementia (mean age: 82.6 ± 4.3 years) and thirteen without dementia (79.4 ± 5.55 years), recruited from a geriatric day clinic, walked at self-selected speed with and without performing a verbal dual task. The Mini Mental State Examination and the Seven Minute Screen were administered. Trunk accelerations were measured with an accelerometer. In addition to walking speed, mean, and variability of stride times, gait stability was quantified using stochastic dynamical measures, namely regularity (sample entropy, long range correlations) and local stability exponents of trunk accelerations. Results Dual tasking significantly (p < 0.05) decreased walking speed, while stride time variability increased, and stability and regularity of lateral trunk accelerations decreased. Cognitively impaired elderly showed significantly (p < 0.05) more changes in gait variability than cognitively intact elderly. Differences in dynamic parameters between groups were more discernible under dual task conditions. Conclusions The observed trunk adaptations were a consistent instability factor. These results support the concept that changes in cognitive functions contribute to changes in the variability and stability of the gait pattern. Walking under dual task conditions and quantifying gait using dynamical parameters can improve detecting walking disorders and might help to identify those elderly who are able to adapt walking ability and those who are not and thus are at greater risk for falling. PMID:21241487
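
    Of the dynamical measures listed, sample entropy is the most straightforward to sketch. The implementation below is a minimal version for a 1-D acceleration signal; the template length m and tolerance r are conventional choices, not the study's settings.

    ```python
    import numpy as np

    def sample_entropy(signal, m: int = 2, r_factor: float = 0.2) -> float:
        """Sample entropy of a 1-D signal (e.g. trunk acceleration).

        m: template length; tolerance r = r_factor * std(signal).
        These are conventional parameter choices, not taken from the paper.
        """
        x = np.asarray(signal, float)
        r = r_factor * x.std()

        def count_matches(length):
            # templates of the given length, restricted to the first len(x) - m start points
            templates = np.array([x[i:i + length] for i in range(len(x) - m)])
            count = 0
            for i in range(len(templates)):
                # Chebyshev distance to all later templates (self-matches excluded)
                dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
                count += np.sum(dist <= r)
            return count

        b, a = count_matches(m), count_matches(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf
    ```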

  10. QUANTIFYING DIFFUSIVE MASS TRANSFER IN FRACTURED SHALEBEDROCK

    EPA Science Inventory

    A significant limitation in defining remediation needs at contaminated sites often results from aninsufficient understanding of the transport processes that control contaminant migration. Theobjectives of this research were to help resolve this dilemma by providing an improved...

  11. Quantifying and Generalizing Hydrologic Responses to Dam Regulation using a Statistical Modeling Approach

    SciTech Connect

    McManamay, Ryan A

    2014-01-01

    Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at large spatial scales and to generalize the directionality of hydrologic responses.

  12. Hyperspectral remote sensing tools for quantifying plant litter and invasive species in arid ecosystems

    USGS Publications Warehouse

    Nagler, Pamela L.; Sridhar, B.B. Maruthi; Olsson, Aaryn Dyami; Glenn, Edward P.; van Leeuwen, Willem J.D.

    2012-01-01

    Green vegetation can be distinguished using visible and infrared multi-band and hyperspectral remote sensing methods. The problem has been in identifying the non-photosynthetically active radiation (PAR) landscape components, such as litter and soils, and distinguishing them from green vegetation. Additionally, distinguishing different species of green vegetation is challenging using the relatively few bands available on most satellite sensors. This chapter focuses on hyperspectral remote sensing characteristics that aim to distinguish between green vegetation, soil, and litter (or senescent vegetation). Quantifying litter by remote sensing methods is important in constructing carbon budgets of natural and agricultural ecosystems. Distinguishing between plant types is important in tracking the spread of invasive species. Green leaves of different species usually have similar spectra, making it difficult to distinguish between species. However, in this chapter we show that phenological differences between species can be used to detect some invasive species by their distinct patterns of greening and dormancy over an annual cycle based on hyperspectral data. Both applications require methods to quantify the non-green cellulosic fractions of plant tissues by remote sensing even in the presence of soil and green plant cover. We explore these methods and offer three case studies. The first concerns distinguishing surface litter from soil using the Cellulose Absorption Index (CAI), as applied to no-till farming practices where plant litter is left on the soil after harvest. The second involves using different band combinations to distinguish invasive saltcedar from agricultural and native riparian plants on the Lower Colorado River. The third illustrates the use of the CAI and NDVI in time-series analyses to distinguish between invasive buffelgrass and native plants in a desert environment in Arizona. Together the results show how hyperspectral imagery can be applied to solve problems that are not amenable to solution by the simple band combinations normally used in remote sensing.
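
    The exact band combinations used in the chapter are not listed in the abstract; the sketch below uses the commonly cited definitions of NDVI and the CAI (reflectance near 2.0, 2.1 and 2.2 µm) as an assumed illustration.

    ```python
    import numpy as np

    def ndvi(red: np.ndarray, nir: np.ndarray) -> np.ndarray:
        """Normalized Difference Vegetation Index from red and near-infrared reflectance."""
        return (nir - red) / (nir + red)

    def cellulose_absorption_index(r2000: np.ndarray, r2100: np.ndarray, r2200: np.ndarray) -> np.ndarray:
        """Cellulose Absorption Index from reflectance near 2.0, 2.1 and 2.2 micrometers.

        Positive values indicate the 2.1 um cellulose/lignin absorption feature of
        dry plant litter; bare soil tends toward zero or negative values.
        """
        return 0.5 * (r2000 + r2200) - r2100
    ```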

  13. A LC-MS method to quantify tenofovir urinary concentrations in treated patients.

    PubMed

    Simiele, Marco; Carcieri, Chiara; De Nicolò, Amedeo; Ariaudo, Alessandra; Sciandra, Mauro; Calcagno, Andrea; Bonora, Stefano; Di Perri, Giovanni; D'Avolio, Antonio

    2015-10-10

    Tenofovir disoproxil fumarate is a prodrug of tenofovir used in the treatment of HIV and HBV infections: it is the most widely used antiretroviral worldwide. Tenofovir is a nucleotide HIV reverse transcriptase inhibitor that has shown excellent long-term efficacy and tolerability. However, renal and bone complications (proximal tubulopathy, hypophosphatemia, decreased bone mineral density, and reduced creatinine clearance) limit its use. Tenofovir renal toxicity has been suggested to be a consequence of drug entrapment in proximal tubular cells: measuring tenofovir urinary concentrations may be a proxy of this event and may be used as a predictor of tenofovir side effects. No method is currently available for quantifying tenofovir in this matrix; therefore, the aim of this work was to validate a new LC-MS method for the quantification of urinary tenofovir. Chromatographic separation was achieved with a gradient (acetonitrile and water with formic acid 0.05%) on an Atlantis 5 µm T3, 4.6 mm × 150 mm, reversed-phase analytical column. Detection of tenofovir and the internal standard was achieved by electrospray ionization mass spectrometry in the positive ion mode. Calibration ranged from 391 to 100,000 ng/mL. The limit of quantification was 391 ng/mL and the limit of detection was 195 ng/mL. Mean recoveries of tenofovir and the internal standard were consistent and stable, while the matrix effect was low and stable. The method was tested on 35 urine samples from HIV-positive patients treated with tenofovir-based HAARTs and did not show any significant interference with antiretrovirals or other concomitantly administered drugs. All the observed concentrations in real samples fitted the calibration range, confirming the suitability of this method for use in clinical routine. If confirmed in ad hoc studies, this method may be used for quantifying tenofovir urinary concentrations and to help manage HIV-positive patients treated with tenofovir. PMID:25997174
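
    A generic sketch of quantification against a linear calibration curve, illustrating how a urinary concentration would be back-calculated from a peak-area ratio and reported against the 391 ng/mL limit of quantification; the unweighted linear fit and all variable names are assumptions, not the validated method's regression model.

    ```python
    import numpy as np

    def build_calibration(cal_conc, cal_ratio):
        """Fit a linear calibration curve (analyte/internal-standard peak-area ratio vs. concentration)."""
        slope, intercept = np.polyfit(np.asarray(cal_conc, float), np.asarray(cal_ratio, float), 1)
        return slope, intercept

    def quantify(sample_ratio, slope, intercept, loq=391.0):
        """Back-calculate a urinary concentration in ng/mL; report None below the LOQ."""
        conc = (sample_ratio - intercept) / slope
        return conc if conc >= loq else None
    ```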

  14. Using `LIRA' To Quantify Diffuse Structure Around X-ray and Gamma-Ray Pulsars

    NASA Astrophysics Data System (ADS)

    Connors, Alanna; Stein, Nathan M.; van Dyk, David; Siemiginowska, Aneta; Kashyap, Vinay; Roberts, Mallory

    2009-09-01

    In this poster, we exploit several capabilities of a Low-count Image Restoration and Analysis (LIRA) package, to quantify details of faint "scruffy" emission, consistent with PWN around X-ray and gamma-ray pulsars. Our preliminary results show evidence for irregular structure on scales of 1''-10'' or less (i.e. <500 pc), rather than larger smooth loops. Additionally, we can show this to be visible across several energy bands. LIRA grew out of work by the California-Boston Astro-Statistics Collaboration (CBASC) on analyzing high resolution, high energy Poisson images from X-ray and gamma-ray telescopes (see Stein et al., these proceedings; also Esch et al. 2004; and Connors and van Dyk in SCMAIV). LIRA fits a "Null" or background model shape times a scale factor, plus a flexible Multi-Scale (MS) model, folded through an instrument response (PSF, exposure). Embedding this in a fully Poisson probability structure allows us to map out uncertainties in our image analysis and reconstruction, via many MCMC samples. Specifically, for quantifying irregular nebular structure, we exploit the Multi-Scale model's smoothing parameters at each length-scale, as "Summary Statistics" (i.e., low-dimensional summaries of the probability space). When distributions of these summary statistics, from analysis of simulated "Null" data sets, are compared with those from the actual Chandra data, we can set quantitative limits on structures at different length scales. Since one can do this for very low counts, one is able to analyze and compare structure in several energy slices. This work is supported by NSF and AISR funds.

  15. Children's Knowledge of the Quantifier "Dou" in Mandarin Chinese

    ERIC Educational Resources Information Center

    Zhou, Peng; Crain, Stephen

    2011-01-01

    The quantifier "dou" (roughly corresponding to English "all") in Mandarin Chinese has been the topic of much discussion in the theoretical literature. This study investigated children's knowledge of this quantifier using a new methodological technique, which we dubbed the Question-Statement Task. Three questions were addressed: (i) whether young

  16. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking

  17. Shortcuts to Quantifier Interpretation in Children and Adults

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Sekerina, Irina

    2006-01-01

    Errors involving universal quantification are common in contexts depicting sets of individuals in partial, one-to-one correspondence. In this article, we explore whether quantifier-spreading errors are more common with distributive quantifiers each and every than with all. In Experiments 1 and 2, 96 children (5- to 9-year-olds) viewed pairs of

  18. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  19. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    NASA Astrophysics Data System (ADS)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  20. Quantifying selective pressures driving bacterial evolution using lineage analysis

    PubMed Central

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population’s rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages –i.e. the life-histories of individuals and their ancestors– to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to E. coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life-history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection, and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems. PMID:26213639

  1. Monitoring Microemboli During Cardiopulmonary Bypass with the EDAC Quantifier

    PubMed Central

    Lynch, John E.; Wells, Christopher; Akers, Tom; Frantz, Paul; Garrett, Donna; Scott, M. Lance; Williamson, Lisa; Agnew, Barbara; Lynch, John K.

    2010-01-01

    Abstract: Gaseous emboli may be introduced into the bypass circuit both from the surgical field and during perfusionist interventions. While circuits provide good protection against massive air embolism, they do not remove gaseous microemboli (GME) from the bypass circuit. The purpose of this preliminary study is to assess the incidence of GME during bypass surgery and determine if increased GME counts were associated with specific events during bypass surgery. In 30 cases divided between 15 coronary artery bypass grafts and 15 valve repairs, GME were counted and sized at the three locations on the bypass circuit using the EDAC Quantifier (Luna Innovations, Roanoke, VA). A mean of 45,276 GME were detected after the arterial line filter during these 30 cases, with significantly more detected (p = .04) post filter during valve cases (mean = 72,137 ± 22,113) than coronary artery bypass graft cases (mean = 18,416 ± 7831). GME detected post filter were significantly correlated in time with counts detected in the venous line (p < .001). Specific events associated with high counts included the initiation of cardiopulmonary bypass, heart manipulations, insertion and removal of clamps, and the administration of drugs. Global factors associated with increased counts post filter included higher venous line counts and higher post reservoir/bubble trap counts. The mean number of microemboli detected during bypass surgery was much higher than reported in other studies of emboli incidence, most likely due to the increased sensitivity of the EDAC Quantifier compared to other detection modalities. The results furthermore suggest the need for further study of the clinical significance of these microemboli and what practices may be used to reduce GME incidence. Increased in vitro testing of the air handling capability of different circuit designs, along with more clinical studies assessing best clinical practices for reducing GME activity, is recommended. PMID:21114224

  2. A new device to quantify tactile sensation in neuropathy

    PubMed Central

    Selim, M.M.; Brink, T.S.; Hodges, J.S.; Wendelschafer-Crabb, G.; Foster, S.X.Y.-L.; Nolano, M.; Provitera, V.; Simone, D.A.

    2011-01-01

    Objective: To devise a rapid, sensitive method to quantify tactile threshold of finger pads for early detection and staging of peripheral neuropathy and for use in clinical trials. Methods: Subjects were 166 healthy controls and 103 patients with, or at risk for, peripheral neuropathy. Subjects were screened by questionnaire. The test device, the Bumps, is a checkerboard-like smooth surface with 12 squares; each square encloses 5 colored circles. The subject explores the circles of each square with the index finger pad to locate the one circle containing a small bump. Bumps in different squares have different heights. Detection threshold is defined as the smallest bump height detected. In some subjects, a 3-mm skin biopsy from the tested finger pad was taken to compare density of Meissner corpuscles (MCs) to bump detection thresholds. Results: The mean (±SEM) bump detection threshold for control subjects was 3.3 ± 0.10 μm. Threshold and test time were age related, older subjects having slightly higher thresholds and using more time. Mean detection threshold of patients with neuropathy (6.2 ± 0.35 μm) differed from controls (p < 0.001). A proposed threshold for identifying impaired sensation had a sensitivity of 71% and specificity of 74%. Detection threshold was higher when MC density was decreased. Conclusions: These preliminary studies suggest that the Bumps test is a rapid, sensitive, inexpensive method to quantify tactile sensation of finger pads. It has potential for early diagnosis of tactile deficiency in subjects suspected of having neuropathy, for staging degree of tactile deficit, and for monitoring change over time. PMID:21555731

  3. Quantifying the prevalence of frailty in English hospitals

    PubMed Central

    Soong, J; Poots, AJ; Scott, S; Donald, K; Woodcock, T; Lovett, D; Bell, D

    2015-01-01

    Objectives Population ageing has been associated with an increase in comorbid chronic disease, functional dependence, disability and associated higher health care costs. Frailty Syndromes have been proposed as a way to define this group within older persons. We explore whether frailty syndromes are a reliable methodology to quantify clinically significant frailty within hospital settings, and measure trends and geospatial variation using the English secondary care data set Hospital Episode Statistics (HES). Setting National English Secondary Care Administrative Data HES. Participants All 50,540,141 patient spells for patients over 65 years admitted to acute provider hospitals in England (January 2005 to March 2013) within HES. Primary and secondary outcome measures We explore the prevalence of Frailty Syndromes as coded by the International Statistical Classification of Diseases, Injuries and Causes of Death (ICD-10) over time, and their geographic distribution across England. We examine national trends for admission spells, inpatient mortality and 30-day readmission. Results A rising trend of admission spells was noted from January 2005 to March 2013 (daily average admissions for the month rising from over 2000 to over 4000). The overall prevalence of coded frailty is increasing (64,559 spells in January 2005 to 150,085 spells by January 2013). The majority of patients had a single frailty syndrome coded (10.2% vs total burden of 13.9%). Cognitive impairment and falls (including significant fracture) are the most common frailty syndromes coded within HES. Geographic variation in frailty burden was in keeping with the known distribution of prevalence of the English elderly population and location of National Health Service (NHS) acute provider sites. Over time, in-hospital mortality has decreased (>65 years) whereas readmission rates have increased (esp. >85 years). Conclusions This study provides a novel methodology to reliably quantify clinically significant frailty. Applications include evaluation of health service improvement over time, risk stratification and optimisation of services. PMID:26490097

  4. Monitoring microemboli during cardiopulmonary bypass with the EDAC quantifier.

    PubMed

    Lynch, John E; Wells, Christopher; Akers, Tom; Frantz, Paul; Garrett, Donna; Scott, M Lance; Williamson, Lisa; Agnew, Barbara; Lynch, John K

    2010-09-01

    Gaseous emboli may be introduced into the bypass circuit both from the surgical field and during perfusionist interventions. While circuits provide good protection against massive air embolism, they do not remove gaseous microemboli (GME) from the bypass circuit. The purpose of this preliminary study is to assess the incidence of GME during bypass surgery and determine if increased GME counts were associated with specific events during bypass surgery. In 30 cases divided between 15 coronary artery bypass grafts and 15 valve repairs, GME were counted and sized at the three locations on the bypass circuit using the EDAC Quantifier (Luna Innovations, Roanoke, VA). A mean of 45,276 GME were detected after the arterial line filter during these 30 cases, with significantly more detected (p = .04) post filter during valve cases (mean = 72,137 +/- 22,113) than coronary artery bypass graft cases (mean = 18,416 +/- 7831). GME detected post filter were significantly correlated in time with counts detected in the venous line (p < .001). Specific events associated with high counts included the initiation of cardiopulmonary bypass, heart manipulations, insertion and removal of clamps, and the administration of drugs. Global factors associated with increased counts post filter included higher venous line counts and higher post reservoir/bubble trap counts. The mean number of microemboli detected during bypass surgery was much higher than reported in other studies of emboli incidence, most likely due to the increased sensitivity of the EDAC Quantifier compared to other detection modalities. The results furthermore suggest the need for further study of the clinical significance of these microemboli and what practices may be used to reduce GME incidence. Increased in vitro testing of the air handling capability of different circuit designs, along with more clinical studies assessing best clinical practices for reducing GME activity, is recommended. PMID:21114224

  5. Quantifying tissue mechanical properties using photoplethysmography

    SciTech Connect

    Akl, Tony; Wilson, Mark A.; Ericson, Milton Nance; Cote, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components: a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young's Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance.
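
    A minimal two-element Windkessel sketch, integrated with forward Euler, showing how compliance and resistance shape a simulated pressure pulse; the element count, inflow waveform and parameter values are placeholders, since the paper's model details are not given in the abstract.

    ```python
    import numpy as np

    def windkessel_pressure(resistance, compliance, heart_rate_hz=1.2,
                            duration_s=5.0, dt=1e-3):
        """Two-element Windkessel: C dP/dt = Q(t) - P/R, integrated with forward Euler.

        A half-sine inflow during systole stands in for the cardiac output waveform;
        resistance (mmHg*s/mL) and compliance (mL/mmHg) are the parameters whose
        effect on pulse rise and fall times is of interest.
        """
        t = np.arange(0.0, duration_s, dt)
        period = 1.0 / heart_rate_hz
        systole = 0.3 * period
        phase = np.mod(t, period)
        inflow = np.where(phase < systole, 80.0 * np.sin(np.pi * phase / systole), 0.0)  # mL/s

        pressure = np.empty_like(t)
        pressure[0] = 80.0  # mmHg, arbitrary starting value
        for i in range(1, len(t)):
            dp = (inflow[i - 1] - pressure[i - 1] / resistance) / compliance
            pressure[i] = pressure[i - 1] + dt * dp
        return t, pressure
    ```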

  6. Quantifying tissue mechanical properties using photoplethysmography

    PubMed Central

    Akl, Tony J.; Wilson, Mark A.; Ericson, M. Nance; Coté, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young’s Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance. PMID:25071970

  7. Quantifying MCMC Exploration of Phylogenetic Tree Space

    PubMed Central

    Whidden, Chris; Matsen, Frederick A.

    2015-01-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  8. Quantifying the Ease of Scientific Discovery

    PubMed Central

    Arbesman, Samuel

    2012-01-01

    It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines (mammalian species, chemical elements, and minor planets), I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science. PMID:22328796
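
    A hedged sketch of the kind of fit described above, using synthetic yearly values rather than the paper's data: an assumed ease-of-discovery curve ease(t) = a*exp(-b*t) is fitted by nonlinear least squares.

        # Exponential-decay fit of a synthetic "ease of discovery" series
        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, a, b):
            return a * np.exp(-b * t)

        rng = np.random.default_rng(0)
        years = np.arange(1800, 2001, 10)
        t = years - years[0]
        ease = decay(t, 1.0, 0.015) * rng.lognormal(0.0, 0.1, t.size)   # synthetic data

        (a_hat, b_hat), _ = curve_fit(decay, t, ease, p0=(1.0, 0.01))
        print(f"fitted decay rate b = {b_hat:.4f} per year "
              f"(half-life ~ {np.log(2) / b_hat:.0f} years)")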

  9. Quantifying MCMC exploration of phylogenetic tree space.

    PubMed

    Whidden, Chris; Matsen, Frederick A

    2015-05-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  10. A method to quantify transesterification activities of lipases in litters.

    PubMed

    Goujard, L; Ferre, E; Gil, G; Ruaudel, F; Farnet, A M

    2009-08-01

    Lipases are glycerol ester hydrolases (EC 3.1.1.3) produced by a wide range of microorganisms. They catalyse the hydrolysis of different esters, but this reaction is reversible, depending on the water content of the reaction medium, via esterification and transesterification. The synthetic activity of lipases can be of major importance in natural ecosystems since it can be involved in carbon storage in soils or litters. Here, the detection of transesterification activities of lipases in litter is reported for the first time. We used two different litters: litter of Quercus pubescens (QP) and litter of both Q. pubescens and Q. ilex. Different p-nitrophenyl esters and pentanol were used to test transesterification in a reaction medium with an organic solvent (heptane). We showed that these activities were proportional to the amount of litter, the incubation time and the substrate concentration, and that they increased with temperature. Furthermore, the lipases from the litters studied were very thermostable, since they were still active after 2 h at 70 degrees C. These activities showed common properties of lipases: the highest activities were obtained with a medium acyl-chain substrate, p-nitrophenyl caprylate, and transesterification activities were correlated with water activity, a(w). The following parameters are recommended to quantify transesterification activities in litter: 10 mM of p-nitrophenyl caprylate, 1 g of litter, 500 microL of pentanol, q.s.p. 4 mL of heptane, incubated at 30 degrees C for 2 h. PMID:19426767

  11. Quantifying higher-order correlations in a neuronal pool

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2015-03-01

    Recent experiments involving a relatively large population of neurons have shown a very significant amount of higher-order correlations. However, little is known of how these affect the integration and firing behavior of a population of neurons beyond the second order statistics. To investigate how higher-order input statistics can shape beyond-pairwise spike correlations and affect information coding in the brain, we consider a neuronal pool where each neuron fires stochastically. We develop a simple mathematically tractable model that makes it feasible to account for higher-order spike correlations in a neuronal pool with highly interconnected common inputs beyond second order statistics. In our model, correlations between neurons appear from q-Gaussian inputs into threshold neurons. The approach constitutes the natural extension of the Dichotomized Gaussian model, where the inputs to the model are just Gaussian distributed and therefore have no input interactions beyond second order. We obtain an exact analytical expression for the joint distribution of firing, quantifying the degree of higher-order spike correlations and emphasizing the functional aspects of higher-order statistics, as we account for beyond-second-order input correlations seen by each neuron within the pool. We determine how higher-order correlations depend on the interaction structure of the input, showing that the joint distribution of firing is skewed as the parameter q increases, inducing larger excursions of synchronized spikes. We show how input nonlinearities can shape higher-order correlations and enhance coding performance by neural populations.

  12. Quantifying Security Threats and Their Impact

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    2009-01-01

    In earlier works, we presented a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper we illustrate this infrastructure by means of an example involving an e-commerce application.

  13. Quantifying Effects Of Water Stress On Sunflowers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This poster presentation describes the data collection and analysis procedures and results for 2009 from a research grant funded by the National Sunflower Association. The primary objective was to evaluate the use of crop canopy temperature measured with infrared temperature sensors, as a more time ...

  14. Quantifying Non-Markovianity with Temporal Steering.

    PubMed

    Chen, Shin-Liang; Lambert, Neill; Li, Che-Ming; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco

    2016-01-15

    Einstein-Podolsky-Rosen (EPR) steering is a type of quantum correlation which allows one to remotely prepare, or steer, the state of a distant quantum system. While EPR steering can be thought of as a purely spatial correlation, there does exist a temporal analogue, in the form of single-system temporal steering. However, a precise quantification of such temporal steering has been lacking. Here, we show that it can be measured, via semidefinite programming, with a temporal steerable weight, in direct analogy to the recently proposed EPR steerable weight. We find a useful property of the temporal steerable weight in that it is a nonincreasing function under completely positive trace-preserving maps and can be used to define a sufficient and practical measure of strong non-Markovianity. PMID:26824533

  15. Quantifying Non-Markovianity with Temporal Steering

    NASA Astrophysics Data System (ADS)

    Chen, Shin-Liang; Lambert, Neill; Li, Che-Ming; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco

    2016-01-01

    Einstein-Podolsky-Rosen (EPR) steering is a type of quantum correlation which allows one to remotely prepare, or steer, the state of a distant quantum system. While EPR steering can be thought of as a purely spatial correlation, there does exist a temporal analogue, in the form of single-system temporal steering. However, a precise quantification of such temporal steering has been lacking. Here, we show that it can be measured, via semidefinite programming, with a temporal steerable weight, in direct analogy to the recently proposed EPR steerable weight. We find a useful property of the temporal steerable weight in that it is a nonincreasing function under completely positive trace-preserving maps and can be used to define a sufficient and practical measure of strong non-Markovianity.

  16. Quantifying non-Gaussianity for quantum information

    SciTech Connect

    Genoni, Marco G.; Paris, Matteo G. A.

    2010-11-15

    We address the quantification of non-Gaussianity (nG) of states and operations in continuous-variable systems and its use in quantum information. We start by illustrating in detail the properties and the relationships of two recently proposed measures of nG based on the Hilbert-Schmidt distance and the quantum relative entropy (QRE) between the state under examination and a reference Gaussian state. We then evaluate the non-Gaussianities of several families of non-Gaussian quantum states and show that the two measures have the same basic properties and also share the same qualitative behavior in most of the examples taken into account. However, we also show that they introduce a different relation of order; that is, they are not strictly monotone to each other. We exploit the nG measures for states in order to introduce a measure of nG for quantum operations, to assess Gaussification and de-Gaussification protocols, and to investigate in detail the role played by nG in entanglement-distillation protocols. Besides, we exploit the QRE-based nG measure to provide different insight on the extremality of Gaussian states for some entropic quantities such as conditional entropy, mutual information, and the Holevo bound. We also deal with parameter estimation and present a theorem connecting the QRE nG to the quantum Fisher information. Finally, since evaluation of the QRE nG measure requires the knowledge of the full density matrix, we derive some experimentally friendly lower bounds to nG for some classes of states and by considering the possibility of performing on the states only certain efficient or inefficient measurements.

  17. UV Photography Shows Hidden Sun Damage

    MedlinePLUS

    A UV photograph gives a view of hidden sun damage that increases the risk of developing skin cancer and prematurely aged skin; the resource pairs normal photographs with UV photographs of the same subjects at different ages.

  18. Quantifying vertical and horizontal stand structure using terrestrial LiDAR in Pacific Northwest forests

    NASA Astrophysics Data System (ADS)

    Kazakova, Alexandra N.

    Stand level spatial distribution is a fundamental part of forest structure that influences many ecological processes and ecosystem functions. Vertical and horizontal spatial structure provides key information for forest management. Although horizontal stand complexity can be measured through stem mapping and spatial analysis, assessing vertical complexity within the stand remains a mostly visual and highly subjective process. Tools and techniques in remote sensing, specifically LiDAR, provide three-dimensional datasets that can help characterize three-dimensional forest stand structure. Although aerial LiDAR (ALS) is the most widespread form of remote sensing for measuring forest structure, it has a high omission rate in dense and structurally complex forests. In this study we used terrestrial LiDAR (TLS) to obtain high-resolution three-dimensional point clouds of plots from stands that vary by density and composition in the second-growth Pacific Northwest forest ecosystem. We used point cloud slicing techniques and object-based image analysis (OBIA) to produce canopy profiles at multiple points along the vertical gradient. At each height point we produced segments that represented canopies or parts of canopies for each tree within the dataset. The resulting canopy segments were further analyzed using landscape metrics to quantify vertical canopy complexity within a single stand. Based on the developed method, we have successfully created a tool that utilizes three-dimensional spatial information to accurately quantify the vertical structure of forest stands. Results show significant differences in the number and the total area of the canopy segments and gap fraction between each vertical slice within and between individual forest management plots. We found a significant relationship between the stand density and composition and the vertical canopy complexity. The methods described in this research make it possible to create horizontal stand profiles at any point along the vertical gradient of forest stands with high frequency, therefore providing ecologists with measures of horizontal and vertical stand structure. Key Words: Terrestrial laser scanning, canopy structure, landscape metrics, aerial laser scanning, lidar, calibration, Pacific Northwest.
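
    A minimal sketch of the slicing step described above (not the OBIA workflow itself): bin a TLS point cloud into horizontal height slices and report, for each slice, the fraction of ground-grid cells that contain returns. The synthetic x, y, z arrays and grid sizes are placeholders.

        # Height-slice canopy cover from a (synthetic) terrestrial LiDAR point cloud
        import numpy as np

        def cover_by_slice(x, y, z, slice_height=2.0, cell=0.5):
            """Fraction of occupied ground cells in each horizontal slice of the cloud."""
            edges = np.arange(0.0, z.max() + slice_height, slice_height)
            nx = int(np.ceil((x.max() - x.min()) / cell))
            ny = int(np.ceil((y.max() - y.min()) / cell))
            extent = [[x.min(), x.max()], [y.min(), y.max()]]
            out = []
            for lo, hi in zip(edges[:-1], edges[1:]):
                sel = (z >= lo) & (z < hi)
                counts, _, _ = np.histogram2d(x[sel], y[sel], bins=(nx, ny), range=extent)
                out.append((lo, hi, (counts > 0).mean()))
            return out

        rng = np.random.default_rng(1)
        pts = rng.uniform([0, 0, 0], [20, 20, 30], size=(50_000, 3))   # toy point cloud
        for lo, hi, frac in cover_by_slice(*pts.T):
            print(f"{lo:4.1f}-{hi:4.1f} m: cover fraction {frac:.2f}")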

  19. Quantifying the impact of metamorphic reactions on strain localization in the mantle

    NASA Astrophysics Data System (ADS)

    Huet, Benjamin; Yamato, Philippe

    2014-05-01

    Metamorphic reactions are most often considered as a passive record of changes in pressure, temperature and fluid conditions that rocks experience. In that way, they provide key constraints on the tectonic evolution of the crust and the mantle. However, natural examples show that metamorphism can also modify the strength of rocks and affect strain localization in ductile shear zones. Hence, metamorphic reactions have an active role in tectonics by inducing softening and/or hardening depending on the reactions involved. Quantifying the mechanical effect of such metamorphic reactions is, therefore, a crucial task for determining both the strength distribution in the lithosphere and its evolution. However, estimating the effective strength of such polyphase rocks remains an open issue. Experimentally determined flow laws already exist for monophase aggregates and polyphase rocks of rheologically important materials. They provide good constraints on lithology-controlled variations in lithospheric strength. Unfortunately, since the whole range of mineralogical and chemical rock compositions cannot be experimentally tested, the variations in strength due to metamorphic reactions cannot be systematically and fully characterized. In order to tackle this issue, we present here the results of a study coupling thermodynamic and mechanical modeling that allows us to predict the mechanical impact of metamorphic reactions on the strength of the mantle. Thermodynamic modeling (using Theriak-Domino) is used to calculate the mineralogical composition of a typical peridotite as a function of pressure, temperature and water content. The calculated modes and the flow-law parameters for monophase aggregates are then used as input to the Minimized Power Geometric model to predict the strength of the polyphase aggregate. Our results are then used to quantify the strength evolution of the mantle as a function of pressure, temperature and water content in two characteristic tectonic contexts, by following the P-T evolutions undergone by the lithospheric mantle in subduction zones and in rifts. The mechanical consequences of metamorphic reactions at convergent and divergent plate boundaries are finally discussed.

  20. A new metric for quantifying performance impairment on the psychomotor vigilance test.

    PubMed

    Rajaraman, Srinivasan; Ramakrishnan, Sridhar; Thorsley, David; Wesensten, Nancy J; Balkin, Thomas J; Reifman, Jaques

    2012-12-01

    We have developed a new psychomotor vigilance test (PVT) metric for quantifying the effects of sleep loss on performance impairment. The new metric quantifies performance impairment by estimating the probability density of response times (RTs) in a PVT session, and then considering deviations of the density relative to that of a baseline-session density. Results from a controlled laboratory study involving 12 healthy adults subjected to 85 h of extended wakefulness, followed by 12 h of recovery sleep, revealed that the group performance variability based on the new metric remained relatively uniform throughout wakefulness. In contrast, the variability of PVT lapses, mean RT, median RT and (to a lesser extent) mean speed showed strong time-of-day effects, with the PVT lapse variability changing with time of day depending on the selected threshold. Our analysis suggests that the new metric captures more effectively the homeostatic and circadian process underlying sleep regulation than the other metrics, both directly in terms of larger effect sizes (4-61% larger) and indirectly through improved fits to the two-process model (9-67% larger coefficient of determination). Although the trend of the mean speed results followed those of the new metric, we found that mean speed yields significantly smaller (~50%) intersubject performance variance than the other metrics. Based on these findings, and that the new metric considers performance changes based on the entire set of responses relative to a baseline, we conclude that it provides a number of potential advantages over the traditional PVT metrics. PMID:22436093
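
    An illustrative sketch of a density-based impairment score in the spirit described above (the exact metric of the paper is not reproduced here): kernel density estimates of the response-time distributions for a test session and a baseline session are compared via their integrated absolute difference. The RT samples below are synthetic.

        # Density-deviation score between a PVT session and its baseline
        import numpy as np
        from scipy.integrate import trapezoid
        from scipy.stats import gaussian_kde

        def density_deviation(baseline_rt, session_rt, grid=None):
            """Integrated absolute difference between session and baseline RT densities."""
            if grid is None:
                hi = max(np.max(baseline_rt), np.max(session_rt))
                grid = np.linspace(100.0, hi, 512)          # RTs in milliseconds
            f_base = gaussian_kde(baseline_rt)(grid)
            f_sess = gaussian_kde(session_rt)(grid)
            return trapezoid(np.abs(f_sess - f_base), grid)

        rng = np.random.default_rng(2)
        baseline = rng.lognormal(mean=5.5, sigma=0.15, size=300)        # ~245 ms typical RTs
        sleep_deprived = rng.lognormal(mean=5.7, sigma=0.35, size=300)  # slower, more variable
        print(f"impairment score: {density_deviation(baseline, sleep_deprived):.4f}")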

  1. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    SciTech Connect

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), and for many different D, C, and S. These problem variants include decision and optimization problems for formulas, quantified formulas, and stochastically-quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially-quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_logn^bw-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4) For all ε > 0, the problems MAX-NSF-Q-SAT(S) and MAX-NSF-S-SAT(S) are PSPACE-hard to approximate within a factor of n^ε times optimum. These results significantly extend the earlier results of (i) Papadimitriou [Pa85] on the complexity of stochastic satisfiability and (ii) Condon, Feigenbaum, Lund and Shor [CF+93, CF+94] by identifying natural classes of PSPACE-hard optimization problems with provably PSPACE-hard ε-approximation problems. Moreover, most of our results hold not just for Boolean relations; most previous results were obtained only in the context of Boolean domains. The results also constitute a significant step towards obtaining dichotomy theorems for the problems MAX-S-SAT(S) and MAX-Q-SAT(S), a research area of recent interest [CF+93, CF+94, Cr95, KSW97, LMP99].

  2. Children with Autism Show Reduced Somatosensory Response: An MEG Study

    PubMed Central

    Marco, Elysa J.; Khatibi, Kasra; Hill, Susanna S.; Siegel, Bryna; Arroyo, Monica S.; Dowling, Anne F.; Neuhaus, John M.; Sherr, Elliott H.; Hinkley, Leighton N. B.; Nagarajan, Srikantan S.

    2012-01-01

    Lay Abstract Autism spectrum disorders are reported to affect nearly one out of every one hundred children, with over 90% of these children showing behavioral disturbances related to the processing of basic sensory information. Behavioral sensitivity to light touch, such as profound discomfort with clothing tags and physical contact, is a ubiquitous finding in children on the autism spectrum. In this study, we investigate the strength and timing of brain activity in response to simple, light taps to the fingertip. Our results suggest that children with autism show a diminished early response in the primary somatosensory cortex (S1). This finding is most evident in the left hemisphere. In exploratory analysis, we also show that tactile sensory behavior, as measured by the Sensory Profile, may be a better predictor of the intensity and timing of brain activity related to touch than a clinical autism diagnosis. We report that children with atypical tactile behavior have significantly lower amplitude somatosensory cortical responses in both hemispheres. Thus sensory behavioral phenotype appears to be a more powerful strategy for investigating neural activity in this cohort. This study provides evidence for atypical brain activity during sensory processing in autistic children and suggests that our sensory behavior based methodology may be an important approach to investigating brain activity in people with autism and neurodevelopmental disorders. Scientific Abstract The neural underpinnings of sensory processing differences in autism remain poorly understood. This prospective magnetoencephalography (MEG) study investigates whether children with autism show atypical cortical activity in the primary somatosensory cortex (S1) in comparison to matched controls. Tactile stimuli were clearly detectable, painless taps applied to the distal phalanx of the second (D2) and third (D3) fingers of the right and left hands. Three tactile paradigms were administered: an oddball paradigm (standard taps to D3 at an inter-stimulus interval (ISI) of 0.33 and deviant taps to D2 with ISI ranging from 1.32–1.64s); a slow-rate paradigm (D2) with an ISI matching the deviant taps in the oddball paradigm; and a fast-rate paradigm (D2) with an ISI matching the standard taps in the oddball. Study subjects were boys (age 7–11 years) with and without autism disorder. Sensory behavior was quantified using the Sensory Profile questionnaire. Boys with autism exhibited smaller amplitude left hemisphere S1 response to slow and deviant stimuli during the right hand paradigms. In post-hoc analysis, tactile behavior directly correlated with the amplitude of cortical response. Consequently, the children were re-categorized by degree of parent-report tactile sensitivity. This regrouping created a more robust distinction between the groups with amplitude diminution in the left and right hemispheres and latency prolongation in the right hemisphere in the deviant and slow-rate paradigms for the affected children. This study suggests that children with autism have early differences in somatosensory processing, which likely influence later stages of cortical activity from integration to motor response. PMID:22933354

  3. Quantifying human response capabilities towards tsunami threats at community level

    NASA Astrophysics Data System (ADS)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    Decision makers at the community level need detailed information on tsunami risks in their area. Knowledge of the potential hazard impact, exposed elements such as people, critical facilities and lifelines, people's coping capacity and recovery potential is crucial to plan precautionary measures for adaptation and to mitigate potential impacts of tsunamis on society and the environment. A crucial point within a people-centred tsunami risk assessment is to quantify the human response capabilities towards tsunami threats. Based on this quantification and its spatial representation in maps, tsunami-affected and safe areas, difficult-to-evacuate areas, evacuation target points and evacuation routes can be assigned and used as an important contribution to, for example, community-level evacuation planning. A major component in the quantification of human response capabilities towards tsunami impacts is the factor time. The human response capabilities depend on the estimated time of arrival (ETA) of a tsunami, the time until technical or natural warning signs (ToNW) can be received, the reaction time (RT) of the population (human understanding of a tsunami warning and the decision to take appropriate action), the evacuation time (ET, the time people need to reach a safe area) and the actual available response time (RsT = ETA - ToNW - RT). If RsT is larger than ET, people in the respective areas are able to reach a safe area and rescue themselves. Critical areas possess RsT values equal to or even smaller than ET, and hence people within these areas will be directly affected by a tsunami. Quantifying the factor time is challenging, and an attempt is presented here. The ETA can be derived by analyzing pre-computed tsunami scenarios for a respective area. For ToNW we assume that the early warning center is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes. RT is difficult to quantify, as intrinsic human factors such as educational level, beliefs, tsunami knowledge and experience, among others, play a role; an attempt to quantify this variable under high uncertainty is also presented. Quantifying ET is based on GIS modelling using a Cost Weighted Distance approach. The basic principle is to define the best evacuation path from a given point to the next safe area (shelter location), i.e. the fastest path from that point to the shelter location. The impacts of land cover, slope, population density, and population age and gender distribution are taken into account, as literature studies show these factors to be highly important. Knowing the fastest path and the distance to the next safe area, together with a spatially distributed pattern of evacuation speed, delivers the time needed from each location to reach a safe area. By then comparing this evacuation time with the obtained value of RsT, the coverage area of an evacuation target point (safe area) can be assigned, and incorporating knowledge of the capacity of an evacuation target point, the respective coverage area is refined. Hence areas with weak, moderate and good human response capabilities can be detected. This allows calculation of the potential number of people affected (dead or injured) and the number of people displaced. First results for Kuta (Bali) for a worst-case tsunami event indicate approximately 25,000 people affected when RT = 0 minutes (immediate evacuation upon receiving a tsunami warning) and up to 120,000 when RT > ETA (no evacuation action before the tsunami reaches land). Additionally, the fastest evacuation routes to the evacuation target points can be assigned.
Areas with weak response capabilities can be designated as priority areas in which to install, for example, additional evacuation target points, or to increase tsunami knowledge and awareness to promote a faster reaction time. In particular, analyzing the underlying socio-economic factors that cause deficiencies in responding to a tsunami threat can yield valuable information and directly guide the planning of adaptation measures. Keywords: Community level, Risk and vulnerability assessment, Early warning, Disaster management, Tsunami, Indonesia
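
    A worked sketch of the time budget defined above; the 5-minute warning-issue time follows the assumption stated in the abstract, while the ETA, reaction-time and evacuation-time values for the example cells are purely hypothetical.

        # Available response time RsT = ETA - ToNW - RT, compared with evacuation time ET
        def response_budget(eta_min, tonw_min=5.0, rt_min=10.0):
            """Available response time in minutes."""
            return eta_min - tonw_min - rt_min

        # Hypothetical grid cells: (estimated time of arrival, modelled evacuation time), minutes
        cells = {"beach_front": (25.0, 18.0), "market": (30.0, 12.0), "hill_side": (35.0, 6.0)}
        for name, (eta, et) in cells.items():
            rst = response_budget(eta)
            status = "evacuable" if rst > et else "critical area"
            print(f"{name:12s} RsT = {rst:4.1f} min, ET = {et:4.1f} min -> {status}")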

  4. Quantifying the Electrocatalytic Turnover of Vitamin B12-Mediated Dehalogenation on Single Soft Nanoparticles.

    PubMed

    Cheng, Wei; Compton, Richard G

    2016-02-01

    We report the electrocatalytic dehalogenation of trichloroethylene (TCE) by single soft nanoparticles in the form of vitamin B12-containing droplets. We quantify the turnover number of the catalytic reaction at the single soft nanoparticle level. The kinetic data shows that the binding of TCE with the electro-reduced vitamin in the Co(I) oxidation state is chemically reversible. PMID:26806226

  5. Ancient bacteria show evidence of DNA repair

    PubMed Central

    Johnson, Sarah Stewart; Hebsgaard, Martin B.; Christensen, Torben R.; Mastepanov, Mikhail; Nielsen, Rasmus; Munch, Kasper; Brand, Tina; Gilbert, M. Thomas P.; Zuber, Maria T.; Bunce, Michael; Rønn, Regin; Gilichinsky, David; Froese, Duane; Willerslev, Eske

    2007-01-01

    Recent claims of cultivable ancient bacteria within sealed environments highlight our limited understanding of the mechanisms behind long-term cell survival. It remains unclear how dormancy, a favored explanation for extended cellular persistence, can cope with spontaneous genomic decay over geological timescales. There has been no direct evidence in ancient microbes for the most likely mechanism, active DNA repair, or for the metabolic activity necessary to sustain it. In this paper, we couple PCR and enzymatic treatment of DNA with direct respiration measurements to investigate long-term survival of bacteria sealed in frozen conditions for up to one million years. Our results show evidence of bacterial survival in samples up to half a million years in age, making this the oldest independently authenticated DNA to date obtained from viable cells. Additionally, we find strong evidence that this long-term survival is closely tied to cellular metabolic activity and DNA repair that over time proves to be superior to dormancy as a mechanism in sustaining bacteria viability. PMID:17728401

  6. Quantified EEG in different G situations

    NASA Astrophysics Data System (ADS)

    de Metz, K.; Quadens, O.; De Graeve, M.

    The electrical activity of the brain (EEG) has been recorded during parabolic flights in both trained astronauts and untrained volunteers. Fast Fourier analysis of the EEG activity revealed more asymmetry between the two brain hemispheres in the subjects who suffered from motion sickness than in the others. However, such an FFT classification does not lead to a discrimination between deterministic and stochastic events. Therefore, a first attempt was made to calculate the dimensionality of "chaotic attractors" in the EEG patterns as a function of the different g-epochs of one parabola. Very preliminary results are given here.

  7. Quantifying oil filtration effects on bearing life

    SciTech Connect

    Needelman, W.M.; Zaretsky, E.V.

    1991-06-01

    Rolling-element bearing life is influenced by the number, size, and material properties of particles entering the Hertzian contact of the rolling element and raceway. In general, rolling-element bearing life increases with increasing level of oil filtration. Based upon test results, two equations are presented which allow for the adjustment of bearing L10 or catalog life based upon oil filter rating. It is recommended that where no oil filtration is used catalog life be reduced by 50 percent.

  8. Quantifying oil filtration effects on bearing life

    NASA Technical Reports Server (NTRS)

    Needelman, William M.; Zaretsky, Erwin V.

    1991-01-01

    Rolling-element bearing life is influenced by the number, size, and material properties of particles entering the Hertzian contact of the rolling element and raceway. In general, rolling-element bearing life increases with increasing level of oil filtration. Based upon test results, two equations are presented which allow for the adjustment of bearing L10 or catalog life based upon oil filter rating. It is recommended that where no oil filtration is used catalog life be reduced by 50 percent.

  9. Quantifying Differential Rotation Across the Main Sequence

    NASA Astrophysics Data System (ADS)

    Ule, Nicholas M.

    We have constructed a sample of eight stars from the Kepler field covering a broad range of spectral types, from F7 to K3. These stars have well defined rotation rates and show evidence of differential rotation in their lightcurves. In order to robustly determine differential rotation, the inclination of a star must first be known. Thus, we have obtained moderate-resolution spectra of these targets and measured their projected rotational velocities (v sin i), which are then used to determine inclinations. The photometric variations often seen in stars are created by star spots, which we model in order to determine differential rotation. We have adapted the starspotz model developed by Croll (2006) with an asexual genetic algorithm to measure the strength of differential rotation (described with the parameter k). The photometric data were broken into 167 segments which were modeled for 6--8 values of k, with each model producing 50,000+ solutions. The value of k whose solution produced the closest fit to the data was taken as the best value of k for that lightcurve segment. With these data we also performed signal analysis, which indicated the presence of long-lived, latitudinally dependent active regions on stars. For our eight targets we successfully determined differential rotation rates and evaluated those values in relation to stellar temperature and rotational period. Coupled with previously published values for nine additional targets, we find no relation between temperature and differential rotation, but we do find a strong trend with rotation rate.

  10. Quantifying the origin of metallic glass formation.

    PubMed

    Johnson, W L; Na, J H; Demetriou, M D

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a 'nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless 'fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*. PMID:26786966
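
    The abstract states that τX* depends exponentially on trg and m but does not reproduce the expression, so the sketch below only illustrates the general idea: regress ln(τX*) on trg and m by least squares. The functional form, the coefficient count and all numbers are assumptions, not the authors' fit.

        # Linear regression of ln(tau_X*) on t_rg and m (synthetic data, assumed form)
        import numpy as np

        rng = np.random.default_rng(3)
        trg = rng.uniform(0.45, 0.70, 40)          # reduced glass transition temperature
        m = rng.uniform(30.0, 80.0, 40)            # fragility parameter
        ln_tau = -40.0 + 55.0 * trg + 0.05 * m + rng.normal(0.0, 0.5, 40)  # synthetic

        X = np.column_stack([np.ones_like(trg), trg, m])
        coef, *_ = np.linalg.lstsq(X, ln_tau, rcond=None)
        print("fitted coefficients (intercept, trg, m):", np.round(coef, 3))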

  11. Stretching DNA to quantify nonspecific protein binding.

    PubMed

    Goyal, Sachin; Fountain, Chandler; Dunlap, David; Family, Fereydoon; Finzi, Laura

    2012-07-01

    Nonspecific binding of regulatory proteins to DNA can be an important mechanism for target search and storage. This seems to be the case for the lambda repressor protein (CI), which maintains lysogeny after infection of E. coli. CI binds specifically at two distant regions along the viral genome and induces the formation of a repressive DNA loop. However, single-molecule imaging as well as thermodynamic and kinetic measurements of CI-mediated looping show that CI also binds to DNA nonspecifically and that this mode of binding may play an important role in maintaining lysogeny. This paper presents a robust phenomenological approach using a recently developed method based on the partition function, which allows calculation of the number of proteins bound nonspecifically to DNA from measurements of the DNA extension as a function of applied force. This approach was used to analyze several cycles of extension and relaxation of λ DNA performed at several CI concentrations to measure the dissociation constant for nonspecific binding of CI (~100 nM), and to obtain a measurement of the induced DNA compaction (~10%) by CI. PMID:23005450
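
    An illustrative sketch, not the partition-function method of the paper: the Marko-Siggia worm-like-chain interpolation relates stretching force to relative extension, and the apparent shortening of the tether at a fixed force can be read as fractional compaction. The persistence length, thermal energy and example extensions are typical textbook values, assumed here.

        # Worm-like-chain force and a simple compaction estimate
        import numpy as np

        KT = 4.11e-21   # thermal energy at room temperature, J
        P = 50e-9       # DNA persistence length, m

        def wlc_force(x_over_L, persistence=P, kT=KT):
            """Marko-Siggia interpolation: force (N) at relative extension x/L."""
            x = np.asarray(x_over_L, dtype=float)
            return (kT / persistence) * (0.25 / (1.0 - x) ** 2 - 0.25 + x)

        def compaction(extension_bare, extension_with_protein):
            """Fractional shortening of the tether at the same applied force."""
            return 1.0 - extension_with_protein / extension_bare

        print(f"force at x/L = 0.9: {wlc_force(0.9) * 1e12:.1f} pN")
        # Hypothetical extensions at the same force, bare vs protein-coated tether (um)
        print(f"compaction: {compaction(14.5, 13.1):.1%}")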

  12. Stretching DNA to quantify nonspecific protein binding

    NASA Astrophysics Data System (ADS)

    Goyal, Sachin; Fountain, Chandler; Dunlap, David; Family, Fereydoon; Finzi, Laura

    2012-07-01

    Nonspecific binding of regulatory proteins to DNA can be an important mechanism for target search and storage. This seems to be the case for the lambda repressor protein (CI), which maintains lysogeny after infection of E. coli. CI binds specifically at two distant regions along the viral genome and induces the formation of a repressive DNA loop. However, single-molecule imaging as well as thermodynamic and kinetic measurements of CI-mediated looping show that CI also binds to DNA nonspecifically and that this mode of binding may play an important role in maintaining lysogeny. This paper presents a robust phenomenological approach using a recently developed method based on the partition function, which allows calculation of the number of proteins bound nonspecifically to DNA from measurements of the DNA extension as a function of applied force. This approach was used to analyze several cycles of extension and relaxation of λ DNA performed at several CI concentrations to measure the dissociation constant for nonspecific binding of CI (˜100 nM), and to obtain a measurement of the induced DNA compaction (˜10%) by CI.

  13. Quantifying the origin of metallic glass formation

    PubMed Central

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a ‘nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless ‘fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*. PMID:26786966

  14. Quantifying the origin of metallic glass formation

    NASA Astrophysics Data System (ADS)

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a `nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless `fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*.

  15. Quantifying Transmission Investment in Malaria Parasites.

    PubMed

    Greischar, Megan A; Mideo, Nicole; Read, Andrew F; Bjørnstad, Ottar N

    2016-02-01

    Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment. PMID:26890485

  16. Quantifying Transmission Investment in Malaria Parasites

    PubMed Central

    Greischar, Megan A.; Mideo, Nicole; Read, Andrew F.; Bjørnstad, Ottar N.

    2016-01-01

    Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment. PMID:26890485

  17. A graph-theoretic method to quantify the airline route authority

    NASA Technical Reports Server (NTRS)

    Chan, Y.

    1979-01-01

    The paper introduces a graph-theoretic method to quantify the legal statements in a route certificate, which specify the airline's routing restrictions. All the authorized nonstop and multistop routes, including the shortest-time routes, can be obtained, and the method suggests profitable route structure alternatives to airline analysts. This method of quantifying the C.A.B. route authority was programmed in a software package, Route Improvement Synthesis and Evaluation, and demonstrated in a case study with a commercial airline. The study showed the utility of this technique in suggesting route alternatives and the possibility of improvements in the U.S. route system.
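
    A hedged sketch of the graph idea (the city pairs, block times and network below are invented, and the original 1979 package naturally predates tools like networkx): authorized nonstop segments form a directed graph, multistop authority corresponds to paths in that graph, and shortest-time routings fall out of a shortest-path query.

        # Route authority as a directed graph
        import networkx as nx

        G = nx.DiGraph()
        G.add_weighted_edges_from([      # (origin, destination, block time in hours)
            ("DEN", "ORD", 2.3), ("ORD", "JFK", 2.1), ("DEN", "DFW", 1.9),
            ("DFW", "JFK", 3.4), ("ORD", "BOS", 2.2),
        ])

        # All multistop routings from DEN to JFK implied by the nonstop authority
        for path in nx.all_simple_paths(G, "DEN", "JFK"):
            print(" -> ".join(path))

        # Shortest-time routing
        best = nx.shortest_path(G, "DEN", "JFK", weight="weight")
        hours = sum(G[u][v]["weight"] for u, v in zip(best, best[1:]))
        print("fastest:", " -> ".join(best), f"({hours:.1f} h)")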

  18. Quantifying uncertainty in material damage from vibrational data

    SciTech Connect

    Butler, T.; Huhtala, A.; Juntunen, M.

    2015-02-15

    The response of a vibrating beam to a force depends on many physical parameters including those determined by material properties. Damage caused by fatigue or cracks results in local reductions in stiffness parameters and may drastically alter the response of the beam. Data obtained from the vibrating beam are often subject to uncertainties and/or errors typically modeled using probability densities. The goal of this paper is to estimate and quantify the uncertainty in damage modeled as a local reduction in stiffness using uncertain data. We present various frameworks and methods for solving this parameter determination problem. We also describe a mathematical analysis to determine and compute useful output data for each method. We apply the various methods in a specified sequence that allows us to interface the various inputs and outputs of these methods in order to enhance the inferences drawn from the numerical results obtained from each method. Numerical results are presented using both simulated and experimentally obtained data from physically damaged beams.

  19. Quantifying the Consistency of Scientific Databases

    PubMed Central

    Šubelj, Lovro; Bajec, Marko; Mileva Boshkoska, Biljana; Kastrin, Andrej; Levnajić, Zoran

    2015-01-01

    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time we are able to scientifically study the science itself. This is enabled by massive amounts of data on scientific publications that is increasingly becoming available. The data is contained in several databases such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We found that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in mutual consistency of different databases, which we interpret as recipes for future bibliometric studies. PMID:25984946

  20. Quantifying uncertainties in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Patlakas, Platon; Galanis, George; Kallos, George

    2015-04-01

    The constant rise of wind energy production and its penetration into global energy markets during the last decades have led to the selection of new sites that present various types of problems. Such problems arise from the variability and the uncertainty of wind speed. Studying the lower and upper tails of the wind speed distribution can support the quantification of these uncertainties. Approaches focused on extreme wind conditions, or on periods below the energy production threshold, are necessary for better management of operations. Towards this direction, different methodologies are presented for the credible evaluation of potential non-frequent/extreme values of these environmental conditions. The approaches used take into consideration the structural design of the wind turbines according to their lifespan, the turbine failures, the time needed for repairs, and the energy production distribution. In this work, a multi-parametric approach for studying extreme wind speed values will be discussed, based on tools of Extreme Value Theory. In particular, the study is focused on extreme wind speed return periods and the persistence of no energy production, based on a 10-year hindcast dataset from a weather modeling system. More specifically, two methods (Annual Maxima and Peaks Over Threshold) were used for the estimation of extreme wind speeds and their recurrence intervals. Additionally, two different methodologies (intensity given duration and duration given intensity, both based on the Annual Maxima method) were applied to calculate the duration of extreme events, combined with their intensity as well as the event frequency. The obtained results show that the proposed approaches converge, at least on the main findings, for each case. It is also remarkable that, despite the moderate wind speed climate of the area, several consecutive days of no energy production are observed.
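
    A minimal Annual Maxima sketch with synthetic wind speeds (not the study's hindcast): fit a generalized extreme value distribution to yearly maxima and read off return levels for a few return periods.

        # GEV fit to annual maximum wind speeds and return-level estimates
        import numpy as np
        from scipy.stats import genextreme

        rng = np.random.default_rng(4)
        hourly = rng.weibull(2.0, size=(10, 8760)) * 8.0   # 10 "years" of hourly speeds, m/s
        annual_maxima = hourly.max(axis=1)

        c, loc, scale = genextreme.fit(annual_maxima)
        for T in (10, 25, 50):                             # return periods in years
            level = genextreme.ppf(1.0 - 1.0 / T, c, loc, scale)
            print(f"{T:3d}-yr return level: {level:.1f} m/s")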

  1. Quantifying the biodiversity value of tropical primary, secondary, and plantation forests

    PubMed Central

    Barlow, J.; Gardner, T. A.; Araujo, I. S.; Ávila-Pires, T. C.; Bonaldo, A. B.; Costa, J. E.; Esposito, M. C.; Ferreira, L. V.; Hawes, J.; Hernandez, M. I. M.; Hoogmoed, M. S.; Leite, R. N.; Lo-Man-Hung, N. F.; Malcolm, J. R.; Martins, M. B.; Mestre, L. A. M.; Miranda-Santos, R.; Nunes-Gutjahr, A. L.; Overal, W. L.; Parry, L.; Peters, S. L.; Ribeiro-Junior, M. A.; da Silva, M. N. F.; da Silva Motta, C.; Peres, C. A.

    2007-01-01

    Biodiversity loss from deforestation may be partly offset by the expansion of secondary forests and plantation forestry in the tropics. However, our current knowledge of the value of these habitats for biodiversity conservation is limited to very few taxa, and many studies are severely confounded by methodological shortcomings. We examined the conservation value of tropical primary, secondary, and plantation forests for 15 taxonomic groups using a robust and replicated sample design that minimized edge effects. Different taxa varied markedly in their response to patterns of land use in terms of species richness and the percentage of species restricted to primary forest (varying from 5% to 57%), yet almost all between-forest comparisons showed marked differences in community structure and composition. Cross-taxon congruence in response patterns was very weak when evaluated using abundance or species richness data, but much stronger when using metrics based upon community similarity. Our results show that, whereas the biodiversity indicator group concept may hold some validity for several taxa that are frequently sampled (such as birds and fruit-feeding butterflies), it fails for those exhibiting highly idiosyncratic responses to tropical land-use change (including highly vagile species groups such as bats and orchid bees), highlighting the problems associated with quantifying the biodiversity value of anthropogenic habitats. Finally, although we show that areas of native regeneration and exotic tree plantations can provide complementary conservation services, we also provide clear empirical evidence demonstrating the irreplaceable value of primary forests. PMID:18003934

  2. Quantifying changes in groundwater level and chemistry in Shahrood, northeastern Iran

    NASA Astrophysics Data System (ADS)

    Ajdary, Khalil; Kazemi, Gholam A.

    2014-03-01

    Temporal changes in the quantity and chemical status of groundwater resources must be accurately quantified to aid sustainable management of aquifers. Monitoring data show that the groundwater level in Shahrood alluvial aquifer, northeastern Iran, continuously declined from 1993 to 2009, falling 11.4 m in 16 years. This constitutes a loss of 216 million m3 from the aquifer's stored groundwater reserve. Overexploitation and reduction in rainfall intensified the declining trend. In contrast, the reduced abstraction rate, the result of reduced borehole productivity (related to the reduction in saturated-zone thickness over time), slowed down the declining trend. Groundwater salinity varied substantially showing a minor rising trend. For the same 16-year period, increases were recorded in the order of 24% for electrical conductivity, 12.4% for major ions, and 9.9% for pH. This research shows that the groundwater-level declining trend was not interrupted by fluctuation in rainfall and it does not necessarily lead to water-quality deterioration. Water-level drop is greater near the aquifer's recharging boundary, while greater rates of salinity rise occur around the end of groundwater flow lines. Also, fresher groundwater experiences a greater rate of salinity increase. These findings are of significance for predicting the groundwater level and salinity of exhausted aquifers.
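
    A back-of-envelope check of the figures quoted above: a storage loss dV over a uniform head decline dh implies dV = Sy * A * dh, so the product of specific yield Sy and aquifer area A follows directly from the two reported numbers; splitting that product requires an assumed Sy (the 0.1 used below is purely illustrative, not a value from the study).

        # Implied Sy * A from the reported storage loss and water-level decline
        dV = 216e6    # m^3 of groundwater removed from storage (1993-2009)
        dh = 11.4     # m of water-level decline over the same period
        sy_times_area = dV / dh
        print(f"Sy * A = {sy_times_area / 1e6:.1f} x 10^6 m^2")
        print(f"implied area for an assumed Sy = 0.1: {sy_times_area / 0.1 / 1e6:.0f} km^2")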

  3. Quantifying the Fate of Stablised Criegee Intermediates under Atmospheric Conditions

    NASA Astrophysics Data System (ADS)

    Newland, Mike; Rickard, Andrew; Alam, Mohammed; Vereecken, Luc; Muñoz, Amalia; Ródenas, Milagros; Bloss, William

    2014-05-01

    The products of alkene ozonolysis have been shown in field experiments to convert SO2 to H2SO4. One fate of H2SO4 formed in the atmosphere is the formation of sulphate aerosol. This has been reported to contribute -0.4 W m^-2 to anthropogenic radiative forcing via the direct aerosol effect and can also contribute to the indirect aerosol effect, currently one of the greatest uncertainties in climate modelling. The observed SO2 oxidation has been proposed to arise from reactions of the carbonyl oxide, or Criegee Intermediate (CI), formed during alkene ozonolysis reactions, with SO2. Direct laboratory experiments have confirmed that stabilised CIs (SCIs) react more quickly with SO2 (k > 10^-11 cm^3 s^-1) than was previously thought. The major sink for SCI in the troposphere is reaction with water vapour. The importance of the SO2 + SCI reaction in H2SO4 formation has been shown in modelling work to be critically dependent on the ratio of the rate constants for the reaction of the SCI with SO2 and with H2O. Such modelling work has suggested that the SCI + SO2 reaction is only likely to be important in regions with high alkene emissions, e.g. forests. Here we present results from a series of ozonolysis experiments performed at the EUPHORE atmospheric simulation chamber, Valencia. These experiments measure the loss of SO2, in the presence of an alkene (ethene, cis-but-2-ene and 2,3-dimethyl butene), as a function of water vapour. From these experiments we quantify the relative rates of reaction of the three smallest SCI with water and SO2 and their decomposition rates. In addition, the results appear to suggest that the conversion of SO2 to H2SO4 during alkene ozonolysis may be inconsistent with the SCI + SO2 mechanism alone, particularly at high relative humidities. The results suggest that SCI are likely to provide at least an equivalent sink for SO2 to that of OH in the troposphere, in agreement with field observations. This work highlights the importance of alkene ozonolysis not only as a non-photolytic source of HOx but additionally as a source of other important atmospheric oxidants, and moves towards quantifying some of the important sinks of SCI in the atmosphere.

  4. Quantifying the Chemical Weathering Efficiency of Basaltic Catchments

    NASA Astrophysics Data System (ADS)

    Ibarra, D. E.; Caves, J. K.; Thomas, D.; Chamberlain, C. P.; Maher, K.

    2014-12-01

    The geographic distribution and areal extent of rock type, along with the hydrologic cycle, influence the efficiency of global silicate weathering. Here we define weathering efficiency as the production of HCO3- per unit of land surface area. Modern basaltic catchments located on volcanic arcs and continental flood basalts are particularly efficient, as they account for <5% of sub-aerial bedrock but produce ~30% of the modern global weathering flux. Indeed, changes in this weathering efficiency are thought to play an important role in modulating Earth's past climate via changes in the areal extent and paleo-latitude of basaltic catchments (e.g., the Deccan and Ethiopian Traps, and southeast Asian basaltic terranes). We analyze paired river discharge and solute concentration data for basaltic catchments from both literature studies and the USGS NWIS database to mechanistically understand geographic and climatic influences on weathering efficiency. To quantify the chemical weathering efficiency of modern basalt catchments we use solute production equations and compare the results to global river datasets. The weathering efficiency, quantified via the Damköhler coefficient (Dw [m/yr]), is calculated by fitting concentration-discharge relationships for catchments with paired solute and discharge measurements. Most basalt catchments do not exhibit 'chemostatic' behavior. The distribution of basalt catchment Dw values (0.194 ± 0.176, 1σ), derived using SiO2(aq) concentrations, is significantly higher than that of global river Dw values (mean Dw of 0.036), indicating a greater chemical weathering efficiency. Despite high Dw values and high total weathering fluxes per unit area, many basaltic catchments produce solutes at rates near their predicted weathering-flux limit. Thus, weathering fluxes from basaltic catchments are proportionally less responsive to increases in runoff than those of other lithologies. Results for other solute species (Mg2+ and Ca2+) are comparable but are influenced by the stoichiometry of both local primary minerals and secondary clays. Our results provide a framework for interpreting how small changes in the areal extent or geographic distribution of basaltic catchments may markedly influence the silicate weathering feedback.
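    A minimal sketch of fitting a Damköhler-style coefficient from paired concentration-discharge data is shown below. The hyperbolic form C(q) = C_eq / (1 + q / Dw) is a common simplification of solute production equations and is an assumption here; it is not necessarily the exact formulation used in this study, and the data are synthetic.

    ```python
    # Sketch: estimating a Damkohler-style coefficient Dw from paired
    # concentration-discharge (C-q) data, using an assumed hyperbolic
    # simplification of the solute production equations. Data are synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def c_of_q(q, c_eq, dw):
        """Concentration as a function of runoff q [m/yr]."""
        return c_eq / (1.0 + q / dw)

    rng = np.random.default_rng(0)
    q_obs = rng.uniform(0.05, 2.0, 200)                  # runoff, m/yr
    c_true = c_of_q(q_obs, c_eq=600.0, dw=0.2)           # SiO2(aq), umol/L
    c_obs = c_true * rng.normal(1.0, 0.1, q_obs.size)    # ~10% scatter

    (c_eq_fit, dw_fit), _ = curve_fit(c_of_q, q_obs, c_obs, p0=[500.0, 0.1])
    print(f"fitted C_eq = {c_eq_fit:.0f} umol/L, Dw = {dw_fit:.3f} m/yr")
    ```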

  5. Quantified energy dissipation rates in the terrestrial bow shock: 1. Analysis techniques and methodology

    NASA Astrophysics Data System (ADS)

    Wilson, L. B.; Sibeck, D. G.; Breneman, A. W.; Contel, O. Le; Cully, C.; Turner, D. L.; Angelopoulos, V.; Malaspina, D. M.

    2014-08-01

    We present a detailed outline and discussion of the analysis techniques used to compare the relevance of different energy dissipation mechanisms at collisionless shock waves. We show that the low-frequency, quasi-static fields contribute less to ohmic energy dissipation, (−j · E), than their high-frequency counterparts. In fact, we found that high-frequency, large-amplitude (>100 mV/m and/or >1 nT) waves are ubiquitous in the transition region of collisionless shocks. We quantitatively show that their fields, through wave-particle interactions, cause enough energy dissipation to regulate the global structure of collisionless shocks. The purpose of this paper, part one of two, is to outline and describe in detail the background, analysis techniques, and theoretical motivation for our new results presented in the companion paper. The companion paper presents the results of our quantitative energy dissipation rate estimates and discusses the implications. Together, the two manuscripts present the first study quantifying the contribution that high-frequency waves provide, through wave-particle interactions, to the total energy dissipation budget of collisionless shock waves.
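    The ohmic dissipation quantity referenced above is, at its core, a dot product of measured current density and electric field time series. The sketch below shows that calculation under the assumption of already co-aligned, synchronized arrays; the data are synthetic stand-ins, not spacecraft products.

    ```python
    # Sketch: an ohmic energy dissipation rate estimated per sample as the
    # (-j . E) quantity referenced above. Arrays are synthetic stand-ins for
    # measured waveform data and are assumed to share a coordinate system.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    j = rng.normal(0.0, 1e-6, (n, 3))    # current density, A/m^2 (synthetic)
    e = rng.normal(0.0, 0.05, (n, 3))    # electric field, V/m (synthetic)

    dissipation = -np.einsum("ij,ij->i", j, e)   # -j.E per sample, W/m^3
    print(f"mean dissipation rate: {dissipation.mean():.3e} W/m^3")
    print(f"peak dissipation rate: {np.abs(dissipation).max():.3e} W/m^3")
    ```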

  6. Quantifying the leakage of quantum protocols for classical two-party cryptography

    NASA Astrophysics Data System (ADS)

    Salvail, Louis; Schaffner, Christian; Sotáková, Miroslava

    2015-12-01

    We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer.

  7. Quantifying the digestibility of dietary protein.

    PubMed

    Darragh, A J; Hodgkinson, S M

    2000-07-01

    The current recommendation, when calculating a protein digestibility-corrected amino acid score, is to determine the digestibility of a dietary protein across the entire digestive tract, using the rat as a model animal for humans. This fecal digestibility value is subsequently corrected for endogenous contributions of protein using a metabolic nitrogen value determined by feeding rats a protein-free diet. The limitations inherent with this method are well recognized, however, and determining the digestibility of a dietary protein to the end of the small intestine is the preferred alternative. Unlike the fecal digestibility assay, which has only one basic methodology, ileal digestibility values can be determined in a number of ways. We discuss the various methods available for determining ileal digestibility values and compare results obtained for dietary proteins using both fecal and ileal digestibility assays. The relative value of using individual amino acid digestibility values as opposed to nitrogen digestibility values is reviewed. In addition, we address issues surrounding measurement of endogenous nitrogen flows, and in particular, the relative merits of determining "true" versus "real" digestibility values. PMID:10867062
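    The distinction between apparent and "true" digestibility discussed above reduces to whether the endogenous nitrogen flow is subtracted from the nitrogen recovered at the terminal ileum. The sketch below shows the standard textbook calculation; all quantities are hypothetical illustrative values, not data from the review.

    ```python
    # Sketch of the standard apparent vs. "true" digestibility calculation.
    # All quantities are hypothetical (g N per kg dry-matter intake).

    def apparent_digestibility(n_intake, n_ileal):
        return (n_intake - n_ileal) / n_intake

    def true_digestibility(n_intake, n_ileal, n_endogenous):
        # Endogenous N (e.g. estimated from a protein-free diet) is removed
        # from the ileal flow before computing the ratio.
        return (n_intake - (n_ileal - n_endogenous)) / n_intake

    n_intake, n_ileal, n_endogenous = 25.0, 5.0, 2.0
    print(f"apparent ileal digestibility: {apparent_digestibility(n_intake, n_ileal):.2f}")
    print(f"true ileal digestibility:     {true_digestibility(n_intake, n_ileal, n_endogenous):.2f}")
    ```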

  8. Quantifying variability in stream channel morphology

    NASA Astrophysics Data System (ADS)

    Trainor, Kristie; Church, Michael

    2003-09-01

    Nine stream channel characteristics (channel unit frequency, channel unit length, pool spacing, depth variability, width variability, large woody debris jam spacing, large woody debris volume, relative roughness, and average bank-full width used as a scale) were measured in 12 reaches in old-growth forests on Haida Gwaii and Vancouver Island. These characteristics were used to calculate a Euclidean distance measure of dissimilarity between all possible reach pairs. Frequency distributions of the resulting dissimilarity values express the range of variability present in the streams analyzed and enable definition of ranges of favorable and unfavorable comparisons. Reach pairs with high dissimilarity values differ significantly in several key stream channel characteristics, and which characteristics differ varies from pair to pair. Reaches that consistently appear in high-dissimilarity pairs deviate significantly from the norm for the group. Dissimilarity distributions provide a basis for appraising the outcome of stream channel manipulation (for example, in channel "restoration" programs) and for selecting channel pairs that are sufficiently similar to act as treatment and control units in experimental manipulations.
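    A pairwise Euclidean dissimilarity of this kind is straightforward to compute once the characteristics are standardised. The sketch below uses a synthetic reaches-by-characteristics matrix; the variable names and standardisation choice are assumptions for illustration, not details taken from the study.

    ```python
    # Sketch: pairwise Euclidean dissimilarity between stream reaches based on
    # several standardised channel characteristics. The data matrix is synthetic.

    import numpy as np
    from scipy.spatial.distance import pdist, squareform

    rng = np.random.default_rng(2)
    n_reaches, n_vars = 12, 9
    X = rng.normal(size=(n_reaches, n_vars))        # synthetic characteristics

    # Standardise each characteristic so no single variable dominates.
    Xz = (X - X.mean(axis=0)) / X.std(axis=0)

    d = pdist(Xz, metric="euclidean")               # all reach-pair dissimilarities
    print(f"{d.size} reach pairs; median dissimilarity = {np.median(d):.2f}")

    # The full matrix can flag reaches that are consistently dissimilar.
    most_dissimilar = squareform(d).mean(axis=0).argmax()
    print(f"reach most dissimilar from the group (0-based index): {most_dissimilar}")
    ```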

  9. Quantifying the power of multiple event interpretations

    NASA Astrophysics Data System (ADS)

    Chien, Yang-Ting; Farhi, David; Krohn, David; Marantan, Andrew; Mateos, David Lopez; Schwartz, Matthew

    2014-12-01

    A number of methods have been proposed recently that exploit multiple highly correlated interpretations of events, or of jets within an event. For example, Qjets reclusters a jet multiple times, and telescoping jets uses multiple cone sizes. Previous work has employed these methods in pseudo-experimental analyses and found that, with a simplified statistical treatment, they give sizable improvements over traditional methods. In this paper, the improvement gained from multiple event interpretations is explored with methods much closer to those used in real experiments. To this end, we derive and study a generalized extended maximum likelihood procedure, and find that using multiple jet radii can provide substantial benefit over a single radius in fitting procedures. Another major concern we address is that multiple event interpretations might exploit information similar to that already present in the standard kinematic variables. We perform multivariate analyses (boosted decision trees) on a set of standard kinematic variables, on a single observable computed with several different cone sizes, and on both sets combined. We find that using multiple radii is still helpful even on top of standard kinematic variables (providing a 12% improvement at low pT and 20% at high pT). These results suggest that including multiple event interpretations in a realistic search for Higgs to bb̄ would give additional sensitivity over traditional approaches.
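    The building block behind the generalized procedure mentioned above is an extended maximum-likelihood fit, in which event yields enter the likelihood alongside the per-event densities. The sketch below fits signal and background yields on a toy dataset; it does not reproduce the paper's multi-interpretation extension, and the pdf shapes and numbers are synthetic.

    ```python
    # Sketch of a standard extended maximum-likelihood fit for signal and
    # background yields on toy data. Shapes and yields are synthetic.

    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import norm, expon

    rng = np.random.default_rng(3)
    signal = rng.normal(125.0, 2.0, 50)            # toy resonance events
    background = rng.exponential(50.0, 500) + 80.0 # toy falling background
    data = np.concatenate([signal, background])

    def f_sig(x):
        return norm.pdf(x, 125.0, 2.0)

    def f_bkg(x):
        return expon.pdf(x - 80.0, scale=50.0)

    def neg_ext_loglike(params):
        nu_s, nu_b = params
        if nu_s < 0 or nu_b < 0:
            return np.inf
        dens = nu_s * f_sig(data) + nu_b * f_bkg(data)
        # -ln L = (nu_s + nu_b) - sum ln(nu_s f_s + nu_b f_b), up to constants
        return (nu_s + nu_b) - np.sum(np.log(dens))

    res = minimize(neg_ext_loglike, x0=[30.0, 400.0], method="Nelder-Mead")
    print(f"fitted yields: signal = {res.x[0]:.1f}, background = {res.x[1]:.1f}")
    ```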

  10. Quantifying the Intercellular Forces during Drosophila Morphogenesis

    NASA Astrophysics Data System (ADS)

    Ma, Xiaoyan; Hutson, M. Shane

    2006-03-01

    In many models of morphogenesis, cellular movements are driven by differences in interfacial tension along cell-cell boundaries. We have developed a microsurgical method to determine these tensions in living fruit fly (Drosophila) embryos. Cell edges in these embryos are labeled with green fluorescent protein chimeras, and line-scan images that intersect several cell edges are recorded with a laser-scanning confocal microscope at a time resolution of 2 ms. While these scans are being recorded, a Q-switched Nd:YAG laser is used to cut a single cell edge. The recoil of adjacent cell edges is evident in the line scans, and the time-dependent cell-edge positions are extracted using custom ImageJ plugins based on the Lucas-Kanade algorithm. The post-incision recoil velocities of cell edges are determined by fitting the cell-edge positions to a double exponential function. In addition, a power spectrum analysis of cell-edge position fluctuations is used to determine the viscous damping constant. In the regime of low Reynolds number, the tension along a cell-cell boundary is well approximated by the product of the viscous damping constant and the initial recoil velocity of adjacent cell edges. We will present initial results from two stages of Drosophila development: germ band retraction and early dorsal closure.
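    The analysis pipeline described above can be sketched as fitting a double exponential to post-incision edge displacement, taking the initial recoil velocity, and multiplying by a damping constant. The times, displacements, and damping constant below are synthetic illustrative values, not measurements from these experiments.

    ```python
    # Sketch: fit post-incision cell-edge displacement to a double exponential,
    # take the initial recoil velocity, and approximate tension as
    # (viscous damping constant) x (initial velocity). All values synthetic.

    import numpy as np
    from scipy.optimize import curve_fit

    def double_exp(t, a1, tau1, a2, tau2):
        """Displacement of the cut edge after ablation at t = 0."""
        return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

    rng = np.random.default_rng(4)
    t = np.arange(0.0, 2.0, 0.002)                        # s, 2 ms sampling
    x_true = double_exp(t, 1.5, 0.05, 1.0, 0.5)           # um
    x_obs = x_true + rng.normal(0.0, 0.05, t.size)

    popt, _ = curve_fit(double_exp, t, x_obs, p0=[1.0, 0.1, 1.0, 1.0])
    a1, tau1, a2, tau2 = popt
    v0 = a1 / tau1 + a2 / tau2                            # initial recoil velocity, um/s
    damping = 2.0e-3                                      # hypothetical, nN s/um
    print(f"initial recoil velocity ~ {v0:.1f} um/s; tension ~ {damping * v0:.3f} nN")
    ```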

  11. Quantifying viruses and bacteria in wastewater - results, quality control, and interpretation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bac...

  12. Quantifying the benefits of vehicle pooling with shareability networks

    PubMed Central

    Santi, Paolo; Resta, Giovanni; Szell, Michael; Sobolevsky, Stanislav; Strogatz, Steven H.; Ratti, Carlo

    2014-01-01

    Taxi services are a vital part of urban transportation and a considerable contributor to traffic congestion and air pollution, with substantial adverse effects on human health. Sharing taxi trips is a possible way of reducing the negative impact of taxi services on cities, but this comes at the expense of passenger discomfort, quantifiable in terms of longer travel times. Due to computational challenges, taxi sharing has traditionally been approached on small scales, such as within airport perimeters, or with dynamical ad hoc heuristics. However, a mathematical framework for the systematic understanding of the tradeoff between collective benefits of sharing and individual passenger discomfort has been lacking. Here we introduce the notion of a shareability network, which allows us to model the collective benefits of sharing as a function of passenger inconvenience, and to efficiently compute optimal sharing strategies on massive datasets. We apply this framework to a dataset of millions of taxi trips taken in New York City, showing that with increasing but still relatively low passenger discomfort, cumulative trip length can be cut by 40% or more. This benefit comes with reductions in service cost and emissions and, with split fares, hints at wide passenger acceptance of such a shared service. Simulation of a realistic online system demonstrates the feasibility of a shareable taxi service in New York City. Shareability saturates quickly as a function of trip density, suggesting that taxi sharing would also be effective in cities with much sparser taxi fleets or where willingness to share is low. PMID:25197046
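    A minimal sketch of the shareability-network idea is a graph whose nodes are trips, whose edges connect trips that can be served by one vehicle within an allowed delay, and whose maximum matching selects the pairs to merge. The toy trips and the compatibility rule below are synthetic simplifications, not the paper's exact model.

    ```python
    # Sketch of a shareability network: trips are nodes, edges connect trips
    # that could share a vehicle, and a maximum-weight matching picks pairs.
    # Trip data and the compatibility rule are synthetic simplifications.

    import itertools
    import networkx as nx

    # (trip_id, pickup_minute, trip_minutes) -- synthetic toy trips
    trips = [(0, 0, 10), (1, 2, 12), (2, 3, 9), (3, 30, 8), (4, 31, 7)]
    max_delay = 5  # minutes of extra delay a passenger will tolerate (assumed)

    G = nx.Graph()
    G.add_nodes_from(t[0] for t in trips)
    for (i, ti, di), (j, tj, dj) in itertools.combinations(trips, 2):
        # Toy compatibility rule: pickups close enough in time to share a vehicle.
        if abs(ti - tj) <= max_delay:
            saving = min(di, dj)          # crude proxy for saved vehicle-minutes
            G.add_edge(i, j, weight=saving)

    matching = nx.max_weight_matching(G, maxcardinality=True)
    print(f"shared trip pairs: {sorted(tuple(sorted(p)) for p in matching)}")
    ```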

  13. Quantifying the benefits of vehicle pooling with shareability networks.

    PubMed

    Santi, Paolo; Resta, Giovanni; Szell, Michael; Sobolevsky, Stanislav; Strogatz, Steven H; Ratti, Carlo

    2014-09-16

    Taxi services are a vital part of urban transportation, and a considerable contributor to traffic congestion and air pollution causing substantial adverse effects on human health. Sharing taxi trips is a possible way of reducing the negative impact of taxi services on cities, but this comes at the expense of passenger discomfort quantifiable in terms of a longer travel time. Due to computational challenges, taxi sharing has traditionally been approached on small scales, such as within airport perimeters, or with dynamical ad hoc heuristics. However, a mathematical framework for the systematic understanding of the tradeoff between collective benefits of sharing and individual passenger discomfort is lacking. Here we introduce the notion of shareability