Science.gov

Sample records for quantified results show

  1. Quantifying causal emergence shows that macro can beat micro

    PubMed Central

    Hoel, Erik P.; Albantakis, Larissa; Tononi, Giulio

    2013-01-01

    Causal interactions within complex systems can be analyzed at multiple spatial and temporal scales. For example, the brain can be analyzed at the level of neurons, neuronal groups, and areas, over tens, hundreds, or thousands of milliseconds. It is widely assumed that, once a micro level is fixed, macro levels are fixed too, a relation called supervenience. It is also assumed that, although macro descriptions may be convenient, only the micro level is causally complete, because it includes every detail, thus leaving no room for causation at the macro level. However, this assumption can only be evaluated under a proper measure of causation. Here, we use a measure [effective information (EI)] that depends on both the effectiveness of a system’s mechanisms and the size of its state space: EI is higher the more the mechanisms constrain the system’s possible past and future states. By measuring EI at micro and macro levels in simple systems whose micro mechanisms are fixed, we show that for certain causal architectures EI can peak at a macro level in space and/or time. This happens when coarse-grained macro mechanisms are more effective (more deterministic and/or less degenerate) than the underlying micro mechanisms, to an extent that overcomes the smaller state space. Thus, although the macro level supervenes upon the micro, it can supersede it causally, leading to genuine causal emergence—the gain in EI when moving from a micro to a macro level of analysis. PMID:24248356
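    The effective information (EI) measure described above can be illustrated with a small, self-contained sketch: EI is the mutual information between a maximum-entropy (uniform) intervention on the system's current state and the resulting distribution of next states. This is a minimal illustration of that general definition, not the authors' code, and the example transition matrices are hypothetical.

    ```python
    import numpy as np

    def effective_information(tpm):
        """EI of a system with transition probability matrix `tpm`
        (rows = current states, columns = next states): mutual information
        between a uniform intervention over current states and the
        resulting distribution of next states, in bits."""
        p_effect = tpm.mean(axis=0)  # effect distribution under the uniform intervention
        with np.errstate(divide="ignore", invalid="ignore"):
            kl = np.where(tpm > 0, tpm * np.log2(tpm / p_effect), 0.0).sum(axis=1)
        return kl.mean()  # average KL divergence of each row from the mean effect distribution

    # Hypothetical 4-state micro system (fully noisy) and a 2-state coarse-graining
    micro = np.full((4, 4), 0.25)                 # every state leads anywhere: EI = 0 bits
    macro = np.array([[0.9, 0.1],
                      [0.1, 0.9]])                # nearly deterministic macro mechanism
    print(effective_information(micro), effective_information(macro))
    ```

    In this toy case the noisy micro mechanism carries 0 bits of EI while the nearly deterministic macro mechanism carries about 0.53 bits, which is the kind of gain the abstract calls causal emergence.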

  2. Quantifying disability: data, methods and results.

    PubMed Central

    Murray, C. J.; Lopez, A. D.

    1994-01-01

    Conventional methods for collecting, analysing and disseminating data and information on disability in populations have relied on cross-sectional censuses and surveys which measure prevalence in a given period. While this may be relevant for defining the extent and demographic pattern of disabilities in a population, and thus indicating the need for rehabilitative services, prevention requires detailed information on the underlying diseases and injuries that cause disabilities. The Global Burden of Disease methodology described in this paper provides a mechanism for quantifying the health consequences of the years of life lived with disabilities by first estimating the age-sex-specific incidence rates of underlying conditions, and then mapping these to a single disability index which collectively reflects the probability of progressing to a disability, the duration of life lived with the disability, and the approximate severity of the disability in terms of activity restriction. Detailed estimates of the number of disability-adjusted life years (DALYs) lived are provided in this paper, for eight geographical regions. The results should be useful to those concerned with planning health services for the disabled and, more particularly, with determining policies to prevent the underlying conditions which give rise to serious disabling sequelae. PMID:8062403
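    As a rough illustration of the mapping described above, the years lived with disability (YLD) component can be approximated as incidence × disability weight × average duration, and adding years of life lost (YLL) gives DALYs. The sketch below uses hypothetical numbers and omits the discounting and age-weighting applied in the original Global Burden of Disease calculations.

    ```python
    def yld(incident_cases, disability_weight, avg_duration_years):
        # Years lived with disability (simplified, undiscounted)
        return incident_cases * disability_weight * avg_duration_years

    def yll(deaths, remaining_life_expectancy_years):
        # Years of life lost (simplified, undiscounted)
        return deaths * remaining_life_expectancy_years

    # Hypothetical condition: 10,000 new cases with disability weight 0.2 lasting
    # 5 years on average, plus 500 deaths at a mean remaining life expectancy of 30 years
    dalys = yld(10_000, 0.2, 5) + yll(500, 30)
    print(dalys)  # 10,000 YLD + 15,000 YLL = 25,000 DALYs
    ```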

  3. Different methods to quantify Listeria monocytogenes biofilms cells showed different profile in their viability

    PubMed Central

    Winkelströter, Lizziane Kretli; Martinis, Elaine C.P. De

    2015-01-01

    Listeria monocytogenes is a foodborne pathogen able to adhere to and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by a culture method, microscopy, and quantitative real-time polymerase chain reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate counts, microscopic counts with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient in killing sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective against attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the EMA pre-treatment was not appropriate, since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, CTC counts were the best method for counting viable cells in biofilms. PMID:26221112

  4. 14. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    14. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF INADEQUATE TAMPING. THE SIZE OF THE GRANITE AGGREGATE USED IN THE DAM'S CONCRETE IS CLEARLY SHOWN. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  5. 13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  6. Quantifying IOHDR brachytherapy underdosage resulting from an incomplete scatter environment

    SciTech Connect

    Raina, Sanjay; Avadhani, Jaiteerth S.; Oh, Moonseong; Malhotra, Harish K.; Jaggernauth, Wainwright; Kuettel, Michael R.; Podgorsak, Matthew B. E-mail: matthew.podgorsak@roswellpark.org

    2005-04-01

    Purpose: Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. Dosimetric errors from this assumption are negligible. However, in intraoperative high-dose-rate brachytherapy (IOHDR) where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in underdosage during treatment. This study was carried out to investigate the magnitude of this underdosage. Methods: IOHDR treatment geometries were simulated using a solid water phantom beneath an applicator with varying amounts of bolus material on the top and sides of the applicator to account for missing tissue. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm²), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Ionization measurements were made with a liquid-filled ionization chamber linear array with a dedicated electrometer and data acquisition system. Results: Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. Furthermore, for a given prescription depth, measurements in planes parallel to an applicator at distances up to 4.0 cm from the applicator plane showed that the dose delivery error is equal in magnitude throughout the target volume. Conclusion: This study demonstrates the magnitude of underdosage in IOHDR treatments delivered in a geometry that may not result in a full scatter environment around the applicator. This implies that the target volume and, specifically, the prescription depth (tumor bed) may get a dose significantly less than prescribed. It might be clinically relevant to correct for this inaccuracy.

  7. Breast vibro-acoustography: initial results show promise

    PubMed Central

    2012-01-01

    Introduction Vibro-acoustography (VA) is a recently developed imaging modality that is sensitive to the dynamic characteristics of tissue. It detects low-frequency harmonic vibrations in tissue that are induced by the radiation force of ultrasound. Here, we have investigated applications of VA for in vivo breast imaging. Methods A recently developed combined mammography-VA system for in vivo breast imaging was tested on female volunteers, aged 25 years or older, with suspected breast lesions on their clinical examination. After mammography, a set of VA scans was acquired by the experimental device. In a masked assessment, VA images were evaluated independently by 3 reviewers who identified mass lesions and calcifications. The diagnostic accuracy of this imaging method was determined by comparing the reviewers' responses with clinical data. Results We collected images from 57 participants: 7 were used for training and 48 for evaluation of diagnostic accuracy (images from 2 participants were excluded because of unexpected imaging artifacts). In total, 16 malignant and 32 benign lesions were examined. Specificity for diagnostic accuracy was 94% or higher for all 3 reviewers, but sensitivity varied (69% to 100%). All reviewers were able to detect 97% of masses, but sensitivity for detection of calcification was lower (≤ 72% for all reviewers). Conclusions VA can be used to detect various breast abnormalities, including calcifications and benign and malignant masses, with relatively high specificity. VA technology may lead to a new clinical tool for breast imaging applications. PMID:23021305
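    The diagnostic-accuracy figures reported above (specificity of 94% or higher, sensitivity of 69% to 100%) follow from comparing each reviewer's calls with the clinical reference data. A minimal sketch of that comparison, with hypothetical labels, might look like the following.

    ```python
    def sensitivity_specificity(reviewer_calls, clinical_truth):
        """Both arguments are lists of booleans: True = lesion judged/confirmed malignant."""
        tp = sum(c and t for c, t in zip(reviewer_calls, clinical_truth))
        tn = sum(not c and not t for c, t in zip(reviewer_calls, clinical_truth))
        fp = sum(c and not t for c, t in zip(reviewer_calls, clinical_truth))
        fn = sum(not c and t for c, t in zip(reviewer_calls, clinical_truth))
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical reviewer calls vs. clinical data for 10 lesions
    calls = [True, True, False, True, False, False, True, False, False, False]
    truth = [True, True, True,  True, False, False, True, False, False, False]
    sens, spec = sensitivity_specificity(calls, truth)
    print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
    ```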

  8. Quantifying the Variability in Damage of Structures as a Result of Geohazards

    NASA Astrophysics Data System (ADS)

    Latchman, S.; Simic, M.

    2012-04-01

    Uncertainty is ever present in catastrophe modelling and has recently become a popular topic of discussion in insurance media. Each element of a catastrophe model has associated uncertainties, whether aleatory, epistemic, or other. One method of quantifying the uncertainty specific to each peril is to estimate the variation in damage for a given intensity of peril. For example, the proportion of total cost to repair a structure resulting from an earthquake in the regions of the affected area with peak ground acceleration of 0.65g may range from 10% to 100%. This variation in damage for a given intensity needs to be quantified by catastrophe models. Using insurance claims data, we investigate how damage varies for a given peril (e.g. earthquake, tropical cyclone, inland flood) as a function of peril intensity. Probability distributions (including those with a fat tail, i.e. with large probability of high damage) are fitted to the claims data to test a number of peril-specific hypotheses, for example that a very large earthquake will cause less variation in losses than a mid-sized earthquake. We also compare the relationship between damage variability and peril intensity for a number of different geohazards. For example, we compare the uncertainty bands for large earthquakes with large hurricanes in an attempt to assess whether loss estimates are more uncertain for hurricanes, say, compared to earthquakes. The results of this study represent advances in the appreciation of uncertainty in catastrophe models and of how losses to a notional portfolio and notional event could vary according to the empirical probability distributions found.
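    A minimal sketch of the distribution-fitting step described above is shown below, using scipy and hypothetical damage ratios for a single intensity band; the study's actual claims data and distribution choices are not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical damage ratios (repair cost / total value) for one peril intensity band
    damage_ratio = np.array([0.12, 0.18, 0.25, 0.31, 0.40, 0.55, 0.72, 0.95])

    # Fit a fat-tailed candidate (lognormal) and a thinner-tailed one (gamma),
    # then compare the fits by log-likelihood
    lognorm_params = stats.lognorm.fit(damage_ratio, floc=0)
    gamma_params = stats.gamma.fit(damage_ratio, floc=0)
    ll_lognorm = stats.lognorm.logpdf(damage_ratio, *lognorm_params).sum()
    ll_gamma = stats.gamma.logpdf(damage_ratio, *gamma_params).sum()
    print(ll_lognorm, ll_gamma)

    # Coefficient of variation of the fitted lognormal: one way to express the
    # damage variability for this intensity band
    mean, var = stats.lognorm.stats(*lognorm_params, moments="mv")
    print(float(np.sqrt(var) / mean))
    ```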

  9. Quantifying the offensive sequences that result in goals in elite futsal matches.

    PubMed

    Sarmento, Hugo; Bradley, Paul; Anguera, M Teresa; Polido, Tiago; Resende, Rui; Campaniço, Jorge

    2016-04-01

    The aim of this study was to quantify the type of offensive sequences that result in goals in elite futsal. Thirty competitive games in the Spanish Primera Division de Sala were analysed using computerised notation analysis for patterns of play that resulted in goals. More goals were scored in positional attack (42%) and from set pieces (27%) compared to other activities. The number of defence to offense "transitions" (n = 45) and the start of offensive plays due to the rules of the game (n = 45) were the most common type of sequences that resulted in goals compared to other patterns of play. The central offensive zonal areas were the most common for shots on goal, with 73% of all goals scored from these areas of the pitch compared to defensive and wide zones. The foot was the main part of the body involved in scoring (n = 114). T-pattern analysis of offensive sequences revealed regular patterns of play, which are common in goal scoring opportunities in futsal and are typical movement patterns in this sport. The data demonstrate common offensive sequences and movement patterns related to goals in elite futsal and this could provide important information for the development of physical and technical training drills that replicate important game situations. PMID:26183125

  10. Quantifying Uncertainty in Model Predictions for the Pliocene (Plio-QUMP): Initial results

    USGS Publications Warehouse

    Pope, J.O.; Collins, M.; Haywood, A.M.; Dowsett, H.J.; Hunter, S.J.; Lunt, D.J.; Pickering, S.J.; Pound, M.J.

    2011-01-01

    Examination of the mid-Pliocene Warm Period (mPWP; ~3.3 to 3.0 Ma BP) provides an excellent opportunity to test the ability of climate models to reproduce warm climate states, thereby assessing our confidence in model predictions. To do this it is necessary to relate the uncertainty in model simulations of mPWP climate to uncertainties in projections of future climate change. The uncertainties introduced by the model can be estimated through the use of a Perturbed Physics Ensemble (PPE). Developing on the UK Met Office Quantifying Uncertainty in Model Predictions (QUMP) Project, this paper presents the results from an initial investigation using the end members of a PPE in a fully coupled atmosphere-ocean model (HadCM3) running with appropriate mPWP boundary conditions. Prior work has shown that the unperturbed version of HadCM3 may underestimate mPWP sea surface temperatures at higher latitudes. Initial results indicate that neither the low sensitivity nor the high sensitivity simulations produce unequivocally improved mPWP climatology relative to the standard. Whilst the high sensitivity simulation was able to reconcile up to 6 °C of the data/model mismatch in sea surface temperatures in the high latitudes of the Northern Hemisphere (relative to the standard simulation), it did not produce a better prediction of global vegetation than the standard simulation. Overall the low sensitivity simulation was degraded compared to the standard and high sensitivity simulations in all aspects of the data/model comparison. The results have shown that a PPE has the potential to explore weaknesses in mPWP modelling simulations which have been identified by geological proxies, but that a 'best fit' simulation will more likely come from a full ensemble in which simulations that contain the strengths of the two end member simulations shown here are combined. © 2011 Elsevier B.V.

  11. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study-comparing removal of viruses and bacterial indicators in MBR and conventional plants-it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small volume grab samples were collected for direct-plating analyses for bacterial indicators and coliphage. After filtration, the viruses were eluted from the filter and further concentrated. The final concentrated sample volume (FCSV) was used for enteric virus analysis by use of two methods-cell culture and a molecular method, polymerase chain reaction (PCR). Quantitative PCR (qPCR) for DNA viruses and quantitative reverse-transcriptase PCR (qRT-PCR) for RNA viruses were used in this study. To support data interpretations, the assay limit of detection (ALOD) was set for each virus assay and used to determine sample reporting limits (SRLs). For qPCR and qRT-PCR the ALOD was an estimated value because it was not established according to established method detection limit procedures. The SRLs were different for each sample because effective sample volumes (the volume of the original sample that was actually used in each analysis) were different for each sample. Effective sample volumes were much less than the original sample volumes because of reductions from processing steps and (or) from when dilutions were made to minimize the effects from PCR-inhibiting substances. Codes were used to further qualify the virus data and indicate the level of uncertainty associated with each measurement. Quality-control samples were used to support data interpretations. 
Field and laboratory blanks for bacteria, coliphage, and enteric viruses were all below detection, indicating that it was unlikely that samples were contaminated from equipment or processing procedures. The absolute value log differences (AVLDs) between concurrent replicate pairs were calculated to identify the variability associated with each measurement. For bacterial indicators and coliphage, the AVLD results indicated that concentrations <10 colony-forming units or plaque-forming units per 100 mL can differ between replicates by as much as 1 log, whereas higher concentrations can differ by as much as 0.3 log. The AVLD results for viruses indicated that differences between replicates can be as great as 1.2 log genomic copies per liter, regardless of the concentration of virus. Relatively large differences in molecular results for viruses between replicate pairs were likely due to lack of precision for samples with small effective volumes. Concentrations of E. coli, fecal coliforms, enterococci, and somatic and F-specific coliphage in post-secondary and post-tertiary samples in conventional plants were higher than those in post-MBR samples. In post-MBR and post-secondary samples, concentrations of somatic coliphage were higher than F-specific coliphage. In post-disinfection samples from two MBR plants (the third MBR plant had operational issues) and the ultraviolet conventional plant, concentrations for all bacterial indicators and coliphage were near or below detection; from the chlorine conventional plant, concentrations in post-disinfection samples were in the single or double digits. All of the plants met the National Pollutant Discharge Elimination System required effluent limits established for fecal coliforms. Norovirus GII and hepatitis A virus were not detected in any samples, and rotavirus was detected in one sample but could not be quantified. Adenovirus was found in 100 percent, enterovirus in over one-half, and norovirus GI in about one-half of post-preliminary wastewater samples. Adenovirus and enterovirus were detected throughout the treatment processes, and norovirus GI was detected less often than the other two enteric viruses. Culturable viruses were detected in post-preliminary samples and in only two post-treatment samples from the plant with operational issues.
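    The absolute value log difference (AVLD) used above to characterize replicate variability is simple to compute; a minimal sketch with hypothetical concentrations follows.

    ```python
    import math

    def avld(conc_a, conc_b):
        """Absolute value of the log10 difference between a replicate pair."""
        return abs(math.log10(conc_a) - math.log10(conc_b))

    # Hypothetical replicate pairs: low concentrations vary more on a log scale
    print(avld(4, 9))        # ~0.35 log at <10 CFU or PFU per 100 mL
    print(avld(1200, 1900))  # ~0.20 log at higher concentrations
    ```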

  12. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    ERIC Educational Resources Information Center

    LoPresto, Michael C.

    2007-01-01

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered…

  13. Showing Value in Newborn Screening: Challenges in Quantifying the Effectiveness and Cost-Effectiveness of Early Detection of Phenylketonuria and Cystic Fibrosis

    PubMed Central

    Grosse, Scott D.

    2015-01-01

    Decision makers sometimes request information on the cost savings, cost-effectiveness, or cost-benefit of public health programs. In practice, quantifying the health and economic benefits of population-level screening programs such as newborn screening (NBS) is challenging. It requires that one specify the frequencies of health outcomes and events, such as hospitalizations, for a cohort of children with a given condition under two different scenarios—with or without NBS. Such analyses also assume that everything else, including treatments, is the same between groups. Lack of comparable data for representative screened and unscreened cohorts that are exposed to the same treatments following diagnosis can result in either under- or over-statement of differences. Accordingly, the benefits of early detection may be understated or overstated. This paper illustrates these common problems through a review of past economic evaluations of screening for two historically significant conditions, phenylketonuria and cystic fibrosis. In both examples qualitative judgments about the value of prompt identification and early treatment to an affected child were more influential than specific numerical estimates of lives or costs saved. PMID:26702401

  14. Astronomy Diagnostic Test Results Reflect Course Goals and Show Room for Improvement

    NASA Astrophysics Data System (ADS)

    Lopresto, Michael C.

    The results of administering the Astronomy Diagnostic Test (ADT) to introductory astronomy students at Henry Ford Community College over three years have shown gains comparable with national averages. Results have also accurately corresponded to course goals, showing greater gains in topics covered in more detail, and lower gains in topics covered in less detail. Also evident in the results were topics for which improvement of instruction is needed. These factors and the ease with which the ADT can be administered constitute evidence of the usefulness of the ADT as an assessment instrument for introductory astronomy.
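    One common way to express gains such as those described above is the normalized gain, the fraction of the possible pre- to post-test improvement that was actually achieved; the class averages below are hypothetical.

    ```python
    def normalized_gain(pre_pct, post_pct):
        """Hake-style normalized gain: realized improvement / possible improvement."""
        return (post_pct - pre_pct) / (100.0 - pre_pct)

    # Hypothetical ADT class averages (percent correct)
    print(normalized_gain(32.0, 48.0))  # ≈ 0.24
    ```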

  15. Quantifying the effects of root reinforcing on slope stability: results of the first tests with a new shearing device

    NASA Astrophysics Data System (ADS)

    Rickli, Christian; Graf, Frank

    2013-04-01

    The role of vegetation in preventing shallow soil mass movements such as shallow landslides and soil erosion is generally well recognized and, correspondingly, soil bioengineering on steep slopes has been widely used in practice. However, the precise effectiveness of vegetation regarding slope stability is still difficult to determine. A recently designed inclinable shearing device for large scale vegetated soil samples allows quantitative evaluation of the additional shear strength provided by roots of specific plant species. In the following we describe the results of a first series of shear strength experiments with this apparatus focusing on root reinforcement of White Alder (Alnus incana) and Silver Birch (Betula pendula) in large soil block samples (500 x 500 x 400 mm). The specimens, with partly saturated soil of a maximum grain size of 10 mm, were slowly sheared at an inclination of 35° with low normal stresses of 3.2 kPa accounting for natural conditions on a typical slope prone to mass movements. Measurements during the experiments involved shear stress, shear displacement and normal displacement, all recorded with high accuracy. In addition, dry weights of sprouts and roots were measured to quantify plant growth of the planted specimens. The results with the new apparatus indicate a considerable reinforcement of the soil due to plant roots, i.e. maximum shear stresses of the vegetated specimens were substantially higher compared to non-vegetated soil and the additional strength was a function of species and growth. Soil samples with seedlings planted five months prior to the test yielded an important increase in maximum shear stress of 250% for White Alder and 240% for Silver Birch compared to non-vegetated soil. The results of a second test series with 12-month-old plants showed even clearer enhancements in maximum shear stress (390% for Alder and 230% for Birch). Overall the results of this first series of shear strength experiments with the new apparatus using planted and unplanted soil specimens confirm the importance of plants in soil stabilisation. Furthermore, they demonstrate the suitability of the apparatus to quantify the additional strength of specific vegetation as a function of species and growth under clearly defined conditions in the laboratory.

  16. Meta-analysis of aspirin use and risk of lung cancer shows notable results.

    PubMed

    Hochmuth, Friederike; Jochem, Maximilian; Schlattmann, Peter

    2016-07-01

    Aspirin is a promising agent for chemoprevention of lung cancer. We assessed the association of aspirin use and the development of lung cancer, with a focus on heterogeneity between studies. Databases were searched for relevant studies until September 2014. Studies evaluating the relationship of aspirin use and incidence of lung cancer were considered. Relative risks (RR) were extracted and a pooled estimate was calculated. Heterogeneity was assessed by the I² measure, random-effects models, and finite-mixture models. Sources of heterogeneity were investigated using a meta-regression. A decreased risk of lung cancer was found in an analysis including 20 studies [RR=0.87, 95% confidence interval (CI): 0.79-0.95] on the basis of a random-effects model. Strong heterogeneity was observed (τ²=0.0258, I²=74.4%). As a result, two subpopulations of studies were identified on the basis of a mixture model. The first subpopulation (42%) has an average RR of 0.64. The remaining subpopulation (58%) shows an RR of 1.04. Different results were found for case-control (RR=0.74, 95% CI: 0.60-0.90) and cohort studies (RR=0.99, 95% CI: 0.93-1.06) in a stratified analysis. In a subgroup analysis, use of aspirin was associated with a decreased risk of non-small-cell lung cancer in case-control studies (RR=0.74; 95% CI: 0.58-0.94). At first glance, our meta-analysis shows an average protective effect. A second glance indicates that there is strong heterogeneity. This leads to a subpopulation with considerable benefit and another subpopulation with no benefit. For further investigations, it is important to identify populations that benefit from aspirin use. PMID:26067033
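    A minimal sketch of the random-effects pooling step described above (DerSimonian-Laird style, on the log relative-risk scale) is shown below; the study values are hypothetical, and the finite-mixture step used to identify the two subpopulations is not reproduced.

    ```python
    import math

    def random_effects_pool(log_rr, var):
        """DerSimonian-Laird pooled relative risk with a 95% confidence interval."""
        w = [1.0 / v for v in var]
        fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, log_rr))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (len(log_rr) - 1)) / c)       # between-study variance
        w_star = [1.0 / (v + tau2) for v in var]            # random-effects weights
        pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
        se = math.sqrt(1.0 / sum(w_star))
        ci = (math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se))
        return math.exp(pooled), ci, tau2

    # Hypothetical studies: log relative risks and their within-study variances
    log_rr = [math.log(0.70), math.log(0.95), math.log(1.05), math.log(0.60)]
    var = [0.02, 0.01, 0.015, 0.05]
    print(random_effects_pool(log_rr, var))
    ```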

  17. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests to measure performance characteristics rather than coating physical properties. These evaluation results are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: cyclic (scab) corrosion and the gravelometer (chip) test. An experimental design was done to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and reproducible than the current evaluations. Coating characteristics that previously had no way to be expressed can now be described and measured. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, the primer, and even the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, observing cure, defect formation or elimination over time, etc.

  18. Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening

    Cancer.gov

    Thirteen-year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination.

  19. A new fracture gradient prediction technique that shows good results in Gulf of Mexico abnormal pressure

    SciTech Connect

    Brennan, R.M.; Annis, M.R.

    1984-09-01

    Accurate formation fracture gradient prediction in abnormally pressured wells is one of the most critical considerations involved in planning a successful drilling operation. Shallow abnormal pressure, coupled with deep geologic objectives, requires an accurate fracture gradient prediction to ensure casing setting depths are correctly determined in order to carry sufficient mud weights to successfully reach the deep objectives. In an effort to better meet these requirements, a study of formation fracture gradients was made for the Western and Central Gulf of Mexico. As a result of this study, a new technique was developed which accurately predicts fracture gradients in the abnormally pressured formations. This technique is based on an empirical correlation of Effective Horizontal Stress Gradient to Effective Vertical Stress Gradient.

  20. Aortic emboli show surprising size dependent predilection for cerebral arteries: Results from computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Carr, Ian; Schwartz, Robert; Shadden, Shawn

    2012-11-01

    Cardiac emboli can have devastating consequences if they enter the cerebral circulation, and are the most common cause of embolic stroke. Little is known about relationships of embolic origin/density/size to cerebral events, as these relationships are difficult to observe. To better understand stroke risk from cardiac and aortic emboli, we developed a computational model to track emboli from the heart to the brain. Patient-specific models of the human aorta and arteries to the brain were derived from CT angiography from 10 MHIF patients. Blood flow was modeled by the Navier-Stokes equations using pulsatile inflow at the aortic valve, and physiologic Windkessel models at the outlets. Particulate was injected at the aortic valve and tracked using modified Maxey-Riley equations with a wall collision model. Results demonstrate that aortic emboli that entered the cerebral circulation through the carotid or vertebral arteries were localized to specific locations of the proximal aorta. The percentage of released particles embolic to the brain markedly increased with particle size from 0 to ~1-1.5 mm in all patients. Larger particulate became less likely to traverse the cerebral vessels. These findings are consistent with sparse literature based on transesophageal echo measurements. This work was supported in part by the National Science Foundation, award number 1157041.
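    As a highly simplified illustration of the particle-tracking step, the sketch below advects a single spherical particle through a prescribed velocity field using only Stokes drag. It is not the authors' Navier-Stokes/Maxey-Riley implementation; the flow field, particle properties, and time step are hypothetical.

    ```python
    import numpy as np

    def track_particle(u_field, x0, v0, diameter, rho_p, mu=3.5e-3, dt=1e-3, steps=2000):
        """Advect one particle with a drag-only simplification of the
        Maxey-Riley equation: dv/dt = (u(x) - v) / tau_p."""
        tau_p = rho_p * diameter ** 2 / (18.0 * mu)   # Stokes relaxation time
        x, v = np.array(x0, dtype=float), np.array(v0, dtype=float)
        path = [x.copy()]
        for _ in range(steps):
            v += dt * (u_field(x) - v) / tau_p        # explicit Euler step for velocity
            x += dt * v                               # and position
            path.append(x.copy())
        return np.array(path)

    # Hypothetical steady 0.3 m/s flow in +x; a 1 mm particle slightly denser than blood
    path = track_particle(lambda x: np.array([0.3, 0.0, 0.0]),
                          x0=[0.0, 0.0, 0.0], v0=[0.0, 0.0, 0.0],
                          diameter=1e-3, rho_p=1100.0)
    print(path[-1])
    ```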

  1. Animation shows promise in initiating timely cardiopulmonary resuscitation: results of a pilot study.

    PubMed

    Attin, Mina; Winslow, Katheryn; Smith, Tyler

    2014-04-01

    Delayed responses during cardiac arrest are common. Timely interventions during cardiac arrest have a direct impact on patient survival. Integration of technology in nursing education is crucial to enhance teaching effectiveness. The goal of this study was to investigate the effect of animation on nursing students' response time to cardiac arrest, including initiation of timely chest compression. Nursing students were randomized into experimental and control groups prior to practicing in a high-fidelity simulation laboratory. The experimental group was educated, by discussion and animation, about the importance of starting cardiopulmonary resuscitation upon recognizing an unresponsive patient. Afterward, a discussion session allowed students in the experimental group to gain more in-depth knowledge about the most recent changes in the cardiac resuscitation guidelines from the American Heart Association. A linear mixed model was run to investigate differences in time of response between the experimental and control groups while controlling for differences in those with additional degrees, prior code experience, and basic life support certification. The experimental group had a faster response time compared with the control group and initiated timely cardiopulmonary resuscitation upon recognition of deteriorating conditions (P < .0001). The results demonstrated the efficacy of combined teaching modalities for timely cardiopulmonary resuscitation. Providing opportunities for repetitious practice when a patient's condition is deteriorating is crucial for teaching safe practice. PMID:24473120

  2. Mitochondrial DNA transmitted from sperm in the blue mussel Mytilus galloprovincialis showing doubly uniparental inheritance of mitochondria, quantified by real-time PCR.

    PubMed

    Sano, Natsumi; Obata, Mayu; Komaru, Akira

    2010-07-01

    Doubly uniparental inheritance (DUI) of mitochondrial DNA transmission to progeny has been reported in the mussel, Mytilus. In DUI, males have both paternally (M type) and maternally (F type) transmitted mitochondrial DNA (mtDNA), but females have only the F type. To estimate how much M type mtDNA enters the egg with sperm in the DUI system, ratios of M type to F type mtDNA were measured before and after fertilization. M type mtDNA content in eggs increased markedly after fertilization. Similar patterns in M type content changes after fertilization were observed in crosses using the same males. To compare mtDNA quantities, we subsequently measured the ratios of mtDNA to the 28S ribosomal RNA gene (an endogenous control sequence) in sperm or unfertilized eggs using a real-time polymerase chain reaction (PCR) assay. F type content in unfertilized eggs was greater than the M type in sperm by about 1000-fold on average. M type content in spermatozoa was greater than in unfertilized egg, but their distribution overlapped. These results may explain the post-fertilization changes in zygotic M type content. We previously demonstrated that paternal and maternal M type mtDNAs are transmitted to offspring, and hypothesized that the paternal M type contributed to M type transmission to the next generation more than the maternal type did. These quantitative data on M and F type mtDNA in sperm and eggs provide further support for that hypothesis. PMID:20608851
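    The mtDNA-to-28S ratios described above are a form of relative quantification by real-time PCR. A minimal sketch using an efficiency-corrected ΔCt calculation follows; the Ct values and amplification efficiencies are hypothetical, and the original study may have used standard curves instead.

    ```python
    def relative_quantity(ct_target, ct_reference, eff_target=2.0, eff_reference=2.0):
        """Target copy number relative to a reference sequence, given the
        per-cycle amplification efficiencies (2.0 = perfect doubling)."""
        return (eff_reference ** ct_reference) / (eff_target ** ct_target)

    # Hypothetical Ct values: F-type mtDNA vs. the 28S gene in an unfertilized egg,
    # and M-type mtDNA vs. 28S in sperm
    f_type_per_28s = relative_quantity(ct_target=18.0, ct_reference=24.0)
    m_type_per_28s = relative_quantity(ct_target=28.0, ct_reference=24.0)
    print(f_type_per_28s / m_type_per_28s)  # ~1000-fold more F type per 28S copy
    ```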

  3. Quantifying geological processes on Mars-Results of the high resolution stereo camera (HRSC) on Mars express

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; de Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K.-D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-07-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevation models on a local to regional scale are the unique strength of the HRSC instrument. The analysis of these data products enabled quantifying geological processes such as effusion rates of lava flows, tectonic deformation, discharge of water in channels, formation timescales of deltas, and geometry of sedimentary deposits, as well as estimating the age of geological units by crater size-frequency distribution measurements. Both the quantification of geological processes and the age determination allow constraining the evolution of Martian geologic activity in space and time. A second major contribution of HRSC is the discovery of episodicity in the intensity of geological processes on Mars. This has been revealed by comparative age dating of volcanic, fluvial, glacial, and lacustrine deposits. Volcanic processes on Mars have been active over more than 4 Gyr, with peak phases in all three geologic epochs, generally ceasing towards the Amazonian. Fluvial and lacustrine activity phases span the period from Noachian until Amazonian times, but detailed studies show that they have been interrupted by multiple and long-lasting phases of quiescence. Glacial activity also shows discrete phases of enhanced intensity that may correlate with periods of increased spin-axis obliquity. The episodicity of geological processes like volcanism, erosion, and glaciation on Mars reflects close correlation between surface processes and endogenic activity as well as orbit variations and changing climate conditions.

  4. Quantifying microwear on experimental Mistassini quartzite scrapers: preliminary results of exploratory research using LSCM and scale-sensitive fractal analysis.

    PubMed

    Stemp, W James; Lerner, Harry J; Kristant, Elaine H

    2013-01-01

    Although previous use-wear studies involving quartz and quartzite have been undertaken by archaeologists, these are comparatively few in number. Moreover, there has been relatively little effort to quantify use-wear on stone tools made from quartzite. The purpose of this article is to determine the effectiveness of a measurement system, laser scanning confocal microscopy (LSCM), to document the surface roughness or texture of experimental Mistassini quartzite scrapers used on two different contact materials (fresh and dry deer hide). As in previous studies using LSCM on chert, flint, and obsidian, this exploratory study incorporates a mathematical algorithm that permits the discrimination of surface roughness based on comparisons at multiple scales. Specifically, we employ measures of relative area (RelA) coupled with the F-test to discriminate used from unused stone tool surfaces, as well as surfaces of quartzite scrapers used on dry and fresh deer hide. Our results further demonstrate the effect of raw material variation on use-wear formation and its documentation using LSCM and RelA. PMID:22688593

  5. Quantifying Contextuality

    NASA Astrophysics Data System (ADS)

    Grudka, A.; Horodecki, K.; Horodecki, M.; Horodecki, P.; Horodecki, R.; Joshi, P.; Kłobus, W.; Wójcik, A.

    2014-03-01

    Contextuality is central both to the foundations of quantum theory and to novel information processing tasks. Despite some recent proposals, it still faces a fundamental problem: how to quantify its presence? In this work, we provide a universal framework for quantifying contextuality. We pursue two complementary approaches: (i) the bottom-up approach, where we introduce a communication game, which grasps the phenomenon of contextuality in a quantitative manner; (ii) the top-down approach, where we just postulate two measures, relative entropy of contextuality and contextuality cost, analogous to existing measures of nonlocality (a special case of contextuality). We then match the two approaches by showing that the measure emerging from the communication scenario turns out to be equal to the relative entropy of contextuality. Our framework allows for the quantitative, resource-type comparison of completely different games. We give analytical formulas for the proposed measures for some contextual systems, showing in particular that the Peres-Mermin game is an order of magnitude more contextual than that of Klyachko et al. Furthermore, we explore properties of these measures such as monotonicity and additivity.

  6. Quantifying Reemission Of Mercury From Terrestrial And Aquatic Systems Using Stable Isotopes: Results From The Experimental Lakes Area METAALICUS Study

    NASA Astrophysics Data System (ADS)

    Lindberg, S. E.; Southworth, G.; Peterson, M.; Hintelmann, H.; Graydon, J.; St. Louis, V.; Amyot, M.; Krabbenhoft, D.

    2003-12-01

    This study represents the first attempt to directly quantify the re-emission of deposited Hg. This is crucial for understanding whether Hg emitted from natural surfaces is of geological origin or is re-emission of recently deposited Hg. Three stable Hg isotopes are being added experimentally to a headwater lake, its wetlands, and its watershed in a whole-ecosystem manipulation study at the Experimental Lakes Area in Canada. Our overall objective is to determine the link between atmospheric deposition and Hg in fish, but numerous aspects of the biogeochemical cycling of Hg are being addressed during METAALICUS (Mercury Experiment to Assess Atmospheric Loading in Canada and the U.S.), including Hg re-emission. Pilot studies in 1999-2000 applied enriched 200Hg to isolated upland and wetland plots, and to lake enclosures. Fluxes were measured with dynamic chambers for several months. The 200Hg spike was quickly detected in ground-level air (e.g. 5 ng/m3) suggesting rapid initial volatilization of the new Hg. Initial 200Hg fluxes exceeded ambient Hg fluxes, but emissions of 200Hg decreased to non-detectable levels within 3 months; about 5% of the applied 200Hg spike was emitted from uplands and about 10% from wetlands. The 200Hg spike (representing new deposition) was generally more readily volatilized than was ambient (old) Hg in both sites. Mercury evasion to the atmosphere from a lake enclosure was also measured and compared with the flux estimated from measured dissolved gaseous mercury (DGM). The introduction of the tracer spike was followed by increased concentrations of DGM and higher fluxes to the atmosphere. In some cases, the observed and calculated fluxes were similar; however, it was common for the observed flux to exceed the calculated flux significantly under some conditions, suggesting that DGM concentration alone in the water column is a poor predictor of gaseous mercury evasion. A substantially larger fraction of the newly deposited Hg was re-emitted from the lake than from wetlands or from upland soils. The whole-ecosystem manipulation is now underway at ELA Lake 658. Addition of 200Hg (to uplands), 202Hg (lake), and 199Hg (wetlands) commenced in 2001 and was completed in June 2003. These data are now being analyzed, and appear to support the behavior seen in the pilot studies; final results will be presented.
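    The re-emitted percentages quoted above (about 5% from uplands and 10% from wetlands) come from integrating the measured chamber fluxes over time and dividing by the isotope load applied per unit area; a minimal sketch with hypothetical numbers follows.

    ```python
    def percent_reemitted(times_h, flux_ng_m2_h, applied_ng_m2):
        """Trapezoid-rule integral of a chamber flux time series, expressed
        as a percentage of the isotope spike applied per unit area."""
        total = 0.0
        for i in range(1, len(times_h)):
            total += 0.5 * (flux_ng_m2_h[i - 1] + flux_ng_m2_h[i]) * (times_h[i] - times_h[i - 1])
        return 100.0 * total / applied_ng_m2

    # Hypothetical decline of the 200Hg flux over roughly three months (times in hours)
    times = [0, 24, 168, 720, 2160]
    flux = [40.0, 25.0, 10.0, 2.0, 0.0]   # ng m-2 h-1
    print(percent_reemitted(times, flux, applied_ng_m2=2.0e5))  # ≈ 4% of the applied spike
    ```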

  7. Seeking to quantify the ferromagnetic-to-antiferromagnetic interface coupling resulting in exchange bias with various thin-film conformations

    SciTech Connect

    Hsiao, C. H.; Wang, S.; Ouyang, H.; Desautels, R. D.; Lierop, J. van; Lin, K. W.

    2014-08-07

    Ni3Fe/(Ni, Fe)O thin films with bilayer and nanocrystallite dispersion morphologies are prepared with a dual ion beam deposition technique permitting precise control of nanocrystallite growth, composition, and admixtures. A bilayer morphology provides a Ni3Fe-to-NiO interface, while the dispersion films have different mixtures of Ni3Fe, NiO, and FeO nanocrystallites. Using detailed analyses of high resolution transmission electron microscopy images with Multislice simulations, the nanocrystallites' structures and phases are determined, and the intermixing between the Ni3Fe, NiO, and FeO interfaces is quantified. From field-cooled hysteresis loops, the exchange bias loop shift from spin interactions at the interfaces are determined. With similar interfacial molar ratios of FM-to-AF, we find the exchange bias field essentially unchanged. However, when the interfacial ratio of FM to AF was FM rich, the exchange bias field increases. Since the FM/AF interface ‘contact’ areas in the nanocrystallite dispersion films are larger than that of the bilayer film, and the nanocrystallite dispersions exhibit larger FM-to-AF interfacial contributions to the magnetism, we attribute the changes in the exchange bias to be from increases in the interfacial segments that suffer defects (such as vacancies and bond distortions), that also affects the coercive fields.

  8. Seeking to quantify the ferromagnetic-to-antiferromagnetic interface coupling resulting in exchange bias with various thin-film conformations

    NASA Astrophysics Data System (ADS)

    Hsiao, C. H.; Desautels, R. D.; Wang, S.; Lin, K. W.; Ouyang, H.; van Lierop, J.

    2014-08-01

    Ni3Fe/(Ni, Fe)O thin films with bilayer and nanocrystallite dispersion morphologies are prepared with a dual ion beam deposition technique permitting precise control of nanocrystallite growth, composition, and admixtures. A bilayer morphology provides a Ni3Fe-to-NiO interface, while the dispersion films have different mixtures of Ni3Fe, NiO, and FeO nanocrystallites. Using detailed analyses of high resolution transmission electron microscopy images with Multislice simulations, the nanocrystallites' structures and phases are determined, and the intermixing between the Ni3Fe, NiO, and FeO interfaces is quantified. From field-cooled hysteresis loops, the exchange bias loop shift from spin interactions at the interfaces are determined. With similar interfacial molar ratios of FM-to-AF, we find the exchange bias field essentially unchanged. However, when the interfacial ratio of FM to AF was FM rich, the exchange bias field increases. Since the FM/AF interface `contact' areas in the nanocrystallite dispersion films are larger than that of the bilayer film, and the nanocrystallite dispersions exhibit larger FM-to-AF interfacial contributions to the magnetism, we attribute the changes in the exchange bias to be from increases in the interfacial segments that suffer defects (such as vacancies and bond distortions), that also affects the coercive fields.
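    The exchange bias field and coercive field discussed in both records above are typically read off a field-cooled hysteresis loop from its two zero-magnetization crossings; a minimal sketch with hypothetical crossing fields follows.

    ```python
    def exchange_bias_and_coercivity(h_left, h_right):
        """Loop shift (exchange bias field) and coercive field from the two
        coercive-field crossings of a field-cooled hysteresis loop (in Oe)."""
        h_eb = (h_left + h_right) / 2.0   # offset of the loop centre from zero field
        h_c = (h_right - h_left) / 2.0    # half-width of the loop
        return h_eb, h_c

    # Hypothetical crossings for a bilayer film and an FM-rich dispersion film
    print(exchange_bias_and_coercivity(-180.0, 80.0))   # (-50.0, 130.0)
    print(exchange_bias_and_coercivity(-230.0, 90.0))   # (-70.0, 160.0)
    ```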

  9. Quantifying saltmarsh vegetation and its effect on wave height dissipation: Results from a UK East coast saltmarsh

    NASA Astrophysics Data System (ADS)

    Möller, I.

    2006-09-01

    The degree to which incident wind waves are attenuated over intertidal surfaces is critical to the development of coastal wetlands, which are, amongst other processes, affected by the delivery, erosion, and/or resuspension of sediment due to wave action. Knowledge on wave attenuation over saltmarsh surfaces is also essential for accurate assessments of their natural sea-defence value to be made and incorporated into sea defence and management schemes. The aim of this paper is to evaluate the use of a digital photographic method for the quantification of marsh vegetation density and then to investigate the relative roles played by hydrodynamic controls and vegetation density/type in causing the attenuation of incident waves over a macro-tidal saltmarsh. Results show that a significant statistical relationship exists between the density of vegetation measured in side-on photographs and the dry biomass of the photographed vegetation determined through direct harvesting. The potential of the digital photographic method for the spatial and temporal comparison of marsh surface vegetation biomass, density, and canopy structure is highlighted and the method was applied to assess spatial and seasonal differences in vegetation density and their effect on wave attenuation at three locations on a macro-tidal saltmarsh on Dengie Peninsula, Essex, UK. In this environmental setting, vegetation density/type did not have a significant direct effect on wave attenuation but modified the process of wave transformation under different hydrodynamic conditions. At the two locations, characterised by a relatively tall canopy (15-26 cm) with biomass values of 430-500 g m -2, dominated by Spartina spp. (>70% of total dry biomass), relative incident wave height (wave height/water depth) is identified as a statistically significant dominant positive control on wave attenuation up to a threshold value of 0.55, beyond which wave attenuation showed no significant further increase. At the third location, characterised by only slightly less biomass (398 g m -2) but a shorter (6 cm) canopy of the annual Salicornia spp., no significant relationship existed between wave attenuation and relative wave height. Seasonally (between September and December) significant temporal increase/decrease in vegetation density occurred in one of the Spartina canopies and in the Salicornia canopy, respectively, and led to an expected (but not statistically significant) increase/decrease in wave attenuation. The wider implications of these findings in the context of form-process interactions on saltmarshes and their effect on marsh evolution are also discussed.
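    The two quantities analysed above can be computed directly: relative incident wave height is the incident wave height divided by the local water depth, and attenuation is the fractional height loss between two measurement stations. The values in the sketch below are hypothetical.

    ```python
    def relative_wave_height(h_incident_m, water_depth_m):
        return h_incident_m / water_depth_m

    def wave_attenuation_pct(h_incident_m, h_transmitted_m):
        """Percentage reduction in wave height across the marsh transect."""
        return 100.0 * (h_incident_m - h_transmitted_m) / h_incident_m

    # Hypothetical conditions over a Spartina-dominated transect
    print(relative_wave_height(0.22, 0.50))   # 0.44, below the 0.55 threshold noted above
    print(wave_attenuation_pct(0.22, 0.13))   # ~41% height reduction
    ```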

  10. Quantifying stomatal and non-stomatal limitations to carbon assimilation resulting from leaf aging and drought in mature deciduous tree species.

    PubMed

    Wilson, Kell B.; Baldocchi, Dennis D.; Hanson, Paul J.

    2000-06-01

    Gas exchange techniques were used to investigate light-saturated carbon assimilation and its stomatal and non-stomatal limitations over two seasons in mature trees of five species in a closed deciduous forest. Stomatal and non-stomatal contributions to decreases in assimilation resulting from leaf age and drought were quantified relative to the maximum rates obtained early in the season at optimal soil water contents. Although carbon assimilation, stomatal conductance and photosynthetic capacity (V(cmax)) decreased with leaf age, decreases in V(cmax) accounted for about 75% of the leaf-age related reduction in light-saturated assimilation rates, with a secondary role for stomatal conductance (around 25%). However, when considered independently from leaf age, the drought response was dominated by stomatal limitations, accounting for about 75% of the total limitation. Some of the analytical difficulties associated with computing limitation partitioning are discussed, including path dependence, patchy stomatal closure and diffusion in the mesophyll. Although these considerations may introduce errors in our estimates, our analysis establishes some reasonable boundaries on relative limitations and shows differences between drought and non-drought years. Estimating seasonal limitations under natural conditions, as shown in this study, provides a useful basis for comparing limitation processes between years and species. PMID:12651499

  11. Clean Colon Software Program (CCSP), Proposal of a standardized Method to quantify Colon Cleansing During Colonoscopy: Preliminary Results

    PubMed Central

    Rosa-Rizzotto, Erik; Dupuis, Adrian; Guido, Ennio; Caroli, Diego; Monica, Fabio; Canova, Daniele; Cervellin, Erica; Marin, Renato; Trovato, Cristina; Crosta, Cristiano; Cocchio, Silvia; Baldo, Vincenzo; De Lazzari, Franca

    2015-01-01

    Background and study aims: Neoplastic lesions can be missed during colonoscopy, especially when cleansing is inadequate. Bowel preparation scales have significant limitations and no objective and standardized method currently exists to establish colon cleanliness during colonoscopy. The aims of our study are to create a software algorithm that is able to analyze bowel cleansing during colonoscopies and to compare it to a validated bowel preparation scale. Patients and methods: A software application (the Clean Colon Software Program, CCSP) was developed. Fifty colonoscopies were carried out and video-recorded. Each video was divided into 3 segments: cecum-hepatic flexure (1st Segment), hepatic flexure-descending colon (2nd Segment) and rectosigmoid segment (3rd Segment). Each segment was recorded twice, both before and after careful cleansing of the intestinal wall. A score from 0 (dirty) to 3 (clean) was then assigned by CCSP. All the videos were also viewed by four endoscopists and colon cleansing was established using the Boston Bowel Preparation Scale. Interclass correlation coefficient was then calculated between the endoscopists and the software. Results: The cleansing score of the prelavage colonoscopies was 1.56 ± 0.52 and the postlavage one was 2.08 ± 0.59 (P < 0.001) showing an approximate 33.3 % improvement in cleansing after lavage. Right colon segment prelavage (0.99 ± 0.69) was dirtier than left colon segment prelavage (2.07 ± 0.71). The overall interobserver agreement between the average cleansing score for the 4 endoscopists and the software pre-cleansing was 0.87 (95 % CI, 0.84 – 0.90) and post-cleansing was 0.86 (95 % CI, 0.83 – 0.89). Conclusions: The software is able to discriminate clean from non-clean colon tracts with high significance and is comparable to endoscopist evaluation. PMID:26528508

  12. Quantifying the effect of crop surface albedo variability on GHG budgets in a life cycle assessment approach: methodology and results.

    NASA Astrophysics Data System (ADS)

    Ferlicoq, Morgan; Ceschia, Eric; Brut, Aurore; Tallec, Tiphaine

    2013-04-01

    We tested a new method to estimate the radiative forcing of several crops at the annual and rotation scales, using local measurement data from two ICOS experimental sites. We used jointly 1) the radiative forcing caused by greenhouse gas (GHG) net emissions, calculated by using a Life Cycle Analysis (LCA) approach and in situ measurements (Ceschia et al. 2010), and 2) the radiative forcing caused by rapid changes in surface albedo typical of those ecosystems and resulting from management and crop phenology. The carbon and GHG budgets (GHGB) of 2 crop sites with contrasted management located in South West France (Auradé and Lamasquère sites) were estimated over a complete rotation by combining a classical LCA approach with on-site flux measurements. At both sites, carbon inputs (organic fertilisation and seeds), carbon exports (harvest) and net ecosystem production (NEP), measured with the eddy covariance technique, were calculated. The variability of the different terms and their relative contributions to the net ecosystem carbon budget (NECB) were analysed for all site-years, and the effect of management on NECB was assessed. To account for GHG fluxes that were not directly measured on site, we estimated the emissions caused by field operations (EFO) for each site using emission factors from the literature. The EFO were added to the NECB to calculate the total GHGB for a range of cropping systems and management regimes. N2O emissions were calculated following the IPCC (2007) guidelines, and CH4 emissions were assumed to be negligible compared to other contributions to the net GHGB. Additionally, albedo was calculated continuously using the short wave incident and reflected radiation measurements in the field (0.3-3µm) from CNR1 sensors. Mean annual differences in albedo and deduced radiative forcing from a reference value were then compared for all site-years. Mean annual differences in radiative forcing were then converted into g C equivalent m-2 in order to add this effect to the GHG budget (Muñoz et al. 2010). Increasing the length of the vegetative period is considered one of the main levers for improving the NECB of crop ecosystems. Therefore, we also tested the effect of adding intermediate crops or maintaining voluntary crop re-growth on both the NECB and the radiative forcing caused by the changes in mean annual surface albedo. We showed that the NEP was improved and, as a consequence, so were the NECB and GHGB. Intermediate crops also increased the mean annual surface albedo and therefore caused a negative radiative forcing (cooling effect) expressed in g C equivalent m-2 (sink). The use of an intermediate crop could in some cases switch the crop from a positive NEP (source) to a negative one (sink) and the change in radiative forcing (up to -110 g C-eq m-2 yr-1) could overwhelm the NEP term.

  13. Prognostic significance of the intraoperative macroscopic serosal invasion finding when it shows a discrepancy with the pathologic result in gastric cancer

    PubMed Central

    Kang, Sang Yull; Park, Ho Sung

    2016-01-01

    Purpose Depth of wall invasion is an important prognostic factor in patients with gastric cancer, whereas the prognostic significance of intraoperative macroscopic serosal invasion (mSE) findings remains unclear when they show a discrepancy with pathologic findings. This study, therefore, assessed the prognostic significance of mSE. Methods Data from a cohort of 2,835 patients with resectable gastric cancer who underwent surgery between 1990 and 2010 were retrospectively reviewed. Results The overall accuracy of mSE and pathologic results was 83.4%. The accuracy of mSE was 75.5% in pT2. On the other hand, the accuracy in pT3 dropped to 24.5%. According to mSE findings (+/–), the 5-year disease-specific survival (DSS) rate differed significantly in patients with pT2 (+; 74.2% vs. –; 92.0%), pT3 (+; 76.7% vs. –; 91.8%) and pT4a (+; 51.3% vs. –; 72.8%) (P < 0.001 each), but not in patients with T1 tumor. Multivariate analysis showed that mSE findings (hazard ratio [HR], 2.275; 95% confidence interval [CI], 1.148–4.509), tumor depth (HR, 6.894; 95% CI, 2.325–20.437), nodal status (HR, 5.206; 95% CI, 2.298–11.791), distant metastasis (HR, 2.881; 95% CI, 1.388–6.209), radical resection (HR, 2.002; 95% CI, 1.017–3.940), and lymphatic invasion (HR, 2.713; 95% CI, 1.424–5.167) were independent predictors of 5-year DSS rate. Conclusion We observed considerable discrepancies between macroscopic and pathologic diagnosis of serosal invasion. However, macroscopic diagnosis of serosal invasion was independently prognostic of 5-year DSS. This suggests that, because the pathologic results may not be perfect and the local inflammatory change associated with mSE(+) could affect survival, a combination of mSE(+/–) and pathologic depth may be predictive of prognosis in patients with gastric cancer. PMID:27186569
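
    Hazard ratios of this kind are typically obtained from a multivariate Cox proportional-hazards model. A minimal sketch with the lifelines package, using toy data and hypothetical column names rather than the study cohort:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical per-patient records: follow-up in months, death from
    # gastric cancer (1/0), and two binary predictors as in the abstract.
    df = pd.DataFrame({
        "months":        [60, 12, 48, 7, 55, 30, 22, 60, 15, 41, 36, 9],
        "dss_event":     [0, 1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1],
        "mse_positive":  [0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 0, 1],
        "node_positive": [1, 1, 0, 1, 0, 0, 1, 0, 1, 1, 1, 0],
    })

    cph = CoxPHFitter()
    cph.fit(df, duration_col="months", event_col="dss_event")
    cph.print_summary()  # the exp(coef) column gives the hazard ratios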

  14. Quantifying Quality

    ERIC Educational Resources Information Center

    Kazlauskas, Edward J.; Bennion, Bruce

    1977-01-01

    Speeches and tutorials of the ASIS Workshop "Quantifying Quality" are summarized. Topics include quantitative methods for measuring performance; queueing theory in libraries; data base value analysis; performance standards for libraries; use of Statistical Package for the Social Sciences in decision making; designing optimal information access…

  15. Not all Surface Waters show a Strong Relation between DOC and Hg Species: Results from an Adirondack Mountain Watershed

    NASA Astrophysics Data System (ADS)

    Burns, D. A.; Schelker, J.; Murray, K. R.; Brigham, M. E.; Aiken, G.

    2009-12-01

    Several recent papers have highlighted the strong statistical correlation between dissolved organic carbon (DOC) concentrations and total dissolved mercury (THgd) and/or dissolved methyl Hg (MeHgd). These relations of organic carbon with Hg species are often even stronger when a measurement that reflects some fraction of the DOC is used such as UV absorbance at 254 nm or the hydrophobic acid fraction. These strong relations are not surprising given the pivotal role DOC plays in binding and transporting Hg, which is otherwise relatively insoluble in dilute waters. In this study, we show data collected monthly and during some storms and snowmelt over 2.5 years from the 65 km2 Fishing Brook watershed in the Adirondack Mountains of New York. This dataset is noteworthy because of a weak and statistically non-significant (p > 0.05) relationship between DOC and either of THgd or MeHgd over the entire study period. We believe that the lack of a strong DOC-Hg relation in Fishing Brook reflects the combined effects of the heterogeneous land cover and the presence of three ponds within the watershed. The watershed is dominantly (89.3%) hardwood and coniferous forest with 8% wetland area, and 2.7% open water. Despite the lack of a strong relation between DOC and Hg species across the annual hydrograph, the dataset shows strong within-season correlations that have different y-intercepts and slopes between the growing season (May 1 - Sept. 30) and dormant season (Oct. 1 - April 30), as well as strong, but seasonally varying DOC-Hg correlations at smaller spatial scales in data collected on several occasions in 10 sub-watersheds of Fishing Brook. We hypothesize that a combination of several factors can account for these annually weak, but seasonally and spatially strong DOC-Hg correlations: (1) seasonal variations in runoff generation processes from upland and wetland areas that may yield DOC with varying Hg-binding characteristics, (2) photo-induced losses of Hg species and DOC in ponded areas, and (3) the effects of the widely varying seasonal temperature and snow cover on the rates of microbial processes such as the decomposition of soil organic matter and methylation of Hg. These results emphasize that not all watersheds show simple linear relations between DOC and Hg species on an annual basis, and provide a caution that measurements such as the optical properties of waters are not always a strong surrogate for Hg.
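
    The seasonally distinct slopes and intercepts described here amount to fitting separate DOC-Hg regressions for the growing and dormant seasons. A small sketch under assumed column names (illustrative only, not the study's processing code):

    import pandas as pd
    from scipy.stats import linregress

    def seasonal_fits(df):
        """df: one row per sample, datetime index, hypothetical columns
        'doc_mgL' and 'thgd_ngL' (DOC and total dissolved Hg)."""
        growing_months = range(5, 10)                  # May 1 - Sept. 30
        in_growing = df.index.month.isin(growing_months)
        out = {}
        for name, season in [("growing", df[in_growing]),
                             ("dormant", df[~in_growing])]:
            fit = linregress(season["doc_mgL"], season["thgd_ngL"])
            out[name] = {"slope": fit.slope, "intercept": fit.intercept,
                         "r2": fit.rvalue ** 2, "p": fit.pvalue}
        return out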

  16. Genomic and Enzymatic Results Show Bacillus cellulosilyticus Uses a Novel Set of LPXTA Carbohydrases to Hydrolyze Polysaccharides

    PubMed Central

    Mead, David; Drinkwater, Colleen; Brumm, Phillip J.

    2013-01-01

    Background Alkaliphilic Bacillus species are intrinsically interesting due to the bioenergetic problems posed by growth at high pH and high salt. Three alkaline cellulases have been cloned, sequenced and expressed from Bacillus cellulosilyticus N-4 (Bcell), making it an excellent target for genomic sequencing and mining of biomass-degrading enzymes. Methodology/Principal Findings The genome of Bcell is a single chromosome of 4.7 Mb with no plasmids present and three large phage insertions. The most unusual feature of the genome is the presence of 23 LPXTA membrane anchor proteins; 17 of these are annotated as involved in polysaccharide degradation. These two values are significantly higher than seen in any other Bacillus species. This high number of membrane anchor proteins is seen only in pathogenic Gram-positive organisms such as Listeria monocytogenes or Staphylococcus aureus. Bcell also possesses four sortase D subfamily 4 enzymes that incorporate LPXTA-bearing proteins into the cell wall; three of these are closely related to each other and unique to Bcell. Cell fractionation and enzymatic assay of Bcell cultures show that the majority of polysaccharide degradation is associated with the cell wall LPXTA-enzymes, an unusual feature in Gram-positive aerobes. Genomic analysis and growth studies both strongly argue against Bcell being a truly cellulolytic organism, in spite of its name. Preliminary results suggest that fungal mycelia may be the natural substrate for this organism. Conclusions/Significance Bacillus cellulosilyticus N-4, in spite of its name, does not possess any of the genes necessary for crystalline cellulose degradation, demonstrating the risk of classifying microorganisms without the benefit of genomic analysis. Bcell is the first Gram-positive aerobic organism shown to use predominantly cell-bound, non-cellulosomal enzymes for polysaccharide degradation. The LPXTA-sortase system utilized by Bcell may have applications both in anchoring cellulases and other biomass-degrading enzymes to Bcell itself and in anchoring proteins in other Gram-positive organisms. PMID:23593409

  17. Development and application of methods to quantify spatial and temporal hyperpolarized 3He MRI ventilation dynamics: preliminary results in chronic obstructive pulmonary disease

    NASA Astrophysics Data System (ADS)

    Kirby, Miranda; Wheatley, Andrew; McCormack, David G.; Parraga, Grace

    2010-03-01

    Hyperpolarized helium-3 (3He) magnetic resonance imaging (MRI) has emerged as a non-invasive research method for quantifying lung structural and functional changes, enabling direct visualization in vivo at high spatial and temporal resolution. Here we describe the development of methods for quantifying ventilation dynamics in response to salbutamol in Chronic Obstructive Pulmonary Disease (COPD). A whole-body 3.0 Tesla Excite 12.0 MRI system was used to obtain multi-slice coronal images acquired immediately after subjects inhaled hyperpolarized 3He gas. Ventilated volume (VV), ventilation defect volume (VDV) and thoracic cavity volume (TCV) were recorded following segmentation of 3He and 1H images respectively, and used to calculate percent ventilated volume (PVV) and ventilation defect percent (VDP). Manual segmentation and Otsu thresholding were significantly correlated for VV (r=.82, p=.001), VDV (r=.87, p=.0002), PVV (r=.85, p=.0005), and VDP (r=.85, p=.0005). The level of agreement between these segmentation methods was also evaluated using Bland-Altman analysis, and this showed that manual segmentation was consistently higher for VV (Mean=.22 L, SD=.05) and consistently lower for VDV (Mean=-.13, SD=.05) measurements than Otsu thresholding. To automate the quantification of newly ventilated pixels (NVp) post-bronchodilator, we used translation, rotation, and scaling transformations to register pre- and post-salbutamol images. There was a significant correlation between NVp and VDV (r=-.94, p=.005) and between percent newly ventilated pixels (PNVp) and VDP (r=-.89, p=.02), but not for VV or PVV. Evaluation of 3He MRI ventilation dynamics using Otsu thresholding and landmark-based image registration provides a way to regionally quantify functional changes in COPD subjects after treatment with beta-agonist bronchodilators, a common COPD and asthma therapy.
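
    The volume arithmetic behind PVV and VDP follows directly from an Otsu threshold on the 3He image restricted to a 1H-derived thoracic mask. A minimal sketch with scikit-image, assuming pre-loaded arrays (all names are illustrative):

    import numpy as np
    from skimage.filters import threshold_otsu

    def ventilation_metrics(he3, thoracic_mask, voxel_ml):
        """he3: 3He signal array; thoracic_mask: boolean mask from the 1H
        image; voxel_ml: voxel volume in millilitres."""
        thr = threshold_otsu(he3[thoracic_mask])       # threshold from in-mask voxels
        ventilated = (he3 > thr) & thoracic_mask

        vv = ventilated.sum() * voxel_ml / 1000.0      # ventilated volume, L
        tcv = thoracic_mask.sum() * voxel_ml / 1000.0  # thoracic cavity volume, L
        vdv = tcv - vv                                 # ventilation defect volume, L
        return {"VV": vv, "TCV": tcv, "VDV": vdv,
                "PVV": 100.0 * vv / tcv, "VDP": 100.0 * vdv / tcv}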

  18. Presentation Showing Results of a Hydrogeochemical Investigation of the Standard Mine Vicinity, Upper Elk Creek Basin, Colorado

    USGS Publications Warehouse

    Manning, Andrew H.; Verplanck, Philip L.; Mast, M. Alisa; Wanty, Richard B.

    2008-01-01

    PREFACE This Open-File Report consists of a presentation given in Crested Butte, Colorado on December 13, 2007 to the Standard Mine Advisory Group. The presentation was paired with another presentation given by the Colorado Division of Reclamation, Mining, and Safety on the physical features and geology of the Standard Mine. The presentation in this Open-File Report summarizes the results and conclusions of a hydrogeochemical investigation of the Standard Mine performed by the U.S. Geological Survey (Manning and others, in press). The purpose of the investigation was to aid the U.S. Environmental Protection Agency in evaluating remediation options for the Standard Mine site. Additional details and supporting data related to the information in this presentation can be found in Manning and others (in press).

  19. QUANTIFYING FOREST ABOVEGROUND CARBON POOLS AND FLUXES USING MULTI-TEMPORAL LIDAR: A report on field monitoring, remote sensing MMV, GIS integration, and modeling results for forestry field validation test to quantify aboveground tree biomass and carbon

    SciTech Connect

    Lee Spangler; Lee A. Vierling; Eva K. Stand; Andrew T. Hudak; Jan U.H. Eitel; Sebastian Martinuzzi

    2012-04-01

    Sound policy recommendations relating to the role of forest management in mitigating atmospheric carbon dioxide (CO2) depend upon establishing accurate methodologies for quantifying forest carbon pools for large tracts of land that can be dynamically updated over time. Light Detection and Ranging (LiDAR) remote sensing is a promising technology for achieving accurate estimates of aboveground biomass and thereby carbon pools; however, not much is known about the accuracy of estimating biomass change and carbon flux from repeat LiDAR acquisitions containing different data sampling characteristics. In this study, discrete return airborne LiDAR data were collected in 2003 and 2009 across approximately 20,000 hectares (ha) of an actively managed, mixed conifer forest landscape in northern Idaho, USA. Forest inventory plots were established via a stratified random sampling design and sampled in 2003 and 2009. The Random Forest machine learning algorithm was used to establish statistical relationships between inventory data and forest structural metrics derived from the LiDAR acquisitions. Aboveground biomass maps were created for the study area based on statistical relationships developed at the plot level. Over this 6-year period, we found that the mean increase in biomass due to forest growth across the non-harvested portions of the study area was 4.8 metric tons/hectare (Mg/ha). In these non-harvested areas, we found a significant difference in biomass increase among forest successional stages, with a higher biomass increase in mature and old forest compared to stand initiation and young forest. Approximately 20% of the landscape had been disturbed by harvest activities during the six-year time period, representing a biomass loss of >70 Mg/ha in these areas. During the study period, these harvest activities outweighed growth at the landscape scale, resulting in an overall loss in aboveground carbon at this site. The 30-fold increase in sampling density between the 2003 and 2009 acquisitions did not affect the biomass estimates. Overall, LiDAR data coupled with field reference data offer a powerful method for calculating pools and changes in aboveground carbon in forested systems. The results of our study suggest that multitemporal LiDAR-based approaches are likely to be useful for high quality estimates of aboveground carbon change in conifer forest systems.
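
    The plot-level modelling step pairs field-measured biomass with LiDAR structure metrics in a Random Forest regression and then applies the fitted model wall-to-wall. A minimal sketch with scikit-learn on toy data (metric names, sizes and the raster step are illustrative, not the project's workflow):

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import cross_val_score

    # X: plot-level LiDAR metrics (e.g. height percentiles, canopy cover);
    # y: field-measured aboveground biomass (Mg/ha). Toy data only.
    rng = np.random.default_rng(0)
    X = rng.uniform(size=(80, 6))
    y = 150 * X[:, 0] + 40 * X[:, 1] + rng.normal(scale=10, size=80)

    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    print("CV R^2:", cross_val_score(rf, X, y, cv=5, scoring="r2").mean())

    rf.fit(X, y)  # fit on all plots, then predict per-pixel biomass
    # biomass_map = rf.predict(lidar_metric_pixels)  # hypothetical raster input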

  20. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle.

    PubMed

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting of information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the direct genetic potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008
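
    The workflow sketched in this abstract, standardize the EPDs, cluster with k-means (k = 3) and rank animals on the first linear discriminant, can be written in a few lines of scikit-learn; the data and variable names below are illustrative, not the Polled Nellore records:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.cluster import KMeans
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(1)
    epds = rng.normal(size=(300, 8))           # toy matrix: 8 EPDs per animal

    z = StandardScaler().fit_transform(epds)   # zero mean, unit variance
    clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(z)

    lda = LinearDiscriminantAnalysis(n_components=2).fit(z, clusters)
    ld1 = lda.transform(z)[:, 0]               # index analogous to LD1
    ranking = np.argsort(ld1)[::-1]            # animals ordered by the index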

  1. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle

    PubMed Central

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting of information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the direct genetic potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008

  2. QUANTIFYING SPICULES

    SciTech Connect

    Pereira, Tiago M. D.; De Pontieu, Bart; Carlsson, Mats

    2012-11-01

    Understanding the dynamic solar chromosphere is fundamental in solar physics. Spicules are an important feature of the chromosphere, connecting the photosphere to the corona, potentially mediating the transfer of energy and mass. The aim of this work is to study the properties of spicules over different regions of the Sun. Our goal is to investigate if there is more than one type of spicule, and how spicules behave in the quiet Sun, coronal holes, and active regions. We make use of high cadence and high spatial resolution Ca II H observations taken by Hinode/Solar Optical Telescope. Making use of a semi-automated detection algorithm, we self-consistently track and measure the properties of 519 spicules over different regions. We find clear evidence of two types of spicules. Type I spicules show a rise and fall and have typical lifetimes of 150-400 s and maximum ascending velocities of 15-40 km s-1, while type II spicules have shorter lifetimes of 50-150 s, faster velocities of 30-110 km s-1, and are not seen to fall down, but rather fade at around their maximum length. Type II spicules are the most common, seen in the quiet Sun and coronal holes. Type I spicules are seen mostly in active regions. There are regional differences between quiet-Sun and coronal hole spicules, likely attributable to the different field configurations. The properties of type II spicules are consistent with published results of rapid blueshifted events (RBEs), supporting the hypothesis that RBEs are their disk counterparts. For type I spicules we find the relations between their properties to be consistent with a magnetoacoustic shock wave driver, and with dynamic fibrils as their disk counterpart. The driver of type II spicules remains unclear from limb observations.

  3. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
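
    BHPMF itself is hierarchical and Bayesian, but the core gap-filling idea, a low-rank factorization fitted only on the observed entries, can be illustrated with plain (non-Bayesian) PMF in a few lines; this is a simplification and not the authors' implementation:

    import numpy as np

    def pmf_fill(X, rank=3, lam=0.1, lr=0.01, epochs=2000, seed=0):
        """Fill NaNs in a species x trait matrix by L2-regularized matrix
        factorization fitted on observed entries only (plain PMF sketch)."""
        rng = np.random.default_rng(seed)
        obs = ~np.isnan(X)
        U = 0.1 * rng.standard_normal((X.shape[0], rank))
        V = 0.1 * rng.standard_normal((X.shape[1], rank))
        for _ in range(epochs):
            R = np.where(obs, U @ V.T - X, 0.0)   # residuals on observed cells
            U -= lr * (R @ V + lam * U)
            V -= lr * (R.T @ U + lam * V)
        return np.where(obs, X, U @ V.T)          # keep observations, impute gaps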

  4. Quantifying economic fluctuations

    NASA Astrophysics Data System (ADS)

    Stanley, H. Eugene; Nunes Amaral, Luis A.; Gabaix, Xavier; Gopikrishnan, Parameswaran; Plerou, Vasiliki

    2001-12-01

    This manuscript is a brief summary of a talk designed to address the question of whether two of the pillars of the field of phase transitions and critical phenomena (scale invariance and universality) can be useful in guiding research on interpreting empirical data on economic fluctuations. Using this conceptual framework as a guide, we empirically quantify the relation between trading activity (measured by the number of transactions N) and the price change G(t) for a given stock, over a time interval [t, t+Δt]. We relate the time-dependent standard deviation of price changes (the volatility) to two microscopic quantities: the number of transactions N(t) in Δt and the variance W2(t) of the price changes for all transactions in Δt. We find that the long-ranged volatility correlations are largely due to those of N. We then argue that the tail-exponent of the distribution of N is insufficient to account for the tail-exponent of P{G > x}. Since N and W display only weak inter-dependency, our results show that the fat tails of the distribution P{G > x} arise from W. Finally, we review recent work on quantifying collective behavior among stocks by applying the conceptual framework of random matrix theory (RMT). RMT makes predictions for “universal” properties that do not depend on the interactions between the elements comprising the system, and deviations from RMT provide clues regarding system-specific properties. We compare the statistics of the cross-correlation matrix C (whose elements Cij are the correlation coefficients of price fluctuations of stock i and j) against a random matrix having the same symmetry properties. It is found that RMT methods can distinguish random and non-random parts of C. The non-random part of C, which deviates from RMT results, provides information regarding genuine collective behavior among stocks. We also discuss results that are reminiscent of phase transitions in spin systems, where the divergent behavior of the response function at the critical point (zero magnetic field) leads to large fluctuations, and we discuss a curious “symmetry breaking”, a feature qualitatively identical to the behavior of the probability density of the magnetization for fixed values of the inverse temperature.
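
    The RMT comparison described above reduces, in its simplest form, to checking which eigenvalues of the cross-correlation matrix fall outside the Marchenko-Pastur band expected for purely random correlations. A small illustrative sketch on toy returns (not market data):

    import numpy as np

    rng = np.random.default_rng(2)
    T, N = 1000, 100                        # time points, stocks (toy sizes)
    returns = rng.standard_normal((T, N))   # replace with normalized price changes

    C = np.corrcoef(returns, rowvar=False)  # N x N cross-correlation matrix
    eigvals = np.linalg.eigvalsh(C)

    q = N / T                               # inverse of Q = T/N
    lam_minus = (1 - np.sqrt(q)) ** 2       # Marchenko-Pastur band edges
    lam_plus = (1 + np.sqrt(q)) ** 2
    deviating = eigvals[eigvals > lam_plus] # candidates for genuine collective modes
    print(lam_minus, lam_plus, deviating)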

  5. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

  6. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best

  7. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  8. Quantifying concordance in cosmology

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Grandis, Sebastian; Amara, Adam; Refregier, Alexandre

    2016-05-01

    Quantifying the concordance between different cosmological experiments is important for testing the validity of theoretical models and systematics in the observations. In earlier work, we thus proposed the Surprise, a concordance measure derived from the relative entropy between posterior distributions. We revisit the properties of the Surprise and describe how it provides a general, versatile, and robust measure for the agreement between data sets. We also compare it to other measures of concordance that have been proposed for cosmology. As an application, we extend our earlier analysis and use the Surprise to quantify the agreement between WMAP 9, Planck 13, and Planck 15 constraints on the ΛCDM model. Using a principal component analysis in parameter space, we find that the large Surprise between WMAP 9 and Planck 13 (S = 17.6 bits, implying a deviation from consistency at 99.8% confidence) is due to a shift along a direction that is dominated by the amplitude of the power spectrum. The Planck 15 constraints deviate from the Planck 13 results (S = 56.3 bits), primarily due to a shift in the same direction. The Surprise between WMAP and Planck consequently disappears when moving to Planck 15 (S = -5.1 bits). This means that, unlike Planck 13, Planck 15 is not in tension with WMAP 9. These results illustrate the advantages of the relative entropy and the Surprise for quantifying the disagreement between cosmological experiments and more generally as an information metric for cosmology.
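
    For (approximately) Gaussian posteriors the relative entropy that underlies the Surprise has a closed form; the sketch below evaluates it in bits for two toy two-parameter constraints. The Surprise additionally compares this observed value with its expectation, a step not reproduced here, and all numbers are illustrative:

    import numpy as np

    def rel_entropy_bits(mu1, cov1, mu2, cov2):
        """Relative entropy D(p2||p1), in bits, between Gaussian posteriors
        p1 = N(mu1, cov1) and p2 = N(mu2, cov2)."""
        mu1, mu2 = np.asarray(mu1, float), np.asarray(mu2, float)
        cov1, cov2 = np.asarray(cov1, float), np.asarray(cov2, float)
        d = len(mu1)
        diff = mu2 - mu1
        inv1 = np.linalg.inv(cov1)
        nats = 0.5 * (np.trace(inv1 @ cov2) + diff @ inv1 @ diff - d
                      + np.log(np.linalg.det(cov1) / np.linalg.det(cov2)))
        return nats / np.log(2.0)

    # Toy 2-parameter posteriors (means and covariances are made up).
    print(rel_entropy_bits([0.80, 0.96], [[4e-4, 0.0], [0.0, 1e-4]],
                           [0.83, 0.965], [[1e-4, 0.0], [0.0, 5e-5]]))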

  9. Two heteronuclear dipolar results at the price of one: Quantifying Na/P contacts in phosphosilicate glasses and biomimetic hydroxy-apatite

    NASA Astrophysics Data System (ADS)

    Stevensson, Baltzar; Mathew, Renny; Yu, Yang; Edén, Mattias

    2015-02-01

    The analysis of S{I} recoupling experiments applied to amorphous solids yields a heteronuclear second moment M2(S-I) that represents the effective through-space dipolar interaction between the detected S spins and the neighboring I-spin species. We show that both M2(S-I) and M2(I-S) values are readily accessible from a sole S{I} or I{S} experiment, which may involve either S or I detection, and is naturally selected as the most favorable option under the given experimental conditions. For the common case where I has half-integer spin, an I{S} REDOR implementation is preferred to the S{I} REAPDOR counterpart. We verify the procedure by 23Na{31P} REDOR and 31P{23Na} REAPDOR NMR applied to Na2O-CaO-SiO2-P2O5 glasses and biomimetic hydroxyapatite, where the M2(P-Na) values directly determined by REAPDOR agree very well with those derived from the corresponding M2(Na-P) results measured by REDOR. Moreover, we show that dipolar second moments are readily extracted from the REAPDOR NMR protocol by a straightforward numerical fitting of the initial dephasing data, in direct analogy with the well-established procedure to determine M2(S-I) values from REDOR NMR experiments applied to amorphous materials; this avoids the problems with time-consuming numerically exact simulations whose accuracy is limited for describing the dynamics of a priori unknown multi-spin systems in disordered structures.
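
    Fitting the initial dephasing is often done with the short-time parabolic approximation ΔS/S0 ≈ (4/3π²) M2 (N Tr)²; the sketch below fits that form with scipy on toy data. The prefactor and its range of validity are an assumption to be checked against the REDOR literature, and the units depend on the convention used for M2:

    import numpy as np
    from scipy.optimize import curve_fit

    def redor_initial(ntr, m2):
        """Parabolic initial-regime dephasing: dS/S0 = (4/(3*pi**2)) * M2 * (N*Tr)**2."""
        return 4.0 / (3.0 * np.pi ** 2) * m2 * ntr ** 2

    # Toy dephasing curve: N*Tr in seconds, dS/S0 dimensionless (illustrative).
    ntr = np.array([0.2e-3, 0.4e-3, 0.6e-3, 0.8e-3, 1.0e-3])
    ds_s0 = np.array([0.010, 0.041, 0.093, 0.162, 0.255])

    (m2_fit,), _ = curve_fit(redor_initial, ntr, ds_s0, p0=[1e6])
    print(f"M2 ~ {m2_fit:.3e} s^-2")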

  10. Reanalysis of mGWAS results and in vitro validation show that lactate dehydrogenase interacts with branched-chain amino acid metabolism.

    PubMed

    Heemskerk, Mattijs M; van Harmelen, Vanessa Ja; van Dijk, Ko Willems; van Klinken, Jan Bert

    2016-01-01

    The assignment of causative genes to noncoding variants identified in genome-wide association studies (GWASs) is challenging. We show how combination of knowledge from gene and pathway databases and chromatin interaction data leads to reinterpretation of published quantitative trait loci for blood metabolites. We describe a previously unidentified link between the rs2403254 locus, which is associated with the ratio of 3-methyl-2-oxobutanoate and alpha-hydroxyisovalerate levels, and the distal LDHA gene. We confirmed that lactate dehydrogenase can catalyze the conversion between these metabolites in vitro, suggesting that it has a role in branched-chain amino acid metabolism. Examining datasets from the ENCODE project we found evidence that the locus and LDHA promoter physically interact, showing that LDHA expression is likely under control of distal regulatory elements. Importantly, this discovery demonstrates that bioinformatic workflows for data integration can have a vital role in the interpretation of GWAS results. PMID:26014429

  11. Recognition Confusions among Quantifiers

    ERIC Educational Resources Information Center

    Holyoak, Keith J.; Glass, Arnold L.

    1978-01-01

    Subjects listened to a story containing sentences with five quantifiers (all, many, some, a few, and none) and were tested to determine recognition of quantifiers. The degree of confusion between any two quantifiers declined monotonically with the separation of the two terms in a linear order. (SW)

  12. Streamlined system for purifying and quantifying a diverse library of compounds and the effect of compound concentration measurements on the accurate interpretation of biological assay results.

    PubMed

    Popa-Burke, Ioana G; Issakova, Olga; Arroway, James D; Bernasconi, Paul; Chen, Min; Coudurier, Louis; Galasinski, Scott; Jadhav, Ajit P; Janzen, William P; Lagasca, Dennis; Liu, Darren; Lewis, Roderic S; Mohney, Robert P; Sepetov, Nikolai; Sparkman, Darren A; Hodge, C Nicholas

    2004-12-15

    As part of an overall systems approach to generating highly accurate screening data across large numbers of compounds and biological targets, we have developed and implemented streamlined methods for purifying and quantitating compounds at various stages of the screening process, coupled with automated "traditional" storage methods (DMSO, -20 degrees C). Specifically, all of the compounds in our druglike library are purified by LC/MS/UV and are then controlled for identity and concentration in their respective DMSO stock solutions by chemiluminescent nitrogen detection (CLND)/evaporative light scattering detection (ELSD) and MS/UV. In addition, the compound-buffer solutions used in the various biological assays are quantitated by LC/UV/CLND to determine the concentration of compound actually present during screening. Our results show that LC/UV/CLND/ELSD/MS is a widely applicable method that can be used to purify, quantitate, and identify most small organic molecules from compound libraries. The LC/UV/CLND technique is a simple and sensitive method that can be easily and cost-effectively employed to rapidly determine the concentrations of even small amounts of any N-containing compound in aqueous solution. We present data to establish error limits for concentration determination that are well within the overall variability of the screening process. This study demonstrates that there is a significant difference between the predicted amount of soluble compound from stock DMSO solutions following dilution into assay buffer and the actual amount present in assay buffer solutions, even at the low concentrations employed for the assays. We also demonstrate that knowledge of the concentrations of compounds to which the biological target is exposed is critical for accurate potency determinations. Accurate potency values are in turn particularly important for drug discovery, for understanding structure-activity relationships, and for building useful empirical models of protein-ligand interactions. Our new understanding of relative solubility demonstrates that most, if not all, decisions that are made in early discovery are based upon missing or inaccurate information. Finally, we demonstrate that careful control of compound handling and concentration, coupled with accurate assay methods, allows the use of both positive and negative data in analyzing screening data sets for structure-activity relationships that determine potency and selectivity. PMID:15595870

  13. Analysis of conservative tracer measurement results using the Frechet distribution at planted horizontal subsurface flow constructed wetlands filled with coarse gravel and showing the effect of clogging processes.

    PubMed

    Dittrich, Ernő; Klincsik, Mihály

    2015-11-01

    A mathematical process, developed in Maple environment, has been successful in decreasing the error of measurement results and in the precise calculation of the moments of corrected tracer functions. It was proved that with this process, the measured tracer results of horizontal subsurface flow constructed wetlands filled with coarse gravel (HSFCW-C) can be fitted more accurately than with the conventionally used distribution functions (Gaussian, Lognormal, Fick (Inverse Gaussian) and Gamma). This statement is true only for the planted HSFCW-Cs. The analysis of unplanted HSFCW-Cs needs more research. The result of the analysis shows that the conventional solutions (completely stirred series tank reactor (CSTR) model and convection-dispersion transport (CDT) model) cannot describe these types of transport processes with sufficient accuracy. These outcomes can help in developing better process descriptions of very difficult transport processes in HSFCW-Cs. Furthermore, a new mathematical process can be developed for the calculation of real hydraulic residence time (HRT) and dispersion coefficient values. The presented method can be generalized to other kinds of hydraulic environments. PMID:26126688
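
    The Frechet distribution is available in scipy as invweibull, so fitting a corrected residence-time distribution and recovering its moments can be sketched as follows (toy data; the mean and variance are finite only when the fitted shape parameter exceeds 1 and 2, respectively):

    from scipy import stats

    # Toy tracer residence times (hours); replace with the corrected RTD samples.
    times = stats.invweibull.rvs(c=4.0, loc=0.0, scale=30.0, size=500,
                                 random_state=3)

    c, loc, scale = stats.invweibull.fit(times, floc=0.0)  # Frechet fit, loc fixed at 0
    mean_hrt = stats.invweibull.mean(c, loc=loc, scale=scale)
    var_hrt = stats.invweibull.var(c, loc=loc, scale=scale)
    print(c, scale, mean_hrt, var_hrt)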

  14. Simple instruments used in monitoring ionospheric perturbations and some observational results showing the ionospheric responses to the perturbations mainly from the lower atmosphere

    NASA Astrophysics Data System (ADS)

    Xiao, Zuo; Hao, Yongqiang; Zhang, Donghe; Xiao, Sai-Guan; Huang, Weiquan

    Ionospheric disturbances such as SIDs and acoustic gravity waves at different scales are well-known and commonly discussed topics. Some simple ground equipment was designed and used for continuously monitoring the effects of these disturbances, especially SWF and SFD. Besides SIDs, the records also clearly reflect acoustic gravity waves at different scales and Spread-F, and these data are an important supplement to traditional ionosonde records. This is of significance for understanding the physical essentials of ionospheric disturbances and for applications in SID warning. In this paper, the design of the instruments is given and results are discussed in detail. Some case studies are introduced as examples, which show very clearly not only the immediate effects of solar flares, but also ionospheric responses to large-scale gravity waves from the lower atmosphere, such as typhoons, great earthquakes and volcanic eruptions. In particular, the results showed that acoustic gravity waves play a significant role in seeding ionospheric Spread-F. These examples give evidence that lower atmospheric activities strongly influence the ionosphere.

  15. How often do German children and adolescents show signs of common mental health problems? Results from different methodological approaches – a cross-sectional study

    PubMed Central

    2014-01-01

    Background Child and adolescent mental health problems are ubiquitous and burdensome. Their impact on functional disability, the high rates of accompanying medical illnesses and the potential to last until adulthood make them a major public health issue. While methodological factors cause variability of the results from epidemiological studies, there is a lack of prevalence rates of mental health problems in children and adolescents according to ICD-10 criteria from nationally representative samples. International findings suggest only a small proportion of children with function-impairing mental health problems receive treatment, but information about the health care situation of children and adolescents is scarce. The aim of this epidemiological study was a) to classify symptoms of common mental health problems according to ICD-10 criteria in order to compare the statistical and clinical case definition strategies using a single set of data and b) to report ICD-10 codes from health insurance claims data. Methods a) Based on a clinical expert rating, questionnaire items were mapped onto ICD-10 criteria; data from the Mental Health Module (BELLA study) were analyzed for relevant ICD-10 and cut-off criteria; b) Claims data were analyzed for relevant ICD-10 codes. Results According to parent report, 7.5% (n = 208) met the ICD-10 criteria of a mild depressive episode and 11% (n = 305) showed symptoms of depression according to cut-off score; anxiety is reported in 5.6% (n = 156) and 11.6% (n = 323), conduct disorder in 15.2% (n = 373) and 14.6% (n = 357). Self-reported symptoms in 11- to 17-year-olds resulted in 15% (n = 279) reporting signs of a mild depression according to ICD-10 criteria (vs. 16.7% (n = 307) based on cut-off) and 10.9% (n = 201) reported symptoms of anxiety (vs. 15.4% (n = 283)). Results from routine data identify 0.9% (n = 1,196) with a depression diagnosis, 3.1% (n = 6,729) with anxiety and 1.4% (n = 3,100) with conduct disorder in outpatient health care. Conclusions Statistical and clinical case definition strategies show moderate concordance in depression and conduct disorder in a German national sample. Comparatively, lower rates of children and adolescents with diagnosed mental health problems in the outpatient health care setting support the assumption that only a small proportion of children and adolescents in need of treatment receive it. PMID:24597565

  16. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects.

    PubMed

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás; Koskela, Antti; Tuukkanen, Juha; Ohlsson, Claes; Rozell, Björn; Eriksson, Maria

    2015-08-01

    Hutchinson-Gilford progeria syndrome (HGPS) is a rare premature aging disorder that is most commonly caused by a de novo point mutation in exon 11 of the LMNA gene, c.1824C>T, which results in an increased production of a truncated form of lamin A known as progerin. In this study, we used a mouse model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant progerin splicing give hope to patients who are affected by HGPS. PMID:25877214

  17. Magnetic Sphincter Augmentation for Gastroesophageal Reflux at 5 Years: Final Results of a Pilot Study Show Long-Term Acid Reduction and Symptom Improvement

    PubMed Central

    Saino, Greta; Bonavina, Luigi; Lipham, John C.; Dunn, Daniel

    2015-01-01

    Abstract Background: As previously reported, the magnetic sphincter augmentation device (MSAD) preserves gastric anatomy and results in less severe side effects than traditional antireflux surgery. The final 5-year results of a pilot study are reported here. Patients and Methods: A prospective, multicenter study evaluated safety and efficacy of the MSAD for 5 years. Prior to MSAD placement, patients had abnormal esophageal acid and symptoms poorly controlled by proton pump inhibitors (PPIs). Patients served as their own control, which allowed comparison between baseline and postoperative measurements to determine individual treatment effect. At 5 years, gastroesophageal reflux disease (GERD)-Health Related Quality of Life (HRQL) questionnaire score, esophageal pH, PPI use, and complications were evaluated. Results: Between February 2007 and October 2008, 44 patients (26 males) had an MSAD implanted by laparoscopy, and 33 patients were followed up at 5 years. Mean total percentage of time with pH <4 was 11.9% at baseline and 4.6% at 5 years (P < .001), with 85% of patients achieving pH normalization or at least a 50% reduction. Mean total GERD-HRQL score improved significantly from 25.7 to 2.9 (P < .001) when comparing baseline and 5 years, and 93.9% of patients had at least a 50% reduction in total score compared with baseline. Complete discontinuation of PPIs was achieved by 87.8% of patients. No complications occurred in the long term, including no device erosions or migrations at any point. Conclusions: Based on long-term reduction in esophageal acid, symptom improvement, and no late complications, this study shows the relative safety and efficacy of magnetic sphincter augmentation for GERD. PMID:26437027

  18. Thermosensory reversal effect quantified.

    PubMed

    Bergmann Tiest, Wouter M; Kappers, Astrid M L

    2008-01-01

    At room temperature, some materials feel colder than others due to differences in thermal conductivity, heat capacity and geometry. When the ambient temperature is well above skin temperature, the roles of 'cold' and 'warm' materials are reversed. In this paper, this effect is quantified by measuring discrimination thresholds for subjective coldness at different ambient temperatures using stimuli of different thicknesses. The reversal point was found to be at 34 degrees C, somewhat above skin temperature. At this reversal point, discrimination is quite impossible. At room temperature, subjects were able to discriminate between stimuli of different thickness based on subjective coldness, showing that the sense of touch, unlike vision, can penetrate solid objects. Furthermore, somewhat surprisingly, at ambient temperatures well below normal room temperature, discrimination is worse than at room temperature. PMID:17306203

  19. Rapamycin and chloroquine: the in vitro and in vivo effects of autophagy-modifying drugs show promising results in valosin containing protein multisystem proteinopathy.

    PubMed

    Nalbandian, Angèle; Llewellyn, Katrina J; Nguyen, Christopher; Yazdi, Puya G; Kimonis, Virginia E

    2015-01-01

    Mutations in the valosin containing protein (VCP) gene cause hereditary Inclusion body myopathy (hIBM) associated with Paget disease of bone (PDB), frontotemporal dementia (FTD), more recently termed multisystem proteinopathy (MSP). Affected individuals exhibit scapular winging and die from progressive muscle weakness, and cardiac and respiratory failure, typically in their 40s to 50s. Histologically, patients show the presence of rimmed vacuoles and TAR DNA-binding protein 43 (TDP-43)-positive large ubiquitinated inclusion bodies in the muscles. We have generated a VCPR155H/+ mouse model which recapitulates the disease phenotype and impaired autophagy typically observed in patients with VCP disease. Autophagy-modifying agents, such as rapamycin and chloroquine, at pharmacological doses have previously shown to alter the autophagic flux. Herein, we report results of administration of rapamycin, a specific inhibitor of the mechanistic target of rapamycin (mTOR) signaling pathway, and chloroquine, a lysosomal inhibitor which reverses autophagy by accumulating in lysosomes, responsible for blocking autophagy in 20-month old VCPR155H/+ mice. Rapamycin-treated mice demonstrated significant improvement in muscle performance, quadriceps histological analysis, and rescue of ubiquitin, and TDP-43 pathology and defective autophagy as indicated by decreased protein expression levels of LC3-I/II, p62/SQSTM1, optineurin and inhibiting the mTORC1 substrates. Conversely, chloroquine-treated VCPR155H/+ mice revealed progressive muscle weakness, cytoplasmic accumulation of TDP-43, ubiquitin-positive inclusion bodies and increased LC3-I/II, p62/SQSTM1, and optineurin expression levels. Our in vitro patient myoblasts studies treated with rapamycin demonstrated an overall improvement in the autophagy markers. Targeting the mTOR pathway ameliorates an increasing list of disorders, and these findings suggest that VCP disease and related neurodegenerative multisystem proteinopathies can now be included as disorders that can potentially be ameliorated by rapalogs. PMID:25884947

  20. Prospects of an alternative treatment against Trypanosoma cruzi based on abietic acid derivatives show promising results in Balb/c mouse model.

    PubMed

    Olmo, F; Guardia, J J; Marin, C; Messouri, I; Rosales, M J; Urbanová, K; Chayboun, I; Chahboun, R; Alvarez-Manzaneda, E J; Sánchez-Moreno, M

    2015-01-01

    Chagas disease, caused by the protozoan parasite Trypanosoma cruzi, is an example of extended parasitaemia with unmet medical needs. Current treatments based on old-featured benznidazole (Bz) and nifurtimox are expensive and do not fulfil the criteria of effectiveness and low toxicity expected of modern drugs. In this work, a group of abietic acid derivatives that are chemically stable and well characterised were introduced as candidates for the treatment of Chagas disease. In vitro and in vivo assays were performed in order to test the effectiveness of these compounds. Finally, those which showed the best activity underwent additional studies in order to elucidate the possible mechanism of action. In vitro results indicated that some compounds have low toxicity (i.e. >150 μM against Vero cells) combined with high efficacy (i.e. <20 μM) against some forms of T. cruzi. Further in vivo studies on mouse models confirmed the expectations of improvements in infected mice. In vivo tests on the acute phase gave parasitaemia inhibition values higher than those of Bz, and a remarkable decrease in the reactivation of parasitaemia was found in the chronic phase after immunosuppression of the mice treated with one of the compounds. The morphological alterations found in parasites treated with our derivatives confirmed extensive damage; energetic metabolism disturbances were also registered by (1)H NMR. The demonstrated in vivo activity and low toxicity, together with the use of affordable starting products and the lack of synthetic complexity, put these abietic acid derivatives in a remarkable position toward the development of an anti-Chagasic agent. PMID:25462275

  1. Quantifiable Lateral Flow Assay Test Strips

    NASA Technical Reports Server (NTRS)

    2003-01-01

    As easy to read as a home pregnancy test, three Quantifiable Lateral Flow Assay (QLFA) strips used to test water for E. coli show different results. The brightly glowing control line on the far right of each strip indicates that all three tests ran successfully. But the glowing test line on the middle left and bottom strips reveal their samples were contaminated with E. coli bacteria at two different concentrations. The color intensity correlates with concentration of contamination.

  2. SENS-IS, a 3D reconstituted epidermis based model for quantifying chemical sensitization potency: Reproducibility and predictivity results from an inter-laboratory study.

    PubMed

    Cottrez, Françoise; Boitel, Elodie; Ourlin, Jean-Claude; Peiffer, Jean-Luc; Fabre, Isabelle; Henaoui, Imène-Sarah; Mari, Bernard; Vallauri, Ambre; Paquet, Agnes; Barbry, Pascal; Auriault, Claude; Aeby, Pierre; Groux, Hervé

    2016-04-01

    The SENS-IS test protocol for the in vitro detection of sensitizers is based on a reconstructed human skin model (Episkin) as the test system and on the analysis of the expression of a large panel of genes. Its excellent performance was initially demonstrated with a limited set of test chemicals. Further studies (described here) were organized to confirm these preliminary results and to obtain a detailed statistical analysis of the predictive capacity of the assay. A ring-study was thus organized and performed within three laboratories, using a test set of 19 blind coded chemicals. Data analysis indicated that the assay is robust, easily transferable and offers high predictivity and excellent within- and between-laboratories reproducibility. To further evaluate the predictivity of the test protocol according to Cooper statistics a comprehensive test set of 150 chemicals was then analyzed. Again, data analysis confirmed the excellent capacity of the SENS-IS assay for predicting both hazard and potency characteristics, confirming that this assay should be considered as a serious alternative to the available in vivo sensitization tests. PMID:26795242

  3. Quantifying and Reducing the Uncertainties in Future Projections of Droughts and Heat Waves for North America that Result from the Diversity of Models in CMIP5

    NASA Astrophysics Data System (ADS)

    Herrera-Estrada, J. E.; Sheffield, J.

    2014-12-01

    There are many sources of uncertainty regarding the future projections of our climate, including the multiple possible Representative Concentration Pathways (RCPs), the variety of climate models used, and the initial and boundary conditions with which they are run. Moreover, it has been shown that the internal variability of the climate system can sometimes be of the same order of magnitude as the climate change signal or even larger for some variables. Nonetheless, in order to help inform stakeholders in water resources and agriculture in North America when developing adaptation strategies, particularly for extreme events such as droughts and heat waves, it is necessary to study the plausible range of changes that the region might experience during the 21st century. We aim to understand and reduce the uncertainties associated with this range of possible scenarios by focusing on the diversity of climate models involved in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Data output from various CMIP5 models is compared against near surface climate and land-surface hydrological data from the North American Land Data Assimilation System (NLDAS)-2 to evaluate how well each climate model represents the land-surface processes associated with droughts and heat waves during the overlapping historical period (1979-2005). These processes include the representation of precipitation and radiation and their partitioning at the land surface, land-atmosphere interactions, and the propagation of signals of these extreme events through the land surface. The ability of the CMIP5 models to reproduce these important physical processes for regions of North America is used to inform a multi-model ensemble in which models that represent the processes relevant to droughts and heat waves better are given more importance. Furthermore, the future projections are clustered to identify possible dependencies in behavior across models. The results indicate a wide range in performance for the historical runs with some models hampered by poor interannual variability in summer precipitation and near surface air temperature, whilst others partition too much precipitation into evapotranspiration with implications for drought and heat wave development.
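
    Giving "more importance" to models that better reproduce the NLDAS-2 reference can be illustrated with a simple skill weighting of the ensemble; the inverse-RMSE weights below are an illustrative choice, not the weighting scheme used in the study:

    import numpy as np

    def skill_weighted_mean(hist_runs, reference, future_runs, eps=1e-9):
        """hist_runs, future_runs: (n_models, n_time) arrays of model output;
        reference: (n_time,) observed/NLDAS-2 series for the historical period."""
        rmse = np.sqrt(((hist_runs - reference) ** 2).mean(axis=1))
        weights = 1.0 / (rmse + eps)         # better historical fit -> larger weight
        weights /= weights.sum()
        return weights, np.average(future_runs, axis=0, weights=weights)

    # weights, projection = skill_weighted_mean(cmip5_hist, nldas2, cmip5_future)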

  4. A high-density wireless underground sensor network (WUSN) to quantify hydro-ecological interactions for a UK floodplain; project background and initial results

    NASA Astrophysics Data System (ADS)

    Verhoef, A.; Choudhary, B.; Morris, P. J.; McCann, J.

    2012-04-01

    Floodplain meadows support some of the most diverse vegetation in the UK, and also perform key ecosystem services, such as flood storage and sediment retention. However, the UK now has less than 1500 ha of this unique habitat remaining. In order to conserve and better exploit the services provided by this grassland, an improved understanding of its functioning is essential. Vegetation functioning and species composition are known to be tightly correlated to the hydrological regime, and related temperature and nutrient regime, but the mechanisms controlling these relationships are not well established. The FUSE* project aims to investigate the spatiotemporal variability in vegetation functioning (e.g. photosynthesis and transpiration) and plant community composition in a floodplain meadow near Oxford, UK (Yarnton Mead), and their relationship to key soil physical variables (soil temperature and moisture content), soil nutrient levels and the water- and energy-balance. A distributed high density Wireless Underground Sensor Network (WUSN) is in the process of being established on Yarnton Mead. The majority, or ideally all, of the sensing and transmitting components will be installed below-ground because Yarnton Mead is a SSSI (Site of Special Scientific Interest, due to its unique plant community) and because occasionally sheep or cattle are grazing on it, and that could damage the nodes. This prerequisite has implications for the maximum spacing between UG nodes and their communications technologies; in terms of signal strength, path losses and requirements for battery life. The success of underground wireless communication is highly dependent on the soil type and water content. This floodplain environment is particularly challenging in this context because the soil contains a large amount of clay near the surface and is therefore less favourable to EM wave propagation than sandy soils. Furthermore, due to high relative saturation levels (as a result of high groundwater levels and occasional overland flooding) considerable path losses are expected. Finally, the long-term below-ground installation of the nodes means that batteries cannot be replaced easily, therefore energy conservation schemes are required to be deployed on the nodes. We present a brief overview of the project and initial findings of the approach we have adopted to address these wireless communication issues. This involves tests covering a range of transmission frequencies, antennae types, and node placements. *FUSE, Floodplain Underground SEnsors, funded by the UK Natural Environment Research Council, NE/I007288/1, start date 1-3-2011)

  5. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the

  7. Map Showing Earthquake Shaking and Tsunami Hazard in Guadeloupe and Dominica, as a Result of an M8.0 Earthquake on the Lesser Antilles Megathrust

    Earthquake shaking (onland) and tsunami (ocean) hazard in Guadeloupe and Dominica, as a result of an M8.0 earthquake on the Lesser Antilles megathrust adjacent to Guadeloupe. Colors onland represent scenario earthquake shaking intensities calculated in USGS ShakeMap software (Wald et al. 20...

  8. Mathematical modelling in Matlab of the experimental results shows the electrochemical potential difference - temperature of the WC coatings immersed in a NaCl solution

    NASA Astrophysics Data System (ADS)

    Benea, M. L.; Benea, O. D.

    2016-02-01

    The method used for assessing the corrosion behaviour of the WC coatings deposited by plasma spraying on a martensitic stainless steel substrate consists in measuring the electrochemical potential of the coating and, respectively, that of the substrate, immersed in a NaCl solution as the corrosive agent. Mathematical processing of the experimental results in Matlab allowed us to correlate the electrochemical potential of the coating with the solution temperature; the relationship is well described by curves whose equations were obtained by fourth-order interpolation.
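
    A minimal sketch of the fourth-order fit described above, using invented potential-temperature pairs (the original analysis was performed in Matlab; numpy's polyfit/polyval play the same role here):

```python
import numpy as np

# Placeholder measurements: solution temperature (deg C) and coating potential (V)
temperature = np.array([20, 30, 40, 50, 60, 70, 80], dtype=float)
potential   = np.array([-0.42, -0.45, -0.47, -0.50, -0.52, -0.55, -0.59])

# Fourth-order polynomial fit, analogous to "interpolation of order 4" in Matlab
coeffs = np.polyfit(temperature, potential, deg=4)
fitted = np.polyval(coeffs, temperature)

residual = potential - fitted
print("polynomial coefficients:", coeffs)
print("max absolute residual (V):", np.abs(residual).max())
```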

  9. Quantifying the adaptive cycle

    USGS Publications Warehouse

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems.

  10. Quantifying the Adaptive Cycle

    PubMed Central

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  11. Quantifying the Adaptive Cycle.

    PubMed

    Angeler, David G; Allen, Craig R; Garmestani, Ahjond S; Gunderson, Lance H; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994-2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  12. A second generation cervico-vaginal lavage device shows similar performance as its preceding version with respect to DNA yield and HPV DNA results

    PubMed Central

    2013-01-01

    Background Attendance rates of cervical screening programs can be increased by offering HPV self-sampling to non-attendees. Acceptability, DNA yield, lavage volumes and choice of hrHPV test can influence effectiveness of the self-sampling procedures and could therefore play a role in recruiting non-attendees. To increase user-friendliness, a frequently used lavage sampler was modified. In this study, we compared this second generation lavage device with the first generation device within similar birth cohorts. Methods Within a large self-sampling cohort-study among non-responders of the Dutch cervical screening program, a subset of 2,644 women received a second generation self-sampling lavage device, while 11,977 women, matched for age and ZIP-code, received the first generation model. The second generation device was different in shape, color, lavage volume, and packaging, in comparison to its first generation model. The Cochran’s test was used to compare both devices for hrHPV positivity rate and response rate. To correct for possible heterogeneity between age and ZIP codes in both groups the Breslow-Day test of homogeneity was used. A T-test was utilized to compare DNA yields of the obtained material in both groups. Results Median DNA yields were 90.4 μg/ml (95% CI 83.2-97.5) and 91.1 μg/ml (95% CI 77.8-104.4, p= 0.726) and hrHPV positivity rates were 8.2% and 6.9% (p= 0.419) per sample self-collected by the second - and the first generation of the device (p= 0.726), respectively. In addition, response rates were comparable for the two models (35.4% versus 34.4%, p= 0.654). Conclusions Replacing the first generation self-sampling device by an ergonomically improved, second generation device resulted in equal DNA yields, comparable hrHPV positivity rates and similar response rates. Therefore, it can be concluded that the clinical performance of the first and second generation models are similar. Moreover, participation of non-attendees in cervical cancer screening is probably not predominantly determined by the type of self-collection device. PMID:23639287

  13. Value of Fused 18F-Choline-PET/MRI to Evaluate Prostate Cancer Relapse in Patients Showing Biochemical Recurrence after EBRT: Preliminary Results

    PubMed Central

    Piccardo, Arnoldo; Paparo, Francesco; Picazzo, Riccardo; Naseri, Mehrdad; Ricci, Paolo; Marziano, Andrea; Bacigalupo, Lorenzo; Biscaldi, Ennio; Rollandi, Gian Andrea; Grillo-Ruggieri, Filippo; Farsad, Mohsen

    2014-01-01

    Purpose. We compared the accuracy of 18F-Choline-PET/MRI with that of multiparametric MRI (mMRI), 18F-Choline-PET/CT, 18F-Fluoride-PET/CT, and contrast-enhanced CT (CeCT) in detecting relapse in patients with suspected relapse of prostate cancer (PC) after external beam radiotherapy (EBRT). We assessed the association between standard uptake value (SUV) and apparent diffusion coefficient (ADC). Methods. We evaluated 21 patients with biochemical relapse after EBRT. Patients underwent 18F-Choline-PET/contrast-enhanced (Ce)CT, 18F-Fluoride-PET/CT, and mMRI. Imaging coregistration of PET and mMRI was performed. Results. 18F-Choline-PET/MRI was positive in 18/21 patients, with a detection rate (DR) of 86%. DRs of 18F-Choline-PET/CT, CeCT, and mMRI were 76%, 43%, and 81%, respectively. In terms of DR the only significant difference was between 18F-Choline-PET/MRI and CeCT. On lesion-based analysis, the accuracy of 18F-Choline-PET/MRI, 18F-Choline-PET/CT, CeCT, and mMRI was 99%, 95%, 70%, and 85%, respectively. Accuracy, sensitivity, and NPV of 18F-Choline-PET/MRI were significantly higher than those of both mMRI and CeCT. On whole-body assessment of bone metastases, the sensitivity of 18F-Choline-PET/CT and 18F-Fluoride-PET/CT was significantly higher than that of CeCT. Regarding local and lymph node relapse, we found a significant inverse correlation between ADC and SUV-max. Conclusion. 18F-Choline-PET/MRI is a promising technique in detecting PC relapse. PMID:24877053

  14. Catalysis: Quantifying charge transfer

    NASA Astrophysics Data System (ADS)

    James, Trevor E.; Campbell, Charles T.

    2016-02-01

    Improving the design of catalytic materials for clean energy production requires a better understanding of their electronic properties, which remains experimentally challenging. Researchers now quantify the number of electrons transferred from metal nanoparticles to an oxide support as a function of particle size.

  15. Resolution of quantifier scope ambiguities.

    PubMed

    Kurtzman, H S; MacDonald, M C

    1993-09-01

    Various processing principles have been suggested to be governing the resolution of quantifier scope ambiguities in sentences such as Every kid climbed a tree. This paper investigates structural principles, that is, those which refer to the syntactic or semantic positions of the quantified phrases. To test these principles, the preferred interpretations for three grammatical constructions were determined in a task in which participants made speeded judgments of whether a sentence following a doubly quantified sentence was a reasonable discourse continuation of the quantified sentence. The observed preferences cannot be explained by any single structural principle, but point instead to the interaction of several principles. Contrary to many proposals, there is little or no effect of a principle that assigns scope according to the linear order of the phrases. The interaction of principles suggests that alternative interpretations of the ambiguity may be initially considered in parallel, followed by selection of the single interpretation that best satisfies the principles. These results are discussed in relation to theories of ambiguity resolution at other levels of linguistic representation. PMID:8269698

  16. Quantifying substructure in galaxy clusters

    NASA Astrophysics Data System (ADS)

    Knebe, Alexander; Müller, Volker

    2000-02-01

    Substructure in galaxy clusters can be quantified with the robust Delta statistics (Dressler and Shectman 1988) which uses velocity kinematics and sky projected positions. We test its sensitivity using dissipationless numerical simulations of cluster formation. As in recent observations, about 30% of the simulated clusters show substructure, but the exact percentage depends on the chosen limit for defining substructure, and a better discriminator is the distribution function of the Delta statistics. The Dressler-Shectman statistics correlate well with other subcluster indicators, but with large scatter due to its sensitivity to small infalling groups and projection effects.
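
    For reference, a compact sketch of the Dressler-Shectman Delta statistic mentioned above, assuming projected positions and line-of-sight velocities are available as arrays; using sqrt(N) nearest neighbours is a common convention and an assumption here, not necessarily the paper's choice.

```python
import numpy as np
from scipy.spatial import cKDTree

def dressler_shectman_delta(x, y, v):
    """Delta statistic of Dressler & Shectman (1988): sum of local kinematic
    deviations delta_i over all cluster members (x, y, v are numpy arrays)."""
    n = len(v)
    n_nn = int(np.sqrt(n))                     # local group size (assumed convention)
    v_mean, sigma = v.mean(), v.std()
    tree = cKDTree(np.column_stack([x, y]))
    deltas = np.empty(n)
    for i in range(n):
        # nearest neighbours in projection, including the galaxy itself
        _, idx = tree.query([x[i], y[i]], k=n_nn + 1)
        v_loc = v[idx]
        delta_sq = ((n_nn + 1) / sigma**2) * (
            (v_loc.mean() - v_mean) ** 2 + (v_loc.std() - sigma) ** 2)
        deltas[i] = np.sqrt(delta_sq)
    return deltas.sum()

# Typical use: Delta much larger than its value for Monte Carlo shuffles of the
# velocities indicates substructure.
```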

  17. Normalized wavelet packets quantifiers for condition monitoring

    NASA Astrophysics Data System (ADS)

    Feng, Yanhui; Schlindwein, Fernando S.

    2009-04-01

    Normalized wavelet packets quantifiers are proposed and studied as a new tool for condition monitoring. The new quantifiers construct a complete quantitative time-frequency analysis: the Wavelet packets relative energy measures the normalized energy of the wavelet packets node; the Total wavelet packets entropy measures how the normalized energies of the wavelet packets nodes are distributed in the frequency domain; the Wavelet packets node entropy describes the uncertainty of the normalized coefficients of the wavelet packets node. Unlike the feature extraction methods directly using the amplitude of wavelet coefficients, the new quantifiers are derived from probability distributions and are more robust in diagnostic applications. By applying these quantifiers to Acoustic Emission signals from faulty bearings of rotating machines, our study shows that both localized defects and advanced contamination faults can be successfully detected and diagnosed if the appropriate quantifier is chosen. The Bayesian classifier is used to quantitatively analyse and evaluate the performance of the proposed quantifiers. We also show that reducing the Daubechies wavelet order or the length of the segment will deteriorate the performance of the quantifiers. A two-dimensional diagnostic scheme can also help to improve the diagnostic performance but the improvements are only significant when using lower wavelet orders.
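
    A minimal sketch of the three quantifiers described above, written with the PyWavelets library (not the authors' implementation); the wavelet family, decomposition level, and test signal are placeholders.

```python
import numpy as np
import pywt

def wavelet_packet_quantifiers(signal, wavelet="db4", level=3):
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, mode="symmetric", maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(node.data ** 2) for node in nodes])

    # Wavelet packets relative energy: normalized energy of each node
    rel_energy = energies / energies.sum()

    # Total wavelet packets entropy: spread of the normalized node energies
    total_entropy = -np.sum(rel_energy * np.log(rel_energy + 1e-12))

    # Wavelet packets node entropy: uncertainty of the normalized coefficients per node
    node_entropy = []
    for node in nodes:
        p = node.data ** 2 / np.sum(node.data ** 2)
        node_entropy.append(-np.sum(p * np.log(p + 1e-12)))
    return rel_energy, total_entropy, np.array(node_entropy)

# Placeholder signal standing in for an acoustic emission record
t = np.linspace(0, 1, 4096)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.random.randn(t.size)
rel_e, H_total, H_nodes = wavelet_packet_quantifiers(x)
```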

  18. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  19. A new index quantifying the precipitation extremes

    NASA Astrophysics Data System (ADS)

    Busuioc, Aristita; Baciu, Madalina; Stoica, Cerasela

    2015-04-01

    Events of extreme precipitation have a great impact on society. They are associated with flooding, erosion and landslides. Various indices have been proposed to quantify these extreme events, and they are mainly related to daily precipitation amounts, which are usually available for long periods in many places over the world. The climate signal related to changes in the characteristics of precipitation extremes differs across regions and depends on the season and on the index used to quantify the precipitation extremes. Climate model simulations and empirical evidence suggest that warmer climates, due to increased water vapour, lead to more intense precipitation events, even when the total annual precipitation is slightly reduced. It has been suggested that there is a shift in the nature of precipitation events towards more intense and less frequent rains, and that increases in heavy rains are expected to occur in most places, even when the mean precipitation is not increasing. This conclusion was also confirmed for the Romanian territory in a recent study, which showed a significant increasing trend in rain shower frequency in the warm season over the entire country, despite no significant changes in the seasonal amount and the daily extremes. The shower events counted in that paper refer to all convective rains, including torrential ones that deliver a high rainfall amount in a very short time. The problem is to find an appropriate index to quantify such events in terms of their highest intensity in order to extract the maximum climate signal. In the present paper, a new index is proposed to quantify the maximum precipitation intensity in an extreme precipitation event, which can be directly related to the torrential rain intensity. This index is tested at nine Romanian stations (representing various physical-geographical conditions) and is based on the continuous rainfall records derived from the graphical registrations (pluviograms) available at the National Meteorological Administration in Romania. These records contain the rainfall intensity (mm/minute) over the various intervals for which it remains constant. The maximum intensity of each continuous rain over the May-August interval has been calculated for each year. The corresponding time series over the 1951-2008 period have been analysed in terms of their long-term trends and shifts in the mean; the results have been compared with those resulting from other rainfall indices based on daily and hourly data computed over the same interval, such as: total rainfall amount, maximum daily amount, contribution of total hourly amounts exceeding 10 mm/day, contribution of daily amounts exceeding the 90th percentile, and the 90th, 99th and 99.9th percentiles of 1-hour data. The results show that the proposed index exhibits a coherent and stronger climate signal (significant increase) at all analysed stations compared with the other indices associated with precipitation extremes, which show either no significant change or a weaker signal. This finding indicates that the proposed index is the most appropriate for quantifying the climate change signal of the precipitation extremes. We consider that this index is more naturally connected to the maximum intensity of a real rainfall event.
The research presented in this study was funded by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) through the research project CLIMHYDEX, "Changes in climate extremes and associated impact in hydrological events in Romania", code PNII-ID-2011-2-0073 (http://climhydex.meteoromania.ro).
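
    A minimal sketch of how such an index could be computed once the pluviograms have been digitised, assuming each continuous rain event is stored as a list of (duration, constant intensity) segments; the event data and the simple linear trend test are illustrative, not the paper's processing chain.

```python
import numpy as np
from scipy.stats import linregress

def annual_max_event_intensity(events_by_year):
    """events_by_year: {year: [event, ...]}, where each event is a list of
    (duration_min, intensity_mm_per_min) segments from May-August pluviograms.
    Returns, per year, the maximum intensity reached in any single event."""
    years = sorted(events_by_year)
    index = []
    for year in years:
        event_maxima = [max(seg[1] for seg in event) for event in events_by_year[year]]
        index.append(max(event_maxima))
    return np.array(years), np.array(index)

# Placeholder data: a few years with one or two events each
events = {
    1951: [[(10, 0.2), (5, 0.8)], [(30, 0.1)]],
    1952: [[(12, 0.3), (3, 1.1)]],
    1953: [[(8, 0.5)], [(20, 0.9), (4, 1.4)]],
    1954: [[(15, 0.4), (6, 1.6)]],
}
years, idx = annual_max_event_intensity(events)
trend = linregress(years, idx)        # long-term trend of the proposed index
print(trend.slope, trend.pvalue)
```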

  20. Quantifying the Wave Driving of the Stratosphere

    NASA Technical Reports Server (NTRS)

    Newman, Paul A.; Nash, Eric R.

    1999-01-01

    The zonal mean eddy heat flux is directly proportional to the wave activity that propagates from the troposphere into the stratosphere. This quantity is a simple eddy diagnostic which is easily calculated from conventional meteorological analyses. Because this "wave driving" of the stratosphere has a strong impact on the stratospheric temperature, it is necessary to compare the impact of the flux with respect to stratospheric radiative changes caused by greenhouse gas changes. Hence, we must understand the precision and accuracy of the heat flux derived from our global meteorological analyses. Herein, we quantify the stratospheric heat flux using five different meteorological analyses, and show that there are 30% differences between these analyses during the disturbed conditions of the northern hemisphere winter. Such large differences result from the planetary differences in the stationary temperature and meridional wind fields. In contrast, planetary transient waves show excellent agreement amongst these five analyses, and this transient heat flux appears to have a long term downward trend.
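
    A minimal sketch of the eddy heat flux calculation described above, assuming gridded meridional wind and temperature fields on a (latitude, longitude) grid at one level and time; shapes and values are placeholders.

```python
import numpy as np

def zonal_mean_eddy_heat_flux(v, T):
    """v, T: arrays of shape (nlat, nlon) at one level and time.
    Returns [v'T'], the zonal mean of the product of deviations from the zonal mean."""
    v_zm = v.mean(axis=-1, keepdims=True)     # zonal mean [v]
    T_zm = T.mean(axis=-1, keepdims=True)     # zonal mean [T]
    v_prime = v - v_zm                        # eddy (deviation) components
    T_prime = T - T_zm
    return (v_prime * T_prime).mean(axis=-1)  # [v'T'] as a function of latitude

# Placeholder fields on a 73 x 144 grid (2.5 degree resolution)
rng = np.random.default_rng(0)
v = rng.standard_normal((73, 144))
T = 250 + 10 * rng.standard_normal((73, 144))
flux = zonal_mean_eddy_heat_flux(v, T)
```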

  1. Quantifying the nonclassicality of operations.

    PubMed

    Meznaric, Sebastian; Clark, Stephen R; Datta, Animesh

    2013-02-15

    Deep insight can be gained into the nature of nonclassical correlations by studying the quantum operations that create them. Motivated by this we propose a measure of nonclassicality of a quantum operation utilizing the relative entropy to quantify its commutativity with the completely dephasing operation. We show that our measure of nonclassicality is a sum of two independent contributions, the generating power--its ability to produce nonclassical states out of classical ones, and the distinguishing power--its usefulness to a classical observer for distinguishing between classical and nonclassical states. Each of these effects can be exploited individually in quantum protocols. We further show that our measure leads to an interpretation of quantum discord as the difference in superdense coding capacities between a quantum state and the best classical state when both are produced at a source that makes a classical error during transmission. PMID:25166357

  2. Quantifier Comprehension in Corticobasal Degeneration

    ERIC Educational Resources Information Center

    McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray

    2006-01-01

    In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…

  4. Quantifying Loopy Network Architectures

    PubMed Central

    Katifori, Eleni; Magnasco, Marcelo O.

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs. PMID:22701593

  5. Quantifying traffic exposure.

    PubMed

    Pratt, Gregory C; Parson, Kris; Shinoda, Naomi; Lindgren, Paula; Dunlap, Sara; Yawn, Barbara; Wollan, Peter; Johnson, Jean

    2014-01-01

    Living near traffic adversely affects health outcomes. Traffic exposure metrics include distance to high-traffic roads, traffic volume on nearby roads, traffic within buffer distances, measured pollutant concentrations, land-use regression estimates of pollution concentrations, and others. We used Geographic Information System software to explore a new approach using traffic count data and a kernel density calculation to generate a traffic density surface with a resolution of 50 m. The density value in each cell reflects all the traffic on all the roads within the distance specified in the kernel density algorithm. The effect of a given roadway on the raster cell value depends on the amount of traffic on the road segment, its distance from the raster cell, and the form of the algorithm. We used a Gaussian algorithm in which traffic influence became insignificant beyond 300 m. This metric integrates the deleterious effects of traffic rather than focusing on one pollutant. The density surface can be used to impute exposure at any point, and it can be used to quantify integrated exposure along a global positioning system route. The traffic density calculation compares favorably with other metrics for assessing traffic exposure and can be used in a variety of applications. PMID:24045427
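
    A minimal sketch of the kernel-density idea described above, assuming each road segment has been collapsed to a point with a traffic count; the 50 m cell size and ~300 m cutoff echo the text, but the kernel scale, coordinates, and counts are invented.

```python
import numpy as np

def traffic_density_surface(seg_x, seg_y, seg_aadt, xmin, xmax, ymin, ymax,
                            cell=50.0, sigma=100.0, cutoff=300.0):
    """Gaussian-weighted traffic density on a regular grid (metres).
    Each cell accumulates traffic from all segments within the cutoff distance."""
    xs = np.arange(xmin, xmax, cell)
    ys = np.arange(ymin, ymax, cell)
    gx, gy = np.meshgrid(xs, ys)
    density = np.zeros_like(gx)
    for x, y, aadt in zip(seg_x, seg_y, seg_aadt):
        d2 = (gx - x) ** 2 + (gy - y) ** 2
        w = np.exp(-d2 / (2.0 * sigma ** 2))
        w[d2 > cutoff ** 2] = 0.0            # influence treated as negligible beyond ~300 m
        density += aadt * w
    return xs, ys, density

# Placeholder road segments: coordinates (m) and annual average daily traffic counts
xs, ys, dens = traffic_density_surface(
    seg_x=[100, 800], seg_y=[200, 900], seg_aadt=[25000, 4000],
    xmin=0, xmax=1000, ymin=0, ymax=1000)
```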

  6. Quantifying T Lymphocyte Turnover

    PubMed Central

    De Boer, Rob J.; Perelson, Alan S.

    2013-01-01

    Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2′-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4+ and CD8+ T cell pools in mice and men. PMID:23313150

  7. Mountain torrents: Quantifying vulnerability and assessing uncertainties

    PubMed Central

    Totschnig, Reinhold; Fuchs, Sven

    2013-01-01

    Vulnerability assessment for elements at risk is an important component in the framework of risk assessment. The vulnerability of buildings affected by torrent processes can be quantified by vulnerability functions that express a mathematical relationship between the degree of loss of individual elements at risk and the intensity of the impacting process. Based on data from the Austrian Alps, we extended a vulnerability curve for residential buildings affected by fluvial sediment transport processes to other torrent processes and other building types. With respect to this goal to merge different data based on different processes and building types, several statistical tests were conducted. The calculation of vulnerability functions was based on a nonlinear regression approach applying cumulative distribution functions. The results suggest that there is no need to distinguish between different sediment-laden torrent processes when assessing vulnerability of residential buildings towards torrent processes. The final vulnerability functions were further validated with data from the Italian Alps and different vulnerability functions presented in the literature. This comparison showed the wider applicability of the derived vulnerability functions. The uncertainty inherent to regression functions was quantified by the calculation of confidence bands. The derived vulnerability functions may be applied within the framework of risk management for mountain hazards within the European Alps. The method is transferable to other mountain regions if the input data needed are available. PMID:27087696
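
    The vulnerability functions described above relate degree of loss to process intensity through a cumulative distribution function. A minimal sketch, assuming a Weibull CDF form and invented intensity-loss pairs (the paper's actual functional form, data, and confidence-band method differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(intensity, scale, shape):
    """Candidate vulnerability function: degree of loss in [0, 1] vs process intensity."""
    return 1.0 - np.exp(-(intensity / scale) ** shape)

# Invented observations: process intensity (e.g. deposit height in m) and degree of loss
intensity = np.array([0.2, 0.5, 0.8, 1.2, 1.8, 2.5, 3.0])
loss      = np.array([0.02, 0.08, 0.18, 0.35, 0.60, 0.85, 0.95])

params, cov = curve_fit(weibull_cdf, intensity, loss, p0=[1.5, 2.0])
stderr = np.sqrt(np.diag(cov))     # a crude route towards confidence bands on the fit
print("scale, shape:", params, "+/-", stderr)
```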

  8. Quantifying lateral tissue heterogeneities in hadron therapy

    SciTech Connect

    Pflugfelder, D.; Wilkens, J. J.; Szymanowski, H.; Oelfke, U.

    2007-04-15

    In radiotherapy with scanned particle beams, tissue heterogeneities lateral to the beam direction are problematic in two ways: they pose a challenge to dose calculation algorithms, and they lead to a high sensitivity to setup errors. In order to quantify and avoid these problems, a heterogeneity number H{sub i} as a method to quantify lateral tissue heterogeneities of single beam spot i is introduced. To evaluate this new concept, two kinds of potential errors were investigated for single beam spots: First, the dose calculation error has been obtained by comparing the dose distribution computed by a simple pencil beam algorithm to more accurate Monte Carlo simulations. The resulting error is clearly correlated with H{sub i}. Second, the analysis of the sensitivity to setup errors of single beam spots also showed a dependence on H{sub i}. From this data it is concluded that H{sub i} can be used as a criterion to assess the risks of a compromised delivered dose due to lateral tissue heterogeneities. Furthermore, a method how to incorporate this information into the inverse planning process for intensity modulated proton therapy is presented. By suppressing beam spots with a high value of H{sub i}, the unfavorable impact of lateral tissue heterogeneities can be reduced, leading to treatment plans which are more robust to dose calculation errors of the pencil beam algorithm. Additional possibilities to use the information of H{sub i} are outlined in the discussion.

  9. Quantifying lateral tissue heterogeneities in hadron therapy.

    PubMed

    Pflugfelder, D; Wilkens, J J; Szymanowski, H; Oelfke, U

    2007-04-01

    In radiotherapy with scanned particle beams, tissue heterogeneities lateral to the beam direction are problematic in two ways: they pose a challenge to dose calculation algorithms, and they lead to a high sensitivity to setup errors. In order to quantify and avoid these problems, a heterogeneity number H(i) as a method to quantify lateral tissue heterogeneities of single beam spot i is introduced. To evaluate this new concept, two kinds of potential errors were investigated for single beam spots: First, the dose calculation error has been obtained by comparing the dose distribution computed by a simple pencil beam algorithm to more accurate Monte Carlo simulations. The resulting error is clearly correlated with H(i). Second, the analysis of the sensitivity to setup errors of single beam spots also showed a dependence on H(i). From this data it is concluded that H(i) can be used as a criterion to assess the risks of a compromised delivered dose due to lateral tissue heterogeneities. Furthermore, a method how to incorporate this information into the inverse planning process for intensity modulated proton therapy is presented. By suppressing beam spots with a high value of H(i), the unfavorable impact of lateral tissue heterogeneities can be reduced, leading to treatment plans which are more robust to dose calculation errors of the pencil beam algorithm. Additional possibilities to use the information of H(i) are outlined in the discussion. PMID:17500481

  10. Quantifying the Arctic methane budget

    NASA Astrophysics Data System (ADS)

    Warwick, Nicola; Cain, Michelle; Pyle, John

    2014-05-01

    The Arctic is a major source of atmospheric methane, containing climate-sensitive emissions from natural wetlands and gas hydrates, as well as the fossil fuel industry. Both wetland and gas hydrate methane emissions from the Arctic may increase with increasing temperature, resulting in a positive feedback leading to enhancement of climate warming. It is important that these poorly-constrained sources are quantified by location and strength and their vulnerability to change be assessed. The MAMM project (Methane and other greenhouse gases in the Arctic: Measurements, process studies and Modelling') addresses these issues as part of the UK NERC Arctic Programme. A global chemistry transport model has been used, along with MAMM and other long term observations, to assess our understanding of the different source and sink terms in the Arctic methane budget. Simulations including methane coloured by source and latitude are used to distinguish between Arctic seasonal variability arising from transport and that arising from changes in Arctic sources and sinks. Methane isotopologue tracers provide a further constraint on modelled methane variability, distinguishing between isotopically light and heavy sources (e.g. wetlands and gas fields). We focus on quantifying the magnitude and seasonal variability of Arctic wetland emissions.

  11. Quantifying actin wave modulation on periodic topography

    NASA Astrophysics Data System (ADS)

    Guven, Can; Driscoll, Meghan; Sun, Xiaoyu; Parker, Joshua; Fourkas, John; Carlsson, Anders; Losert, Wolfgang

    2014-03-01

    Actin is the essential builder of the cell cytoskeleton, whose dynamics are responsible for generating the necessary forces for the formation of protrusions. By exposing amoeboid cells to periodic topographical cues, we show that actin can be directionally guided via inducing preferential polymerization waves. To quantify the dynamics of these actin waves and their interaction with the substrate, we modify a technique from computer vision called "optical flow." We obtain vectors that represent the apparent actin flow and cluster these vectors to obtain patches of newly polymerized actin, which represent actin waves. Using this technique, we compare experimental results, including speed distribution of waves and distance from the wave centroid to the closest ridge, with actin polymerization simulations. We hypothesize the modulation of the activity of nucleation promotion factors on ridges (elevated regions of the surface) as a potential mechanism for the wave-substrate coupling. Funded by NIH grant R01GM085574.
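
    A minimal sketch of the optical-flow step described above using OpenCV's dense Farneback method, with two synthetic grayscale frames standing in for consecutive actin images; the clustering of flow vectors into wave patches is reduced here to a simple magnitude threshold.

```python
import cv2
import numpy as np

def actin_flow(frame_prev, frame_next, mag_threshold=1.0):
    """Dense optical flow between two grayscale frames; returns the flow field
    and a mask of pixels with apparent actin motion above a threshold."""
    flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)
    wave_mask = magnitude > mag_threshold     # candidate newly polymerized patches
    return flow, wave_mask

# Placeholder frames standing in for consecutive fluorescence images
prev = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
nxt = np.roll(prev, 2, axis=1)                # fake rightward motion
flow, mask = actin_flow(prev, nxt)
print("mean apparent speed (px/frame):", np.linalg.norm(flow, axis=2).mean())
```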

  12. Terahertz spectroscopy for quantifying refined oil mixtures.

    PubMed

    Li, Yi-nan; Li, Jian; Zeng, Zhou-mo; Li, Jie; Tian, Zhen; Wang, Wei-kui

    2012-08-20

    In this paper, the absorption coefficient spectra of samples prepared as mixtures of gasoline and diesel in different proportions are obtained by terahertz time-domain spectroscopy. To quantify the components of refined oil mixtures, a method is proposed to evaluate the best frequency band for regression analysis. With the data in this frequency band, two-variable linear regression fitting is used to determine the volume fractions of gasoline and diesel in the mixture based on the Beer-Lambert law. The minimum regression R-squared is 0.99967, and the mean error of the fitted volume fraction of 97# gasoline is 4.3%. The results show that refined oil mixtures can be quantitatively analyzed through their absorption coefficient spectra at terahertz frequencies, which holds bright application prospects for the storage and transportation of refined oil. PMID:22907017
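
    A minimal sketch of the Beer-Lambert unmixing step, assuming the absorption coefficient spectra of pure gasoline, pure diesel, and the mixture are already restricted to the selected frequency band; the spectra are placeholders and a plain least-squares fit stands in for the paper's regression.

```python
import numpy as np

def unmix_two_components(a_mixture, a_gasoline, a_diesel):
    """Least-squares volume fractions assuming the mixture spectrum is a
    linear combination of the pure-component spectra (Beer-Lambert)."""
    A = np.column_stack([a_gasoline, a_diesel])
    fractions, residual, _, _ = np.linalg.lstsq(A, a_mixture, rcond=None)
    return fractions

# Placeholder absorption coefficient spectra over the chosen THz band
freqs = np.linspace(0.2, 1.2, 50)
a_gas = 1.0 + 0.5 * freqs
a_die = 2.0 + 0.1 * freqs
true_mix = 0.3 * a_gas + 0.7 * a_die
print(unmix_two_components(true_mix, a_gas, a_die))   # ~[0.3, 0.7]
```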

  13. Quantifying Information Flow between Two Chaotic Semiconductor Lasers Using Symbolic Transfer Entropy

    NASA Astrophysics Data System (ADS)

    Li, Nian-Qiang; Pan, Wei; Yan, Lian-Shan; Luo, Bin; Xu, Ming-Feng; Tang, Yi-Long

    2012-03-01

    Symbolic transfer entropy (STE) is employed to quantify the dominant direction of information flow between two chaotic-semiconductor-laser time series. The information flow in unidirectionally and bidirectionally coupled systems was analyzed systematically. Numerical results show that the dependence relationship can be revealed if there exists any coupling between two chaotic semiconductor lasers. More importantly, in both unsynchronized and good synchronization regimes, the STE can be used to quantify the direction of information flow between the lasers, although the former case leads to a better identification. The results thus establish STE as an effective tool for quantifying the direction of information flow between chaotic-laser-based systems.
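
    A compact sketch of symbolic transfer entropy for two scalar time series, using ordinal-pattern symbolization and a plug-in probability estimate; the embedding dimension, delay, and coupled test signals are assumptions, not the laser study's settings.

```python
import numpy as np
from collections import Counter

def ordinal_symbols(x, dim=3, delay=1):
    """Map each length-dim window of x to its ordinal (permutation) pattern."""
    n = len(x) - (dim - 1) * delay
    return [tuple(np.argsort(x[i:i + dim * delay:delay])) for i in range(n)]

def symbolic_transfer_entropy(source, target, dim=3, delay=1):
    """Plug-in estimate of STE(source -> target) in bits."""
    s = ordinal_symbols(source, dim, delay)
    t = ordinal_symbols(target, dim, delay)
    n = min(len(s), len(t)) - 1
    triples = Counter((t[i + 1], t[i], s[i]) for i in range(n))
    pairs_ts = Counter((t[i], s[i]) for i in range(n))
    pairs_tt = Counter((t[i + 1], t[i]) for i in range(n))
    singles = Counter(t[i] for i in range(n))
    ste = 0.0
    for (t_next, t_now, s_now), c in triples.items():
        p_joint = c / n                                     # p(t_{n+1}, t_n, s_n)
        p_cond_both = c / pairs_ts[(t_now, s_now)]          # p(t_{n+1} | t_n, s_n)
        p_cond_target = pairs_tt[(t_next, t_now)] / singles[t_now]  # p(t_{n+1} | t_n)
        ste += p_joint * np.log2(p_cond_both / p_cond_target)
    return ste

# Unidirectionally coupled placeholder signals: y lags x, so STE(x->y) should exceed STE(y->x)
rng = np.random.default_rng(1)
x = rng.standard_normal(5000)
y = np.roll(x, 1) + 0.3 * rng.standard_normal(5000)
print(symbolic_transfer_entropy(x, y), symbolic_transfer_entropy(y, x))
```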

  14. Methods to quantify intermittent exercises.

    PubMed

    Desgorces, François-Denis; Sénégas, Xavier; Garcia, Judith; Decker, Leslie; Noirez, Philippe

    2007-08-01

    The purpose of this study was to quantify intermittent training sessions using different types of exercise. Strength, sprint, and endurance sessions were performed until exhaustion. These sessions were quantified by the product of duration and heart rate (HR) (i.e., training impulse (TRIMP) and HR-zone methods), by the product of duration and rate of perceived exertion (RPE-based method), and by a new method, work endurance recovery (WER). The WER method aims to determine the level of exercise-induced physiological stress using the ratio of cumulated work to the endurance limit, which is combined with the Napierian logarithm of the work-to-recovery ratio. Each session's effects were assessed using blood lactate, delayed onset muscle soreness (DOMS), RPE, and HR. Because sessions were performed until exhaustion, it was assumed that each session would have a similar training load (TL) and there would be low interindividual variability. The TL quantifications obtained with the different methods were then compared. The endurance session induced the highest HR response (p < 0.001), the sprint session the highest blood lactate increase (p < 0.001), and the strength session the highest DOMS when compared with sprint (p = 0.007). TLs were similar after WER calculations, whereas the HR- and RPE-based methods showed differences between endurance and sprint (p < 0.001), and between endurance and strength TL (p < 0.001 and p < 0.01, respectively). The TLs from WER were correlated with those of the HR-based methods for endurance exercise, for which HR is known to accurately reflect exercise-induced physiological stress (r = 0.63 and r = 0.64, p < 0.05). In addition, the TL from WER presented low interindividual variability, yet a marked variability was observed in the TLs of the HR- and RPE-based methods. As opposed to the latter two methods, WER can quantify varied intermittent exercises and makes it possible to compare the athletes' TL. Furthermore, WER can also assist in comparing athlete responses to training programs. PMID:17622291

  15. Quantifying the quiet epidemic

    PubMed Central

    2014-01-01

    During the late 20th century numerical rating scales became central to the diagnosis of dementia and helped transform attitudes about its causes and prevalence. Concentrating largely on the development and use of the Blessed Dementia Scale, I argue that rating scales served professional ends during the 1960s and 1970s. They helped old age psychiatrists establish jurisdiction over conditions such as dementia and present their field as a vital component of the welfare state, where they argued that ‘reliable modes of diagnosis’ were vital to the allocation of resources. I show how these arguments appealed to politicians, funding bodies and patient groups, who agreed that dementia was a distinct disease and claimed research on its causes and prevention should be designated ‘top priority’. But I also show that worries about the replacement of clinical acumen with technical and depersonalized methods, which could conceivably be applied by anyone, led psychiatrists to stress that rating scales had their limits and could be used only by trained experts. PMID:25866448

  16. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491

  17. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow

  19. Quantifying collective effervescence

    PubMed Central

    Konvalinka, Ivana; Bulbulia, Joseph; Roepstorff, Andreas

    2011-01-01

    Collective rituals are ubiquitous and resilient features of all known human cultures. They are also functionally opaque, costly, and sometimes dangerous. Social scientists have speculated that collective rituals generate benefits in excess of their costs by reinforcing social bonding and group solidarity, yet quantitative evidence for these conjectures is scarce. Our recent study measured the physiological effects of a highly arousing Spanish fire-walking ritual, revealing shared patterns in heart-rate dynamics between participants and related spectators. We briefly describe our results, and consider their implications. PMID:22446541

  20. Quantifying Dictyostelium discoideum Aggregation

    NASA Astrophysics Data System (ADS)

    McCann, Colin; Kriebel, Paul; Parent, Carole; Losert, Wolfgang

    2008-03-01

    Upon nutrient deprivation, the social amoebae Dictyostelium discoideum enter a developmental program causing them to aggregate into multicellular organisms. During this process cells sense and secrete chemical signals, often moving in a head-to-tail fashion called a "stream" as they assemble into larger entities. We measure Dictyostelium speed, shape, and directionality, both inside and outside of streams, and develop methods to distinguish group dynamics from behavior of individual cells. We observe an overall increase in speed during aggregation and a decrease in speed fluctuations once a cell joins a stream. Initial results indicate that when cells are in close proximity the trailing cells migrate specifically toward the backs of leading cells.

  1. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  2. Quantifying the seismicity on Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Turcotte, Donald L.; Rundle, John B.

    2013-07-01

    We quantify the seismicity on the island of Taiwan using the frequency-magnitude statistics of earthquakes since 1900. A break in Gutenberg-Richter scaling for large earthquakes has been observed in global seismicity; this break is also observed in our Taiwan study. The seismic data from the Central Weather Bureau Seismic Network are in good agreement with the Gutenberg-Richter relation taking b ≈ 1 when M < 7. For large earthquakes, M ≥ 7, the seismic data fit Gutenberg-Richter scaling with b ≈ 1.5. If the Gutenberg-Richter scaling for M < 7 earthquakes is extrapolated to larger earthquakes, we would expect an M > 8 earthquake in the study region about every 25 yr. However, our analysis shows a lower frequency of occurrence of large earthquakes, so that the expected recurrence interval of M > 8 earthquakes is about 200 yr. The level of seismicity for smaller earthquakes on Taiwan is about 12 times greater than in Southern California, and the possibility of an M ≈ 9 earthquake north or south of Taiwan cannot be ruled out. In light of the Fukushima, Japan nuclear disaster, we also discuss the implications of our study for the three operating nuclear power plants on the coast of Taiwan.
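
    A worked illustration of the recurrence-interval arithmetic implied above, assuming annual rates of the form log10 N(>=M) = a - bM with the two branches forced to agree at the M = 7 break; the a-value is invented, so the printed numbers illustrate the effect of the b-value change rather than reproducing the study's 25-year and 200-year estimates.

```python
def recurrence_years(magnitude, a, b):
    """Mean recurrence interval (years) for events of magnitude >= `magnitude`,
    given annual-rate Gutenberg-Richter parameters: log10 N(>=M) = a - b*M."""
    return 1.0 / 10 ** (a - b * magnitude)

# Placeholder parameters: both branches agree at the break magnitude M = 7;
# the a-value of the small-earthquake branch is invented for illustration.
a_small, b_small = 6.0, 1.0
b_large = 1.5
a_large = a_small + (b_large - b_small) * 7.0   # continuity of the rate at M = 7

print(recurrence_years(8.0, a_small, b_small))  # extrapolating the M < 7 scaling
print(recurrence_years(8.0, a_large, b_large))  # using the observed M >= 7 scaling
```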

  3. Quantifying renewable groundwater stress with GRACE

    PubMed Central

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Reager, John T.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-01-01

    Abstract Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human‐dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE‐based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions. PMID:26900185

  4. Quantifying renewable groundwater stress with GRACE

    NASA Astrophysics Data System (ADS)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min-Hui; Reager, John T.; Famiglietti, James S.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-07-01

    Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions.
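
    A minimal sketch of the Renewable Groundwater Stress ratio described above, assuming use (a withdrawal statistic or a GRACE-derived depletion trend) and availability (mean annual recharge) are expressed in the same volumetric units; the aquifer values are invented, and the mapping of sign combinations to the four named regimes follows the paper and is not reproduced here.

```python
def renewable_groundwater_stress(use, availability):
    """Renewable Groundwater Stress ratio: groundwater use divided by availability
    (mean annual recharge). Units must match, e.g. km^3 per year."""
    return use / availability

# Invented aquifer examples (km^3/yr)
aquifers = {
    "aquifer_A": (4.0, 1.0),    # use exceeds recharge -> high stress ratio
    "aquifer_B": (0.5, 6.0),    # use well below recharge -> low stress ratio
}
for name, (use, avail) in aquifers.items():
    ratio = renewable_groundwater_stress(use, avail)
    print(f"{name}: RGS ratio = {ratio:.2f} (use sign {'+' if use > 0 else '-'}, "
          f"recharge sign {'+' if avail > 0 else '-'})")
```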

  5. Ultrasonography-guided core needle biopsy for the thyroid nodule: does the procedure hold any benefit for the diagnosis when fine-needle aspiration cytology analysis shows inconclusive results?

    PubMed Central

    Hahn, S Y; Han, B-K; Ko, E Y; Ko, E S

    2013-01-01

    Objective: We evaluated the diagnostic role of ultrasonography-guided core needle biopsy (CNB) according to ultrasonography features of thyroid nodules that had inconclusive ultrasonography-guided fine-needle aspiration (FNA) results. Methods: A total of 88 thyroid nodules in 88 patients who underwent ultrasonography-guided CNB because of previous inconclusive FNA results were evaluated. The patients were classified into three groups based on ultrasonography findings: Group A, which was suspicious for papillary thyroid carcinoma (PTC); Group B, which was suspicious for follicular (Hurthle cell) neoplasm; and Group C, which was suspicious for lymphoma. The final diagnoses of the thyroid nodules were determined by surgical confirmation or follow-up after ultrasonography-guided CNB. Results: Of the 88 nodules, the malignant rate was 49.1% in Group A, 12.0% in Group B and 90.0% in Group C. The rates of conclusive ultrasonography-guided CNB results after previous incomplete ultrasonography-guided FNA results were 96.2% in Group A, 64.0% in Group B and 90.0% in Group C (p=0.001). 12 cases with inconclusive ultrasonography-guided CNB results were finally diagnosed as 8 benign lesions, 3 PTCs and 1 lymphoma. The number of previous ultrasonography-guided FNA biopsies was not significantly different between the conclusive and the inconclusive result groups of ultrasonography-guided CNB (p=0.205). Conclusion: Ultrasonography-guided CNB has benefit for the diagnosis of thyroid nodules with inconclusive ultrasonography-guided FNA results. However, it is still not helpful for the differential diagnosis in 36% of nodules that are suspicious for follicular neoplasm seen on ultrasonography. Advances in knowledge: This study shows the diagnostic contribution of ultrasonography-guided CNB as an alternative to repeat ultrasonography-guided FNA or surgery. PMID:23564885

  6. Quantifying stressors among Iowa farmers.

    PubMed

    Freeman, S A; Schwab, C V; Jiang, Q

    2008-10-01

    In order to identify events/activities that are particularly stressful for farmers/ranchers, a farm stress survey based on the proportionate scaling method was mailed to a stratified random sample of 3000 Iowa farmers by the USDA National Agricultural Statistics Service. The participants were asked to compare 62 life events and farm activities to a marriage (assigned a baseline rating of 50), decide whether each was less stressful or more stressful, and then assign a stress rating between 1 and 100. As expected, the most stressful events were the death of a spouse or child. Other high-stress events were disabling injuries, foreclosure on a mortgage, divorce, machinery breakdown during harvest, and loss of crop to weather. Mean stress ratings varied by age, marital status, and type of farming enterprise. Farmers between the ages of 40-59 and 60-79 had the most items with high stress levels. Females had more high-stress items than males. Divorced farmers had fewer high-stress items than other respondents. Farmers whose primary focus was raising horses had more high-stress items than those with other farm types. Significant outcomes of this study go beyond the specific mean stress ratings of the events and activities. The results indicate that farm stressors can be quantified using the proportionate scaling method and that the impact of the stressor is based not just on the event but is also dependent on the characteristics of the farmer (e.g., age, gender, marital status, etc.). PMID:19044170

  7. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2014-07-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with the large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (Coefficient of Variation of 12% for standards, 4% for ambient samples), and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution and Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, road-side, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per air volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in southeast US. However, the greater heterogeneity in the intrinsic DTT activity (per PM mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.

  8. A Holographic Road Show.

    ERIC Educational Resources Information Center

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  9. A flow cytometric approach to quantify biofilms.

    PubMed

    Kerstens, Monique; Boulet, Gaëlle; Van Kerckhoven, Marian; Clais, Sofie; Lanckacker, Ellen; Delputte, Peter; Maes, Louis; Cos, Paul

    2015-07-01

    Since biofilms are important in many clinical, industrial, and environmental settings, reliable methods to quantify these sessile microbial populations are crucial. Most of the currently available techniques do not allow the enumeration of the viable cell fraction within the biofilm and are often time consuming. This paper proposes flow cytometry (FCM) using the single-stain viability dye TO-PRO(®)-3 iodide as a fast and precise alternative. Mature biofilms of Candida albicans and Escherichia coli were used to optimize biofilm removal and dissociation, as a single-cell suspension is needed for accurate FCM enumeration. To assess the feasibility of FCM quantification of biofilms, E. coli and C. albicans biofilms were analyzed using FCM and crystal violet staining at different time points. A combination of scraping and rinsing proved to be the most efficient technique for biofilm removal. Sonicating for 10 min eliminated the remaining aggregates, resulting in a single-cell suspension. Repeated FCM measurements of biofilm samples revealed a good intraday precision of approximately 5 %. FCM quantification and the crystal violet assay yielded similar biofilm growth curves for both microorganisms, confirming the applicability of our technique. These results show that FCM using TO-PRO(®)-3 iodide as a single-stain viability dye is a valid fast alternative for the quantification of viable cells in a biofilm. PMID:25948317

  10. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2015-01-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 15% for positive control, 4% for ambient samples) and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution & Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, roadside, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. The correlation may also suggest a mechanistic explanation (oxidative stress) for observed PM2.5 mass-health associations. The heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.

  11. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. Rhode Island's high school graduating class became…

  12. Obesity in show cats.

    PubMed

    Corbee, R J

    2014-12-01

    Obesity is an important disease with a high prevalence in cats. Because obesity is related to several other diseases, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain cat breeds has been suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, 268 cats of 22 different breeds were investigated by determining their body condition score (BCS) on a nine-point scale by inspection and palpation, at two different cat shows. Overall, 45.5% of the show cats had a BCS > 5, and 4.5% of the show cats had a BCS > 7. There were significant differences between breeds, which could be related to the breed standards. Most overweight and obese cats were in the neutered group. This warrants firm discussions with breeders and cat show judges to come to different interpretations of the standards in order to prevent overweight conditions in certain breeds from being the standard of beauty. Neutering predisposes cats to obesity and requires early nutritional intervention to prevent obese conditions. PMID:24612018

  13. Show What You Know

    ERIC Educational Resources Information Center

    Eccleston, Jeff

    2007-01-01

    Big things come in small packages. This saying came to the mind of the author after he created a simple math review activity for his fourth grade students. Though simple, it has proven to be extremely advantageous in reinforcing math concepts. He uses this activity, which he calls "Show What You Know," often. This activity provides the perfect…

  14. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  15. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  16. Quantifying coherence of Gaussian states

    NASA Astrophysics Data System (ADS)

    Xu, Jianwei

    2016-03-01

    Coherence arises from the superposition principle and plays a key role in quantum mechanics. Recently, Baumgratz et al. [T. Baumgratz, M. Cramer, and M. B. Plenio, Phys. Rev. Lett. 113, 140401 (2014), 10.1103/PhysRevLett.113.140401] established a rigorous framework for quantifying the coherence of finite-dimensional quantum states. In this work we provide a framework for quantifying the coherence of Gaussian states and explicitly give a coherence measure for Gaussian states based on the relative entropy.
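
    For reference, the finite-dimensional relative-entropy measure of Baumgratz et al. cited above takes the form below, where S is the von Neumann entropy and the diagonal state keeps only the diagonal elements in the reference basis; the paper's contribution is an analogous measure defined directly on Gaussian states:

        C_{\mathrm{rel.ent.}}(\rho) = S(\rho_{\mathrm{diag}}) - S(\rho),
        \qquad S(\rho) = -\operatorname{Tr}(\rho \log \rho)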

  17. Children’s developing intuitions about the truth conditions and implications of novel generics vs. quantified statements

    PubMed Central

    Brandone, Amanda C.; Gelman, Susan A; Hedglen, Jenna

    2014-01-01

    Generic statements express generalizations about categories and present a unique semantic profile that is distinct from quantified statements. This paper reports two studies examining the development of children’s intuitions about the semantics of generics and how they differ from statements quantified by all, most, and some. Results reveal that, like adults, preschoolers (1) recognize that generics have flexible truth conditions and are capable of representing a wide range of prevalence levels; and (2) interpret novel generics as having near-universal prevalence implications. Results further show that by age 4, children are beginning to differentiate the meaning of generics and quantified statements; however, even 7- to 11-year-olds are not adult-like in their intuitions about the meaning of most-quantified statements. Overall, these studies suggest that by preschool, children interpret generics in much the same way that adults do; however, mastery of the semantics of quantified statements follows a more protracted course. PMID:25297340

  18. Obesity in show dogs.

    PubMed

    Corbee, R J

    2012-08-11

    Obesity is an important disease with a growing incidence. Because obesity is related to several other diseases, and decreases life span, it is important to identify the population at risk. Several risk factors for obesity have been described in the literature. A higher incidence of obesity in certain breeds is often suggested. The aim of this study was to determine whether obesity occurs more often in certain breeds. The second aim was to relate the increased prevalence of obesity in certain breeds to the official standards of that breed. To this end, we investigated 1379 dogs of 128 different breeds by determining their body condition score (BCS). Overall, 18.6% of the show dogs had a BCS > 5, and 1.1% of the show dogs had a BCS > 7. There were significant differences between breeds, which could be correlated with the breed standards. This warrants firm discussions with breeders and judges in order to come to different interpretations of the standards to prevent overweight conditions from being the standard of beauty. PMID:22882163

  19. Not a "reality" show.

    PubMed

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336

  20. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
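
    To illustrate the pure Monte Carlo (PMC) idea described above — not the actual SIAR, PMC, or SIRS implementations — a minimal sketch draws candidate mixing fractions and source compositions and keeps those that reproduce the measured δ15N and δ18O within a tolerance. All source values, tolerances, and the three-source layout are hypothetical:

        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical sources: mean and s.d. of (d15N, d18O) per source.
        src_mean = np.array([[ 0.0,  -5.0],    # e.g. fertilizer
                             [10.0,   5.0],    # e.g. manure/septic
                             [ 3.0,  60.0]])   # e.g. atmospheric deposition
        src_sd = np.full_like(src_mean, 2.0)
        sample = np.array([5.0, 10.0])          # measured (d15N, d18O) of the water sample
        tol = 1.0                               # acceptance tolerance (per mil)

        accepted = []
        for _ in range(200_000):
            f = rng.dirichlet(np.ones(len(src_mean)))    # random mixing fractions, sum to 1
            comp = rng.normal(src_mean, src_sd)          # perturb source compositions
            if np.all(np.abs(f @ comp - sample) < tol):  # does the mixture match the sample?
                accepted.append(f)

        accepted = np.array(accepted)
        if accepted.size:
            print(len(accepted), "accepted; mean fractions:", accepted.mean(axis=0).round(2))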

  1. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  2. Quantifying uncertainty in stable isotope mixing models

    DOE PAGESBeta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  3. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition, as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.

  4. Quantifying spatiotemporal chaos in Rayleigh-Bénard convection.

    PubMed

    Karimi, A; Paul, M R

    2012-04-01

    Using large-scale parallel numerical simulations we explore spatiotemporal chaos in Rayleigh-Bénard convection in a cylindrical domain with experimentally relevant boundary conditions. We use the variation of the spectrum of Lyapunov exponents and the leading-order Lyapunov vector with system parameters to quantify states of high-dimensional chaos in fluid convection. We explore the relationship between the time dynamics of the spectrum of Lyapunov exponents and the pattern dynamics. For chaotic dynamics we find that all of the Lyapunov exponents are positively correlated with the leading-order Lyapunov exponent, and we quantify the details of their response to the dynamics of defects. The leading-order Lyapunov vector is used to identify topological features of the fluid patterns that contribute significantly to the chaotic dynamics. Our results show a transition from boundary-dominated dynamics to bulk-dominated dynamics as the system size is increased. The spectrum of Lyapunov exponents is used to compute the variation of the fractal dimension with system parameters to quantify how the underlying high-dimensional strange attractor accommodates a range of different chaotic dynamics. PMID:22680550
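
    The abstract does not state which dimension estimate is used; a standard choice when the Lyapunov spectrum is available is the Kaplan-Yorke (Lyapunov) dimension, where k is the largest index for which the partial sum of the ordered exponents remains non-negative:

        D_{KY} = k + \frac{\sum_{i=1}^{k} \lambda_i}{\lvert \lambda_{k+1} \rvert},
        \qquad \lambda_1 \ge \lambda_2 \ge \dots, \quad \sum_{i=1}^{k} \lambda_i \ge 0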

  5. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
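
    A minimal sketch of the Bayesian side of such an analysis, for a purely hypothetical system (component A in series with a parallel pair B and C) and made-up go/no-go data; the paper's actual methods, models, and data differ:

        import numpy as np

        rng = np.random.default_rng(1)

        # Hypothetical go/no-go data: (successes, trials) per component.
        data = {"A": (48, 50), "B": (19, 20), "C": (30, 30)}   # C: no failures seen

        # Beta(1,1) prior -> Beta(1+s, 1+n-s) posterior on each component reliability.
        n_draws = 100_000
        post = {k: rng.beta(1 + s, 1 + n - s, size=n_draws) for k, (s, n) in data.items()}

        # Assumed structure: A in series with the parallel pair (B, C).
        r_sys = post["A"] * (1.0 - (1.0 - post["B"]) * (1.0 - post["C"]))
        lo, med, hi = np.percentile(r_sys, [5, 50, 95])
        print(f"system reliability: median {med:.3f}, 90% interval ({lo:.3f}, {hi:.3f})")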

  6. Quantifying uniformity of mapped reads

    PubMed Central

    Hower, Valerie; Starfield, Richard; Roberts, Adam; Pachter, Lior

    2012-01-01

    Summary: We describe a tool for quantifying the uniformity of mapped reads in high-throughput sequencing experiments. Our statistic directly measures the uniformity of both read position and fragment length, and we explain how to compute a P-value that can be used to quantify biases arising from experimental protocols and mapping procedures. Our method is useful for comparing different protocols in experiments such as RNA-Seq. Availability and implementation: We provide a freely available and open-source Python script that can be used to analyze raw read data or reads mapped to transcripts in BAM format at http://www.math.miami.edu/~vhower/ReadSpy.html Contact: lpachter@math.berkeley.edu Supplementary Information: Supplementary data are available at Bioinformatics online. PMID:22815359

  7. Use of the Concept of Equivalent Biologically Effective Dose (BED) to Quantify the Contribution of Hyperthermia to Local Tumor Control in Radiohyperthermia Cervical Cancer Trials, and Comparison With Radiochemotherapy Results

    SciTech Connect

    Plataniotis, George A.; Dale, Roger G.

    2009-04-01

    Purpose: To express the magnitude of contribution of hyperthermia to local tumor control in radiohyperthermia (RT/HT) cervical cancer trials, in terms of the radiation-equivalent biologically effective dose (BED) and to explore the potential of the combined modalities in the treatment of this neoplasm. Materials and Methods: Local control rates of both arms of each study (RT vs. RT+HT) reported from randomized controlled trials (RCT) on concurrent RT/HT for cervical cancer were reviewed. By comparing the two tumor control probabilities (TCPs) from each study, we calculated the HT-related log cell-kill and then expressed it in terms of the number of 2 Gy fraction equivalents, for a range of tumor volumes and radiosensitivities. We have compared the contribution of each modality and made some exploratory calculations on the TCPs that might be expected from a combined trimodality treatment (RT+CT+HT). Results: The HT-equivalent number of 2-Gy fractions ranges from 0.6 to 4.8 depending on radiosensitivity. Opportunities for clinically detectable improvement by the addition of HT are only available in tumors with an alpha value in the approximate range of 0.22-0.28 Gy-1. A combined treatment (RT+CT+HT) is not expected to improve prognosis in radioresistant tumors. Conclusion: The most significant improvements in TCP, which may result from the combination of RT/CT/HT for locally advanced cervical carcinomas, are likely to be limited only to those patients with tumors of relatively low-intermediate radiosensitivity.
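
    For orientation, the linear-quadratic relations underlying such conversions are standard; how the authors apply them to the trial TCP data is specific to the paper. With n fractions of dose d, and an extra log10 cell kill ΔL attributed to hyperthermia, one way to express the 2 Gy fraction equivalent is:

        \mathrm{BED} = nd\left(1 + \frac{d}{\alpha/\beta}\right),
        \qquad n_{2\,\mathrm{Gy}} \approx \frac{\Delta L \,\ln 10}{2\alpha + 4\beta}

    (each 2 Gy fraction contributes αd + βd² = 2α + 4β natural-log units of cell kill).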

  8. Quantifying Order in Poly(3-hexylthiophene)

    NASA Astrophysics Data System (ADS)

    Snyder, Chad; Nieuwendaal, Ryan; Delongchamp, Dean; Luscombe, Christine; Sista, Prakash; Boyd, Shane

    2014-03-01

    While poly(3-hexylthiophene) (P3HT) is one of the most studied polymers in organic electronics, it remains one of the most challenging in terms of quantitative measures of its order, e.g., crystallinity. To address this challenge, we prepared a series of highly regioregular P3HT fractions ranging from 3.3 kg/mol to 23 kg/mol. Using this series plus a high molar mass (62 kg/mol) commercial material, we compare different metrics for order in P3HT via calorimetry, solid state NMR, and x-ray diffraction. We reconcile the results of our work with those of recent studies on oligomeric 3-hexylthiophenes. One challenge of quantifying low molar mass P3HT samples via DSC is a thermal fractionation effect due to varying chain lengths. We quantify these effects in our molar mass series, and a clear crossover region from extended chain crystals to chain folded crystals is identified through the thermal fractionation process. New values for the enthalpy of fusion of high molar mass P3HT and its equilibrium melting temperature are established through our work. Another result of our research is the validation of high heating rate DSC methods for quantifying crystallinity in P3HT samples with device relevant film thicknesses.
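
    The DSC crystallinity metric referred to here is, in its usual form, the ratio of the measured melting enthalpy to the enthalpy of fusion of a hypothetical 100% crystalline sample (the quantity the study re-determines for high molar mass P3HT):

        X_c = \frac{\Delta H_m}{\Delta H_f^{0}} \times 100\%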

  9. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-01

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short' - prior-24-hour recall obtained in the afternoon and 'long' - previous-day recall obtained in the morning) with four prompts ('forward' - distant to recent, 'meal name' - breakfast, etc., 'open' - no instructions, and 'reverse' - recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure - report rate - and reporting-error-sensitive measures - correspondence rate and inflation ratio - were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio - but not report rate - showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID:26865356

  10. Gaussian intrinsic entanglement: An entanglement quantifier based on secret correlations

    NASA Astrophysics Data System (ADS)

    Mišta, Ladislav; Tatham, Richard

    2015-06-01

    Intrinsic entanglement (IE) is a quantity which aims at quantifying bipartite entanglement carried by a quantum state as an optimal amount of the intrinsic information that can be extracted from the state by measurement. We investigate in detail the properties of a Gaussian version of IE, the so-called Gaussian intrinsic entanglement (GIE). We show explicitly how GIE simplifies to the mutual information of a distribution of outcomes of measurements on a conditional state obtained by a measurement on a purifying subsystem of the analyzed state, which is first minimized over all measurements on the purifying subsystem and then maximized over all measurements on the conditional state. By constructing for any separable Gaussian state a purification and a measurement on the purifying subsystem which projects the purification onto a product state, we prove that GIE vanishes on all Gaussian separable states. Via realization of quantum operations by teleportation, we further show that GIE is nonincreasing under Gaussian local trace-preserving operations and classical communication. For pure Gaussian states and a reduction of the continuous-variable GHZ state, we calculate GIE analytically and we show that it is always equal to the Gaussian Rényi-2 entanglement. We also extend the analysis of IE to a non-Gaussian case by deriving an analytical lower bound on IE for a particular form of the non-Gaussian continuous-variable Werner state. Our results indicate that mapping of entanglement onto intrinsic information is capable of transmitting also quantitative properties of entanglement and that this property can be used for introduction of a quantifier of Gaussian entanglement which is a compromise between computable and physically meaningful entanglement quantifiers.

  11. PARAMETERS FOR QUANTIFYING BEAM HALO

    SciTech Connect

    C.K. ALLEN; T.P. WANGLER

    2001-06-01

    Two different parameters for the quantitative description of beam halo are introduced, both based on moments of the particle distribution. One parameter is a measure of spatial halo formation and has been defined previously by Wangler and Crandall [3], termed the profile parameter. The second parameter relies on kinematic invariants to quantify halo formation in phase space; we call it the halo parameter. The profile parameter can be computed from experimental beam profile data. The halo parameter provides a theoretically more complete description of halo in phase space, but is difficult to obtain experimentally.

  12. Quantifying Connectivity in the Coastal Ocean

    NASA Astrophysics Data System (ADS)

    Mitarai, S.; Siegel, D.; Watson, J.; Dong, C.; McWilliams, J.

    2008-12-01

    The quantification of coastal connectivity is important for a wide range of real-world applications ranging from marine pollution to nearshore fisheries management. For these purposes, coastal connectivity is best defined as the probability that water parcels from one nearshore location are advected to another site over a given time interval. Here, we demonstrate how to quantify coastal connectivity using Lagrangian probability- density function (PDF) methods, a classic modeling approach for many turbulent applications, and numerical solutions of coastal circulation for the Southern California Bight. Mean dispersal patterns from a single release site (or Lagrangian PDFs) show a strong dependency to the particle-release location and seasonal variability, reflecting circulation patterns in the Southern California Bight. Strong interannual variations, responding to El Nino and La Nina transitions are also observed. Mean connectivity patterns, deduced from Lagrangian PDFs, is spatially heterogeneous for the advection time of around 30 days or less, resulting from distinctive circulation patterns, and becomes more homogeneous for a longer advection time. A given realization of connectivity is stochastic because of eddy-driven transport and synoptic wind forcing changes. In general, mainland sites are good sources while both Northern and Southern Channel Islands are poor source sites, although they receive substantial fluxes of water parcels from the mainland. The predicted connectivity gives useful information to ecological and other applications for the Southern California Bight (e.g., designing marine protected areas, understanding gene structures, and predicting the impact of a pollution event) and provide a path for assessing connectivity for other regions of the coastal ocean.
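
    A minimal sketch of how a connectivity matrix of this kind can be assembled from Lagrangian particle trajectories; the site indices and trajectories below are hypothetical, and the actual Southern California Bight analysis uses Lagrangian PDFs from the circulation model:

        import numpy as np

        def connectivity_matrix(release_site, arrival_site, n_sites):
            """Probability that a particle released at site i is found at site j
            after the chosen advection time (arrival_site = -1 if it never
            reaches a nearshore site)."""
            counts = np.zeros((n_sites, n_sites))
            for i, j in zip(release_site, arrival_site):
                if j >= 0:
                    counts[i, j] += 1
            released = np.bincount(release_site, minlength=n_sites).astype(float)
            return counts / np.maximum(released, 1.0)[:, None]   # row-normalize

        # Hypothetical example: 3 nearshore sites, 6 particle trajectories.
        C = connectivity_matrix(np.array([0, 0, 1, 1, 2, 2]),
                                np.array([1, 1, 2, -1, 0, 2]), n_sites=3)
        print(C)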

  13. Quantifying crystal-melt segregation in dykes

    NASA Astrophysics Data System (ADS)

    Yamato, Philippe; Duretz, Thibault; May, Dave A.; Tartèse, Romain

    2015-04-01

    The dynamics of magma flow is highly affected by the presence of a crystalline load. During magma ascent, it has been demonstrated that crystal-melt segregation constitutes a viable mechanism for magmatic differentiation. However, the influences of crystal volume fraction, geometry, size and density on crystal melt segregation are still not well constrained. In order to address these issues, we performed a parametric study using 2D direct numerical simulations, which model the ascent of crystal-bearing magma in a vertical dyke. Using these models, we have characterised the amount of segregation as a function of different quantities including: the crystal fraction (φ), the density contrast between crystals and melt (Δρ), the size of the crystals (Ac) and their aspect ratio (R). Results show that crystal aspect ratio does not affect the segregation if R is small enough (long axis smaller than ~1/6 of the dyke width, Wd). Inertia within the system was also found not to influence crystal-melt segregation. The degree of segregation was however found to be highly dependent upon other parameters. Segregation is highest when Δρ and Ac are large, and lowest for large pressure gradient (Pd) and/or large values of Wd. These four parameters can be combined into a single one, the S number, which can be used to quantify the segregation. Based on systematic numerical modelling and dimensional analysis, we provide a first order scaling law which allows quantification of the segregation for an arbitrary S number and φ, encompassing a wide range of typical parameters encountered in terrestrial magmatic systems.

  14. Quantifying torso deformity in scoliosis

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter O.; Kumar, Anish; Durdle, Nelson G.; Raso, V. James

    2006-03-01

    Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.

  15. Quantifying mixing using equilibrium reactions

    SciTech Connect

    Wheat, Philip M.; Posner, Jonathan D.

    2009-03-15

    A method of quantifying equilibrium reactions in a microchannel using a fluorometric reaction of Fluo-4 and Ca2+ ions is presented. Under the proper conditions, equilibrium reactions can be used to quantify fluid mixing without the challenges associated with constituent mixing measures such as limited imaging spatial resolution and viewing angle coupled with three-dimensional structure. Mixing of CaCl2 and the calcium-indicating fluorescent dye Fluo-4 is measured quantitatively in Y-shaped microchannels. Reactant and product concentration distributions are modeled using Green's function solutions and a numerical solution to the advection-diffusion equation. Equilibrium reactions provide for an unambiguous, quantitative measure of mixing when the reactant concentrations are greater than 100 times their dissociation constant and the diffusivities are equal. At lower concentrations and for dissimilar diffusivities, the area averaged fluorescence signal reaches a maximum before the species have interdiffused, suggesting that reactant concentrations and diffusivities must be carefully selected to provide unambiguous, quantitative mixing measures. Fluorometric equilibrium reactions work over a wide range of pH and background concentrations such that they can be used for a wide variety of fluid mixing measures including industrial or microscale flows.

  16. Clinical consequences of the Calypso trial showing superiority of PEG-liposomal doxorubicin and carboplatin over paclitaxel and carboplatin in recurrent ovarian cancer: results of an Austrian gynecologic oncologists' expert meeting.

    PubMed

    Petru, Edgar; Reinthaller, Alexander; Angleitner-Boubenizek, Lukas; Schauer, Christian; Zeimet, Alain; Dirschlmayer, Wolfgang; Medl, Michael; Stummvoll, Wolfgang; Sevelda, Paul; Marth, Christian

    2010-11-01

    The Calypso trial showed an improved progression-free survival with PEG-liposomal doxorubicin (PLD) and carboplatin (P) as compared with the standard regimen paclitaxel (PCLTX) and P in the second- or third-line treatment of platinum-sensitive epithelial ovarian cancer [1]. A panel of Austrian gynecologic oncologists discussed the clinical consequences of the data from the Calypso study for the routine practice. PLD + P had a significantly lower rate of alopecia and neuropathy than the taxane regimen, both toxicities which compromise the quality of life. Due to possible significant thrombocytopenia, the blood counts of patients undergoing PLD + P therapy should be monitored weekly. Patients receiving PLD/P are at higher risk of nausea and vomiting. Palmoplantar erythrodysesthesia (hand-foot syndrome) is a significant toxicity of PLD + P most prevalent after the third or fourth cycle. Prophylaxis consists of avoiding pressure on feet and hands and other parts of the body. Similarly, prophylaxis of mucositis seems important and includes avoiding consumption of hot, spicy and salty foods and drinks. Mouth dryness should be avoided. Premedication with antiemetics and dexamethasone dissolved in 5% glucose is done to prevent hypersensitivity to PLD. In conclusion, the therapeutic index is more favorable for PLD + P than for PCTX + P. PMID:21072604

  17. Quantifying entanglement with witness operators

    SciTech Connect

    Brandao, Fernando G.S.L.

    2005-08-15

    We present a unifying approach to the quantification of entanglement based on entanglement witnesses, which includes several already established entanglement measures such as the negativity, the concurrence, and the robustness of entanglement. We then introduce an infinite family of new entanglement quantifiers, having as its limits the best separable approximation measure and the generalized robustness. Gaussian states, states with symmetry, states constrained to super-selection rules, and states composed of indistinguishable particles are studied under the view of the witnessed entanglement. We derive new bounds to the fidelity of teleportation d_min, for the distillable entanglement E_D and for the entanglement of formation. A particular measure, the PPT-generalized robustness, stands out due to its easy calculability and provides sharper bounds to d_min and E_D than the negativity in most of the states. We illustrate our approach studying thermodynamical properties of entanglement in the Heisenberg XXX and dimerized models.

  18. Tagging SNP haplotype analysis of the secretory PLA2-V gene, PLA2G5, shows strong association with LDL and oxLDL levels, suggesting functional distinction from sPLA2-IIA: results from the UDACS study.

    PubMed

    Wootton, Peter T E; Arora, Nupur L; Drenos, Fotios; Thompson, Simon R; Cooper, Jackie A; Stephens, Jeffrey W; Hurel, Steven J; Hurt-Camejo, Eva; Wiklund, Olov; Humphries, Steve E; Talmud, Philippa J

    2007-06-15

    Animal and human studies suggest that both secretory PLA2 (sPLA2)-V and sPLA2-IIA (encoded, respectively, by the neighbouring PLA2G5 and PLA2G2A genes) contribute to atherogenesis. Elevated plasma sPLA2-IIA predicts coronary heart disease (CHD) risk, but no mass assay for sPLA2-V is available. We previously reported that tagging single nucleotide polymorphism (tSNP) haplotypes of PLA2G2A are strongly associated with sPLA2-IIA mass, but not lipid levels. Here, we use tSNPs of the sPLA2-V gene to investigate the association of PLA2G5 with CHD risk markers. Seven PLA2G5 tSNP genotypes, explaining >92% of the locus genetic variability, were determined in 519 patients with Type II diabetes (in whom PLA2G2A tSNP data were available), and defined seven common haplotypes (frequencies >5%). PLA2G5 and PLA2G2A tSNPs showed linkage disequilibrium (LD). Compared to the common PLA2G5 haplotype, H1 (frequency 34.9%), haplotypes H2-7 were associated with overall higher plasma LDL (P < 0.00004) and total cholesterol (P < 0.00003) levels yet lower oxLDL/LDL (P = 0.006) and sPLA2-IIA mass (P = 0.04), probably reflecting LD with PLA2G2A. An intronic tSNP (rs11573248), unlikely itself to be functional, distinguished H1 from LDL-raising haplotypes and may mark a functional site. In conclusion, PLA2G5 tSNP haplotypes demonstrate an association with total and LDL cholesterol and oxLDL/LDL, not seen with PLA2G2A, thus confirming distinct functional roles for these two sPLA2s. PMID:17545304

  19. QUANTIFIERS UNDONE: REVERSING PREDICTABLE SPEECH ERRORS IN COMPREHENSION

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2015-01-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as Many students often turn in their assignments late. Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always or few-seldom.) This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don’t explain why listeners fail to apprehend the ‘extra meaning’ added by the potentially redundant material only in limited circumstances. PMID:26478637

  20. Results of the HepZero study comparing heparin-grafted membrane and standard care show that heparin-grafted dialyzer is safe and easy to use for heparin-free dialysis.

    PubMed

    Laville, Maurice; Dorval, Marc; Fort Ros, Joan; Fay, Renaud; Cridlig, Joëlle; Nortier, Joëlle L; Juillard, Laurent; Dębska-Ślizień, Alicja; Fernández Lorente, Loreto; Thibaudin, Damien; Franssen, Casper; Schulz, Michael; Moureau, Frédérique; Loughraieb, Nathalie; Rossignol, Patrick

    2014-12-01

    Heparin is used to prevent clotting during hemodialysis, but heparin-free hemodialysis is sometimes needed to decrease the risk of bleeding. The HepZero study is a randomized, multicenter international controlled open-label trial comparing no-heparin hemodialysis strategies designed to assess non-inferiority of a heparin grafted dialyzer (NCT01318486). A total of 251 maintenance hemodialysis patients at increased risk of hemorrhage were randomly allocated for up to three heparin-free hemodialysis sessions using a heparin-grafted dialyzer or the center standard-of-care consisting of regular saline flushes or pre-dilution. The first heparin-free hemodialysis session was considered successful when there was neither complete occlusion of air traps or dialyzer, nor additional saline flushes, changes of dialyzer or bloodlines, or premature termination. The current standard-of-care resulted in high failure rates (50%). The success rate in the heparin-grafted membrane arm was significantly higher than in the control group (68.5% versus 50.4%), which was consistent for both standard-of-care modalities. The absolute difference between the heparin-grafted membrane and the controls was 18.2%, with a lower bound of the 90% confidence interval equal to plus 7.9%. The hypothesis of the non-inferiority at the minus 15% level was accepted, although superiority at the plus 15% level was not reached. Thus, use of a heparin-grafted membrane is a safe, helpful, and easy-to-use method for heparin-free hemodialysis in patients at increased risk of hemorrhage. PMID:25007166

  1. Results, Results, Results?

    ERIC Educational Resources Information Center

    Wallace, Dale

    2000-01-01

    Given the amount of time, energy, and money devoted to provincial achievement exams in Canada, it is disturbing that Alberta students and teachers feel so pressured and that the exams do not accurately reflect what students know. Research shows that intelligence has an (untested) emotional component. (MLH)

  2. QUANTIFYING ASSAY VARIATION IN NUTRIENT ANALYSIS OF FEEDSTUFFS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analytical results from different laboratories have greater variation than those from a single laboratory, and this variation differs by nutrient. Objectives of this presentation are to describe methods for quantifying the analytical reproducibility among and repeatability within laboratories, estim...

  3. Quantifying Evaporation in a Permeable Pavement System

    EPA Science Inventory

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  4. Quantifying MLI Thermal Conduction in Cryogenic Applications from Experimental Data

    NASA Astrophysics Data System (ADS)

    Ross, R. G., Jr.

    2015-12-01

    Multilayer Insulation (MLI) uses stacks of low-emittance metalized sheets combined with low-conduction spacer features to greatly reduce the heat transfer to cryogenic applications from higher temperature surrounds. However, as the hot-side temperature decreases from room temperature to cryogenic temperatures, the level of radiant heat transfer drops as the fourth power of the temperature, while the heat transfer by conduction only falls off linearly. This results in cryogenic MLI being dominated by conduction, a quantity that is extremely sensitive to MLI blanket construction and very poorly quantified in the literature. To develop useful quantitative data on cryogenic blanket conduction, multilayer nonlinear heat transfer models are used to analyze extensive heat transfer data measured by Lockheed Palo Alto on their cryogenic dewar MLI and measured by JPL on their spacecraft MLI. The data-fitting aspect of the modeling allows the radiative and conductive thermal properties of the tested blankets to be explicitly quantified. Results are presented showing that MLI conductance varies by a factor of 600 between spacecraft MLI and Lockheed's best cryogenic MLI.
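
    As a simplified illustration of the data-fitting idea (not the multilayer nonlinear models actually used), a blanket's measured heat flux can be fit to a two-term radiation-plus-conduction correlation that is linear in its coefficients; all numbers below are hypothetical:

        import numpy as np

        # Hypothetical boundary temperatures (K) and measured heat fluxes (W/m^2).
        th = np.array([300.0, 250.0, 200.0, 150.0, 100.0])
        tc = np.full_like(th, 80.0)
        q_meas = np.array([1.9, 0.95, 0.45, 0.18, 0.06])

        # Model: q = a*(Th^4 - Tc^4) [radiative] + b*(Th - Tc) [conductive].
        A = np.column_stack([th**4 - tc**4, th - tc])
        (a, b), *_ = np.linalg.lstsq(A, q_meas, rcond=None)

        # With the terms separated, the conductive share of the heat leak at each
        # hot-side temperature shows how conduction grows in importance as the
        # hot-side temperature drops.
        frac_cond = b * (th - tc) / (A @ np.array([a, b]))
        print(f"a = {a:.3e} W m-2 K-4, b = {b:.3e} W m-2 K-1")
        print("conductive fraction:", np.round(frac_cond, 2))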

  5. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez; Sheri Raborn, CPA; Tom Baker

    2008-03-31

    National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's electrical power demands, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to service its community.

  6. Career and Technical Education: Show Us the Buck, We'll Show You the Bang!

    ERIC Educational Resources Information Center

    Whetstone, Ryan

    2011-01-01

    Adult and CTE programs in California have been cut by about 60 percent over the past three years. A number of school districts have summarily eliminated these programs to preserve funding for other educational endeavors. The author says part of the problem has been the community's inability to communicate quantifiable results. One of the hottest…

  7. Quantifying Uncertainty in Epidemiological Models

    SciTech Connect

    Ramanathan, Arvind; Jha, Sumit Kumar

    2012-01-01

    Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
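
    A minimal sketch of the statistical-model-checking idea for an ODE epidemic model: sample the uncertain parameters, simulate, and estimate the probability that a stated property holds. The SIR model, parameter ranges, and property below are illustrative, not those of the paper:

        import numpy as np

        rng = np.random.default_rng(2)

        def sir_peak_fraction(beta, gamma, i0=1e-3, dt=0.1, t_max=200.0):
            """Euler-integrate a simple SIR model; return the peak infected fraction."""
            s, i = 1.0 - i0, i0
            peak = i
            for _ in range(int(t_max / dt)):
                s, i = s - dt * beta * s * i, i + dt * (beta * s * i - gamma * i)
                peak = max(peak, i)
            return peak

        # Property to check: "the peak infected fraction stays below 10%".
        n = 5000
        betas = rng.uniform(0.2, 0.5, n)     # hypothetical transmission-rate range
        gammas = rng.uniform(0.1, 0.2, n)    # hypothetical recovery-rate range
        holds = [sir_peak_fraction(b, g) < 0.10 for b, g in zip(betas, gammas)]
        print("estimated P(peak < 10%):", np.mean(holds))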

  8. Quantifying macromolecular conformational transition pathways

    NASA Astrophysics Data System (ADS)

    Seyler, Sean; Kumar, Avishek; Thorpe, Michael; Beckstein, Oliver

    2015-03-01

    Diverse classes of proteins function through large-scale conformational changes that are challenging for computer simulations. A range of fast path-sampling techniques have been used to generate transitions, but it has been difficult to compare paths from (and assess the relative strengths of) different methods. We introduce a comprehensive method (pathway similarity analysis, PSA) for quantitatively characterizing and comparing macromolecular pathways. The Hausdorff and Fréchet metrics (known from computational geometry) are used to quantify the degree of similarity between polygonal curves in configuration space. A strength of PSA is its use of the full information available from the 3 N-dimensional configuration space trajectory without requiring additional specific knowledge about the system. We compare a sample of eleven different methods for the closed-to-open transitions of the apo enzyme adenylate kinase (AdK) and also apply PSA to an ensemble of 400 AdK trajectories produced by dynamic importance sampling MD and the Geometrical Pathways algorithm. We discuss the method's potential to enhance our understanding of transition path sampling methods, validate them, and help guide future research toward deeper physical insights into conformational transitions.
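
    As a small illustration of one of the metrics named above, the symmetric Hausdorff distance between two paths sampled as point sets can be computed directly from pairwise distances; the toy 2-D paths below stand in for the 3N-dimensional configuration-space trajectories and are not from the AdK data set:

        import numpy as np

        def hausdorff(path_a, path_b):
            """Symmetric Hausdorff distance between two sampled paths (point sets)."""
            d = np.linalg.norm(path_a[:, None, :] - path_b[None, :, :], axis=-1)
            return max(d.min(axis=1).max(), d.min(axis=0).max())

        # Toy 2-D "paths" standing in for configuration-space trajectories.
        t = np.linspace(0.0, 1.0, 50)[:, None]
        path1 = np.hstack([t, t**2])
        path2 = np.hstack([t, t])
        print("Hausdorff distance:", hausdorff(path1, path2))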

  9. New Drug Shows Mixed Results Against Early Alzheimer's

    MedlinePlus

  10. Quantifying cell behaviors during embryonic wound healing

    NASA Astrophysics Data System (ADS)

    Mashburn, David; Ma, Xiaoyan; Crews, Sarah; Lynch, Holley; McCleery, W. Tyler; Hutson, M. Shane

    2011-03-01

    During embryogenesis, internal forces induce motions in cells leading to widespread motion in tissues. We previously developed laser hole-drilling as a consistent, repeatable way to probe such epithelial mechanics. The initial recoil (less than 30s) gives information about physical properties (elasticity, force) of cells surrounding the wound, but the long-term healing process (tens of minutes) shows how cells adjust their behavior in response to stimuli. To study this biofeedback in many cells through time, we developed tools to quantify statistics of individual cells. By combining watershed segmentation with a powerful and efficient user interaction system, we overcome problems that arise in any automatic segmentation from poor image quality. We analyzed cell area, perimeter, aspect ratio, and orientation relative to wound for a wide variety of laser cuts in dorsal closure. We quantified statistics for different regions as well, i.e. cells near to and distant from the wound. Regional differences give a distribution of wound-induced changes, whose spatial localization provides clues into the physical/chemical signals that modulate the wound healing response. Supported by the Human Frontier Science Program (RGP0021/2007 C).
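
    Once a corrected segmentation is in hand, per-cell shape statistics of the kind listed above can be extracted with standard tools; a minimal sketch using scikit-image on a toy mask (the actual analysis pipeline, watershed parameters, and wound-relative orientation are specific to the study):

        import numpy as np
        from skimage.measure import label, regionprops

        # Toy binary mask standing in for a corrected watershed segmentation.
        mask = np.zeros((60, 60), dtype=bool)
        mask[5:25, 5:15] = True     # an elongated "cell"
        mask[30:50, 30:50] = True   # a squarer "cell"

        for region in regionprops(label(mask)):
            aspect = region.major_axis_length / max(region.minor_axis_length, 1e-9)
            print(f"cell {region.label}: area={region.area}, "
                  f"perimeter={region.perimeter:.1f}, aspect={aspect:.2f}, "
                  f"orientation={np.degrees(region.orientation):.1f} deg")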

  11. Quantifying utricular stimulation during natural behavior

    PubMed Central

    Rivera, Angela R. V.; Davis, Julian; Grant, Wally; Blob, Richard W.; Peterson, Ellengene; Neiman, Alexander B.; Rowe, Michael

    2012-01-01

    The use of natural stimuli in neurophysiological studies has led to significant insights into the encoding strategies used by sensory neurons. To investigate these encoding strategies in vestibular receptors and neurons, we have developed a method for calculating the stimuli delivered to a vestibular organ, the utricle, during natural (unrestrained) behaviors, using the turtle as our experimental preparation. High-speed digital video sequences are used to calculate the dynamic gravito-inertial (GI) vector acting on the head during behavior. X-ray computed tomography (CT) scans are used to determine the orientation of the otoconial layer (OL) of the utricle within the head, and the calculated GI vectors are then rotated into the plane of the OL. Thus, the method allows us to quantify the spatio-temporal structure of stimuli to the OL during natural behaviors. In the future, these waveforms can be used as stimuli in neurophysiological experiments to understand how natural signals are encoded by vestibular receptors and neurons. We provide one example of the method which shows that turtle feeding behaviors can stimulate the utricle at frequencies higher than those typically used in vestibular studies. This method can be adapted to other species, to other vestibular end organs, and to other methods of quantifying head movements. PMID:22753360

  12. Quantifying of bactericide properties of medicinal plants.

    PubMed

    Kováts, Nora; Ács, András; Gölöncsér, Flóra; Barabás, Anikó

    2011-06-01

    Extended research has been carried out to clarify the ecological role of plant secondary metabolites (SMs). Although their primary ecological function is self-defence, bioactive compounds have long been used in alternative medicine or in biological control of pests. Several members of the family Labiatae are known to have strong antimicrobial capacity. For testing and quantifying antibacterial activity, most often standard microbial protocols are used, assessing inhibitory activity on a selected strain. In this study the applicability of a microbial ecotoxtest was evaluated to quantify the aggregate bactericide capacity of Labiatae species, based on the bioluminescence inhibition of the bacterium Vibrio fischeri. Striking differences were found amongst herbs, reaching even 10-fold toxicity. Glechoma hederacea L. proved to be the most toxic, with the EC50 of 0.4073 g dried plant/l. LC50 values generated by the standard bioassay seem to be a good indicator of the bactericide property of herbs. Traditional use of the selected herbs shows a good correlation with bioactivity expressed as bioluminescence inhibition, leading to the conclusion that the Vibrio fischeri bioassay can be a good indicator of the overall antibacterial capacity of herbs, at least on a screening level. PMID:21502819
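
    As a sketch of how an EC50 like the one reported above can be obtained, the snippet below fits a Hill-type dose-response curve to a hypothetical Vibrio fischeri luminescence-inhibition dilution series with SciPy; the concentrations and inhibition fractions are invented for illustration.

        import numpy as np
        from scipy.optimize import curve_fit

        def hill(conc, ec50, slope):
            """Fraction of luminescence inhibition as a function of extract concentration."""
            return conc**slope / (ec50**slope + conc**slope)

        # Hypothetical dilution series (g dried plant / L) and measured inhibition fractions.
        conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1.6])
        inhibition = np.array([0.08, 0.18, 0.33, 0.52, 0.71, 0.86])

        (ec50, slope), _ = curve_fit(hill, conc, inhibition, p0=[0.4, 1.0])
        print(f"EC50 = {ec50:.3f} g/L, Hill slope = {slope:.2f}")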

  13. Quantifying of bactericide properties of medicinal plants

    PubMed Central

    Kováts, Nora; Ács, András; Gölöncsér, Flóra; Barabás, Anikó

    2011-01-01

    Extended research has been carried out to clarify the ecological role of plant secondary metabolites (SMs). Although their primary ecological function is self-defense, bioactive compounds have long been used in alternative medicine or in biological control of pests. Several members of the family Labiatae are known to have strong antimicrobial capacity. For testing and quantifying antibacterial activity, most often standard microbial protocols are used, assessing inhibitory activity on a selected strain. In this study, the applicability of a microbial ecotoxtest was evaluated to quantify the aggregate bactericide capacity of Labiatae species, based on the bioluminescence inhibition of the bacterium Vibrio fischeri. Striking differences were found amongst herbs, reaching even 10-fold toxicity. Glechoma hederacea L. proved to be the most toxic, with the EC50 of 0.4073 g dried plant/l. LC50 values generated by the standard bioassay seem to be a good indicator of the bactericide property of herbs. Traditional use of the selected herbs shows a good correlation with bioactivity expressed as bioluminescence inhibition, leading to the conclusion that the Vibrio fischeri bioassay can be a good indicator of the overall antibacterial capacity of herbs, at least on a screening level. PMID:21502819

  14. Quantifying Emergent Behavior of Autonomous Robots

    NASA Astrophysics Data System (ADS)

    Martius, Georg; Olbrich, Eckehard

    2015-10-01

    Quantifying behaviors of robots which were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states one has to deal with the fact that their values will depend on the resolution with which the systems states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy in resolution dependent and resolution independent parts and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For the practical estimation we propose to use estimates based on the correlation integral instead of the direct estimation of the mutual information using the algorithm by Kraskov et al. (2004) which is based on next neighbor statistics because the latter allows less control of the scale dependencies. Using our algorithm we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.
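
    To make the past-future information idea concrete, the sketch below gives a rough plug-in estimate of the mutual information between length-k past and future words of a discretized behavioral time series; the bin count makes the resolution dependence explicit. This is simple histogram counting on synthetic signals, not the correlation-integral estimator proposed in the abstract.

        import numpy as np
        from collections import Counter

        def predictive_information(series, n_bins=8, k=2):
            """Plug-in estimate of I(past_k ; future_k) in bits for a scalar time series.

            The series is discretized into n_bins states; k sets the word length.
            The value depends on the resolution (n_bins), as the abstract stresses.
            """
            edges = np.histogram(series, bins=n_bins)[1][1:-1]
            states = np.digitize(series, edges)
            words = [tuple(states[i:i + 2 * k]) for i in range(len(states) - 2 * k + 1)]
            joint = Counter(words)
            past = Counter(w[:k] for w in words)
            future = Counter(w[k:] for w in words)
            n = len(words)
            mi = 0.0
            for w, c in joint.items():
                p_joint = c / n
                mi += p_joint * np.log2(p_joint / (past[w[:k]] / n * future[w[k:]] / n))
            return mi

        # Hypothetical example: a noisy oscillation carries more structure than white noise.
        t = np.arange(5000)
        print(predictive_information(np.sin(0.1 * t) + 0.1 * np.random.randn(5000)))
        print(predictive_information(np.random.randn(5000)))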

  15. Evaluation of two methods for quantifying passeriform lice.

    PubMed

    Koop, Jennifer A H; Clayton, Dale H

    2013-06-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer's timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238-302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  16. Evaluation of two methods for quantifying passeriform lice

    PubMed Central

    Koop, Jennifer A. H.; Clayton, Dale H.

    2013-01-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer’s timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238–302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  17. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.
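
    The diffusive-versus-ballistic distinction above amounts to the exponent with which a quantifier's spread grows with immigrant density (which plays the role of time); the sketch below fits that exponent on log-log axes for two synthetic quantifiers. The Spanish data are not reproduced, and exponents near 1 and 2 indicate diffusive and ballistic behavior, respectively.

        import numpy as np

        def scaling_exponent(time_like, msd):
            """Fit msd ~ t^alpha on log-log axes; alpha ~ 1 is diffusive, ~ 2 ballistic."""
            slope, _ = np.polyfit(np.log(time_like), np.log(msd), 1)
            return slope

        # Hypothetical quantifier spreads: immigrant density plays the role of time.
        density = np.linspace(0.01, 1.0, 200)
        msd_social = density ** 1.0      # diffusive-like quantifier
        msd_economic = density ** 2.0    # ballistic-like quantifier
        print(scaling_exponent(density, msd_social))    # ~1
        print(scaling_exponent(density, msd_economic))  # ~2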

  18. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femto-second laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the deformation behavior was found to depend strongly on the character of the nearby microstructure.

  19. Quantifying facial paralysis using the Kinect v2.

    PubMed

    Gaber, Amira; Taher, Mona F; Wahed, Manal Abdel

    2015-08-01

    Assessment of facial paralysis (FP) and quantitative grading of facial asymmetry are essential in order to quantify the extent of the condition as well as to follow its improvement or progression. As such, there is a need for an accurate quantitative grading system that is easy to use, inexpensive and has minimal inter-observer variability. A comprehensive automated system to quantify and grade FP is the main objective of this work. An initial prototype has been presented by the authors. The present research aims to enhance the accuracy and robustness of one of this system's modules: the resting symmetry module. This is achieved by including several modifications to the computation method of the symmetry index (SI) for the eyebrows, eyes and mouth. These modifications are the gamma correction technique, the area of the eyes, and the slope of the mouth. The system was tested on normal subjects and showed promising results. With the modified method, the mean SI of the eyebrows decreased slightly from 98.42% to 98.04%, while the mean SI for the eyes and mouth increased from 96.93% to 99.63% and from 95.6% to 98.11%, respectively. The system is easy to use, inexpensive, automated and fast, has no inter-observer variability and is thus well suited for clinical use. PMID:26736799

  20. Quantifying a cellular automata simulation of electric vehicles

    NASA Astrophysics Data System (ADS)

    Hill, Graeme; Bell, Margaret; Blythe, Phil

    2014-12-01

    Within this work the Nagel-Schreckenberg (NS) cellular automata is used to simulate a basic cyclic road network. Results from SwitchEV, a real world Electric Vehicle trial which has collected more than two years of detailed electric vehicle data, are used to quantify the results of the NS automata, demonstrating similar power consumption behavior to that observed in the experimental results. In particular the efficiency of the electric vehicles reduces as the vehicle density increases, due in part to the reduced efficiency of EVs at low speeds, but also due to the energy consumption inherent in changing speeds. Further work shows the results from introducing spatially restricted speed restriction. In general it can be seen that induced congestion from spatially transient events propagates back through the road network and alters the energy and efficiency profile of the simulated vehicles, both before and after the speed restriction. Vehicles upstream from the restriction show a reduced energy usage and an increased efficiency, and vehicles downstream show an initial large increase in energy usage as they accelerate away from the speed restriction.
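
    For reference, a compact sketch of the standard Nagel-Schreckenberg update rules on a cyclic (periodic) road is shown below; the road length, vehicle count and random-slowdown probability are illustrative, and the electric-vehicle energy accounting calibrated against the SwitchEV data is not reproduced.

        import numpy as np

        def ns_step(pos, vel, road_len, v_max=5, p_slow=0.3, rng=None):
            """One Nagel-Schreckenberg update on a cyclic (periodic) road."""
            rng = np.random.default_rng() if rng is None else rng
            order = np.argsort(pos)
            pos, vel = pos[order], vel[order]
            gaps = (np.roll(pos, -1) - pos - 1) % road_len      # empty cells to car ahead
            vel = np.minimum(vel + 1, v_max)                     # acceleration
            vel = np.minimum(vel, gaps)                          # braking to avoid collision
            vel = np.where(rng.random(len(vel)) < p_slow,        # random slowdown
                           np.maximum(vel - 1, 0), vel)
            return (pos + vel) % road_len, vel

        # Hypothetical run: 30 vehicles on a 100-cell loop.
        rng = np.random.default_rng(1)
        pos = np.sort(rng.choice(100, size=30, replace=False))
        vel = np.zeros(30, dtype=int)
        for _ in range(200):
            pos, vel = ns_step(pos, vel, road_len=100, rng=rng)
        print("mean speed:", vel.mean())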

  1. A Generalizable Methodology for Quantifying User Satisfaction

    NASA Astrophysics Data System (ADS)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the developed models can also capture the effects of other factors, like loudness and sidetone. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
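
    A minimal sketch of treating session times as survival data is given below using the lifelines package; the synthetic session log, the candidate performance factors (packet loss and jitter) and the censoring rule are assumptions standing in for the paper's case studies, and a Cox proportional-hazards fit is used as one plausible model.

        import numpy as np
        import pandas as pd
        from lifelines import CoxPHFitter

        rng = np.random.default_rng(0)
        n = 200
        packet_loss = rng.uniform(0.0, 0.1, n)
        jitter = rng.uniform(5, 70, n)
        # Hypothetical: worse network conditions shorten sessions (exponential model).
        session_time = rng.exponential(30 / (1 + 20 * packet_loss + 0.02 * jitter))
        ended = (session_time < 60).astype(int)          # sessions over 60 min are censored
        session_time = np.minimum(session_time, 60)

        df = pd.DataFrame({"session_time": session_time, "ended": ended,
                           "packet_loss": packet_loss, "jitter_ms": jitter})
        cph = CoxPHFitter()
        cph.fit(df, duration_col="session_time", event_col="ended")
        cph.print_summary()  # hazard ratios indicate which factors shorten sessions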

  2. Quantifying, Visualizing, and Monitoring Lead Optimization.

    PubMed

    Maynard, Andrew T; Roberts, Christopher D

    2016-05-12

    Although lead optimization (LO) is by definition a process, process-centric analysis and visualization of this important phase of pharmaceutical R&D have been lacking. Here we describe a simple statistical framework to quantify and visualize the progression of LO projects so that the vital signs of LO convergence can be monitored. We refer to the visualizations generated by our methodology as the "LO telemetry" of a project. These visualizations can be automated to provide objective, holistic, and instantaneous analysis and communication of LO progression. This enhances the ability of project teams to more effectively drive the LO process, while enabling management to better coordinate and prioritize LO projects. We present the telemetry of five LO projects comprising different biological targets and different project outcomes, including clinical compound selection, termination due to preclinical safety/tox, and termination due to lack of tractability. We demonstrate that LO progression is accurately captured by the telemetry. We also present metrics to quantify LO efficiency and tractability. PMID:26262898

  3. Quantifying Significance of MHC II Residues.

    PubMed

    Fan, Ying; Lu, Ruoshui; Wang, Lusheng; Andreatta, Massimo; Li, Shuai Cheng

    2014-01-01

    The major histocompatibility complex (MHC), a cell-surface protein mediating immune recognition, plays important roles in the immune response system of all higher vertebrates. MHC molecules are highly polymorphic and they are grouped into serotypes according to the specificity of the response. It is a common belief that a protein sequence determines its three-dimensional structure and function. Hence, the protein sequence determines the serotype. Residues differ in their level of importance. In this paper, we quantify the residue significance with the available serotype information. Knowing the significance of the residues will deepen our understanding of the MHC molecules and yield a concise representation of the molecules. We propose a linear programming-based approach to find significant residue positions as well as to quantify their significance in MHC II DR molecules. Among all the residues in MHC II DR molecules, 18 positions are of particular significance, which is consistent with the literature on MHC binding sites, and succinct pseudo-sequences appear to be adequate to capture the whole sequence features. When the result is used for classification of MHC molecules with serotype assigned by WHO, a 98.4 percent prediction performance is achieved. The methods have been implemented in java (http://code.google.com/p/quassi/). PMID:26355503

  4. Quantifying Differences Between Native and Introduced Species.

    PubMed

    Lemoine, Nathan P; Burkepile, Deron E; Parker, John D

    2016-05-01

    Introduced species have historically been presumed to be evolutionarily novel and 'different' from native species. Recent studies question these assumptions, however, as the traits and factors promoting successful introduced and native species can be similar. We advocate a novel statistical framework utilizing quantifiable metrics of evolutionary and ecological differences among species to test whether different forces govern the success of native versus introduced species. In two case studies, we show that native and introduced species appear to follow the same 'rules' for becoming abundant. We propose that incorporating quantitative differences in traits and evolutionary history among species might largely account for many perceived effects of geographic origin, leading to more rigorous and general tests of the factors promoting organism success. PMID:26965001

  5. Stimfit: quantifying electrophysiological data with Python

    PubMed Central

    Guzman, Segundo J.; Schlögl, Alois; Schmidt-Hieber, Christoph

    2013-01-01

    Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals. PMID:24600389

  6. Measuring political polarization: Twitter shows the two sides of Venezuela.

    PubMed

    Morales, A J; Borondo, J; Losada, J C; Benito, R M

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network. PMID:25833436
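
    To make the idea of scoring an opinion distribution concrete, the sketch below combines the size balance of the negative and positive opinion groups with the distance between their centers; it is a simplified stand-in for illustration, not the exact polarization index defined in the paper.

        import numpy as np

        def polarization_index(opinions):
            """Illustrative polarization score in [0, 1] for opinions in [-1, 1].

            Splits the population into negative and positive groups, then combines
            the balance of the two group sizes with the distance between their
            centers: 1 means two equal-sized groups at the extremes, 0 a consensus.
            (A simplified stand-in for the index defined in the paper.)
            """
            neg, pos = opinions[opinions < 0], opinions[opinions >= 0]
            if len(neg) == 0 or len(pos) == 0:
                return 0.0
            balance = 1 - abs(len(pos) - len(neg)) / len(opinions)
            distance = (pos.mean() - neg.mean()) / 2          # in [0, 1]
            return balance * distance

        rng = np.random.default_rng(0)
        consensus = rng.normal(0.3, 0.1, 10000).clip(-1, 1)
        polarized = np.concatenate([rng.normal(-0.8, 0.1, 5000),
                                    rng.normal(0.8, 0.1, 5000)]).clip(-1, 1)
        print(polarization_index(consensus), polarization_index(polarized))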

  7. Measuring political polarization: Twitter shows the two sides of Venezuela

    NASA Astrophysics Data System (ADS)

    Morales, A. J.; Borondo, J.; Losada, J. C.; Benito, R. M.

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.

  8. Quantifying economic fluctuations using statistical physics methodology

    NASA Astrophysics Data System (ADS)

    Gopikrishnan, Parameswaran

    2001-08-01

    This thesis shows that concepts and methods of statistical physics, developed to understand the behavior of systems with a large number of interacting elements, can be applied to quantify and understand economic phenomena. First, it is shown that certain economic fluctuations, such as fluctuations of stock prices, display remarkably "universal" scale-free characteristics that are not unlike those found in the physics of strongly-interacting systems. Using an analogy with the physics of diffusion processes, price movements are shown to be equivalent to a complex variant of classic diffusion, where the diffusion coefficient fluctuates drastically in time. The analog of the diffusion coefficient, known in economics as the volatility, is related to two microscopic quantities: (a) the number of transactions N_Δt in a time interval Δt, which is the analog of the number of collisions, and (b) the variance W²_Δt of the price changes for all transactions in Δt, which is the analog of the local mean-square displacement between collisions. Secondly, this thesis quantifies collective behavior in the economy by applying the conceptual framework of random matrix theory (RMT), developed by Wigner and Dyson to describe energy levels of complex systems for which the exact nature of the interactions is unknown. The eigenvalue statistics of the empirical cross-correlation matrix C, whose elements reflect equal-time correlations between any two firms in the economy, are compared with those of a random matrix having the same symmetry. The bulk of the eigenvalue spectrum is shown to be consistent with the universal properties of real symmetric random matrices, showing that ~98% of the measured cross-correlations are random and unstable in time. In analogy to physical systems where deviations from RMT reflect collective behavior, the part of the eigenvalue spectrum of C that deviates from RMT corresponds to collective behavior among firms belonging to certain sectors of economic activity.
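
    The random-versus-collective distinction described above can be sketched numerically: the eigenvalues of an empirical cross-correlation matrix are compared with the Marchenko-Pastur bounds expected for a purely random matrix of the same shape. The return series below are synthetic, with a single common factor standing in for a market-wide mode.

        import numpy as np

        def rmt_eigenvalue_check(returns):
            """Compare correlation-matrix eigenvalues with Marchenko-Pastur bounds.

            returns: (T, N) array of standardized returns for N firms over T steps.
            Eigenvalues outside [lambda_minus, lambda_plus] signal genuine
            (non-random) collective behavior.
            """
            T, N = returns.shape
            q = T / N
            lam_minus = (1 - 1 / np.sqrt(q)) ** 2
            lam_plus = (1 + 1 / np.sqrt(q)) ** 2
            corr = np.corrcoef(returns, rowvar=False)
            eigvals = np.linalg.eigvalsh(corr)
            outside = eigvals[(eigvals < lam_minus) | (eigvals > lam_plus)]
            return eigvals, (lam_minus, lam_plus), outside

        # Synthetic example: independent firms plus one common "market" factor.
        rng = np.random.default_rng(0)
        T, N = 2000, 100
        market = rng.normal(size=(T, 1))
        returns = 0.3 * market + rng.normal(size=(T, N))
        _, bounds, outside = rmt_eigenvalue_check(returns)
        print("MP bounds:", bounds, "eigenvalues outside:", outside)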

  9. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    NASA Astrophysics Data System (ADS)

    Zhang, Jihui; Xu, Junqin

    A supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains face a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, managing supply chain complexity has become a major challenge in today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and successively investigated by network analysis. The result of this study shows that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.

  10. Quantifying Drosophila food intake: comparative analysis of current methodology.

    PubMed

    Deshpande, Sonali A; Carvalho, Gil B; Amador, Ariadna; Phillips, Angela M; Hoxha, Sany; Lizotte, Keith J; Ja, William W

    2014-05-01

    Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the capillary feeder (CAFE), food labeling with a radioactive tracer or colorimetric dye and observations of proboscis extension (PE). We show that the CAFE and radioisotope labeling provide the most consistent results, have the highest sensitivity and can resolve differences in feeding that dye labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of methods for measuring food intake will greatly advance Drosophila studies of nutrition, behavior and disease. PMID:24681694

  11. Quantifying Drosophila food intake: comparative analysis of current methodology

    PubMed Central

    Deshpande, Sonali A.; Carvalho, Gil B.; Amador, Ariadna; Phillips, Angela M.; Hoxha, Sany; Lizotte, Keith J.; Ja, William W.

    2014-01-01

    Food intake is a fundamental parameter in animal studies. Despite the prevalent use of Drosophila in laboratory research, precise measurements of food intake remain challenging in this model organism. Here, we compare several common Drosophila feeding assays: the Capillary Feeder (CAFE), food-labeling with a radioactive tracer or a colorimetric dye, and observations of proboscis extension (PE). We show that the CAFE and radioisotope-labeling provide the most consistent results, have the highest sensitivity, and can resolve differences in feeding that dye-labeling and PE fail to distinguish. We conclude that performing the radiolabeling and CAFE assays in parallel is currently the best approach for quantifying Drosophila food intake. Understanding the strengths and limitations of food intake methodology will greatly advance Drosophila studies of nutrition, behavior, and disease. PMID:24681694

  12. Heart rate measurement as a tool to quantify sedentary behavior.

    PubMed

    Åkerberg, Anna; Koshmak, Gregory; Johansson, Anders; Lindén, Maria

    2015-01-01

    Sedentary work is very common today. The aim of this pilot study was to attempt to differentiate between typical work situations and to investigate the possibility to break sedentary behavior, based on physiological measurement among office workers. Ten test persons used one heart rate based activity monitor (Linkura), one pulse oximeter device (Wrist) and one movement based activity wristband (Fitbit Flex), in different working situations. The results showed that both heart rate devices, Linkura and Wrist, were able to detect differences in heart rate between the different working situations (resting, sitting, standing, slow walk and medium fast walk). The movement based device, Fitbit Flex, was only able to separate differences in steps between slow walk and medium fast walk. It can be concluded that heart rate measurement is a promising tool for quantifying and separating different working situations, such as sitting, standing and walking. PMID:25980855

  13. Quantifying capital goods for waste incineration

    SciTech Connect

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-06-15

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MWh. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO2 per tonne of waste combusted.

  14. Quantifying diet for nutrigenomic studies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nu...

  15. 10. INTERIOR VIEW SHOWING MOUNTINGS FROM TUNING DEVICE. VIEW SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. INTERIOR VIEW SHOWING MOUNTINGS FROM TUNING DEVICE. VIEW SHOWS COPPER SHEETING ON WALLS. - Chollas Heights Naval Radio Transmitting Facility, Helix House, 6410 Zero Road, San Diego, San Diego County, CA

  16. 1. Contextual view of cottage, showing front east elevation, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. Contextual view of cottage, showing front east elevation, showing in distance blacksmith shop (left) and summer kitchen (right); camera facing southwest. - Lemmon-Anderson-Hixson Ranch, Cottage, 11220 North Virginia Street, Reno, Washoe County, NV

  17. 4. View of west elevation, showing stone structure, showing a ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. View of west elevation, showing stone structure, showing a portion of north elevation; camera facing southeast. - Lemmon-Anderson-Hixson Ranch, Cottage, 11220 North Virginia Street, Reno, Washoe County, NV

  18. 15. Detail showing lower chord pin-connected to vertical member, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN

  19. Processing of Numerical and Proportional Quantifiers

    ERIC Educational Resources Information Center

    Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus

    2015-01-01

    Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using…

  20. Deaf Learners' Knowledge of English Universal Quantifiers

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Porter, Jeffrey E.; Fonzi, Judith

    2008-01-01

    Deaf and hearing students' knowledge of English sentences containing universal quantifiers was compared through their performance on a 50-item, multiple-picture task that required students to decide whether each of five pictures represented a possible meaning of a target sentence. The task assessed fundamental knowledge of quantifier sentences,…

  1. Scalar Quantifiers: Logic, Acquisition, and Processing

    ERIC Educational Resources Information Center

    Geurts, Bart; Katsos, Napoleon; Cummins, Chris; Moons, Jonas; Noordman, Leo

    2010-01-01

    Superlative quantifiers ("at least 3", "at most 3") and comparative quantifiers ("more than 2", "fewer than 4") are traditionally taken to be interdefinable: the received view is that "at least n" and "at most n" are equivalent to "more than n-1" and "fewer than n+1", respectively. Notwithstanding the prima facie plausibility of this claim, Geurts…

  2. Quantifying and Predicting Reactive Transport

    SciTech Connect

    Peter C. Burns, Department of Civil Engineering and Geological Sciences, University of Notre Dame

    2009-12-04

    This project was led by Dr. Jiamin Wan at Lawrence Berkeley National Laboratory. Peter Burns provided expertise in uranium mineralogy and in identification of uranium minerals in test materials. Dr. Wan conducted column tests regarding uranium transport at LBNL, and samples of the resulting columns were sent to Dr. Burns for analysis. Samples were analyzed for uranium mineralogy by X-ray powder diffraction and by scanning electron microscopy, and results were provided to Dr. Wan for inclusion in the modeling effort. Full details of the project can be found in Dr. Wan's final reports for the associated effort at LBNL.

  3. Dust as interstellar catalyst. I. Quantifying the chemical desorption process

    NASA Astrophysics Data System (ADS)

    Minissale, M.; Dulieu, F.; Cazaux, S.; Hocuk, S.

    2016-01-01

    Context. The presence of dust in the interstellar medium has profound consequences on the chemical composition of regions where stars are forming. Recent observations show that many species formed onto dust are populating the gas phase, especially in cold environments where UV- and cosmic-ray-induced photons do not account for such processes. Aims: The aim of this paper is to understand and quantify the process that releases solid species into the gas phase, the so-called chemical desorption process, so that an explicit formula can be derived that can be included in astrochemical models. Methods: We present a collection of experimental results of more than ten reactive systems. For each reaction, different substrates such as oxidized graphite and compact amorphous water ice were used. We derived a formula for reproducing the efficiencies of the chemical desorption process that considers the equipartition of the energy of newly formed products, followed by classical bounce on the surface. In part II of this study we extend these results to astrophysical conditions. Results: The equipartition of energy correctly describes the chemical desorption process on bare surfaces. On icy surfaces, the chemical desorption process is much less efficient, and a better description of the interaction with the surface is still needed. Conclusions: We show that the mechanism that directly transforms solid species into gas phase species is efficient for many reactions.

  4. Quantifying chaos for ecological stoichiometry.

    PubMed

    Duarte, Jorge; Januário, Cristina; Martins, Nuno; Sardanyés, Josep

    2010-09-01

    The theory of ecological stoichiometry considers ecological interactions among species with different chemical compositions. Both experimental and theoretical investigations have shown the importance of species composition in the outcome of the population dynamics. A recent study of a theoretical three-species food chain model considering stoichiometry [B. Deng and I. Loladze, Chaos 17, 033108 (2007)] shows that coexistence between two consumers predating on the same prey is possible via chaos. In this work we study the topological and dynamical measures of the chaotic attractors found in such a model under ecological relevant parameters. By using the theory of symbolic dynamics, we first compute the topological entropy associated with unimodal Poincaré return maps obtained by Deng and Loladze from a dimension reduction. With this measure we numerically prove chaotic competitive coexistence, which is characterized by positive topological entropy and positive Lyapunov exponents, achieved when the first predator reduces its maximum growth rate, as happens at increasing δ1. However, for higher values of δ1 the dynamics become again stable due to an asymmetric bubble-like bifurcation scenario. We also show that a decrease in the efficiency of the predator sensitive to prey's quality (increasing parameter ζ) stabilizes the dynamics. Finally, we estimate the fractal dimension of the chaotic attractors for the stoichiometric ecological model. PMID:20887045

  5. Quantifying Coral Reef Ecosystem Services

    EPA Science Inventory

    Coral reefs have been declining during the last four decades as a result of both local and global anthropogenic stresses. Numerous research efforts to elucidate the nature, causes, magnitude, and potential remedies for the decline have led to the widely held belief that the recov...

  6. Quantifying Diet for Nutrigenomic Studies

    PubMed Central

    Tucker, Katherine L.; Smith, Caren E.; Lai, Chao-Qiang; Ordovas, Jose M.

    2015-01-01

    The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nutrition advice. However, although considerable resources have gone into improving technology for measurement of the genome and biological systems, dietary intake assessment remains inadequate. Each of the methods currently used has limitations that may be exaggerated in the context of gene × nutrient interaction in large multiethnic studies. Because of the specificity of most gene × nutrient interactions, valid data are needed for nutrient intakes at the individual level. Most statistical adjustment efforts are designed to improve estimates of nutrient intake distributions in populations and are unlikely to solve this problem. An improved method of direct measurement of individual usual dietary intake that is unbiased across populations is urgently needed. PMID:23642200

  7. Quantifying diet for nutrigenomic studies.

    PubMed

    Tucker, Katherine L; Smith, Caren E; Lai, Chao-Qiang; Ordovas, Jose M

    2013-01-01

    The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nutrition advice. However, although considerable resources have gone into improving technology for measurement of the genome and biological systems, dietary intake assessment remains inadequate. Each of the methods currently used has limitations that may be exaggerated in the context of gene × nutrient interaction in large multiethnic studies. Because of the specificity of most gene × nutrient interactions, valid data are needed for nutrient intakes at the individual level. Most statistical adjustment efforts are designed to improve estimates of nutrient intake distributions in populations and are unlikely to solve this problem. An improved method of direct measurement of individual usual dietary intake that is unbiased across populations is urgently needed. PMID:23642200

  8. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  9. Quantifying gyrotropy in magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Swisdak, M.

    2016-01-01

    A new scalar measure of the gyrotropy of a pressure tensor is defined. Previously suggested measures are shown to be incomplete by means of examples for which they give unphysical results. To demonstrate its usefulness as an indicator of magnetic topology, the new measure is calculated for electron data taken from numerical simulations of magnetic reconnection, shown to peak at separatrices and X points, and compared to the other measures. The new diagnostic has potential uses in analyzing spacecraft observations, and so a method for calculating it from measurements performed in an arbitrary coordinate system is derived.

  10. Quantifying hybridization in realistic time.

    PubMed

    Collins, Joshua; Linz, Simone; Semple, Charles

    2011-10-01

    Recently, numerous practical and theoretical studies in evolutionary biology have aimed at calculating the extent to which reticulation (for example, horizontal gene transfer, hybridization, or recombination) has influenced the evolution of a set of present-day species. It has been shown that inferring the minimum number of hybridization events needed to simultaneously explain the evolutionary history of a set of trees is an NP-hard but fixed-parameter tractable problem. In this article, we give a new fixed-parameter algorithm for computing the minimum number of hybridization events when two rooted binary phylogenetic trees are given. This newly developed algorithm is based on interleaving, a technique that applies repeated kernelization steps throughout the exhaustive search part of a fixed-parameter algorithm. To show that our algorithm runs efficiently enough to be applicable to a wide range of practical problem instances, we apply it to a grass data set and highlight the significant improvements in running times over a previously implemented algorithm. PMID:21210735

  11. Common ecology quantifies human insurgency.

    PubMed

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties, both in whole wars from 1816 to 1980 and in terrorist attacks, have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour. PMID:20016600
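
    The power-law claims above rest on estimating a tail exponent from event sizes; the sketch below applies the standard continuous maximum-likelihood estimator (in the spirit of Clauset-style fits) to synthetic event sizes, since the conflict data themselves are not reproduced here.

        import numpy as np

        def power_law_alpha(sizes, x_min):
            """Continuous MLE for the exponent of P(x) ~ x^(-alpha), x >= x_min."""
            tail = np.asarray(sizes, dtype=float)
            tail = tail[tail >= x_min]
            alpha = 1 + len(tail) / np.sum(np.log(tail / x_min))
            stderr = (alpha - 1) / np.sqrt(len(tail))
            return alpha, stderr

        # Synthetic event sizes drawn from a power law with alpha = 2.5 (x_min = 1).
        rng = np.random.default_rng(0)
        sizes = (1 - rng.random(10000)) ** (-1 / 1.5)      # inverse-CDF sampling
        print(power_law_alpha(sizes, x_min=1.0))           # ~ (2.5, small error)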

  12. What do children know about the universal quantifiers all and each?

    PubMed

    Brooks, P J; Braine, M D

    1996-09-01

    Children's comprehension of the universal quantifiers all and each was explored in a series of experiments using a picture selection task. The first experiment examined children's ability to restrict a quantifier to the noun phrase it modifies. The second and third experiments examined children's ability to associate collective, distributive, and exhaustive representations with sentences containing universal quantifiers. The collective representation corresponds to the "group" meaning (for All the flowers are in a vase, all of the flowers are in the same vase). The distributive representation implies a pairing (e.g., each flower paired with a vase for Each flower is in a vase). The exhaustive representation exhausts both sets (e.g., for The flowers are in the vases, all the flowers are in vases and all the vases have flowers in them). Four- to 10-year-old children had little difficulty restricting the quantifier all to the noun it modified in a task which required them to attend to the group feature of all. In contrast, only 9- and 10-year-olds were able to solve the task when the quantifier was each and the pictures showed entities in partial one-to-one correspondence. Children showed a preference for associating collective pictures with sentences containing all and distributive pictures with sentences containing each. The results suggest that between the ages of 5 and 10 years, children's semantic representations undergo less radical changes than others have proposed. Instead, developmental change may occur gradually as children acquire linguistic cues which map onto existing semantic representations. PMID:8870514

  13. Quantifying tetrodotoxin levels in the California newt using a non-destructive sampling method.

    PubMed

    Bucciarelli, Gary M; Li, Amy; Zimmer, Richard K; Kats, Lee B; Green, David B

    2014-03-01

    Toxic or noxious substances often serve as a means of chemical defense for numerous taxa. However, such compounds may also facilitate ecological or evolutionary processes. The neurotoxin, tetrodotoxin (TTX), which is found in newts of the genus Taricha, acts as a selection pressure upon predatory garter snakes, is a chemical cue to conspecific larvae, which elicits antipredator behavior, and may also affect macroinvertebrate foraging behavior. To understand selection patterns and how potential variation might affect ecological and evolutionary processes, it is necessary to quantify TTX levels within individuals and populations. To do so has often required that animals be destructively sampled or removed from breeding habitats and brought into the laboratory. Here we demonstrate a non-destructive method of sampling adult Taricha that obviates the need to capture and collect individuals. We also show that embryos from oviposited California newt (Taricha torosa) egg masses can be individually sampled and TTX quantified from embryos. We employed three different extraction techniques to isolate TTX. Using a custom fabricated high performance liquid chromatography (HPLC) system we quantified recovery of TTX. We found that a newly developed micro-extraction technique significantly improved recovery compared to previously used methods. Results also indicate our improvements to the HPLC method have high repeatability and increased sensitivity, with a detection limit of 48 pg (0.15 pmol) TTX. The quantified amounts of TTX in adult newts suggest fine geographic variation in toxin levels between sampling localities isolated by as little as 3 km. PMID:24467994

  14. Quantifying the value of redundant measurements at GRUAN sites

    NASA Astrophysics Data System (ADS)

    Madonna, F.; Rosoldi, M.; Güldner, J.; Haefele, A.; Kivi, R.; Cadeddu, M. P.; Sisterson, D.; Pappalardo, G.

    2014-06-01

    The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of atmospheric water vapor provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) stations in 2010-2012. Results show that the random uncertainties for radiosonde, frost-point hygrometer, Global Positioning System, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of the Integrated Water Vapor (IWV) content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy and therefore the highest potential to reduce random uncertainty of IWV time series estimated by radiosondes. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty resulted from conditioning Raman lidar measurements on microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapor measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
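
    As a concrete, simplified reading of the information-theoretic approach above, the snippet below computes a histogram-based mutual information between two synthetic, co-located IWV series (a radiometer-like and a lidar-like record sharing a common signal); higher mutual information indicates greater redundancy. The instrument noise levels are assumptions, not GRUAN values.

        import numpy as np

        def mutual_information(x, y, bins=20):
            """Plug-in mutual information (bits) between two time series."""
            joint, _, _ = np.histogram2d(x, y, bins=bins)
            p_xy = joint / joint.sum()
            p_x = p_xy.sum(axis=1, keepdims=True)
            p_y = p_xy.sum(axis=0, keepdims=True)
            nz = p_xy > 0
            return float(np.sum(p_xy[nz] * np.log2(p_xy[nz] / (p_x @ p_y)[nz])))

        # Synthetic IWV series: a common signal plus instrument-specific noise.
        rng = np.random.default_rng(0)
        iwv = rng.gamma(shape=4, scale=5, size=5000)            # kg m^-2, hypothetical
        radiometer = iwv + rng.normal(0, 1.0, iwv.size)
        lidar = iwv + rng.normal(0, 2.0, iwv.size)
        print(mutual_information(radiometer, lidar))            # higher = more redundancy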

  15. Quantifying Antimicrobial Resistance at Veal Calf Farms

    PubMed Central

    Bosman, Angela B.; Wagenaar, Jaap; Stegeman, Arjan; Vernooij, Hans; Mevius, Dik

    2012-01-01

    This study was performed to determine a sampling strategy to quantify the prevalence of antimicrobial resistance on veal calf farms, based on the variation in antimicrobial resistance within and between calves on five farms. Faecal samples from 50 healthy calves (10 calves/farm) were collected. From each individual sample and one pooled faecal sample per farm, 90 selected Escherichia coli isolates were tested for their resistance against 25 mg/L amoxicillin, 25 mg/L tetracycline, 0.5 mg/L cefotaxime, 0.125 mg/L ciprofloxacin and 8/152 mg/L trimethoprim/sulfamethoxazole (tmp/s) by replica plating. From each faecal sample another 10 selected E. coli isolates were tested for their resistance by broth microdilution as a reference. Logistic regression analysis was performed to compare the odds of testing an isolate resistant between both test methods (replica plating vs. broth microdilution) and to evaluate the effect of pooling faecal samples. Bootstrap analysis was used to investigate the precision of the estimated prevalence of resistance to each antimicrobial obtained by several simulated sampling strategies. Replica plating showed similar odds of E. coli isolates tested resistant compared to broth microdilution, except for ciprofloxacin (OR 0.29, p≤0.05). Pooled samples showed in general lower odds of an isolate being resistant compared to individual samples, although these differences were not significant. Bootstrap analysis showed that within each antimicrobial the various compositions of a pooled sample provided consistent estimates for the mean proportion of resistant isolates. Sampling strategies should be based on the variation in resistance among isolates within faecal samples and between faecal samples, which may vary by antimicrobial. In our study, the optimal sampling strategy from the perspective of precision of the estimated levels of resistance and practicality consists of a pooled faecal sample from 20 individual animals, of which 90 isolates are tested for their susceptibility by replica plating. PMID:22970313
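
    To illustrate the bootstrap step described above, a minimal sketch of a percentile bootstrap for the proportion of resistant isolates in one (hypothetical) pooled sample of 90 isolates is given below; the counts are invented, and the replica-plating and logistic-regression parts of the study are not reproduced.

        import numpy as np

        def bootstrap_prevalence(resistant_flags, n_boot=10000, rng=None):
            """Bootstrap mean and 95% CI for the proportion of resistant isolates."""
            rng = np.random.default_rng() if rng is None else rng
            flags = np.asarray(resistant_flags)
            means = np.array([rng.choice(flags, size=flags.size, replace=True).mean()
                              for _ in range(n_boot)])
            return flags.mean(), np.percentile(means, [2.5, 97.5])

        # Hypothetical pooled-sample result: 90 isolates, 34 resistant to tetracycline.
        isolates = np.array([1] * 34 + [0] * 56)
        mean, ci = bootstrap_prevalence(isolates, rng=np.random.default_rng(1))
        print(f"prevalence = {mean:.2f}, 95% CI = {ci.round(2)}")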

  16. Quantifying asymmetry: ratios and alternatives.

    PubMed

    Franks, Erin M; Cabo, Luis L

    2014-08-01

    Traditionally, the study of metric skeletal asymmetry has relied largely on univariate analyses, utilizing ratio transformations when the goal is comparing asymmetries in skeletal elements or populations of dissimilar dimensions. Under this approach, raw asymmetries are divided by a size marker, such as a bilateral average, in an attempt to produce size-free asymmetry indices. Henceforth, this will be referred to as "controlling for size" (see Smith: Curr Anthropol 46 (2005) 249-273). Ratios obtained in this manner often require further transformations to interpret the meaning and sources of asymmetry. This model frequently ignores the fundamental assumption of ratios: the relationship between the variables entered in the ratio must be isometric. Violations of this assumption can obscure existing asymmetries and render spurious results. In this study, we examined the performance of the classic indices in detecting and portraying the asymmetry patterns in four human appendicular bones and explored potential methodological alternatives. Examination of the ratio model revealed that it does not fulfill its intended goals in the bones examined, as the numerator and denominator are independent in all cases. The ratios also introduced strong biases in the comparisons between different elements and variables, generating spurious asymmetry patterns. Multivariate analyses strongly suggest that any transformation to control for overall size or variable range must be conducted before, rather than after, calculating the asymmetries. A combination of exploratory multivariate techniques, such as Principal Components Analysis, and confirmatory linear methods, such as regression and analysis of covariance, appear as a promising and powerful alternative to the use of ratios. PMID:24842694

  17. The missing metric: quantifying contributions of reviewers.

    PubMed

    Cantor, Maurício; Gero, Shane

    2015-02-01

    The number of contributing reviewers often outnumbers the authors of publications. This has led to apathy towards reviewing and the conclusion that the peer-review system is broken. Given the trade-offs between submitting and reviewing manuscripts, reviewers and authors naturally want visibility for their efforts. While study after study has called for revolutionizing publication practices, the current paradigm does not recognize reviewers' time and expertise. We propose the R-index as a simple way to quantify scientists' contributions as reviewers. We modelled its performance using simulations based on real data to show that early-mid career scientists, who complete high-quality reviews of longer manuscripts within their field, can perform as well as leading scientists reviewing only for high-impact journals. By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage. Moreover, the R-index will allow editors to exploit scores to manage and improve their review team, and for journals to promote high average scores as signals of a practical and efficient service to authors. Peer-review is a pervasive necessity across disciplines and the simple utility of this missing metric will credit a valuable aspect of academic productivity without having to revolutionize the current peer-review system. PMID:26064609

  18. The missing metric: quantifying contributions of reviewers

    PubMed Central

    Cantor, Maurício; Gero, Shane

    2015-01-01

    The number of contributing reviewers often outnumbers the authors of publications. This has led to apathy towards reviewing and the conclusion that the peer-review system is broken. Given the trade-offs between submitting and reviewing manuscripts, reviewers and authors naturally want visibility for their efforts. While study after study has called for revolutionizing publication practices, the current paradigm does not recognize reviewers' time and expertise. We propose the R-index as a simple way to quantify scientists' contributions as reviewers. We modelled its performance using simulations based on real data to show that early–mid career scientists, who complete high-quality reviews of longer manuscripts within their field, can perform as well as leading scientists reviewing only for high-impact journals. By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage. Moreover, the R-index will allow editors to exploit scores to manage and improve their review team, and for journals to promote high average scores as signals of a practical and efficient service to authors. Peer-review is a pervasive necessity across disciplines and the simple utility of this missing metric will credit a valuable aspect of academic productivity without having to revolutionize the current peer-review system. PMID:26064609

  19. Asteroid Geophysics and Quantifying the Impact Hazard

    NASA Technical Reports Server (NTRS)

    Sears, D.; Wooden, D. H.; Korycansky, D. G.

    2015-01-01

    Probably the major challenge in understanding, quantifying, and mitigating the effects of an impact on Earth is understanding the nature of the impactor. Of the roughly 25 meteorite craters on the Earth that have associated meteorites, all but one were produced by iron meteorites and only one was produced by a stony meteorite. Equally important, even meteorites of a given chemical class produce a wide variety of behavior in the atmosphere. This is because they show considerable diversity in their mechanical properties, which have a profound influence on the behavior of meteorites during atmospheric passage. Some stony meteorites are weak and do not reach the surface, or reach the surface as thousands of relatively harmless pieces. Some stony meteorites roll into a maximum drag configuration and are strong enough to remain intact so a large single object reaches the surface. Others have high concentrations of water that may facilitate disruption. However, while meteorite falls and meteorites provide invaluable information on the physical nature of the objects entering the atmosphere, there are many unknowns concerning size and scale that can only be determined from the pre-atmospheric properties of the asteroids. Their internal structure, thermal properties, internal strength, and composition will all play a role in determining the behavior of the object as it passes through the atmosphere, whether it produces an airblast and at what height, and the nature of the impact and the amount and distribution of ejecta.

  20. Quantifying Climatological Ranges and Anomalies for Pacific Coral Reef Ecosystems

    PubMed Central

    Gove, Jamison M.; Williams, Gareth J.; McManus, Margaret A.; Heron, Scott F.; Sandin, Stuart A.; Vetter, Oliver J.; Foley, David G.

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic–biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra, and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939

  1. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    PubMed

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra, and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help identify reef ecosystems most exposed to environmental stress as well as systems that may be more resistant or resilient to future climate change. PMID:23637939
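
    As a rough illustration of the climatology-and-anomaly idea described in the two records above, the sketch below builds a monthly SST climatology from a synthetic 20-year series, takes its maximum as the climatological range limit, and counts the months exceeding it; all values are synthetic and the thresholding rule is a simplification.

```python
import numpy as np

rng = np.random.default_rng(4)
n_months = 12 * 20                     # 20 synthetic years of monthly SST
seasonal = 1.5 * np.sin(2 * np.pi * np.arange(n_months) / 12)
sst = 27.0 + seasonal + rng.normal(0.0, 0.3, n_months)

climatology = sst.reshape(-1, 12).mean(axis=0)   # long-term monthly means
clim_max = climatology.max()                     # climatological range limit

anomalies = sst - np.tile(climatology, n_months // 12)
n_anomalous = int((sst > clim_max).sum())        # months exceeding the limit
print(f"climatological max {clim_max:.2f} C, {n_anomalous} anomalous months, "
      f"largest anomaly {anomalies.max():.2f} C")
```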

  2. Quantifying diplopia with a questionnaire

    PubMed Central

    Holmes, Jonathan M.; Liebermann, Laura; Hatt, Sarah R.; Smith, Stephen J.; Leske, David A.

    2013-01-01

    Purpose To report a diplopia questionnaire (DQ) with a data-driven scoring algorithm. Design Cross-sectional study. Participants To optimize questionnaire scoring: 147 adults with diplopic strabismus completed both the DQ and the Adult Strabismus-20 (AS-20) health-related quality of life (HRQOL) questionnaire. To assess test-retest reliability: 117 adults with diplopic strabismus. To assess responsiveness to surgery: 42 adults (46 surgeries). Methods The 10-item AS-20 function subscale score (scored 0 to 100) was defined as the gold standard for severity. A range of weights was assigned to the responses and to the gaze positions (from equal weighting to greater weighting of primary and reading). Combining all response option weights with all gaze position weights yielded 382,848 scoring algorithms. We then calculated 382,848 Spearman rank correlation coefficients comparing each algorithm with the AS-20 function subscale score. Main outcome measures To optimize scoring, Spearman rank correlation coefficients (measuring agreement) between DQ scores and AS-20 function subscale scores. For test-retest reliability, 95% limits of agreement, and intraclass correlation coefficient (ICC). For responsiveness, change in DQ score. Results For the 382,848 possible scoring algorithms, correlations with AS-20 function subscale score ranged from −0.64 (best correlated) to −0.55. The best-correlated algorithm had response option weights of 5 for rarely, 50 for sometimes, and 75 for often, and gaze position weights of 40 for straight ahead in the distance, 40 for reading, 1 for up, 8 for down, 4 for right, 4 for left, and 3 for other, totaling 100. There was excellent test-retest reliability with an ICC of 0.89 (95% confidence interval 0.84 to 0.92) and 95% limits of agreement were 30.9 points. The DQ score was responsive to surgery with a mean change of 51 ± 34 (p<0.001). Conclusions We have developed a data-driven scoring algorithm for the diplopia questionnaire, rating diplopia symptoms from 0 to 100. Based on correlations with HRQOL, straight ahead and reading positions should be highly weighted. The DQ has excellent test-retest reliability and responsiveness, and may be useful in both clinical and research settings. PMID:23531348
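
    The abstract gives the best-correlated weights explicitly, so a minimal scoring sketch is straightforward. In the Python sketch below, the weights for "rarely", "sometimes", "often" and for the gaze positions are taken from the abstract; the weights for "never" (0) and "always" (100) and the exact combination rule (a gaze-weighted average of response weights on a 0-100 scale) are assumptions for illustration.

```python
# Response-option weights: 5/50/75 are from the abstract; 0 and 100 are assumed.
RESPONSE_WEIGHT = {"never": 0, "rarely": 5, "sometimes": 50, "often": 75, "always": 100}
# Gaze-position weights from the abstract; they sum to 100.
GAZE_WEIGHT = {"distance": 40, "reading": 40, "up": 1, "down": 8,
               "right": 4, "left": 4, "other": 3}

def diplopia_score(answers):
    """answers maps each gaze position to a response option; returns a 0-100 score."""
    return sum(GAZE_WEIGHT[g] * RESPONSE_WEIGHT[answers[g]] for g in GAZE_WEIGHT) / 100.0

example = {"distance": "often", "reading": "sometimes", "up": "never",
           "down": "rarely", "right": "never", "left": "never", "other": "never"}
print(diplopia_score(example))   # 50.4 on the 0-100 scale
```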

  3. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  4. Hey Teacher, Your Personality's Showing!

    ERIC Educational Resources Information Center

    Paulsen, James R.

    1977-01-01

    A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)

  5. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  6. Quantifying Effective Flow and Transport Properties in Heterogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Heidari, P.; Li, L.

    2012-12-01

    Spatial heterogeneity, the spatial variation in physical and chemical properties, exists at almost all scales and is an intrinsic property of natural porous media. It is important to understand and quantify how small-scale spatial variations determine large-scale "effective" properties in order to predict fluid flow and transport behavior in the natural subsurface. In this work, we aim to systematically understand and quantify the role of the spatial distribution of sand grains of different sizes in determining effective dispersivity and effective permeability using quasi-2D flow-cell experiments and numerical simulations. Two-dimensional flow cells (20 cm by 20 cm) were packed with the same total amount of fine and coarse sands, but with different spatial patterns. The homogeneous case has the completely mixed fine and coarse sands. The four zone case distributes the fine sand in four identical square zones within the coarse sand matrix. The one square case has all the fine sands in one square block. With the one square case pattern, two more experiments were designed in order to examine the effect of grain size contrast on effective permeability and dispersivity. Effective permeability was calculated based on both experimental and modeling results. Tracer tests were run for all cases. Advection dispersion equations were solved to match breakthrough data and to obtain average dispersivity. We also used Continuous Time Random Walk (CTRW) to quantify the non-Fickian transport behavior for each case. For the three cases with the same grain size contrast, the results show that the effective permeability does not differ significantly. The effective dispersion coefficient is the smallest for the homogeneous case (0.05 cm) and largest for the four zone case (0.27 cm). With the same pattern, the dispersivity value is the largest with the highest size contrast (0.28 cm), which is higher than that of the lowest-contrast case by a factor of 2. The non-Fickian behavior was quantified by the β value within the CTRW framework. Fickian transport will result in β values larger than 2, while deviation from 2 indicates the extent of non-Fickian behavior. Among the three cases with the same grain size contrast, the β value is closest to 2 in the homogeneous case (1.95), while smallest in the four zone case (1.89). In the one square case, with the highest size contrast, the β value was 1.57, indicating an increasing extent of non-Fickian behavior with higher size contrast. This study is one step toward understanding how small-scale spatial variation in physical properties affects large-scale flow and transport behavior. This step is important in predicting subsurface transport processes that are relevant to earth sciences, environmental engineering, and petroleum engineering.
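
    As a generic illustration of how grain-size contrast constrains the bulk flow property discussed above (not the flow-cell analysis itself), the sketch below computes the classical arithmetic- and harmonic-mean bounds on effective permeability for a two-sand mixture; the permeabilities and volume fractions are assumed.

```python
# Classical bounds: arithmetic mean (flow parallel to layering) and
# harmonic mean (flow perpendicular to layering). Values are illustrative.
k_fine, k_coarse = 5e-12, 2e-10        # permeabilities in m^2 (assumed)
f_fine, f_coarse = 0.5, 0.5            # volume fractions (assumed)

k_arithmetic = f_fine * k_fine + f_coarse * k_coarse
k_harmonic = 1.0 / (f_fine / k_fine + f_coarse / k_coarse)
print(f"effective permeability between {k_harmonic:.2e} and {k_arithmetic:.2e} m^2")
```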

  7. Quantifying the infectivity of human immunodeficiency virus.

    PubMed Central

    Layne, S P; Spouge, J L; Dembo, M

    1989-01-01

    We have developed a mathematical model that quantifies lymphocyte infection by human immunodeficiency virus (HIV) and lymphocyte protection by blocking agents such as soluble CD4. We use this model to suggest standardized parameters for quantifying viral infectivity and to suggest techniques for calculating these parameters from well-mixed infectivity assays. We discuss the implications of the model for our understanding of the infectious process and virulence of HIV in vivo. PMID:2734313

  8. Portable XRF Technology to Quantify Pb in Bone In Vivo

    PubMed Central

    Specht, Aaron James; Weisskopf, Marc; Nie, Linda Huiling

    2014-01-01

    Lead is a ubiquitous toxicant. Bone lead has been established as an important biomarker for cumulative lead exposures and has been correlated with adverse health effects on many systems in the body. K-shell X-ray fluorescence (KXRF) is the standard method for measuring bone lead, but this approach has many difficulties that have limited the widespread use of this exposure assessment method. With recent advancements in X-ray fluorescence (XRF) technology, we have developed a portable system that can quantify lead in bone in vivo within 3 minutes. Our study investigated improvements to the system, four calibration methods, and system validation for in vivo measurements. Our main results show that the detection limit of the system is 2.9 ppm with 2 mm soft tissue thickness, the best calibration method for in vivo measurement is background subtraction, and there is strong correlation between KXRF and portable LXRF bone lead results. Our results indicate that the technology is ready to be used in large human population studies to investigate adverse health effects of lead exposure. The portability of the system and fast measurement time should allow for this technology to greatly advance the research on lead exposure and public/environmental health. PMID:26317033
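
    A minimal sketch of the background-subtraction calibration idea, with hypothetical numbers: net peak counts are converted to a bone-lead concentration via an assumed calibration slope, and a common 3·sqrt(background)/slope convention stands in for the detection-limit calculation (the paper's exact procedure may differ).

```python
import math

gross_counts = 1520.0        # counts in the Pb X-ray peak region (hypothetical)
background_counts = 1200.0   # background estimate under the peak (hypothetical)
slope = 8.0                  # counts per ppm from calibration phantoms (hypothetical)

net = gross_counts - background_counts
concentration_ppm = net / slope

# A common counting-statistics convention for the detection limit.
detection_limit_ppm = 3.0 * math.sqrt(background_counts) / slope
print(f"bone lead ~ {concentration_ppm:.1f} ppm, detection limit ~ {detection_limit_ppm:.1f} ppm")
```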

  9. Quantifying Urban Groundwater in Environmental Field Observatories

    NASA Astrophysics Data System (ADS)

    Welty, C.; Miller, A. J.; Belt, K.; Smith, J. A.; Band, L. E.; Groffman, P.; Scanlon, T.; Warner, J.; Ryan, R. J.; Yeskis, D.; McGuire, M. P.

    2006-12-01

    Despite the growing footprint of urban landscapes and their impacts on hydrologic and biogeochemical cycles, comprehensive field studies of urban water budgets are few. The cumulative effects of urban infrastructure (buildings, roads, culverts, storm drains, detention ponds, leaking water supply and wastewater pipe networks) on temporal and spatial patterns of groundwater stores, fluxes, and flowpaths are poorly understood. The goal of this project is to develop expertise and analytical tools for urban groundwater systems that will inform future environmental observatory planning and that can be shared with research teams working in urban environments elsewhere. The work plan for this project draws on a robust set of information resources in Maryland provided by ongoing monitoring efforts of the Baltimore Ecosystem Study (BES), USGS, and the U.S. Forest Service working together with university scientists and engineers from multiple institutions. A key concern is to bridge the gap between small-scale intensive field studies and larger-scale and longer-term hydrologic patterns using synoptic field surveys, remote sensing, numerical modeling, data mining and visualization tools. Using the urban water budget as a unifying theme, we are working toward estimating the various elements of the budget in order to quantify the influence of urban infrastructure on groundwater. Efforts include: (1) comparison of base flow behavior from stream gauges in a nested set of watersheds at four different spatial scales from 0.8 to 171 km2, with diverse patterns of impervious cover and urban infrastructure; (2) synoptic survey of well water levels to characterize the regional water table; (3) use of airborne thermal infrared imagery to identify locations of groundwater seepage into streams across a range of urban development patterns; (4) use of seepage transects and tracer tests to quantify the spatial pattern of groundwater fluxes to the drainage network in selected subwatersheds; (5) development of a mass balance for precipitation over a 170 km2 area on a 1x1 km2 grid using recording rain gages for bias correction of weather radar products; (6) calculation of urban evapotranspiration using the Penman-Monteith method compared with results from an eddy correlation station; (7) use of a numerical groundwater model in a screening mode to estimate the depth of groundwater contributing surface water flow; and (8) data mining of public agency records of potable water and wastewater flows to estimate leakage rates and flowpaths in relation to streamflow and groundwater fluxes.

  10. Quantifying capital goods for biological treatment of organic waste.

    PubMed

    Brogaard, Line K; Petersen, Per H; Nielsen, Peter D; Christensen, Thomas H

    2015-02-01

    Materials and energy used for construction of anaerobic digestion (AD) and windrow composting plants were quantified in detail. The two technologies were quantified in collaboration with consultants and producers of the parts used to construct the plants. The composting plants were quantified based on the different sizes for the three different types of waste (garden and park waste, food waste and sludge from wastewater treatment) in amounts of 10,000 or 50,000 tonnes per year. The AD plant was quantified for a capacity of 80,000 tonnes per year. Concrete and steel for the tanks were the main materials for the AD plant. For the composting plants, gravel and concrete slabs for the pavement were used in large amounts. To frame the quantification, environmental impact assessments (EIAs) showed that the steel used for tanks at the AD plant and the concrete slabs at the composting plants made the highest contribution to Global Warming. The total impact on Global Warming from the capital goods compared to the operation reported in the literature on the AD plant showed an insignificant contribution of 1-2%. For the composting plants, the capital goods accounted for 10-22% of the total impact on Global Warming from composting. PMID:25595291

  11. Quantifying Energy Intake Changes during Obesity Pharmacotherapy

    PubMed Central

    Göbel, Britta; Sanghvi, Arjun; Hall, Kevin D.

    2014-01-01

    Objective Despite the fact that most obesity drugs primarily work by reducing metabolizable energy intake, elucidation of the time course of energy intake changes during long-term obesity pharmacotherapy has been prevented by the limitations of self-report methods of measuring energy intake. Methods We used a validated mathematical model of human metabolism to provide the first quantification of metabolizable energy intake changes during long-term obesity pharmacotherapy using body weight data from randomized, placebo-controlled trials that evaluated 14 different drugs or drug combinations. Results Changes in metabolizable energy intake during obesity pharmacotherapy were reasonably well-described by an exponential pattern comprising three simple parameters, with early large changes in metabolizable energy intake followed by a slow transition to a smaller persistent drug effect. Conclusions Repeated body weight measurements along with a mathematical model of human metabolism can be used to quantify changes in metabolizable energy intake during obesity pharmacotherapy. The calculated metabolizable energy intake changes followed an exponential time course, and therefore different drugs can be evaluated and compared using a common mathematical framework. PMID:24961931
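
    A minimal sketch of the three-parameter exponential pattern described above: an early large change in metabolizable energy intake relaxing toward a smaller persistent drug effect. The parameter values and function name are illustrative, not fitted results from the trials.

```python
import numpy as np

def intake_change(t_days, early_change, persistent_change, tau_days):
    """Exponential time course of the energy-intake change (kcal/d)."""
    return persistent_change + (early_change - persistent_change) * np.exp(-t_days / tau_days)

t = np.array([0, 30, 90, 180, 365])
# Illustrative parameters: a large early reduction decaying to a smaller persistent effect.
print(intake_change(t, early_change=-800.0, persistent_change=-250.0, tau_days=60.0))
```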

  12. Identifying and quantifying urban recharge: a review

    NASA Astrophysics Data System (ADS)

    Lerner, David N.

    2002-02-01

    The sources of and pathways for groundwater recharge in urban areas are more numerous and complex than in rural environments. Buildings, roads, and other surface infrastructure combine with man-made drainage networks to change the pathways for precipitation. Some direct recharge is lost, but additional recharge can occur from storm drainage systems. Large amounts of water are imported into most cities for supply, distributed through underground pipes, and collected again in sewers or septic tanks. The leaks from these pipe networks often provide substantial recharge. Sources of recharge in urban areas are identified through piezometry, chemical signatures, and water balances. All three approaches have problems. Recharge is quantified either by individual components (direct recharge, water-mains leakage, septic tanks, etc.) or holistically. Working with individual components requires large amounts of data, much of which is uncertain and is likely to lead to large uncertainties in the final result. Recommended holistic approaches include the use of groundwater modelling and solute balances, where various types of data are integrated. Urban recharge remains an under-researched topic, with few high-quality case studies reported in the literature.

  13. Quantifying Access Disparities in Response Plans.

    PubMed

    Indrakanti, Saratchandra; Mikler, Armin R; O'Neill, Martin; Tiwari, Chetan

    2016-01-01

    Effective response planning and preparedness are critical to the health and well-being of communities in the face of biological emergencies. Response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points within a region, overall population density, and estimated traffic demands. However, the plan may fail to serve particular vulnerable subpopulations, resulting in access disparities during emergency response. For a response plan to be effective, sufficient mitigation resources must be made accessible to target populations within short, federally-mandated time frames. A major challenge in response plan design is to establish a balance between the allocation of available resources and the provision of equal access to PODs for all individuals in a given geographic region. Limitations on the availability, granularity, and currency of data to identify vulnerable populations further complicate the planning process. To address these challenges and limitations, data driven methods to quantify vulnerabilities in the context of response plans have been developed and are explored in this article. PMID:26771551

  14. Quantifying ant activity using vibration measurements.

    PubMed

    Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C S; Evans, Theodore A

    2014-01-01

    Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually, however there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal owing to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467
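
    A minimal sketch of the wavelet idea (not the authors' exact procedure): decompose a synthetic recording with PyWavelets, zero the coarsest approximation as a stand-in for the slow substrate response, and keep the higher-frequency residual as the activity signal. The sampling rate, signal amplitudes, and decomposition level are assumptions.

```python
import numpy as np
import pywt  # PyWavelets

rng = np.random.default_rng(2)
fs = 1000.0                                    # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1.0 / fs)
substrate = 0.5 * np.sin(2 * np.pi * 3 * t)    # slow substrate response (assumed)
activity = 0.05 * rng.standard_normal(t.size)  # broadband ant activity (assumed)
recording = substrate + activity

# Decompose, drop the coarsest approximation (the slow substrate part),
# and reconstruct what remains as the estimated activity signal.
coeffs = pywt.wavedec(recording, "db4", level=6)
coeffs[0] = np.zeros_like(coeffs[0])
activity_estimate = pywt.waverec(coeffs, "db4")[: t.size]

print("RMS amplitude attributed to activity:", float(np.sqrt(np.mean(activity_estimate ** 2))))
```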

  15. Quantifying the limits of fingerprint variability.

    PubMed

    Fagert, Michael; Morris, Keith

    2015-09-01

    The comparison and identification of fingerprints are made difficult by fingerprint variability arising from distortion. This study seeks to quantify both the limits of fingerprint variability when subject to heavy distortion, and the variability observed in repeated inked planar impressions. A total of 30 fingers were studied: 10 right slant loops, 10 plain whorls, and 10 plain arches. Fingers were video recorded performing several distortion movements under heavy deposition pressure: left, right, up, and down translation of the finger, clockwise and counter-clockwise torque of the finger, and planar impressions. Fingerprint templates, containing 'true' minutiae locations, were created for each finger using 10 repeated inked planar impressions. A minimal amount of variability, 0.18mm globally, was observed for minutiae in repeated inked planar impressions. When subject to heavy distortion minutiae can be displaced by upwards of 3mm and their orientation altered by as much as 30° in relation to their template positions. Minutiae displacements of 1mm and 10° changes in orientation are readily observed. The results of this study will allow fingerprint examiners to identify and understand the degree of variability that can be reasonably expected throughout the various regions of fingerprints. PMID:26197351

  16. Quantifying Ant Activity Using Vibration Measurements

    PubMed Central

    Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C. S.; Evans, Theodore A.

    2014-01-01

    Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually, however there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal owing to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467

  17. Quantifying truncation errors in effective field theory

    NASA Astrophysics Data System (ADS)

    Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

    2015-08-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of quantum chromodynamics observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions ("priors") for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We first demonstrate the calculation of Bayesian probability distributions for the EFT truncation error in some representative examples and then focus on the application of chiral EFT to neutron-proton scattering. Epelbaum, Krebs, and Meißner recently articulated explicit rules for estimating truncation errors in such EFT calculations of few-nucleon-system properties. We find that their basic procedure emerges generically from one class of naturalness priors considered and that all such priors result in consistent quantitative predictions for 68% DOB intervals. We then explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter.
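
    A heavily simplified sketch of a degree-of-belief interval of the kind described above: assume the first omitted expansion coefficient follows a Gaussian naturalness prior whose scale is set by the observed lower-order coefficients, propagate it through the appropriate power of the expansion parameter, and read off the central 68% interval. The coefficients and expansion parameter Q are invented, and the paper's full Bayesian machinery is richer than this.

```python
import numpy as np

observed_coeffs = np.array([1.0, -0.8, 0.5, 1.2])   # c_0..c_3, assumed values
Q = 0.3                                              # expansion parameter, assumed
k = len(observed_coeffs) - 1

cbar = np.sqrt(np.mean(observed_coeffs ** 2))        # naturalness scale from the data
rng = np.random.default_rng(5)
# Truncation error dominated by the first omitted term c_{k+1} * Q**(k+1).
omitted = rng.normal(0.0, cbar, 100_000) * Q ** (k + 1)

low, high = np.percentile(omitted, [16, 84])         # central 68% DOB interval
print(f"68% DOB truncation-error interval: [{low:.4f}, {high:.4f}]")
```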

  18. Quantifying Wrinkle Features of Thin Membrane Structures

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Natori, M. C.

    2004-01-01

    For future micro-systems utilizing membrane-based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, supposed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thickness in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

  19. Quantifying Access Disparities in Response Plans

    PubMed Central

    Indrakanti, Saratchandra; Mikler, Armin R.; O’Neill, Martin; Tiwari, Chetan

    2016-01-01

    Effective response planning and preparedness are critical to the health and well-being of communities in the face of biological emergencies. Response plans involving mass prophylaxis may seem feasible when considering the choice of dispensing points within a region, overall population density, and estimated traffic demands. However, the plan may fail to serve particular vulnerable subpopulations, resulting in access disparities during emergency response. For a response plan to be effective, sufficient mitigation resources must be made accessible to target populations within short, federally-mandated time frames. A major challenge in response plan design is to establish a balance between the allocation of available resources and the provision of equal access to PODs for all individuals in a given geographic region. Limitations on the availability, granularity, and currency of data to identify vulnerable populations further complicate the planning process. To address these challenges and limitations, data driven methods to quantify vulnerabilities in the context of response plans have been developed and are explored in this article. PMID:26771551

  20. Quantifying dynamical spillover in co-evolving multiplex networks.

    PubMed

    Vijayaraghavan, Vikram S; Noël, Pierre-André; Maoz, Zeev; D'Souza, Raissa M

    2015-01-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of "dynamical spillover" showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways. PMID:26459949
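
    A toy sketch of the Multiplex Markov chain idea: track each edge's joint two-layer state across consecutive snapshots, estimate the joint transition matrix, and compare it with an independent-layers null built from the per-layer transition matrices. The snapshots below are invented; the actual analyses use the trade/alliance and email/co-commit data.

```python
import numpy as np

# snapshots[t, e] = (edge e present in layer 1?, present in layer 2?)
snapshots = np.array([
    [[0, 0], [1, 0], [1, 1], [0, 1]],
    [[1, 0], [1, 1], [1, 1], [0, 0]],
    [[1, 1], [1, 1], [0, 1], [0, 0]],
])

def row_normalize(counts):
    return counts / counts.sum(axis=1, keepdims=True)

# Joint (multiplex) transitions over the 4 states (0,0), (0,1), (1,0), (1,1).
joint_counts = np.zeros((4, 4))
layer_counts = [np.zeros((2, 2)), np.zeros((2, 2))]
for t in range(len(snapshots) - 1):
    for before, after in zip(snapshots[t], snapshots[t + 1]):
        joint_counts[2 * before[0] + before[1], 2 * after[0] + after[1]] += 1
        for layer in (0, 1):
            layer_counts[layer][before[layer], after[layer]] += 1

multiplex_P = row_normalize(joint_counts)
p1, p2 = (row_normalize(c) for c in layer_counts)

# Independent null: product of the per-layer transition probabilities.
null_P = np.zeros((4, 4))
for a1 in (0, 1):
    for a2 in (0, 1):
        for b1 in (0, 1):
            for b2 in (0, 1):
                null_P[2 * a1 + a2, 2 * b1 + b2] = p1[a1, b1] * p2[a2, b2]

print(np.round(multiplex_P - null_P, 2))   # nonzero entries suggest correlated edge dynamics
```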

  1. Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2012-01-01

    Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  2. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwave land-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  3. Quantifying dynamical spillover in co-evolving multiplex networks

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D'Souza, Raissa M.

    2015-10-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways.

  4. Quantifying dynamical spillover in co-evolving multiplex networks

    PubMed Central

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D’Souza, Raissa M.

    2015-01-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways. PMID:26459949

  5. Quantifying Annual Aboveground Net Primary Production in the Intermountain West

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As part of a larger project, methods were developed to quantify current year growth on grasses, forbs, and shrubs. Annual aboveground net primary production (ANPP) data are needed for this project to calibrate results from computer simulation models and remote-sensing data. Measuring annual ANPP of ...

  6. Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers

    NASA Astrophysics Data System (ADS)

    Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo

    2009-03-01

    We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover's Complaint, a poem which generally appears in Shakespeare's collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of the Jensen-Shannon divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare's work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
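
    An illustrative sketch of the ingredients named above, applied to two toy strings rather than Renaissance texts: a word-frequency distribution, its Shannon entropy, and the Jensen-Shannon divergence between two distributions.

```python
import math
from collections import Counter

def distribution(text):
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def entropy(p):
    """Shannon entropy (bits) of a word-frequency distribution."""
    return -sum(v * math.log2(v) for v in p.values())

def js_divergence(p, q):
    """Jensen-Shannon divergence (bits) between two distributions."""
    words = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in words}
    def kl(a, b):
        return sum(v * math.log2(v / b[w]) for w, v in a.items() if v > 0)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

text_a = "to be or not to be that is the question"   # toy inputs, not corpus texts
text_b = "the rest is silence"
pa, pb = distribution(text_a), distribution(text_b)
print(entropy(pa), entropy(pb), js_divergence(pa, pb))
```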

  7. Quantifying Sentiment and Influence in Blogspaces

    SciTech Connect

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

    The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as a sort of informal collection of mass sentiment and opinion. An obvious topic of interest might be to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics poses some serious challenges when any meaningful analysis is attempted. Namely, the fact that largely anyone with access to the Internet can author their own blog, raises the serious issue of credibility— should some blogs be considered to be more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on almost a constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific endgoal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.

  8. Magic Carpet Shows Its Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.

  9. Quantifying consistent individual differences in habitat selection.

    PubMed

    Leclerc, Martin; Vander Wal, Eric; Zedrosser, Andreas; Swenson, Jon E; Kindberg, Jonas; Pelletier, Fanie

    2016-03-01

    Habitat selection is a fundamental behaviour that links individuals to the resources required for survival and reproduction. Although natural selection acts on an individual's phenotype, research on habitat selection often pools inter-individual patterns to provide inferences on the population scale. Here, we expanded a traditional approach of quantifying habitat selection at the individual level to explore the potential for consistent individual differences of habitat selection. We used random coefficients in resource selection functions (RSFs) and repeatability estimates to test for variability in habitat selection. We applied our method to a detailed dataset of GPS relocations of brown bears (Ursus arctos) taken over a period of 6 years, and assessed whether they displayed repeatable individual differences in habitat selection toward two habitat types: bogs and recent timber-harvest cut blocks. In our analyses, we controlled for the availability of habitat, i.e. the functional response in habitat selection. Repeatability estimates of habitat selection toward bogs and cut blocks were 0.304 and 0.420, respectively. Therefore, 30.4 and 42.0 % of the population-scale habitat selection variability for bogs and cut blocks, respectively, was due to differences among individuals, suggesting that consistent individual variation in habitat selection exists in brown bears. Using simulations, we posit that repeatability values of habitat selection are not related to the value and significance of β estimates in RSFs. Although individual differences in habitat selection could be the results of non-exclusive factors, our results illustrate the evolutionary potential of habitat selection. PMID:26597548
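
    A minimal sketch (simulated selection coefficients, not the brown-bear RSF fits) of the repeatability quantity discussed above: the among-individual variance divided by the total (among plus within) variance, here estimated with a one-way ANOVA decomposition. Sample sizes and variances are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n_ind, n_rep = 30, 6                                  # individuals x repeated yearly estimates
true_ind_effect = rng.normal(0.0, 0.7, n_ind)         # among-individual spread (assumed)
coefs = true_ind_effect[:, None] + rng.normal(0.0, 1.0, (n_ind, n_rep))

# One-way ANOVA estimator of the variance components.
grand = coefs.mean()
ms_among = n_rep * ((coefs.mean(axis=1) - grand) ** 2).sum() / (n_ind - 1)
ms_within = ((coefs - coefs.mean(axis=1, keepdims=True)) ** 2).sum() / (n_ind * (n_rep - 1))
var_among = (ms_among - ms_within) / n_rep
repeatability = var_among / (var_among + ms_within)
print(f"repeatability ~ {repeatability:.2f}")
```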

  10. Uncertainty of natural tracer methods for quantifying river-aquifer interaction in a large river

    NASA Astrophysics Data System (ADS)

    Xie, Yueqing; Cook, Peter G.; Shanafield, Margaret; Simmons, Craig T.; Zheng, Chunmiao

    2016-04-01

    The quantification of river-aquifer interaction is critical to the conjunctive management of surface water and groundwater, particularly in arid and semiarid environments where potential evapotranspiration is much higher than precipitation. A variety of natural tracer methods are available to quantify river-aquifer interaction at different scales. These methods, however, have only been tested in rivers with relatively low flow rates (mostly less than 5 m3 s-1). In this study, several natural tracers including heat, radon-222 and electrical conductivity were measured both on vertical riverbed profiles and on longitudinal river samples to quantify river-aquifer exchange flux at both point and regional scales in the Heihe River (northwest China; flow rate 63 m3 s-1). Results show that the radon-222 profile method can estimate a narrower range of point-scale flux than the temperature profile method. In particular, three vertical radon-222 profiles failed to estimate the upper bounds of plausible flux ranges. Results also show that when quantifying regional-scale river-aquifer exchange flux, the river chemistry method constrained the flux (5.20-10.39 m2 d-1) better than the river temperature method (-100 to 100 m2 d-1). The river chemistry method also identified spatial variability of flux, whereas the river temperature method did not have sufficient resolution. Overall, for quantifying river-aquifer exchange flux in a large river, the temperature profile method and the radon-222 profile method provide useful complementary information at the point scale, whereas the river chemistry method is recommended over the river temperature method at the regional scale.

  11. Interpolating Quantifier-Free Presburger Arithmetic

    NASA Astrophysics Data System (ADS)

    Kroening, Daniel; Leroux, Jérôme; Rümmer, Philipp

    Craig interpolation has become a key ingredient in many symbolic model checkers, serving as an approximative replacement for expensive quantifier elimination. In this paper, we focus on an interpolating decision procedure for the full quantifier-free fragment of Presburger Arithmetic, i.e., linear arithmetic over the integers, a theory which is a good fit for the analysis of software systems. In contrast to earlier procedures based on quantifier elimination and the Omega test, our approach uses integer linear programming techniques: relaxation of interpolation problems to the rationals, and a complete branch-and-bound rule tailored to efficient interpolation. Equations are handled via a dedicated polynomial-time sub-procedure. We have fully implemented our procedure on top of the SMT-solver OpenSMT and present an extensive experimental evaluation.

  12. ENVITEC shows off air technologies

    SciTech Connect

    McIlvaine, R.W.

    1995-08-01

    The ENVITEC International Trade Fair for Environmental Protection and Waste Management Technologies, held in June in Duesseldorf, Germany, is the largest air pollution exhibition in the world and may be the largest environmental technology show overall. Visitors saw thousands of environmental solutions from 1,318 companies representing 29 countries and occupying roughly 43,000 square meters of exhibit space. Many innovations were displayed under the category "thermal treatment of air pollutants." New technologies include the following: regenerative thermal oxidizers; wet systems for removing pollutants; biological scrubbers; electrostatic precipitators; selective adsorption systems; activated-coke adsorbers; optimization of scrubber systems; and air pollution monitors.

  13. Quantifying forest mortality with the remote sensing of snow

    NASA Astrophysics Data System (ADS)

    Baker, Emily Hewitt

    Greenhouse gas emissions have altered global climate significantly, increasing the frequency of drought, fire, and pest-related mortality in forests across the western United States, with increasing area affected each year. Associated changes in forests are of great concern for the public, land managers, and the broader scientific community. These increased stresses have resulted in a widespread, spatially heterogeneous decline of forest canopies, which in turn exerts strong controls on the accumulation and melt of the snowpack, and changes forest-atmosphere exchanges of carbon, water, and energy. Most satellite-based retrievals of summer-season forest data are insufficient to quantify canopy, as opposed to the combination of canopy and undergrowth, since the signals of the two types of vegetation greenness have proven persistently difficult to distinguish. To overcome this issue, this research develops a method to quantify forest canopy cover using winter-season fractional snow covered area (FSCA) data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow covered area and grain size (MODSCAG) algorithm. In areas where the ground surface and undergrowth are completely snow-covered, a pixel comprises only forest canopy and snow. Following a snowfall event, FSCA initially rises, as snow is intercepted in the canopy, and then falls, as snow unloads. A select set of local minima in a winter FSCA time series forms a threshold where the canopy is snow-free but the forest understory is snow-covered. This serves as a spatially explicit measurement of forest canopy and viewable gap fraction (VGF) on a yearly basis. Using this method, we determine that MODIS-observed VGF is significantly correlated with an independent product of yearly crown mortality derived from spectral analysis of Landsat imagery at 25 high-mortality sites in northern Colorado (r = 0.96 +/- 0.03, p = 0.03). Additionally, we determine the lag timing between green-stage tree mortality and needlefall, showing that needlefall occurred an average of 2.6 +/- 1.2 years after green-stage mortality. We relate observed increases in the VGF with crown mortality, showing that a 1% increase in mortality area produces a 0.33 +/- 0.1% increase in the VGF.
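
    A toy sketch of the canopy metric: pick local minima of a short synthetic winter FSCA series as the moments when the canopy is snow-free over snow-covered ground, and read a viewable gap fraction (VGF) estimate from them. The minima-selection rule and the numbers below are simplifications of the method described above.

```python
import numpy as np

# Synthetic winter FSCA series: rises after snowfall (canopy interception), then falls.
fsca = np.array([0.62, 0.70, 0.66, 0.61, 0.74, 0.68, 0.60, 0.59, 0.71, 0.64])

# Local minima: values lower than both neighbours.
interior = np.arange(1, fsca.size - 1)
minima = interior[(fsca[interior] < fsca[interior - 1]) & (fsca[interior] < fsca[interior + 1])]
vgf_estimate = fsca[minima].min() if minima.size else np.nan
print(f"local minima at {minima}, VGF estimate ~ {vgf_estimate:.2f}")
```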

  14. Erythropoietic protoporphyria showing solar purpura.

    PubMed

    Torinuki, W; Miura, T

    1983-01-01

    An 11-year-old girl with erythropoietic protoporphyria is described. She was admitted to our hospital complaining of swelling and purpura on her arms resulting from overexposure to solar radiation. An elevated level of protoporphyrin in the red blood cells and feces was detected by thin-layer chromatography and fluorescent scanning analysis. PMID:6642040

  15. Quantifying and Communicating Uncertainty in Preclinical Human Dose-Prediction

    PubMed Central

    Sundqvist, M; Lundahl, A; Någård, MB; Bredberg, U; Gennemark, P

    2015-01-01

    Human dose-prediction is fundamental for ranking lead-optimization compounds in drug discovery and to inform design of early clinical trials. This tutorial describes how uncertainty in such predictions can be quantified and efficiently communicated to facilitate decision-making. Using three drug-discovery case studies, we show how several uncertain pieces of input information can be integrated into one single uncomplicated plot with key predictions, including their uncertainties, for many compounds or for many scenarios, or both. PMID:26225248

  16. Quantifying channels output similarity with applications to quantum control

    NASA Astrophysics Data System (ADS)

    Pawela, Łukasz; Puchała, Zbigniew

    2016-04-01

    In this work, we aim at quantifying quantum channel output similarity. In order to achieve this, we introduce the notion of quantum channel superfidelity, which gives us an upper bound on the quantum channel fidelity. This quantity is expressed in a clear form using the Kraus representation of a quantum channel. As examples, we show potential applications of this quantity in the quantum control field.
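
    A minimal sketch of the quantity described: apply two illustrative single-qubit Kraus channels to the same input state and evaluate the superfidelity G(rho, sigma) = Tr(rho sigma) + sqrt((1 - Tr rho^2)(1 - Tr sigma^2)), which upper-bounds the fidelity of the outputs. The bit-flip channels and input state are arbitrary choices, not the paper's examples.

```python
import numpy as np

def apply_channel(kraus_ops, rho):
    """Channel output in the Kraus representation: sum_k K rho K^dagger."""
    return sum(K @ rho @ K.conj().T for K in kraus_ops)

def superfidelity(rho, sigma):
    overlap = np.trace(rho @ sigma).real
    purity_term = np.sqrt(max(0.0, 1 - np.trace(rho @ rho).real) *
                          max(0.0, 1 - np.trace(sigma @ sigma).real))
    return overlap + purity_term

ident = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
p, q = 0.1, 0.3                       # two bit-flip channels of different strength (assumed)
chan_a = [np.sqrt(1 - p) * ident, np.sqrt(p) * X]
chan_b = [np.sqrt(1 - q) * ident, np.sqrt(q) * X]

rho_in = np.array([[1, 0], [0, 0]], dtype=complex)   # |0><0| input state
print(superfidelity(apply_channel(chan_a, rho_in), apply_channel(chan_b, rho_in)))  # ~0.93
```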

  17. ShowMe3D

    Energy Science and Technology Software Center (ESTSC)

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  18. "Show me" bioethics and politics.

    PubMed

    Christopher, Myra J

    2007-10-01

    Missouri, the "Show Me State," has become the epicenter of several important national public policy debates, including abortion rights, the right to choose and refuse medical treatment, and, most recently, early stem cell research. In this environment, the Center for Practical Bioethics (formerly, Midwest Bioethics Center) emerged and grew. The Center's role in these "cultural wars" is not to advocate for a particular position but to provide well researched and objective information, perspective, and advocacy for the ethical justification of policy positions; and to serve as a neutral convener and provider of a public forum for discussion. In this article, the Center's work on early stem cell research is a case study through which to argue that not only the Center, but also the field of bioethics has a critical role in the politics of public health policy. PMID:17926217

  19. Phoenix Scoop Inverted Showing Rasp

    NASA Technical Reports Server (NTRS)

    2008-01-01

    This image taken by the Surface Stereo Imager on Sol 49, or the 49th Martian day of the mission (July 14, 2008), shows the silver colored rasp protruding from NASA's Phoenix Mars Lander's Robotic Arm scoop. The scoop is inverted and the rasp is pointing up.

    Shown with its forks pointing toward the ground is the thermal and electrical conductivity probe, at the lower right. The Robotic Arm Camera is pointed toward the ground.

    The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is led by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.

  20. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral images obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  1. Quantifying the reheating temperature of the universe

    NASA Astrophysics Data System (ADS)

    Mazumdar, Anupam; Zaldívar, Bryan

    2014-09-01

    The aim of this paper is to determine an exact definition of the reheat temperature for a generic perturbative decay of the inflaton. In order to estimate the reheat temperature, there are two important conditions one needs to satisfy: (a) the decay products of the inflaton must dominate the energy density of the universe, i.e. the universe becomes completely radiation dominated, and (b) the decay products of the inflaton have attained local thermodynamical equilibrium. For some choices of parameters, the latter is a more stringent condition, such that the decay products may thermalise much after the beginning of radiation domination. Consequently, we find that the reheat temperature can be much lower than the standard-lore estimation. In this paper we describe under what conditions our universe could have efficient or inefficient thermalisation, and quantify the reheat temperature for both scenarios. This result has an immediate impact on many applications which rely on the thermal history of the universe, in particular the gravitino abundance. We distinguish three regimes: instant thermalisation, when the inflaton decay products thermalise immediately upon decay; efficient thermalisation, when they thermalise right at the instant when the radiation epoch starts dominating the universe; and delayed thermalisation, when they thermalise deep inside the radiation dominated epoch, after the transition from inflaton to radiation domination has occurred. This paper is organised as follows. In Section 2 we set the stage and write down the relevant equations for our analysis. The standard lore about the reheating epoch is briefly reviewed in Section 3. Section 4 is devoted to presenting our analysis, in which we study the conditions under which the plasma attains thermalisation. In Section 5 we discuss the concept of reheat temperature so as to properly capture the issues of thermalisation. Finally, we conclude in Section 6.

  2. Quantifying thresholds in state space to advance our understanding of emergent behavior

    NASA Astrophysics Data System (ADS)

    Lintz, H. E.; Graham, C. B.

    2011-12-01

    Thresholds are common across diverse systems and scales and often represent emergent, complex behavior. While thresholds are a widely accepted concept, most empirical methods focus on their detection in time. Although threshold detection is useful, it does not quantify the direct drivers of the threshold response. Causal understanding of thresholds detected empirically requires their investigation in a multi-factor domain containing the direct drivers (often referred to as state space). Here, we present a new approach that quantifies threshold strength from response surfaces modeled in state space. We illustrate how this method can be used to study and better understand mechanisms that drive thresholds resulting from interactions among multiple factors. In particular, we examine stream threshold response to storm precipitation and antecedent wetness and ask how climate and catchment factors modulate local interactions that determine threshold strength. We pair data from the basin outlet of USGS gauging stations within 1 kilometer of meteorological stations with data from the nearest met-station. Non-parametric multiplicative regression (NPMR) is used to build response surfaces of flow with respect to antecedent wetness indices and storm precipitation. We quantify threshold strength using a threshold strength index applied to response surfaces that are built for each gauging station. We show how the approach can be used to study and better understand mechanisms that drive multi-factor thresholds resulting from interactions across scales. We find that catchment characteristics modulate the domain of interaction (between storm precipitation and antecedent wetness) that exhibits the strongest thresholds in runoff. We argue that our method and results can advance mechanistic understanding of hydrologic thresholds in stream response across catchments. Finally, we also argue that the relative strength of multi-factor thresholds exhibited by a system or across systems should be quantified and compared in state space. In so doing, we can enhance our understanding of threshold behavior across systems and disciplines.
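
    The abstract does not spell out the threshold strength index, so the following is only a schematic of the general idea: smooth the observed flow response over the two drivers (storm precipitation and antecedent wetness) and score how step-like the fitted surface is; the Gaussian kernel smoother stands in for NPMR and the gradient-based index is an assumption, not the authors' definition:

        import numpy as np

        def kernel_surface(x1, x2, y, grid, bw=0.08):
            """Gaussian-kernel-smoothed response surface on a regular grid
            (a simple stand-in for non-parametric multiplicative regression)."""
            G1, G2 = np.meshgrid(grid, grid, indexing="ij")
            surf = np.empty_like(G1)
            for i in range(G1.shape[0]):
                for j in range(G1.shape[1]):
                    w = np.exp(-((x1 - G1[i, j]) ** 2 + (x2 - G2[i, j]) ** 2) / (2 * bw ** 2))
                    surf[i, j] = np.sum(w * y) / np.sum(w)
            return surf

        def threshold_strength(surf, spacing):
            """Illustrative index: maximum local gradient magnitude of the surface,
            normalized by the overall response range."""
            g1, g2 = np.gradient(surf, spacing)
            return float(np.max(np.hypot(g1, g2)) / (surf.max() - surf.min() + 1e-12))

        # Synthetic catchment: runoff switches on when precipitation + wetness cross a threshold.
        rng = np.random.default_rng(0)
        precip, wetness = rng.random(400), rng.random(400)
        flow = 1.0 / (1.0 + np.exp(-40 * (precip + wetness - 1.0))) + 0.05 * rng.normal(size=400)
        grid = np.linspace(0, 1, 41)
        surf = kernel_surface(precip, wetness, flow, grid)
        print("threshold strength index:", round(threshold_strength(surf, grid[1] - grid[0]), 2))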

  3. Improved Estimates Show Large Circumpolar Stocks of Permafrost Carbon While Quantifying Substantial Uncertainty Ranges and Identifying Remaining Data Gaps

    NASA Astrophysics Data System (ADS)

    Hugelius, G.; Strauss, J.; Zubrzycki, S.; Harden, J. W.; Schuur, E. A. G.; Ping, C. L.; Schirrmeister, L.; Grosse, G.; Michaelson, G. J.; Koven, C. D.; ODonnell, J. A.; Elberling, B.; Mishra, U.; Camill, P.; Yu, Z.; Palmtag, J.; Kuhry, P.

    2014-12-01

    Soils and other unconsolidated deposits in the northern circumpolar permafrost region store large amounts of soil organic carbon (SOC). This SOC is potentially vulnerable to remobilization following soil warming and permafrost thaw, but stock estimates are poorly constrained and quantitative error estimates were lacking. This study presents revised estimates of the permafrost SOC pool, including quantitative uncertainty estimates, in the 0-3 m depth range in soils as well as for deeper sediments (> 3 m) in deltaic deposits of major rivers and in the Yedoma region of Siberia and Alaska. The revised estimates are based on significantly larger databases compared to previous studies. Compared to previous studies, the number of individual sites/pedons has increased by a factor ×8-11 for 1-3 m soils, a factor ×8 for deltaic alluvium and a factor ×5 for Yedoma region deposits. A total estimated mean storage for the permafrost region of ca. 1300-1400 Pg with an uncertainty range of 1050-1650 Pg encompasses the revised estimates. Of this, ≤900 Pg is perennially frozen. While some components of the revised SOC stocks are similar in magnitude to those previously reported for this region, there are also substantial differences in individual components. There is evidence of substantial remaining regional data-gaps. Estimates remain particularly poorly constrained for soils in the High Arctic region and physiographic regions with thin sedimentary overburden (mountains, highlands and plateaus) as well as for >3 m depth deposits in deltas and the Yedoma region.

  4. Taking the high (or low) road: a quantifier priming perspective on basic anchoring effects.

    PubMed

    Sleeth-Keppler, David

    2013-01-01

    Current explanations of basic anchoring effects, defined as the influence of an arbitrary number standard on an uncertain judgment, confound numerical values with vague quantifiers. I show that the consideration of numerical anchors may bias subsequent judgments primarily through the priming of quantifiers, rather than the numbers themselves. Study 1 varied the target of a numerical comparison judgment in a between-participants design, while holding the numerical anchor value constant. This design yielded an anchoring effect consistent with a quantifier priming hypothesis. Study 2 included a direct manipulation of vague quantifiers in the traditional anchoring paradigm. Finally, Study 3 examined the notion that specific associations between quantifiers, reflecting values on separate judgmental dimensions (i.e., the price and height of a target) can affect the direction of anchoring effects. Discussion focuses on the nature of vague quantifier priming in numerically anchored judgments. PMID:23951950

  5. Method to quantify tail vein injection technique in small animals.

    PubMed

    Groman, Ernest V; Reinhardt, Christopher P

    2004-01-01

    Injection errors, which are often not readily recognized, can greatly impact the outcome of a pre-clinical research study. As a result, unrecognized misadministration of test compounds can render a high cost to the biomedical community. In this report, we propose six criteria for a reagent designed to assess tail vein injection technique in small animals and suggest a reagent, colloidal gold labeled with the stable isotope 197Au, that satisfies these criteria, thereby describing and validating for the first time a method to quantify technical compliance in tail vein injections. In an application of this reagent, we show the degree of variation experienced by technologists performing tail vein injection procedures in mice. In this study, mice were manually restrained and received an injection in the tail vein. One hour after injection, the mice were euthanized, various organs including the tail (the site of the injection) were collected, and their gold content was quantified by neutron activation. The three experienced animal technologists in the study were tested for tail vein injection proficiency in 30 mice. Prior to the study, the supervisor stated that a misinjection occurs when more than 10% of the intended volume remains in the tail. In light of this criterion, 12 of the 30 injections were misadministered: two with technologist 1, three with technologist 2, and seven with technologist 3. Although she was able to correctly rank the injection skills of the three technologists used in this experiment, i.e., technologists 1 and 2 being better skilled than technologist 3, the supervisor greatly underestimated the extent and degree of injection failures for the procedure. The results of the study illustrate the potential problems associated with technical compliance in this common laboratory procedure and suggest that there is a need to validate injection methods and a need to monitor technical competence. Application of reagents similar to colloidal gold and the methods presented will facilitate the development of improved methods of teaching injection technique and monitoring technical quality in the laboratory setting. PMID:14984288

  6. Convective activity quantified by sub-gridscale fluxes

    NASA Astrophysics Data System (ADS)

    Hantel, M.; Hamelbeck, F.

    A dynamic quantity to measure the actual strength of convection is the sub-gridscale transport of equivalent temperature Te = T + (L/cp)q; we refer to the corresponding correlation, the covariance of the sub-gridscale fluctuations of Te and the pressure velocity ω = dp/dt, as the convective flux. Vertical profiles of the convective flux within atmospheric columns are computed with a diagnostic model (DIAMOD) from the observed gridscale budgets by using analysed fields of ECMWF. Their high quality makes DIAMOD sufficiently accurate despite the strong internal compensation in the gridscale budget terms. The boundary value for the vertical integration is the latent plus sensible heat flux across the Earth's surface. We show that the maximum convective flux in a column is proportional to the mean vertical slope of the gridscale budget averaged over the troposphere. Results for 144 columns (100 km/12 h each) over Europe for a case of deep convection south of the Alps in September 1995 (the South Ticino case) are presented. There is areal precipitation of up to 45 mm/12 h. The areal convective flux exceeds 1000 W/m2 around 600 hPa in some columns. Maxima of precipitation and convective flux do not exactly coincide. This is not inconsistent with the notion that the convective flux (estimated with DIAMOD or an equivalent approach) is the proper dynamic measure to quantify the convective process.

  7. Statistical physics approach to quantifying differences in myelinated nerve fibers

    NASA Astrophysics Data System (ADS)

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-03-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.
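
    To illustrate the pair-of-features selection step described above, the sketch below scores every pair of candidate features by cross-validated classification accuracy on synthetic data (the feature names, the k-nearest-neighbour classifier, and the data are placeholders, not the authors' pipeline):

        from itertools import combinations
        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        # Illustrative feature matrix: rows are micrograph samples, columns are candidate
        # features; y is the age group (0 = young, 1 = old). All values are synthetic.
        rng = np.random.default_rng(1)
        n = 60
        X = rng.normal(size=(n, 4))
        y = (X[:, 2] + 0.8 * X[:, 3] + 0.3 * rng.normal(size=n) > 0).astype(int)
        names = ["fiber_size", "packing_density", "occupied_area_fraction", "effective_local_density"]

        # Score every feature pair by cross-validated accuracy and keep the best pair.
        best_score, best_pair = max(
            (np.mean(cross_val_score(KNeighborsClassifier(5), X[:, list(pair)], y, cv=5)), pair)
            for pair in combinations(range(X.shape[1]), 2)
        )
        print("best pair:", [names[i] for i in best_pair], "accuracy:", round(best_score, 2))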

  8. Quantifying solute transport processes: are chemically "conservative" tracers electrically conservative?

    USGS Publications Warehouse

    Singha, Kamini; Li, Li; Day-Lewis, Frederick D.; Regberg, Aaron B.

    2012-01-01

    The concept of a nonreactive or conservative tracer, commonly invoked in investigations of solute transport, requires additional study in the context of electrical geophysical monitoring. Tracers that are commonly considered conservative may undergo reactive processes, such as ion exchange, thus changing the aqueous composition of the system. As a result, the measured electrical conductivity may reflect not only solute transport but also reactive processes. We have evaluated the impacts of ion exchange reactions, rate-limited mass transfer, and surface conduction on quantifying tracer mass, mean arrival time, and temporal variance in laboratory-scale column experiments. Numerical examples showed that (1) ion exchange can lead to resistivity-estimated tracer mass, velocity, and dispersivity that may be inaccurate; (2) mass transfer leads to an overestimate in the mobile tracer mass and an underestimate in velocity when using electrical methods; and (3) surface conductance does not notably affect estimated moments when high-concentration tracers are used, although this phenomenon may be important at low concentrations or in sediments with high and/or spatially variable cation-exchange capacity. In all cases, colocated groundwater concentration measurements are of high importance for interpreting geophysical data with respect to the controlling transport processes of interest.
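
    The quantities compared above (tracer mass, mean arrival time, and temporal variance) are temporal moments of a breakthrough curve and can be computed the same way whether the curve comes from fluid sampling or from a resistivity-derived concentration history; a minimal sketch with a synthetic curve:

        import numpy as np

        def temporal_moments(t, c):
            """Zeroth moment (mass proxy), mean arrival time, and temporal variance
            of a breakthrough curve c(t)."""
            m0 = np.trapz(c, t)
            mean_t = np.trapz(t * c, t) / m0
            var_t = np.trapz((t - mean_t) ** 2 * c, t) / m0
            return m0, mean_t, var_t

        # Illustrative breakthrough curve: time in hours, concentration in arbitrary units.
        t = np.linspace(0.0, 48.0, 500)
        c = np.exp(-((t - 12.0) ** 2) / (2 * 3.0 ** 2))
        m0, mean_t, var_t = temporal_moments(t, c)
        print(f"mass proxy {m0:.2f}, mean arrival {mean_t:.1f} h, temporal variance {var_t:.1f} h^2")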

  9. Quantifying subsurface mixing of groundwater from lowland stream perspective.

    NASA Astrophysics Data System (ADS)

    van der Velde, Ype; Torfs, Paul; van der Zee, Sjoerd; Uijlenhoet, Remko

    2013-04-01

    The distribution of time it takes water from the moment of precipitation to reach the catchment outlet is widely used as a characteristic for catchment discharge behaviour, catchment vulnerability to pollution spreading and pollutant loads from catchments to downstream waters. However, this distribution tends to vary in time, driven by variability in precipitation and evapotranspiration. Subsurface mixing controls to what extent dynamics in rainfall and evapotranspiration are translated into dynamics of travel time distributions. This insight into the hydrologic functioning of catchments requires new definitions and concepts that link dynamics of catchment travel time distributions to the degree of subsurface mixing. In this presentation we propose the concept of STorage Outflow Probability (STOP) functions, which quantify the probability that water parcels stored in a catchment leave this catchment by discharge or evapotranspiration. We will show how STOPs relate to the topography and subsurface and how they can be used for deriving time-varying travel time distributions of a catchment. The presented analyses will combine a unique dataset of high-frequency discharge and nitrate concentration measurements with results of a spatially distributed groundwater model and conceptual models of water flow and solute transport. Remarkable findings are the large contrasts in discharge behaviour, expressed in travel time, between lowland and sloping catchments and the strong relationship between evapotranspiration and stream water nutrient concentration dynamics.

  10. Quantifying singlet fission in novel organic materials using nonlinear optics

    NASA Astrophysics Data System (ADS)

    Busby, Erik; Xia, Jianlong; Yaffe, Omer; Kumar, Bharat; Berkelbach, Timothy; Wu, Qin; Miller, John; Nuckolls, Colin; Zhu, Xiaoyang; Reichman, David; Campos, Luis; Sfeir, Matthew Y.

    2014-10-01

    Singlet fission is a form of multiple exciton generation in which two triplet excitons are produced from the decay of a photoexcited singlet exciton. In a small number of organic materials, most notably pentacene, this conversion process has been shown to occur with unity quantum yield on sub-ps timescales. However, a poorly understood mechanism for fission along with strict energy and geometry requirements have so far limited the observation of this process to a few classes of organic materials, with only a subset of these (most notably the polyacenes) showing both efficient fission and long-lived triplets. Here, we utilize novel organic materials to investigate how the efficiency of the fission process depends on the coupling and the energetic driving force between chromophores in both intra- and intermolecular singlet fission materials. We demonstrate how the triplet yield can be accurately quantified using a combination of traditional transient spectroscopies and recently developed excited state saturable absorption techniques. These results allow us to gain mechanistic insight into the fission process and suggest general strategies for generating new materials that can undergo efficient fission.

  11. Microfluidic experiments to quantify microbes encountering oil water interfaces

    NASA Astrophysics Data System (ADS)

    Sheng, Jian; Jalali, Maryam; Molaei, Mehdi

    2015-11-01

    It is known that marine microbes are one of the components of biodegradation of crude oil. Biodegradation of crude oil is initiated by microbes encountering the droplet. To elucidate the key processes involved in bacteria encountering rising oil droplets, we have established microfluidic devices with hydrophilic surfaces to create micro oil droplets with controlled sizes. To quantify the effect of motility of bacteria on their encounter rate, using high speed microscopy, we simultaneously tracked motile bacteria and solid particles with equivalent sizes encountering oil droplets. The results show that in the advection dominant regime, where the droplet size and the rising velocity are large, bacterial motility plays no role in the encounter rate; however, in the diffusion dominant regime, where the swimming velocity of the cells is comparable with the rising velocity and the Peclet number of the particles is small, motility of the cells increases their encounter rate. Ongoing analysis focuses on developing a mathematical model to predict the encounter rate of the cells based on their size, swimming speed, and dispersion rate and the size of the oil droplets. GoMRI.
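
    A back-of-the-envelope sketch of the regime argument above: compare advection by the rising droplet with the effective diffusion implied by cell motility through a Peclet number built from droplet size, rise velocity, and an effective swimmer diffusivity (all numbers are illustrative, not measurements from the study):

        import numpy as np

        def effective_diffusivity(swim_speed, run_time):
            """Effective diffusivity of a run-and-tumble swimmer, D ~ v^2 * tau / 3."""
            return swim_speed ** 2 * run_time / 3.0

        def peclet(rise_velocity, droplet_diameter, diffusivity):
            """Peclet number comparing advection past the droplet to cell diffusion."""
            return rise_velocity * droplet_diameter / diffusivity

        # Illustrative values: 50 um/s swimmer with 1 s runs; a small and a large droplet.
        D_cell = effective_diffusivity(50e-6, 1.0)  # m^2/s
        for diameter, rise_speed in [(10e-6, 5e-6), (200e-6, 2e-3)]:  # m, m/s
            pe = peclet(rise_speed, diameter, D_cell)
            regime = ("advection dominant: motility matters little" if pe > 1
                      else "diffusion dominant: motility raises the encounter rate")
            print(f"d = {diameter * 1e6:.0f} um, Pe = {pe:.2f} -> {regime}")

    With these illustrative numbers the large, fast-rising droplet sits firmly in the advection dominant regime and the small droplet in the diffusion dominant regime, mirroring the two cases contrasted in the abstract.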

  12. Statistical physics approach to quantifying differences in myelinated nerve fibers

    PubMed Central

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-01-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross–sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146

  13. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a 87Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  14. Quantifying Item Dependency by Fisher's Z.

    ERIC Educational Resources Information Center

    Shen, Linjun

    Three aspects of the usual approach to assessing local item dependency, Yen's "Q" (H. Huynh, H. Michaels, and S. Ferrara, 1995), deserve further investigation. Pearson correlation coefficients are not normally distributed when the coefficients are large, and thus cannot quantify the dependency well. In the second place, the accuracy of item response…
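
    For reference, the transformation in the title is Fisher's z = arctanh(r), which is approximately normally distributed with standard error 1/sqrt(n - 3); a minimal sketch comparing two item-pair correlations on the z scale (the values are illustrative, not from the report):

        import numpy as np
        from scipy import stats

        def fisher_z(r):
            """Fisher's z transform of a Pearson correlation."""
            return np.arctanh(r)

        def compare_correlations(r1, n1, r2, n2):
            """Two-sided p-value for the difference of two independent correlations,
            tested on the approximately normal Fisher z scale (SE = 1/sqrt(n - 3))."""
            z = (fisher_z(r1) - fisher_z(r2)) / np.sqrt(1.0 / (n1 - 3) + 1.0 / (n2 - 3))
            return 2 * stats.norm.sf(abs(z))

        # Illustrative item-pair correlations from two forms of the same test.
        print("p-value:", round(compare_correlations(0.85, 200, 0.60, 200), 4))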

  15. Quantifying the Reuse of Learning Objects

    ERIC Educational Resources Information Center

    Elliott, Kristine; Sweeney, Kevin

    2008-01-01

    This paper reports the findings of one case study from a larger project, which aims to quantify the claimed efficiencies of reusing learning objects to develop e-learning resources. The case study describes how an online inquiry project "Diabetes: A waste of energy" was developed by searching for, evaluating, modifying and then integrating as many…

  16. Subtleties of Hidden Quantifiers in Implication

    ERIC Educational Resources Information Center

    Shipman, Barbara A.

    2016-01-01

    Mathematical conjectures and theorems are most often of the form P(x) ⇒ Q(x), meaning ∀x, P(x) ⇒ Q(x). The hidden quantifier ∀x is crucial in understanding the implication as a statement with a truth value. Here P(x) and Q(x) alone are only predicates, without truth values, since they contain unquantified variables. But standard textbook…

  17. Protocol comparison for quantifying in situ mineralization

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In situ mineralization methods are intended to quantify mineralization under realistic environmental conditions. This study was conducted to compare soil moisture and temperature in intact soil cores contained in cylinders to those in adjacent bulk soil, compare the effect of two resin bag techniques...

  18. Quantifying the Thermal Fatigue of CPV Modules

    SciTech Connect

    Bosco, N.; Kurtz, S.

    2011-02-01

    A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger (ΔT) thermal cycles experienced for a location. High frequency data (<1/min) may be required to most accurately employ this method.
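
    A schematic of how accumulated die-attach damage might be tallied from meteorological data: one max-min temperature cycle per day, weighted by a Coffin-Manson-style power of ΔT (the exponent, the simple daily cycle counting in place of full rainflow counting, and the synthetic climates are assumptions, not the report's method):

        import numpy as np

        def accumulated_damage(cell_temps, samples_per_day, exponent=4.0):
            """Relative thermal-fatigue damage from a temperature series, counting one
            max-min cycle per day and weighting it as damage ~ (delta T)^exponent."""
            days = np.asarray(cell_temps, dtype=float).reshape(-1, samples_per_day)
            delta_t = days.max(axis=1) - days.min(axis=1)
            return float(np.sum(delta_t ** exponent))

        # Two synthetic hourly climates for one year: large vs. mild daily temperature swings.
        rng = np.random.default_rng(0)
        hours = np.arange(365 * 24)
        diurnal = np.sin(2 * np.pi * (hours % 24) / 24)
        city_a = 45 + 25 * diurnal + rng.normal(0, 2, hours.size)
        city_b = 40 + 10 * diurnal + rng.normal(0, 2, hours.size)
        ratio = accumulated_damage(city_a, 24) / accumulated_damage(city_b, 24)
        print("relative damage, city A vs. city B:", round(ratio, 1))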

  19. Quantifiable diagnosis of muscular dystrophies and neurogenic atrophies through network analysis

    PubMed Central

    2013-01-01

    Background The diagnosis of neuromuscular diseases is strongly based on the histological characterization of muscle biopsies. However, this morphological analysis is mostly a subjective process and difficult to quantify. We have tested if network science can provide a novel framework to extract useful information from muscle biopsies, developing a novel method that analyzes muscle samples in an objective, automated, fast and precise manner. Methods Our database consisted of 102 muscle biopsy images from 70 individuals (including controls, patients with neurogenic atrophies and patients with muscular dystrophies). We used this to develop a new method, Neuromuscular DIseases Computerized Image Analysis (NDICIA), that uses network science analysis to capture the defining signature of muscle biopsy images. NDICIA characterizes muscle tissues by representing each image as a network, with fibers serving as nodes and fiber contacts as links. Results After a ‘training’ phase with control and pathological biopsies, NDICIA was able to quantify the degree of pathology of each sample. We validated our method by comparing NDICIA quantification of the severity of muscular dystrophies with a pathologist’s evaluation of the degree of pathology, resulting in a strong correlation (R = 0.900, P <0.00001). Importantly, our approach can be used to quantify new images without the need for prior ‘training’. Therefore, we show that network science analysis captures the useful information contained in muscle biopsies, helping the diagnosis of muscular dystrophies and neurogenic atrophies. Conclusions Our novel network analysis approach will serve as a valuable tool for assessing the etiology of muscular dystrophies or neurogenic atrophies, and has the potential to quantify treatment outcomes in preclinical and clinical trials. PMID:23514382
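
    A toy sketch of the representation described (fibers as nodes, fiber contacts as links), using networkx to build such a graph and extract the kind of graph-level features a pipeline like this could quantify; the circular-fiber contact rule and the chosen features are illustrative, not NDICIA itself:

        import numpy as np
        import networkx as nx

        def fiber_contact_graph(centroids, radii, touch_tol=1.0):
            """One node per fiber; link fibers whose outlines come within touch_tol
            of each other (circular-fiber approximation)."""
            g = nx.Graph()
            g.add_nodes_from(range(len(centroids)))
            for i in range(len(centroids)):
                for j in range(i + 1, len(centroids)):
                    gap = np.linalg.norm(centroids[i] - centroids[j]) - radii[i] - radii[j]
                    if gap <= touch_tol:
                        g.add_edge(i, j)
            return g

        # Illustrative "biopsy": random fiber centroids and radii in a 100 x 100 um field.
        rng = np.random.default_rng(2)
        centroids = rng.uniform(0, 100, size=(80, 2))
        radii = rng.uniform(2, 6, size=80)
        g = fiber_contact_graph(centroids, radii)
        print("mean contacts per fiber:", round(np.mean([d for _, d in g.degree()]), 2))
        print("average clustering coefficient:", round(nx.average_clustering(g), 2))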

  20. Entropy generation method to quantify thermal comfort

    NASA Technical Reports Server (NTRS)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

    The present paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and quantify thermal comfort. The approach involves development of an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines both human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations to examine human thermal responses to different environmental conditions. The outputs from the simulations, which include human thermal responses, and the input data consisting of environmental conditions are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort Index values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values. The PMV values are generated by using, in the regression equation, the corresponding air temperatures and vapor pressures that were used in the computer simulation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study is needed in the future to fully establish the validity of the OTCI formula and the model. One practical application of this index is that it could be integrated into thermal control systems to develop human-centered environmental control systems for potential use in aircraft, mass transit vehicles, intelligent building systems, and space vehicles.

  1. DOE: Quantifying the Value of Hydropower in the Electric Grid

    SciTech Connect

    2012-12-31

    The report summarizes research to Quantify the Value of Hydropower in the Electric Grid. This 3-year DOE study focused on defining the value of hydropower assets in a changing electric grid. Methods are described for valuation and planning of pumped storage and conventional hydropower. The project team conducted plant case studies, electric system modeling, market analysis, cost data gathering, and evaluations of operating strategies and constraints. Five other reports detailing these research results are available at the project website, www.epri.com/hydrogrid. With increasing deployment of wind and solar renewable generation, many owners, operators, and developers of hydropower have recognized the opportunity to provide more flexibility and ancillary services to the electric grid. To quantify the value of services, this study focused on the Western Electric Coordinating Council region. A security-constrained, unit commitment and economic dispatch model was used to quantify the role of hydropower for several future energy scenarios up to 2020. This hourly production simulation considered transmission requirements to deliver energy, including future expansion plans. Both energy and ancillary service values were considered. Addressing specifically the quantification of pumped storage value, no single value stream dominated predicted plant contributions in various energy futures. Modeling confirmed that service value depends greatly on location and on competition with other available grid support resources. In this summary, ten different value streams related to hydropower are described. These fell into three categories: operational improvements, new technologies, and electricity market opportunities. Of these ten, the study was able to quantify a monetary value in six by applying both present day and future scenarios for operating the electric grid. This study confirmed that hydropower resources across the United States contribute significantly to operation of the grid in terms of energy, capacity, and ancillary services. Many potential improvements to existing hydropower plants were found to be cost-effective. Pumped storage is the most likely form of large new hydro asset expansion in the U.S.; however, justifying investments in new pumped storage plants remains very challenging with current electricity market economics. Even over a wide range of possible energy futures, up to 2020, no energy future was found to bring quantifiable revenues sufficient to cover estimated costs of plant construction. Value streams not quantified in this study may provide a different cost-benefit balance and an economic tipping point for hydro. Future studies are essential in the quest to quantify the full potential value. Additional research should consider the value of services provided by advanced storage hydropower and pumped storage at smaller time steps for integration of variable renewable resources, and should include all possible value streams such as capacity value and portfolio benefits, i.e., reducing cycling on traditional generation.

  2. Development and application of a method for quantifying factors affecting chloramine decay in service reservoirs.

    PubMed

    Sathasivan, Arumugam; Krishna, K C Bal; Fisher, Ian

    2010-08-01

    Service reservoirs play an important role in maintaining water quality in distribution systems. Several factors affect the reservoir water quality, including bulk water reactions, stratification, sediment accumulation and wall reactions. It is generally thought that biofilm and sediments can harbour microorganisms, especially in chloraminated reservoirs, but their impact on disinfectant loss has not been quantified. Hence, debate exists as to the extent of the problem. To quantify the impact, the reservoir acceleration factor (F(Ra)) is defined. This factor represents the acceleration of chloramine decay arising from all causes, including changes in retention time, assuming that the reservoir is completely mixed. Such an approach quantifies the impact of factors, other than chemical reactions, in the bulk water. Data from three full-scale chloraminated service reservoirs in distribution systems of Sydney, Australia, were analysed to demonstrate the generality of the method. Results showed that in two large service reservoirs (404 × 10³ m³ and 82 × 10³ m³) there was minimal impact from biofilm/sediment. However, in a small reservoir (3 × 10³ m³), the biofilm/sediment had significant impact. In both small and large reservoirs, the effect of stratification was significant. PMID:20621323

  3. Quantifying disturbed hill dipterocarp forest lands in Ulu Tembeling, Malaysia with HRV/SPOT images

    NASA Astrophysics Data System (ADS)

    Jusoff, Kamaruzaman; D'Souza, Giles

    A satellite remote sensing survey was conducted in a disturbed logged-over hill dipterocarp forest of Ulu Tembeling in northern Pahang, Malaysia to identify and quantify the site disturbance classes due to road construction and logging activities. The merged SPOT data path/row K271-J341 was taken on October 3, 1988. The SPOT scene was obtained in a computer-compatible tape format and manual analysis was initiated by selecting a representative subsection of the scene that covered the study area. Ground truthing/field work was carried out and parameters such as causal factors, forms, sizes, shapes and patterns of soil disturbance were recorded, measured and correlated with image classification. Image interpretation, registration and classification were conducted for visual image analysis. The results showed that forest soil disturbance could be easily detected and monitored with 93% accuracy. Six classes of soil disturbance were quantified and recognized, namely (i) primary forest road, (ii) secondary forest road, (iii) skid road, (iv) skid trail, (v) secondary landing, and (vi) primary landing. By reference to maps of licensing applications and logging permits, legal and unpermitted logging activity can be well identified. However, it is emphasized that interpreters should have a prior knowledge of the topography and the soil disturbance pattern of a logged-over forest area to be quantified and identified before actually beginning with image interpretation.

  4. Digital Optical Method to quantify the visual opacity of fugitive plumes

    NASA Astrophysics Data System (ADS)

    Du, Ke; Shi, Peng; Rood, Mark J.; Wang, Kai; Wang, Yang; Varma, Ravi M.

    2013-10-01

    Fugitive emissions of particulate matter (PM) raise public concerns due to their adverse impacts on human health and atmospheric visibility. Although the United States Environmental Protection Agency (USEPA) has not developed a standard method for quantifying the opacities of fugitive plumes, select states have developed human vision-based opacity methods for such applications. A digital photographic method, Digital Optical Method for fugitive plumes (DOMfugitive), is described herein for quantifying the opacities of fugitive plume emissions. Field campaigns were completed to evaluate this method by driving vehicles on unpaved roads to generate dust plumes. DOMfugitive was validated by performing simultaneous measurements using a co-located laser transmissometer. For 84% of the measurements, the individual absolute opacity difference values between the two methods were ≤15%. The average absolute opacity difference for all the measurements was 8.5%. The paired t-test showed no significant difference between the two methods at 99% confidence level. Comparisons of wavelength dependent opacities with grayscale opacities indicated that DOMfugitive was not sensitive to the wavelength in the visible spectrum evaluated during these field campaigns. These results encourage the development of a USEPA standard method for quantifying the opacities of fugitive PM plumes using digital photography, as an alternative to human-vision based approaches.

  5. Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation

    PubMed Central

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types. PMID:20640044

  6. Quantifying the CV: Adapting an Impact Assessment Model to Astronomy

    NASA Astrophysics Data System (ADS)

    Bohmier, K. A.

    2015-04-01

    We present the process and results of applying the Becker Model to the curriculum vitae of a Yale University astronomy professor. As background, in July 2013, the Becker Medical Library at Washington Univ. in St. Louis held a workshop for librarians on the Becker Model, a framework developed by research assessment librarians for quantifying medical researchers' individual and group outputs. Following the workshop, the model was analyzed for content to adapt it to the physical sciences.

  7. Model for quantifying absorption through abnormal skin

    SciTech Connect

    Scott, R.C.; Dugard, P.H.

    1986-02-01

    Techniques are available for quantitatively studying factors governing absorption through normal skin (in vivo and in vitro), but relatively little is known about the permeability of abnormal skin. We have designed and evaluated an in vivo model for quantifying absorption through abnormal skin. Absorption of [3H]mannitol and [14C]octyl benzoate was studied through altered rat skin. [3H]Mannitol penetrated normal skin much more slowly than did [14C]octyl benzoate. Abnormal skin was more permeable to both compounds: absorption of [3H]mannitol and [14C]octyl benzoate was more than 100 times and more than 2 times greater, respectively, than through normal skin. The in vivo model has been successfully used to quantify absorption through abnormal skin.

  8. Quantifying spatial correlations of general quantum dynamics

    NASA Astrophysics Data System (ADS)

    Rivas, Ángel; Müller, Markus

    2015-06-01

    Understanding the role of correlations in quantum systems is both a fundamental challenge as well as of high practical relevance for the control of multi-particle quantum systems. Whereas a lot of research has been devoted to study the various types of correlations that can be present in the states of quantum systems, in this work we introduce a general and rigorous method to quantify the amount of correlations in the dynamics of quantum systems. Using a resource-theoretical approach, we introduce a suitable quantifier and characterize the properties of correlated dynamics. Furthermore, we benchmark our method by applying it to the paradigmatic case of two atoms weakly coupled to the electromagnetic radiation field, and illustrate its potential use to detect and assess spatial noise correlations in quantum computing architectures.

  9. Quantifying Stock Return Distributions in Financial Markets

    PubMed Central

    Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
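
    A compact sketch of the type of analysis described: form logarithmic returns at a chosen time scale and estimate the tail exponent of their distribution with a Hill estimator (the synthetic heavy-tailed price series and the tail fraction are placeholders for the second-by-second Dow Jones data):

        import numpy as np

        def log_returns(prices, lag):
            """Logarithmic returns at a given lag (in samples)."""
            prices = np.asarray(prices, dtype=float)
            return np.log(prices[lag:] / prices[:-lag])

        def hill_tail_exponent(returns, tail_fraction=0.05):
            """Hill estimate of alpha in P(|r| > x) ~ x^(-alpha), using the largest
            tail_fraction of absolute returns."""
            x = np.sort(np.abs(returns))[::-1]
            k = max(2, int(tail_fraction * x.size))
            logs = np.log(x[:k] / x[k])
            return k / np.sum(logs)

        # Illustrative heavy-tailed price series (Student-t innovations), one sample per second.
        rng = np.random.default_rng(0)
        prices = 100 * np.exp(np.cumsum(1e-4 * rng.standard_t(df=3, size=200_000)))
        for lag in (300, 3600):
            print(f"lag {lag} s: tail exponent ~ {hill_tail_exponent(log_returns(prices, lag)):.2f}")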

  10. Quantifying Stock Return Distributions in Financial Markets.

    PubMed

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593

  11. Quantifying the sources of error in measurements of urine activity

    SciTech Connect

    Mozley, P.D.; Kim, H.J.; McElgin, W.

    1994-05-01

    Accurate scintigraphic measurements of radioactivity in the bladder and voided urine specimens can be limited by scatter, attenuation, and variations in the volume of urine that a given dose is distributed in. The purpose of this study was to quantify some of the errors that these problems can introduce. Transmission scans and 41 conjugate images of the bladder were sequentially acquired on a dual-headed camera over 24 hours in 6 subjects after the intravenous administration of 100-150 MBq (2.7-3.6 mCi) of a novel I-123 labeled benzamide. Renal excretion fractions were calculated by measuring the counts in conjugate images of 41 sequentially voided urine samples. A correction for scatter was estimated by comparing the count rates in images that were acquired with the photopeak centered at 159 keV and images that were made simultaneously with the photopeak centered at 126 keV. The decay- and attenuation-corrected geometric mean activities were compared to images of the net dose injected. Checks of the results were performed by measuring the total volume of each voided urine specimen and determining the activity in a 20 ml aliquot of it with a dose calibrator. Modeling verified the experimental results which showed that 34% of the counts were attenuated when the bladder had been expanded to a volume of 300 ml. Corrections for attenuation that were based solely on the transmission scans were limited by the volume of non-radioactive urine in the bladder before the activity was administered. The attenuation of activity in images of the voided urine samples was dependent on the geometry of the specimen container. The images of urine in standard, 300 ml laboratory specimen cups had 39±5% fewer counts than images of the same samples laid out in 3 liter bedpans. Scatter through the carbon fiber table substantially increased the number of counts in the images by an average of 14%.
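
    For context, conjugate-view counting of the kind used here typically estimates activity from the geometric mean of anterior and posterior counts, corrected for attenuation with the measured transmission factor; a minimal sketch with illustrative numbers (the calibration factor and count values are assumptions, not data from the study):

        import numpy as np

        def conjugate_view_activity(counts_ant, counts_post, transmission, cal_factor):
            """Geometric-mean conjugate-view estimate: A ~ sqrt(I_ant * I_post / T) / cal,
            where T = exp(-mu * d) is the measured transmission through the body."""
            return np.sqrt(counts_ant * counts_post / transmission) / cal_factor

        # Illustrative numbers: bladder ROI counts, a transmission-scan factor, and a
        # hypothetical calibration factor in counts per MBq.
        activity = conjugate_view_activity(counts_ant=41_000, counts_post=28_000,
                                           transmission=0.45, cal_factor=1200.0)
        print(f"estimated activity: {activity:.1f} MBq (illustrative)")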

  12. The Arizona Sun Corridor: Quantifying climatic implications of megapolitan development

    NASA Astrophysics Data System (ADS)

    Georgescu, M.; Moustaoui, M.; Mahalov, A.

    2010-12-01

    The local and regional-scale hydro-climatic impacts of land use and land cover change (LULCC) that result from urbanization require attention in light of future urban growth projections and related concerns for environmental sustainability. This is an especially serious issue over the southwestern U.S. where mounting pressure on the area’s natural desert environment and increasingly limited resources (e.g. water) exists, and is likely to worsen, due to unrelenting sprawl and associated urbanization. While previous modeling results have shown the degree to which the built environment has contributed to the region’s warming summertime climate, we use projections of future landscape change over the rapidly urbanizing Arizona Sun Corridor - an anticipated stretch of urban expanse that includes current metro Phoenix and Tucson - as surface boundary conditions to conduct high-resolution (order of 1-km) numerical simulations, over the seasonal timescale, to quantify the climatic effect of this relentlessly growing and increasingly vulnerable region. We use the latest version of the WRF modeling system to take advantage of several new capabilities, including a newly implemented nesting method used to refine the vertical mesh, and a comprehensive multi-story urban canopy scheme. We quantify the impact of projected (circa 2050) Sun Corridor megapolitan area on further development of the urban heat island (UHI), assess changes in the surface energy budget, with important implications for the near surface temperature and stability, and discuss modeled impacts on regional rainfall. Lastly, simulated effects are compared with projected warming due to increasing greenhouse gases (the GCMs from which these results are obtained currently do not take into account effects of urbanizing regions) and quantify the degree to which LULCC over the Arizona Sun Corridor will exacerbate regional anthropogenic climate change. A number of potential mitigation strategies are discussed (including effects of renewable energy), the simulated impact on anthropogenic heat production is quantified, and the degree to which future warming may be offset is estimated.

  13. Progress toward quantifying landscape-scale movement patterns of the glassy-winged sharpshooter and its natural enemies using a novel mark-capture technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Here we present the results of the first year of our research targeted at quantifying the landscape-level movement patterns of GWSS and its natural enemies. We showed that protein markers can be rapidly acquired and retained on insects for several weeks after marking directly in the field. Specifica...

  14. Quantifying the Impact of Scenic Environments on Health

    NASA Astrophysics Data System (ADS)

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-11-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of “scenicness” for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing.

  15. Quantifying the Impact of Scenic Environments on Health

    PubMed Central

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-01-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of “scenicness” for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing. PMID:26603464

  16. Quantifying variances in comparative RNA secondary structure prediction

    PubMed Central

    2013-01-01

    Background With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the “reliability score” reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions Our predictive accuracy improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for and variances in poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself. PMID:23634662

  17. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    ERIC Educational Resources Information Center

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows…

  18. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    ERIC Educational Resources Information Center

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows

  19. Oxygen-Enhanced MRI Accurately Identifies, Quantifies, and Maps Tumor Hypoxia in Preclinical Cancer Models.

    PubMed

    O'Connor, James P B; Boult, Jessica K R; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff J M; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2016-02-15

    There is a clinical need for noninvasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning, and therapy monitoring. Oxygen-enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed "Oxy-R fraction") would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here, we demonstrate that OE-MRI signals are accurate, precise, and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia noninvasively and is immediately translatable to the clinic. Cancer Res; 76(4); 787-95. ©2015 AACR. PMID:26659574
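
    A schematic of the per-voxel classification implied by the Oxy-R definition above, assuming a ΔR1 map from the oxygen challenge is already in hand (the zero threshold follows the text; the toy tumor geometry is illustrative):

        import numpy as np

        def oxy_r_fraction(delta_r1, mask):
            """Fraction of tumor voxels refractory to the oxygen challenge, i.e. voxels
            with no positive change in the longitudinal relaxation rate."""
            values = np.asarray(delta_r1, dtype=float)[mask]
            return float(np.mean(values <= 0))

        # Toy tumor map: a non-responding core (delta R1 <= 0) inside a responding rim.
        yy, xx = np.mgrid[-20:21, -20:21]
        radius = np.hypot(xx, yy)
        tumor_mask = radius <= 20
        delta_r1_map = np.where(radius <= 8, -0.001, 0.02)  # s^-1, illustrative
        print("Oxy-R fraction:", round(oxy_r_fraction(delta_r1_map, tumor_mask), 2))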

  20. Quantifying Ammonia Emissions from High Elevation Grassland and Forest Soils

    NASA Astrophysics Data System (ADS)

    Stratton, J. J.; Levin, E. J.; Ham, J. M.; Collett, J. L.; Borch, T.

    2010-12-01

    Extensive evidence has shown that Rocky Mountain National Park (RMNP) has undergone ecosystem changes due to excessive nitrogen (N) deposition. Previously, the Rocky Mountain Atmospheric Nitrogen and Sulfur (RoMANS) study was conducted to identify the species of N that deposit in RMNP. Results from the RoMANS study were used to identify contributions to N deposition in RMNP showing that local sources provided 33% of the wet-deposited ammonia in RMNP during the summer period. With the uncertainty of the type of local sources and their influence on the N in RMNP, the major goal of this study is to determine the amount of ammonia released from native grassland and forest soils. Intact soil cores were collected from native grassland and forest soils near RMNP on June 28th, July 20th, and August 9th 2010 and monitored in a laboratory chamber study for seven days. The samples were collected in the morning of the sampling dates to limit artifacts such as temperature variations. Ammonia gas released from the cores was collected in an acid trap and analyzed using Ion Chromatography. Results showed that ammonia gas released, based on an average (n = 18) over seven days, was 1.71 and 0.677 mg NH3/m2 soil/day for grassland and forest soils, respectively. Not all of the 36 soil cores investigated lost quantifiable amounts of ammonia. The results are small in comparison to other non-local sources (e.g. animal feeding operations, fertilizer, etc.), but further studies need to be conducted to determine its significance as a local source. Seasonal trends were visible with June 28th being higher than both July 20th and August 9th sampling. Grassland soil emissions were higher than forest soils emissions for all three sampling dates, and water loss from the soil cores did not strongly correlate with ammonia emission. Studies are also being conducted to understand the fate of wet-deposited N on native grassland and forest soils.

  1. Quantifying and scaling airplane performance in turbulence

    NASA Astrophysics Data System (ADS)

    Richardson, Johnhenri R.

    This dissertation studies the effects of turbulent wind on airplane airspeed and normal load factor, determining how these effects scale with airplane size and developing envelopes to account for them. The results have applications in design and control of aircraft, especially small-scale aircraft, for robustness with respect to turbulence. Using linearized airplane dynamics and the Dryden gust model, this dissertation presents analytical and numerical scaling laws for airplane performance in gusts, safety margins that guarantee, with specified probability, that steady flight can be maintained when stochastic wind gusts act upon an airplane, and envelopes to visualize these safety margins. Presented here for the first time are scaling laws for the phugoid natural frequency, phugoid damping ratio, airspeed variance in turbulence, and flight path angle variance in turbulence. The results show that small aircraft are more susceptible to high frequency gusts, that the phugoid damping ratio does not depend directly on airplane size, that the airspeed and flight path angle variances can be parameterized by the ratio of the phugoid natural frequency to a characteristic turbulence frequency, and that the coefficient of variation of the airspeed decreases with increasing airplane size. Accompanying numerical examples validate the results using eleven different airplane models, focusing on NASA's hypothetical Boeing 757 analog, the Generic Transport Model, and its operational 5.5% scale model, the NASA T2. Also presented here for the first time are stationary flight, where the flight state is a stationary random process, and the stationary flight envelope, an adjusted steady flight envelope to visualize safety margins for stationary flight. The dissertation shows that driving the linearized airplane equations of motion with stationary, stochastic gusts results in stationary flight. It also shows how feedback control can enlarge the stationary flight envelope by alleviating gust loads, though the enlargement is significantly limited by control surface saturation. The results end with a numerical example of a Navion general aviation aircraft performing various steady flight maneuvers in moderate turbulence, showing substantial reductions in the steady flight envelope for some combinations of maneuvers, turbulence, and safety margins.
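
    As a hedged illustration of one of the scaling relationships mentioned above, the classical Lanchester approximation gives the phugoid natural frequency as roughly sqrt(2)·g/V, so slower (typically smaller) aircraft have higher phugoid frequencies; this is a textbook approximation, not necessarily the dissertation's derivation, and the airspeeds below are made up.

        import math

        def phugoid_natural_frequency(airspeed_mps, g=9.81):
            """Classical Lanchester approximation: omega_ph ≈ sqrt(2) * g / V (rad/s)."""
            return math.sqrt(2.0) * g / airspeed_mps

        # smaller, slower aircraft -> higher phugoid frequency,
        # hence more sensitivity to higher-frequency gust energy
        for v in (15.0, 60.0, 230.0):   # small UAV, GA airplane, transport (m/s, illustrative)
            print(f"V = {v:6.1f} m/s  ->  omega_ph = {phugoid_natural_frequency(v):.3f} rad/s")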

  2. Assessment of MOF's Quality: Quantifying Defect Content in Crystalline Porous Materials.

    PubMed

    Al-Janabi, Nadeen; Fan, Xiaolei; Siperstein, Flor R

    2016-04-21

    A quantitative method for the assessment of defects in metal-organic frameworks (MOFs) is presented, based on isotherms calculated using Grand Canonical Monte Carlo (GCMC) simulations. Defects in MOF structures generated during synthesis and sample preparation can lead to large variations in experimentally measured adsorption isotherms but are difficult to quantify. We use as a case study CO2 adsorption on Cu3(BTC)2 MOF (BTC = benzene-1,3,5-tricarboxylic acid) to show that different samples reported in the literature have various proportions of principal pores blocked or side pores blocked, resulting in isotherms with different capacity and affinity toward CO2. The approach presented is easily generalized to other materials, showing that simulation results combined with experimentally measured gas adsorption isotherms can be used to quantitatively identify key defective features of the material. PMID:27050536

  3. Toward quantifying the deep Atlantic carbon storage increase during the last glaciation

    NASA Astrophysics Data System (ADS)

    Yu, J.; Menviel, L.; Jin, Z.

    2014-12-01

    Ice core records show that atmospheric CO2 concentrations during peak glacial time were ~30% lower than the levels during interglacial periods. The terrestrial biosphere carbon stock was likely reduced during glacials. Increased carbon storage in the deep ocean is thought to play an important role in lowering glacial atmospheric CO2. However, it has been challenging to quantify carbon storage changes in the deep ocean using existing proxy data. Here, we present deepwater carbonate ion reconstructions for a few locations in the deep Atlantic. These data allow us to estimate the minimum carbon storage increase in the deep Atlantic Ocean during the last glaciation. Our results show that, despite its relatively small volume, the deep Atlantic Ocean may contribute significantly to atmospheric CO2 variations at major climate transitions. Furthermore, our results suggest a strong coupling of ocean circulation and the carbon cycle in the deep Atlantic during the last glaciation.

  4. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance.

    PubMed

    Chevereau, Guillaume; Dravecká, Marta; Batur, Tugce; Guvenek, Aysegul; Ayhan, Dilay Hazal; Toprak, Erdal; Bollenbach, Tobias

    2015-11-01

    The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the "morbidostat", a device for evolving bacteria in well-controlled drug environments. Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations-an almost paradoxical behavior since this drug causes DNA damage and increases the mutation rate. Overall, we identified novel quantitative characteristics of the evolutionary landscape that provide the conceptual foundation for predicting the dynamics of drug resistance evolution. PMID:26581035

  5. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance

    PubMed Central

    Chevereau, Guillaume; Dravecká, Marta; Batur, Tugce; Guvenek, Aysegul; Ayhan, Dilay Hazal; Toprak, Erdal; Bollenbach, Tobias

    2015-01-01

    The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the “morbidostat”, a device for evolving bacteria in well-controlled drug environments. Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations—an almost paradoxical behavior since this drug causes DNA damage and increases the mutation rate. Overall, we identified novel quantitative characteristics of the evolutionary landscape that provide the conceptual foundation for predicting the dynamics of drug resistance evolution. PMID:26581035

  6. Quantifying uncertainties in U.S. wildland fire emissions across space and time scales

    NASA Astrophysics Data System (ADS)

    Larkin, N. K.; Strand, T. T.; Raffuse, S. M.; Drury, S.

    2011-12-01

    Smoke from wildland fire is a growing concern as air quality regulations tighten and public acceptance declines. Wildland fire emissions inventories are not only important for understanding smoke impacts on air quality but also for quantifying sources of greenhouse gas emissions. Wildland fire emissions can be calculated using a number of models and methods. We present an overview of results from the Smoke and Emissions Model Intercomparison Project (SEMIP), describing uncertainties in calculations of U.S. wildland fire emissions across space and time scales from single fires to annual national totals. Differences between emissions calculated with different models and systems, and between satellite-based algorithms and ground-based systems, are shown. The relative importance of uncertainties in fire size and available fuel data, consumption modeling techniques, and emission factors is compared and quantified and can be applied to various use cases that include air quality impact modeling and greenhouse gas accounting. The results of this work show where additional information and updated models can most improve wildland fire emission inventories.
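
    A minimal sketch of the bookkeeping form commonly used for fire emissions (burned area × fuel load × combustion completeness × emission factor), offered only to illustrate where the uncertain inputs enter the calculation; the values and the emission factor below are made up and are not SEMIP results.

        def fire_emission_kg(burned_area_m2, fuel_load_kg_m2,
                             combustion_completeness, emission_factor_g_kg):
            """Bookkeeping estimate of emissions for one species (kg)."""
            fuel_consumed_kg = burned_area_m2 * fuel_load_kg_m2 * combustion_completeness
            return fuel_consumed_kg * emission_factor_g_kg / 1000.0  # g/kg -> kg

        # toy example: 10 ha fire, 1.5 kg/m2 fuel load, 40% consumed, PM2.5 EF of 15 g/kg
        print(fire_emission_kg(10 * 10_000, 1.5, 0.40, 15.0))  # -> 900.0 kg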

  7. Quantifying antibody binding on protein microarrays using microarray nonlinear calibration.

    PubMed

    Yu, Xiaobo; Wallstrom, Garrick; Magee, Dewey Mitchell; Qiu, Ji; Mendoza, D Eliseo A; Wang, Jie; Bian, Xiaofang; Graves, Morgan; LaBaer, Joshua

    2013-05-01

    We present a microarray nonlinear calibration (MiNC) method for quantifying antibody binding to the surface of protein microarrays that significantly increases the linear dynamic range and reduces assay variation compared with traditional approaches. A serological analysis of guinea pig Mycobacterium tuberculosis models showed that a larger number of putative antigen targets were identified with MiNC, which is consistent with the improved assay performance of protein microarrays. MiNC has the potential to be employed in biomedical research using multiplex antibody assays that need quantitation, including the discovery of antibody biomarkers, clinical diagnostics with multi-antibody signatures, and construction of immune mathematical models. PMID:23662896

  8. A vector space method to quantify agreement in qualitative data

    PubMed Central

    McFarlane, Delano J.; Ancker, Jessica S.; Kukafka, Rita

    2008-01-01

    Interrater agreement in qualitative research is rarely quantified. We present a new method for assessing interrater agreement in the coding of focus group transcripts, based on vector space methods. We also demonstrate similarities between this vector method and two previously published interrater agreement methods. Using these methods, we showed that interrater agreement for the qualitative data was quite low, attributable in part to the subjective nature of the codes and in part to the very large number of possible codes. These methods of assessing inter-rater agreement have the potential to be useful in determining and improving reliability of qualitative codings. PMID:18999026
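
    A minimal sketch of a vector-space agreement score, assuming each rater's coding of a transcript segment is reduced to a vector of code counts and agreement is taken as the cosine between the two vectors; the code labels are hypothetical and this is not necessarily the authors' exact formulation.

        import math
        from collections import Counter

        def cosine_agreement(codes_a, codes_b):
            """Cosine similarity between two raters' code-count vectors (1.0 = identical)."""
            va, vb = Counter(codes_a), Counter(codes_b)
            all_codes = set(va) | set(vb)
            dot = sum(va[c] * vb[c] for c in all_codes)
            norm = math.sqrt(sum(v * v for v in va.values())) * \
                   math.sqrt(sum(v * v for v in vb.values()))
            return dot / norm if norm else 0.0

        # hypothetical codings of the same focus-group segment by two raters
        rater1 = ["barrier", "cost", "trust", "cost"]
        rater2 = ["cost", "trust", "access"]
        print(round(cosine_agreement(rater1, rater2), 3))  # -> 0.707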

  9. Quantifying the surface chemistry of 3D matrices in situ

    NASA Astrophysics Data System (ADS)

    Tzeranis, Dimitrios S.; So, Peter T. C.; Yannas, Ioannis V.

    2014-03-01

    Despite the major role of the matrix (the insoluble environment around cells) in physiology and pathology, there are very few and limited methods that can quantify the surface chemistry of a 3D matrix such as a biomaterial or tissue ECM. This study describes a novel optical-based methodology that can quantify the surface chemistry (density of adhesion ligands for particular cell adhesion receptors) of a matrix in situ. The methodology utilizes fluorescent analogs (markers) of the receptor of interest and a series of binding assays, where the amount of bound markers on the matrix is quantified via spectral multi-photon imaging. The study provides preliminary results for the quantification of the ligands for the two major collagen-binding integrins (α1β1, α2β1) in porous collagen scaffolds that have been shown to be able to induce maximum regeneration in transected peripheral nerves. The developed methodology opens the way for quantitative descriptions of the insoluble microenvironment of cells in physiology and pathology, and for integrating the matrix in quantitative models of cell signaling.

  10. Quantifying meta-correlations in financial markets

    NASA Astrophysics Data System (ADS)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether the mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications for risk management, portfolio optimization, and for increasing the stability of financial markets.
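
    A minimal sketch of the meta-correlation idea, assuming non-overlapping windows, an equal-weight "index" and synthetic returns: compute the mean pairwise correlation per window, then correlate that series with the windowed index return. The window length and data are illustrative, not the DJIA series used in the study.

        import numpy as np

        rng = np.random.default_rng(0)
        returns = rng.normal(0, 0.01, size=(1000, 30))      # synthetic daily returns, 30 stocks
        index_ret = returns.mean(axis=1)                     # equal-weight "index" return

        window = 22                                          # ~1 trading month
        mean_corr, idx_win_ret = [], []
        for start in range(0, len(returns) - window, window):
            block = returns[start:start + window]
            c = np.corrcoef(block.T)                         # 30x30 correlation matrix
            off_diag = c[np.triu_indices_from(c, k=1)]       # pairwise correlations only
            mean_corr.append(off_diag.mean())
            idx_win_ret.append(index_ret[start:start + window].sum())

        meta_corr = np.corrcoef(mean_corr, idx_win_ret)[0, 1]
        print(f"meta-correlation (mean correlation vs. index return): {meta_corr:.3f}")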

  11. Precise thermal NDE for quantifying structural damage

    SciTech Connect

    Del Grande, N.K.; Durbin, P.F.

    1995-09-18

    The authors demonstrated a fast, wide-area, precise thermal NDE imaging system to quantify aircraft corrosion damage, such as percent metal loss, above a threshold of 5% with 3% overall uncertainties. The DBIR precise thermal imaging and detection method has been used successfully to characterize defect types, and their respective depths, in aircraft skins, and in multi-layered composite materials used for wing patches, doublers and stiffeners. This precise thermal NDE inspection tool offers long-term potential benefits for evaluating the structural integrity of airframes, pipelines and waste containers. They proved the feasibility of the DBIR thermal NDE imaging system for inspecting concrete and asphalt-concrete bridge decks. As a logical extension to the successful feasibility study, they plan to inspect a concrete bridge deck from a moving vehicle to quantify the volumetric damage within the deck and the percent of the deck which has subsurface delaminations. Potential near-term benefits are in-service monitoring from a moving vehicle to inspect the structural integrity of the bridge deck. This would help prioritize the repair schedule for a reported 200,000 bridge decks in the US which need substantive repairs. Potential long-term benefits are affordable and reliable rehabilitation of bridge decks.

  12. Obtaining Laws Through Quantifying Experiments: Justifications of Pre-service Physics Teachers in the Case of Electric Current, Voltage and Resistance

    NASA Astrophysics Data System (ADS)

    Mäntylä, Terhi; Hämäläinen, Ari

    2015-07-01

    The language of physics is mathematics, and physics ideas, laws and models describing phenomena are usually represented in mathematical form. Therefore, an understanding of how to navigate between phenomena and the models representing them in mathematical form is important for a physics teacher so that the teacher can make physics understandable to students. Here, the focus is on "experimental mathematization": how laws are established through quantifying experiments. A sequence from qualitative experiments to mathematical formulations through quantifying experiments on electric current, voltage and resistance in pre-service physics teachers' laboratory reports is examined. The way students reason about and justify the mathematical formulation of the measurement results, and how they combine the treatment and presentation of empirical data with their justifications, is analyzed. The results show that pre-service physics teachers understand the basic idea of how quantifying experiments establish the quantities and laws but are not able to argue for it in a justified manner.
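
    A small illustration of the "quantifying experiment" step for this topic: fitting the proportional law V = R·I to made-up (current, voltage) measurement pairs by least squares; the data values and the through-the-origin fit are assumptions for the sketch, not taken from the students' reports.

        import numpy as np

        # hypothetical measurements from a quantifying experiment on one resistor
        current_A = np.array([0.05, 0.10, 0.15, 0.20, 0.25])
        voltage_V = np.array([0.51, 0.98, 1.52, 2.03, 2.49])

        # least-squares slope through the origin: R = sum(I*V) / sum(I*I)
        R = np.sum(current_A * voltage_V) / np.sum(current_A ** 2)
        residuals = voltage_V - R * current_A

        print(f"fitted resistance R = {R:.2f} ohm")            # ~10 ohm for this toy data
        print(f"largest residual    = {np.max(np.abs(residuals)):.3f} V")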

  13. Quantifying near-surface water exchange to assess hydrometeorological models

    NASA Astrophysics Data System (ADS)

    Parent, Annie-Claude; Anctil, François; Morais, Anne

    2013-04-01

    Modelling water exchange in the lower atmosphere-crop-soil system with hydrometeorological models yields actual evapotranspiration (ETa), a complex but critical value for numerous hydrological purposes, e.g. hydrological modelling and crop irrigation. This poster presents a summary of the hydrometeorological research activity conducted by our research group. The first purpose of this research is to quantify ETa and drainage of a rainfed potato crop located in South-Eastern Canada. Then, the outputs of the hydrometeorological models under study are compared with the observed turbulent fluxes. Afterwards, the sensitivity of the hydrometeorological models to different inputs is assessed for an environment under a changing climate. ETa was measured with micrometeorological instrumentation (CSAT3, Campbell SCI Inc.; Li7500, LiCor Inc.) and the eddy covariance technique. Near-surface soil heat flux and soil water content at different layers from 10 cm to 100 cm were also measured. Other parameters required by the hydrometeorological models were observed using standard meteorological instrumentation: shortwave and longwave solar radiation, wind speed, air temperature, atmospheric pressure and precipitation. The cumulative ETa during the growing season (123 days) was 331.5 mm, with a daily maximum of 6.5 mm at full coverage; precipitation was 350.6 mm, which is rather small compared with the historical mean (563.3 mm). This experiment allowed the calculation of crop coefficients that vary over the growing season for a rainfed potato crop. Land surface schemes such as CLASS (Canadian Land Surface Scheme) and c-ISBA (a Canadian version of the model Interaction Sol-Biosphère-Atmosphère) are 1-D physical hydrometeorological models that produce turbulent fluxes (including ETa) for a given crop. The schemes' performances were assessed for both the energy and water balance, based on the resulting turbulent fluxes and the observations. CLASS overestimated the turbulent fluxes (including ETa) as well as the fluctuations in the soil heat flux, which were higher than those measured. ETa and runoff were overestimated by c-ISBA, while drainage was weaker, compared to CLASS. On the whole, CLASS modelled drainage better. Further work includes: 1- comparing observations and results from CLASS to the French model SURFEX (Surface Externalisée), which uses the scheme ISBA, and 2- assessing the sensitivity of CLASS to different meteorological inputs (i.e. 6 regional climate models) in producing a consistent ETa in a context of climate change.

  14. Effect of soil structure on the growth of bacteria in soil quantified using CARD-FISH

    NASA Astrophysics Data System (ADS)

    Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Otten, Wilfred

    2014-05-01

    It has been reported that compaction of soil due to the use of heavy machinery has resulted in the reduction of crop yield. Compaction affects the physical properties of soil such as bulk density, soil strength and porosity. This causes an alteration in the soil structure which limits the mobility of nutrients, water and air infiltration and root penetration in soil. Several studies have been conducted to explore the effect of soil compaction on plant growth and development. However, there is scant information on the effect of soil compaction on the microbial community and its activities in soil. Understanding the effect of soil compaction on the microbial community is essential as microbial activities are very sensitive to abrupt environmental changes in soil. Therefore, the aim of this work was to investigate the effect of soil structure on the growth of bacteria in soil. The bulk density of soil was used as a soil physical parameter to quantify the effect of soil compaction. To detect and quantify bacteria in soil the method of catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH) was used. This technique results in high intensity fluorescent signals which make it easy to quantify bacteria against high levels of autofluorescence emitted by soil particles and organic matter. In this study, the bacterial strains Pseudomonas fluorescens SBW25 and Bacillus subtilis DSM10 were used. Soils of aggregate size 2-1 mm were packed at five different bulk densities in polyethylene rings (4.25 cm3). The soil rings were sampled on four different days. Results showed that the total number of bacteria counts was reduced significantly (P

  15. Quantifying proteinuria in hypertensive disorders of pregnancy.

    PubMed

    Amin, Sapna V; Illipilla, Sireesha; Hebbar, Shripad; Rai, Lavanya; Kumar, Pratap; Pai, Muralidhar V

    2014-01-01

    Background. Progressive proteinuria indicates worsening of the condition in hypertensive disorders of pregnancy, and hence its quantification guides the clinician in decision making and treatment planning. Objective. To evaluate the efficacy of spot dipstick analysis and the urinary protein-creatinine ratio (UPCR) in hypertensive disease of pregnancy for predicting 24-hour proteinuria. Subjects and Methods. A total of 102 patients meeting the inclusion criteria were evaluated with a preadmission urine dipstick test and UPCR performed on a spot voided sample. After admission, the entire 24-hour urine sample was collected and analysed for daily protein excretion. Dipstick estimation and UPCR were compared to the 24-hour results. Results. Seventy-eight patients (76.5%) had significant proteinuria of more than 300 mg/24 h. The dipstick method showed 59% sensitivity and 67% specificity for prediction of significant proteinuria. The area under the curve for UPCR was 0.89 (95% CI: 0.83 to 0.95, P < 0.001), showing 82% sensitivity and a 12.5% false-positive rate for a cutoff value of 0.45. Higher cutoff values (1.46 and 1.83) predicted heavy proteinuria (2 g and 3 g/24 h, resp.). Conclusion. This study suggests that the random urinary protein : creatinine ratio is a reliable investigation compared to the dipstick method for assessing proteinuria in hypertensive pregnant women. However, clinical laboratories should standardize the reference values for their setup. PMID:25302114

  16. Quantifying Proteinuria in Hypertensive Disorders of Pregnancy

    PubMed Central

    Amin, Sapna V.; Illipilla, Sireesha; Rai, Lavanya; Kumar, Pratap; Pai, Muralidhar V.

    2014-01-01

    Background. Progressive proteinuria indicates worsening of the condition in hypertensive disorders of pregnancy, and hence its quantification guides the clinician in decision making and treatment planning. Objective. To evaluate the efficacy of spot dipstick analysis and the urinary protein-creatinine ratio (UPCR) in hypertensive disease of pregnancy for predicting 24-hour proteinuria. Subjects and Methods. A total of 102 patients meeting the inclusion criteria were evaluated with a preadmission urine dipstick test and UPCR performed on a spot voided sample. After admission, the entire 24-hour urine sample was collected and analysed for daily protein excretion. Dipstick estimation and UPCR were compared to the 24-hour results. Results. Seventy-eight patients (76.5%) had significant proteinuria of more than 300 mg/24 h. The dipstick method showed 59% sensitivity and 67% specificity for prediction of significant proteinuria. The area under the curve for UPCR was 0.89 (95% CI: 0.83 to 0.95, P < 0.001), showing 82% sensitivity and a 12.5% false-positive rate for a cutoff value of 0.45. Higher cutoff values (1.46 and 1.83) predicted heavy proteinuria (2 g and 3 g/24 h, resp.). Conclusion. This study suggests that the random urinary protein : creatinine ratio is a reliable investigation compared to the dipstick method for assessing proteinuria in hypertensive pregnant women. However, clinical laboratories should standardize the reference values for their setup. PMID:25302114
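
    A minimal sketch of how sensitivity, specificity and the false-positive rate at a chosen UPCR cutoff (e.g. 0.45) are obtained from paired UPCR values and the 24-hour reference standard; the data below are made up and do not reproduce the study's numbers.

        def diagnostic_performance(upcr_values, has_proteinuria, cutoff=0.45):
            """Sensitivity, specificity and false-positive rate of UPCR >= cutoff
            against the 24-hour reference standard (>= 300 mg/24 h)."""
            tp = sum(u >= cutoff and d for u, d in zip(upcr_values, has_proteinuria))
            fn = sum(u < cutoff and d for u, d in zip(upcr_values, has_proteinuria))
            fp = sum(u >= cutoff and not d for u, d in zip(upcr_values, has_proteinuria))
            tn = sum(u < cutoff and not d for u, d in zip(upcr_values, has_proteinuria))
            sens = tp / (tp + fn)
            spec = tn / (tn + fp)
            return sens, spec, 1.0 - spec

        # made-up example data: (UPCR, reference 24-h proteinuria >= 300 mg?)
        upcr = [0.2, 0.5, 1.1, 0.3, 0.9, 0.5, 2.0, 0.3]
        ref  = [False, True, True, False, True, False, True, True]
        print(diagnostic_performance(upcr, ref))  # -> (0.8, 0.667, 0.333)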

  17. Olaparib shows promise in multiple tumor types.

    PubMed

    2013-07-01

    A phase II study of the PARP inhibitor olaparib (AstraZeneca) for cancer patients with inherited BRCA1 and BRCA2 gene mutations confirmed earlier results showing clinical benefit for advanced breast and ovarian cancers, and demonstrated evidence of effectiveness against pancreatic and prostate cancers. PMID:23847380

  18. Crisis of Japanese vascular flora shown by quantifying extinction risks for 1618 taxa.

    PubMed

    Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

    2014-01-01

    Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994-1995 and in 2003-2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction. PMID:24922311

  19. How to quantify conduits in wood?

    PubMed

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environment parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology. PMID:23507674

  20. Quantifying the Anthropogenic Footprint in Eastern China

    NASA Astrophysics Data System (ADS)

    Meng, Chunlei; Dou, Youjun

    2016-04-01

    The urban heat island (UHI) is one of the main focuses of urban climate study. The parameterization of anthropogenic heat (AH) is crucially important in UHI studies, but a universal method to parameterize the spatial pattern of AH is still lacking. This paper uses the NOAA DMSP/OLS nighttime light data to parameterize the spatial pattern of the AH. Two experiments were designed and performed to quantify the influence of AH on land surface temperature (LST) in eastern China and 24 big cities. The annual mean heating caused by AH is up to 1 K in eastern China. This paper uses the relative LST differences, rather than the absolute LST differences, between the control run and the contrast run of the Common Land Model (CoLM) to find the drivers. The heating effect of the anthropogenic footprint has less influence on relatively warm and wet cities.

  1. Quantifying fault recovery in multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Harary, Frank

    1990-01-01

    Various aspects of reliable computing are formalized and quantified with emphasis on efficient fault recovery. The mathematical model which proves to be most appropriate is provided by the theory of graphs. New measures for fault recovery are developed and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative of parallel computer architecture, and a pipeline as a typical configuration for program execution. The dependability qualities of such a system are defined with or without a fault. These qualities are determined by the resiliency triple defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring the recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.

  2. Quantifying morphogenesis in plants in 4D.

    PubMed

    Bassel, George W; Smith, Richard S

    2016-02-01

    Plant development occurs in 3D space over time (4D). Recent advances in image acquisition and computational analysis are now enabling development to be visualized and quantified in its entirety at the cellular level. The simultaneous quantification of reporter abundance and 3D cell shape change enables links between signaling processes and organ morphogenesis to be accomplished organ-wide and at single cell resolution. Current work to integrate this quantitative 3D image data with computational models is enabling causal relationships between gene expression and organ morphogenesis to be uncovered. Further technical advances in imaging and image analysis will enable this approach to be applied to a greater diversity of plant organs and will become a key tool to address many questions in plant development. PMID:26748353

  3. Quantifying International Travel Flows Using Flickr

    PubMed Central

    Barchiesi, Daniele; Moat, Helen Susannah; Alis, Christian; Bishop, Steven; Preis, Tobias

    2015-01-01

    Online social media platforms are opening up new opportunities to analyse human behaviour on an unprecedented scale. In some cases, the fast, cheap measurements of human behaviour gained from these platforms may offer an alternative to gathering such measurements using traditional, time consuming and expensive surveys. Here, we use geotagged photographs uploaded to the photo-sharing website Flickr to quantify international travel flows, by extracting the location of users and inferring trajectories to track their movement across time. We find that Flickr based estimates of the number of visitors to the United Kingdom significantly correlate with the official estimates released by the UK Office for National Statistics, for 28 countries for which official estimates are calculated. Our findings underline the potential for indicators of key aspects of human behaviour, such as mobility, to be generated from data attached to the vast volumes of photographs posted online. PMID:26147500

  4. Quantifying International Travel Flows Using Flickr.

    PubMed

    Barchiesi, Daniele; Moat, Helen Susannah; Alis, Christian; Bishop, Steven; Preis, Tobias

    2015-01-01

    Online social media platforms are opening up new opportunities to analyse human behaviour on an unprecedented scale. In some cases, the fast, cheap measurements of human behaviour gained from these platforms may offer an alternative to gathering such measurements using traditional, time consuming and expensive surveys. Here, we use geotagged photographs uploaded to the photo-sharing website Flickr to quantify international travel flows, by extracting the location of users and inferring trajectories to track their movement across time. We find that Flickr based estimates of the number of visitors to the United Kingdom significantly correlate with the official estimates released by the UK Office for National Statistics, for 28 countries for which official estimates are calculated. Our findings underline the potential for indicators of key aspects of human behaviour, such as mobility, to be generated from data attached to the vast volumes of photographs posted online. PMID:26147500
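
    A minimal sketch of the comparison step described above: correlating Flickr-derived visitor estimates with official estimates across countries; the per-country numbers are synthetic, not the UK Office for National Statistics figures.

        import numpy as np
        from scipy import stats

        # hypothetical per-country visitor counts (same 8 countries in both arrays, in thousands)
        flickr_estimates = np.array([120, 85, 300, 45, 60, 210, 150, 95])
        official_counts  = np.array([100, 90, 280, 50, 55, 230, 160, 80])

        r, p_value = stats.pearsonr(flickr_estimates, official_counts)
        print(f"Pearson r = {r:.3f}, p = {p_value:.4f}")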

  5. Crowdsourcing for quantifying transcripts: An exploratory study.

    PubMed

    Azzam, Tarek; Harman, Elena

    2016-02-01

    This exploratory study attempts to demonstrate the potential utility of crowdsourcing as a supplemental technique for quantifying transcribed interviews. Crowdsourcing is the harnessing of the abilities of many people to complete a specific task or a set of tasks. In this study multiple samples of crowdsourced individuals were asked to rate and select supporting quotes from two different transcripts. The findings indicate that the different crowdsourced samples produced nearly identical ratings of the transcripts, and were able to consistently select the same supporting text from the transcripts. These findings suggest that crowdsourcing, with further development, can potentially be used as a mixed method tool to offer a supplemental perspective on transcribed interviews. PMID:26519690

  6. Quantifying the Anthropogenic Footprint in Eastern China.

    PubMed

    Meng, Chunlei; Dou, Youjun

    2016-01-01

    The urban heat island (UHI) is one of the main focuses of urban climate study. The parameterization of anthropogenic heat (AH) is crucially important in UHI studies, but a universal method to parameterize the spatial pattern of AH is still lacking. This paper uses the NOAA DMSP/OLS nighttime light data to parameterize the spatial pattern of the AH. Two experiments were designed and performed to quantify the influence of AH on land surface temperature (LST) in eastern China and 24 big cities. The annual mean heating caused by AH is up to 1 K in eastern China. This paper uses the relative LST differences, rather than the absolute LST differences, between the control run and the contrast run of the Common Land Model (CoLM) to find the drivers. The heating effect of the anthropogenic footprint has less influence on relatively warm and wet cities. PMID:27067132

  7. Quantifying Power Grid Risk from Geomagnetic Storms

    NASA Astrophysics Data System (ADS)

    Homeier, N.; Wei, L. H.; Gannon, J. L.

    2012-12-01

    We are creating a statistical model of the geophysical environment that can be used to quantify the geomagnetic storm hazard to power grid infrastructure. Our model is developed using a database of surface electric fields for the continental United States during a set of historical geomagnetic storms. These electric fields are derived from the SUPERMAG compilation of worldwide magnetometer data and surface impedances from the United States Geological Survey. This electric field data can be combined with a power grid model to determine GICs per node and reactive MVARs at each minute during a storm. Using publicly available substation locations, we derive relative risk maps by location by combining magnetic latitude and ground conductivity. We also estimate the surface electric fields during the August 1972 geomagnetic storm that caused a telephone cable outage across the middle of the United States. This event produced the largest surface electric fields in the continental U.S. in at least the past 40 years.
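
    A hedged sketch of the commonly used linear relation GIC ≈ a·E_north + b·E_east for a single grid node, where a and b are network coefficients; the coefficients and the electric-field time series below are made up and are not derived from the SUPERMAG or USGS data mentioned above.

        import numpy as np

        def gic_time_series(e_north_Vkm, e_east_Vkm, a=120.0, b=-45.0):
            """Geomagnetically induced current at one node: GIC = a*E_north + b*E_east.

            a, b : network coefficients (A per V/km) for this node -- hypothetical values
            e_*  : surface electric field components (V/km), one sample per minute
            """
            return a * np.asarray(e_north_Vkm) + b * np.asarray(e_east_Vkm)

        # one hour of made-up 1-minute electric field samples during a storm
        t = np.arange(60)
        e_n = 0.5 * np.sin(2 * np.pi * t / 30.0)     # V/km
        e_e = 0.2 * np.cos(2 * np.pi * t / 45.0)     # V/km
        gic = gic_time_series(e_n, e_e)
        print(f"peak |GIC| over the hour: {np.max(np.abs(gic)):.1f} A")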

  8. Quantifying the Anthropogenic Footprint in Eastern China

    PubMed Central

    Meng, Chunlei; Dou, Youjun

    2016-01-01

    The urban heat island (UHI) is one of the main focuses of urban climate study. The parameterization of anthropogenic heat (AH) is crucially important in UHI studies, but a universal method to parameterize the spatial pattern of AH is still lacking. This paper uses the NOAA DMSP/OLS nighttime light data to parameterize the spatial pattern of the AH. Two experiments were designed and performed to quantify the influence of AH on land surface temperature (LST) in eastern China and 24 big cities. The annual mean heating caused by AH is up to 1 K in eastern China. This paper uses the relative LST differences, rather than the absolute LST differences, between the control run and the contrast run of the Common Land Model (CoLM) to find the drivers. The heating effect of the anthropogenic footprint has less influence on relatively warm and wet cities. PMID:27067132

  9. 3D Wind: Quantifying wind speed and turbulence intensity

    NASA Astrophysics Data System (ADS)

    Barthelmie, R. J.; Pryor, S. C.; Wang, H.; Crippa, P.

    2013-12-01

    Integrating measurements and modeling of wind characteristics for wind resource assessment and wind farm control is increasingly challenging as the scales of wind farms increases. Even offshore or in relatively homogeneous landscapes, there are significant gradients of both wind speed and turbulence intensity on scales that typify large wind farms. Our project is, therefore, focused on (i) improving methods to integrate remote sensing and in situ measurements with model simulations to produce a 3-dimensional view of the flow field on wind farm scales and (ii) investigating important controls on the spatiotemporal variability of flow fields within the coastal zone. The instrument suite deployed during the field experiments includes; 3-D sonic and cup anemometers deployed on meteorological masts and buoys, anemometers deployed on tethersondes and an Unmanned Aerial Vehicle, multiple vertically-pointing continuous-wave lidars and scanning Doppler lidars. We also integrate data from satellite-borne instrumentation - specifically synthetic aperture radar and scatterometers and output from the Weather Research and Forecast (WRF) model. Spatial wind fields and vertical profiles of wind speed from WRF and from the full in situ observational suite exhibit excellent agreement in a proof-of-principle experiment conducted in north Indiana particularly during convective conditions, but showed some discrepancies during the breakdown of the nocturnal stable layer. Our second experiment in May 2013 focused on triangulating a volume above an area of coastal water extending from the port in Cleveland out to an offshore water intake crib (about 5 km) and back to the coast, and includes extremely high resolution WRF simulations designed to characterize the coastal zone. Vertically pointing continuous-wave lidars were operated at each apex of the triangle, while the scanning Doppler lidar scanned out across the water over 90 degrees azimuth angle. Preliminary results pertaining to objective (i) indicate there is good agreement between wind and turbulence profiles to 200 m as measured by the vertically pointing lidars with the expected modification of the offshore profiles relative to the profiles at the coastline. However, these profiles do not always fully agree with wind speed and direction profiles measured by the scanning Doppler lidar. Further investigation is required to elucidate these results and to analyze whether these discrepancies occur during particular atmospheric conditions. Preliminary results regarding controls on flow in the coastal zone (i.e. objective ii) include clear evidence that the wind profile to 200 m was modified due to swell during unstable conditions even under moderate to high wind speed conditions. The measurement campaigns will be described in detail, with a view to evaluating optimal strategies for offshore measurement campaigns and in the context of quantifying wind and turbulence in a 3D volume.

  10. Quantifying and predicting interpretational uncertainty in cross-sections

    NASA Astrophysics Data System (ADS)

    Randle, Charles; Bond, Clare; Monaghan, Alison; Lark, Murray

    2015-04-01

    Cross-sections are often constructed from data to create a visual impression of the geologist's interpretation of the sub-surface geology. However as with all interpretations, this vision of the sub-surface geology is uncertain. We have designed and carried out an experiment with the aim of quantifying the uncertainty in geological cross-sections created by experts interpreting borehole data. By analysing different attributes of the data and interpretations we reflect on the main controls on uncertainty. A group of ten expert modellers at the British Geological Survey were asked to interpret an 11.4 km long cross-section from south-east Glasgow, UK. The data provided consisted of map and borehole data of the superficial deposits and shallow bedrock. Each modeller had a unique set of 11 boreholes removed from their dataset, to which their interpretations of the top of the bedrock were compared. This methodology allowed quantification of how far from the 'correct answer' each interpretation is at 11 points along each interpreted cross-section line; through comparison of the interpreted and actual bedrock elevations in the boreholes. This resulted in the collection of 110 measurements of the error to use in further analysis. To determine the potential control on uncertainty various attributes relating to the modeller, the interpretation and the data were recorded. Modellers were asked to fill out a questionnaire asking for information; such as how much 3D modelling experience they had, and how long it took them to complete the interpretation. They were also asked to record their confidence in their interpretations graphically, in the form of a confidence level drawn onto the cross-section. Initial analysis showed the majority of the experts' interpreted bedrock elevations within 5 metres of those recorded in the withheld boreholes. Their distribution is peaked and symmetrical about a mean of zero, indicating that there was no tendency for the experts to either under or over estimate the elevation of the bedrock. More complex analysis was completed in the form of linear mixed effects modelling. The modelling was used to determine if there were any correlations between the error and any other parameter recorded in the questionnaire, section or the initial dataset. This has resulted in the determination of both data based and interpreter based controls on uncertainty, adding insight into how uncertainty can be predicted, as well as how interpretation workflows can be improved. Our results will inform further experiments across a wide variety of geological situations to build understanding and best practice workflows for cross-section interpretation to reduce uncertainty.
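
    A minimal sketch of how an analysis of this kind could be set up as a linear mixed-effects model, with interpretation error as the response, an attribute such as experience as a fixed effect, and the modeller as a random effect; the column names and toy data are hypothetical, not the experiment's records.

        import pandas as pd
        import statsmodels.formula.api as smf

        # hypothetical table: one row per withheld borehole per interpreter
        data = pd.DataFrame({
            "error_m":    [1.2, -0.5, 3.1, 0.4, -2.2, 0.9, 1.8, -0.7, 0.1, 2.4],
            "experience": [2,    2,   10,  10,   5,   5,   1,   1,    8,   8 ],  # years
            "modeller":   ["A", "A", "B", "B", "C", "C", "D", "D", "E", "E"],
        })

        # fixed effect of experience, random intercept per modeller
        model = smf.mixedlm("error_m ~ experience", data, groups=data["modeller"])
        result = model.fit()
        print(result.summary())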

  11. Quantifying touch feel perception: tribological aspects

    NASA Astrophysics Data System (ADS)

    Liu, X.; Yue, Z.; Cai, Z.; Chetwynd, D. G.; Smith, S. T.

    2008-08-01

    We report a new investigation into how surface topography and friction affect human touch-feel perception. In contrast with previous work based on micro-scale mapping of surface mechanical and tribological properties, this investigation focuses on the direct measurement of the friction generated when a fingertip is stroked on a test specimen. A special friction apparatus was built for the in situ testing, based on a linear flexure mechanism with both contact force and frictional force measured simultaneously. Ten specimens, already independently assessed in a 'perception clinic', with materials including natural wood, leather, engineered plastics and metal were tested and the results compared with the perceived rankings. Because surface geometrical features are suspected to play a significant role in perception, a second set of samples, all of one material, were prepared and tested in order to minimize the influence of properties such as hardness and thermal conductivity. To minimize subjective effects, all specimens were also tested in a roller-on-block configuration based upon the same friction apparatus, with the roller materials being steel, brass and rubber. This paper reports the detailed design and instrumentation of the friction apparatus, the experimental set-up and the friction test results. Attempts have been made to correlate the measured properties and the perceived feelings for both roughness and friction. The results show that the measured roughness and friction coefficient both have a strong correlation with the rough-smooth and grippy-slippery feelings.

  12. Quantifying uncertainty in observational rainfall datasets

    NASA Astrophysics Data System (ADS)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    The CO-ordinated Regional Downscaling Experiment (CORDEX) has to date seen the publication of at least ten journal papers that examine the African domain during 2012 and 2013. Five of these papers consider Africa generally (Nikulin et al. 2012, Kim et al. 2013, Hernández-Díaz et al. 2013, Laprise et al. 2013, Panitz et al. 2013) and five have regional foci: Tramblay et al. (2013) on Northern Africa, Mariotti et al. (2014) and Gbobaniyi et al. (2013) on West Africa, Endris et al. (2013) on East Africa and Kalognomou et al. (2013) on southern Africa. There are also a further three papers that the authors know to be under review. These papers all use observed rainfall and/or temperature data to evaluate/validate the regional model output and often proceed to assess projected changes in these variables due to climate change in the context of these observations. The most popular reference rainfall data used are the CRU, GPCP, GPCC, TRMM and UDEL datasets. However, as Kalognomou et al. (2013) point out, there are many other rainfall datasets available for consideration, for example, CMORPH, FEWS, TAMSAT & RIANNAA, TAMORA and the WATCH & WATCH-DEI data. They, with others (Nikulin et al. 2012, Sylla et al. 2012), show that the observed datasets can have a very wide spread at a particular space-time coordinate. As more ground-, space- and reanalysis-based rainfall products become available, all of which use different methods to produce precipitation data, the selection of reference data is becoming an important factor in model evaluation. A number of factors can contribute to uncertainty in terms of the reliability and validity of the datasets, such as radiance conversion algorithms, the quantity and quality of available station data, interpolation techniques and the blending methods used to combine satellite and gauge-based products. However, to date no comprehensive study has been performed to evaluate the uncertainty in these observational datasets. We assess 18 gridded rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference) with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets, which demonstrates that some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show in east Africa that the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations and also that, where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa. Endris, H. S., P. Omondi, S. Jain, C. Lennard, B. Hewitson, L. Chang'a, J. L. Awange, A. Dosio, P. Ketiem, G. Nikulin, H-J. Panitz, M. Büchner, F. Stordal, and L. Tazalika (2013) Assessment of the Performance of CORDEX Regional Climate Models in Simulating East African Rainfall. J. Climate, 26, 8453-8475. DOI: 10.1175/JCLI-D-12-00708.1 Gbobaniyi, E., A. Sarr, M. B. Sylla, I. Diallo, C. Lennard, A. Dosio, A. Dhiédiou, A. Kamga, N. A. B. Klutse, B. Hewitson, and B. Lamptey (2013) Climatology, annual cycle and interannual variability of precipitation and temperature in CORDEX simulations over West Africa. Int. J. Climatol., DOI: 10.1002/joc.3834 Hernández-Díaz, L., R. Laprise, L. Sushama, A. Martynov, K. 
Winger, and B. Dugas (2013) Climate simulation over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 40, 1415-1433. DOI: 10.1007/s00382-012-1387-z Kalognomou, E., C. Lennard, M. Shongwe, I. Pinto, A. Favre, M. Kent, B. Hewitson, A. Dosio, G. Nikulin, H. Panitz, and M. Büchner (2013) A diagnostic evaluation of precipitation in CORDEX models over southern Africa. Journal of Climate, 26, 9477-9506. DOI: 10.1175/JCLI-D-12-00703.1 Kim, J., D. E. Waliser, C. A. Mattmann, C. E. Goodale, A. F. Hart, P. A. Zimdars, D. J. Crichton, C. Jones, G. Nikulin, B. Hewitson, C. Jack, C. Lennard, and A. Favre (2013) Evaluation of the CORDEX-Africa multi-RCM hindcast: systematic model errors. Clim. Dyn. 42:1189-1202. DOI: 10.1007/s00382-013-1751-7 Laprise, R., L. Hernández-Díaz, K. Tete, L. Sushama, L. Šeparović, A. Martynov, K. Winger, and M. Valin (2013) Climate projections over CORDEX Africa domain using the fifth-generation Canadian Regional Climate Model (CRCM5). Clim. Dyn. 41:3219-3246. DOI: 10.1007/s00382-012-1651-2 Mariotti, L., I. Diallo, E. Coppola, and F. Giorgi (2014) Seasonal and intraseasonal changes of African monsoon climates in 21st century CORDEX projections. Climatic Change, 1-13. DOI: 10.1007/s10584-014-1097-0 Nikulin, G., C. Jones, F. Giorgi, G. Asrar, M. Büchner, R. Cerezo-Mota, O. Bøssing Christensen, M. Déqué, J. Fernandez, A. Hänsler, E. van Meijgaard, P. Samuelsson, M. Bamba Sylla, and L. Sushama (2012) Precipitation Climatology in an Ensemble of CORDEX-Africa Regional Climate Simulations. J. Climate, 25, 6057-6078. DOI: 10.1175/JCLI-D-11-00375.1 Panitz, H.-J., A. Dosio, M. Büchner, D. Lüthi, and K. Keuler (2013) COSMO-CLM (CCLM) climate simulations over CORDEX Africa domain: analysis of the ERA-Interim driven simulations at 0.44 degree and 0.22 degree resolution. Clim. Dyn., DOI: 10.1007/s00382-013-1834-5 Sylla, M. B., F. Giorgi, E. Coppola, and L. Mariotti (2012) Uncertainties in daily rainfall over Africa: assessment of gridded observation products and evaluation of a regional climate model simulation. Int. J. Climatol., 33:1805-1817. DOI: 10.1002/joc.3551 Tramblay Y., D. Ruelland, S. Somot, R. Bouaicha, and E. Servat (2013) High-resolution Med-CORDEX regional climate model simulations for hydrological impact studies: a first evaluation of the ALADIN-Climate model in Morocco. Hydrol. Earth Syst. Sci. Discuss., 10, 5687-5737. DOI: 10.5194/hessd-10-5687-2013
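
    A minimal sketch of one way to quantify spread among gridded rainfall products, assuming the ensemble mean is used as the reference (as above) and computing per-product biases; the fields below are synthetic, not the 18 real datasets.

        import numpy as np

        rng = np.random.default_rng(1)
        # synthetic monthly rainfall (mm) for 5 products on a tiny 4x4 grid
        products = rng.gamma(shape=2.0, scale=40.0, size=(5, 4, 4))

        ensemble_mean = products.mean(axis=0)                 # reference field
        bias = products - ensemble_mean                       # per-product bias field
        relative_bias = bias / ensemble_mean * 100.0          # percent

        for i, b in enumerate(relative_bias):
            print(f"product {i}: domain-mean bias = {b.mean():+5.1f} %")
        print(f"inter-product spread (std of domain means): "
              f"{products.mean(axis=(1, 2)).std():.1f} mm")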

  13. A mass-balance model to separate and quantify colloidal and solute redistributions in soil

    USGS Publications Warehouse

    Bern, C.R.; Chadwick, O.A.; Hartshorn, A.S.; Khomo, L.M.; Chorover, J.

    2011-01-01

    Studies of weathering and pedogenesis have long used calculations based upon low solubility index elements to determine mass gains and losses in open systems. One of the questions currently unanswered in these settings is the degree to which mass is transferred in solution (solutes) versus suspension (colloids). Here we show that differential mobility of the low solubility, high field strength (HFS) elements Ti and Zr can trace colloidal redistribution, and we present a model for distinguishing between mass transfer in suspension and solution. The model is tested on a well-differentiated granitic catena located in Kruger National Park, South Africa. Ti and Zr ratios from parent material, soil and colloidal material are substituted into a mixing equation to quantify colloidal movement. The results show zones of both colloid removal and augmentation along the catena. Colloidal losses of 110 kg m-2 (-5% relative to parent material) are calculated for one eluviated soil profile. A downslope illuviated profile has gained 169 kg m-2 (10%) colloidal material. Elemental losses by mobilization in true solution are ubiquitous across the catena, even in zones of colloidal accumulation, and range from 1418 kg m-2 (-46%) for an eluviated profile to 195 kg m-2 (-23%) at the bottom of the catena. Quantification of simultaneous mass transfers in solution and suspension provides greater specificity on processes within soils and across hillslopes. Additionally, because colloids include both HFS and other elements, the ability to quantify their redistribution has implications for standard calculations of soil mass balances using such index elements. © 2011.
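
    A hedged sketch of a generic two-end-member mixing calculation in the spirit described above: given Ti/Zr ratios for parent material, colloid and soil, solve for the mass fraction of colloid-derived material in the soil. The concentrations below are made up and this is not the authors' exact model.

        def colloid_mass_fraction(ti_parent, zr_parent, ti_colloid, zr_colloid,
                                  ti_soil, zr_soil):
            """Mass fraction of colloid-derived material in a soil sample from a
            two-end-member (parent vs. colloid) mixing of Ti and Zr concentrations."""
            r_soil = ti_soil / zr_soil                    # observed Ti/Zr ratio of the soil
            a = ti_parent - r_soil * zr_parent
            b = ti_colloid - r_soil * zr_colloid
            return a / (a - b)                            # solves R_soil = mixed Ti / mixed Zr

        # made-up concentrations (mg/kg): colloids are relatively Ti-rich
        f = colloid_mass_fraction(ti_parent=3000, zr_parent=200,
                                  ti_colloid=6000, zr_colloid=150,
                                  ti_soil=3800, zr_soil=190)
        print(f"colloid-derived fraction of soil mass: {f:.2f}")  # -> 0.25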

  14. Quantifying the statistical complexity of low-frequency fluctuations in semiconductor lasers with optical feedback

    SciTech Connect

    Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.; Garcia-Ojalvo, J.

    2010-07-15

    Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon's entropy and the Martin, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser's pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser's dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
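
    A minimal sketch of the ordinal-pattern (Bandt-Pompe) entropy underlying this kind of analysis, applied to a short, hypothetical sequence of interdropout intervals; the full Martin-Plastino-Rosso complexity additionally multiplies this entropy by a normalized Jensen-Shannon disequilibrium term, which is omitted here:

```python
from collections import Counter
import math

def ordinal_pattern_entropy(series, order=3):
    """Normalized Shannon entropy of ordinal (permutation) patterns."""
    patterns = Counter()
    for i in range(len(series) - order + 1):
        window = series[i:i + order]
        # the rank ordering of the values defines the ordinal pattern
        pattern = tuple(sorted(range(order), key=lambda k: window[k]))
        patterns[pattern] += 1
    total = sum(patterns.values())
    probs = [c / total for c in patterns.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(order))  # normalize to [0, 1]

# Hypothetical interdropout intervals (seconds); real data would come from
# the experimental intensity time series.
intervals = [0.8, 1.2, 0.9, 1.5, 0.7, 1.1, 1.3, 0.6, 1.0, 1.4]
print(ordinal_pattern_entropy(intervals, order=3))
```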

  15. Quantifying nonverbal communicative behavior in face-to-face human dialogues

    NASA Astrophysics Data System (ADS)

    Skhiri, Mustapha; Cerrato, Loredana

    2002-11-01

    The study reported here is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the extent of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans were analyzed, with attention focused on those nonverbal behaviors that serve the highly relevant functions of regulating the conversational flow (i.e., turn taking) and providing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved during the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and such measurements can be implemented in animated talking heads.

  16. Quantifiers More or Less Quantify On-Line: ERP Evidence for Partial Incremental Interpretation

    ERIC Educational Resources Information Center

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge ("Farmers grow crops/worms as their primary source of income"), Experiment 1 found larger N400s for atypical ("worms") than typical objects…

  18. Quantifying Subsidence in the 1999-2000 Arctic Winter Vortex

    NASA Technical Reports Server (NTRS)

    Greenblatt, Jeffery B.; Jost, Hans-juerg; Loewenstein, Max; Podolske, James R.; Bui, T. Paul; Elkins, James W.; Moore, Fred L.; Ray, Eric A.; Sen, Bhaswar; Margitan, James J.; Hipskind, R. Stephen (Technical Monitor)

    2000-01-01

    Quantifying the subsidence of the polar winter stratospheric vortex is essential to the analysis of ozone depletion, as chemical destruction often occurs against a large, altitude-dependent background ozone concentration. Using N2O measurements made during SOLVE on a variety of platforms (ER-2, in-situ balloon and remote balloon), the 1999-2000 Arctic winter subsidence is determined from N2O-potential temperature correlations along several N2O isopleths. The subsidence rates are compared to those determined in other winters, and comparison is also made with results from the SLIMCAT stratospheric chemical transport model.

  19. Quantifying Hierarchy Stimuli in Systematic Desensitization Via GSR: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Barabasz, Arreed F.

    1974-01-01

    The aim of the method for quantifying hierarchy stimuli by Galvanic Skin Resistance recordings is to improve the results of systematic desensitization by attenuating the subjective influences in hierarchy construction which are common in traditional procedures. (Author/CS)

  20. Quantifying Uncertainties in Rainfall Maps from Cellular Communication Networks

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Rios Gaona, M. F.; Overeem, A.; Leijnse, H.

    2014-12-01

    The core idea behind rainfall retrievals from commercial microwave link networks is to measure the decrease in power due to attenuation of the electromagnetic signal by raindrops along the link path. Accurate rainfall measurements are of vital importance in hydrological applications, for instance, flash-flood early-warning systems, agriculture, and climate modeling. Hence, such an alternative technique fulfills the need for measurements with higher resolution in time and space, especially in places where standard rain gauge-networks are scarce or poorly maintained. Rainfall estimation via commercial microwave link networks, at country-wide scales, has recently been demonstrated. Despite their potential applicability in rainfall estimation at higher spatiotemporal resolutions, the uncertainties present in link-based rainfall maps are not yet fully comprehended. Now we attempt to quantify the inherent sources of uncertainty present in interpolated maps computed from commercial microwave link rainfall retrievals. In order to disentangle these sources of uncertainty we identified four main sources of error: 1) microwave link measurements, 2) availability of microwave link measurements, 3) spatial distribution of the network, and 4) interpolation methodology. We computed more than 1000 rainfall fields, for The Netherlands, from real and simulated microwave link data. These rainfall fields were compared to quality-controlled gauge-adjusted radar rainfall maps considered as ground-truth. Thus we were able to quantify the contribution of errors in microwave link measurements to the overall uncertainty. The actual performance of the commercial microwave link network is affected by the intermittent availability of the links, not only in time but also in space. We simulated a fully-operational network in time and space, and thus we quantified the role of the availability of microwave link measurements to the overall uncertainty. This research showed that the largest source of uncertainty is related to the microwave link measurements themselves (~55%). The second largest source of uncertainty (~20%) is attributed to the intermittence in the availability of microwave link data.

  1. Quantifying Different Tactile Sensations Evoked by Cutaneous Electrical Stimulation Using Electroencephalography Features.

    PubMed

    Zhang, Dingguo; Xu, Fei; Xu, Heng; Shull, Peter B; Zhu, Xiangyang

    2016-03-01

    Psychophysical tests and standardized questionnaires are often used to analyze tactile sensation based on subjective judgment in conventional studies. In contrast with the subjective evaluation, a novel method based on electroencephalography (EEG) is proposed to explore the possibility of quantifying tactile sensation in an objective way. The proposed experiments adopt cutaneous electrical stimulation to generate two kinds of sensations (vibration and pressure) with three grades (low/medium/strong) on eight subjects. Event-related potentials (ERPs) and event-related synchronization/desynchronization (ERS/ERD) are extracted from EEG, which are used as evaluation indexes to distinguish between vibration and pressure, and also to discriminate sensation grades. Results show that five-phase P1-N1-P2-N2-P3 deflection is induced in EEG. Using amplitudes of latter ERP components (N2 and P3), vibration and pressure sensations can be discriminated on both individual and grand-averaged ERP ([Formula: see text]). The grand-average ERPs can distinguish the three sensations grades, but there is no significant difference on individuals. In addition, ERS/ERD features of mu rhythm (8-13[Formula: see text]Hz) are adopted. Vibration and pressure sensations can be discriminated on grand-average ERS/ERD ([Formula: see text]), but only some individuals show significant difference. The grand-averaged results show that most sensation grades can be differentiated, and most pairwise comparisons show significant difference on individuals ([Formula: see text]). The work suggests that ERP- and ERS/ERD-based EEG features may have potential to quantify tactile sensations for medical diagnosis or engineering applications. PMID:26762865

  2. Quantifying alosine prey in the diets of marine piscivores in the Gulf of Maine.

    PubMed

    McDermott, S P; Bransome, N C; Sutton, S E; Smith, B E; Link, J S; Miller, T J

    2015-06-01

    The objectives of this work were to quantify the spatial and temporal distribution of the occurrence of anadromous fishes (alewife Alosa pseudoharengus, blueback herring Alosa aestivalis and American shad Alosa sapidissima) in the stomachs of demersal fishes in coastal waters of the north-west Atlantic Ocean. Results show that anadromous fishes were detectable and quantifiable in the diets of common marine piscivores for every season sampled. Even though anadromous fishes were not the most abundant prey, they accounted for c. 5-10% of the diet by mass for several marine piscivores. Statistical comparisons of these data with fish diet data from a broad-scale survey of the north-west Atlantic Ocean indicate that the frequency of this trophic interaction was significantly higher within spatially and temporally focused sampling areas of this study than in the broad-scale survey. Odds ratios of anadromous predation were as much as 460 times higher in the targeted sampling as compared with the broad-scale sampling. Analyses indicate that anadromous prey consumption was more concentrated in the near-coastal waters compared with consumption of a similar, but more widely distributed species, the Atlantic herring Clupea harengus. In the context of ecosystem-based fisheries management, the results suggest that even low-frequency feeding events may be locally important, and should be incorporated into ecosystem models. PMID:25943427

  3. Quantifying the Robustness of the English Sibilant Fricative Contrast in Children

    PubMed Central

    Reidy, Patrick F.; Beckman, Mary E.; Edwards, Jan

    2015-01-01

    Purpose Four measures of children's developing robustness of phonological contrast were compared to see how they correlated with age, vocabulary size, and adult listeners' correctness ratings. Method Word-initial sibilant fricative productions from eighty-one 2- to 5-year-old children and 20 adults were phonetically transcribed and acoustically analyzed. Four measures of robustness of contrast were calculated for each speaker on the basis of the centroid frequency measured from each fricative token. Productions that were transcribed as correct from different children were then used as stimuli in a perception experiment in which adult listeners rated the goodness of each production. Results Results showed that the degree of category overlap, quantified as the percentage of a child's productions whose category could be correctly predicted from the output of a mixed-effects logistic regression model, was the measure that correlated best with listeners' goodness judgments. Conclusions Even when children's productions have been transcribed as correct, adult listeners are sensitive to within-category variation quantified by the child's degree of category overlap. Further research is needed to explore the relationship between the age of a child and adults' sensitivity to different types of within-category variation in children's speech. PMID:25766040

  4. Using nitrate to quantify quick flow in a karst aquifer

    USGS Publications Warehouse

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
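
    A minimal sketch of the two-endmember mixing step, using the endmember nitrate concentrations quoted above (1.5 mg/L aquifer water, 0.17 mg/L quick flow under nonstormflow conditions) and a hypothetical spring concentration:

```python
def quick_flow_fraction(c_spring, c_aquifer=1.5, c_quick=0.17):
    """Fraction of spring discharge attributable to quick flow, from a
    two-endmember mixing model: c_spring = f*c_quick + (1 - f)*c_aquifer."""
    return (c_aquifer - c_spring) / (c_aquifer - c_quick)

# Hypothetical spring nitrate concentration of 1.0 mg/L as N
f = quick_flow_fraction(1.0)
print(f"quick flow fraction: {f:.2f}")   # ~0.38 of spring flow
```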

  5. Quantifying the BICEP2-Planck tension over gravitational waves.

    PubMed

    Smith, Kendrick M; Dvorkin, Cora; Boyle, Latham; Turok, Neil; Halpern, Mark; Hinshaw, Gary; Gold, Ben

    2014-07-18

    The recent BICEP2 measurement of B-mode polarization in the cosmic microwave background (r = 0.2 +0.07/-0.05), a possible indication of primordial gravity waves, appears to be in tension with the upper limit from WMAP (r < 0.13 at 95% C.L.) and Planck (r < 0.11 at 95% C.L.). We carefully quantify the level of tension and show that it is very significant (around 0.1% unlikely) when the observed deficit of large-scale temperature power is taken into account. We show that measurements of TE and EE power spectra in the near future will discriminate between the hypotheses that this tension is either a statistical fluke or a sign of new physics. We also discuss extensions of the standard cosmological model that relieve the tension and some novel ways to constrain them. PMID:25083631

  6. Quantifying the strength of miRNA-target interactions.

    PubMed

    Breda, Jeremie; Rzepiela, Andrzej J; Gumienny, Rafal; van Nimwegen, Erik; Zavolan, Mihaela

    2015-09-01

    We quantify the strength of miRNA-target interactions with MIRZA, a recently introduced biophysical model. We show that computationally predicted energies of interaction correlate strongly with the energies of interaction estimated from biochemical measurements of Michaelis-Menten constants. We further show that the accuracy of the MIRZA model can be improved taking into account recently emerged experimental data types. In particular, we use chimeric miRNA-mRNA sequences to infer a MIRZA-CHIMERA model and we provide a framework for inferring a similar model from measurements of rate constants of miRNA-mRNA interaction in the context of Argonaute proteins. Finally, based on a simple model of miRNA-based regulation, we discuss the importance of interaction energy and its variability between targets for the modulation of miRNA target expression in vivo. PMID:25892562

  7. Quantifying Biofilm in Porous Media Using Rock Physics Models

    NASA Astrophysics Data System (ADS)

    Alhadhrami, F. M.; Jaiswal, P.; Atekwana, E. A.

    2012-12-01

    Biofilm formation and growth in porous rocks can change their material properties, such as porosity and permeability, which in turn will impact fluid flow. Finding a non-intrusive method to quantify biofilms and their byproducts in rocks is key to understanding and modeling bioclogging in porous media. Previous geophysical investigations have documented that seismic techniques are sensitive to biofilm growth. These studies pointed to the fact that microbial growth and biofilm formation induce heterogeneity in the seismic properties. Currently there are no rock physics models to explain these observations and to provide quantitative interpretation of the seismic data. Our objective is to develop a new class of rock physics models that incorporate microbial processes and their effect on seismic properties. Using the assumption that biofilms can grow within pore spaces or as a layer coating the mineral grains, P-wave (Vp) and S-wave (Vs) velocity models were constructed using travel-time and waveform tomography techniques. We used generic rock physics schematics to represent our rock system numerically. We simulated the arrival times as well as waveforms by treating biofilms either as fluid (filling pore spaces) or as part of the matrix (coating sand grains). The preliminary results showed that there is a 1% change in Vp and a 3% change in Vs when biofilms are represented as discrete structures in pore spaces. On the other hand, a 30% change in Vp and a 100% change in Vs was observed when biofilm was represented as part of the matrix coating sand grains. Therefore, Vp and Vs changes are more rapid when biofilm grows as a grain-coating phase. The significant change in Vs associated with biofilms suggests that shear velocity can be used as a diagnostic tool for imaging zones of bioclogging in the subsurface. The results obtained from this study have significant implications for the study of the rheological properties of biofilms in geological media. Other applications include assessing biofilms used as barriers in CO2 sequestration studies as well as assisting in evaluating microbial enhanced oil recovery (MEOR) methods, where microorganisms are used to plug highly porous rocks for efficient oil production.

  8. Quantifying Nitrogen Loss From Flooded Hawaiian Taro Fields

    NASA Astrophysics Data System (ADS)

    Deenik, J. L.; Penton, C. R.; Bruland, G. L.; Popp, B. N.; Engstrom, P.; Mueller, J. A.; Tiedje, J.

    2010-12-01

    In 2004 a field fertilization experiment showed that approximately 80% of the fertilizer nitrogen (N) added to flooded Hawaiian taro (Colocasia esculenta) fields could not be accounted for using classic N balance calculations. To quantify N loss through denitrification and anaerobic ammonium oxidation (anammox) pathways in these taro systems we utilized a slurry-based isotope pairing technique (IPT). Measured nitrification rates and porewater N profiles were also used to model ammonium and nitrate fluxes through the top 10 cm of soil. Quantitative PCR of nitrogen cycling functional genes was used to correlate porewater N dynamics with potential microbial activity. Rates of denitrification calculated using porewater profiles were compared to those obtained using the slurry method. Potential denitrification rates of surficial sediments obtained with the slurry method were found to drastically overestimate the calculated in-situ rates. The largest discrepancies were present in fields greater than one month after initial fertilization, reflecting a microbial community poised to denitrify the initial N pulse. Potential surficial nitrification rates varied between 1.3% of the slurry-measured denitrification potential in a heavily-fertilized site to 100% in an unfertilized site. Compared to the use of urea, fish bone meal fertilizer use resulted in decreased N loss through denitrification in the surface sediment, according to both porewater modeling and IPT measurements. In addition, sub-surface porewater profiles point to root-mediated coupled nitrification/denitrification as a potential N loss pathway that is not captured in surface-based incubations. Profile-based surface plus subsurface coupled nitrification/denitrification estimates were between 1.1 and 12.7 times denitrification estimates from the surface only. These results suggest that the use of a ‘classic’ isotope pairing technique that employs 15NO3- in fertilized agricultural systems can lead to a drastic overestimation of in-situ denitrification rates and that root-associated subsurface coupled nitrification/denitrification may be a major N loss pathway in these flooded agricultural systems.

  9. The Physics of Equestrian Show Jumping

    NASA Astrophysics Data System (ADS)

    Stinner, Art

    2014-04-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this one, but when I searched the Internet for information and looked at YouTube presentations, I could only find simplistic references to Newton's laws and the conservation of mechanical energy principle. Nowhere could I find detailed calculations. On the other hand, there were several biomechanical articles with empirical reports of the results of kinematic and dynamic investigations of show jumping using high-speed digital cameras and force plates. They summarize their results in tables that give information about the motion of a horse jumping over high fences (1.40 m) and the magnitudes of the forces encountered when landing. However, they do not describe the physics of these results.
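
    As a rough, back-of-the-envelope illustration of the kind of calculation the article aims at (not taken from the article itself), the horse-and-rider centre of mass can be treated as a projectile; the assumed rise of the centre of mass and the approach speed below are illustrative values:

```python
import math

g = 9.81             # gravitational acceleration, m/s^2
dh = 1.0             # assumed rise of the centre of mass over a 1.40 m fence, m
v_horizontal = 6.0   # assumed horizontal approach speed, m/s

v_vertical = math.sqrt(2 * g * dh)   # vertical takeoff speed needed to rise dh
t_flight = 2 * v_vertical / g        # time aloft for a symmetric trajectory
x_range = v_horizontal * t_flight    # horizontal distance covered in the air

print(f"vertical takeoff speed: {v_vertical:.1f} m/s")
print(f"time aloft: {t_flight:.2f} s, horizontal span: {x_range:.1f} m")
```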

  10. Quantifying Variability of Avian Colours: Are Signalling Traits More Variable?

    PubMed Central

    Delhey, Kaspar; Peters, Anne

    2008-01-01

    Background Increased variability in sexually selected ornaments, a key assumption of evolutionary theory, is thought to be maintained through condition-dependence. Condition-dependent handicap models of sexual selection predict that (a) sexually selected traits show amplified variability compared to equivalent non-sexually selected traits, and since males are usually the sexually selected sex, that (b) males are more variable than females, and (c) sexually dimorphic traits more variable than monomorphic ones. So far these predictions have only been tested for metric traits. Surprisingly, they have not been examined for bright coloration, one of the most prominent sexual traits. This omission stems from computational difficulties: different types of colours are quantified on different scales precluding the use of coefficients of variation. Methodology/Principal Findings Based on physiological models of avian colour vision we develop an index to quantify the degree of discriminable colour variation as it can be perceived by conspecifics. A comparison of variability in ornamental and non-ornamental colours in six bird species confirmed (a) that those coloured patches that are sexually selected or act as indicators of quality show increased chromatic variability. However, we found no support for (b) that males generally show higher levels of variability than females, or (c) that sexual dichromatism per se is associated with increased variability. Conclusions/Significance We show that it is currently possible to realistically estimate variability of animal colours as perceived by them, something difficult to achieve with other traits. Increased variability of known sexually-selected/quality-indicating colours in the studied species, provides support to the predictions borne from sexual selection theory but the lack of increased overall variability in males or dimorphic colours in general indicates that sexual differences might not always be shaped by similar selective forces. PMID:18301766

  11. Quantifying and Mapping Global Data Poverty

    PubMed Central

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this ‘proof of concept’ study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884
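
    A generic sketch of how such a composite index can be assembled, by min-max normalizing each access indicator and averaging; the indicator set, values and equal weighting here are hypothetical and not the published DPI formula:

```python
import numpy as np

# Hypothetical country indicators (rows) for: internet speed, internet users %,
# mobile coverage %, tertiary enrolment %. Higher values mean better data access.
indicators = np.array([
    [45.0, 88.0, 99.0, 65.0],   # country A
    [ 5.0, 25.0, 70.0, 10.0],   # country B
    [20.0, 55.0, 90.0, 35.0],   # country C
])

lo, hi = indicators.min(axis=0), indicators.max(axis=0)
normalized = (indicators - lo) / (hi - lo)   # min-max scale each indicator to [0, 1]
access_score = normalized.mean(axis=1)       # unweighted composite access score
data_poverty = 1.0 - access_score            # higher value = more data-poor

print(np.round(data_poverty, 2))
```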

  12. Quantifying Acute Myocardial Injury Using Ratiometric Fluorometry

    PubMed Central

    Ranji, Mahsa; Matsubara, Muneaki; Leshnower, Bradley G.; Hinmon, Robin H.; Jaggard, Dwight L.; Chance, Britton; Gorman, Robert C.

    2011-01-01

    Early reperfusion is the best therapy for myocardial infarction (MI). Effectiveness, however, varies significantly between patients and has implications for long-term prognosis and treatment. A technique to assess the extent of myocardial salvage after reperfusion therapy would allow for high-risk patients to be identified in the early post-MI period. Mitochondrial dysfunction is associated with cell death following myocardial reperfusion and can be quantified by fluorometry. Therefore, we hypothesized that variations in the fluorescence of mitochondrial nicotinamide adenine dinucleotide (NADH) and flavoprotein (FP) can be used acutely to predict the degree of myocardial injury. Thirteen rabbits had coronary occlusion for 30 min followed by 3 h of reperfusion. To produce a spectrum of infarct sizes, six animals were infused with cyclosporine A prior to ischemia. Using a specially designed fluorometric probe, NADH and FP fluorescence were measured in the ischemic area. Changes in NADH and FP fluorescence, as early as 15 min after reperfusion, correlated with postmortem assessment of infarct size (r = 0.695, p < 0.01). This correlation strengthened with time (r = 0.827, p < 0.001 after 180 min). Clinical application of catheter-based myocardial fluorometry may provide a minimally invasive technique for assessing the early response to reperfusion therapy. PMID:19272908

  13. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
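
    A minimal sketch of the threshold-and-count step described above; the synthetic grey levels and the threshold value are hypothetical:

```python
import numpy as np

def residue_cover_fraction(image, threshold):
    """Fraction of pixels whose fluorescence grey level exceeds the threshold;
    bright pixels are taken to be crop residue, dark pixels soil."""
    image = np.asarray(image)
    return float((image > threshold).mean())

# Hypothetical 8-bit fluorescence frame: soil ~10-40, residue ~120-255
rng = np.random.default_rng(0)
soil = rng.integers(10, 40, size=(100, 60))
residue = rng.integers(120, 255, size=(100, 40))
frame = np.hstack([soil, residue])
print(residue_cover_fraction(frame, threshold=80))  # ~0.4
```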

  14. Quantifying the transmission potential of pandemic influenza

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Nishiura, Hiroshi

    2008-03-01

    This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data is readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation method which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering firm understanding of the definition of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions to suggest potential future methodological improvements.
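
    One standard relationship of the kind reviewed here links the intrinsic (exponential) growth rate r and the generation-time distribution g(t) to the basic reproduction number through the Euler-Lotka equation, 1/R0 = sum over t of g(t)·exp(-r·t); a sketch with illustrative numbers:

```python
import math

def reproduction_number(growth_rate, gen_time_pmf):
    """Basic reproduction number from the exponential growth rate r and a
    discretized generation-time distribution, via the Euler-Lotka equation."""
    denom = sum(p * math.exp(-growth_rate * t) for t, p in gen_time_pmf.items())
    return 1.0 / denom

# Illustrative values: r = 0.2 per day, generation time mostly 2-4 days
gen_time = {2: 0.3, 3: 0.4, 4: 0.3}
print(reproduction_number(0.2, gen_time))   # roughly 1.8
```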

  15. Data Used in Quantified Reliability Models

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data is the crux of developing quantitative risk and reliability models; without data there is no quantification. The means to find and identify reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have available data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But, is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data was derived, and interpreting the information and details associated with the data, is as important as the data itself.

  16. Concurrent schedules: Quantifying the aversiveness of noise

    PubMed Central

    McAdie, Tina M.; Foster, T. Mary; Temple, William

    1996-01-01

    Four hens worked under independent multiple concurrent variable-interval schedules with an overlaid aversive stimulus (sound of hens in a poultry shed at 100dBA) activated by the first peck on a key. The sound remained on until a response was made on the other key. The key that activated the sound in each component was varied over a series of conditions. When the sound was activated by the left (or right) key in one component, it was activated by the right (or left) key in the other component. Bias was examined under a range of different variable-interval schedules, and the applicability of the generalized matching law was examined. It was found that the hens' behavior was biased away from the sound independently of the schedule in effect and that this bias could be quantified using a modified version of the generalized matching law. Behavior during the changeover delays was not affected by the presence of the noise or by changes in reinforcement rate, even though the total response measures were. Insensitivity shown during the delay suggests that behavior after the changeover delay may be more appropriate as a measure of preference (or aversiveness) of stimuli than are overall behavior measures. PMID:16812802
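
    A minimal sketch of fitting the generalized matching law, log(B1/B2) = a·log(R1/R2) + log c, by least squares; the response and reinforcer ratios below are hypothetical, and a fitted bias c < 1 would correspond to behavior shifted away from the noise-producing key:

```python
import numpy as np

# Hypothetical response ratios (B1/B2) and reinforcer ratios (R1/R2) across
# conditions; key 1 is the key that switches the noise on.
reinforcer_ratio = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
response_ratio = np.array([0.15, 0.32, 0.68, 1.30, 2.60])

x = np.log(reinforcer_ratio)
y = np.log(response_ratio)
sensitivity, log_bias = np.polyfit(x, y, 1)   # slope a, intercept log c

print(f"sensitivity a = {sensitivity:.2f}, bias c = {np.exp(log_bias):.2f}")
# here c < 1, i.e. behavior biased away from the noise-producing key
```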

  17. Automated Counting of Particles To Quantify Cleanliness

    NASA Technical Reports Server (NTRS)

    Rhode, James

    2005-01-01

    A machine vision system, similar to systems used in microbiological laboratories to count cultured microbes, has been proposed for quantifying the cleanliness of nominally precisely cleaned hardware by counting residual contaminant particles. The system would include a microscope equipped with an electronic camera and circuitry to digitize the camera output, a personal computer programmed with machine-vision and interface software, and digital storage media. A filter pad, through which had been aspirated solvent from rinsing the hardware in question, would be placed on the microscope stage. A high-resolution image of the filter pad would be recorded. The computer would analyze the image and present a histogram of sizes of particles on the filter. On the basis of the histogram and a measure of the desired level of cleanliness, the hardware would be accepted or rejected. If the hardware were accepted, the image would be saved, along with other information, as a quality record. If the hardware were rejected, the histogram and ancillary information would be recorded for analysis of trends. The software would perceive particles that are too large or too numerous to meet a specified particle-distribution profile. Anomalous particles or fibrous material would be flagged for inspection.

  18. Choosing appropriate techniques for quantifying groundwater recharge

    USGS Publications Warehouse

    Scanlon, B.R.; Healy, R.W.; Cook, P.G.

    2002-01-01

    Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of the recharge estimates. Typical study goals include water-resource evaluation, which requires information on recharge over large spatial scales and on decadal time scales; and evaluation of aquifer vulnerability to contamination, which requires detailed information on spatial variability and preferential flow. The range of recharge rates that can be estimated using different approaches should be matched to expected recharge rates at a site. The reliability of recharge estimates using different techniques is variable. Techniques based on surface-water and unsaturated-zone data provide estimates of potential recharge, whereas those based on groundwater data generally provide estimates of actual recharge. Uncertainties in each approach to estimating recharge underscore the need for application of multiple techniques to increase reliability of recharge estimates.

  19. Quantifying instantaneous performance in alpine ski racing.

    PubMed

    Federolf, Peter Andreas

    2012-01-01

    Alpine ski racing is a popular sport in many countries and a lot of research has gone into optimising athlete performance. Two factors influence athlete performance in a ski race: speed and the chosen path between the gates. However, to date there is no objective, quantitative method to determine instantaneous skiing performance that takes both of these factors into account. The purpose of this short communication was to define a variable quantifying instantaneous skiing performance and to study how this variable depended on the skiers' speed and on their chosen path. Instantaneous skiing performance was defined as time loss per elevation difference dt/dz, which depends on the skier's speed v(z), and the distance travelled per elevation difference ds/dz. Using kinematic data collected in an earlier study, it was evaluated how these variables can be used to assess the individual performance of six ski racers in two slalom turns. The performance analysis conducted in this study might be a useful tool not only for athletes and coaches preparing for competition, but also for sports scientists investigating skiing techniques or engineers developing and testing skiing equipment. PMID:22620279
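
    A numerical sketch of the proposed performance variable: given kinematic samples of path length s, elevation drop z and speed v, the time loss per elevation difference can be evaluated as dt/dz = (ds/dz)/v(z). The sample values below are hypothetical:

```python
import numpy as np

# Hypothetical kinematic samples along a turn: time (s), elevation drop z
# (m, positive downwards), cumulative path length s (m), speed v (m/s).
t = np.array([0.0, 0.2, 0.4, 0.6, 0.8])
z = np.array([0.0, 0.8, 1.7, 2.7, 3.8])
s = np.array([0.0, 3.0, 6.1, 9.3, 12.6])
v = np.array([15.0, 15.2, 15.5, 15.9, 16.3])

dt_dz = np.gradient(s, z) / v          # time spent per metre of elevation drop
print(np.round(dt_dz, 3))              # smaller values = better instantaneous performance
print(np.round(np.gradient(t, z), 3))  # direct check from the time stamps
```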

  20. Quantifying truncation errors in effective field theory

    NASA Astrophysics Data System (ADS)

    Furnstahl, R. J.; Klco, N.; Phillips, D. R.; Wesolowski, S.

    2015-10-01

    Bayesian procedures designed to quantify truncation errors in perturbative calculations of QCD observables are adapted to expansions in effective field theory (EFT). In the Bayesian approach, such truncation errors are derived from degree-of-belief (DOB) intervals for EFT predictions. Computation of these intervals requires specification of prior probability distributions (``priors'') for the expansion coefficients. By encoding expectations about the naturalness of these coefficients, this framework provides a statistical interpretation of the standard EFT procedure where truncation errors are estimated using the order-by-order convergence of the expansion. It also permits exploration of the ways in which such error bars are, and are not, sensitive to assumptions about EFT-coefficient naturalness. We demonstrate the calculation of Bayesian DOB intervals for the EFT truncation error in some representative cases and explore several methods by which the convergence properties of the EFT for a set of observables may be used to check the statistical consistency of the EFT expansion parameter. Supported in part by the NSF and the DOE.

  1. Quantifying Climate Risks for Urban Environments

    NASA Astrophysics Data System (ADS)

    Hayhoe, K.; Stoner, A. K.; Dickson, L.

    2013-12-01

    High-density urban areas are both uniquely vulnerable and uniquely able to adapt to climate change. Enabling this potential requires identifying the vulnerabilities, however, and these depend strongly on location: the challenges climate change poses for a southern coastal city such as Miami, for example, have little in common with those facing a northern inland city such as Chicago. By combining local knowledge with climate science, risk assessment, engineering analysis, and adaptation planning, it is possible to develop relevant climate information that feeds directly into vulnerability assessment and long-term planning. Key steps include: developing climate projections tagged to long-term weather stations within the city itself that reflect local characteristics; mining local knowledge to identify existing vulnerabilities to, and costs of, weather and climate extremes; understanding how future projections can be integrated into the planning process; and identifying ways in which the city may adapt. Using examples from our work in the cities of Boston, Chicago, and Mobile we illustrate the practical application of this approach to quantify the impacts of climate change on these cities and identify robust adaptation options as diverse as reducing the urban heat island effect, protecting essential infrastructure, changing design standards and building codes, developing robust emergency management plans, and rebuilding storm sewer systems.

  2. Quantifying and Mapping Global Data Poverty.

    PubMed

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinformatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  3. Model for quantifying photoelastic fringe degradation by imperfect retroreflective backings.

    PubMed

    Woolard, D; Hinders, M

    2000-05-01

    In any automated algorithm for interpreting photoelastic fringe patterns it is necessary to understand and quantify sources of error in the measurement system. We have been considering how the various components of the coating affect the photoelastic measurement, because this source of error has received fairly little attention in the literature. Because the reflective backing is not a perfect retroreflector, it does not preserve the polarization of light and thereby introduces noise into the measurement that depends on the angle of obliqueness and roughness of the reflective surface. This is of particular concern in resolving the stress tensor through the combination of thermoelasticity and photoelasticity where the components are sensitive to errors in the principal angle and difference of the principal stresses. We have developed a physical model that accounts for this and other sources of measurement error to be introduced in a systematic way so that the individual effects on the fringe patterns can be quantified. Simulations show altered photoelastic fringes when backing roughness and oblique incident angles are incorporated into the model. PMID:18345104

  4. Quantifying limits to detection of early warning for critical transitions

    PubMed Central

    Boettiger, Carl; Hastings, Alan

    2012-01-01

    Catastrophic regime shifts in complex natural systems may be averted through advanced detection. Recent work has provided a proof-of-principle that many systems approaching a catastrophic transition may be identified through the lens of early warning indicators such as rising variance or increased return times. Despite widespread appreciation of the difficulties and uncertainty involved in such forecasts, proposed methods hardly ever characterize their expected error rates. Without the benefits of replicates, controls or hindsight, applications of these approaches must quantify how reliable different indicators are in avoiding false alarms, and how sensitive they are to missing subtle warning signs. We propose a model-based approach to quantify this trade-off between reliability and sensitivity and allow comparisons between different indicators. We show these error rates can be quite severe for common indicators even under favourable assumptions, and also illustrate how a model-based indicator can improve this performance. We demonstrate how the performance of an early warning indicator varies in different datasets, and suggest that uncertainty quantification become a more central part of early warning predictions. PMID:22593100
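
    A minimal sketch of the reliability-sensitivity trade-off for a generic early-warning indicator: indicator values simulated under a stable (null) model set the detection threshold for a chosen false-alarm rate, which in turn fixes the sensitivity under an approaching transition. The distributions below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical early-warning indicator (e.g., a variance trend statistic)
# simulated under a stable system and under an approaching transition.
null_runs = rng.normal(0.0, 1.0, 5000)
transition_runs = rng.normal(1.2, 1.0, 5000)

threshold = np.quantile(null_runs, 0.95)            # accept a 5% false-alarm rate
sensitivity = (transition_runs > threshold).mean()  # fraction of true warnings caught

print(f"threshold={threshold:.2f}, false alarms=5%, sensitivity={sensitivity:.2f}")
```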

  5. SANTA: Quantifying the Functional Content of Molecular Networks

    PubMed Central

    Cornish, Alex J.; Markowetz, Florian

    2014-01-01

    Linking networks of molecular interactions to cellular functions and phenotypes is a key goal in systems biology. Here, we adapt concepts of spatial statistics to assess the functional content of molecular networks. Based on the guilt-by-association principle, our approach (called SANTA) quantifies the strength of association between a gene set and a network, and functionally annotates molecular networks like other enrichment methods annotate lists of genes. As a general association measure, SANTA can (i) functionally annotate experimentally derived networks using a collection of curated gene sets and (ii) annotate experimentally derived gene sets using a collection of curated networks, as well as (iii) prioritize genes for follow-up analyses. We exemplify the efficacy of SANTA in several case studies using the S. cerevisiae genetic interaction network and genome-wide RNAi screens in cancer cell lines. Our theory, simulations, and applications show that SANTA provides a principled statistical way to quantify the association between molecular networks and cellular functions and phenotypes. SANTA is available from http://bioconductor.org/packages/release/bioc/html/SANTA.html. PMID:25210953

  6. A Method to Quantify Mouse Coat-Color Proportions

    PubMed Central

    Ounpraseuth, Songthip; Rafferty, Tonya M.; McDonald-Phillips, Rachel E.; Gammill, Whitney M.; Siegel, Eric R.; Wheeler, Kristin L.; Nilsson, Erik A.; Cooney, Craig A.

    2009-01-01

    Coat-color proportions and patterns in mice are used as assays for many processes such as transgene expression, chimerism, and epigenetics. In many studies, coat-color readouts are estimated from subjective scoring of individual mice. Here we show a method by which mouse coat color is quantified as the proportion of coat shown in one or more digital images. We use the yellow-agouti mouse model of epigenetic variegation to demonstrate this method. We apply this method to live mice using a conventional digital camera for data collection. We use a raster graphics editing program to convert agouti regions of the coat to a standard, uniform, brown color and the yellow regions of the coat to a standard, uniform, yellow color. We use a second program to quantify the proportions of these standard colors. This method provides quantification that relates directly to the visual appearance of the live animal. It also provides an objective analysis with a traceable record, and it should allow for precise comparisons of mouse coats and mouse cohorts within and between studies. PMID:19404391

  7. A framework for quantifying net benefits of alternative prognostic models‡

    PubMed Central

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-01

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. Copyright © 2011 John Wiley & Sons, Ltd. PMID:21905066

  8. Quantifying magma mixing with the Shannon entropy: Application to simulations and experiments

    NASA Astrophysics Data System (ADS)

    Perugini, D.; De Campos, C. P.; Petrelli, M.; Morgavi, D.; Vetere, F. P.; Dingwell, D. B.

    2015-11-01

    We introduce a new quantity to petrology, the Shannon entropy, as a tool for quantifying mixing as well as the rate of production of hybrid compositions in the mixing system. The Shannon entropy approach is applied to time series numerical simulations and high-temperature experiments performed with natural melts. We note that in both cases the Shannon entropy increases linearly during the initial stages of mixing and then saturates toward constant values. Furthermore, chemical elements with different mobilities display different rates of increase of the Shannon entropy. This indicates that the hybrid composition for the different elements is attained at different times generating a wide range of spatio-compositional domains which further increase the apparent complexity of the mixing process. Results from the application of the Shannon entropy analysis are compared with the concept of Relaxation of Concentration Variance (RCV), a measure recently introduced in petrology to quantify chemical exchanges during magma mixing. We derive a linear expression relating the change of concentration variance during mixing and the Shannon entropy. We show that the combined use of Shannon entropy and RCV provides the most complete information about the space and time complexity of magma mixing. As a consequence, detailed information about this fundamental petrogenetic and volcanic process can be gathered. In particular, the Shannon entropy can be used as complement to the RCV method to quantify the mobility of chemical elements in magma mixing systems, to obtain information about the rate of production of compositional heterogeneities, and to derive empirical relationships linking the rate of chemical exchanges between interacting magmas and mixing time.

  9. Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in International matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally the batsmen and bowlers are rated on their batting or bowling average respectively. However, in a game like Cricket it is always important the manner in which one scores the runs or claims a wicket. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player’s average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the ‘quality’ of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players in a team performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally we generate a network of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket-Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player’s performance which in turn paves the way for a balanced team selection for International matches.
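
    A minimal sketch of the general idea, assuming the networkx library: dismissals define a directed, weighted batsman-to-bowler network, and a PageRank-style centrality then credits bowlers more for dismissing batsmen who are themselves hard to dismiss. The records below are hypothetical, and this is not the authors' exact formulation:

```python
import networkx as nx

# Hypothetical dismissal records: (batsman, bowler, number_of_dismissals).
# An edge batsman -> bowler means the bowler dismissed that batsman.
dismissals = [
    ("Batsman_A", "Bowler_X", 5),
    ("Batsman_B", "Bowler_X", 3),
    ("Batsman_A", "Bowler_Y", 1),
    ("Batsman_C", "Bowler_Y", 4),
    ("Batsman_B", "Bowler_Z", 2),
]

G = nx.DiGraph()
for batsman, bowler, n in dismissals:
    G.add_edge(batsman, bowler, weight=n)

scores = nx.pagerank(G, alpha=0.85, weight="weight")
bowlers = {node: s for node, s in scores.items() if node.startswith("Bowler")}
for bowler, score in sorted(bowlers.items(), key=lambda kv: -kv[1]):
    print(bowler, round(score, 3))
```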

  10. The processing of polar quantifiers, and numerosity perception.

    PubMed

    Deschamps, Isabelle; Agmon, Galit; Loewenstein, Yonatan; Grodzinsky, Yosef

    2015-10-01

    We investigated the course of language processing in the context of a verification task that required numerical estimation and comparison. Participants listened to sentences with complex quantifiers that contrasted in Polarity, a logical property (e.g., more-than-half, less-than-half), and then performed speeded verification on visual scenarios that displayed a proportion between 2 discrete quantities. We varied systematically not only the sentences, but also the visual materials, in order to study their effect on the verification process. Next, we used the same visual scenarios with analogous non-verbal probes that featured arithmetical inequality symbols (<, >). This manipulation enabled us to measure not only Polarity effects, but also, to compare the effect of different probe types (linguistic, non-linguistic) on processing. Like many previous studies, our results demonstrate that perceptual difficulty affects error rate and reaction time in keeping with Weber's Law. Interestingly, these performance parameters are also affected by the Polarity of the quantifiers used, despite the fact that sentences had the exact same meaning, sentence structure, number of words, syllables, and temporal structure. Moreover, an analogous contrast between the non-linguistic probes (<, >) had no effect on performance. Finally, we observed no interaction between performance parameters governed by Weber's Law and those affected by Polarity. We consider 4 possible accounts of the results (syntactic, semantic, pragmatic, frequency-based), and discuss their relative merit. PMID:26142825

  11. Quantifying uncertainty in LCA-modelling of waste management systems

    SciTech Connect

    Clavreul, Julie; Guyonnet, Dominique; Christensen, Thomas H.

    2012-12-15

    Highlights: • Uncertainty in LCA-modelling of waste management is significant. • Model, scenario and parameter uncertainties contribute. • Sequential procedure for quantifying uncertainty is proposed. • Application of procedure is illustrated by a case-study. - Abstract: Uncertainty analysis in LCA studies has been subject to major progress over the last years. In the context of waste management, various methods have been implemented but a systematic method for uncertainty analysis of waste-LCA studies is lacking. The objective of this paper is (1) to present the sources of uncertainty specifically inherent to waste-LCA studies, (2) to select and apply several methods for uncertainty analysis and (3) to develop a general framework for quantitative uncertainty assessment of LCA of waste management systems. The suggested method is a sequence of four steps combining the selected methods: (Step 1) a sensitivity analysis evaluating the sensitivities of the results with respect to the input uncertainties, (Step 2) an uncertainty propagation providing appropriate tools for representing uncertainties and calculating the overall uncertainty of the model results, (Step 3) an uncertainty contribution analysis quantifying the contribution of each parameter uncertainty to the final uncertainty and (Step 4) as a new approach, a combined sensitivity analysis providing a visualisation of the shift in the ranking of different options due to variations of selected key parameters. This tiered approach optimises the resources available to LCA practitioners by only propagating the most influential uncertainties.
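
    A generic sketch of the propagation and contribution steps (Steps 2 and 3) for a toy impact model; the model structure and parameter distributions are illustrative and not a waste-LCA inventory:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20_000

# Toy impact model: net_impact = waste * emission_factor - recovery * credit
waste = rng.normal(100.0, 10.0, n)           # t/yr
emission_factor = rng.normal(0.8, 0.15, n)   # kg CO2-eq per t
recovery = rng.normal(30.0, 5.0, n)          # t/yr
credit = rng.normal(0.5, 0.05, n)            # kg CO2-eq avoided per t

impact = waste * emission_factor - recovery * credit   # Monte Carlo propagation
print(f"mean={impact.mean():.1f}, 2.5-97.5% interval="
      f"({np.percentile(impact, 2.5):.1f}, {np.percentile(impact, 97.5):.1f})")

# Crude contribution analysis: squared correlation of each input with the output
for name, x in [("waste", waste), ("emission_factor", emission_factor),
                ("recovery", recovery), ("credit", credit)]:
    r = np.corrcoef(x, impact)[0, 1]
    print(f"{name:16s} contributes ~{100 * r**2:.0f}% of output variance")
```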

  12. The quantified process approach: an emerging methodology to neuropsychological assessment.

    PubMed

    Poreh, A M

    2000-05-01

    An important development in the field of neuropsychological assessment is the quantification of the process by which individuals solve common neuropsychological tasks. The present article outlines the history leading to this development, the Quantified Process Approach, and suggests that this line of applied research bridges the gap between the clinical and statistical approaches to neuropsychological assessment. It is argued that the enterprise of quantifying the process approach proceeds via three major methodologies: (1) the "Satellite" Testing Paradigm: an approach by which new tasks are developed to complement existing tests so as to clarify a given test performance; (2) the Composition Paradigm: an approach by which data on a given test that have been largely overlooked are compiled and subsequently analyzed, resulting in new indices that are believed to reflect underlying constructs accounting for test performance; and (3) the Decomposition Paradigm: an approach which investigates the relationship between test items of a given measure according to underlying facets, resulting in the development of new subscores. The article illustrates each of the above paradigms, offers a critique of this new field according to prevailing professional standards for psychological measures, and provides suggestions for future research. PMID:10916196

  13. Quantifying VOC emissions for the strategic petroleum reserve.

    SciTech Connect

    Knowlton, Robert G.; Lord, David L.

    2013-06-01

    A very important aspect of the Department of Energy's (DOE's) Strategic Petroleum Reserve (SPR) program is regulatory compliance. One of the regulatory compliance issues deals with limiting the amount of volatile organic compounds (VOCs) that are emitted into the atmosphere from brine wastes when they are discharged to brine holding ponds. The US Environmental Protection Agency (USEPA) has set limits on the amount of VOCs that can be discharged to the atmosphere. Several attempts have been made to quantify the VOC emissions associated with the brine ponds going back to the late 1970s. There are potential issues associated with each of these quantification efforts. Two efforts were made to quantify VOC emissions by analyzing the VOC content of brine samples obtained from wells. Efforts to measure air concentrations were mentioned in historical reports, but no data have been located to confirm these assertions. A modeling effort was also performed to quantify the VOC emissions. More recently, in 2011-2013, additional brine sampling was performed to update the VOC emissions estimate. An analysis of the statistical confidence in these results is presented here. Arguably, there are uncertainties associated with each of these efforts. The analysis herein indicates that the upper confidence limit on VOC emissions based on recent brine sampling is very close to the 0.42 ton/MMB limit used historically on the project. Refining this estimate would require considerable investment in additional sampling, analysis, and monitoring. An analysis of the VOC emissions at each site suggests that additional discharges could be made while staying within current regulatory limits.
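
    The statistical-confidence analysis described here amounts to placing an upper confidence limit on the mean VOC content estimated from a finite number of brine samples. A minimal sketch of a one-sided t-based limit follows; the sample values and the concentration-to-ton/MMB conversion factor are placeholders, not SPR data.

      import numpy as np
      from scipy import stats

      # Hypothetical VOC concentrations in brine samples (mg/L); not actual SPR data.
      voc_mg_per_l = np.array([38.0, 42.5, 35.1, 44.2, 40.8, 37.6, 41.3, 39.9])

      n = voc_mg_per_l.size
      mean = voc_mg_per_l.mean()
      sem = voc_mg_per_l.std(ddof=1) / np.sqrt(n)

      # One-sided 95% upper confidence limit on the mean concentration.
      ucl95 = mean + stats.t.ppf(0.95, df=n - 1) * sem

      # Illustrative conversion to an emission rate (ton VOC per million barrels of brine);
      # the true factor depends on brine density and the volatilised fraction.
      TON_PER_MMB_PER_MG_L = 0.01  # assumed placeholder factor
      print(f"UCL95 = {ucl95:.1f} mg/L  ->  {ucl95 * TON_PER_MMB_PER_MG_L:.2f} ton/MMB")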

  14. A Simplified Score to Quantify Comorbidity in COPD

    PubMed Central

    Putcha, Nirupama; Puhan, Milo A.; Drummond, M. Bradley; Han, MeiLan K.; Regan, Elizabeth A.; Hanania, Nicola A.; Martinez, Carlos H.; Foreman, Marilyn; Bhatt, Surya P.; Make, Barry; Ramsdell, Joe; DeMeo, Dawn L.; Barr, R. Graham; Rennard, Stephen I.; Martinez, Fernando; Silverman, Edwin K.; Crapo, James; Wise, Robert A.; Hansel, Nadia N.

    2014-01-01

    Importance Comorbidities are common in COPD, but quantifying their burden is difficult. Currently there is a COPD-specific comorbidity index to predict mortality and another to predict general quality of life. We sought to develop and validate a COPD-specific comorbidity score that reflects comorbidity burden on patient-centered outcomes. Materials and Methods Using the COPDGene study (GOLD II-IV COPD), we developed comorbidity scores to describe patient-centered outcomes employing three techniques: 1) simple count, 2) weighted score, and 3) weighted score based upon a statistical selection procedure. We tested associations, area under the curve (AUC) and calibration statistics to validate scores internally with outcomes of respiratory disease-specific quality of life (St. George's Respiratory Questionnaire, SGRQ), six-minute walk distance (6MWD), modified Medical Research Council (mMRC) dyspnea score and exacerbation risk, ultimately choosing one score for external validation in SPIROMICS. Results Associations between comorbidities and all outcomes were comparable across the three scores. All scores added predictive ability to models including age, gender, race, current smoking status, pack-years smoked and FEV1 (p<0.001 for all comparisons). AUC was similar between all three scores across outcomes: SGRQ (range 0.7624-0.7676), mMRC (0.7590-0.7644), 6MWD (0.7531-0.7560) and exacerbation risk (0.6831-0.6919). Because of similar performance, the comorbidity count was used for external validation. In the SPIROMICS cohort, the comorbidity count performed well to predict SGRQ (AUC 0.7891), mMRC (AUC 0.7611), 6MWD (AUC 0.7086), and exacerbation risk (AUC 0.7341). Conclusions Quantifying comorbidity provides a more thorough understanding of the risk for patient-centered outcomes in COPD. A comorbidity count performs well to quantify comorbidity in a diverse population with COPD. PMID:25514500

  15. Quantifying the Behavioural Relevance of Hippocampal Neurogenesis

    PubMed Central

    Lazic, Stanley E.; Fuss, Johannes; Gass, Peter

    2014-01-01

    Few studies that examine the neurogenesis–behaviour relationship formally establish covariation between neurogenesis and behaviour or rule out competing explanations. The behavioural relevance of neurogenesis might therefore be overestimated if other mechanisms account for some, or even all, of the experimental effects. A systematic review of the literature was conducted and the data reanalysed using causal mediation analysis, which can estimate the behavioural contribution of new hippocampal neurons separately from other mechanisms that might be operating. Results from eleven eligible individual studies were then combined in a meta-analysis to increase precision (representing data from 215 animals) and showed that neurogenesis made a negligible contribution to behaviour (standardised effect = 0.15; 95% CI = -0.04 to 0.34; p = 0.128); other mechanisms accounted for the majority of experimental effects (standardised effect = 1.06; 95% CI = 0.74 to 1.38; p = 1.7 × 10^-11). PMID:25426717

  16. Quantifying compositional impacts of ambient aerosol on cloud droplet formation

    NASA Astrophysics Data System (ADS)

    Lance, Sara

    It has been historically assumed that most of the uncertainty associated with the aerosol indirect effect on climate can be attributed to the unpredictability of updrafts. In Chapter 1, we analyze the sensitivity of cloud droplet number density, to realistic variations in aerosol chemical properties and to variable updraft velocities using a 1-dimensional cloud parcel model in three important environmental cases (continental, polluted and remote marine). The results suggest that aerosol chemical variability may be as important to the aerosol indirect effect as the effect of unresolved cloud dynamics, especially in polluted environments. We next used a continuous flow streamwise thermal gradient Cloud Condensation Nuclei counter (CCNc) to study the water-uptake properties of the ambient aerosol, by exposing an aerosol sample to a controlled water vapor supersaturation and counting the resulting number of droplets. In Chapter 2, we modeled and experimentally characterized the heat transfer properties and droplet growth within the CCNc. Chapter 3 describes results from the MIRAGE field campaign, in which the CCNc and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA) were deployed at a ground-based site during March, 2006. Size-resolved CCN activation spectra and growth factor distributions of the ambient aerosol in Mexico City were obtained, and an analytical technique was developed to quantify a probability distribution of solute volume fractions for the CCN in addition to the aerosol mixing-state. The CCN were shown to be much less CCN active than ammonium sulfate, with water uptake properties more consistent with low molecular weight organic compounds. The pollution outflow from Mexico City was shown to have CCN with an even lower fraction of soluble material. "Chemical Closure" was attained for the CCN, by comparing the inferred solute volume fraction with that from direct chemical measurements. A clear diurnal pattern was observed for the CCN solute volume fraction, showing that measurable aging of the aerosol population occurs during the day, on the timescale of a few hours. The mixing state of the aerosol, also showing a consistent diurnal pattern, clearly correlates with a chemical tracer for local combustion sources. Chapter 4 describes results from the GoMACCS field study, in which the CCNc was subsequently deployed on an airborne field campaign in Houston, Texas during August-September, 2006. GoMACCS tested our ability to predict CCN for highly polluted conditions with limited chemical information. Assuming the particles were composed purely of ammonium sulfate, CCN closure was obtained with a 10% overprediction bias on average for CCN concentrations ranging from less than 100 cm-3 to over 10,000 cm-3, but with on average 50% variability. Assuming measured concentrations of organics to be internally mixed and insoluble tended to reduce the overprediction bias for less polluted conditions, but led to underprediction bias in the most polluted conditions. A likely explanation is that the high organic concentrations in the polluted environments depress the surface tension of the droplets, thereby enabling activation at lower soluble fractions.

  17. Constraining Habitable Environments on Mars by Quantifying Available Geochemical Energy

    NASA Astrophysics Data System (ADS)

    Tierney, L. L.; Jakosky, B. M.

    2009-12-01

    The search for life on Mars includes the availability of liquid water, access to biogenic elements and an energy source. In the past, when water was more abundant on Mars, a source of energy may have been the limiting factor for potential life. Energy, either from photosynthesis or chemosynthesis, is required in order to drive metabolism. Potential martian organisms most likely took advantage of chemosynthetic reactions at and below the surface. Terrestrial chemolithoautotrophs, for example, thrive off of chemical disequilibrium that exists in many environments and use inorganic redox (reduction-oxidation) reactions to drive metabolism and create cellular biomass. The chemical disequilibrium of six different martian environments was modeled and analyzed in this study, incorporating a range of water and rock compositions, water:rock mass ratios, atmospheric fugacities, pH, and temperatures. All of these models can be applied to specific sites on Mars, including environments similar to Meridiani Planum and Gusev Crater. Both a mass transfer geochemical model of groundwater-basalt interaction and a mixing model of groundwater-hydrothermal fluid interaction were used to estimate hypothetical martian fluid compositions that result from mixing over the entire reaction path. By determining the overall Gibbs free energy yields for redox reactions in the H-O-C-S-Fe-Mn system, the amount of geochemical energy that was available for potential chemolithoautotrophic microorganisms was quantified and the amount of biomass that could have been sustained was estimated. The quantity of biomass that can be formed and supported within a system depends on energy availability, thus sites that have higher levels and fluxes of energy have greater potential to support life. Results show that iron- and sulfur-oxidation reactions would have been the most favorable redox reactions in aqueous systems where groundwater and rock interacted at or near the surface. These types of reactions could have supported between 0.05 and 1.0 grams (dry weight) of biomass per mole of iron or sulfur. The hydrothermal environments would have had numerous redox reactions in the H-O-C-S-Fe-Mn system that could have provided sufficient metabolic energy for potential microorganisms. Methanotrophy, for example, provides the greatest amount of energy at ~760 kJ per mole of methane, which is equivalent to 0.6 grams (dry weight) of biomass. Additional results show that varying the amount of CO2 in the martian atmosphere or adjusting the water:rock ratios has little effect on the resulting Gibbs free energies. The martian values reported here for available free energy are similar to values calculated for terrestrial hydrothermal settings in which life is known to be abundant. In summary, the models indicate that martian aqueous environments were likely to have been habitable over a wide range of conditions when liquid water was more abundant and would have been able to supply a large amount of energy for potential organisms.
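
    The available energy in such calculations is the Gibbs free energy of reaction under in situ conditions, ΔGr = ΔG°r + RT ln Q. A minimal sketch follows; the standard free energy, activity quotient and energy-to-biomass conversion factor are placeholder assumptions, not values from the study's geochemical models.

      import numpy as np

      R = 8.314462618e-3  # kJ mol^-1 K^-1

      def gibbs_energy(delta_g0_kj, T_kelvin, log10_Q):
          """Gibbs free energy of reaction under in situ conditions (kJ per mole)."""
          return delta_g0_kj + R * T_kelvin * np.log(10.0) * log10_Q

      # Illustrative aerobic methanotrophy: CH4 + 2 O2 -> CO2 + 2 H2O.
      # delta_g0 and the activity quotient below are placeholders, not the paper's values.
      dG = gibbs_energy(delta_g0_kj=-820.0, T_kelvin=278.15, log10_Q=2.0)

      # Convert energy yield to sustainable biomass, assuming a fixed synthesis cost
      # (kJ per gram dry weight); the paper's conversion factor may differ.
      KJ_PER_G_BIOMASS = 1000.0
      print(f"dG = {dG:.0f} kJ/mol CH4 -> ~{abs(dG) / KJ_PER_G_BIOMASS:.2f} g dry biomass per mole")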

  18. Species determination - Can we detect and quantify meat adulteration?

    PubMed

    Ballin, Nicolai Z; Vogensen, Finn K; Karlsson, Anders H

    2009-10-01

    Proper labelling of meat products is important to support fair trade and to enable consumers to make informed choices. However, it has been shown that the species content declared on meat product labels, expressed as weight/weight (w/w), was incorrect in more than 20% of cases. Enforcement of labelling regulations requires reliable analytical methods. Analytical methods are often based on protein or DNA measurements, which are not directly comparable to labelled meat expressed as w/w. This review discusses a wide range of analytical methods with focus on their ability to quantify and their limits of detection (LOD). In particular, problems associated with converting quantitative DNA-based results to meat content (w/w) are discussed. The hope is to make researchers aware of the problems of expressing DNA results as meat content (w/w) in order to find better alternatives. One alternative is to express DNA results as genome/genome equivalents. PMID:20416768

  19. Quantifying methane flux from lake sediments using multibeam sonar

    NASA Astrophysics Data System (ADS)

    Scandella, B.; Urban, P.; Delwiche, K.; Greinert, J.; Hemond, H.; Ruppel, C. D.; Juanes, R.

    2013-12-01

    Methane is a potent greenhouse gas, and the production and emission of methane from sediments in wetlands, lakes and rivers both contributes to and may be exacerbated by climate change. In some of these shallow-water settings, methane fluxes may be largely controlled by episodic venting that can be triggered by drops in hydrostatic pressure. Even with better constraints on the mechanisms for gas release, quantifying these fluxes has remained a challenge due to rapid spatiotemporal changes in the patterns of bubble emissions from the sediments. The research presented here uses a fixed-location Imagenex DeltaT 837B multibeam sonar to estimate methane-venting fluxes from organic-rich lake sediments over a large area (~400 m2) and over a multi-season deployment period with unprecedented spatial and temporal resolution. Simpler, single-beam sonar systems have been used in the past to estimate bubble fluxes in a variety of settings. Here we extend this methodology to a multibeam system by means of: (1) detailed calibration of the sonar signal against imposed bubble streams, and (2) validation against an in situ independent record of gas flux captured by overlying bubble traps. The calibrated sonar signals then yield estimates of the methane flux with high spatial resolution (~1 m) and temporal frequency (6 Hz) from a portion of the deepwater basin of Upper Mystic Lake, MA, USA, a temperate eutrophic kettle lake. These results in turn inform mathematical models of methane transport and release from the sediments, which reproduce with high fidelity the ebullitive response to hydrostatic pressure variations. In addition, the detailed information about spatial variability of methane flux derived from sonar records is used to estimate the uncertainty associated with upscaling flux measurements from bubble traps to the scale of the sonar observation area. Taken together, these multibeam sonar measurements and analysis provide a novel quantitative approach for the assessment of methane fluxes from shallow-water bodies. (Figure caption: Time series showing how the uncalibrated, sonar-detected flux estimate (black) varies inversely with the hydrostatic pressure (meters of water, blue) at 5-minute resolution during April 2012, overlain with the time series of scaled gas flux from a mechanistic numerical model forced by the same hydrostatic pressure signal (orange).)

  20. Quantifying Flow Resistance of Mountain Streams Using the HHT Approach

    NASA Astrophysics Data System (ADS)

    Zhang, L.; Fu, X.

    2014-12-01

    This study quantifies the flow resistance of mountain streams with gravel beds and pronounced bed forms. The motivation is to follow the earlier idea (Robert, 1990) that the bed surface can be divided into micro-scale and macro-scale roughness. We processed field data of longitudinal bed profiles of the Longxi River, Sichuan Province, China, using the Hilbert-Huang Transform (HHT). Each longitudinal profile was decomposed into a set of curves with different frequencies of spatial fluctuation, and the corresponding spectrogram was obtained. We assumed that certain high- and low-frequency curves correspond to the micro- and macro-scale roughness of the stream bed, respectively. From the spectrogram we specified a characteristic height and length representing the macro-scale bed forms that account for form roughness, and estimated the bed form roughness as proportional to the ratio of the height to the length multiplied by the height (Yang et al., 2005). We also defined a parameter, Sp, as the sinuosity of the highest-frequency curve, and used it as the measure of micro-scale roughness. The effect of bed material size was taken into account through the product of d50/R and Sp, where d50 is the median sediment size and R is the hydraulic radius. The macro- and micro-scale roughness parameters were then combined nonlinearly to evaluate the flow resistance caused by the interplay of skin friction and form drag. Validation results show that the coefficient of determination (R²) can reach as high as 0.84 in the case of the Longxi River. Future studies will focus on verification against more field data as well as the partitioning of skin friction and form drag. Key words: flow resistance; roughness; HHT; spectrogram; form drag. Robert, A. (1990), Boundary roughness in coarse-grained channels, Prog. Phys. Geogr., 14(1), 42-69. Yang, S.-Q., S.-K. Tan, and S.-Y. Lim (2005), Flow resistance and bed form geometry in a wide alluvial channel, Water Resour. Res., 41, W09419.

  1. Quantifying fluvial bedrock erosion using repeat terrestrial Lidar

    NASA Astrophysics Data System (ADS)

    Cook, Kristen

    2013-04-01

    The Da'an River Gorge in western Taiwan provides a unique opportunity to observe the formation and evolution of a natural bedrock gorge. The 1.2 km long and up to 20 m deep gorge has formed since 1999 in response to uplift of the riverbed during the Chi-Chi earthquake. The extremely rapid pace of erosion enables us to observe both downcutting and channel widening over short time periods. We have monitored the evolution of the gorge since 2009 using repeat RTK GPS surveys and terrestrial Lidar scans. GPS surveys of the channel profile are conducted frequently, with 24 surveys to date, while Lidar scans are conducted after major floods, or after 5-9 months without a flood, for a total of 8 scans to date. The Lidar data are most useful for recording erosion of channel walls, which is quite episodic and highly variable along the channel. By quantifying the distribution of wall erosion in space and time, we can improve our understanding of channel widening processes and of the development of the channel planform, particularly the growth of bends. During the summer of 2012, the Da'an catchment experienced two large storm events, a meiyu (plum rain) event on June 10-13 that brought 800 mm of rain and a typhoon on August 1-3 that brought 650 mm of rain. The resulting floods had significant geomorphic effects on the Da'an gorge, including up to 10s of meters of erosion in some sections of the gorge walls. We quantify these changes using Lidar surveys conducted on June 7, July 3, and August 30. Channel wall collapses also occur in the absence of large floods, and we use scans from August 23, 2011 and June 7, 2012 to quantify erosion during a period that included a number of small floods, but no large ones. This allows us to compare the impact of 9 months of normal conditions to the impact of short-duration extreme events. The observed variability of erosion in space and time highlights the need for 3D techniques such as terrestrial Lidar to properly quantify erosion in this setting.

  2. Quantifying unsteadiness and dynamics of pulsatory volcanic activity

    NASA Astrophysics Data System (ADS)

    Dominguez, L.; Pioli, L.; Bonadonna, C.; Connor, C. B.; Andronico, D.; Harris, A. J. L.; Ripepe, M.

    2016-06-01

    Pulsatory eruptions are marked by a sequence of explosions which can be separated by time intervals ranging from a few seconds to several hours. The quantification of the periodicities associated with these eruptions is essential not only for the comprehension of the mechanisms controlling explosivity, but also for classification purposes. We focus on the dynamics of pulsatory activity and quantify unsteadiness based on the distribution of the repose time intervals between single explosive events in relation to magma properties and eruptive styles. A broad range of pulsatory eruption styles is considered, including Strombolian, violent Strombolian and Vulcanian explosions. We find a general relationship between the median of the observed repose times in eruptive sequences and the viscosity of magma, given by η ≈ 100 · t_median. This relationship applies to the complete range of magma viscosities considered in our study (10^2 to 10^9 Pa s) regardless of the eruption length, eruptive style and associated plume heights, suggesting that viscosity is the main magma property controlling eruption periodicity. Furthermore, the analysis of the explosive sequences in terms of failure time through statistical survival analysis provides further information: the dynamics of pulsatory activity can be successfully described in terms of frequency and regularity of the explosions, quantified based on the log-logistic distribution. A linear relationship is identified between the log-logistic parameters, μ and s. This relationship is useful for quantifying differences among eruptive styles, from very frequent and regular mafic events (Strombolian activity) to more sporadic and irregular Vulcanian explosions in silicic systems. The time scale controlled by the parameter μ, as a function of the median of the distribution, can therefore be correlated with the viscosity of magmas, while the complexity of the erupting system, including magma rise rate, degassing and fragmentation efficiency, can also be described by the log-logistic parameter s, which is found to increase from regular mafic systems to highly variable silicic systems. These results suggest that the periodicity of explosions, quantified in terms of the distribution of repose times, can give fundamental information about the system dynamics and change regularly across eruptive styles (i.e., Strombolian to Vulcanian), allowing for direct comparison and quantification of different types of pulsatory activity during these eruptions.
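
    The survival-analysis step can be reproduced by fitting a log-logistic distribution to the repose times and reading off its median for the η ≈ 100 · t_median scaling. In scipy the log-logistic is the Fisk distribution; the sketch below uses synthetic repose times, and the scipy parameterisation does not correspond one-to-one to the μ and s parameters discussed in the paper.

      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)

      # Synthetic repose intervals between explosions (seconds), e.g. Strombolian-like.
      repose_s = stats.fisk.rvs(c=2.5, scale=60.0, size=500, random_state=rng)

      # Fit a log-logistic (Fisk) distribution; fix the location at zero for repose times.
      c_hat, loc_hat, scale_hat = stats.fisk.fit(repose_s, floc=0.0)
      t_median = stats.fisk.median(c_hat, loc=loc_hat, scale=scale_hat)

      # Empirical scaling reported in the abstract: eta ~ 100 * t_median (Pa s).
      eta_estimate = 100.0 * t_median
      print(f"median repose = {t_median:.1f} s  ->  inferred viscosity ~ {eta_estimate:.0f} Pa s")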

  3. Quantifying scale relationships in snow distributions

    NASA Astrophysics Data System (ADS)

    Deems, Jeffrey S.

    2007-12-01

    Spatial distributions of snow in mountain environments represent the time integration of accumulation and ablation processes, and are strongly and dynamically linked to mountain hydrologic, ecologic, and climatic systems. Accurate measurement and modeling of the spatial distribution and variability of the seasonal mountain snowpack at different scales are imperative for water supply and hydropower decision-making, for investigations of land-atmosphere interaction or biogeochemical cycling, and for accurate simulation of earth system processes and feedbacks. Assessment and prediction of snow distributions in complex terrain are heavily dependent on scale effects, as the pattern and magnitude of variability in snow distributions depends on the scale of observation. Measurement and model scales are usually different from process scales, and thereby introduce a scale bias to the estimate or prediction. To quantify this bias, or to properly design measurement schemes and model applications, the process scale must be known or estimated. Airborne Light Detection And Ranging (lidar) products provide high-resolution, broad-extent altimetry data for terrain and snowpack mapping, and allow an application of variogram fractal analysis techniques to characterize snow depth scaling properties over lag distances from 1 to 1000 meters. Snow depth patterns as measured by lidar at three Colorado mountain sites exhibit fractal (power law) scaling patterns over two distinct scale ranges, separated by a distinct break at the 15-40 m lag distance, depending on the site. Each fractal range represents a range of separation distances over which snow depth processes remain consistent. The scale break between fractal regions is a characteristic scale at which snow depth process relationships change fundamentally. Similar scale break distances in vegetation topography datasets suggest that the snow depth scale break represents a change in wind redistribution processes from wind/vegetation interactions at small lags to wind/terrain interactions at larger lags. These snow depth scale characteristics are internally consistent, directly describe the scales of action of snow accumulation, redistribution, and ablation processes, and inform scale considerations for measurement and modeling. Snow process models are designed to represent processes acting over specific scale ranges. However, since the incorporated processes vary with scale, the model performance cannot be scale-independent. Thus, distributed snow models must represent the appropriate process interactions at each scale in order to produce reasonable simulations of snow depth or snow water equivalent (SWE) variability. By comparing fractal dimensions and scale break lengths of modeled snow depth patterns to those derived from lidar observations, the model process representations can be evaluated and subsequently refined. Snow depth simulations from the SnowModel seasonal snow process model exhibit fractal patterns, and a scale break can be produced by including a sub-model that simulates fine-scale wind drifting patterns. The fractal dimensions provide important spatial scaling information that can inform refinement of process representations. This collection of work provides a new application of methods developed in other geophysical fields for quantifying scale and variability relationships.
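
    The fractal analysis rests on the slope of the log-log variogram of snow depth, with a break in slope marking the characteristic scale. A minimal one-dimensional sketch on a synthetic transect is shown below (break detection and the two-dimensional surface case are omitted); the data and spacing are assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      depth = np.cumsum(rng.normal(0, 0.05, size=2000))  # synthetic snow-depth transect (m)
      dx = 1.0                                            # sample spacing (m)

      def variogram(z, lags):
          """Empirical semivariance gamma(h) = 0.5 * mean[(z(x+h) - z(x))^2]."""
          return np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags])

      lags = np.arange(1, 200)                 # lag distances in samples (1-200 m here)
      gamma = variogram(depth, lags)

      # Fractal dimension from the log-log slope beta over the chosen scaling range.
      beta, _ = np.polyfit(np.log(lags * dx), np.log(gamma), 1)
      D = 2.0 - beta / 2.0                     # profile convention; a surface uses D = 3 - beta/2
      print(f"variogram slope beta = {beta:.2f}, fractal dimension D ~ {D:.2f}")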

  4. Quantifying Riverscape Connectivity with Graph Theory

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels, whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp, but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have turned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries dams are actually being removed. However, this is not the case in the developing world, where hydropower is seen as a key element of low-emissions power security. For example, several dam projects are envisaged in Himalayan catchments in the next two decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation that can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the connectivity structure of the Gangetic riverscape with fluvial remote sensing. Our study reach extends from the heavily dammed headwaters of the Bhagirathi, Mandakini and Alaknanda rivers, which form the source of the Ganga, to Allahabad ~900 km downstream on the main stem. We use Landsat-8 imagery as the baseline dataset. Channel width along the Ganga (i.e. Ganges) is often several kilometres. Therefore, the pan-sharpened 15 m pixels of Landsat-8 are in fact capable of resolving inner channel features for over 80% of the channel length, thus allowing a riverscape approach to be adopted. We examine the following connectivity metrics: size distribution of connected components, betweenness centrality and the integral index of connectivity. A geographic perspective is added by mapping local (25 km-scale) values for these metrics in order to examine spatial patterns of connectivity. This approach allows us to map impacts of dam construction and has the potential to inform policy decisions in the area as well as open up new avenues of investigation.
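
    Once reaches and their connections are abstracted as a graph, the metrics listed here are available in standard graph libraries. The sketch below uses networkx on a toy network in which one edge is removed to mimic a dam; the node names are illustrative, and the integral index of connectivity is computed in a simplified equal-area form.

      import networkx as nx

      # Toy riverscape graph: nodes are channel reaches, edges are navigable connections.
      G = nx.Graph()
      G.add_edges_from([
          ("headwater_A", "confluence_1"), ("headwater_B", "confluence_1"),
          ("confluence_1", "mainstem_1"), ("mainstem_1", "mainstem_2"),
          ("mainstem_2", "mainstem_3"),
      ])

      def connectivity_summary(g):
          comps = [len(c) for c in nx.connected_components(g)]
          bc = nx.betweenness_centrality(g)
          # Integral index of connectivity (IIC) for unweighted reaches of equal size,
          # a simplified form of the landscape-ecology metric.
          n = g.number_of_nodes()
          iic = sum(1.0 / (1.0 + d)
                    for _, dists in nx.all_pairs_shortest_path_length(g)
                    for d in dists.values()) / n ** 2
          return comps, bc, iic

      comps, bc, iic = connectivity_summary(G)
      print("before dam:", comps, "IIC =", round(iic, 3), "hub =", max(bc, key=bc.get))
      G.remove_edge("mainstem_1", "mainstem_2")   # dam fragments the network
      comps, bc, iic = connectivity_summary(G)
      print("after dam: ", comps, "IIC =", round(iic, 3))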

  5. Quantifying landscape resilience using vegetation indices

    NASA Astrophysics Data System (ADS)

    Eddy, I. M. S.; Gergel, S. E.

    2014-12-01

    Landscape resilience refers to the ability of systems to adapt to and recover from disturbance. In pastoral landscapes, degradation can be measured in terms of increased desertification and/or shrub encroachment. In many countries across Central Asia, the use and resilience of pastoral systems has changed markedly over the past 25 years, influenced by centralized Soviet governance, private property rights and, recently, communal resource governance. In Kyrgyzstan, recent governance reforms were a response to the increasing degradation of pastures attributed to livestock overgrazing. Our goal is to examine and map the landscape-level factors that influence overgrazing throughout successive governance periods. Here, we map and examine some of the spatial factors influencing landscape resilience in agro-pastoral systems in the Kyrgyz Republic, where pastures occupy >50% of the country's area. We ask three questions: 1) Which mechanisms of pasture degradation (desertification vs. shrub encroachment) are detectable using remote sensing vegetation indices? 2) Are these degraded pastures associated with landscape features that influence herder mobility and accessibility (e.g., terrain, distance to other pastures)? 3) Have these patterns changed through successive governance periods? Using a chronosequence of Landsat imagery (1999-2014), NDVI and other vegetation indices were used to identify trends in pasture condition during the growing season. Least-cost path distances as well as graph-theoretic indices were derived from topographic factors to assess landscape connectivity (from villages to pastures and among pastures). Fieldwork was used to assess the feasibility and accuracy of this approach using the most recent imagery. Previous research concluded that low herder mobility hindered pasture use, thus we expect the distance from pasture to village to be an important predictor of pasture condition. This research will quantify the magnitude of pastoral degradation and test assumptions regarding sustainable pastoral management. As grazing is the most extensive land use on Earth, understanding the broad-scale factors that influence the resilience of pastoral systems is an important issue globally.

  6. Quantifying human vitamin kinetics using AMS

    SciTech Connect

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population are available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  7. Quantifying collective attention from tweet stream.

    PubMed

    Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

    2013-01-01

    Online social media are increasingly facilitating our social interactions, thereby making available a massive "digital fossil" of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of "collective attention" on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or "tweets." Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attention, including one related to the Tohoku-oki earthquake. "Retweet" networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era. PMID:23637913
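
    The core of the detection step is the Jensen-Shannon divergence between a regular (circadian) activity distribution and the observed one. A minimal sketch follows; the hourly tweet-volume profiles are made up for illustration, and the binning is simpler than in the paper.

      import numpy as np

      def jensen_shannon_divergence(p, q):
          """JSD in bits between two discrete distributions (arrays summing to 1)."""
          p, q = np.asarray(p, float), np.asarray(q, float)
          m = 0.5 * (p + q)
          def kl(a, b):
              mask = a > 0
              return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
          return 0.5 * kl(p, m) + 0.5 * kl(q, m)

      # Hourly tweet-volume profiles normalised to probability distributions:
      # a typical circadian day versus a day with a burst of attention around hour 14.
      regular = np.array([2, 1, 1, 1, 1, 2, 4, 6, 8, 9, 9, 9, 9, 9, 9,
                          9, 9, 9, 9, 8, 7, 6, 4, 3], float)
      burst = regular.copy()
      burst[14] *= 6          # sudden spike in activity
      regular /= regular.sum()
      burst /= burst.sum()

      print(f"JSD (regular vs burst day) = {jensen_shannon_divergence(regular, burst):.3f} bits")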

  8. Comprehensive analysis of individual pulp fiber bonds quantifies the mechanisms of fiber bonding in paper

    NASA Astrophysics Data System (ADS)

    Hirn, Ulrich; Schennach, Robert

    2015-05-01

    The process of papermaking requires substantial amounts of energy and wood, which contributes to large environmental costs. In order to optimize the production of papermaking to suit its many applications in materials science and engineering, a quantitative understanding of the bonding forces between the individual pulp fibers is of importance. Here we show the first approach to quantify the bonding energies contributed by the individual bonding mechanisms. We calculated the impact on the bonding energy of the following mechanisms necessary for paper formation: mechanical interlocking, interdiffusion, capillary bridges, hydrogen bonding, van der Waals forces, and Coulomb forces. Experimental results quantify the area in molecular contact necessary for bonding. Atomic force microscopy experiments derive the impact of mechanical interlocking. Capillary bridges also contribute to the bond. A model based on the crystal structure of cellulose leads to values for the chemical bonds. In contrast to the general belief, which favors hydrogen bonding, van der Waals bonds play the most important role according to our model. Comparison with experimentally derived bond energies supports the presented model. This study characterizes bond formation between pulp fibers, leading to insight that could potentially be used to optimize the papermaking process while reducing energy and wood consumption.

  9. Comprehensive analysis of individual pulp fiber bonds quantifies the mechanisms of fiber bonding in paper.

    PubMed

    Hirn, Ulrich; Schennach, Robert

    2015-01-01

    The process of papermaking requires substantial amounts of energy and wood, which contributes to large environmental costs. In order to optimize the production of papermaking to suit its many applications in materials science and engineering, a quantitative understanding of the bonding forces between the individual pulp fibers is of importance. Here we show the first approach to quantify the bonding energies contributed by the individual bonding mechanisms. We calculated the impact on the bonding energy of the following mechanisms necessary for paper formation: mechanical interlocking, interdiffusion, capillary bridges, hydrogen bonding, van der Waals forces, and Coulomb forces. Experimental results quantify the area in molecular contact necessary for bonding. Atomic force microscopy experiments derive the impact of mechanical interlocking. Capillary bridges also contribute to the bond. A model based on the crystal structure of cellulose leads to values for the chemical bonds. In contrast to the general belief, which favors hydrogen bonding, van der Waals bonds play the most important role according to our model. Comparison with experimentally derived bond energies supports the presented model. This study characterizes bond formation between pulp fibers, leading to insight that could potentially be used to optimize the papermaking process while reducing energy and wood consumption. PMID:26000898

  10. Predicting Outcome in Comatose Patients: The Role of EEG Reactivity to Quantifiable Electrical Stimuli

    PubMed Central

    Liu, Gang; Su, Yingying; Liu, Yifei; Jiang, Mengdi; Zhang, Yan; Zhang, Yunzhou; Gao, Daiquan

    2016-01-01

    Objective. To test the value of quantifiable electrical stimuli as a reliable method to assess electroencephalogram reactivity (EEG-R) for the early prognostication of outcome in comatose patients. Methods. EEG was recorded in consecutive adults in coma after cardiopulmonary resuscitation (CPR) or stroke. EEG-R to standard electrical stimuli was tested. Each patient received a 3-month follow-up by the Glasgow-Pittsburgh cerebral performance categories (CPC) or modified Rankin scale (mRS) score. Results. Twenty-two patients met the inclusion criteria. In the CPR group, 6 of 7 patients with EEG-R had good outcomes (positive predictive value (PPV), 85.7%) and 4 of 5 patients without EEG-R had poor outcomes (negative predictive value (NPV), 80%). The sensitivity and specificity were 85.7% and 80%, respectively. In the stroke group, 6 of 7 patients with EEG-R had good outcomes (PPV, 85.7%); all of the 3 patients without EEG-R had poor outcomes (NPV, 100%). The sensitivity and specificity were 100% and 75%, respectively. Of all patients, the presence of EEG-R showed 92.3% sensitivity, 77.7% specificity, 85.7% PPV, and 87.5% NPV. Conclusion. EEG-R to quantifiable electrical stimuli might be a good positive predictive factor for the prognosis of outcome in comatose patients after CPR or stroke. PMID:27127529

  11. Quantifying temporal bone morphology of great apes and humans: an approach using geometric morphometrics

    PubMed Central

    Lockwood, Charles A; Lynch, John M; Kimbel, William H

    2002-01-01

    The hominid temporal bone offers a complex array of morphology that is linked to several different functional systems. Its frequent preservation in the fossil record gives the temporal bone added significance in the study of human evolution, but its morphology has proven difficult to quantify. In this study we use techniques of 3D geometric morphometrics to quantify differences among humans and great apes and discuss the results in a phylogenetic context. Twenty-three landmarks on the ectocranial surface of the temporal bone provide a high level of anatomical detail. Generalized Procrustes analysis (GPA) is used to register (adjust for position, orientation and scale) landmark data from 405 adults representing Homo, Pan, Gorilla and Pongo. Principal components analysis of residuals from the GPA shows that the major source of variation is between humans and apes. Human characteristics such as a coronally orientated petrous axis, a deep mandibular fossa, a projecting mastoid process, and reduced lateral extension of the tympanic element strongly impact the analysis. In phenetic cluster analyses, gorillas and orangutans group together with respect to chimpanzees, and all apes group together with respect to humans. Thus, the analysis contradicts depictions of African apes as a single morphotype. Gorillas and orangutans lack the extensive preglenoid surface of chimpanzees, and their mastoid processes are less medially inflected. These and other characters shared by gorillas and orangutans are probably primitive for the African hominid clade. PMID:12489757

  12. Quantifying the Short-Term Costs of Conservation Interventions for Fishers at Lake Alaotra, Madagascar

    PubMed Central

    Wallace, Andrea P. C.; Milner-Gulland, E. J.; Jones, Julia P. G.; Bunnefeld, Nils; Young, Richard; Nicholson, Emily

    2015-01-01

    Artisanal fisheries are a key source of food and income for millions of people, but if poorly managed, fishing can have declining returns as well as impacts on biodiversity. Management interventions such as spatial and temporal closures can improve fishery sustainability and reduce environmental degradation, but may carry substantial short-term costs for fishers. The Lake Alaotra wetland in Madagascar supports a commercially important artisanal fishery and provides habitat for a Critically Endangered primate and other endemic wildlife of conservation importance. Drawing on detailed data from more than 1,600 fisher catches, we used linear mixed effects models to explore and quantify relationships between catch weight, effort, and spatial and temporal restrictions, to identify drivers of fisher behaviour and quantify the potential effect of fishing restrictions on catch. We found that restricted area interventions and fishery closures would generate direct short-term costs through reduced catch and income, and that these costs vary between groups of fishers using different gear. Our results show that conservation interventions can have uneven impacts on local people with different fishing strategies. This information can be used to formulate management strategies that minimise the adverse impacts of interventions, increase local support and compliance, and therefore maximise conservation effectiveness. PMID:26107284
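
    A linear mixed-effects model with a random intercept per fisher is the kind of specification described here. The sketch below uses statsmodels on a synthetic data frame; the column names, effect sizes and model formula are illustrative assumptions, not the study's actual variables.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(7)
      n = 400
      fishers = rng.integers(0, 40, size=n)                 # 40 hypothetical fishers
      effort_h = rng.uniform(1, 8, size=n)                  # hours fished per trip
      gear = rng.choice(["net", "trap"], size=n)
      closure = rng.choice([0, 1], size=n)                  # 1 = restricted area/period

      # Synthetic catch: fisher-specific intercepts + effort effect - closure penalty.
      fisher_effect = rng.normal(0, 0.3, size=40)[fishers]
      log_catch = (0.5 + 0.25 * effort_h - 0.4 * closure
                   + 0.2 * (gear == "trap") + fisher_effect + rng.normal(0, 0.3, size=n))

      df = pd.DataFrame(dict(log_catch=log_catch, effort_h=effort_h,
                             gear=gear, closure=closure, fisher=fishers))

      # Random intercept for each fisher; fixed effects for effort, gear and closure.
      model = smf.mixedlm("log_catch ~ effort_h + C(gear) + closure", df, groups=df["fisher"])
      print(model.fit().summary())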

  13. Quantifying DMS-cloud-climate interactions using the ECHAM5-HAMMOZ model

    NASA Astrophysics Data System (ADS)

    Thomas, M.; Suntharalingam, P.; Rast, S.; Pozzoli, L.; Feichter, J.; Lenton, T.

    2009-04-01

    The CLAW hypothesis (Charlson et al. 1987) proposes a feedback loop between ocean ecosystems and the earth's climate. The exact contribution of each process in their proposed feedback loop is still uncertain. Here we use a state-of-the-art general circulation model, ECHAM5-HAMMOZ, to assess changes in cloud microphysical properties arising from prescribed perturbations to oceanic dimethyl sulphide (DMS) emissions in a present-day climate scenario. ECHAM5-HAMMOZ consists of three interlinked modules: the atmospheric model ECHAM5, the aerosol module HAM and the tropospheric chemistry module MOZ. This study focuses on the atmosphere over the southern oceans, where anthropogenic influence is minimal. We investigate changes in a range of aerosol and cloud properties to establish and quantify the linkages between them. We focus on changes in cloud droplet number concentration (CDNC), cloud droplet effective radii, total cloud cover and radiative forcing due to changes in DMS. Our preliminary results suggest that ECHAM5-HAMMOZ produces a realistic simulation of the first and second aerosol indirect effects over the Southern Ocean. The regions with higher DMS emissions show an increase in CDNC, a decrease in cloud effective radius and an increase in cloud cover. The magnitude of these changes is quantified with the ECHAM5-HAMMOZ model and will be discussed in detail.

  14. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying this influence is complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search process, including the individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.
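
    The ANOVA step decomposes variance in a performance criterion across algorithm-parameter settings and their interactions. A minimal sketch with statsmodels follows, using a synthetic factorial experiment over two illustrative SCE-UA parameters; the parameter values and response surface are made up.

      import itertools
      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf
      from statsmodels.stats.anova import anova_lm

      rng = np.random.default_rng(3)

      # Synthetic factorial design: a convergence-speed score per parameter combination.
      rows = []
      for n_complexes, n_points, rep in itertools.product([2, 4, 8, 16], [5, 10, 20], range(5)):
          speed = (100 - 3 * n_complexes - 0.5 * n_points
                   + 0.2 * n_complexes * n_points          # interaction effect
                   + rng.normal(0, 4))
          rows.append(dict(n_complexes=n_complexes, n_points=n_points, speed=speed))
      df = pd.DataFrame(rows)

      # Two-way ANOVA with interaction; sums of squares give each term's contribution.
      fit = smf.ols("speed ~ C(n_complexes) * C(n_points)", data=df).fit()
      table = anova_lm(fit, typ=2)
      table["variance_share"] = table["sum_sq"] / table["sum_sq"].sum()
      print(table[["sum_sq", "variance_share", "PR(>F)"]])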

  15. Quantifying Land Use Impacts on Biodiversity: Combining Species-Area Models and Vulnerability Indicators.

    PubMed

    Chaudhary, Abhishek; Verones, Francesca; de Baan, Laura; Hellweg, Stefanie

    2015-08-18

    Habitat degradation and subsequent biodiversity damage often take place far from the place of consumption because of globalization and the increasing level of international trade. Informing consumers and policy makers about the biodiversity impacts "hidden" in the life cycle of imported products is an important step toward achieving sustainable consumption patterns. Spatially explicit methods are needed in life cycle assessment to accurately quantify biodiversity impacts of products and processes. We use the Countryside species-area relationship (SAR) to quantify regional species loss due to land occupation and transformation for five taxa and six land use types in 804 terrestrial ecoregions. Further, we calculate vulnerability scores for each ecoregion based on the fraction of each species' geographic range (endemic richness) hosted by the ecoregion and the IUCN assigned threat level of each species. Vulnerability scores are multiplied with SAR-predicted regional species loss to estimate potential global extinctions per unit of land use. As a case study, we assess the land use biodiversity impacts of 1 kg of bioethanol produced using six different feed stocks in different parts of the world. Results show that the regions with highest biodiversity impacts differed markedly when the vulnerability of species was included. PMID:26197362
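
    The Countryside SAR predicts regional species loss from the areas of natural and human-modified land-use types weighted by taxon-specific affinities. A minimal sketch of the core formula follows; the areas, affinities and z-value are placeholders, not the paper's characterisation factors.

      import numpy as np

      def countryside_sar_loss(s_org, a_org, areas_by_landuse, affinities, z=0.25):
          """
          Regional species loss predicted by the Countryside species-area relationship:
          S_lost = S_org * (1 - ((sum_i h_i * A_i) / A_org) ** z)
          where h_i is the taxon's affinity for land-use type i (1 = natural habitat).
          """
          effective_area = sum(affinities[k] * a for k, a in areas_by_landuse.items())
          return s_org * (1.0 - (effective_area / a_org) ** z)

      # Placeholder ecoregion: 1000 bird species, 50,000 km2 of original habitat.
      areas = {"natural": 20_000.0, "pasture": 15_000.0, "annual_crops": 10_000.0, "urban": 5_000.0}
      affinity = {"natural": 1.0, "pasture": 0.4, "annual_crops": 0.2, "urban": 0.05}  # assumed

      lost = countryside_sar_loss(s_org=1000, a_org=50_000.0,
                                  areas_by_landuse=areas, affinities=affinity)
      print(f"predicted regional species loss: {lost:.0f} species")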

  16. Quantifying the threat of extinction from Muller's ratchet in the diploid Amazon molly (Poecilia formosa)

    PubMed Central

    2008-01-01

    Background The Amazon molly (Poecilia formosa) is a small unisexual fish that has been suspected of being threatened by extinction from the stochastic accumulation of slightly deleterious mutations that is caused by Muller's ratchet in non-recombining populations. However, no detailed quantification of the extent of this threat is available. Results Here we quantify genomic decay in this fish by using a simple model of Muller's ratchet with the most realistic parameter combinations available employing the evolution@home global computing system. We also describe simple extensions of the standard model of Muller's ratchet that allow us to deal with selfing diploids, triploids and mitotic recombination. We show that Muller's ratchet creates a threat of extinction for the Amazon molly for many biologically realistic parameter combinations. In most cases, extinction is expected to occur within a time frame that is less than previous estimates of the age of the species, leading to a genomic decay paradox. Conclusion How then does the Amazon molly survive? Several biological processes could individually or in combination solve this genomic decay paradox, including paternal leakage of undamaged DNA from sexual sister species, compensatory mutations and many others. More research is needed to quantify the contribution of these potential solutions towards the survival of the Amazon molly and other (ancient) asexual species. PMID:18366680
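
    The standard model of Muller's ratchet can be illustrated with a small Wright-Fisher simulation of an asexual population, where a 'click' of the ratchet occurs whenever the least-loaded mutation class is lost. The sketch below is far smaller than the realistic parameter combinations run on evolution@home and ignores the diploid, selfing and mitotic-recombination extensions; all parameter values are illustrative.

      import numpy as np

      rng = np.random.default_rng(11)

      def simulate_ratchet(pop_size=1000, u=0.5, s=0.02, generations=2000):
          """Asexual Wright-Fisher model; returns the number of ratchet clicks.

          u: genomic deleterious mutation rate per generation
          s: selection coefficient per mutation (fitness multiplied by 1 - s per mutation)
          """
          mutations = np.zeros(pop_size, dtype=int)   # deleterious mutations per individual
          best_so_far = 0
          clicks = 0
          for _ in range(generations):
              fitness = (1.0 - s) ** mutations
              parents = rng.choice(pop_size, size=pop_size, p=fitness / fitness.sum())
              mutations = mutations[parents] + rng.poisson(u, size=pop_size)
              if mutations.min() > best_so_far:        # least-loaded class lost -> click
                  clicks += mutations.min() - best_so_far
                  best_so_far = mutations.min()
          return clicks

      print("ratchet clicks in 2000 generations:", simulate_ratchet())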

  17. VA-Index: Quantifying Assortativity Patterns in Networks with Multidimensional Nodal Attributes

    PubMed Central

    Pelechrinis, Konstantinos; Wei, Dong

    2016-01-01

    Network connections have been shown to be correlated with structural or external attributes of the network vertices in a variety of cases. Given the prevalence of this phenomenon, network scientists have developed metrics to quantify its extent. In particular, the assortativity coefficient is used to capture the level of correlation between a single-dimensional attribute (categorical or scalar) of the network nodes and the observed connections, i.e., the edges. Nevertheless, in many cases a multi-dimensional, i.e., vector, feature of the nodes is of interest. Such attributes can describe complex behavioral patterns (e.g., mobility) of the network entities. To date little attention has been given to this setting, and there has not been a general and formal treatment of this problem. In this study we develop a metric, the vector assortativity index (VA-index for short), based on network randomization and (empirical) statistical hypothesis testing, that is able to quantify the assortativity patterns of a network with respect to a vector attribute. Our extensive experimental results on synthetic network data show that the VA-index outperforms a baseline extension of the assortativity coefficient, which has been used in the literature to cope with similar cases. Furthermore, the VA-index can be calibrated (in terms of parameters) fairly easily, while its benefits increase with the (co-)variance of the vector elements, where the baseline systematically over- or underestimates the true mixing patterns of the network. PMID:26816262
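
    The VA-index is built on network randomization and empirical hypothesis testing: the similarity of connected nodes' attribute vectors is compared against a null distribution obtained by shuffling the attributes over the nodes. The sketch below is a simplified stand-in in that spirit, using mean cosine similarity as the edge statistic; it is not the authors' exact formulation.

      import numpy as np
      import networkx as nx

      rng = np.random.default_rng(5)

      def edge_similarity(G, attrs):
          """Mean cosine similarity of attribute vectors across all edges."""
          sims = []
          for u, v in G.edges():
              a, b = attrs[u], attrs[v]
              sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
          return float(np.mean(sims))

      # Toy assortative network: two communities whose nodes carry similar 3-d attributes.
      G = nx.planted_partition_graph(2, 50, p_in=0.2, p_out=0.01, seed=1)
      attrs = {n: rng.normal(loc=(0 if n < 50 else 3), scale=1.0, size=3) for n in G.nodes()}

      observed = edge_similarity(G, attrs)

      # Null distribution: shuffle the attribute vectors over the nodes.
      null = []
      values = list(attrs.values())
      for _ in range(500):
          rng.shuffle(values)
          null.append(edge_similarity(G, dict(zip(G.nodes(), values))))

      z = (observed - np.mean(null)) / np.std(null)
      p = np.mean(np.array(null) >= observed)
      print(f"observed = {observed:.3f}, z = {z:.1f}, empirical p = {p:.3f}")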

  18. A comparative analysis of alternative approaches for quantifying nonlinear dynamics in cardiovascular system.

    PubMed

    Chen, Yun; Yang, Hui

    2013-01-01

    Heart rate variability (HRV) analysis has emerged as an important research topic for evaluating autonomic cardiac function. However, traditional time- and frequency-domain analyses characterize and quantify only linear and stationary phenomena. In the present investigation, we made a comparative analysis of three alternative approaches (i.e., wavelet multifractal analysis, Lyapunov exponents and multiscale entropy analysis) for quantifying nonlinear dynamics in heart rate time series. Note that these extracted nonlinear features provide information about nonlinear scaling behaviors and the complexity of cardiac systems. To evaluate the performance, we used 24-hour HRV recordings from 54 healthy subjects and 29 heart failure patients, available in PhysioNet. The three nonlinear methods are evaluated not only individually but also in combination using three classification algorithms, i.e., linear discriminant analysis, quadratic discriminant analysis and k-nearest neighbors. Experimental results show that the three nonlinear methods capture nonlinear dynamics from different perspectives and that the combined feature set achieves the best performance, i.e., sensitivity 97.7% and specificity 91.5%. Collectively, nonlinear HRV features are shown to have promise for identifying disorders in autonomic cardiovascular function. PMID:24110259
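
    Multiscale entropy builds on sample entropy, which measures how often patterns of length m that match within a tolerance r continue to match at length m+1. A brute-force sample-entropy sketch, suitable only for short series, is shown below; the RR-interval series are synthetic.

      import numpy as np

      def sample_entropy(x, m=2, r_factor=0.2):
          """Sample entropy SampEn(m, r) of a 1-D series; r = r_factor * std(x)."""
          x = np.asarray(x, float)
          r = r_factor * x.std()

          def count_matches(length):
              templates = np.array([x[i:i + length] for i in range(len(x) - length)])
              count = 0
              for i in range(len(templates)):
                  dist = np.max(np.abs(templates - templates[i]), axis=1)  # Chebyshev distance
                  count += np.sum(dist <= r) - 1                           # exclude self-match
              return count

          b, a = count_matches(m), count_matches(m + 1)
          return -np.log(a / b)

      rng = np.random.default_rng(2)
      rr_regular = 0.8 + 0.01 * np.sin(np.arange(300) / 5) + rng.normal(0, 0.005, 300)
      rr_irregular = 0.8 + rng.normal(0, 0.05, 300)
      print(f"SampEn regular RR: {sample_entropy(rr_regular):.2f}, "
            f"irregular RR: {sample_entropy(rr_irregular):.2f}")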

  19. Characterizing uncertainties for quantifying bathymetry change between time-separated multibeam echo-sounder surveys

    NASA Astrophysics Data System (ADS)

    Schmitt, Thierry; Mitchell, Neil C.; Ramsay, A. Tony S.

    2008-05-01

    Changes of bathymetry derived from multibeam sonars are useful for quantifying the effects of many sedimentary, tectonic and volcanic processes, but depth changes also require an assessment of their uncertainty. Here, we outline and illustrate a simple technique that aims both to quantify uncertainties and to help reveal the spatial character of errors. An area of immobile seafloor is mapped in each survey, providing a common 'benchmark'. Each survey dataset over the benchmark is filtered with a simple moving-averaging window and depth differences between the two surveys are collated to derive a difference histogram. The procedure is repeated using different length-scales of filtering. By plotting the variability of the differences versus the length-scale of the filter, the different effects of spatially uncorrelated and correlated noise can be deduced. The former causes variability to decrease systematically as predicted by the Central Limit Theorem, whereas the remaining variability not predicted by the Central Limit Theorem then represents the effect of spatially correlated noise. Calculations made separately for different beams can reveal whether problems are due to heave, roll, etc., which affect inner and outer beams differently. We show how the results can be applied to create a map of uncertainties, which can be used to remove insignificant data from the bathymetric change map. We illustrate the technique by characterizing changes in nearshore bed morphology over one annual cycle using data from a subtidal bay, bedrock headland and a banner sand bank in the Bristol Channel UK.
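
    The diagnostic is how the spread of benchmark depth differences shrinks with the averaging window: spatially uncorrelated noise follows the Central Limit Theorem (standard deviation falling as 1/sqrt(N)), and the residual that does not shrink is attributed to correlated error. A minimal sketch on synthetic benchmark differences follows; the noise levels are assumptions.

      import numpy as np

      rng = np.random.default_rng(9)

      # Synthetic depth differences over an immobile benchmark (metres):
      # white noise plus a smooth, spatially correlated component (e.g. residual roll bias).
      n = 5000
      white = rng.normal(0, 0.10, n)
      correlated = 0.03 * np.sin(np.linspace(0, 20 * np.pi, n))
      diff = white + correlated

      for w in [1, 4, 16, 64, 256]:
          smoothed = np.convolve(diff, np.ones(w) / w, mode="valid")
          predicted = 0.10 / np.sqrt(w)          # CLT prediction for uncorrelated noise alone
          print(f"window {w:4d}: observed std = {smoothed.std():.4f} m, "
                f"CLT prediction = {predicted:.4f} m")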

  20. Quantifying the effect of environment stability on the transcription factor repertoire of marine microbes

    PubMed Central

    2011-01-01

    Background DNA-binding transcription factors (TFs) regulate cellular functions in prokaryotes, often in response to environmental stimuli. Thus, the environment exerts constant selective pressure on the TF gene content of microbial communities. Recently a study on marine Synechococcus strains detected differences in their genomic TF content related to environmental adaptation, but so far the effect of environmental parameters on the content of TFs in bacterial communities has not been systematically investigated. Results We quantified the effect of environment stability on the transcription factor repertoire of marine pelagic microbes from the Global Ocean Sampling (GOS) metagenome using interpolated physico-chemical parameters and multivariate statistics. Thirty-five percent of the difference in relative TF abundances between samples could be explained by environment stability. Six percent was attributable to spatial distance but none to a combination of both spatial distance and stability. Some individual TFs showed a stronger relationship to environment stability and space than the total TF pool. Conclusions Environmental stability appears to have a clearly detectable effect on TF gene content in bacterioplanktonic communities described by the GOS metagenome. Interpolated environmental parameters were shown to compare well to in situ measurements and were essential for quantifying the effect of the environment on the TF content. It is demonstrated that comprehensive and well-structured contextual data will strongly enhance our ability to interpret the functional potential of microbes from metagenomic data. PMID:22587903

  1. Quantifying uncertainties in travel time tomography using the null space shuttle

    NASA Astrophysics Data System (ADS)

    de Wit, R. W.; Trampert, J.; van der Hilst, R. D.

    2010-12-01

    Due to the underdetermined nature of large tomographic inverse problems, a sizable null space exists. It is therefore important to investigate the uncertainty of tomographic models produced by inverse problems with multiple solutions. The null space shuttle (Deal and Nolet, 1996) has been designed to exploit components of the null space, along with a priori information or a physical model, in order to improve or enhance the original minimum-norm solution. We generalize the null space shuttle technique to quantify uncertainties in a classical study of travel time tomography (Li et al., 2008) and examine a range of models that are consistent with the travel time data. The resulting family of tomographic models is used to quantify the uncertainties in the original tomographic image, a global P-wave speed perturbation mantle model. We further show what is and what is not resolved in the aforementioned tomographic image. With the null space shuttle we are able to alter or remove structures (e.g. slabs, plumes) in the original tomographic image. We suggest that this technique should be routinely applied before physical interpretations of tomographic images are made.
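
    A toy linear sketch of the null space idea (not the Deal and Nolet implementation): for an underdetermined problem G m = d, any perturbation projected onto the null space of G can be added to a solution without changing the data fit. The matrix sizes and the random test problem below are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_model = 30, 80                 # underdetermined: sizable null space
G = rng.standard_normal((n_data, n_model))
m_true = rng.standard_normal(n_model)
d = G @ m_true

# Minimum-norm solution and an orthonormal basis for the null space of G.
m_min = np.linalg.pinv(G) @ d
U, s, Vt = np.linalg.svd(G, full_matrices=True)
null_basis = Vt[n_data:]                 # rows spanning null(G), assuming full row rank

# "Shuttle" a preferred perturbation (e.g. from a physical prior) into the
# null space and add it: the data misfit is unchanged, but the model changes.
preferred = rng.standard_normal(n_model)
null_component = null_basis.T @ (null_basis @ preferred)
m_alt = m_min + null_component

print("misfit of m_min :", np.linalg.norm(G @ m_min - d))
print("misfit of m_alt :", np.linalg.norm(G @ m_alt - d))
print("model change    :", np.linalg.norm(m_alt - m_min))
```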

  2. Normalization of gas shows improves evaluation

    SciTech Connect

    Whittaker, A.; Sellens, G.

    1987-04-20

    A normalization scheme has been developed that allows mud-log gas curves to be correlated with each other and with other well logs. The normalized mud logs may also be used to enhance formation and geopressure evaluation. The method, which requires relatively simple calculations and uses data already available in the mud logging unit, overcomes a major weakness of traditional mud logging methods: too many factors, other than reservoir content, affected gas show magnitude. As a result, mud log gas analyses could not be numerically integrated with other well logs. Often, even mud logs from nearby wells might not be reliably correlated with each other.

  3. The Use of Micro-CT with Image Segmentation to Quantify Leakage in Dental Restorations

    PubMed Central

    Carrera, Carola A.; Lan, Caixia; Escobar-Sanabria, David; Li, Yuping; Rudney, Joel; Aparicio, Conrado; Fok, Alex

    2015-01-01

    Objective To develop a method for quantifying leakage in composite resin restorations after curing, using non-destructive X-ray micro-computed tomography (micro-CT) and image segmentation. Methods Class-I cavity preparations were made in 20 human third molars, which were divided into 2 groups. Group I was restored with Z100 and Group II with Filtek LS. Micro-CT scans were taken for both groups before and after they were submerged in silver nitrate solution (AgNO3 50%) to reveal any interfacial gap and leakage at the tooth restoration interface. Image segmentation was carried out by first performing image correlation to align the before- and after-treatment images and then by image subtraction to isolate the silver nitrate penetrant for precise volume calculation. Two-tailed Student’s t-test was used to analyze the results, with the level of significance set at p<0.05. Results All samples from Group I showed silver nitrate penetration with a mean volume of 1.3 ± 0.7 mm3. In Group II, only 2 out of the 10 restorations displayed infiltration along the interface, giving a mean volume of 0.3 ± 0.3 mm3. The difference between the two groups was statistically significant (p < 0.05). The infiltration showed non-uniform patterns within the interface. Significance We have developed a method to quantify the volume of leakage using non-destructive micro-CT, silver nitrate infiltration and image segmentation. Our results confirmed that substantial leakage could occur in composite restorations that have imperfections in the adhesive layer or interfacial debonding through polymerization shrinkage. For the restorative systems investigated in this study, this occurred mostly at the interface between the adhesive system and the tooth structure. PMID:25649496

  4. Quantifying the Magnitude of Anomalous Solar Absorption

    SciTech Connect

    Ackerman, Thomas P.; Flynn, Donna M.; Marchand, Roger T.

    2003-05-16

    The data set from ARESE II, sponsored by the Atmospheric Radiation Measurement Program, provides a unique opportunity to understand solar absorption in the atmosphere because of the combination of three sets of broadband solar radiometers mounted on the Twin Otter aircraft and the ground based instruments at the ARM Southern Great Plains facility. In this study, we analyze the measurements taken on two clear sky days and three cloudy days and model the solar radiative transfer in each case with two different models. On the two clear days, the calculated and measured column absorptions agree to better than 10 Wm-2, which is about 10% of the total column absorption. Because both the model fluxes and the individual radiometer measurements are accurate to no better than 10 Wm-2, we conclude that the models and measurements are essentially in agreement. For the three cloudy days, the model calculations agree very well with each other and on two of the three days agree with the measurements to 20 Wm-2 or less out of a total column absorption of more than 200 Wm-2, which is again agreement at better than 10%. On the third day, the model and measurements agree to either 8% or 14% depending on which value of surface albedo is used. Differences exceeding 10% represent a significant absorption difference between model and observations. In addition to the uncertainty in absorption due to surface albedo, we show that including aerosol with an optical depth similar to that found on clear days can reduce the difference between model and measurement by 5% or more. Thus, we conclude that the ARESE II results are incompatible with previous studies reporting extreme anomalous absorption and can be modeled with our current understanding of radiative transfer.

  5. Quantifying variability on thermal resistance of Listeria monocytogenes.

    PubMed

    Aryani, D C; den Besten, H M W; Hazeleger, W C; Zwietering, M H

    2015-01-16

    Knowledge of the impact of strain variability and growth history on thermal resistance is needed to provide a realistic prediction and an adequate design of thermal treatments. In the present study, in addition to quantifying strain variability in the thermal resistance of Listeria monocytogenes, biological variability and experimental variability were also determined to prioritize their importance. Experimental variability was defined as the repeatability of parallel experimental replicates and biological variability was defined as the reproducibility of biologically independent reproductions. Furthermore, the effect of growth history was quantified. The thermal inactivation curves of 20 L. monocytogenes strains were fitted using the modified Weibull model, resulting in a total of 360 D-value estimates. The D-value ranged from 9 to 30 min at 55 °C; from 0.6 to 4 min at 60 °C; and from 0.08 to 0.6 min at 65 °C. The estimated z-values of all strains ranged from 4.4 to 5.7 °C. The strain variability was ten times higher than the experimental variability and four times higher than the biological variability. Furthermore, the effect of growth history on thermal resistance variability was not significantly different from that of strain variability and was mainly determined by the growth phase. PMID:25462932
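
    For orientation, the sketch below shows the classical first-order relation between survivor curves, D-values and the z-value using made-up counts; the study itself fitted a modified Weibull model, so this is only the log-linear special case.

```python
import numpy as np

def d_value(times_min, log10_counts):
    """D-value (min): time for a 1-log10 reduction, from a log-linear fit."""
    slope, _ = np.polyfit(times_min, log10_counts, 1)
    return -1.0 / slope

# Hypothetical survivor curves (log10 CFU/ml) at three temperatures.
survival = {
    55: ([0, 10, 20, 30], [7.0, 6.5, 6.0, 5.5]),   # D ~ 20 min
    60: ([0, 1, 2, 3],    [7.0, 6.5, 6.0, 5.5]),   # D ~ 2 min
    65: ([0, 0.2, 0.4],   [7.0, 6.3, 5.6]),        # D ~ 0.3 min
}
temps = sorted(survival)
d_values = [d_value(*survival[t]) for t in temps]

# z-value (°C): temperature rise giving a tenfold drop in D,
# i.e. -1/slope of log10(D) versus temperature.
slope, _ = np.polyfit(temps, np.log10(d_values), 1)
z_value = -1.0 / slope
print("D-values (min):", [round(d, 2) for d in d_values], " z-value (°C):", round(z_value, 1))
```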

  6. Quantifying the auditory saltation illusion: An objective psychophysical methodology

    NASA Astrophysics Data System (ADS)

    Kidd, Joanna C.; Hogben, John H.

    2004-08-01

    Under conditions of rapid presentation, brief acoustic stimuli repeatedly delivered first at one location, then at another, are systematically mislocalized, with stimuli perceived as traveling smoothly between the two locations. This robust illusory motion percept is termed "auditory saltation." Currently, the characteristics and mechanisms of auditory saltation are not well understood. The lack of objective methods capable of quantifying the illusion on an individual basis seems a limiting factor for this area of research. In this study, we outline an objective psychophysical task that estimates the interstimulus interval (ISI) at which the saltation illusion is reliably distinguishable from simulated motion. Experiment 1 examined the psychophysical function relating task performance to ISI and addressed the suitability of the task for use with adaptive psychophysical procedures. Experiment 2 directly compared performance on the task with that of another quantification method. The results suggested that this objective approach to the study of auditory saltation overcomes difficulties associated with more subjective methods, and provides a reliable paradigm within which to quantify the temporal parameters of saltation on an individual basis.

  7. Quantifiable outcomes from corporate and higher education learning collaborations

    NASA Astrophysics Data System (ADS)

    Devine, Thomas G.

    The study investigated the existence of measurable learning outcomes that emerged out of the shared strengths of collaborating sponsors. The study identified quantifiable learning outcomes that confirm corporate, academic and learner participation in learning collaborations. Each of the three hypotheses and the synergy indicator quantitatively and qualitatively confirmed learning outcomes benefiting participants. The academic indicator quantitatively confirmed that learning outcomes attract learners to the institution. The corporate indicator confirmed that learning outcomes include knowledge exchange and enhanced workforce talents for careers in the energy-utility industry. The learner indicator confirmed that learning outcomes provide professional development opportunities for employment. The synergy indicator confirmed that best learning practices in learning collaborations emanate from the sponsors' shared strengths, and that partnerships can be elevated to strategic alliances, going beyond response to the desires of sponsors to create learner-centered cultures. The synergy indicator also confirmed the value of organizational processes that elevate sponsors' interactions to the sharing of strengths, creating a learner-centered culture. The study's series of qualitative questions confirmed prior success factors, while verifying the hypothesis results and providing insight not available from quantitative data. The direct beneficiaries of the study are the participants in the energy-utility learning collaboration: the corporation, the academic institutions, and the learners. The indirect beneficiaries are the stakeholders of future learning collaborations, through improved knowledge of the existence or absence of quantifiable learning outcomes.

  8. Methods for quantifying uncertainty in fast reactor analyses.

    SciTech Connect

    Fanning, T. H.; Fischer, P. F.

    2008-04-07

    Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of uncertainty quantification (UQ) investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.
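
    A generic Monte Carlo uncertainty-propagation sketch of the kind the third avenue refers to is shown below; the response function is only a stand-in for a reactor analysis code, and the input distributions are purely illustrative assumptions.

```python
import numpy as np

def response(inlet_temp_c, flow_kg_s, power_mw):
    """Stand-in for a reactor analysis code: peak coolant temperature (°C)."""
    cp = 1.27e-3  # MJ/(kg*K), roughly sodium's specific heat (assumed value)
    return inlet_temp_c + power_mw / (flow_kg_s * cp)

rng = np.random.default_rng(42)
n = 10_000
# Assumed input uncertainties (normal; means and spreads are illustrative).
inlet = rng.normal(395.0, 5.0, n)      # °C
flow = rng.normal(2200.0, 50.0, n)     # kg/s
power = rng.normal(900.0, 20.0, n)     # MW

samples = response(inlet, flow, power)
mean, std = samples.mean(), samples.std()
lo, hi = np.percentile(samples, [2.5, 97.5])
print(f"peak temperature: {mean:.1f} ± {std:.1f} °C, 95% interval [{lo:.1f}, {hi:.1f}] °C")
```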

  9. Quantifying Relative Diver Effects in Underwater Visual Censuses

    PubMed Central

    Dickens, Luke C.; Goatley, Christopher H. R.; Tanner, Jennifer K.; Bellwood, David R.

    2011-01-01

    Diver-based Underwater Visual Censuses (UVCs), particularly transect-based surveys, are key tools in the study of coral reef fish ecology. These techniques, however, have inherent problems that make it difficult to collect accurate numerical data. One of these problems is the diver effect (defined as the reaction of fish to a diver). Although widely recognised, its effects have yet to be quantified and the extent of taxonomic variation remains to be determined. We therefore examined relative diver effects on a reef fish assemblage on the Great Barrier Reef. When common UVC methods were used, the recorded abundances of seven reef fish groups were significantly affected by the ongoing presence of SCUBA divers. Overall, the diver effect resulted in a 52% decrease in the mean number of individuals recorded, with declines of up to 70% in individual families. Although the diver effect appears to be a significant problem, UVCs remain a useful approach for quantifying spatial and temporal variation in relative fish abundances, especially if using methods that minimise the exposure of fishes to divers. Fixed distance transects using tapes or lines deployed by a second diver (or GPS-calibrated timed swims) would appear to maximise fish counts and minimise diver effects. PMID:21533039

  10. Quantifying radionuclide signatures from a γ-γ coincidence system.

    PubMed

    Britton, Richard; Jackson, Mark J; Davies, Ashley V

    2015-11-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures, a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method (it relies only on evaluated nuclear data and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. PMID:26254208

  11. Quantifying complexity in translational research: an integrated approach

    PubMed Central

    Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.

    2014-01-01

    Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
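
    The consistency ratio mentioned in the limitations is a standard AHP quantity; the sketch below computes it for a hypothetical pairwise-comparison matrix. Saaty's random-index values are quoted approximately from the standard table, and the example matrix is an assumption for illustration.

```python
import numpy as np

def consistency_ratio(pairwise):
    """Saaty consistency ratio CR = CI / RI for an AHP pairwise-comparison matrix."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    lambda_max = np.linalg.eigvals(a).real.max()   # principal eigenvalue
    ci = (lambda_max - n) / (n - 1)                # consistency index
    # Saaty's random-index values for n = 1..9 (approximate).
    ri = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}[n]
    return ci / ri if ri > 0 else 0.0

# Example: three criteria compared pairwise on Saaty's 1-9 scale.
matrix = [[1,   3,   5],
          [1/3, 1,   2],
          [1/5, 1/2, 1]]
cr = consistency_ratio(matrix)
print(f"CR = {cr:.3f}  ({'acceptable' if cr < 0.1 else 'revisit judgements'})")
```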

  12. Gradient approach to quantify the gradation smoothness for output media

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method computed as a function of the 95th percentile of the second derivative for the tone-jump estimator and the 5th percentile of the first derivative for the tone-clipping estimator. Performance of the model and a previously suggested method were assessed psychophysically, and their prediction accuracies were compared to each other. Our model showed a stronger Pearson correlation to the corresponding visual data, and the magnitude of the Pearson correlation reached up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass, and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
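
    The two estimators named above can be sketched directly: the snippet below applies the 95th percentile of the second derivative (tone jumps) and the 5th percentile of the first derivative (tone clipping) to a 1-D lightness ramp. The ramps are synthetic, and how the two estimators are combined into the final smoothness score is not reproduced here.

```python
import numpy as np

def gradation_estimators(lightness):
    """Tone-jump and tone-clipping estimators for a sampled gradation ramp."""
    l = np.asarray(lightness, dtype=float)
    d1 = np.abs(np.gradient(l))                 # first-derivative magnitude
    d2 = np.abs(np.gradient(np.gradient(l)))    # second-derivative magnitude
    tone_jump = np.percentile(d2, 95)           # large -> visible banding/jumps
    tone_clip = np.percentile(d1, 5)            # near zero -> flat, clipped regions
    return tone_jump, tone_clip

# Smooth ramp versus a ramp with a quantization-like jump and a clipped tail.
x = np.linspace(0, 100, 256)
smooth = x.copy()
jumpy = np.where(x < 50, x, x + 4.0)            # tone jump at mid-scale
jumpy[200:] = jumpy[200]                        # clipped (flat) highlight region
print("smooth ramp :", gradation_estimators(smooth))
print("jumpy ramp  :", gradation_estimators(jumpy))
```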

  13. Quantifying Genome Editing Outcomes at Endogenous Loci using SMRT Sequencing

    PubMed Central

    Clark, Joseph; Punjya, Niraj; Sebastiano, Vittorio; Bao, Gang; Porteus, Matthew H

    2014-01-01

    SUMMARY Targeted genome editing with engineered nucleases has transformed the ability to introduce precise sequence modifications at almost any site within the genome. A major obstacle to probing the efficiency and consequences of genome editing is that no existing method enables the frequency of different editing events to be simultaneously measured across a cell population at any endogenous genomic locus. We have developed a novel method for quantifying individual genome editing outcomes at any site of interest using single molecule real time (SMRT) DNA sequencing. We show that this approach can be applied at various loci, using multiple engineered nuclease platforms including TALENs, RNA guided endonucleases (CRISPR/Cas9), and ZFNs, and in different cell lines to identify conditions and strategies in which the desired engineering outcome has occurred. This approach facilitates the evaluation of new gene editing technologies and permits sensitive quantification of editing outcomes in almost every experimental system used. PMID:24685129

  14. Quantifying chaotic dynamics from integrate-and-fire processes

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-01

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  15. Quantifying chaotic dynamics from integrate-and-fire processes

    SciTech Connect

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  16. Quantifying Age-dependent Extinction from Species Phylogenies

    PubMed Central

    Alexander, Helen K.; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218
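
    The full method is a phylogenetic maximum-likelihood framework (implemented in the R package TreePar); the Python sketch below only illustrates the underlying assumption of gamma-distributed species lifetimes by fitting a gamma distribution by maximum likelihood to a set of hypothetical lifetimes.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical species lifetimes (Myr); in the real method these are latent
# quantities handled inside the phylogenetic likelihood, not observed directly.
lifetimes = rng.gamma(shape=2.0, scale=3.0, size=500)

# Maximum-likelihood gamma fit (location fixed at zero).
shape_hat, loc_hat, scale_hat = stats.gamma.fit(lifetimes, floc=0)
print(f"fitted shape {shape_hat:.2f}, scale {scale_hat:.2f}")

# Shape > 1 implies extinction risk that increases with species age;
# shape = 1 recovers the constant-rate (exponential, age-independent) case.
```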

  17. Quantifying Age-dependent Extinction from Species Phylogenies.

    PubMed

    Alexander, Helen K; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218

  18. Quantifying Spin Hall Angles from Spin Pumping: Experiments and Theory

    NASA Astrophysics Data System (ADS)

    Mosendz, O.; Pearson, J. E.; Fradin, F. Y.; Bauer, G. E. W.; Bader, S. D.; Hoffmann, A.

    2010-01-01

    Spin Hall effects intermix spin and charge currents even in nonmagnetic materials and, therefore, ultimately may allow the use of spin transport without the need for ferromagnets. We show how spin Hall effects can be quantified by integrating Ni80Fe20|normal metal (N) bilayers into a coplanar waveguide. A dc spin current in N can be generated by spin pumping in a controllable way by ferromagnetic resonance. The transverse dc voltage detected along the Ni80Fe20|N bilayer has contributions from both the anisotropic magnetoresistance and the spin Hall effect, which can be distinguished by their symmetries. We developed a theory that accounts for both. In this way, we determine the spin Hall angle quantitatively for Pt, Au, and Mo. This approach can readily be adapted to any conducting material with even very small spin Hall angles.

  19. Quantifying the exploratory behaviour of Amphibalanus amphitrite cyprids.

    PubMed

    Chaw, Kuan Chun; Birch, William R

    2009-10-01

    The behavioural response of cypris larvae from A. amphitrite (=Balanus amphitrite) exploring three model glass surfaces is quantified by close-range microscopy. Step length and step duration measurements reveal a response to both surface properties and flow. Without flow, 2-day-old cyprids took larger steps with shorter step duration on hydrophilic glass surfaces (bare and NH2-treated) vs hydrophobic glass (CH3-treated). These parameters suggest a more detailed, local inspection of hydrophobic surfaces and a more extensive exploration for hydrophilic surfaces. Cyprids under flow took longer steps and exhibited shorter probing times on hydrophobic glass. On hydrophilic glass, cyprids increased their step duration under flow. This active response is attributed to drag and lift forces challenging the cyprids' temporary anchoring to the substratum. Seven-day-old cyprids showed almost no discrimination between the model surfaces. Microscopic-scale observation of cyprid exploration is expected to provide new insights into interactions between cyprids and surfaces. PMID:20183120

  20. Quantifying uncertainty in discharge measurements: A new approach

    USGS Publications Warehouse

    Kiang, J.E.; Cohn, T.A.; Mason, R.R.

    2009-01-01

    The accuracy of discharge measurements using velocity meters and the velocity-area method is typically assessed based on empirical studies that may not correspond to conditions encountered in practice. In this paper, a statistical approach for assessing uncertainty based on interpolated variance estimation (IVE) is introduced. The IVE method quantifies all sources of random uncertainty in the measured data. This paper presents results employing data from sites where substantial over-sampling allowed for the comparison of IVE-estimated uncertainty and observed variability among repeated measurements. These results suggest that the IVE approach can provide approximate estimates of measurement uncertainty. The use of IVE to estimate the uncertainty of a discharge measurement would provide the hydrographer an immediate determination of uncertainty and help determine whether there is a need for additional sampling in problematic river cross sections. © 2009 ASCE.

  1. Quantifying Carbon Bioavailability in Northeast Siberian Soils

    NASA Astrophysics Data System (ADS)

    Heslop, J.; Chandra, S.; Sobczak, W. V.; Spektor, V.; Davydova, A.; Holmes, R. M.; Bulygina, E. B.; Schade, J. D.; Frey, K. E.; Bunn, A. G.; Walter Anthony, K.; Zimov, S. A.; Zimov, N.

    2010-12-01

    Soils in Northeast Siberia, particularly carbon-rich yedoma (Pleistocene permafrost) soils, have the potential to release large amounts of carbon dioxide and methane due to permafrost thaw and thermokarst activity. In order to quantify the carbon release potential of these soils, it is important to understand carbon bioavailability for microbial consumption in the permafrost. In this study we measured amounts of bioavailable soil carbon across five locations in the Kolyma River Basin, NE Siberia. At each location, we sampled four horizons (top active layer, bottom active layer, Holocene optimum permafrost, and Pleistocene permafrost) and conducted soil extracts for each sample. Filtered and unfiltered extracts were used in biological oxygen demand experiments to determine the dissolved and particulate bioavailable carbon potential for consumption in the soil. Concentrations of bioavailable carbon were 102-608 mg C/kg dry soil for filtered extracts and 115-703 mg C/kg dry soil for unfiltered extracts. Concentrations of carbon respired per gram of dry soil were roughly equal for both the DOC and POC extracts (P<0.001), suggesting either that bioavailable soil carbon is predominantly in the dissolved form or that an additional, unknown limitation prevents organisms from utilizing carbon in the particulate form. Concentrations of bioavailable carbon were similar across the different sampling locations but differed among horizons. The top active layer (102-703 mg C/kg dry soil), Holocene optimum permafrost (193-481 mg C/kg dry soil), and Pleistocene permafrost (151-589 mg C/kg dry soil) horizons had the highest amounts of bioavailable carbon, and the bottom active layer (115-179 mg C/kg dry soil) horizon had the lowest amounts. For comparison, ice wedges had bioavailable carbon concentrations of 23.0 mg C/L and yedoma runoff from Duvyanni Yar had concentrations of 306 mg C/L. Pleistocene permafrost soils had similar concentrations of bioavailable carbon to the modern top active layer soils and the Holocene optimum permafrost soils, suggesting that yedoma does not have the large carbon release potential previously thought.

  2. Quantifying the impacts of global disasters

    NASA Astrophysics Data System (ADS)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, large but not the maximum in Hawaii, and the largest plausible tsunami in southern California. To support the analysis of global impacts, we begin with the Ports of Los Angeles and Long Beach which account for >40% of the imports to the United States. We expand from there throughout California for the first level economic analysis. We are looking to work with Alaska and Hawaii, especially on similar economic issues in ports, over the next year and to expand the analysis to consideration of economic interactions between the regions.

  3. Quantifying Permafrost Characteristics with DCR-ERT

    NASA Astrophysics Data System (ADS)

    Schnabel, W.; Trochim, E.; Munk, J.; Kanevskiy, M. Z.; Shur, Y.; Fortier, R.

    2012-12-01

    Geophysical methods are an efficient means of quantifying permafrost characteristics for Arctic road design and engineering. In the Alaskan Arctic, construction and maintenance of roads requires accounting for permafrost: ground that remains below 0 degrees C for two or more years. Features such as ice content and temperature are critical for understanding current and future ground conditions for planning, design and evaluation of engineering applications. This study focused on the proposed Foothills West Transportation Access project corridor, where the purpose is to construct a new all-season road connecting the Dalton Highway to Umiat. Four major areas were chosen that represented a range of conditions including gravel bars, alluvial plains, tussock tundra (both unburned and burned conditions), high- and low-centered ice-wedge polygons and an active thermokarst feature. Direct-current resistivity using galvanic contact (DCR-ERT) was applied over transects. In conjunction, complementary site data, including boreholes, active-layer depths, vegetation descriptions and site photographs, were obtained. The boreholes provided information on soil morphology, ice texture and gravimetric moisture content. Horizontal and vertical resolutions in the DCR-ERT were varied to determine the presence or absence of ground ice; subsurface heterogeneity; and the depth to groundwater (if present). The four main DCR-ERT configurations used were: 84 electrodes with 2 m spacing; 42 electrodes with 0.5 m spacing; 42 electrodes with 2 m spacing; and 84 electrodes with 1 m spacing. In terms of identifying the ground ice characteristics, the higher horizontal resolution DCR-ERT transects with either 42 or 84 electrodes and 0.5 or 1 m spacing were best able to differentiate wedge ice. This evaluation is based on a combination of both borehole stratigraphy and surface characteristics. Simulated apparent resistivity values for permafrost areas varied from a low of 4582 Ω m to a high of 10034 Ω m. Previous studies found permafrost conditions with corresponding resistivity values as low as 5000 Ω m. This work emphasizes the necessity of tailoring the DCR-ERT survey to verified ground ice characteristics.

  4. Quantifying lithological variability in the mantle

    NASA Astrophysics Data System (ADS)

    Shorttle, Oliver; Maclennan, John; Lambart, Sarah

    2014-06-01

    We present a method that can be used to estimate the amount of recycled material present in the source region of mid-ocean ridge basalts by combining three key constraints: (1) the melting behaviour of the lithologies identified to be present in a mantle source, (2) the overall volume of melt production, and (3) the proportion of melt production attributable to melting of each lithology. These constraints are unified in a three-lithology melting model containing lherzolite, pyroxenite and harzburgite, representative products of mantle differentiation, to quantify their abundance in igneous source regions. As a case study we apply this method to Iceland, a location with sufficient geochemical and geophysical data to meet the required observational constraints. We find that to generate the 20 km of igneous crustal thickness at Iceland's coasts, with 30±10% of the crust produced from melting a pyroxenitic lithology, requires an excess mantle potential temperature (ΔTp) of ⩾130 °C (Tp⩾1460 °C) and a source consisting of at least 5% recycled basalt. Therefore, the mantle beneath Iceland requires a significant excess temperature to match geophysical and geochemical observations: lithological variation alone cannot account for the high crustal thickness. Determining a unique source solution is only possible if mantle potential temperature is known precisely and independently, otherwise a family of possible lithology mixtures is obtained across the range of viable ΔTp. For Iceland this uncertainty in ΔTp means that the mantle could be >20% harzburgitic if ΔTp>150 °C (Tp>1480 °C). The consequences of lithological heterogeneity for plume dynamics in various geological contexts are also explored through thermodynamic modelling of the densities of lherzolite, basalt, and harzburgite mixtures in the mantle. All lithology solutions for Iceland are buoyant in the shallow mantle at the ΔTp for which they are valid, however only lithology mixtures incorporating a significant harzburgite component are able to reproduce recent estimates of the Iceland plume's volume flux. Using the literature estimates of the amount of recycled basalt in the sources of Hawaiian and Siberian volcanism, we found that they are negatively buoyant in the upper mantle, even at the extremes of their expected ΔTp. One solution to this problem is that low density refractory harzburgite is a more ubiquitous component in mantle plumes than previously acknowledged.

  5. Quantifying distributed damage in composites via the thermoelastic effect

    SciTech Connect

    Mahoney, B.J.

    1992-01-01

    A new approach toward quantifying transverse matrix cracking in composite laminates using the thermoelastic effect is developed. The thermoelastic effect refers to the small temperature changes that are generated in components under dynamic loading. Two models are derived, and the theoretical predictions are experimentally verified for three types of laminates. Both models include damage-induced changes in the lamina stress state and lamina coefficients of thermal expansion, conduction effects, and epoxy thickness. The first model relates changes in the laminate TSA (thermoelastic stress analysis) signal to changes in longitudinal laminate stiffness and Poisson's ratio. This model is based on gross simplifying assumptions and can be used on any composite laminate layup undergoing transverse matrix cracking. The second model relates TSA signal changes to longitudinal laminate stiffness, Poisson's ratio, and microcrack density for (0_p/90_q)_s and (90_q/0_p)_s cross-ply laminates. Both models yield virtually identical results for the cross-ply laminates considered. A sensitivity analysis is performed on both models to quantify the effects of reasonable property variations on the normalized stiffness vs. normalized TSA signal results for the three laminates under consideration. The results for the cross-ply laminates are very insensitive, while the (±45)_5s results are particularly sensitive to epoxy thickness and longitudinal lamina coefficient of thermal expansion. Experiments are conducted on (0_3/90_3)_s and (90_3/0_3)_s Gl/Ep laminates and (±45)_5s Gr/Ep laminates to confirm the theoretical developments of the thesis. There is a very good correlation between the theoretical predictions and experimental results for the Gl/Ep laminates.

  6. Life cycle assessment of urban wastewater systems: Quantifying the relative contribution of sewer systems.

    PubMed

    Risch, Eva; Gutierrez, Oriol; Roux, Philippe; Boutin, Catherine; Corominas, Lluís

    2015-06-15

    This study aims to propose a holistic, life cycle assessment (LCA) of urban wastewater systems (UWS) based on a comprehensive inventory including detailed construction and operation of sewer systems and wastewater treatment plants (WWTPs). For the first time, the inventory of sewers infrastructure construction includes piping materials and aggregates, manholes, connections, civil works and road rehabilitation. The operation stage comprises energy consumption in pumping stations together with air emissions of methane and hydrogen sulphide, and water emissions from sewer leaks. Using a real case study, this LCA aims to quantify the contributions of sewer systems to the total environmental impacts of the UWS. The results show that the construction of sewer infrastructures has an environmental impact (on half of the 18 studied impact categories) larger than both the construction and operation of the WWTP. This study highlights the importance of including the construction and operation of sewer systems in the environmental assessment of centralised versus decentralised options for UWS. PMID:25839834

  7. Quantifying the complexity of human colonic pressure signals using an entropy measure.

    PubMed

    Xu, Fei; Yan, Guozheng; Zhao, Kai; Lu, Li; Wang, Zhiwu; Gao, Jinyang

    2016-02-01

    Studying the complexity of human colonic pressure signals is important in understanding this intricate, evolved, dynamic system. This article presents a method for quantifying the complexity of colonic pressure signals using an entropy measure. As a self-adaptive non-stationary signal analysis algorithm, empirical mode decomposition can decompose a complex pressure signal into a set of intrinsic mode functions (IMFs). Considering that IMF2, IMF3, and IMF4 represent crucial characteristics of colonic motility, a new signal was reconstructed from these three components. Then, the time entropy (TE), power spectral entropy (PSE), and approximate entropy (AE) of the reconstructed signal were calculated. For subjects with constipation and healthy individuals, experimental results showed that the entropies of the reconstructed signals were distinguishable between the two classes. Moreover, the TE, PSE, and AE can be extracted as features for further subject classification. PMID:26043437
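
    Of the three descriptors, power spectral entropy has a particularly compact common definition, the Shannon entropy of the normalized power spectrum. The sketch below uses that definition on synthetic signals; it is not the article's exact implementation, and the EMD and reconstruction steps are omitted.

```python
import numpy as np

def power_spectral_entropy(signal):
    """Shannon entropy (bits) of the normalized power spectrum of a 1-D signal."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Illustrative comparison: a quasi-periodic pressure-like wave versus broadband noise.
fs = 2.0                                   # Hz, an assumed sampling rate
t = np.arange(0, 600, 1 / fs)
rng = np.random.default_rng(3)
quasi_periodic = np.sin(2 * np.pi * 0.05 * t) + 0.1 * rng.standard_normal(t.size)
broadband = rng.standard_normal(t.size)
print("quasi-periodic:", round(power_spectral_entropy(quasi_periodic), 2), "bits")
print("broadband     :", round(power_spectral_entropy(broadband), 2), "bits")
```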

  8. A methodology to quantify the differences between alternative methods of heart rate variability measurement.

    PubMed

    García-González, M A; Fernández-Chimeno, M; Guede-Fernández, F; Ferrer-Mileo, V; Argelagós-Palau, A; Álvarez-Gómez, L; Parrado, E; Moreno, J; Capdevila, L; Ramos-Castro, J

    2016-01-01

    This work proposes a systematic procedure for reporting the differences between heart rate variability time series obtained from alternative measurements: it reports the spread and mean of the differences as well as the agreement between measuring procedures, and quantifies how stationary, random and normal the differences between alternative measurements are. A description of the complete automatic procedure to obtain a differences time series (DTS) from two alternative methods, a proposal of a battery of statistical tests, and a set of statistical indicators to better describe the differences in RR interval estimation are also provided. Results show that the spread and agreement depend on the choice of alternative measurements and that the DTS cannot, in general, be considered a white or a normally distributed process. Nevertheless, in controlled measurements the DTS can be considered a stationary process. PMID:26657196
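
    A minimal sketch of the descriptive part of such a comparison is given below: the mean, spread and Bland-Altman-style limits of agreement of a differences time series between two RR series, plus one example normality test. The full battery of stationarity, randomness and normality tests is not reproduced, and the synthetic RR series are assumptions.

```python
import numpy as np
from scipy import stats

def dts_summary(rr_a_ms, rr_b_ms):
    """Summarize the differences time series (DTS) between two RR series (ms)."""
    dts = np.asarray(rr_a_ms, float) - np.asarray(rr_b_ms, float)
    mean, sd = dts.mean(), dts.std(ddof=1)
    loa = (mean - 1.96 * sd, mean + 1.96 * sd)   # Bland-Altman limits of agreement
    # One example test from a larger battery: normality of the differences.
    _, p_normal = stats.shapiro(dts[:5000])
    return {"mean_ms": mean, "sd_ms": sd, "loa_ms": loa, "p_normality": p_normal}

# Illustrative use: a reference RR series and a second device with bias and jitter.
rng = np.random.default_rng(0)
rr_ref = rng.normal(800, 50, 1000)
rr_alt = rr_ref + rng.normal(2, 5, 1000)         # 2 ms bias, 5 ms jitter
print(dts_summary(rr_ref, rr_alt))
```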

  9. Quantifying moisture transport in cementitious materials using neutron radiography

    NASA Astrophysics Data System (ADS)

    Lucero, Catherine L.

    A portion of the concrete pavements in the US has recently been observed to have premature joint deterioration. This damage is caused in part by the ingress of fluids, like water, salt water, or deicing salts. These fluids can damage concrete when they freeze and expand, or when they react with the cementitious matrix. To determine the quality of concrete for assessing potential service life, it is often necessary to measure the rate of fluid ingress, or sorptivity. Neutron imaging is a powerful method for quantifying fluid penetration since it can describe where water has penetrated, how quickly it has penetrated, and the volume of water in the concrete or mortar. Neutrons are sensitive to light atoms such as hydrogen and thus clearly detect water at high spatial and temporal resolution. It can be used to detect small changes in moisture content and is ideal for monitoring wetting and drying in mortar exposed to various fluids. This study aimed at developing a method to accurately estimate moisture content in mortar. The common practice is to image the material dry as a reference before exposing it to fluid, and to normalize subsequent images to the reference. The volume of water can then be computed using the Beer-Lambert law. This method can be limiting because it requires exact image alignment between the reference image and all subsequent images. A model of neutron attenuation in a multi-phase cementitious composite was developed to be used in cases where a reference image is not available. The attenuation coefficients for water, un-hydrated cement, and sand were directly calculated from the neutron images. The attenuation coefficient for the hydration products was then back-calculated. The model can estimate the degree of saturation in a mortar with known mixture proportions without using a reference image for calculation. Absorption in mortars exposed to various fluids (i.e., deionized water and calcium chloride solutions) was investigated. It has been found through this study that small pores, namely voids created by chemical shrinkage, gel pores, and capillary pores, ranging from 0.5 nm to 50 μm, fill quickly through capillary action. However, large entrapped and entrained air voids ranging from 0.05 to 1.25 mm remain empty during the initial filling process. In mortar exposed to calcium chloride solution, a decrease in sorptivity was observed due to an increase in viscosity and surface tension of the solution, as proposed by Spragg et al. (2011). This work, however, also noted a decrease in the rate of absorption due to a reaction between the salt and the matrix, which results in the filling of the pores in the concrete. The results from neutron imaging can help in the interpretation of standard absorption tests. ASTM C1585 test results can be further analyzed in several ways that could give an accurate indication of the durability of the concrete. Results can be reported in depth of penetration versus the square root of time rather than mm3 of fluid per mm2 of exposed surface area. Since a known fraction of pores initially fills before the fluid reaches the edge of the sample, the actual depth of penetration can be calculated. This work is compared with an 'intrinsic sorptivity' that can be used to interpret mass measurements. Furthermore, the influence of shrinkage reducing admixtures (SRAs) on drying was studied. Neutron radiographs showed that systems saturated in water remain "wetter" than systems saturated in 5% SRA solution.
The SRA in the system reduces the moisture diffusion coefficient due to an increase in viscosity and a decrease in surface tension. Neutron radiography provided spatial information on the drying front that cannot be achieved using other methods.
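
    The Beer-Lambert step described above can be sketched as follows, using the reference-image approach: the water thickness per pixel follows from the ratio of wet to dry transmitted intensities. The attenuation coefficient and intensities below are assumed, order-of-magnitude values for illustration only.

```python
import numpy as np

def water_thickness_cm(i_wet, i_dry, mu_water_cm=3.5):
    """Per-pixel water thickness from neutron transmission (Beer-Lambert).

    i_wet, i_dry : transmitted-intensity images of the same specimen,
                   after and before water ingress (reference-image method).
    mu_water_cm  : assumed effective attenuation coefficient of water (1/cm).
    """
    i_wet = np.asarray(i_wet, dtype=float)
    i_dry = np.asarray(i_dry, dtype=float)
    # Only the water differs between the two images, so the ratio isolates it.
    return -np.log(i_wet / i_dry) / mu_water_cm

# Illustrative 2-pixel example: 30% and 60% intensity loss relative to dry.
i_dry = np.array([1000.0, 1000.0])
i_wet = np.array([700.0, 400.0])
print(water_thickness_cm(i_wet, i_dry))   # ~0.10 cm and ~0.26 cm of water
```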

  10. Quantifying the effect size of changing environmental controls on carbon release from permafrost-affected soils

    NASA Astrophysics Data System (ADS)

    Schaedel, C.; Bader, M. K. F.; Schuur, E. A. G.; Bracho, R. G.; Capek, P.; De Baets, S. L.; Diakova, K.; Ernakovich, J. G.; Hartley, I. P.; Iversen, C. M.; Kane, E. S.; Knoblauch, C.; Lupascu, M.; Natali, S.; Norby, R. J.; O'Donnell, J. A.; Roy Chowdhury, T.; Santruckova, H.; Shaver, G. R.; Sloan, V. L.; Treat, C. C.; Waldrop, M. P.

    2014-12-01

    High-latitude surface air temperatures are rising twice as fast as the global mean, causing permafrost to thaw and thereby exposing large quantities of previously frozen organic carbon (C) to microbial decomposition. Increasing temperatures in high latitude ecosystems not only increase C emissions from previously frozen C in permafrost but also indirectly affect the C cycle through changes in regional and local hydrology. Warmer temperatures increase thawing of ice-rich permafrost, causing land surface subsidence where soils become waterlogged, anoxic conditions prevail and C is released in the form of anaerobic CO2 and CH4. Although substrate quality, physical protection, and nutrient availability affect C decomposition, increasing temperatures and changes in surface and sub-surface hydrology are likely the dominant factors affecting the rate and form of C release from permafrost; however, their effect size on C release is poorly quantified. We have compiled a database of 24 incubation studies with soils from active layer and permafrost from across the entire permafrost zone to quantify (a) the effect size of increasing temperatures and (b) the effect of changes from aerobic to anaerobic soil conditions on C release. Results from two different meta-analyses show that a 10°C increase in temperature increased C release by a factor of two in boreal forest, peatland and tundra ecosystems. Under aerobic incubation conditions, soils released on average three times more C than under anaerobic conditions, with large variation among the different ecosystems. While peatlands showed similar amounts of C release under aerobic and anaerobic soil conditions, tundra and boreal forest ecosystems released up to 8 times more C under oxic conditions. This pan-arctic synthesis shows that boreal forest and tundra soils will have a larger impact on climate change when newly thawed permafrost C decomposes in an aerobic environment compared to an anaerobic environment, even when accounting for the higher heat trapping capacity of CH4 over a 100-year timescale.

  11. Quantifying Surface Fluctuations Using Optical Flow Techniques and Multi-Temporal LiDAR

    NASA Astrophysics Data System (ADS)

    Finnegan, D. C.; Farid, H.; Lawson, D. E.; Krabill, W. B.

    2006-12-01

    In recent decades scientific communities have seen a significant increase in technological innovations and applications using airborne and spaceborne remote sensing. In particular, airborne laser altimetry has provided the opportunity to characterize large-scale terrain and geologic processes such as glaciers and ice sheets at fine-scale resolutions. However, processing and deriving information from these data can still pose significant challenges. To this end, we describe a novel approach that combines the use of a multi-temporal LiDAR (Light Detection and Ranging) topographic dataset and optical flow techniques, adapted from the computer vision community, to quantify ice flow dynamics of the Hubbard glacier. Using NASA's Airborne Topographic Mapper (ATM-IV) LiDAR as a source of high-resolution (~5cm) topographic data, repeat airborne surveys of the Hubbard Glacier terminus were acquired on August 22nd and 26th, 2005. From the resulting Digital Elevation Model (DEM) we seek to measure a dense motion field that describes both the shift and the change in elevation of the glacier. The change in the DEM is modeled spatially as locally affine but globally smooth. The model also explicitly accounts for changes in elevation, and for missing data. This approach is built upon a differential multi-scale framework, allowing for the measurement of both large- and small-scale motions. The resulting measurement yields a dense 2-D motion vector field for each point in the DEM. On the Hubbard Glacier, we achieve an average accuracy within 8% as compared with manual measurements. These results are encouraging and show that the repeat high-resolution elevation data that LiDAR provides allow us to quantify surface processes in a precise yet timely manner. These results may then be incorporated as essential boundary conditions into models that seek to predict geologic behavior such as glacier and ice sheet flow.

  12. Design and Analysis of a Micromechanical Three-Component Force Sensor for Characterizing and Quantifying Surface Roughness

    NASA Astrophysics Data System (ADS)

    Liang, Q.; Wu, W.; Zhang, D.; Wei, B.; Sun, W.; Wang, Y.; Ge, Y.

    2015-10-01

    Roughness, which can represent the trade-off between manufacturing cost and performance of mechanical components, is a critical predictor of cracks, corrosion and fatigue damage. In order to measure polished or super-finished surfaces, a novel touch probe based on a three-component force sensor, fabricated using silicon micromachining technology, is proposed for characterizing and quantifying surface roughness. The sensor design is based on a cross-beam structure, which ensures that the system possesses high sensitivity and low coupling. The results show that the proposed sensor possesses high sensitivity, low coupling error, and a temperature compensation function. The proposed system can be used to investigate micromechanical structures with nanometer accuracy.

  13. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways.

    PubMed

    Seyler, Sean L; Kumar, Avishek; Thorpe, M F; Beckstein, Oliver

    2015-10-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417
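
    The Hausdorff part of the metric can be sketched compactly: the snippet below treats two discretized paths as point sets and uses scipy's directed_hausdorff, ignoring PSA's additional machinery (the Fréchet option, clustering, and Hausdorff-pair maps). The toy 2-D paths are illustrative assumptions; real inputs would be (n_frames, 3N) configuration-space trajectories.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff(path_a, path_b):
    """Symmetric Hausdorff distance between two discretized paths.

    Each path is an (n_points, n_dim) array, e.g. flattened Cartesian
    coordinates of the same atom selection along a trajectory.
    """
    d_ab = directed_hausdorff(path_a, path_b)[0]
    d_ba = directed_hausdorff(path_b, path_a)[0]
    return max(d_ab, d_ba)

# Toy example: two discretized transition paths in a 2-D configuration space.
t = np.linspace(0, 1, 50)
path_1 = np.column_stack([t, t ** 2])   # curved route
path_2 = np.column_stack([t, t])        # straight route
print(hausdorff(path_1, path_2))        # ~0.18 at the point of largest separation
```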

  14. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that the geometry-based FRODA occasionally sampled the pathway space of force field-based DIMS MD. For the AdK transition, the new concept of a Hausdorff-pair map enabled us to extract the molecular structural determinants responsible for differences in pathways, namely a set of conserved salt bridges whose charge-charge interactions are fully modelled in DIMS MD but not in FRODA. PSA has the potential to enhance our understanding of transition path sampling methods, validate them, and to provide a new approach to analyzing conformational transitions. PMID:26488417

  15. Lemurs and macaques show similar numerical sensitivity

    PubMed Central

    Jones, Sarah M.; Pearson, John; DeWind, Nicholas K.; Paulsen, David; Tenekedjieva, Ana-Maria; Brannon, Elizabeth M.

    2013-01-01

    We investigated the precision of the approximate number system (ANS) in three lemur species (Lemur catta, Eulemur mongoz, and Eulemur macaco flavifrons), one Old World monkey species (Macaca mulatta) and humans (Homo sapiens). In Experiment 1, four individuals of each nonhuman primate species were trained to select the numerically larger of two visual arrays on a touchscreen. We estimated numerical acuity by modeling Weber fractions (w) and found quantitatively equivalent performance among all four nonhuman primate species. In Experiment 2, we tested adult humans in a similar procedure, and they outperformed the four nonhuman species but showed qualitatively similar performance. These results indicate that the ANS is conserved over the primate order. PMID:24068469
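
    The abstract does not give the exact likelihood used, so the following is a sketch of the standard ANS psychophysical model commonly used for such estimates: the probability of correctly picking the larger of two numerosities n1 and n2 is taken as Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))), and the Weber fraction w is fit by maximum likelihood. Function names and the simulated trial data are illustrative.

import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

def p_correct(n1, n2, w):
    """Probability of choosing the larger array under the standard ANS model."""
    return norm.cdf(np.abs(n1 - n2) / (w * np.sqrt(n1 ** 2 + n2 ** 2)))

def fit_weber_fraction(n1, n2, correct):
    """Maximum-likelihood estimate of the Weber fraction w from trial data."""
    def neg_log_lik(w):
        p = np.clip(p_correct(n1, n2, w), 1e-9, 1 - 1e-9)
        return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))
    return minimize_scalar(neg_log_lik, bounds=(0.05, 2.0), method="bounded").x

# Toy data: simulate a subject with w = 0.45 and recover it.
rng = np.random.default_rng(1)
n1 = rng.integers(2, 20, size=500)
n2 = rng.integers(2, 20, size=500)
keep = n1 != n2
n1, n2 = n1[keep], n2[keep]
correct = rng.random(n1.size) < p_correct(n1, n2, 0.45)
print(fit_weber_fraction(n1, n2, correct))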

  16. Quantifying Unnecessary Normal Tissue Complication Risks due to Suboptimal Planning: A Secondary Study of RTOG 0126

    SciTech Connect

    Moore, Kevin L.; Schmidt, Rachel; Moiseenko, Vitali; Olsen, Lindsey A.; Tan, Jun; Xiao, Ying; Galvin, James; Pugh, Stephanie; Seider, Michael J.; Dicker, Adam P.; Bosch, Walter; Michalski, Jeff; Mutic, Sasa

    2015-06-01

    Purpose: The purpose of this study was to quantify the frequency and clinical severity of quality deficiencies in intensity modulated radiation therapy (IMRT) planning in the Radiation Therapy Oncology Group 0126 protocol. Methods and Materials: A total of 219 IMRT patients from the high-dose arm (79.2 Gy) of RTOG 0126 were analyzed. To quantify plan quality, we used established knowledge-based methods for patient-specific dose-volume histogram (DVH) prediction of organs at risk and a Lyman-Kutcher-Burman (LKB) model for grade ≥2 rectal complications to convert DVHs into normal tissue complication probabilities (NTCPs). The LKB model was validated by fitting dose-response parameters relative to observed toxicities. The 10% of plans with the lowest excess risk (22 of 219; excess risk being the difference between clinical and model-predicted NTCP) were used to create a model for the presumed best practices in the protocol (pDVH_0126,top10%). Applying the resultant model to the entire sample enabled comparisons between DVHs that patients could have received and DVHs they actually received. Excess risk quantified the clinical impact of suboptimal planning. Accuracy of pDVH predictions was validated by replanning 30 of 219 patients (13.7%), including equal numbers of presumed “high-quality,” “low-quality,” and randomly sampled plans. NTCP-predicted toxicities were compared to adverse events on protocol. Results: Existing models showed that bladder-sparing variations were less prevalent than rectum quality variations and that increased rectal sparing was not correlated with target metrics (dose received by 98% and 2% of the PTV, respectively). Observed toxicities were consistent with current LKB parameters. Converting DVH and pDVH_0126,top10% to rectal NTCPs, we observed 94 of 219 patients (42.9%) with ≥5% excess risk, 20 of 219 patients (9.1%) with ≥10% excess risk, and 2 of 219 patients (0.9%) with ≥15% excess risk. Replanning demonstrated the predicted NTCP reductions while maintaining the volume of the PTV receiving the prescription dose. An equivalent sample of high-quality plans showed fewer toxicities than low-quality plans, 6 of 73 versus 10 of 73, respectively, although these differences were not significant (P=.21) due to insufficient statistical power in this retrospective study. Conclusions: Plan quality deficiencies in RTOG 0126 exposed patients to substantial excess risk for rectal complications.
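
    A sketch of the Lyman-Kutcher-Burman step that converts a rectal dose-volume histogram into an NTCP, the quantity the study uses to express plan-quality differences as complication risk. The volume-effect exponent n, slope m, and TD50 below are commonly cited literature-style placeholders, not the parameters fitted against the protocol's observed toxicities, and the toy DVH is invented.

import numpy as np
from scipy.stats import norm

def gEUD(dose_bins, diff_volume, n):
    """Generalized equivalent uniform dose from a differential DVH.

    dose_bins   : dose per bin (Gy)
    diff_volume : fractional organ volume in each bin (sums to 1)
    n           : volume-effect parameter (n = 1 gives the mean dose)
    """
    return np.sum(diff_volume * dose_bins ** (1.0 / n)) ** n

def lkb_ntcp(dose_bins, diff_volume, n, m, td50):
    """Lyman-Kutcher-Burman NTCP: Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(dose_bins, diff_volume, n) - td50) / (m * td50)
    return norm.cdf(t)

# Illustrative rectal DVH and placeholder parameters (not the fitted RTOG values).
dose = np.linspace(0.0, 80.0, 161)                 # Gy
vol = np.exp(-0.5 * ((dose - 45.0) / 18.0) ** 2)   # toy differential DVH
vol /= vol.sum()
print(lkb_ntcp(dose, vol, n=0.09, m=0.13, td50=76.9))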

  17. Quantifying the role of forest soil and bedrock in the acid neutralization of surface water in steep hillslopes.

    PubMed

    Asano, Yuko; Uchida, Taro

    2005-02-01

    The role of soil and bedrock in acid neutralizing processes has been difficult to quantify because of hydrological and biogeochemical uncertainties. To quantify those roles, hydrochemical observations were conducted at two hydrologically well-defined, steep granitic hillslopes in the Tanakami Mountains of Japan. These paired hillslopes are similar except for their soils; Fudoji is leached of base cations (base saturation <6%), while Rachidani is covered with fresh soil (base saturation >30%), because the erosion rate is 100-1000 times greater. The results showed that (1) soil solution pH at the soil-bedrock interface at Fudoji (4.3) was significantly lower than that of Rachidani (5.5), (2) the hillslope discharge pH in both hillslopes was similar (6.7-6.8), and (3) at Fudoji, 60% of the base cations leaching from the hillslope were derived from bedrock, whereas only 20% were derived from bedrock in Rachidani. Further, previously published results showed that the stream pH could not be predicted from the acid deposition rate and soil base saturation status. These results demonstrate that bedrock plays an especially important role when the overlying soil has been leached of base cations. These results indicate that while the status of soil acidification is a first-order control on vulnerability to surface water acidification, in some cases such as at Fudoji, subsurface interaction with the bedrock determines the sensitivity of surface water to acidic deposition. PMID:15519722

  18. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T.; Mili, Ali

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area that monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.
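
    The abstract does not spell out the Econometric Availability formula, so the following is only a schematic of how a cost-of-unavailability figure could be assembled from the two named ingredients: a classic steady-state availability A = MTBF / (MTBF + MTTR) and a per-stakeholder Mean Failure Cost expressed in $/hour. All stakeholder names and numbers are invented for illustration, and the actual EA formulation in the paper may differ.

# Schematic combination of availability with per-stakeholder Mean Failure Cost (MFC).
def availability(mtbf_hours, mttr_hours):
    """Classic steady-state availability."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

def unavailability_cost_rate(mfc_per_hour, mtbf_hours, mttr_hours):
    """Expected $/hour lost by a stakeholder due to system unavailability."""
    return mfc_per_hour * (1.0 - availability(mtbf_hours, mttr_hours))

# Hypothetical stakeholders of a SCADA system (values invented for illustration).
stakeholders = {"utility operator": 5000.0, "industrial customer": 1200.0,
                "residential customers": 300.0}
for name, mfc in stakeholders.items():
    print(name, round(unavailability_cost_rate(mfc, mtbf_hours=2000.0, mttr_hours=4.0), 2))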

  19. Quantifying light exposure patterns in young adult students

    PubMed Central

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2014-01-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects’ estimates of time spent indoors and outdoors. Subjects’ estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires. PMID:25342873

  20. Quantifying light exposure patterns in young adult students.

    PubMed

    Alvarez, Amanda A; Wildsoet, Christine F

    2013-10-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects' estimates of time spent indoors and outdoors. Subjects' estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires. PMID:25342873

  1. Quantifying the Relationship Between Financial News and the Stock Market

    NASA Astrophysics Data System (ADS)

    Alanyali, Merve; Moat, Helen Susannah; Preis, Tobias

    2013-12-01

    The complex behavior of financial markets emerges from decisions made by many traders. Here, we exploit a large corpus of daily print issues of the Financial Times from 2nd January 2007 until 31st December 2012 to quantify the relationship between decisions taken in financial markets and developments in financial news. We find a positive correlation between the daily number of mentions of a company in the Financial Times and the daily transaction volume of a company's stock both on the day before the news is released, and on the same day as the news is released. Our results provide quantitative support for the suggestion that movements in financial markets and movements in financial news are intrinsically interlinked.
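
    A sketch of the central measurement, assuming daily mention counts for one company and its daily transaction volume are already aligned as two equal-length arrays: the Pearson correlation is computed for the same day and for volume on the day before the issue appears. The simulated series are stand-ins for the Financial Times and market data.

import numpy as np

def mention_volume_correlations(mentions, volume):
    """Pearson correlation of daily mentions with same-day and previous-day volume."""
    same_day = np.corrcoef(mentions, volume)[0, 1]
    # volume on the day *before* the news appears in print
    day_before = np.corrcoef(mentions[1:], volume[:-1])[0, 1]
    return same_day, day_before

# Toy daily series standing in for one company's data.
rng = np.random.default_rng(2)
activity = rng.gamma(2.0, 1.0, size=250)            # latent newsworthy activity
mentions = rng.poisson(3.0 * activity)               # mentions in the newspaper
volume = 1e6 * activity + rng.normal(0, 2e5, 250)    # traded volume
print(mention_volume_correlations(mentions, volume))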

  2. Quantifying the Behavior of Stock Correlations Under Market Stress

    NASA Astrophysics Data System (ADS)

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-10-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios.
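
    A sketch of the described pipeline on synthetic data: for each sliding window, compute the mean pairwise correlation of the 30 stocks' returns and a normalized index return used as the market-stress variable, then fit a straight line to the relation between the two. The window length, the normalization, and the toy data are illustrative choices, not the authors' exact settings.

import numpy as np

def mean_pairwise_correlation(returns_window):
    """Average off-diagonal correlation of an (n_days, n_stocks) return window."""
    c = np.corrcoef(returns_window, rowvar=False)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

def stress_vs_correlation(returns, index_returns, window=20):
    """Mean stock correlation and normalized index return for each sliding window."""
    stress, mean_corr = [], []
    for start in range(0, len(index_returns) - window + 1):
        sl = slice(start, start + window)
        r = index_returns[sl]
        stress.append(r.mean() / r.std())   # normalized index return
        mean_corr.append(mean_pairwise_correlation(returns[sl]))
    return np.array(stress), np.array(mean_corr)

# Toy one-factor market used only to exercise the pipeline.
rng = np.random.default_rng(3)
market = rng.normal(0, 0.01, 2500)
loadings = rng.uniform(0.5, 1.5, 30)
stocks = market[:, None] * loadings + rng.normal(0, 0.01, (2500, 30))
stress, corr = stress_vs_correlation(stocks, market)
slope, intercept = np.polyfit(stress, corr, 1)
print(slope, intercept)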

  3. Quantifying the behavior of stock correlations under market stress.

    PubMed

    Preis, Tobias; Kenett, Dror Y; Stanley, H Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242

  4. Quantifying the Behavior of Stock Correlations Under Market Stress

    PubMed Central

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242

  5. Quantifying bushfire penetration into urban areas in Australia

    NASA Astrophysics Data System (ADS)

    Chen, Keping; McAneney, John

    2004-06-01

    The extent and trajectory of bushfire penetration at the bushland-urban interface are quantified using data from major historical fires in Australia. We find that the maximum distance at which homes are destroyed is typically less than 700 m. The probability of home destruction emerges as a simple linear and decreasing function of distance from the bushland-urban boundary but with a variable slope that presumably depends upon fire regime and human intervention. The collective data suggest that the probability of home destruction at the forest edge is around 60%. Spatial patterns of destroyed homes display significant neighbourhood clustering. Our results provide revealing spatial evidence for estimating fire risk to properties and suggest an ember-attack model.
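
    A minimal reading of the reported relationship: the probability of home destruction declines roughly linearly with distance from the bushland-urban boundary, from about 60% at the forest edge to zero by roughly 700 m. The function below encodes just those two figures and is not the authors' fitted model.

import numpy as np

def p_destruction(distance_m, p_edge=0.60, max_reach_m=700.0):
    """Linearly decreasing probability of home destruction with distance (m)
    from the bushland-urban boundary, clipped at zero beyond the maximum reach."""
    d = np.asarray(distance_m, dtype=float)
    return np.clip(p_edge * (1.0 - d / max_reach_m), 0.0, None)

print(p_destruction([0, 100, 350, 700, 900]))  # [0.6, ~0.51, 0.3, 0.0, 0.0]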

  6. The SEGUE K Giant Survey. III. Quantifying Galactic Halo Substructure

    NASA Astrophysics Data System (ADS)

    Janesh, William; Morrison, Heather L.; Ma, Zhibo; Rockosi, Constance; Starkenburg, Else; Xue, Xiang Xiang; Rix, Hans-Walter; Harding, Paul; Beers, Timothy C.; Johnson, Jennifer; Lee, Young Sun; Schneider, Donald P.

    2016-01-01

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5-125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey’s Sloan Extension for Galactic Understanding and Exploration project. Using a position-velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (~33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity.
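
    A schematic friends-of-friends grouping in a combined position-velocity space, assuming each star is described by Galactocentric coordinates in kpc and a line-of-sight velocity in km/s. The velocity scaling and linking length below are arbitrary stand-ins for the paper's 4distance estimator and its tuned thresholds.

import numpy as np
from scipy.sparse.csgraph import connected_components
from scipy.spatial import cKDTree

def friends_of_friends(positions_kpc, velocities_kms, link_length=2.0, v_scale=0.05):
    """Group stars whose (position, scaled velocity) separation is below the
    linking length; returns one integer group label per star."""
    phase = np.hstack([positions_kpc, v_scale * velocities_kms[:, None]])
    tree = cKDTree(phase)
    # sparse adjacency of all pairs closer than the linking length
    adjacency = tree.sparse_distance_matrix(tree, link_length, output_type="coo_matrix")
    n_groups, labels = connected_components(adjacency.tocsr(), directed=False)
    return labels

# Toy data: a compact "stream" embedded in a smooth halo.
rng = np.random.default_rng(4)
halo_pos = rng.normal(0, 30, (300, 3))
halo_vel = rng.normal(0, 150, 300)
stream_pos = np.linspace([10, 10, 10], [14, 12, 11], 40) + rng.normal(0, 0.2, (40, 3))
stream_vel = rng.normal(50, 5, 40)
labels = friends_of_friends(np.vstack([halo_pos, stream_pos]),
                            np.concatenate([halo_vel, stream_vel]))
print(len(np.unique(labels)), "groups")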

  7. Quantifying realized inbreeding in wild and captive animal populations.

    PubMed

    Knief, U; Hemmrich-Stanisak, G; Wittig, M; Franke, A; Griffith, S C; Kempenaers, B; Forstmeier, W

    2015-04-01

    Most molecular measures of inbreeding do not measure inbreeding at the scale that is most relevant for understanding inbreeding depression, namely the proportion of the genome that is identical-by-descent (IBD). The inbreeding coefficient FPed obtained from pedigrees is a valuable estimator of IBD, but pedigrees are not always available, and cannot capture inbreeding loops that reach back in time further than the pedigree. We here propose a molecular approach to quantify the realized proportion of the genome that is IBD (propIBD), and we apply this method to a wild and a captive population of zebra finches (Taeniopygia guttata). In each of 948 wild and 1057 captive individuals we analyzed available single-nucleotide polymorphism (SNP) data (260 SNPs) spread over four different genomic regions in each population. This allowed us to determine whether any of these four regions was completely homozygous within an individual, which indicates IBD with high confidence. In the highly nomadic wild population, we did not find a single case of IBD, implying that inbreeding must be extremely rare (propIBD=0-0.00094, 95% CI). In the captive population, a five-generation pedigree strongly underestimated the average amount of realized inbreeding (FPed=0.013, lower than the realized propIBD). We propose propIBD as a molecular tool for quantifying inbreeding at the individual or population level, and we show analytically that it can capture inbreeding loops that reach back up to a few hundred generations. PMID:25585923
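
    A sketch of the scoring step described, assuming genotypes are coded as alternate-allele counts (0/1/2) in an array of shape (n_individuals, n_snps) and that each SNP is assigned to one of the four genomic regions: a region is flagged as IBD for an individual if every SNP in it is homozygous, and a propIBD-style summary is the fraction of flagged regions. Names and data are illustrative.

import numpy as np

def region_ibd_flags(genotypes, region_ids):
    """For each individual and region, True if all SNPs in the region are
    homozygous (genotype 0 or 2), taken as evidence of identity-by-descent."""
    homozygous = (genotypes == 0) | (genotypes == 2)
    regions = np.unique(region_ids)
    flags = np.column_stack([homozygous[:, region_ids == r].all(axis=1)
                             for r in regions])
    return flags  # shape (n_individuals, n_regions)

# Toy data: 5 individuals, 40 SNPs split over 4 regions.
rng = np.random.default_rng(5)
genotypes = rng.integers(0, 3, size=(5, 40))
region_ids = np.repeat(np.arange(4), 10)
flags = region_ibd_flags(genotypes, region_ids)
print("per-individual fraction of IBD regions:", flags.mean(axis=1))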

  8. Sodium borohydride/chloranil-based assay for quantifying total flavonoids.

    PubMed

    He, Xiangjiu; Liu, Dong; Liu, Rui Hai

    2008-10-22

    A novel sodium borohydride/chloranil-based (SBC) assay for quantifying total flavonoids, including flavones, flavonols, flavanones, flavanonols, isoflavonoids, flavanols, and anthocyanins, has been developed. Flavonoids with a 4-carbonyl group were reduced to flavan-4-ols using sodium borohydride catalyzed with aluminum chloride. Then the flavan-4-ols were oxidized to anthocyanins by chloranil in an acetic acid solution. The anthocyanins were reacted with vanillin in concentrated hydrochloric acid and then quantified spectrophotometrically at 490 nm. A representative of each common flavonoid class including flavones (baicalein), flavonols (quercetin), flavanones (hesperetin), flavanonols (silibinin), isoflavonoids (biochanin A), and flavanols (catechin) showed excellent linear dose-responses in the general range of 0.1-10.0 mM. For most flavonoids, the detection limit was about 0.1 mM in this assay. The recoveries of quercetin from spiked samples of apples and red peppers were 96.5 +/- 1.4% (CV = 1.4%, n = 4) and 99.0 +/- 4.2% (CV = 4.2%, n = 4), respectively. The recovery of catechin from spiked samples of cranberry extracts was 97.9 +/- 2.0% (CV = 2.0%, n = 4). The total flavonoids of selected common fruits and vegetables were measured using this assay. Among the samples tested, blueberry had the highest total flavonoid content (689.5 +/- 10.7 mg of catechin equiv per 100 g of sample), followed by cranberry, apple, broccoli, and red pepper. This novel SBC total flavonoid assay can be widely used to measure the total flavonoid content of fruits, vegetables, whole grains, herbal products, dietary supplements, and nutraceutical products. PMID:18798633

  9. Quantifying induced effects of subsurface renewable energy storage

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas

    2015-04-01

    New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substance. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. For achieving this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model applications. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry of Education and Research (BMBF).

  10. Quantifying circular-linear associations: hippocampal phase precession.

    PubMed

    Kempter, Richard; Leibold, Christian; Buzsáki, György; Diba, Kamran; Schmidt, Robert

    2012-05-30

    When a rat crosses the place field of a hippocampal pyramidal cell, this cell typically fires a series of spikes. Spike phases, measured with respect to theta oscillations of the local field potential, on average decrease as a function of the spatial distance traveled. This relation between phase and position of spikes might be a neural basis for encoding and is called phase precession. The degree of association between the circular phase variable and the linear spatial variable is, however, commonly quantified through a linear-linear correlation coefficient in which the circular variable is converted to a linear variable by restricting the phase to an arbitrarily chosen range, which may bias the estimated correlation. Here we introduce a new measure to quantify circular-linear associations. This measure leads to a robust estimate of the slope and phase offset of the regression line, and it provides a correlation coefficient for circular-linear data that is a natural analog of Pearson's product-moment correlation coefficient for linear-linear data. Using surrogate data, we show that the new method outperforms the standard linear-linear approach with respect to estimates of the regression line and the correlation, and that the new method is less dependent on noise and sample size. We confirm these findings in a large data set of experimental recordings from hippocampal place cells and theta oscillations, and we discuss remaining problems that are relevant for the analysis and interpretation of phase precession. In summary, we provide a new method for the quantification of circular-linear associations. PMID:22487609
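
    A compact sketch of the approach outlined in the abstract: the slope of the circular-linear regression is found by maximizing the mean resultant length of the phase residuals, and a circular-linear correlation coefficient is then computed between the measured phases and the phases predicted from position. This follows the published recipe only in outline; the slope search range and other numerical details are simplified.

import numpy as np
from scipy.optimize import minimize_scalar

def fit_phase_precession(x, phase, slope_bounds=(-2.0, 2.0)):
    """Circular-linear regression and correlation for phase precession.

    x     : linear variable (e.g. normalized position in the place field)
    phase : circular variable in radians
    Returns (slope in cycles per unit x, phase offset in rad, correlation rho).
    """
    def neg_resultant(a):
        resid = phase - 2 * np.pi * a * x
        return -np.hypot(np.cos(resid).mean(), np.sin(resid).mean())

    a = minimize_scalar(neg_resultant, bounds=slope_bounds, method="bounded").x
    resid = phase - 2 * np.pi * a * x
    phi0 = np.arctan2(np.sin(resid).mean(), np.cos(resid).mean())

    # circular-linear correlation between measured and model-predicted phases
    theta = np.mod(2 * np.pi * np.abs(a) * x, 2 * np.pi)
    phase_bar = np.arctan2(np.sin(phase).mean(), np.cos(phase).mean())
    theta_bar = np.arctan2(np.sin(theta).mean(), np.cos(theta).mean())
    num = np.sum(np.sin(phase - phase_bar) * np.sin(theta - theta_bar))
    den = np.sqrt(np.sum(np.sin(phase - phase_bar) ** 2) *
                  np.sum(np.sin(theta - theta_bar) ** 2))
    return a, phi0, num / den

# Toy phase-precessing spikes: phase decreases by ~0.8 cycles across the field.
rng = np.random.default_rng(6)
x = rng.random(400)
phase = np.mod(2 * np.pi * (-0.8) * x + 1.0 + rng.normal(0, 0.5, 400), 2 * np.pi)
print(fit_phase_precession(x, phase))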

  11. Quantifying Qualitative Data Using Cognitive Maps

    ERIC Educational Resources Information Center

    Scherp, Hans-Ake

    2013-01-01

    The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate…

  12. Quantifying Qualitative Data Using Cognitive Maps

    ERIC Educational Resources Information Center

    Scherp, Hans-Ake

    2013-01-01

    The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate

  13. Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests

    SciTech Connect

    Krummel, J.R.; Su, Haiping; Fox, J.; Yarnasan, S.; Ekasingh, M.

    1995-06-01

    Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

  14. Quantifying the Nonlinear, Anisotropic Material Response of Spinal Ligaments

    NASA Astrophysics Data System (ADS)

    Robertson, Daniel J.

    Spinal ligaments may be a significant source of chronic back pain, yet they are often disregarded by the clinical community due to a lack of information with regards to their material response, and innervation characteristics. The purpose of this dissertation was to characterize the material response of spinal ligaments and to review their innervation characteristics. Review of relevant literature revealed that all of the major spinal ligaments are innervated. They cause painful sensations when irritated and provide reflexive control of the deep spinal musculature. As such, including the neurologic implications of iatrogenic ligament damage in the evaluation of surgical procedures aimed at relieving back pain will likely result in more effective long-term solutions. The material response of spinal ligaments has not previously been fully quantified due to limitations associated with standard soft tissue testing techniques. The present work presents and validates a novel testing methodology capable of overcoming these limitations. In particular, the anisotropic, inhomogeneous material constitutive properties of the human supraspinous ligament are quantified and methods for determining the response of the other spinal ligaments are presented. In addition, a method for determining the anisotropic, inhomogeneous pre-strain distribution of the spinal ligaments is presented. The multi-axial pre-strain distributions of the human anterior longitudinal ligament, ligamentum flavum and supraspinous ligament were determined using this methodology. Results from this work clearly demonstrate that spinal ligaments are not uniaxial structures, and that finite element models which account for pre-strain and incorporate ligament's complex material properties may provide increased fidelity to the in vivo condition.

  15. An organotypic spinal cord slice culture model to quantify neurodegeneration.

    PubMed

    Ravikumar, Madhumitha; Jain, Seema; Miller, Robert H; Capadona, Jeffrey R; Selkirk, Stephen M

    2012-11-15

    Activated microglial cells have been implicated in the neurodegenerative process of Alzheimer's disease, Parkinson's disease, Huntington's disease, amyotrophic lateral sclerosis, and multiple sclerosis; however, the precise roles of microglia in disease progression are unclear. Despite these diseases having been described for more than a century, current FDA approved therapeutics are symptomatic in nature with little evidence supporting a neuroprotective effect. Furthermore, identifying novel therapeutics remains challenging due to undetermined etiology, a variable disease course, and the paucity of validated targets. Here, we describe the use of a novel ex vivo spinal cord culture system that offers the ability to screen potential neuroprotective agents, while maintaining the complexity of the in vivo environment. To this end, we treated spinal cord slice cultures with lipopolysaccharide and quantified neuron viability in culture using measurements of axon length and FluoroJadeC intensity. To simulate a microglia-mediated response to cellular debris, antigens, or implanted materials/devices, we supplemented the culture media with increasing densities of microspheres, facilitating microglia-mediated phagocytosis of the particles, which demonstrated a direct correlation between the phagocytic activities of microglia and neuronal health. To validate our model's capacity to accurately depict neuroprotection, cultures were treated with resveratrol, which demonstrated enhanced neuronal health. Our results successfully demonstrate the use of this model to reproducibly quantify the extent of neurodegeneration through the measurement of axon length and FluoroJadeC intensity, and we suggest this model will allow for accurate, high-throughput screening, which could expedite the translation of effective therapeutic agents to clinical trials. PMID:22975474

  16. Quantifying Local Radiation-Induced Lung Damage From Computed Tomography

    SciTech Connect

    Ghobadi, Ghazaleh; Hogeweg, Laurens E.; Brandenburg, Sytze; Langendijk, Johannes A.

    2010-02-01

    Purpose: Optimal implementation of new radiotherapy techniques requires accurate predictive models for normal tissue complications. Since clinically used dose distributions are nonuniform, local tissue damage needs to be measured and related to local tissue dose. In lung, radiation-induced damage results in density changes that have been measured by computed tomography (CT) imaging noninvasively, but not yet on a localized scale. Therefore, the aim of the present study was to develop a method for quantification of local radiation-induced lung tissue damage using CT. Methods and Materials: CT images of the thorax were made 8 and 26 weeks after irradiation of 100%, 75%, 50%, and 25% lung volume of rats. Local lung tissue structure (S_L) was quantified from local mean and local standard deviation of the CT density in Hounsfield units in 1-mm³ subvolumes. The relation of changes in S_L (ΔS_L) to histologic changes and breathing rate was investigated. Feasibility for clinical application was tested by applying the method to CT images of a patient with non-small-cell lung carcinoma and investigating the local dose-effect relationship of ΔS_L. Results: In rats, a clear dose-response relationship of ΔS_L was observed at different time points after radiation. Furthermore, ΔS_L correlated strongly to histologic endpoints (infiltrates and inflammatory cells) and breathing rate. In the patient, progressive local dose-dependent increases in ΔS_L were observed. Conclusion: We developed a method to quantify local radiation-induced tissue damage in the lung using CT. This method can be used in the development of more accurate predictive models for normal tissue complications.
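
    A schematic of the local texture computation described (local mean and local standard deviation of CT density in small subvolumes). How the two maps are combined into the structure measure S_L is not given in the abstract, so the function simply returns both; the window size and the toy volume are placeholders.

import numpy as np
from scipy.ndimage import uniform_filter

def local_hu_statistics(ct_hu, subvolume_voxels=5):
    """Local mean and local standard deviation of CT density (HU) computed in a
    cubic moving window, as inputs to a local lung-structure measure."""
    mean = uniform_filter(ct_hu, size=subvolume_voxels)
    mean_sq = uniform_filter(ct_hu ** 2, size=subvolume_voxels)
    std = np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))
    return mean, std

# Toy CT volume: aerated lung (~-800 HU) with a denser, "damaged" pocket.
ct = np.full((40, 40, 40), -800.0)
ct[15:25, 15:25, 15:25] = -300.0
ct += np.random.default_rng(7).normal(0, 20, ct.shape)
local_mean, local_std = local_hu_statistics(ct)
print(local_mean[20, 20, 20], local_std[20, 20, 20])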

  17. Quantifying the Benefits of Active Debris Removal in a Range of Scenarios

    NASA Astrophysics Data System (ADS)

    White, Adam E.; Lewis, Hugh G.

    2013-08-01

    Long-term space debris modelling studies have suggested that the ≥10 cm low Earth orbit debris population will continue to grow even with the widespread adoption of mitigation measures recommended by the Inter-Agency Space Debris Coordination Committee. However, a number of recent studies have shown that, with additional removal of a small number of debris objects, it is possible to prevent the growth of debris in LEO. These modelling studies were based on assumptions constraining future launch and explosion rates, solar activity and mitigation, amongst others, to a limited number of cases. As a result, the effectiveness of Active Debris Removal (ADR) has only been established and quantified for a narrow range of possible outcomes. Therefore, the potential benefits of ADR, in practice, remain uncertain and there is a need to investigate a wider range of potential future scenarios to help establish ADR requirements. In this paper, we present results of a study to model and quantify the influence of four essential assumptions on the effectiveness of ADR: (1) launch activity, (2) explosion activity, (3) solar activity and (4) compliance with post-mission disposal. Each assumption is given a realistic range based upon historic worst-case data and an optimistic best-case. Using the University of Southampton's Debris Analysis and Monitoring Architecture to the Geosynchronous Environment (DAMAGE) tool, these assumptions were modelled randomly from their permitted range in Monte Carlo projections from 2009 to 2209 of the ≥5 cm LEO debris environment. In addition, two yearly ADR rates were investigated: five and ten objects per year. The results show an increase in the variance of the mean LEO debris population at the 2209 epoch. The uncertainty is such that, in some cases, ADR was not sufficient to prevent the long-term growth of the population, whilst in others ADR is not required to prevent population growth.

  18. Quantifying commuter exposures to volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Kayne, Ashleigh

    Motor-vehicles can be a predominant source of air pollution in cities. Traffic-related air pollution is often unavoidable for people who live in populous areas. Commuters may have high exposures to traffic-related air pollution as they are close to vehicle tailpipes. Volatile organic compounds (VOCs) are one class of air pollutants of concern because exposure to VOCs carries risk for adverse health effects. Specific VOCs of interest for this work include benzene, toluene, ethylbenzene, and xylenes (BTEX), which are often found in gasoline and combustion products. Although methods exist to measure time-integrated personal exposures to BTEX, there are few practical methods to measure a commuter's time-resolved BTEX exposure which could identify peak exposures that could be concealed with a time-integrated measurement. This study evaluated the ability of a photoionization detector (PID) to measure commuters' exposure to BTEX using Tenax TA samples as a reference and quantified the difference in BTEX exposure between cyclists and drivers with windows open and closed. To determine the suitability of two measurement methods (PID and Tenax TA) for use in this study, the precision, linearity, and limits of detection (LODs) for both the PID and Tenax TA measurement methods were determined in the laboratory with standard BTEX calibration gases. Volunteers commuted from their homes to their work places by cycling or driving while wearing a personal exposure backpack containing a collocated PID and Tenax TA sampler. Volunteers completed a survey and indicated if the windows in their vehicle were open or closed. Comparing pairs of exposure data from the Tenax TA and PID sampling methods determined the suitability of the PID to measure the BTEX exposures of commuters. The difference between BTEX exposures of cyclists and drivers with windows open and closed in Fort Collins was determined. Both the PID and Tenax TA measurement methods were precise and linear when evaluated in the laboratory using standard BTEX gases. The LODs for the Tenax TA sampling tubes (determined with a sample volume of 1,000 standard cubic centimeters which is close to the approximate commuter sample volumes collected) were orders of magnitude lower (0.04 to 0.7 parts per billion (ppb) for individual compounds of BTEX) compared to the PIDs' LODs (9.3 to 15 ppb of a BTEX mixture), which makes the Tenax TA sampling method more suitable to measure BTEX concentrations in the sub-parts per billion (ppb) range. PID and Tenax TA data for commuter exposures were inversely related. The concentrations of VOCs measured by the PID were substantially higher than BTEX concentrations measured by collocated Tenax TA samplers. The inverse trend and the large difference in magnitude between PID responses and Tenax TA BTEX measurements indicates the two methods may have been measuring different air pollutants that are negatively correlated. Drivers in Fort Collins, Colorado with closed windows experienced greater time-weighted average BTEX exposures than cyclists (p: 0.04). Commuter BTEX exposures measured in Fort Collins were lower than commuter exposures measured in prior studies that occurred in larger cities (Boston and Copenhagen). Although route and intake may affect a commuter's BTEX dose, these variables are outside of the scope of this study. 
Within the limitations of this study (including: small sample size, small representative area of Fort Collins, and respiration rates not taken into account), it appears health risks associated with traffic-induced BTEX exposures may be reduced by commuting via cycling instead of driving with windows closed and living in a less populous area that has less vehicle traffic. Although the PID did not reliably measure low-level commuter BTEX exposures, the Tenax TA sampling method did. The PID measured BTEX concentrations reliably in a controlled environment, at high concentrations (300-800 ppb), and in the absence of other air pollutants. In environments where there could be multiple chemicals present that may produce a PID signal (such as nitrogen dioxide), Tenax TA samplers may be a better choice for measuring BTEX. Tenax TA measurements were the only suitable method within this study to measure commuter's BTEX exposure in Fort Collins, Colorado.

  19. QUANTIFYING DIFFUSIVE MASS TRANSFER IN FRACTURED SHALE BEDROCK

    EPA Science Inventory

    A significant limitation in defining remediation needs at contaminated sites often results from an insufficient understanding of the transport processes that control contaminant migration. The objectives of this research were to help resolve this dilemma by providing an improved...

  20. New primers for detecting and quantifying denitrifying anaerobic methane oxidation archaea in different ecological niches.

    PubMed

    Ding, Jing; Ding, Zhao-Wei; Fu, Liang; Lu, Yong-Ze; Cheng, Shuk H; Zeng, Raymond J

    2015-11-01

    The significance of ANME-2d in the environmental methane sink has been overlooked, and no study has evaluated the distribution of ANME-2d in the environment; new primers therefore needed to be designed for further research. In this paper, a pair of primers (DP397F and DP569R) was designed to quantify ANME-2d. The specificity and amplification efficiency of this primer pair were acceptable. PCR amplification with another pair of primers (DP142F and DP779R) generated a single, bright targeted band from the enrichment sample, but yielded faint, multiple bands from the environmental samples. Nested PCR was therefore conducted using the primers DP142F/DP779R in the first round and DP142F/DP569R in the second round, which generated a bright targeted band. Further phylogenetic analysis showed that these targeted bands were ANME-2d-related sequences. Real-time PCR showed that the 16S ribosomal RNA gene of ANME-2d in these samples ranged from 3.72 × 10^4 to 2.30 × 10^5 copies μg^-1 DNA, indicating that the percentage of ANME-2d was greatest in a polluted river sample and least in a rice paddy sample. These results demonstrate that the newly developed real-time PCR primers can sufficiently quantify ANME-2d and that nested PCR with an appropriate combination of the new primers can successfully detect ANME-2d in environmental samples; the latter finding suggests that ANME-2d may be widespread in the environment. PMID:26300291
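
    A sketch of the absolute-quantification arithmetic behind figures such as 3.72 × 10^4 copies μg^-1 DNA: fit a standard curve of Ct against log10(copy number), convert a sample Ct to copies, and normalize by the mass of template DNA in the reaction. The dilution series and sample values are invented.

import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    """Linear standard curve Ct = slope * log10(copies) + intercept."""
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency
    return slope, intercept, efficiency

def copies_per_ug(ct_sample, slope, intercept, template_ng):
    """Copy number per microgram of template DNA for a measured Ct."""
    copies_in_reaction = 10 ** ((ct_sample - intercept) / slope)
    return copies_in_reaction / (template_ng / 1000.0)

# Invented dilution series (10^3 to 10^7 copies per reaction) and a sample Ct.
log_copies = np.array([3, 4, 5, 6, 7], dtype=float)
cts = np.array([31.2, 27.9, 24.5, 21.1, 17.8])
slope, intercept, eff = fit_standard_curve(log_copies, cts)
print("efficiency:", round(eff, 2))
print("copies/ug:", copies_per_ug(ct_sample=26.0, slope=slope, intercept=intercept,
                                  template_ng=20.0))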

  1. Hyperspectral remote sensing tools for quantifying plant litter and invasive species in arid ecosystems

    USGS Publications Warehouse

    Nagler, Pamela L.; Sridhar, B.B. Maruthi; Olsson, Aaryn Dyami; Glenn, Edward P.; van Leeuwen, Willem J.D.

    2012-01-01

    Green vegetation can be distinguished using visible and infrared multi-band and hyperspectral remote sensing methods. The problem has been in identifying non-photosynthetically active landscape components, such as litter and soils, and distinguishing them from green vegetation. Additionally, distinguishing different species of green vegetation is challenging using the relatively few bands available on most satellite sensors. This chapter focuses on hyperspectral remote sensing characteristics that aim to distinguish between green vegetation, soil, and litter (or senescent vegetation). Quantifying litter by remote sensing methods is important in constructing carbon budgets of natural and agricultural ecosystems. Distinguishing between plant types is important in tracking the spread of invasive species. Green leaves of different species usually have similar spectra, making it difficult to distinguish between species. However, in this chapter we show that phenological differences between species can be used to detect some invasive species by their distinct patterns of greening and dormancy over an annual cycle based on hyperspectral data. Both applications require methods to quantify the non-green cellulosic fractions of plant tissues by remote sensing even in the presence of soil and green plant cover. We explore these methods and offer three case studies. The first concerns distinguishing surface litter from soil using the Cellulose Absorption Index (CAI), as applied to no-till farming practices where plant litter is left on the soil after harvest. The second involves using different band combinations to distinguish invasive saltcedar from agricultural and native riparian plants on the Lower Colorado River. The third illustrates the use of the CAI and NDVI in time-series analyses to distinguish between invasive buffelgrass and native plants in a desert environment in Arizona. Together the results show how hyperspectral imagery can be applied to solve problems that are not amenable to solution by the simple band combinations normally used in remote sensing.
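
    A minimal sketch of the two indices named in the case studies, assuming atmospherically corrected surface reflectances: NDVI from red and near-infrared bands, and the Cellulose Absorption Index (CAI) from reflectance near 2.0, 2.1 and 2.2 μm, which responds to dry plant litter. The band names are placeholders for whichever hyperspectral bands fall nearest those wavelengths.

import numpy as np

def ndvi(red, nir):
    """Normalized Difference Vegetation Index."""
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

def cai(r2000, r2100, r2200):
    """Cellulose Absorption Index from reflectance near 2.0, 2.1 and 2.2 um;
    positive values indicate dry plant litter, values near zero bare soil."""
    return 100.0 * (0.5 * (np.asarray(r2000, float) + np.asarray(r2200, float))
                    - np.asarray(r2100, float))

# Illustrative per-pixel reflectances: green canopy, litter, bare soil.
print(ndvi(red=[0.05, 0.20, 0.25], nir=[0.45, 0.30, 0.30]))
print(cai(r2000=[0.18, 0.30, 0.25], r2100=[0.17, 0.26, 0.25], r2200=[0.16, 0.28, 0.24]))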

  2. Quantifying and Generalizing Hydrologic Responses to Dam Regulation using a Statistical Modeling Approach

    SciTech Connect

    McManamay, Ryan A

    2014-01-01

    Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at large spatial scales and to generalize the directionality of hydrologic responses.
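
    A schematic of the classification framing mentioned in the abstract (predicting whether a hydrologic index responds negatively or positively to regulation), assuming a table of site descriptors such as degree of regulation, distance to the dam, dam purpose, and hydrologic class. Feature names, data, and the logistic-regression choice are illustrative; only the modeling pattern is shown.

import numpy as np
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import OneHotEncoder

# Invented training table: one row per gauge downstream of a dam.
rng = np.random.default_rng(8)
n = 300
data = pd.DataFrame({
    "degree_of_regulation": rng.uniform(0, 2, n),   # storage / mean annual flow
    "distance_to_dam_km": rng.uniform(1, 100, n),
    "dam_purpose": rng.choice(["hydropower", "flood_control", "irrigation"], n),
    "hydrologic_class": rng.choice(["snowmelt", "perennial_runoff", "intermittent"], n),
})

# Invented response: direction (+1 / -1) of the change in one hydrologic index.
logit = (1.5 * data["degree_of_regulation"]
         - 0.02 * data["distance_to_dam_km"]
         + 1.0 * (data["dam_purpose"] == "flood_control"))
y = np.where(rng.random(n) < 1.0 / (1.0 + np.exp(-logit)), 1, -1)

# One-hot encode the categorical descriptors and fit a logistic classifier.
model = make_pipeline(
    ColumnTransformer(
        [("cat", OneHotEncoder(), ["dam_purpose", "hydrologic_class"])],
        remainder="passthrough"),
    LogisticRegression(max_iter=1000),
)
model.fit(data, y)
print("training accuracy:", round(model.score(data, y), 2))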

  3. A LC-MS method to quantify tenofovir urinary concentrations in treated patients.

    PubMed

    Simiele, Marco; Carcieri, Chiara; De Nicolò, Amedeo; Ariaudo, Alessandra; Sciandra, Mauro; Calcagno, Andrea; Bonora, Stefano; Di Perri, Giovanni; D'Avolio, Antonio

    2015-10-10

    Tenofovir disoproxil fumarate is a prodrug of tenofovir used in the treatment of HIV and HBV infections: it is the most widely used antiretroviral worldwide. Tenofovir is a nucleotide HIV reverse transcriptase inhibitor that has shown excellent long-term efficacy and tolerability. However, renal and bone complications (proximal tubulopathy, hypophosphatemia, decreased bone mineral density, and reduced creatinine clearance) limit its use. Tenofovir renal toxicity has been suggested to be the consequence of drug entrapment in proximal tubular cells: measuring tenofovir urinary concentrations may be a proxy of this event and may be used as a predictor of tenofovir side effects. No method is currently available for quantifying tenofovir in this matrix; the aim of this work was therefore to validate a new LC-MS method for the quantification of urinary tenofovir. Chromatographic separation was achieved with a gradient (acetonitrile and water with formic acid 0.05%) on an Atlantis 5 μm T3, 4.6 mm × 150 mm, reversed phase analytical column. Detection of tenofovir and internal standard was achieved by electrospray ionization mass spectrometry in the positive ion mode. Calibration ranged from 391 to 100,000 ng/mL. The limit of quantification was 391 ng/mL and the limit of detection was 195 ng/mL. Mean recoveries of tenofovir and the internal standard were consistent and stable, while the matrix effect was low and stable. The method was tested on 35 urine samples from HIV-positive patients treated with tenofovir-based HAARTs and did not show any significant interference with antiretrovirals or other concomitantly administered drugs. All the observed concentrations in real samples fitted the calibration range, confirming the suitability of this method for routine clinical use. If confirmed in ad hoc studies, this method may be used for quantifying tenofovir urinary concentrations and help manage HIV-positive patients treated with tenofovir. PMID:25997174

  4. Experimental Drug for Rheumatoid Arthritis Shows Promise

    MedlinePlus

    ... medlineplus/news/fullstory_158076.html Experimental Drug for Rheumatoid Arthritis Shows Promise Baricitinib helped patients who failed other ... 2016 (HealthDay News) -- An experimental drug to treat rheumatoid arthritis showed promise in a new six-month trial. ...

  5. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking

  6. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  7. Quantifying terpenes in rumen fluid, serum, and plasma from sheep

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Determining the fate of terpenes consumed by browsing ruminants requires methods to quantify their presence in blood and rumen fluid. Our objective was to modify an existing procedure for plasma terpenes to quantify 25 structurally diverse mono- and sesquiterpenes in serum, plasma, and rumen fluid fr...

  8. A Unified Account of Quantifier Perspective Effects in Discourse

    ERIC Educational Resources Information Center

    Sanford, Anthony J.; Dawydiak, Eugene J.; Moxey, Linda M.

    2007-01-01

    Positive and negative quantifiers induce two very different perspectives in comprehenders--perspectives that have strong applications to rhetoric and communication. These are briefly reviewed. A potential mechanism, based on earlier work, is introduced, resting on the idea that negatively quantified sentences (like Not all of the boys went to the…

  9. Children with Autism Show Reduced Somatosensory Response: An MEG Study

    PubMed Central

    Marco, Elysa J.; Khatibi, Kasra; Hill, Susanna S.; Siegel, Bryna; Arroyo, Monica S.; Dowling, Anne F.; Neuhaus, John M.; Sherr, Elliott H.; Hinkley, Leighton N. B.; Nagarajan, Srikantan S.

    2012-01-01

    Lay Abstract Autism spectrum disorders are reported to affect nearly one out of every one hundred children, with over 90% of these children showing behavioral disturbances related to the processing of basic sensory information. Behavioral sensitivity to light touch, such as profound discomfort with clothing tags and physical contact, is a ubiquitous finding in children on the autism spectrum. In this study, we investigate the strength and timing of brain activity in response to simple, light taps to the fingertip. Our results suggest that children with autism show a diminished early response in the primary somatosensory cortex (S1). This finding is most evident in the left hemisphere. In exploratory analysis, we also show that tactile sensory behavior, as measured by the Sensory Profile, may be a better predictor of the intensity and timing of brain activity related to touch than a clinical autism diagnosis. We report that children with atypical tactile behavior have significantly lower amplitude somatosensory cortical responses in both hemispheres. Thus sensory behavioral phenotype appears to be a more powerful strategy for investigating neural activity in this cohort. This study provides evidence for atypical brain activity during sensory processing in autistic children and suggests that our sensory behavior based methodology may be an important approach to investigating brain activity in people with autism and neurodevelopmental disorders. Scientific Abstract The neural underpinnings of sensory processing differences in autism remain poorly understood. This prospective magnetoencephalography (MEG) study investigates whether children with autism show atypical cortical activity in the primary somatosensory cortex (S1) in comparison to matched controls. Tactile stimuli were clearly detectable, painless taps applied to the distal phalanx of the second (D2) and third (D3) fingers of the right and left hands. Three tactile paradigms were administered: an oddball paradigm (standard taps to D3 at an inter-stimulus interval (ISI) of 0.33 and deviant taps to D2 with ISI ranging from 1.32–1.64s); a slow-rate paradigm (D2) with an ISI matching the deviant taps in the oddball paradigm; and a fast-rate paradigm (D2) with an ISI matching the standard taps in the oddball. Study subjects were boys (age 7–11 years) with and without autism disorder. Sensory behavior was quantified using the Sensory Profile questionnaire. Boys with autism exhibited smaller amplitude left hemisphere S1 response to slow and deviant stimuli during the right hand paradigms. In post-hoc analysis, tactile behavior directly correlated with the amplitude of cortical response. Consequently, the children were re-categorized by degree of parent-report tactile sensitivity. This regrouping created a more robust distinction between the groups with amplitude diminution in the left and right hemispheres and latency prolongation in the right hemisphere in the deviant and slow-rate paradigms for the affected children. This study suggests that children with autism have early differences in somatosensory processing, which likely influence later stages of cortical activity from integration to motor response. PMID:22933354

  10. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    NASA Astrophysics Data System (ADS)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  11. Quantifying selective pressures driving bacterial evolution using lineage analysis

    PubMed Central

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population’s rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages –i.e. the life-histories of individuals and their ancestors– to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to E. coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life-history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection, and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems. PMID:26213639

  12. A new device to quantify tactile sensation in neuropathy

    PubMed Central

    Selim, M.M.; Brink, T.S.; Hodges, J.S.; Wendelschafer-Crabb, G.; Foster, S.X.Y.-L.; Nolano, M.; Provitera, V.; Simone, D.A.

    2011-01-01

    Objective: To devise a rapid, sensitive method to quantify tactile threshold of finger pads for early detection and staging of peripheral neuropathy and for use in clinical trials. Methods: Subjects were 166 healthy controls and 103 patients with, or at risk for, peripheral neuropathy. Subjects were screened by questionnaire. The test device, the Bumps, is a checkerboard-like smooth surface with 12 squares; each square encloses 5 colored circles. The subject explores the circles of each square with the index finger pad to locate the one circle containing a small bump. Bumps in different squares have different heights. Detection threshold is defined as the smallest bump height detected. In some subjects, a 3-mm skin biopsy from the tested finger pad was taken to compare density of Meissner corpuscles (MCs) to bump detection thresholds. Results: The mean (±SEM) bump detection threshold for control subjects was 3.3 ± 0.10 μm. Threshold and test time were age related, older subjects having slightly higher thresholds and using more time. Mean detection threshold of patients with neuropathy (6.2 ± 0.35 μm) differed from controls (p < 0.001). A proposed threshold for identifying impaired sensation had a sensitivity of 71% and specificity of 74%. Detection threshold was higher when MC density was decreased. Conclusions: These preliminary studies suggest that the Bumps test is a rapid, sensitive, inexpensive method to quantify tactile sensation of finger pads. It has potential for early diagnosis of tactile deficiency in subjects suspected of having neuropathy, for staging degree of tactile deficit, and for monitoring change over time. PMID:21555731

  13. Monitoring microemboli during cardiopulmonary bypass with the EDAC quantifier.

    PubMed

    Lynch, John E; Wells, Christopher; Akers, Tom; Frantz, Paul; Garrett, Donna; Scott, M Lance; Williamson, Lisa; Agnew, Barbara; Lynch, John K

    2010-09-01

    Gaseous emboli may be introduced into the bypass circuit both from the surgical field and during perfusionist interventions. While circuits provide good protection against massive air embolism, they do not remove gaseous microemboli (GME) from the bypass circuit. The purpose of this preliminary study is to assess the incidence of GME during bypass surgery and determine whether increased GME counts are associated with specific events during bypass surgery. In 30 cases divided between 15 coronary artery bypass grafts and 15 valve repairs, GME were counted and sized at three locations on the bypass circuit using the EDAC Quantifier (Luna Innovations, Roanoke, VA). A mean of 45,276 GME were detected after the arterial line filter during these 30 cases, with significantly more detected (p = .04) post filter during valve cases (mean = 72,137 +/- 22,113) than coronary artery bypass graft cases (mean = 18,416 +/- 7831). GME detected post filter were significantly correlated in time with counts detected in the venous line (p < .001). Specific events associated with high counts included the initiation of cardiopulmonary bypass, heart manipulations, insertion and removal of clamps, and the administration of drugs. Global factors associated with increased counts post filter included higher venous line counts and higher post reservoir/bubble trap counts. The mean number of microemboli detected during bypass surgery was much higher than reported in other studies of emboli incidence, most likely due to the increased sensitivity of the EDAC Quantifier compared to other detection modalities. The results also suggest the need for further study of the clinical significance of these microemboli and what practices may be used to reduce GME incidence. Increased in vitro testing of the air handling capability of different circuit designs, along with more clinical studies assessing best clinical practices for reducing GME activity, is recommended. PMID:21114224

  14. Quantifying MCMC Exploration of Phylogenetic Tree Space

    PubMed Central

    Whidden, Chris; Matsen, Frederick A.

    2015-01-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  15. Quantifying the Ease of Scientific Discovery.

    PubMed

    Arbesman, Samuel

    2011-02-01

    It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines - mammalian species, chemical elements, and minor planets - I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science. PMID:22328796
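
    The abstract above describes fitting an exponential decay to the ease of discovery. The sketch below shows one way such a fit can be done; the year/ease values are invented placeholders, not data from any of the three disciplines studied in the paper.

```python
# Minimal sketch: fit an exponential decay to a proxy for the "ease of
# discovery" (e.g., discoveries per unit of cumulative effort per year).
# The year/ease values below are hypothetical, not the paper's data.
import numpy as np
from scipy.optimize import curve_fit

years = np.array([1800, 1825, 1850, 1875, 1900, 1925, 1950, 1975, 2000], dtype=float)
ease = np.array([1.00, 0.62, 0.40, 0.26, 0.17, 0.11, 0.07, 0.045, 0.03])  # arbitrary units

def exp_decay(t, a, k):
    """Ease of discovery modelled as a * exp(-k * (t - t0))."""
    return a * np.exp(-k * (t - years[0]))

(a_hat, k_hat), _ = curve_fit(exp_decay, years, ease, p0=(1.0, 0.01))
print(f"decay constant k = {k_hat:.4f} per year, half-life = {np.log(2) / k_hat:.1f} years")
```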

  16. Quantifying tissue mechanical properties using photoplethysmography

    PubMed Central

    Akl, Tony J.; Wilson, Mark A.; Ericson, M. Nance; Coté, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components: a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young’s modulus of tissue-mimicking phantoms with a resolution of 4 kPa in the range of 12 to 61 kPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance. PMID:25071970
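
    The abstract refers to a Windkessel model of the circulation. As a hedged illustration of that idea, the sketch below integrates a simple two-element Windkessel (compliance C, peripheral resistance R) driven by a half-sine inflow; the parameter values and waveform are assumptions for illustration, not the model configuration used in the paper.

```python
# Two-element Windkessel sketch: C dP/dt = Q_in(t) - P/R.
# R, C, heart rate and the inflow waveform below are illustrative assumptions.
import numpy as np
from scipy.integrate import solve_ivp

R = 1.0   # peripheral resistance, mmHg*s/mL (assumed)
C = 1.2   # arterial compliance, mL/mmHg (assumed)
HR = 75.0 / 60.0          # heart rate, beats per second
T, T_sys = 1.0 / HR, 0.3  # cardiac period and systolic duration, s

def inflow(t):
    """Half-sine inflow during systole, zero during diastole (mL/s)."""
    tc = t % T
    return 300.0 * np.sin(np.pi * tc / T_sys) if tc < T_sys else 0.0

def windkessel(t, p):
    # rate of change of arterial pressure
    return [(inflow(t) - p[0] / R) / C]

t_eval = np.linspace(0, 5 * T, 2000)
sol = solve_ivp(windkessel, (0, 5 * T), [80.0], t_eval=t_eval, max_step=1e-3)
pulse = sol.y[0]
print(f"steady-state pulse pressure ~ {pulse[-400:].max() - pulse[-400:].min():.1f} mmHg")
```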

  17. Quantifying tissue mechanical properties using photoplethysmography

    SciTech Connect

    Akl, Tony; Wilson, Mark A.; Ericson, Milton Nance; Cote, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young s Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance.

  18. Quantifying Effects Of Water Stress On Sunflowers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This poster presentation describes the data collection and analysis procedures and results for 2009 from a research grant funded by the National Sunflower Association. The primary objective was to evaluate the use of crop canopy temperature measured with infrared temperature sensors, as a more time ...

  19. Quantifying Security Threats and Their Impact

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    2009-01-01

    In earlier works, we present a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper we illustrate this infrastructure by means of a sample example involving an e-commerce application.
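
    As a rough illustration of estimating per-stakeholder loss from security breakdowns, the sketch below computes each stakeholder's expected annual loss as a probability-weighted sum over threats. The stakeholder names, threat list, probabilities and loss figures are all hypothetical and do not reproduce the paper's e-commerce example.

```python
# Illustrative sketch: expected loss per stakeholder as a probability-weighted
# sum over threats. All names and numbers below are hypothetical.
threat_prob = {"sql_injection": 0.05, "ddos": 0.10, "insider_leak": 0.02}  # per year

# loss_if_threat[stakeholder][threat] in dollars per occurrence (assumed)
loss_if_threat = {
    "merchant": {"sql_injection": 200_000, "ddos": 50_000, "insider_leak": 400_000},
    "customer": {"sql_injection": 5_000,   "ddos": 0,      "insider_leak": 20_000},
    "operator": {"sql_injection": 30_000,  "ddos": 80_000, "insider_leak": 60_000},
}

for stakeholder, losses in loss_if_threat.items():
    expected = sum(threat_prob[t] * losses[t] for t in threat_prob)
    print(f"{stakeholder:>9s}: expected annual loss ~ ${expected:,.0f}")
```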

  20. Quantifying the astrocytoma cell response to candidate pharmaceutical from F-ACTIN image analysis.

    PubMed

    Cui, Chi; JaJa, Joseph; Turbyville, Thomas; Beutler, John; Gudla, Prabhakar; Nandy, Kaustav; Lockett, Stephen

    2009-01-01

    The distribution, directionality and motility of the actin fibers control cell shape, affect cell function and are different in cancer versus normal cells. Quantification of actin structural changes is important for further understanding differences between cell types and for elucidation of the effects and dynamics of drug interactions. We have developed an image analysis framework for quantifying F-actin organization patterns in confocal microscope images in response to different candidate pharmaceutical treatments. The main problem solved was to determine which quantitative features to compute from the images that both capture the visually-observed F-actin patterns and correlate with predicted biological outcomes. The resultant numerical features were effective to quantitatively profile the changes in the spatial distribution of F-actin and facilitate the comparison of different pharmaceuticals. The validation for the segmentation was done through visual inspection and correlation to expected biological outcomes. This is the first study quantifying different structural formations of the same protein in intact cells. Preliminary results show uniquely significant increases in cortical F-actin to stress fiber ratio for increasing doses of OSW-1 and Schweinfurthin A(SA) and a less marked increase for cephalostatin 1 derivative (ceph). This increase was not observed for the actin inhibitors: cytochalasin B (cytoB) and Y-27632 (Y). Ongoing studies are further validating the algorithms, elucidating the underlying molecular pathways and will utilize the algorithms for understanding the kinetics of the F-actin changes. Since many anti-cancer drugs target the cytoskeleton, we believe that the quantitative image analysis method reported here will have broad applications to understanding the mechanisms of action of candidate pharmaceuticals. PMID:19963655

  1. Quantifying vertical and horizontal stand structure using terrestrial LiDAR in Pacific Northwest forests

    NASA Astrophysics Data System (ADS)

    Kazakova, Alexandra N.

    Stand-level spatial distribution is a fundamental part of forest structure that influences many ecological processes and ecosystem functions. Vertical and horizontal spatial structure provides key information for forest management. Although horizontal stand complexity can be measured through stem mapping and spatial analysis, assessing vertical complexity within the stand remains a mostly visual and highly subjective process. Tools and techniques in remote sensing, specifically LiDAR, provide three-dimensional datasets that can capture three-dimensional forest stand structure. Although aerial LiDAR (ALS) is the most widespread form of remote sensing for measuring forest structure, it has a high omission rate in dense and structurally complex forests. In this study we used terrestrial LiDAR (TLS) to obtain high-resolution three-dimensional point clouds of plots from stands that vary by density and composition in the second-growth Pacific Northwest forest ecosystem. We used point cloud slicing techniques and object-based image analysis (OBIA) to produce canopy profiles at multiple points along the vertical gradient. At each height point we produced segments that represented canopies or parts of canopies for each tree within the dataset. The resulting canopy segments were further analyzed using landscape metrics to quantify vertical canopy complexity within a single stand. Based on the developed method, we have successfully created a tool that utilizes three-dimensional spatial information to accurately quantify the vertical structure of forest stands. Results show significant differences in the number and the total area of the canopy segments and gap fraction between each vertical slice within and between individual forest management plots. We found a significant relationship between the stand density and composition and the vertical canopy complexity. The methods described in this research make it possible to create horizontal stand profiles at any point along the vertical gradient of forest stands with high frequency, thereby providing ecologists with measures of horizontal and vertical stand structure. Key Words: Terrestrial laser scanning, canopy structure, landscape metrics, aerial laser scanning, lidar, calibration, Pacific Northwest.
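
    A minimal sketch of the slicing idea follows: bin a point cloud by height and compute a gap fraction per vertical slice on a horizontal grid. It is a stand-in for the OBIA segmentation workflow described above; the random point cloud, slice height and grid resolution are assumptions, not the study's data or parameters.

```python
# Slice a point cloud by height and report the gap fraction of each slice.
# The synthetic point cloud and the bin/cell sizes are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
points = rng.uniform([0, 0, 0], [30, 30, 40], size=(50_000, 3))  # x, y, z in metres

slice_height = 2.0   # vertical bin size, m (assumed)
cell_size = 0.5      # horizontal grid resolution, m (assumed)
nx = ny = int(30 / cell_size)

for z_lo in np.arange(0, 40, slice_height):
    sl = points[(points[:, 2] >= z_lo) & (points[:, 2] < z_lo + slice_height)]
    occ = np.zeros((nx, ny), dtype=bool)
    ix = np.clip((sl[:, 0] / cell_size).astype(int), 0, nx - 1)
    iy = np.clip((sl[:, 1] / cell_size).astype(int), 0, ny - 1)
    occ[ix, iy] = True          # grid cells that contain at least one return
    gap_fraction = 1.0 - occ.mean()
    print(f"slice {z_lo:4.1f}-{z_lo + slice_height:4.1f} m: gap fraction = {gap_fraction:.2f}")
```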

  2. Quantifying Digit Force Vector Coordination during Precision Pinch

    PubMed Central

    Marquardt, Tamara L.; Li, Zong-Ming

    2013-01-01

    A methodology was established to investigate the contact mechanics of the thumb and the index finger at the digit-object interface during precision pinch. Two force/torque transducers were incorporated into an apparatus designed to overcome the thickness of each transducer and provide a flexible pinch span for digit placement and force application. To demonstrate the utility of the device, five subjects completed a pinch task with the pulps of their thumb and index finger. Inter-digit force vector coordination was quantified by examining the 1) force vector component magnitudes, 2) resultant force vector magnitudes, 3) coordination angle – the angle formed by the resultant vectors of each digit, 4) direction angles – the angle formed by each vector and the coordinate axes, and 5) center of pressure locations. It was shown that the resultant force magnitude of the index finger exceeded that of the thumb by 0.8 ± 0.3 N and that the coordination angle between the digit resultant force vectors was 160.2 ± 4.6°. The experimental apparatus and analysis methods provide a valuable tool for the quantitative examination of biomechanics and motor control during dexterous manipulation. PMID:24443624
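
    The coordination measures described above reduce to simple vector arithmetic. The sketch below computes the resultant force magnitudes and the angle between the thumb and index-finger force vectors; the two example vectors are made up for illustration, not measured data.

```python
# Resultant magnitudes and inter-digit coordination angle from two force vectors.
import numpy as np

f_thumb = np.array([1.2, 0.4, -3.0])   # N, in the apparatus frame (hypothetical)
f_index = np.array([-1.0, -0.3, 3.6])  # N (hypothetical)

res_thumb, res_index = np.linalg.norm(f_thumb), np.linalg.norm(f_index)
cosang = np.dot(f_thumb, f_index) / (res_thumb * res_index)
coordination_angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

print(f"|F_thumb| = {res_thumb:.2f} N, |F_index| = {res_index:.2f} N")
print(f"coordination angle = {coordination_angle:.1f} deg")  # 180 deg = directly opposed forces
```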

  3. Quantifying Non-Markovianity with Temporal Steering

    NASA Astrophysics Data System (ADS)

    Chen, Shin-Liang; Lambert, Neill; Li, Che-Ming; Miranowicz, Adam; Chen, Yueh-Nan; Nori, Franco

    2016-01-01

    Einstein-Podolsky-Rosen (EPR) steering is a type of quantum correlation which allows one to remotely prepare, or steer, the state of a distant quantum system. While EPR steering can be thought of as a purely spatial correlation, there does exist a temporal analogue, in the form of single-system temporal steering. However, a precise quantification of such temporal steering has been lacking. Here, we show that it can be measured, via semidefinite programing, with a temporal steerable weight, in direct analogy to the recently proposed EPR steerable weight. We find a useful property of the temporal steerable weight in that it is a nonincreasing function under completely positive trace-preserving maps and can be used to define a sufficient and practical measure of strong non-Markovianity.

  4. Using automated comparisons to quantify handwriting individuality.

    PubMed

    Saunders, Christopher P; Davis, Linda J; Buscaglia, JoAnn

    2011-05-01

    The proposition that writing profiles are unique is considered a key premise underlying forensic handwriting comparisons. An empirical study cannot validate this proposition because of the impossibility of observing sample documents written by every individual. The goal of this paper is to illustrate what can be stated about the individuality of writing profiles using a database of handwriting samples and an automated comparison procedure. In this paper, we provide a strategy for bounding the probability of observing two writers with indistinguishable writing profiles (regardless of the comparison methodology used) with a random match probability that can be estimated statistically. We illustrate computation of this bound using a convenience sample of documents and an automated comparison procedure based on Pearson's chi-squared statistic applied to frequency distributions of letter shapes extracted from handwriting samples. We also show how this bound can be used when designing an empirical study of individuality. PMID:21391999
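
    To make the comparison procedure concrete, the sketch below applies a Pearson chi-squared test to two letter-shape frequency distributions arranged as a 2 x k contingency table. The shape categories and counts are invented; the paper's feature-extraction pipeline is not reproduced here.

```python
# Compare two writers' letter-shape frequency distributions with Pearson's
# chi-squared statistic. Categories and counts below are hypothetical.
import numpy as np
from scipy.stats import chi2_contingency

writer_a = np.array([34, 12, 55, 8, 21, 30])  # counts per shape category, document A
writer_b = np.array([29, 25, 40, 15, 18, 33])  # counts per shape category, document B

chi2, p_value, dof, expected = chi2_contingency(np.vstack([writer_a, writer_b]))
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small statistic (large p) means the two profiles are indistinguishable under
# this procedure; repeated over many writer pairs, such comparisons feed the
# random-match-probability bound described above.
```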

  5. Quantifying Irregularity in Pulsating Red Giants

    NASA Astrophysics Data System (ADS)

    Percy, J. R.; Esteves, S.; Lin, A.; Menezes, C.; Wu, S.

    2009-12-01

    Hundreds of red giant variable stars are classified as “type L,” which the General Catalogue of Variable Stars (GCVS) defines as “slow irregular variables of late spectral type...which show no evidence of periodicity, or any periodicity present is very poorly defined....” Self-correlation (Percy and Muhammed 2004) is a simple form of time-series analysis which determines the cycle-to-cycle behavior of a star, averaged over all the available data. It is well suited for analyzing stars which are not strictly periodic. Even for non-periodic stars, it provides a “profile” of the variability, including the average “characteristic time” of variability. We have applied this method to twenty-three L-type variables which have been measured extensively by AAVSO visual observers. We find a continuous spectrum of behavior, from irregular to semiregular.
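
    A minimal sketch of the self-correlation idea follows: for every pair of observations, bin the absolute magnitude difference by time separation and average within each bin. Minima in the profile indicate (semi)periodicity, while a smooth rise to a plateau indicates irregular variability with a characteristic timescale. The observation times and magnitudes below are synthetic, not AAVSO data.

```python
# Self-correlation profile: mean |delta mag| as a function of time separation.
# Synthetic, noisy, roughly periodic data stand in for visual observations.
import numpy as np

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 1000, 400))          # observation times, days (synthetic)
mag = 8.0 + 0.4 * np.sin(2 * np.pi * t / 180.0) + rng.normal(0, 0.1, t.size)

i, j = np.triu_indices(t.size, k=1)             # all observation pairs
dt = t[j] - t[i]
dmag = np.abs(mag[j] - mag[i])

bin_width = 10.0  # days (assumed)
bins = (dt // bin_width).astype(int)
profile = np.array([dmag[bins == b].mean()
                    for b in range(int(dt.max() // bin_width)) if np.any(bins == b)])
print("self-correlation profile (mean |delta mag| per delta-t bin):")
print(np.round(profile[:20], 3))
```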

  6. Quantifying non-Gaussianity for quantum information

    SciTech Connect

    Genoni, Marco G.; Paris, Matteo G. A.

    2010-11-15

    We address the quantification of non-Gaussianity (nG) of states and operations in continuous-variable systems and its use in quantum information. We start by illustrating in detail the properties and the relationships of two recently proposed measures of nG based on the Hilbert-Schmidt distance and the quantum relative entropy (QRE) between the state under examination and a reference Gaussian state. We then evaluate the non-Gaussianities of several families of non-Gaussian quantum states and show that the two measures have the same basic properties and also share the same qualitative behavior in most of the examples taken into account. However, we also show that they introduce a different relation of order; that is, they are not strictly monotone to each other. We exploit the nG measures for states in order to introduce a measure of nG for quantum operations, to assess Gaussification and de-Gaussification protocols, and to investigate in detail the role played by nG in entanglement-distillation protocols. Besides, we exploit the QRE-based nG measure to provide different insight on the extremality of Gaussian states for some entropic quantities such as conditional entropy, mutual information, and the Holevo bound. We also deal with parameter estimation and present a theorem connecting the QRE nG to the quantum Fisher information. Finally, since evaluation of the QRE nG measure requires the knowledge of the full density matrix, we derive some experimentally friendly lower bounds to nG for some classes of states and by considering the possibility of performing on the states only certain efficient or inefficient measurements.

  7. Quantifying covalency and metallicity in correlated compounds undergoing metal-insulator transitions

    NASA Astrophysics Data System (ADS)

    Chainani, Ashish; Yamamoto, Ayako; Matsunami, Masaharu; Eguchi, Ritsuko; Taguchi, Munetaka; Takata, Yasutaka; Takagi, Hidenori; Shin, Shik; Nishino, Yoshinori; Yabashi, Makina; Tamasaku, Kenji; Ishikawa, Tetsuya

    2013-01-01

    The tunability of bonding character in transition-metal compounds controls phase transitions and their fascinating properties such as high-temperature superconductivity, colossal magnetoresistance, spin-charge ordering, etc. However, separating out and quantifying the roles of covalency and metallicity derived from the same set of transition-metal d and ligand p electrons remains a fundamental challenge. In this study, we use bulk-sensitive photoelectron spectroscopy and configuration-interaction calculations for quantifying the covalency and metallicity in correlated compounds. The method is applied to study the first-order temperature- (T-) dependent metal-insulator transitions (MITs) in the cubic pyrochlore ruthenates Tl2Ru2O7 and Hg2Ru2O7. Core-level spectroscopy shows drastic T-dependent modifications which are well explained by including ligand-screening and metallic-screening channels. The core-level metallic-origin features get quenched upon gap formation in valence band spectra, while ionic and covalent components remain intact across the MIT. The results establish temperature-driven Mott-Hubbard MITs in three-dimensional ruthenates and reveal three energy scales: (a) 4d electronic changes occur on the largest (˜eV) energy scale, (b) the band-gap energies/charge gaps (Eg˜160-200 meV) are intermediate, and (c) the lowest-energy scale corresponds to the transition temperature TMIT (˜10 meV), which is also the spin gap energy of Tl2Ru2O7 and the magnetic-ordering temperature of Hg2Ru2O7. The method is general for doping- and T-induced transitions and is valid for V2O3, CrN, La1-xSrxMnO3, La2-xSrxCuO4, etc. The obtained transition-metal-ligand (d-p) bonding energies (V˜45-90 kcal/mol) are consistent with thermochemical data, and with energies of typical heteronuclear covalent bonds such as C-H, C-O, C-N, etc. In contrast, the metallic-screening energies of correlated compounds form a weaker class (V*˜10-40 kcal/mol) but are still stronger than van der Waals and hydrogen bonding. The results identify and quantify the roles of covalency and metallicity in 3d and 4d correlated compounds undergoing metal-insulator transitions.

  8. Structural property of soybean lunasin and development of a method to quantify lunasin in plasma using an optimized immunoassay protocol.

    PubMed

    Dia, Vermont P; Frankland-Searby, Sarah; del Hierro, Francisco Laso; Garcia, Guadalupe; de Mejia, Elvira Gonzalez

    2013-05-01

    Lunasin is a 43-amino acid naturally occurring chemopreventive peptide with demonstrated anti-cancer and anti-inflammatory properties. The objectives of this study were to determine the effect of temperature on the secondary structure of lunasin, to develop a method of isolating lunasin from human plasma using an ion-exchange microspin column and to quantify the amount of lunasin using an optimized enzyme-linked immunosorbent assay. Lunasin was purified using a combination of ion-exchange chromatography, ultrafiltration and gel filtration chromatography. Circular dichroism showed that increased in temperature from 25 to 100 °C resulted in changes on the secondary structure of lunasin and its capability to interact with rabbit polyclonal antibody. Enzyme linked immunosorbent assay showed that lunasin rabbit polyclonal antibody has a titer of 250 and a specific activity of 0.05 mL/μg. A linear response was detected between 16 to 48 ng lunasin per mL (y=0.03x-0.38, R(2)=0.96). The use of diethylaminoethyl microspin column to isolate spiked lunasin in human plasma showed that most lunasin (37.8-46.5%) bound to the column eluted with Tris-HCl buffer, pH 7.5 with a yield up to 76.6%. In conclusion, lunasin can be isolated from human plasma by a simple DEAE microspin column technique and can be quantified using a validated and optimized immunoassay procedure. This method can be used directly to quantify lunasin from plasma in different human and animal studies aiming to determine its bioavailability. PMID:23265496
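
    A small worked example using the calibration reported above (y = 0.03x - 0.38, linear between roughly 16 and 48 ng lunasin per mL): the line is inverted to estimate concentration from an assay response. The response values fed in below are hypothetical, and estimates outside the validated range are flagged rather than reported.

```python
# Invert the reported linear ELISA calibration to estimate lunasin concentration.
def lunasin_ng_per_ml(y, slope=0.03, intercept=-0.38):
    """Invert y = slope * x + intercept; valid only inside the linear range."""
    x = (y - intercept) / slope
    if not (16.0 <= x <= 48.0):
        raise ValueError(f"{x:.1f} ng/mL is outside the validated linear range (16-48 ng/mL)")
    return x

for y in (0.25, 0.55, 1.20):  # hypothetical assay responses
    try:
        print(f"response {y:.2f} -> {lunasin_ng_per_ml(y):.1f} ng/mL")
    except ValueError as err:
        print(f"response {y:.2f} -> {err}")
```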

  9. Thermoplasmonics: quantifying plasmonic heating in single nanowires.

    PubMed

    Herzog, Joseph B; Knight, Mark W; Natelson, Douglas

    2014-02-12

    Plasmonic absorption of light can lead to significant local heating in metallic nanostructures, an effect that defines the subfield of thermoplasmonics and has been leveraged in diverse applications from biomedical technology to optoelectronics. Quantitatively characterizing the resulting local temperature increase can be very challenging in isolated nanostructures. By measuring the optically induced change in resistance of metal nanowires with a transverse plasmon mode, we quantitatively determine the temperature increase in single nanostructures with the dependence on incident polarization clearly revealing the plasmonic heating mechanism. Computational modeling explains the resonant and nonresonant contributions to the optical heating and the dominant pathways for thermal transport. These results, obtained by combining electronic and optical measurements, place a bound on the role of optical heating in prior experiments and suggest design guidelines for engineered structures meant to leverage such effects. PMID:24382140
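
    A back-of-the-envelope sketch of the bolometric idea behind this measurement: convert the optically induced resistance change of a metal nanowire into a temperature rise via its temperature coefficient of resistance (TCR). The resistance values and TCR below are assumptions for illustration, not the paper's numbers.

```python
# Estimate the optically induced temperature rise from a resistance change:
# delta_T ~ delta_R / (TCR * R_dark). All values below are assumed.
R_dark = 150.0          # nanowire resistance without illumination, ohms (assumed)
R_illuminated = 150.9   # resistance under resonant illumination, ohms (assumed)
tcr = 2.0e-3            # effective TCR, 1/K (assumed; thin films are typically
                        # below the bulk-metal value)

delta_T = (R_illuminated - R_dark) / (tcr * R_dark)
print(f"estimated optically induced temperature rise ~ {delta_T:.1f} K")
```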

  10. Quantifying human response capabilities towards tsunami threats at community level

    NASA Astrophysics Data System (ADS)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    Decision makers at the community level need detailed information on tsunami risks in their area. Knowledge of the potential hazard impact, the exposed elements such as people, critical facilities and lifelines, people's coping capacity and recovery potential is crucial to plan precautionary measures for adaptation and to mitigate potential impacts of tsunamis on society and the environment. A crucial point within a people-centred tsunami risk assessment is to quantify the human response capabilities towards tsunami threats. Based on this quantification and its spatial representation in maps, tsunami-affected and safe areas, difficult-to-evacuate areas, evacuation target points and evacuation routes can be assigned and used as an important contribution to, e.g., community-level evacuation planning. A major component in the quantification of human response capabilities towards tsunami impacts is the factor time. The human response capabilities depend on the estimated time of arrival (ETA) of a tsunami, the time until technical or natural warning signs (ToNW) can be received, the reaction time (RT) of the population (human understanding of a tsunami warning and the decision to take appropriate action), the evacuation time (ET, time people need to reach a safe area) and the actual available response time (RsT = ETA - ToNW - RT). If RsT is larger than ET, people in the respective areas are able to reach a safe area and rescue themselves. Critical areas possess RsT values equal to or smaller than ET, and hence people within these areas will be directly affected by a tsunami. Quantifying the factor time is challenging, and an attempt to do so is presented here. The ETA can be derived by analyzing pre-computed tsunami scenarios for a respective area. For ToNW we assume that the early warning center is able to fulfil the Indonesian presidential decree to issue a warning within 5 minutes. RT is difficult to determine, as intrinsic human factors such as educational level, beliefs, tsunami knowledge and experience, among others, play a role. An attempt to quantify this variable under high uncertainty is also presented. Quantifying ET is based on GIS modelling using a Cost Weighted Distance approach. The basic principle is to define the best evacuation path from a given point to the next safe area (shelter location). Here the fastest path from that point to the shelter location has to be found. Thereby the impact of land cover, slope, population density, population age and gender distribution are taken into account, as literature studies show these factors to be highly important. Knowing the fastest path and the distance to the next safe area, together with a spatially distributed pattern of evacuation speed, delivers the time needed from each location to a safe area. By then considering the obtained value for RsT, the coverage area of an evacuation target point (safe area) can be assigned. Incorporating knowledge of the capacity of an evacuation target point, the respective coverage area is refined. Hence areas with weak, moderate and good human response capabilities can be detected. This allows calculation of the potential number of people affected (dead or injured) and the number of people dislocated. First results for Kuta (Bali) for a worst-case tsunami event yield approximately 25 000 people affected when RT = 0 minutes (immediate evacuation on receiving a tsunami warning), rising to 120 000 when RT > ETA (no evacuation action before the tsunami reaches land). Additionally, the fastest evacuation routes to the evacuation target points can be assigned. Areas with weak response capabilities can be designated as priority areas in which to install, for example, additional evacuation target points or to increase tsunami knowledge and awareness to promote a faster reaction time. In particular, analyzing the underlying socio-economic factors that cause deficiencies in responding to a tsunami threat can yield valuable information and directly inform the planning of adaptation measures. Keywords: Community level, Risk and vulnerability assessment, Early warning, Disaster management, Tsunami, Indonesia
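
    The timing logic stated above (RsT = ETA - ToNW - RT, with evacuation feasible when RsT is at least ET) can be transcribed directly. The example values below are placeholders, not results for Kuta; the 5-minute ToNW follows the warning-issuance assumption mentioned in the abstract.

```python
# Available response time and evacuation feasibility per location.
def response_time_budget(eta_min, tonw_min, rt_min, et_min):
    """Return (available response time RsT, True if evacuation is feasible)."""
    rst = eta_min - tonw_min - rt_min
    return rst, rst >= et_min

scenarios = [
    # (ETA, ToNW, RT, ET) in minutes; all values hypothetical
    ("near-shore cell",    (25, 5, 5, 20)),
    ("inland cell",        (25, 5, 5, 12)),
    ("late-reaction cell", (25, 5, 18, 12)),
]
for name, args in scenarios:
    rst, ok = response_time_budget(*args)
    print(f"{name:18s}: RsT = {rst:2d} min -> {'evacuable' if ok else 'critical area'}")
```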

  11. Quantifying Visual Similarity in Clinical Iconic Graphics

    PubMed Central

    Payne, Philip R.O.; Starren, Justin B.

    2005-01-01

    Objective: The use of icons and other graphical components in user interfaces has become nearly ubiquitous. The interpretation of such icons is based on the assumption that different users perceive the shapes similarly. At the most basic level, different users must agree on which shapes are similar and which are different. If this similarity can be measured, it may be usable as the basis to design better icons. Design: The purpose of this study was to evaluate a novel method for categorizing the visual similarity of graphical primitives, called Presentation Discovery, in the domain of mammography. Six domain experts were given 50 common textual mammography findings and asked to draw how they would represent those findings graphically. Nondomain experts sorted the resulting graphics into groups based on their visual characteristics. The resulting groups were then analyzed using traditional statistics and hypothesis discovery tools. Strength of agreement was evaluated using computational simulations of sorting behavior. Measurements: Sorter agreement was measured at both the individual graphical and concept-group levels using a novel simulation-based method. “Consensus clusters” of graphics were derived using a hierarchical clustering algorithm. Results: The multiple sorters were able to reliably group graphics into similar groups that strongly correlated with underlying domain concepts. Visual inspection of the resulting consensus clusters indicated that graphical primitives that could be informative in the design of icons were present. Conclusion: The method described provides a rigorous alternative to intuitive design processes frequently employed in the design of icons and other graphical interface components. PMID:15684136

  12. Quantifying the Electrocatalytic Turnover of Vitamin B12-Mediated Dehalogenation on Single Soft Nanoparticles.

    PubMed

    Cheng, Wei; Compton, Richard G

    2016-02-01

    We report the electrocatalytic dehalogenation of trichloroethylene (TCE) by single soft nanoparticles in the form of Vitamin B12-containing droplets. We quantify the turnover number of the catalytic reaction at the single soft nanoparticle level. The kinetic data shows that the binding of TCE with the electro-reduced vitamin in the Co(I) oxidation state is chemically reversible. PMID:26806226

  13. A graph-theoretic method to quantify the airline route authority

    NASA Technical Reports Server (NTRS)

    Chan, Y.

    1979-01-01

    The paper introduces a graph-theoretic method to quantify the legal statements in a route certificate, which specifies the airline routing restrictions. All the authorized nonstop and multistop routes, including the shortest-time routes, can be obtained, and the method suggests profitable route-structure alternatives to airline analysts. This method to quantify the C.A.B. route authority was programmed in a software package, Route Improvement Synthesis and Evaluation, and demonstrated in a case study with a commercial airline. The study showed the utility of this technique in suggesting route alternatives and the possibility of improvements in the U.S. route system.
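
    As a hedged illustration of the graph-theoretic idea, the sketch below encodes authorized nonstop segments as a directed graph and derives a fewest-segment multistop routing by breadth-first search. The city codes and segments are invented and are not from the case study or the referenced software package.

```python
# Authorized nonstop segments as a directed graph; fewest-segment routing by BFS.
from collections import deque

authorized_segments = {   # hypothetical certificate: origin -> authorized nonstop destinations
    "DEN": ["ORD", "DFW"],
    "ORD": ["JFK", "BOS"],
    "DFW": ["JFK"],
    "JFK": ["BOS"],
    "BOS": [],
}

def fewest_segment_route(graph, origin, destination):
    """Breadth-first search for the authorized routing with the fewest segments."""
    queue, seen = deque([[origin]]), {origin}
    while queue:
        path = queue.popleft()
        if path[-1] == destination:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no authorized routing exists

print(fewest_segment_route(authorized_segments, "DEN", "BOS"))  # ['DEN', 'ORD', 'BOS']
```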

  14. Quantifying oil filtration effects on bearing life

    SciTech Connect

    Needelman, W.M.; Zaretsky, E.V.

    1991-06-01

    Rolling-element bearing life is influenced by the number, size, and material properties of particles entering the Hertzian contact of the rolling element and raceway. In general, rolling-element bearing life increases with increasing level of oil filtration. Based upon test results, two equations are presented which allow for the adjustment of bearing L{sub 10} or catalog life based upon oil filter rating. It is recommended that where no oil filtration is used catalog life be reduced by 50 percent.

  15. Quantifying oil filtration effects on bearing life

    NASA Technical Reports Server (NTRS)

    Needelman, William M.; Zaretsky, Erwin V.

    1991-01-01

    Rolling-element bearing life is influenced by the number, size, and material properties of particles entering the Hertzian contact of the rolling element and raceway. In general, rolling-element bearing life increases with increasing level of oil filtration. Based upon test results, two equations are presented which allow for the adjustment of bearing L(sub 10) or catalog life based upon oil filter rating. It is recommended that where no oil filtration is used catalog life be reduced by 50 percent.

  16. Quantified EEG in different G situations

    NASA Astrophysics Data System (ADS)

    de Metz, K.; Quadens, O.; De Graeve, M.

    The electrical activity of the brain (EEG) has been recorded during parabolic flights in both trained astronauts and untrained volunteers. Fast Fourier transform (FFT) analysis of the EEG activity evidenced more asymmetry between the two brain hemispheres in the subjects who suffered from motion sickness than in the others. However, such an FFT classification does not lead to a discrimination between deterministic and stochastic events. Therefore, a first attempt was made to calculate the dimensionality of "chaotic attractors" in the EEG patterns as a function of the different g-epochs of one parabola. Very preliminary results are given here.

  17. Stretching DNA to quantify nonspecific protein binding

    NASA Astrophysics Data System (ADS)

    Goyal, Sachin; Fountain, Chandler; Dunlap, David; Family, Fereydoon; Finzi, Laura

    2012-07-01

    Nonspecific binding of regulatory proteins to DNA can be an important mechanism for target search and storage. This seems to be the case for the lambda repressor protein (CI), which maintains lysogeny after infection of E. coli. CI binds specifically at two distant regions along the viral genome and induces the formation of a repressive DNA loop. However, single-molecule imaging as well as thermodynamic and kinetic measurements of CI-mediated looping show that CI also binds to DNA nonspecifically and that this mode of binding may play an important role in maintaining lysogeny. This paper presents a robust phenomenological approach using a recently developed method based on the partition function, which allows calculation of the number of proteins bound nonspecifically to DNA from measurements of the DNA extension as a function of applied force. This approach was used to analyze several cycles of extension and relaxation of λ DNA performed at several CI concentrations to measure the dissociation constant for nonspecific binding of CI (˜100 nM), and to obtain a measurement of the induced DNA compaction (˜10%) by CI.

  18. Quantifying the origin of metallic glass formation

    NASA Astrophysics Data System (ADS)

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a `nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless `fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*.
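
    The expression described above depends exponentially on trg and m. As a hedged illustration of how such a fit can be obtained, the sketch below assumes a log-linear form, log10 τX* ≈ c0 + c1·trg + c2·m, and recovers the coefficients from synthetic data by least squares; neither this form's coefficients nor the data correspond to the paper's fitted expression or survey database.

```python
# Recover an assumed log-linear dependence of log10(tau_X*) on t_rg and m by
# ordinary least squares. Data and coefficients are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(2)
t_rg = rng.uniform(0.45, 0.70, 40)    # reduced glass transition temperature Tg/TL
m = rng.uniform(25, 70, 40)           # fragility parameter
# synthetic "measurements": better glass formers (higher t_rg, lower m) get longer tau
log_tau = -8.0 + 20.0 * t_rg - 0.05 * m + rng.normal(0, 0.3, 40)

X = np.column_stack([np.ones_like(t_rg), t_rg, m])
coef, *_ = np.linalg.lstsq(X, log_tau, rcond=None)
pred = X @ coef
rms = float(np.sqrt(np.mean((pred - log_tau) ** 2)))
print("intercept, slope(t_rg), slope(m):", np.round(coef, 2))
print("rms residual in log10(tau):", round(rms, 2))
```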

  19. Stretching DNA to quantify nonspecific protein binding.

    PubMed

    Goyal, Sachin; Fountain, Chandler; Dunlap, David; Family, Fereydoon; Finzi, Laura

    2012-07-01

    Nonspecific binding of regulatory proteins to DNA can be an important mechanism for target search and storage. This seems to be the case for the lambda repressor protein (CI), which maintains lysogeny after infection of E. coli. CI binds specifically at two distant regions along the viral genome and induces the formation of a repressive DNA loop. However, single-molecule imaging as well as thermodynamic and kinetic measurements of CI-mediated looping show that CI also binds to DNA nonspecifically and that this mode of binding may play an important role in maintaining lysogeny. This paper presents a robust phenomenological approach using a recently developed method based on the partition function, which allows calculation of the number of proteins bound nonspecifically to DNA from measurements of the DNA extension as a function of applied force. This approach was used to analyze several cycles of extension and relaxation of λ DNA performed at several CI concentrations to measure the dissociation constant for nonspecific binding of CI (~100 nM), and to obtain a measurement of the induced DNA compaction (~10%) by CI. PMID:23005450

  20. Quantifying Transmission Investment in Malaria Parasites

    PubMed Central

    Greischar, Megan A.; Mideo, Nicole; Read, Andrew F.; Bjørnstad, Ottar N.

    2016-01-01

    Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment. PMID:26890485

  1. Quantifying the origin of metallic glass formation

    PubMed Central

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a ‘nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless ‘fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*. PMID:26786966

  2. Quantifying Transmission Investment in Malaria Parasites.

    PubMed

    Greischar, Megan A; Mideo, Nicole; Read, Andrew F; Bjørnstad, Ottar N

    2016-02-01

    Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment. PMID:26890485

  3. Quantifying the origin of metallic glass formation.

    PubMed

    Johnson, W L; Na, J H; Demetriou, M D

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a 'nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless 'fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*. PMID:26786966

  4. Quantifying Differential Rotation Across the Main Sequence

    NASA Astrophysics Data System (ADS)

    Ule, Nicholas M.

    We have constructed a sample of eight stars from the Kepler field covering a broad range of spectral types, from F7 to K3. These stars have well-defined rotation rates and show evidence of differential rotation in their lightcurves. In order to robustly determine differential rotation, the inclination of a star must first be known. Thus, we have obtained moderate-resolution spectra of these targets and measured their projected rotational velocities (v sin i), which are then used to determine inclinations. The photometric variations often seen in stars are created by star spots, which we model in order to determine differential rotation. We have adapted the starspotz model developed by Croll (2006) with an asexual genetic algorithm to measure the strength of differential rotation (described with the parameter k). The photometric data was broken into 167 segments which were modeled for 6–8 values of k, with each model producing 50,000+ solutions. The value of k with a solution which produced the closest fit to the data was determined to be the most correct value of k for that lightcurve segment. With these data we also performed signal analysis, which indicated the presence of long-lived, latitudinally dependent active regions on stars. For our eight targets we successfully determined differential rotation rates and evaluated those values in relation to stellar temperature and rotational period. Coupled with previously published values for nine additional targets, we find no temperature relation with differential rotation, but we do find a strong trend with rotation rates.
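
    The inclination step described above amounts to comparing v sin i with the equatorial velocity implied by the rotation period and the stellar radius, sin i = (v sin i)/v_eq with v_eq = 2πR/P. The sketch below works one such example; the input values (and the need for an assumed radius) are illustrative, not values for the Kepler targets in the study.

```python
# Stellar inclination from v sin i, rotation period and an assumed radius.
import math

R_SUN_KM = 6.957e5  # solar radius in km

def inclination_deg(vsini_kms, period_days, radius_rsun):
    """i from sin i = (v sin i) / v_eq, with v_eq = 2*pi*R / P."""
    v_eq = 2.0 * math.pi * radius_rsun * R_SUN_KM / (period_days * 86_400.0)  # km/s
    sin_i = vsini_kms / v_eq
    if sin_i > 1.0:
        raise ValueError("v sin i exceeds the implied equatorial velocity; "
                         "check the period or the assumed radius")
    return math.degrees(math.asin(sin_i))

# hypothetical inputs: vsini = 3.0 km/s, P = 12 d, R = 0.95 R_sun
print(f"i ~ {inclination_deg(vsini_kms=3.0, period_days=12.0, radius_rsun=0.95):.0f} deg")
```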

  5. Quantifying heat losses using aerial thermography

    SciTech Connect

    Haigh, G.A.; Pritchard, S.E.

    1980-01-01

    A theoretical model is described for calculating flat roof total heat losses and thermal conductances from aerial infrared data. Three empirical methods for estimating convective losses are described. The disagreement between the methods shows that they are prone to large (20%) errors, and that the survey should be carried out in low wind speeds, in order to minimize the effect of these errors on the calculation of total heat loss. The errors associated with knowledge of ground truth data are discussed for a high emissivity roof and three sets of environmental conditions. It is shown that the error in the net radiative loss is strongly dependent on the error in measuring the broad-band radiation incident on the roof. This is minimized for clear skies, but should be measured. Accurate knowledge of roof emissivity and the radiation reflected from the roof is shown to be less important. Simple techniques are described for measuring all three factors. Using these techniques in good conditions it should be possible to measure total heat losses to within 15%.

  6. Quantifying uncertainty in material damage from vibrational data

    SciTech Connect

    Butler, T.; Huhtala, A.; Juntunen, M.

    2015-02-15

    The response of a vibrating beam to a force depends on many physical parameters including those determined by material properties. Damage caused by fatigue or cracks results in local reductions in stiffness parameters and may drastically alter the response of the beam. Data obtained from the vibrating beam are often subject to uncertainties and/or errors typically modeled using probability densities. The goal of this paper is to estimate and quantify the uncertainty in damage modeled as a local reduction in stiffness using uncertain data. We present various frameworks and methods for solving this parameter determination problem. We also describe a mathematical analysis to determine and compute useful output data for each method. We apply the various methods in a specified sequence that allows us to interface the various inputs and outputs of these methods in order to enhance the inferences drawn from the numerical results obtained from each method. Numerical results are presented using both simulated and experimentally obtained data from physically damaged beams.

  7. Research of a boundary condition quantifiable correction method in the assembly homogenization

    SciTech Connect

    Peng, L. H.; Liu, Z. H.; Zhao, J.; Li, W. H.

    2012-07-01

    The methods and codes currently used in assembly homogenization calculations mostly adopt reflective boundary conditions. The influences of real boundary conditions on the assembly homogenized parameters were analyzed. They were summarized into four quantifiable effects, and mathematical expressions were then obtained under a linearization hypothesis. Through the calculation of a test model, it was found that the result was close to the transport calculation result when the four quantifiable boundary effects were considered. This method would greatly improve the precision of a core design code that uses assembly homogenization methods, without much increase in computing time. (authors)

  8. Quantifying the biodiversity value of tropical primary, secondary, and plantation forests

    PubMed Central

    Barlow, J.; Gardner, T. A.; Araujo, I. S.; Ávila-Pires, T. C.; Bonaldo, A. B.; Costa, J. E.; Esposito, M. C.; Ferreira, L. V.; Hawes, J.; Hernandez, M. I. M.; Hoogmoed, M. S.; Leite, R. N.; Lo-Man-Hung, N. F.; Malcolm, J. R.; Martins, M. B.; Mestre, L. A. M.; Miranda-Santos, R.; Nunes-Gutjahr, A. L.; Overal, W. L.; Parry, L.; Peters, S. L.; Ribeiro-Junior, M. A.; da Silva, M. N. F.; da Silva Motta, C.; Peres, C. A.

    2007-01-01

    Biodiversity loss from deforestation may be partly offset by the expansion of secondary forests and plantation forestry in the tropics. However, our current knowledge of the value of these habitats for biodiversity conservation is limited to very few taxa, and many studies are severely confounded by methodological shortcomings. We examined the conservation value of tropical primary, secondary, and plantation forests for 15 taxonomic groups using a robust and replicated sample design that minimized edge effects. Different taxa varied markedly in their response to patterns of land use in terms of species richness and the percentage of species restricted to primary forest (varying from 5% to 57%), yet almost all between-forest comparisons showed marked differences in community structure and composition. Cross-taxon congruence in response patterns was very weak when evaluated using abundance or species richness data, but much stronger when using metrics based upon community similarity. Our results show that, whereas the biodiversity indicator group concept may hold some validity for several taxa that are frequently sampled (such as birds and fruit-feeding butterflies), it fails for those exhibiting highly idiosyncratic responses to tropical land-use change (including highly vagile species groups such as bats and orchid bees), highlighting the problems associated with quantifying the biodiversity value of anthropogenic habitats. Finally, although we show that areas of native regeneration and exotic tree plantations can provide complementary conservation services, we also provide clear empirical evidence demonstrating the irreplaceable value of primary forests. PMID:18003934

  9. Quantifying changes in groundwater level and chemistry in Shahrood, northeastern Iran

    NASA Astrophysics Data System (ADS)

    Ajdary, Khalil; Kazemi, Gholam A.

    2014-03-01

    Temporal changes in the quantity and chemical status of groundwater resources must be accurately quantified to aid sustainable management of aquifers. Monitoring data show that the groundwater level in the Shahrood alluvial aquifer, northeastern Iran, continuously declined from 1993 to 2009, falling 11.4 m in 16 years. This constitutes a loss of 216 million m3 from the aquifer's stored groundwater reserve. Overexploitation and reduction in rainfall intensified the declining trend. In contrast, the reduced abstraction rate, the result of reduced borehole productivity (related to the reduction in saturated-zone thickness over time), slowed down the declining trend. Groundwater salinity varied substantially, showing a minor rising trend: over the same 16-year period, increases on the order of 24% for electrical conductivity, 12.4% for major ions, and 9.9% for pH were recorded. This research shows that the declining groundwater-level trend was not interrupted by fluctuations in rainfall and does not necessarily lead to water-quality deterioration. The water-level drop is greater near the aquifer's recharging boundary, while greater rates of salinity rise occur around the end of groundwater flow lines. Also, fresher groundwater experiences a greater rate of salinity increase. These findings are of significance for predicting the groundwater level and salinity of exhausted aquifers.
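
    As a back-of-the-envelope check (not from the paper), the reported decline and storage loss can be related through the standard relation dV = Sy * A * dh, where Sy is the specific yield and A the aquifer area. The specific yield used below is an assumed, illustrative value only.

```python
# Sketch only: relating water-table decline to loss of stored groundwater.
drawdown_m = 11.4          # reported 1993-2009 decline
volume_loss_m3 = 216e6     # reported loss from the stored reserve

sy_times_area = volume_loss_m3 / drawdown_m         # Sy * A, in m^2
assumed_specific_yield = 0.05                        # illustrative only
implied_area_km2 = sy_times_area / assumed_specific_yield / 1e6
print(f"Sy*A = {sy_times_area:.3e} m^2, implied area ~ {implied_area_km2:.0f} km^2")
```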

  10. Quantifying the Consistency of Scientific Databases

    PubMed Central

    Šubelj, Lovro; Bajec, Marko; Mileva Boshkoska, Biljana; Kastrin, Andrej; Levnajić, Zoran

    2015-01-01

    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time, we have become able to study science itself scientifically. This is enabled by the massive amounts of data on scientific publications that are increasingly becoming available. The data are contained in several databases, such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We find that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in the mutual consistency of different databases, which we interpret as recipes for future bibliometric studies. PMID:25984946
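
    A minimal sketch (not the paper's network analysis) of the underlying idea of mutual-consistency comparison: score each pair of databases by how well their sets of recorded publication identifiers overlap. The paper relies on a richer complex-network framework; the Jaccard overlap and identifiers below are illustrative only.

```python
# Sketch only: crude pairwise "consistency" between databases via set overlap.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 1.0

# Hypothetical record identifiers.
databases = {
    "DB-A": {"p1", "p2", "p3", "p5"},
    "DB-B": {"p1", "p2", "p4"},
    "DB-C": {"p2", "p3", "p4", "p5"},
}
names = sorted(databases)
for i, x in enumerate(names):
    for y in names[i + 1:]:
        print(x, y, round(jaccard(databases[x], databases[y]), 2))
```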

  11. Quantifying uncertainties in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Patlakas, Platon; Galanis, George; Kallos, George

    2015-04-01

    The constant rise of wind energy production and its subsequent penetration of global energy markets during the last decades have led to the selection of new sites, which poses various types of problems. Such problems arise from the variability and uncertainty of wind speed. Studying the lower and upper tails of the wind speed distribution helps quantify these uncertainties. Such approaches, focused on extreme wind conditions or on periods below the energy production threshold, are necessary for better management of operations. Toward this direction, different methodologies are presented for the credible estimation of potential non-frequent/extreme values of these environmental conditions. The approaches used take into consideration the structural design of the wind turbines over their lifespan, turbine failures, the time needed for repairs, and the energy production distribution. In this work, a multi-parametric approach for studying extreme wind speed values is discussed based on tools of Extreme Value Theory. In particular, the study focuses on extreme wind speed return periods and on the persistence of no energy production, based on a 10-year hindcast dataset produced with a weather modeling system. More specifically, two methods (Annual Maxima and Peaks Over Threshold) were used to estimate extreme wind speeds and their recurrence intervals. Additionally, two further methodologies (intensity given duration and duration given intensity, both based on the Annual Maxima method) were applied to calculate the duration of extreme events, combined with their intensity and frequency. The results show that the proposed approaches converge, at least on the main findings, for each case. It is also remarkable that, despite the moderate wind speed climate of the area, several consecutive days of no energy production are observed.
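
    A minimal sketch (not the authors' implementation) of the Annual Maxima approach mentioned above: fit a Generalized Extreme Value distribution to yearly maximum wind speeds and read off a 50-year return level. The wind speed series is synthetic; a Peaks Over Threshold analysis would instead fit a Generalized Pareto distribution to exceedances over a high threshold.

```python
# Sketch only: Annual Maxima / GEV fit and a return-level estimate.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
# Hypothetical hourly wind speeds (m/s) for 10 years, one row per year.
hourly = rng.weibull(2.0, size=(10, 8760)) * 8.0
annual_maxima = hourly.max(axis=1)

c, loc, scale = genextreme.fit(annual_maxima)      # shape, location, scale
return_period = 50.0                                # years
return_level = genextreme.ppf(1.0 - 1.0 / return_period, c, loc=loc, scale=scale)
print(f"estimated {return_period:.0f}-year wind speed: {return_level:.1f} m/s")
```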

  12. Beyond compliance using environmental, health and safety management information systems (EMISs) to provide quantified competitive advantages

    SciTech Connect

    Schroeder, J.V.; Mayer, G.

    1999-07-01

    In the last 20 years, federal, state and local regulations have provided regulatory incentives for industry to better manage environmental, health and safety (EHS) practices. For voluntary EHS management practices to move beyond compliance and continue improving, specific, quantifiable benefits must result. That is, companies must achieve some competitive advantage from implementing EHS improvements that are considered voluntary. Recently, many private companies and public agencies have been giving significant consideration to implementing an EHS management information system (EMIS). Currently considered voluntary, the automation of EHS data collection, storage, retrieval and reporting is subject to the same benefit expectations as other EHS improvements. The benefits of an EMIS typically stem from a reduction in either direct or indirect costs. Direct costs, consisting primarily of labor hours, permit fees, disposal costs, etc., are definable and easy to quantify. Indirect costs, which are comprised of reduced risks and liabilities, are less easily quantifiable. In fact, many have abandoned hope of ever quantifying expected benefits from indirect costs, and simply lump all indirect benefits into a qualitative, catch-all category called intangible benefits. However, by statistically analyzing individual risk events over an expected project life, anticipated benefits can be objectively and accurately quantified. Through the use of a case study, this paper describes the process of quantifying direct and indirect benefits resulting from the implementation of an EMIS. The paper describes the application of a statistical model to estimate indirect benefits and demonstrates how the results of the benefit quantification can be used to make sound business decisions based on a required rate of return/return on investment.
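
    A minimal sketch (not the paper's statistical model) of how indirect benefits of this kind are commonly quantified: treat each risk event as a probability-times-cost term, take the reduction in expected annual loss as the indirect benefit, and fold it into a simple net-present-value / return-on-investment screen. All probabilities, costs, and rates below are hypothetical.

```python
# Sketch only: expected annual loss avoided, then a simple NPV screen.
risk_events = [
    # (annual probability without EMIS, with EMIS, cost if the event occurs)
    (0.10, 0.04, 250_000),   # e.g., reportable release and cleanup
    (0.30, 0.15, 40_000),    # e.g., missed reporting deadline, fines
    (0.05, 0.02, 600_000),   # e.g., lost-time injury litigation
]
expected_annual_benefit = sum((p0 - p1) * cost for p0, p1, cost in risk_events)

direct_annual_savings = 80_000      # e.g., reduced labor for data handling
capital_cost = 400_000
discount_rate, project_life_years = 0.08, 10

npv = -capital_cost + sum(
    (expected_annual_benefit + direct_annual_savings) / (1 + discount_rate) ** t
    for t in range(1, project_life_years + 1)
)
print(f"expected annual indirect benefit: ${expected_annual_benefit:,.0f}")
print(f"NPV over {project_life_years} years: ${npv:,.0f}")
```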

  13. The Physics of Equestrian Show Jumping

    ERIC Educational Resources Information Center

    Stinner, Art

    2014-01-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this…

  14. The Language of Show Biz: A Dictionary.

    ERIC Educational Resources Information Center

    Sergel, Sherman Louis, Ed.

    This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to express…

  15. The Physics of Equestrian Show Jumping