Science.gov

Sample records for quantified results show

  1. Different methods to quantify Listeria monocytogenes biofilms cells showed different profile in their viability.

    PubMed

    Winkelströter, Lizziane Kretli; De Martinis, Elaine C P

    2015-03-01

    Listeria monocytogenes is a foodborne pathogen able to adhere to and form biofilms on several materials commonly present in food processing plants. The aim of this study was to evaluate the resistance of Listeria monocytogenes attached to an abiotic surface, after treatment with sanitizers, by culture method, microscopy and Quantitative Real Time Polymerase Chain Reaction (qPCR). Biofilms of L. monocytogenes were obtained on stainless steel coupons immersed in Brain Heart Infusion Broth, under agitation at 37 °C for 24 h. The methods selected for this study were based on plate count, microscopic count with the aid of viability dyes (CTC-DAPI), and qPCR. Results of the culture method showed that peroxyacetic acid was efficient to kill sessile L. monocytogenes populations, while sodium hypochlorite was only partially effective to kill attached L. monocytogenes (p < 0.05). When viability dyes (CTC/DAPI) combined with fluorescence microscopy and qPCR were used, lower counts were found after treatments (p < 0.05). Selective quantification of viable cells of L. monocytogenes by qPCR using EMA revealed that the pre-treatment with EMA was not appropriate since it also inhibited amplification of DNA from live cells by ca. 2 log. Thus, the use of CTC counts was the best method to count viable cells in biofilms. PMID:26221112
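
    A note on scale: under the usual assumption of ~100% qPCR amplification efficiency, a shift of about 3.32 quantification cycles (Cq) corresponds to one log10 of template, which is how a "ca. 2 log" suppression such as the EMA effect above would be read off qPCR data. A minimal sketch with hypothetical Cq values (not data from this study):

      # Illustrative only: converting a qPCR quantification-cycle (Cq) shift into an
      # approximate log10 difference in target copies, assuming ~100% amplification
      # efficiency (one cycle ~ a factor of 2). The Cq values below are hypothetical.
      import math

      def log10_difference(cq_treated, cq_control, efficiency=1.0):
          """Approximate log10 change in template implied by a Cq shift."""
          fold_per_cycle = 1.0 + efficiency          # 2.0 at 100% efficiency
          delta_cq = cq_treated - cq_control         # later Cq = less template
          return -delta_cq * math.log10(fold_per_cycle)

      # A ~6.6-cycle delay corresponds to roughly a 2-log10 suppression,
      # similar in size to the EMA effect on live cells described above.
      print(round(log10_difference(cq_treated=28.6, cq_control=22.0), 2))  # ~ -1.99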

  2. 13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. DETAIL VIEW OF BUTTRESS 4 SHOWING THE RESULTS OF POOR CONSTRUCTION WORK. THOUGH NOT A SERIOUS STRUCTURAL DEFICIENCY, THE 'HONEYCOMB' TEXTURE OF THE CONCRETE SURFACE WAS THE RESULT OF INADEQUATE TAMPING AT THE TIME OF THE INITIAL 'POUR'. - Hume Lake Dam, Sequoia National Forest, Hume, Fresno County, CA

  3. Quantifying IOHDR brachytherapy underdosage resulting from an incomplete scatter environment

    SciTech Connect

    Raina, Sanjay; Avadhani, Jaiteerth S.; Oh, Moonseong; Malhotra, Harish K.; Jaggernauth, Wainwright; Kuettel, Michael R.; Podgorsak, Matthew B. E-mail: matthew.podgorsak@roswellpark.org

    2005-04-01

    Purpose: Most brachytherapy planning systems are based on a dose calculation algorithm that assumes an infinite scatter environment surrounding the target volume and applicator. In most clinical situations, dosimetric errors from this assumption are negligible. However, in intraoperative high-dose-rate brachytherapy (IOHDR) where treatment catheters are typically laid either directly on a tumor bed or within applicators that may have little or no scatter material above them, the lack of scatter from one side of the applicator can result in underdosage during treatment. This study was carried out to investigate the magnitude of this underdosage. Methods: IOHDR treatment geometries were simulated using a solid water phantom beneath an applicator with varying amounts of bolus material on the top and sides of the applicator to account for missing tissue. Treatment plans were developed for 3 different treatment surface areas (4 x 4, 7 x 7, 12 x 12 cm²), each with prescription points located at 3 distances (0.5 cm, 1.0 cm, and 1.5 cm) from the source dwell positions. Ionization measurements were made with a liquid-filled ionization chamber linear array with a dedicated electrometer and data acquisition system. Results: Measurements showed that the magnitude of the underdosage varies from about 8% to 13% of the prescription dose as the prescription depth is increased from 0.5 cm to 1.5 cm. This treatment error was found to be independent of the irradiated area and strongly dependent on the prescription distance. Furthermore, for a given prescription depth, measurements in planes parallel to an applicator at distances up to 4.0 cm from the applicator plane showed that the dose delivery error is equal in magnitude throughout the target volume. Conclusion: This study demonstrates the magnitude of underdosage in IOHDR treatments delivered in a geometry that may not result in a full scatter environment around the applicator. This implies that the target volume and, specifically, the prescription

  4. Breast vibro-acoustography: initial results show promise

    PubMed Central

    2012-01-01

    Introduction Vibro-acoustography (VA) is a recently developed imaging modality that is sensitive to the dynamic characteristics of tissue. It detects low-frequency harmonic vibrations in tissue that are induced by the radiation force of ultrasound. Here, we have investigated applications of VA for in vivo breast imaging. Methods A recently developed combined mammography-VA system for in vivo breast imaging was tested on female volunteers, aged 25 years or older, with suspected breast lesions on their clinical examination. After mammography, a set of VA scans was acquired by the experimental device. In a masked assessment, VA images were evaluated independently by 3 reviewers who identified mass lesions and calcifications. The diagnostic accuracy of this imaging method was determined by comparing the reviewers' responses with clinical data. Results We collected images from 57 participants: 7 were used for training and 48 for evaluation of diagnostic accuracy (images from 2 participants were excluded because of unexpected imaging artifacts). In total, 16 malignant and 32 benign lesions were examined. Specificity for diagnostic accuracy was 94% or higher for all 3 reviewers, but sensitivity varied (69% to 100%). All reviewers were able to detect 97% of masses, but sensitivity for detection of calcification was lower (≤ 72% for all reviewers). Conclusions VA can be used to detect various breast abnormalities, including calcifications and benign and malignant masses, with relatively high specificity. VA technology may lead to a new clinical tool for breast imaging applications. PMID:23021305
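
    For readers unfamiliar with how per-reviewer diagnostic accuracy is tallied against a clinical reference standard, a minimal sketch follows; the lesion labels below are hypothetical, not the study data:

      # Minimal sketch (not the study's code): per-reviewer sensitivity and specificity
      # computed against a clinical reference standard. Labels are hypothetical.
      def sensitivity_specificity(reviewer_calls, reference):
          """Both inputs are lists of booleans: True = malignant/positive."""
          tp = sum(r and t for r, t in zip(reviewer_calls, reference))
          tn = sum((not r) and (not t) for r, t in zip(reviewer_calls, reference))
          fp = sum(r and (not t) for r, t in zip(reviewer_calls, reference))
          fn = sum((not r) and t for r, t in zip(reviewer_calls, reference))
          return tp / (tp + fn), tn / (tn + fp)

      # Example with made-up calls for 10 lesions (4 malignant, 6 benign):
      reference = [True, True, True, True, False, False, False, False, False, False]
      reviewer  = [True, True, True, False, False, False, False, False, False, True]
      sens, spec = sensitivity_specificity(reviewer, reference)
      print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # 0.75, 0.83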

  5. Quantifying the offensive sequences that result in goals in elite futsal matches.

    PubMed

    Sarmento, Hugo; Bradley, Paul; Anguera, M Teresa; Polido, Tiago; Resende, Rui; Campaniço, Jorge

    2016-04-01

    The aim of this study was to quantify the type of offensive sequences that result in goals in elite futsal. Thirty competitive games in the Spanish Primera Division de Sala were analysed using computerised notation analysis for patterns of play that resulted in goals. More goals were scored in positional attack (42%) and from set pieces (27%) compared to other activities. The number of defence to offense "transitions" (n = 45) and the start of offensive plays due to the rules of the game (n = 45) were the most common type of sequences that resulted in goals compared to other patterns of play. The central offensive zonal areas were the most common for shots on goal, with 73% of all goals scored from these areas of the pitch compared to defensive and wide zones. The foot was the main part of the body involved in scoring (n = 114). T-pattern analysis of offensive sequences revealed regular patterns of play, which are common in goal scoring opportunities in futsal and are typical movement patterns in this sport. The data demonstrate common offensive sequences and movement patterns related to goals in elite futsal and this could provide important information for the development of physical and technical training drills that replicate important game situations. PMID:26183125

  6. Quantifying Uncertainty in Model Predictions for the Pliocene (Plio-QUMP): Initial results

    USGS Publications Warehouse

    Pope, J.O.; Collins, M.; Haywood, A.M.; Dowsett, H.J.; Hunter, S.J.; Lunt, D.J.; Pickering, S.J.; Pound, M.J.

    2011-01-01

    Examination of the mid-Pliocene Warm Period (mPWP; ~3.3 to 3.0 Ma BP) provides an excellent opportunity to test the ability of climate models to reproduce warm climate states, thereby assessing our confidence in model predictions. To do this it is necessary to relate the uncertainty in model simulations of mPWP climate to uncertainties in projections of future climate change. The uncertainties introduced by the model can be estimated through the use of a Perturbed Physics Ensemble (PPE). Developing on the UK Met Office Quantifying Uncertainty in Model Predictions (QUMP) Project, this paper presents the results from an initial investigation using the end members of a PPE in a fully coupled atmosphere-ocean model (HadCM3) running with appropriate mPWP boundary conditions. Prior work has shown that the unperturbed version of HadCM3 may underestimate mPWP sea surface temperatures at higher latitudes. Initial results indicate that neither the low sensitivity nor the high sensitivity simulations produce unequivocally improved mPWP climatology relative to the standard. Whilst the high sensitivity simulation was able to reconcile up to 6 °C of the data/model mismatch in sea surface temperatures in the high latitudes of the Northern Hemisphere (relative to the standard simulation), it did not produce a better prediction of global vegetation than the standard simulation. Overall the low sensitivity simulation was degraded compared to the standard and high sensitivity simulations in all aspects of the data/model comparison. The results have shown that a PPE has the potential to explore weaknesses in mPWP modelling simulations which have been identified by geological proxies, but that a 'best fit' simulation will more likely come from a full ensemble in which simulations that contain the strengths of the two end member simulations shown here are combined. © 2011 Elsevier B.V.

  7. Quantifying viruses and bacteria in wastewater—Results, interpretation methods, and quality control

    USGS Publications Warehouse

    Francy, Donna S.; Stelzer, Erin A.; Bushon, Rebecca N.; Brady, Amie M.G.; Mailot, Brian E.; Spencer, Susan K.; Borchardt, Mark A.; Elber, Ashley G.; Riddell, Kimberly R.; Gellner, Terry M.

    2011-01-01

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes small enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bacterial indicators Escherichia coli (E. coli) and fecal coliforms are the required microbial measures of effluents for wastewater-discharge permits. Information is needed on the effectiveness of MBRs in removing human enteric viruses from wastewaters, particularly as compared to conventional wastewater treatment before and after disinfection. A total of 73 regular and 28 quality-control (QC) samples were collected at three MBR and two conventional wastewater plants in Ohio during 23 regular and 3 QC sampling trips in 2008-10. Samples were collected at various stages in the treatment processes and analyzed for bacterial indicators E. coli, fecal coliforms, and enterococci by membrane filtration; somatic and F-specific coliphage by the single agar layer (SAL) method; adenovirus, enterovirus, norovirus GI and GII, rotavirus, and hepatitis A virus by molecular methods; and viruses by cell culture. While addressing the main objective of the study-comparing removal of viruses and bacterial indicators in MBR and conventional plants-it was realized that work was needed to identify data analysis and quantification methods for interpreting enteric virus and QC data. Therefore, methods for quantifying viruses, qualifying results, and applying QC data to interpretations are described in this report. During each regular sampling trip, samples were collected (1) before conventional or MBR treatment (post-preliminary), (2) after secondary or MBR treatment (post-secondary or post-MBR), (3) after tertiary treatment (one conventional plant only), and (4) after disinfection (post-disinfection). Glass-wool fiber filtration was used to concentrate enteric viruses from large volumes, and small

  8. Results From Mars Show Electrostatic Charging of the Mars Pathfinder Sojourner Rover

    NASA Technical Reports Server (NTRS)

    Kolecki, Joseph C.; Siebert, Mark W.

    1998-01-01

    …flight data. Electrical charging of vehicles and, one day, astronauts moving across the Martian surface may have moderate to severe consequences if large potential differences develop. The observations from Sojourner point to just such a possibility. It is desirable to quantify these results. The various lander/rover missions being planned for the upcoming decade provide the means for doing so. They should, therefore, carry instruments that will not only measure vehicle charging but characterize all the natural and induced electrical phenomena occurring in the environment and assess their impact on future missions.

  9. Comparison of some results of program SHOW with other solar hot water computer programs

    NASA Astrophysics Data System (ADS)

    Young, M. F.; Baughn, J. W.

    The SHOW (solar hot water) computer program is capable of simulating both one-tank and two-tank designs of thermosiphon and pumped solar domestic hot water systems. SHOW differs in a number of ways from other programs, the most notable of which is the emphasis on a thermal/hydraulic model of the stratified storage tank. The predicted performance for a typical two-tank pumped system, computed by program SHOW, is compared with results computed using F-CHART and TRNSYS. The results show fair to good agreement between the various computer programs when comparing the annual percent solar contributions. SHOW is also used to compute the expected performance of a two-tank thermosiphon system and to compare its performance to the two-tank pumped system.

  10. Gun shows and gun violence: fatally flawed study yields misleading results.

    PubMed

    Wintemute, Garen J; Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A

    2010-10-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled "The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas" outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors' prior research. The study should not be used as evidence in formulating gun policy. PMID:20724672

  11. Gun Shows and Gun Violence: Fatally Flawed Study Yields Misleading Results

    PubMed Central

    Hemenway, David; Webster, Daniel; Pierce, Glenn; Braga, Anthony A.

    2010-01-01

    A widely publicized but unpublished study of the relationship between gun shows and gun violence is being cited in debates about the regulation of gun shows and gun commerce. We believe the study is fatally flawed. A working paper entitled “The Effect of Gun Shows on Gun-Related Deaths: Evidence from California and Texas” outlined this study, which found no association between gun shows and gun-related deaths. We believe the study reflects a limited understanding of gun shows and gun markets and is not statistically powered to detect even an implausibly large effect of gun shows on gun violence. In addition, the research contains serious ascertainment and classification errors, produces results that are sensitive to minor specification changes in key variables and in some cases have no face validity, and is contradicted by 1 of its own authors’ prior research. The study should not be used as evidence in formulating gun policy. PMID:20724672

  12. Preliminary Results In Quantifying The Climatic Impact Forcing Factors Around 3 Ma Ago

    NASA Astrophysics Data System (ADS)

    Fluteau, F.; Ramstein, G.; Duringer, P.; Schuster, M.; Tiercelin, J. J.

    What exactly is the control of climate change on the development of the Hominids? Is it possible to quantify such changes? And which forcing factors create these changes? We use a General Circulation Model to investigate the climate sensitivity to 3 different forcing factors: the uplift of the East African Rift, the extent of the Chad Lake (more than twenty times its present-day surface) and, ultimately, with a coupled ocean-atmosphere GCM, the effect of Indonesian throughflow changes. To achieve these goals, we need a multidisciplinary group to assess the evolution of the Rift and the extent of the Lake. We prescribe these different boundary conditions to the GCM and use a biome model to assess the vegetation changes. In this presentation we will only focus on the impacts of the Rift uplift and the Chad Lake on atmospheric circulation and the monsoon, and their environmental consequences in terms of vegetation changes.

  13. Showing Value in Newborn Screening: Challenges in Quantifying the Effectiveness and Cost-Effectiveness of Early Detection of Phenylketonuria and Cystic Fibrosis

    PubMed Central

    Grosse, Scott D.

    2015-01-01

    Decision makers sometimes request information on the cost savings, cost-effectiveness, or cost-benefit of public health programs. In practice, quantifying the health and economic benefits of population-level screening programs such as newborn screening (NBS) is challenging. It requires that one specify the frequencies of health outcomes and events, such as hospitalizations, for a cohort of children with a given condition under two different scenarios—with or without NBS. Such analyses also assume that everything else, including treatments, is the same between groups. Lack of comparable data for representative screened and unscreened cohorts that are exposed to the same treatments following diagnosis can result in either under- or over-statement of differences. Accordingly, the benefits of early detection may be understated or overstated. This paper illustrates these common problems through a review of past economic evaluations of screening for two historically significant conditions, phenylketonuria and cystic fibrosis. In both examples qualitative judgments about the value of prompt identification and early treatment to an affected child were more influential than specific numerical estimates of lives or costs saved. PMID:26702401

  14. Nanotribology Results Show that DNA Forms a Mechanically Resistant 2D Network in Metaphase Chromatin Plates

    PubMed Central

    Gállego, Isaac; Oncins, Gerard; Sisquella, Xavier; Fernàndez-Busquets, Xavier; Daban, Joan-Ramon

    2010-01-01

    In a previous study, we found that metaphase chromosomes are formed by thin plates, and here we have applied atomic force microscopy (AFM) and friction force measurements at the nanoscale (nanotribology) to analyze the properties of these planar structures in aqueous media at room temperature. Our results show that high concentrations of NaCl and EDTA and extensive digestion with protease and nuclease enzymes cause plate denaturation. Nanotribology studies show that native plates under structuring conditions (5 mM Mg2+) have a relatively high friction coefficient (μ ≈ 0.3), which is markedly reduced when high concentrations of NaCl or EDTA are added (μ ≈ 0.1). This lubricant effect can be interpreted considering the electrostatic repulsion between DNA phosphate groups and the AFM tip. Protease digestion increases the friction coefficient (μ ≈ 0.5), but the highest friction is observed when DNA is cleaved by micrococcal nuclease (μ ≈ 0.9), indicating that DNA is the main structural element of plates. Whereas nuclease-digested plates are irreversibly damaged after the friction measurement, native plates can absorb kinetic energy from the AFM tip without suffering any damage. These results suggest that plates are formed by a flexible and mechanically resistant two-dimensional network which allows the safe storage of DNA during mitosis. PMID:21156137

  15. Quantifying entanglement

    NASA Astrophysics Data System (ADS)

    Thapliyal, Ashish Vachaspati

    Entanglement is an essential element of quantum mechanics. The aim of this work is to explore various properties of entanglement from the viewpoints of both physics and information science, thus providing a unique picture of entanglement from an interdisciplinary point of view. The focus of this work is on quantifying entanglement as a resource. We start with bipartite states, proposing a new measure of bipartite entanglement called entanglement of assistance, showing that bound entangled states of rank two cannot exist, exploring the number of members required in the ensemble achieving the entanglement of formation and the possibility of bound entangled states that are negative under partial transposition (NPT bound entangled states). For multipartite states we introduce the notions of reducibilities and equivalences under entanglement non-increasing operations and we study the relations between various reducibilities and equivalences such as exact and asymptotic LOCC, asymptotic LOCCq, cLOCC, LOc, etc. We use this new language to attempt to quantify entanglement for multiple parties. We introduce the idea of entanglement span and minimal entanglement generating set and entanglement coefficients associated with it which are the entanglement measures, thus proposing a multicomponent measure of entanglement for three or more parties. We show that the class of Schmidt decomposable states have only GHZM or Cat-like entanglement. Further we introduce the class of multiseparable states for quantification of their entanglement and prove that they are equivalent to the Schmidt decomposable states, and thus have only Cat-like entanglement. We further explore the conditions under which LOCO equivalences are possible for multipartite isentropic states. We define Cat-distillability, EPRB-distillability and distillability for multipartite mixed states and show that distillability implies EPRB-distillability. Further we show that all non-factorizable pure states are Cat

  16. Meta-analysis of aspirin use and risk of lung cancer shows notable results.

    PubMed

    Hochmuth, Friederike; Jochem, Maximilian; Schlattmann, Peter

    2016-07-01

    Aspirin is a promising agent for chemoprevention of lung cancer. We assessed the association of aspirin use and the development of lung cancer, with a focus on heterogeneity between studies. Databases were searched for relevant studies until September 2014. Studies evaluating the relationship of aspirin use and incidence of lung cancer were considered. Relative risks (RR) were extracted and a pooled estimate was calculated. Heterogeneity was assessed by the I² measure, random-effects models, and finite-mixture models. Sources of heterogeneity were investigated using a meta-regression. A decreased risk of lung cancer was found including 20 studies [RR=0.87, 95% confidence interval (CI): 0.79-0.95] on the basis of a random-effects model. Strong heterogeneity was observed (τ²=0.0258, I²=74.4%). As a result, two subpopulations of studies were identified on the basis of a mixture model. The first subpopulation (42%) has an average RR of 0.64. The remaining subpopulation (58%) shows an RR of 1.04. Different results were found for case-control (RR=0.74, 95% CI: 0.60-0.90) and cohort studies (RR=0.99, 95% CI: 0.93-1.06) in a stratified analysis. In a subgroup analysis, use of aspirin was associated with a decreased risk of non-small-cell lung cancer in case-control studies (RR=0.74; 95% CI: 0.58-0.94). At first glance, our meta-analysis shows an average protective effect. A second glance indicates that there is strong heterogeneity. This leads to a subpopulation with considerable benefit and another subpopulation with no benefit. For further investigations, it is important to identify populations that benefit from aspirin use. PMID:26067033
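
    As background for the pooled estimate and heterogeneity statistics quoted above, a hedged sketch of DerSimonian-Laird random-effects pooling of relative risks with τ² and I² follows. The authors additionally fit finite-mixture models and a meta-regression, which are not reproduced here; the input RRs and confidence intervals below are made up:

      # Hedged sketch of DerSimonian-Laird random-effects pooling of relative risks
      # on the log scale, with the I^2 heterogeneity statistic. Not the authors'
      # code; the example RRs and confidence intervals are hypothetical.
      import math

      def pool_random_effects(rrs, ci_lowers, ci_uppers):
          log_rr = [math.log(r) for r in rrs]
          # Approximate the standard error from a 95% CI on the log scale.
          se = [(math.log(u) - math.log(l)) / (2 * 1.96)
                for l, u in zip(ci_lowers, ci_uppers)]
          w = [1 / s**2 for s in se]                       # fixed-effect weights
          fixed = sum(wi * yi for wi, yi in zip(w, log_rr)) / sum(w)
          q = sum(wi * (yi - fixed)**2 for wi, yi in zip(w, log_rr))
          k = len(rrs)
          c = sum(w) - sum(wi**2 for wi in w) / sum(w)
          tau2 = max(0.0, (q - (k - 1)) / c)               # between-study variance
          i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
          w_star = [1 / (s**2 + tau2) for s in se]         # random-effects weights
          pooled = sum(wi * yi for wi, yi in zip(w_star, log_rr)) / sum(w_star)
          se_pooled = math.sqrt(1 / sum(w_star))
          return (math.exp(pooled),
                  math.exp(pooled - 1.96 * se_pooled),
                  math.exp(pooled + 1.96 * se_pooled),
                  tau2, i2)

      rr, lo, hi, tau2, i2 = pool_random_effects(
          rrs=[0.62, 0.70, 1.01, 1.05],
          ci_lowers=[0.45, 0.55, 0.90, 0.92],
          ci_uppers=[0.85, 0.89, 1.13, 1.20])
      print(f"pooled RR={rr:.2f} ({lo:.2f}-{hi:.2f}), tau^2={tau2:.4f}, I^2={i2:.1f}%")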

  17. Image analysis techniques: Used to quantify and improve the precision of coatings testing results

    SciTech Connect

    Duncan, D.J.; Whetten, A.R.

    1993-12-31

    Coating evaluations often specify tests to measure performance characteristics rather than coating physical properties. These evaluation results are often very subjective. A new tool, Digital Video Image Analysis (DVIA), is successfully being used for two automotive evaluations: cyclic (scab) corrosion and the gravelometer (chip) test. An experimental design was done to evaluate variability and interactions among the instrumental factors. This analysis method has proved to be an order of magnitude more sensitive and reproducible than the current evaluations. Coating characteristics that previously had no way to be expressed can now be described and measured. For example, DVIA chip evaluations can differentiate how much damage was done to the topcoat, to the primer, or even to the metal. DVIA, with or without magnification, has the capability to become the quantitative measuring tool for several other coating evaluations, such as T-bends, wedge bends, acid etch analysis, coating defects, observing cure, defect formation or elimination over time, etc.

  18. Long-Term Trial Results Show No Mortality Benefit from Annual Prostate Cancer Screening

    Cancer.gov

    Thirteen-year follow-up data from the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial show higher incidence but similar mortality among men screened annually with the prostate-specific antigen (PSA) test and digital rectal examination.

  19. Comparison of some results of program SHOW with other solar hot water computer programs

    NASA Astrophysics Data System (ADS)

    Young, M. F.; Baughn, J. W.

    Subroutines and the driver program for the simulation code SHOW (solar hot water) for solar thermosiphon systems are discussed, and simulations are compared with predictions by the F-CHART and TRNSYS codes. SHOW has the driver program MAIN, which defines the system control logic for choosing the appropriate system subroutine for analysis. Ten subroutines are described, which account for the solar system physical parameters, the weather data, the manufacturer-supplied system specifications, mass flow rates, pumped systems, total transformed radiation, load use profiles, stratification in storage, an electric water heater, and economic analyses. The three programs are employed to analyze a thermosiphon installation in Sacramento with two storage tanks. TRNSYS and SHOW were in agreement and lower than F-CHART for annual predictions, although significantly more computer time was necessary to make TRNSYS converge.

  20. Data for behavioral results and brain regions showing a time effect during pair-association retrieval.

    PubMed

    Jimura, Koji; Hirose, Satoshi; Wada, Hiroyuki; Yoshizawa, Yasunori; Imai, Yoshio; Akahane, Masaaki; Machida, Toru; Shirouzu, Ichiro; Koike, Yasuharu; Konishi, Seiki

    2016-09-01

    The current data article provides behavioral and neuroimaging data for the research article "Relatedness-dependent rapid development of brain activity in anterior temporal cortex during pair-association retrieval" (Jimura et al., 2016) [1]. Behavioral performance is provided in a table. Fig. 2 of the article is based on this table. Brain regions showing time effect are provided in a table. A statistical activation map for the time effect is shown in Fig. 3C of the article. PMID:27508239

  1. Aortic emboli show surprising size dependent predilection for cerebral arteries: Results from computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Carr, Ian; Schwartz, Robert; Shadden, Shawn

    2012-11-01

    Cardiac emboli can have devastating consequences if they enter the cerebral circulation, and are the most common cause of embolic stroke. Little is known about relationships of embolic origin/density/size to cerebral events, as these relationships are difficult to observe. To better understand stroke risk from cardiac and aortic emboli, we developed a computational model to track emboli from the heart to the brain. Patient-specific models of the human aorta and arteries to the brain were derived from CT angiography from 10 MHIF patients. Blood flow was modeled by the Navier-Stokes equations using pulsatile inflow at the aortic valve, and physiologic Windkessel models at the outlets. Particulate was injected at the aortic valve and tracked using modified Maxey-Riley equations with a wall collision model. Results demonstrate aortic emboli that entered the cerebral circulation through the carotid or vertebral arteries were localized to specific locations of the proximal aorta. The percentage of released particles embolic to the brain markedly increased with particle size from 0 to ~1-1.5 mm in all patients. Larger particulate became less likely to traverse the cerebral vessels. These findings are consistent with sparse literature based on transesophageal echo measurements. This work was supported in part by the National Science Foundation, award number 1157041.

  2. QUantifying the Aerosol Direct and Indirect Effect over Eastern Mediterranean from Satellites (QUADIEEMS): Overview and preliminary results

    NASA Astrophysics Data System (ADS)

    Georgoulias, Aristeidis K.; Zanis, Prodromos; Pöschl, Ulrich; Kourtidis, Konstantinos A.; Alexandri, Georgia; Ntogras, Christos; Marinou, Eleni; Amiridis, Vassilis

    2013-04-01

    An overview and preliminary results from the research implemented within the framework of QUADIEEMS project are presented. For the scopes of the project, satellite data from five sensors (MODIS aboard EOS TERRA, MODIS aboard EOS AQUA, TOMS aboard Earth Probe, OMI aboard EOS AURA and CALIOP aboard CALIPSO) are used in conjunction with meteorological data from ECMWF ERA-interim reanalysis and data from a global chemical-aerosol-transport model as well as simulation results from a regional climate model (RegCM4) coupled with a simplified aerosol scheme. QUADIEEMS focuses on Eastern Mediterranean [30°N-45°N, 17.5°E-37.5°E], a region situated at the crossroad of different aerosol types and thus ideal for the investigation of the direct and indirect effects of various aerosol types at a high spatial resolution. The project consists of five components. First, raw data from various databases are acquired, analyzed and spatially homogenized with the outcome being a high resolution (0.1x0.1 degree) and a moderate resolution (1.0x1.0 degree) gridded dataset of aerosol and cloud optical properties. The marine, dust and anthropogenic fraction of aerosols over the region is quantified making use of the homogenized dataset. Regional climate model simulations with REGCM4/aerosol are also implemented for the greater European region for the period 2000-2010 at a resolution of 50 km. REGCM4's ability to simulate AOD550 over Europe is evaluated. The aerosol-cloud relationships, for sub-regions of Eastern Mediterranean characterized by the presence of predominant aerosol types, are examined. The aerosol-cloud relationships are also examined taking into account the relative position of aerosol and cloud layers as defined by CALIPSO observations. Within the final component of the project, results and data that emerged from all the previous components are used in satellite-based parameterizations in order to quantify the direct and indirect (first) radiative effect of the different

  3. Quantifying geological processes on Mars-Results of the high resolution stereo camera (HRSC) on Mars express

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Tirsch, D.; Hauber, E.; Ansan, V.; Di Achille, G.; Erkeling, G.; Fueten, F.; Head, J.; Kleinhans, M. G.; Mangold, N.; Michael, G. G.; Neukum, G.; Pacifici, A.; Platz, T.; Pondrelli, M.; Raack, J.; Reiss, D.; Williams, D. A.; Adeli, S.; Baratoux, D.; de Villiers, G.; Foing, B.; Gupta, S.; Gwinner, K.; Hiesinger, H.; Hoffmann, H.; Deit, L. Le; Marinangeli, L.; Matz, K.-D.; Mertens, V.; Muller, J. P.; Pasckert, J. H.; Roatsch, T.; Rossi, A. P.; Scholten, F.; Sowe, M.; Voigt, J.; Warner, N.

    2015-07-01

    This review summarizes the use of High Resolution Stereo Camera (HRSC) data as an instrumental tool and its application in the analysis of geological processes and landforms on Mars during the last 10 years of operation. High-resolution digital elevations models on a local to regional scale are the unique strength of the HRSC instrument. The analysis of these data products enabled quantifying geological processes such as effusion rates of lava flows, tectonic deformation, discharge of water in channels, formation timescales of deltas, geometry of sedimentary deposits as well as estimating the age of geological units by crater size-frequency distribution measurements. Both the quantification of geological processes and the age determination allow constraining the evolution of Martian geologic activity in space and time. A second major contribution of HRSC is the discovery of episodicity in the intensity of geological processes on Mars. This has been revealed by comparative age dating of volcanic, fluvial, glacial, and lacustrine deposits. Volcanic processes on Mars have been active over more than 4 Gyr, with peak phases in all three geologic epochs, generally ceasing towards the Amazonian. Fluvial and lacustrine activity phases span the time from the Noachian until the Amazonian, but detailed studies show that they have been interrupted by multiple and long-lasting phases of quiescence. Glacial activity also shows discrete phases of enhanced intensity that may correlate with periods of increased spin-axis obliquity. The episodicity of geological processes like volcanism, erosion, and glaciation on Mars reflects close correlation between surface processes and endogenic activity as well as orbit variations and changing climate conditions.

  4. Quantifying microwear on experimental Mistassini quartzite scrapers: preliminary results of exploratory research using LSCM and scale-sensitive fractal analysis.

    PubMed

    Stemp, W James; Lerner, Harry J; Kristant, Elaine H

    2013-01-01

    Although previous use-wear studies involving quartz and quartzite have been undertaken by archaeologists, these are comparatively few in number. Moreover, there has been relatively little effort to quantify use-wear on stone tools made from quartzite. The purpose of this article is to determine the effectiveness of a measurement system, laser scanning confocal microscopy (LSCM), to document the surface roughness or texture of experimental Mistassini quartzite scrapers used on two different contact materials (fresh and dry deer hide). As in previous studies using LSCM on chert, flint, and obsidian, this exploratory study incorporates a mathematical algorithm that permits the discrimination of surface roughness based on comparisons at multiple scales. Specifically, we employ measures of relative area (RelA) coupled with the F-test to discriminate used from unused stone tool surfaces, as well as surfaces of quartzite scrapers used on dry and fresh deer hide. Our results further demonstrate the effect of raw material variation on use-wear formation and its documentation using LSCM and RelA. PMID:22688593

  5. Quantifying contextuality.

    PubMed

    Grudka, A; Horodecki, K; Horodecki, M; Horodecki, P; Horodecki, R; Joshi, P; Kłobus, W; Wójcik, A

    2014-03-28

    Contextuality is central both to the foundations of quantum theory and to novel information processing tasks. Despite some recent proposals, it still faces a fundamental problem: how to quantify its presence? In this work, we provide a universal framework for quantifying contextuality. We pursue two complementary approaches: (i) the bottom-up approach, where we introduce a communication game, which grasps the phenomenon of contextuality in a quantitative manner; (ii) the top-down approach, where we just postulate two measures, relative entropy of contextuality and contextuality cost, analogous to existing measures of nonlocality (a special case of contextuality). We then match the two approaches by showing that the measure emerging from the communication scenario turns out to be equal to the relative entropy of contextuality. Our framework allows for the quantitative, resource-type comparison of completely different games. We give analytical formulas for the proposed measures for some contextual systems, showing in particular that the Peres-Mermin game is by an order of magnitude more contextual than that of Klyachko et al. Furthermore, we explore properties of these measures such as monotonicity or additivity. PMID:24724629

  6. Seeking to quantify the ferromagnetic-to-antiferromagnetic interface coupling resulting in exchange bias with various thin-film conformations

    SciTech Connect

    Hsiao, C. H.; Wang, S.; Ouyang, H.; Desautels, R. D.; Lierop, J. van; Lin, K. W.

    2014-08-07

    Ni₃Fe/(Ni, Fe)O thin films with bilayer and nanocrystallite dispersion morphologies are prepared with a dual ion beam deposition technique permitting precise control of nanocrystallite growth, composition, and admixtures. A bilayer morphology provides a Ni₃Fe-to-NiO interface, while the dispersion films have different mixtures of Ni₃Fe, NiO, and FeO nanocrystallites. Using detailed analyses of high resolution transmission electron microscopy images with Multislice simulations, the nanocrystallites' structures and phases are determined, and the intermixing between the Ni₃Fe, NiO, and FeO interfaces is quantified. From field-cooled hysteresis loops, the exchange bias loop shift from spin interactions at the interfaces is determined. With similar interfacial molar ratios of FM-to-AF, we find the exchange bias field essentially unchanged. However, when the interfacial ratio of FM to AF was FM rich, the exchange bias field increased. Since the FM/AF interface ‘contact’ areas in the nanocrystallite dispersion films are larger than that of the bilayer film, and the nanocrystallite dispersions exhibit larger FM-to-AF interfacial contributions to the magnetism, we attribute the changes in the exchange bias to be from increases in the interfacial segments that suffer defects (such as vacancies and bond distortions), which also affects the coercive fields.
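
    For context, the exchange bias field and coercivity are conventionally read off a field-cooled hysteresis loop from its two zero-magnetization field crossings; a minimal sketch with hypothetical numbers (not measurements from this work) follows:

      # Illustrative only: the exchange bias field and coercivity are commonly read
      # off a field-cooled hysteresis loop from its two zero-magnetization crossings
      # (h_c1 on the descending branch, h_c2 on the ascending branch). The numbers
      # below are hypothetical, not data from this study.
      def exchange_bias_and_coercivity(h_c1, h_c2):
          h_eb = (h_c1 + h_c2) / 2.0        # loop shift along the field axis
          h_c = abs(h_c1 - h_c2) / 2.0      # half the loop width
          return h_eb, h_c

      h_eb, h_c = exchange_bias_and_coercivity(h_c1=-250.0, h_c2=150.0)  # fields in Oe
      print(h_eb, h_c)  # -50.0 200.0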

  7. Comparison of gas analyzers for quantifying eddy covariance fluxes- results from an irrigated alfalfa field in Davis, CA

    NASA Astrophysics Data System (ADS)

    Chan, S.; Biraud, S.; Polonik, P.; Billesbach, D.; Hanson, C. V.; Bogoev, I.; Conrad, B.; Alstad, K. P.; Burba, G. G.; Li, J.

    2015-12-01

    The eddy covariance technique requires simultaneous, rapid measurements of wind components and scalars (e.g., water vapor, carbon dioxide) to calculate the vertical exchange due to turbulent processes. The technique has been used extensively as a non-intrusive means to quantify land-atmosphere exchanges of mass and energy. A variety of sensor technologies and gas sampling designs have been tried. Gas concentrations are commonly measured using infrared or laser absorption spectroscopy. Open-path sensors directly sample the ambient environment but suffer when the sample volume is obstructed (e.g., rain, dust). Closed-path sensors utilize pumps to draw air into the analyzer through inlet tubes which can attenuate the signal. Enclosed-path sensors are a newer, hybrid of the open- and closed-path designs where the sensor is mounted in the environment and the sample is drawn through a short inlet tube with short residence time. Five gas analyzers were evaluated as part of this experiment: open-path LI-COR 7500A, enclosed-path LI-COR 7200, closed-path Picarro G2311-f, open-path Campbell Scientific IRGASON, and enclosed-path Campbell Scientific EC155. We compared the relative performance of the gas analyzers over an irrigated alfalfa field in Davis, CA. The field was host to a range of ancillary measurements including below-ground sensors, and a weighing lysimeter. The crop was flood irrigated and harvested monthly. To compare sensors, we evaluated the half-hour mean and variance of gas concentrations (or mole densities). Power spectra for the gas analyzers and turbulent fluxes (from a common sonic anemometer) were also calculated and analyzed. Eddy covariance corrections will be discussed as they relate to sensor design (e.g., density corrections, signal attenuation).
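
    The core of the technique compared here is a covariance of fluctuations computed over an averaging block. A minimal sketch follows, using synthetic 10 Hz data and omitting the coordinate rotation, despiking, density (WPL), and spectral corrections that a real processing chain applies:

      # Minimal sketch of the core eddy covariance calculation for one half-hour
      # block: the flux is the mean product of the fluctuations of vertical wind (w)
      # and a scalar (c), e.g. CO2 density. Data are synthetic; variable names are
      # placeholders, not outputs of the instruments named above.
      import numpy as np

      rng = np.random.default_rng(0)
      n = 18000                          # 30 min at 10 Hz
      w = 0.3 * rng.standard_normal(n)   # vertical wind (m s-1), synthetic
      c = 15.0 + 0.5 * w + 0.2 * rng.standard_normal(n)   # scalar (mmol m-3), synthetic

      w_prime = w - w.mean()             # block-averaged fluctuations
      c_prime = c - c.mean()
      flux = np.mean(w_prime * c_prime)  # covariance = turbulent flux (mmol m-2 s-1)
      print(f"w'c' = {flux:.4f}")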

  8. Survey results show that adults are willing to pay higher insurance premiums for generous coverage of specialty drugs.

    PubMed

    Romley, John A; Sanchez, Yuri; Penrod, John R; Goldman, Dana P

    2012-04-01

    Generous coverage of specialty drugs for cancer and other diseases may be valuable not only for sick patients currently using these drugs, but also for healthy people who recognize the potential need for them in the future. This study estimated how healthy people value insurance coverage of specialty drugs, defined as high-cost drugs that treat cancer and other serious health conditions like multiple sclerosis, by quantifying willingness to pay via a survey. US adults were estimated to be willing to pay an extra $12.94 on average in insurance premiums per month for generous specialty-drug coverage--in effect, $2.58 for every dollar in out-of-pocket costs that they would expect to pay with a less generous insurance plan. Given the value that people assign to generous coverage of specialty drugs, having high cost sharing on these drugs seemingly runs contrary to what people value in their health insurance. PMID:22492884

  9. Prognostic significance of intraoperative macroscopic serosal invasion finding when it shows a discrepancy in pathologic result gastric cancer

    PubMed Central

    Kang, Sang Yull; Park, Ho Sung

    2016-01-01

    Purpose Depth of wall invasion is an important prognostic factor in patients with gastric cancer, whereas the prognostic significance of intraoperative macroscopic serosal invasion (mSE) findings remain unclear when they show a discrepancy in pathologic findings. This study, therefore, assessed the prognostic significance of mSE. Methods Data from cohort of 2,835 patients with resectable gastric cancer who underwent surgery between 1990 and 2010 were retrospectively reviewed. Results The overall accuracy of mSE and pathologic results was 83.4%. The accuracy of mSE was 75.5% in pT2. On the other hand, the accuracy of pT3 dropped to 24.5%. According to mSE findings (+/–), the 5-year disease-specific survival (DSS) rate differed significantly in patients with pT2 (+; 74.2% vs. –; 92.0%), pT3 (+; 76.7% vs. –; 91.8%) and pT4a (+; 51.3% vs. –; 72.8%) (P < 0.001 each), but not in patients with T1 tumor. Multivariate analysis showed that mSE findings (hazard ratio [HR], 2.275; 95% confidence interval [CI], 1.148–4.509), tumor depth (HR, 6.894; 95% CI, 2.325–20.437), nodal status (HR, 5.206; 95% CI, 2.298–11.791), distant metastasis (HR, 2.881; 95% CI, 1.388–6.209), radical resection (HR, 2.002; 95% CI, 1.017–3.940), and lymphatic invasion (HR, 2.713; 95% CI, 1.424–5.167) were independent predictors of 5-year DSS rate. Conclusion We observed considerable discrepancies between macroscopic and pathologic diagnosis of serosal invasion. However, macroscopic diagnosis of serosal invasion was independently prognostic of 5-year DSS. It suggests that because the pathologic results could not be perfect and the local inflammatory change with mSE(+) could affect survival, a combination of mSE(+/–) and pathologic depth may be predictive of prognosis in patients with gastric cancer. PMID:27186569

  10. Not all Surface Waters show a Strong Relation between DOC and Hg Species: Results from an Adirondack Mountain Watershed

    NASA Astrophysics Data System (ADS)

    Burns, D. A.; Schelker, J.; Murray, K. R.; Brigham, M. E.; Aiken, G.

    2009-12-01

    in ponded areas, and (3) the effects of the widely varying seasonal temperature and snow cover on the rates of microbial processes such as the decomposition of soil organic matter and methylation of Hg. These results emphasize that not all watersheds show simple linear relations between DOC and Hg species on an annual basis, and provide a caution that measurements such as the optical properties of waters are not always a strong surrogate for Hg.

  11. Quantifying resilience

    USGS Publications Warehouse

    Allen, Craig R.; Angeler, David G.

    2016-01-01

    Several frameworks to operationalize resilience have been proposed. A decade ago, a special feature focused on quantifying resilience was published in the journal Ecosystems (Carpenter, Westley & Turner 2005). The approach there was towards identifying surrogates of resilience, but few of the papers proposed quantifiable metrics. Consequently, many ecological resilience frameworks remain vague and difficult to quantify, a problem that this special feature aims to address. However, considerable progress has been made during the last decade (e.g. Pope, Allen & Angeler 2014). Although some argue that resilience is best kept as an unquantifiable, vague concept (Quinlan et al. 2016), to be useful for managers, there must be concrete guidance regarding how and what to manage and how to measure success (Garmestani, Allen & Benson 2013; Spears et al. 2015). Ideas such as ‘resilience thinking’ have utility in helping stakeholders conceptualize their systems, but provide little guidance on how to make resilience useful for ecosystem management, other than suggesting an ambiguous, Goldilocks approach of being just right (e.g. diverse, but not too diverse; connected, but not too connected). Here, we clarify some prominent resilience terms and concepts, introduce and synthesize the papers in this special feature on quantifying resilience and identify core unanswered questions related to resilience.

  12. Native trees show conservative water use relative to invasive trees: results from a removal experiment in a Hawaiian wet forest

    PubMed Central

    Cavaleri, Molly A.; Ostertag, Rebecca; Cordell, Susan; Sack, Lawren

    2014-01-01

    While the supply of freshwater is expected to decline in many regions in the coming decades, invasive plant species, often ‘high water spenders’, are greatly expanding their ranges worldwide. In this study, we quantified the ecohydrological differences between native and invasive trees and also the effects of woody invasive removal on plot-level water use in a heavily invaded mono-dominant lowland wet tropical forest on the Island of Hawaii. We measured transpiration rates of co-occurring native and invasive tree species with and without woody invasive removal treatments. Twenty native Metrosideros polymorpha and 10 trees each of three invasive species, Cecropia obtusifolia, Macaranga mappa and Melastoma septemnervium, were instrumented with heat-dissipation sap-flux probes in four 100 m2 plots (two invaded, two removal) for 10 months. In the invaded plots, where both natives and invasives were present, Metrosideros had the lowest sap-flow rates per unit sapwood, but the highest sap-flow rates per whole tree, owing to its larger mean diameter than the invasive trees. Stand-level water use within the removal plots was half that of the invaded plots, even though the removal of invasives caused a small but significant increase in compensatory water use by the remaining native trees. By investigating the effects of invasive species on ecohydrology and comparing native vs. invasive physiological traits, we not only gain understanding about the functioning of invasive species, but we also highlight potential water-conservation strategies for heavily invaded mono-dominant tropical forests worldwide. Native-dominated forests free of invasive species can be conservative in overall water use, providing a strong rationale for the control of invasive species and preservation of native-dominated stands. PMID:27293637
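
    The heat-dissipation probes mentioned above are commonly converted to sap flux density with the empirical Granier calibration; a hedged sketch follows, using the generic published constants (not necessarily the calibration used in this study) and made-up numbers:

      # Hedged sketch: heat-dissipation (Granier-type) probe signals are commonly
      # converted to sap flux density with the empirical Granier calibration
      # Fd = 118.99e-6 * K^1.231 (m3 m-2 s-1), where K = (dT_max - dT) / dT and
      # dT_max is the probe temperature difference at zero flow. This is the generic
      # calibration, not necessarily the study's own; the values below are made up.
      def sap_flux_density(dt, dt_max):
          k = (dt_max - dt) / dt
          return 118.99e-6 * k ** 1.231 if k > 0 else 0.0

      # Whole-tree water use then scales with sapwood area (m2):
      fd = sap_flux_density(dt=7.5, dt_max=10.0)       # m3 m-2 s-1
      tree_flow_l_per_h = fd * 0.012 * 1000 * 3600     # 0.012 m2 sapwood, hypothetical
      print(f"{fd:.2e} m3 m-2 s-1, {tree_flow_l_per_h:.1f} L h-1")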

  13. Genomic and Enzymatic Results Show Bacillus cellulosilyticus Uses a Novel Set of LPXTA Carbohydrases to Hydrolyze Polysaccharides

    PubMed Central

    Mead, David; Drinkwater, Colleen; Brumm, Phillip J.

    2013-01-01

    Background Alkaliphilic Bacillus species are intrinsically interesting due to the bioenergetic problems posed by growth at high pH and high salt. Three alkaline cellulases have been cloned, sequenced and expressed from Bacillus cellulosilyticus N-4 (Bcell) making it an excellent target for genomic sequencing and mining of biomass-degrading enzymes. Methodology/Principal Findings The genome of Bcell is a single chromosome of 4.7 Mb with no plasmids present and three large phage insertions. The most unusual feature of the genome is the presence of 23 LPXTA membrane anchor proteins; 17 of these are annotated as involved in polysaccharide degradation. These two values are significantly higher than seen in any other Bacillus species. This high number of membrane anchor proteins is seen only in pathogenic Gram-positive organisms such as Listeria monocytogenes or Staphylococcus aureus. Bcell also possesses four sortase D subfamily 4 enzymes that incorporate LPXTA-bearing proteins into the cell wall; three of these are closely related to each other and unique to Bcell. Cell fractionation and enzymatic assay of Bcell cultures show that the majority of polysaccharide degradation is associated with the cell wall LPXTA-enzymes, an unusual feature in Gram-positive aerobes. Genomic analysis and growth studies both strongly argue against Bcell being a truly cellulolytic organism, in spite of its name. Preliminary results suggest that fungal mycelia may be the natural substrate for this organism. Conclusions/Significance Bacillus cellulosilyticus N-4, in spite of its name, does not possess any of the genes necessary for crystalline cellulose degradation, demonstrating the risk of classifying microorganisms without the benefit of genomic analysis. Bcell is the first Gram-positive aerobic organism shown to use predominantly cell-bound, non-cellulosomal enzymes for polysaccharide degradation. The LPXTA-sortase system utilized by Bcell may have applications both in anchoring

  14. Development and application of methods to quantify spatial and temporal hyperpolarized 3He MRI ventilation dynamics: preliminary results in chronic obstructive pulmonary disease

    NASA Astrophysics Data System (ADS)

    Kirby, Miranda; Wheatley, Andrew; McCormack, David G.; Parraga, Grace

    2010-03-01

    Hyperpolarized helium-3 (3He) magnetic resonance imaging (MRI) has emerged as a non-invasive research method for quantifying lung structural and functional changes, enabling direct visualization in vivo at high spatial and temporal resolution. Here we described the development of methods for quantifying ventilation dynamics in response to salbutamol in Chronic Obstructive Pulmonary Disease (COPD). A whole-body 3.0 Tesla Excite 12.0 MRI system was used to obtain multi-slice coronal images acquired immediately after subjects inhaled hyperpolarized 3He gas. Ventilated volume (VV), ventilation defect volume (VDV) and thoracic cavity volume (TCV) were recorded following segmentation of 3He and 1H images respectively, and used to calculate percent ventilated volume (PVV) and ventilation defect percent (VDP). Manual segmentation and Otsu thresholding were significantly correlated for VV (r=.82, p=.001), VDV (r=.87, p=.0002), PVV (r=.85, p=.0005), and VDP (r=.85, p=.0005). The level of agreement between these segmentation methods was also evaluated using Bland-Altman analysis and this showed that manual segmentation was consistently higher for VV (Mean=.22 L, SD=.05) and consistently lower for VDV (Mean=-.13, SD=.05) measurements than Otsu thresholding. To automate the quantification of newly ventilated pixels (NVp) post-bronchodilator, we used translation, rotation, and scaling transformations to register pre- and post-salbutamol images. There was a significant correlation between NVp and VDV (r=-.94, p=.005) and between percent newly ventilated pixels (PNVp) and VDP (r=-.89, p=.02), but not for VV or PVV. Evaluation of 3He MRI ventilation dynamics using Otsu thresholding and landmark-based image registration provides a way to regionally quantify functional changes in COPD subjects after treatment with beta-agonist bronchodilators, a common COPD and asthma therapy.
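
    The volume metrics defined above reduce to simple voxel counting once the 3He and 1H images are segmented. A hedged sketch with synthetic arrays (not MRI data, and omitting the image registration step) follows:

      # Hedged sketch of the volume metrics defined above: ventilated volume (VV) from
      # an Otsu threshold of the 3He image, thoracic cavity volume (TCV) from a 1H
      # mask, VDV = TCV - VV, PVV = 100*VV/TCV and VDP = 100*VDV/TCV. Arrays below
      # are synthetic stand-ins, not MRI data.
      import numpy as np
      from skimage.filters import threshold_otsu

      rng = np.random.default_rng(1)
      he3 = rng.random((32, 64, 64))                  # synthetic 3He intensities
      thoracic_mask = np.ones_like(he3, dtype=bool)   # stand-in 1H cavity mask
      voxel_volume_l = 3.0e-3 * 3.0e-3 * 15.0e-3 * 1000  # m3 -> litres (3x3x15 mm voxel, hypothetical)

      ventilated = he3 > threshold_otsu(he3[thoracic_mask])
      vv = np.count_nonzero(ventilated & thoracic_mask) * voxel_volume_l
      tcv = np.count_nonzero(thoracic_mask) * voxel_volume_l
      vdv = tcv - vv
      print(f"PVV = {100 * vv / tcv:.1f}%, VDP = {100 * vdv / tcv:.1f}%")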

  15. Quantifying Surface Processes and Stratigraphic Characteristics Resulting from Large Magnitude High Frequency and Small Magnitude Low Frequency Relative Sea Level Cycles: An Experimental Study

    NASA Astrophysics Data System (ADS)

    Yu, L.; Li, Q.; Esposito, C. R.; Straub, K. M.

    2015-12-01

    Relative Sea-Level (RSL) change, which is a primary control on sequence stratigraphic architecture, has a close relationship with climate change. In order to explore the influence of RSL change on the stratigraphic record, we conducted three physical experiments which shared identical boundary conditions but differed in their RSL characteristics. Specifically, the three experiments differed with respect to two non-dimensional numbers that compare the magnitude and periodicity of RSL cycles to the spatial and temporal scales of autogenic processes, respectively. The magnitude of RSL change is quantified with H*, defined as the peak-to-trough difference in RSL during a cycle divided by a system's maximum autogenic channel depth. The periodicity of RSL change is quantified with T*, defined as the period of RSL cycles divided by the time required to deposit one channel depth of sediment, on average, everywhere in the basin. Experiments performed included: 1) a control experiment lacking RSL cycles, used to define a system's autogenics, 2) a high magnitude, high frequency RSL cycles experiment, and 3) a low magnitude, low frequency cycles experiment. We observe that the high magnitude, high frequency experiment resulted in the thickest channel bodies with the lowest width-to-depth ratios, while the low magnitude, long period experiment preserved a record of gradual shoreline transgression and regression, producing facies that are the most continuous in space. We plan to integrate our experimental results with Delft3D numerical models that sample similar non-dimensional characteristics of RSL cycles. Quantifying the influence of RSL change, normalized as a function of the spatial and temporal scales of autogenic processes, will strengthen our ability to predict stratigraphic architecture and invert stratigraphy for paleo-environmental conditions.
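
    Written out, with notation introduced here only for illustration (D_max for the deepest autogenic channel, T_RSL for the RSL cycle period, and T_ch for the average time to deposit one channel depth of sediment everywhere in the basin), the two non-dimensional numbers defined above are:

      H^{*} = \frac{\mathrm{RSL}_{\mathrm{peak}} - \mathrm{RSL}_{\mathrm{trough}}}{D_{\max}},
      \qquad
      T^{*} = \frac{T_{\mathrm{RSL}}}{T_{\mathrm{ch}}}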

  16. Presentation Showing Results of a Hydrogeochemical Investigation of the Standard Mine Vicinity, Upper Elk Creek Basin, Colorado

    USGS Publications Warehouse

    Manning, Andrew H.; Verplanck, Philip L.; Mast, M. Alisa; Wanty, Richard B.

    2008-01-01

    PREFACE This Open-File Report consists of a presentation given in Crested Butte, Colorado on December 13, 2007 to the Standard Mine Advisory Group. The presentation was paired with another presentation given by the Colorado Division of Reclamation, Mining, and Safety on the physical features and geology of the Standard Mine. The presentation in this Open-File Report summarizes the results and conclusions of a hydrogeochemical investigation of the Standard Mine performed by the U.S. Geological Survey (Manning and others, in press). The purpose of the investigation was to aid the U.S. Environmental Protection Agency in evaluating remediation options for the Standard Mine site. Additional details and supporting data related to the information in this presentation can be found in Manning and others (in press).

  17. QUANTIFYING FOREST ABOVEGROUND CARBON POOLS AND FLUXES USING MULTI-TEMPORAL LIDAR A report on field monitoring, remote sensing MMV, GIS integration, and modeling results for forestry field validation test to quantify aboveground tree biomass and carbon

    SciTech Connect

    Lee Spangler; Lee A. Vierling; Eva K. Stand; Andrew T. Hudak; Jan U.H. Eitel; Sebastian Martinuzzi

    2012-04-01

    Sound policy recommendations relating to the role of forest management in mitigating atmospheric carbon dioxide (CO₂) depend upon establishing accurate methodologies for quantifying forest carbon pools for large tracts of land that can be dynamically updated over time. Light Detection and Ranging (LiDAR) remote sensing is a promising technology for achieving accurate estimates of aboveground biomass and thereby carbon pools; however, not much is known about the accuracy of estimating biomass change and carbon flux from repeat LiDAR acquisitions containing different data sampling characteristics. In this study, discrete return airborne LiDAR data was collected in 2003 and 2009 across ~20,000 hectares (ha) of an actively managed, mixed conifer forest landscape in northern Idaho, USA. Forest inventory plots were established via a random stratified sampling design and sampled in 2003 and 2009. The Random Forest machine learning algorithm was used to establish statistical relationships between inventory data and forest structural metrics derived from the LiDAR acquisitions. Aboveground biomass maps were created for the study area based on statistical relationships developed at the plot level. Over this 6-year period, we found that the mean increase in biomass due to forest growth across the non-harvested portions of the study area was 4.8 metric tons/hectare (Mg/ha). In these non-harvested areas, we found a significant difference in biomass increase among forest successional stages, with a higher biomass increase in mature and old forest compared to stand initiation and young forest. Approximately 20% of the landscape had been disturbed by harvest activities during the six-year time period, representing a biomass loss of >70 Mg/ha in these areas. During the study period, these harvest activities outweighed growth at the landscape scale, resulting in an overall loss in aboveground carbon at this site. The 30-fold increase in sampling density
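
    A hedged sketch of the plot-to-map workflow described above (fit a Random Forest between field biomass and LiDAR structure metrics, predict biomass for each acquisition year, then difference the maps) is given below using scikit-learn; the metrics and arrays are synthetic placeholders, not the Idaho data:

      # Hedged sketch of the plot-level workflow described above: fit a Random Forest
      # between field-measured aboveground biomass and LiDAR-derived structure metrics,
      # then predict biomass for every map cell and difference the 2003 and 2009 maps.
      # Uses scikit-learn; feature names and arrays are hypothetical placeholders.
      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(42)
      n_plots = 80
      # Example LiDAR metrics per plot: mean height, 95th percentile height, cover.
      X = np.column_stack([rng.uniform(2, 30, n_plots),
                           rng.uniform(5, 45, n_plots),
                           rng.uniform(0.1, 0.95, n_plots)])
      y = 8.0 * X[:, 0] + 2.0 * X[:, 1] + 30.0 * X[:, 2] + rng.normal(0, 10, n_plots)

      model = RandomForestRegressor(n_estimators=500, random_state=0).fit(X, y)

      # Predict biomass (Mg/ha) for gridded metrics from each acquisition year,
      # then map the change; two small synthetic grids stand in for the rasters.
      grid_2003 = rng.uniform([2, 5, 0.1], [30, 45, 0.95], size=(1000, 3))
      grid_2009 = grid_2003 + rng.normal([1.0, 1.5, 0.02], 0.5, size=(1000, 3))
      change = model.predict(grid_2009) - model.predict(grid_2003)
      print(f"mean biomass change: {change.mean():.1f} Mg/ha")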

  18. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle

    PubMed Central

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008
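    A minimal sketch of the multivariate steps described above (standardized EPDs, k-means with k = 3, and a first linear discriminant used as a multivariate index); the EPD matrix here is synthetic, not the Polled Nellore data:

```python
# Minimal sketch: standardize EPDs, cluster animals with k-means (k = 3), take the first
# linear discriminant (LD1) as a multivariate index, and compare it to a simple selection
# index. Data are synthetic placeholders.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
epds = rng.normal(size=(300, 8))            # 300 animals x 8 traits (growth + carcass EPDs)
z = StandardScaler().fit_transform(epds)    # zero mean, unit variance

clusters = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(z)
ld1 = LinearDiscriminantAnalysis(n_components=1).fit_transform(z, clusters).ravel()

# Compare the multivariate index with a conventional selection index (equal weights here).
selection_index = z.mean(axis=1)
print("corr(selection index, LD1) =", np.corrcoef(selection_index, ld1)[0, 1])
```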

  19. Selection Indices and Multivariate Analysis Show Similar Results in the Evaluation of Growth and Carcass Traits in Beef Cattle.

    PubMed

    Brito Lopes, Fernando; da Silva, Marcelo Corrêa; Magnabosco, Cláudio Ulhôa; Goncalves Narciso, Marcelo; Sainz, Roberto Daniel

    2016-01-01

    This research evaluated a multivariate approach as an alternative tool for the purpose of selection regarding expected progeny differences (EPDs). Data were fitted using a multi-trait model and consisted of growth traits (birth weight and weights at 120, 210, 365 and 450 days of age) and carcass traits (longissimus muscle area (LMA), back-fat thickness (BF), and rump fat thickness (RF)), registered over 21 years in extensive breeding systems of Polled Nellore cattle in Brazil. Multivariate analyses were performed using standardized (zero mean and unit variance) EPDs. The k-means method revealed that the best fit of the data occurred using three clusters (k = 3) (P < 0.001). Estimates of genetic correlation among growth and carcass traits and the estimates of heritability were moderate to high, suggesting that a correlated response approach is suitable for practical decision making. Estimates of correlation between selection indices and the multivariate index (LD1) were moderate to high, ranging from 0.48 to 0.97. This reveals that both types of indices give similar results and that the multivariate approach is reliable for the purpose of selection. The alternative tool seems very handy when economic weights are not available or in cases where more rapid identification of the best animals is desired. Interestingly, multivariate analysis allowed forecasting information based on the relationships among breeding values (EPDs). Also, it enabled fine discrimination, rapid data summarization after genetic evaluation, and permitted accounting for maternal ability and the genetic direct potential of the animals. In addition, we recommend the use of longissimus muscle area and subcutaneous fat thickness as selection criteria, to allow estimation of breeding values before the first mating season in order to accelerate the response to individual selection. PMID:26789008

  20. Quantifying Electron Delocalization in Electrides.

    PubMed

    Janesko, Benjamin G; Scalmani, Giovanni; Frisch, Michael J

    2016-01-12

    Electrides are ionic solids whose anions are electrons confined to crystal voids. We show that our electron delocalization range function EDR(r;d), which quantifies the extent to which an electron at point r in a calculated wave function delocalizes over distance d, provides useful insights into electrides. The EDR quantifies the characteristic delocalization length of electride electrons and provides a chemically intuitive real-space picture of the electrons' distribution. It also gives a potential diagnostic for whether a given formula unit will form a solid electride at ambient pressure, quantifies the effects of electron-electron correlation on confined electrons' interactions, and highlights analogies between covalent bonding and the interaction of interstitial quasi-atoms in high-pressure electrides. These results motivate adding the EDR to the toolbox of theoretical methods applied to electrides. PMID:26652208

  1. QUANTIFYING SPICULES

    SciTech Connect

    Pereira, Tiago M. D.; De Pontieu, Bart; Carlsson, Mats

    2012-11-01

    Understanding the dynamic solar chromosphere is fundamental in solar physics. Spicules are an important feature of the chromosphere, connecting the photosphere to the corona, potentially mediating the transfer of energy and mass. The aim of this work is to study the properties of spicules over different regions of the Sun. Our goal is to investigate if there is more than one type of spicule, and how spicules behave in the quiet Sun, coronal holes, and active regions. We make use of high cadence and high spatial resolution Ca II H observations taken by Hinode/Solar Optical Telescope. Making use of a semi-automated detection algorithm, we self-consistently track and measure the properties of 519 spicules over different regions. We find clear evidence of two types of spicules. Type I spicules show a rise and fall and have typical lifetimes of 150-400 s and maximum ascending velocities of 15-40 km s{sup -1}, while type II spicules have shorter lifetimes of 50-150 s, faster velocities of 30-110 km s{sup -1}, and are not seen to fall down, but rather fade at around their maximum length. Type II spicules are the most common, seen in the quiet Sun and coronal holes. Type I spicules are seen mostly in active regions. There are regional differences between quiet-Sun and coronal hole spicules, likely attributable to the different field configurations. The properties of type II spicules are consistent with published results of rapid blueshifted events (RBEs), supporting the hypothesis that RBEs are their disk counterparts. For type I spicules we find the relations between their properties to be consistent with a magnetoacoustic shock wave driver, and with dynamic fibrils as their disk counterpart. The driver of type II spicules remains unclear from limb observations.

  2. Quantifying Quantumness

    NASA Astrophysics Data System (ADS)

    Braun, Daniel; Giraud, Olivier; Braun, Peter A.

    2010-03-01

    We introduce and study a measure of ``quantumness'' of a quantum state based on its Hilbert-Schmidt distance from the set of classical states. ``Classical states'' were defined earlier as states for which a positive P-function exists, i.e. they are mixtures of coherent states [1]. We study invariance properties of the measure, upper bounds, and its relation to entanglement measures. We evaluate the quantumness of a number of physically interesting states and show that for any physical system in thermal equilibrium there is a finite critical temperature above which quantumness vanishes. We then use the measure for identifying the ``most quantum'' states. Such states are expected to be potentially most useful for quantum information theoretical applications. We find these states explicitly for low-dimensional spin-systems, and show that they possess beautiful, highly symmetric Majorana representations. [4pt] [1] Classicality of spin states, Olivier Giraud, Petr Braun, and Daniel Braun, Phys. Rev. A 78, 042112 (2008)

  3. Uncertainty quantified trait predictions

    NASA Astrophysics Data System (ADS)

    Fazayeli, Farideh; Kattge, Jens; Banerjee, Arindam; Schrodt, Franziska; Reich, Peter

    2015-04-01

    Functional traits of organisms are key to understanding and predicting biodiversity and ecological change, which motivates continuous collection of traits and their integration into global databases. Such composite trait matrices are inherently sparse, severely limiting their usefulness for further analyses. On the other hand, traits are characterized by the phylogenetic trait signal, trait-trait correlations and environmental constraints, all of which provide information that could be used to statistically fill gaps. We propose the application of probabilistic models which, for the first time, utilize all three characteristics to fill gaps in trait databases and predict trait values at larger spatial scales. For this purpose we introduce BHPMF, a hierarchical Bayesian extension of Probabilistic Matrix Factorization (PMF). PMF is a machine learning technique which exploits the correlation structure of sparse matrices to impute missing entries. BHPMF additionally utilizes the taxonomic hierarchy for trait prediction. Implemented in the context of a Gibbs Sampler MCMC approach, BHPMF provides uncertainty estimates for each trait prediction. We present comprehensive experimental results on the problem of plant trait prediction using the largest database of plant traits, where BHPMF shows strong empirical performance in uncertainty quantified trait prediction, outperforming the state-of-the-art based on point estimates. Further, we show that BHPMF is more accurate when it is confident, whereas the error is high when the uncertainty is high.
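    BHPMF itself adds a taxonomic hierarchy and Gibbs sampling, but its core is probabilistic matrix factorization over the observed entries of a sparse trait matrix; a minimal gap-filling sketch on synthetic data, assuming a simple stochastic-gradient fit rather than the paper's Bayesian machinery, is:

```python
# Minimal PMF-style gap filling: factor a sparse (species x trait) matrix from its
# observed entries only, then use the low-rank product to predict the gaps.
# Data, rank, and learning rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_species, n_traits, k = 200, 10, 5
true = rng.normal(size=(n_species, k)) @ rng.normal(size=(k, n_traits))
mask = rng.random((n_species, n_traits)) < 0.3            # ~30% of entries observed
X = np.where(mask, true + 0.1 * rng.normal(size=true.shape), np.nan)

U = 0.1 * rng.normal(size=(n_species, k))
V = 0.1 * rng.normal(size=(n_traits, k))
lam, lr = 0.05, 0.01
rows, cols = np.where(mask)
for epoch in range(50):
    for i, j in zip(rows, cols):
        err = X[i, j] - U[i] @ V[j]
        u_i = U[i].copy()
        U[i] += lr * (err * V[j] - lam * U[i])
        V[j] += lr * (err * u_i - lam * V[j])

filled = U @ V.T                                           # predictions for the gaps
rmse = np.sqrt(np.mean((filled[~mask] - true[~mask]) ** 2))
print("gap-filling RMSE on unobserved entries:", round(rmse, 3))
```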

  4. Quantifying solvated electrons' delocalization.

    PubMed

    Janesko, Benjamin G; Scalmani, Giovanni; Frisch, Michael J

    2015-07-28

    Delocalized, solvated electrons are a topic of much recent interest. We apply the electron delocalization range EDR(r;u) (J. Chem. Phys., 2014, 141, 144104) to quantify the extent to which a solvated electron at point r in a calculated wavefunction delocalizes over distance u. Calculations on electrons in one-dimensional model cavities illustrate fundamental properties of the EDR. Mean-field calculations on hydrated electrons (H2O)n(-) show that the density-matrix-based EDR reproduces existing molecular-orbital-based measures of delocalization. Correlated calculations on hydrated electrons and electrons in lithium-ammonia clusters illustrate how electron correlation tends to move surface- and cavity-bound electrons onto the cluster or cavity surface. Applications to multiple solvated electrons in lithium-ammonia clusters provide a novel perspective on the interplay of delocalization and strong correlation central to lithium-ammonia solutions' concentration-dependent insulator-to-metal transition. The results motivate continued application of the EDR to simulations of delocalized electrons. PMID:25994586

  5. Methods for Quantifying the Uncertainties of LSIT Test Parameters, Test Results, and Full-Scale Mixing Performance Using Models Developed from Scaled Test Data

    SciTech Connect

    Piepel, Gregory F.; Cooley, Scott K.; Kuhn, William L.; Rector, David R.; Heredia-Langner, Alejandro

    2015-05-01

    This report discusses the statistical methods for quantifying uncertainties in 1) test responses and other parameters in the Large Scale Integrated Testing (LSIT), and 2) estimates of coefficients and predictions of mixing performance from models that relate test responses to test parameters. Testing at a larger scale has been committed to by Bechtel National, Inc. and the U.S. Department of Energy (DOE) to “address uncertainties and increase confidence in the projected, full-scale mixing performance and operations” in the Waste Treatment and Immobilization Plant (WTP).

  6. "The Show"

    ERIC Educational Resources Information Center

    Gehring, John

    2004-01-01

    For the past 16 years, the blue-collar city of Huntington, West Virginia, has rolled out the red carpet to welcome young wrestlers and their families as old friends. They have come to town chasing the same dream for a spot in what many of them call "The Show". For three days, under the lights of an arena packed with 5,000 fans, the state's best…

  7. Quantifiers induced by subjective expected value of sample information.

    PubMed

    Guo, Kaihong

    2014-10-01

    The ordered weighted averaging (OWA) operator provides a unified framework for multiattribute decision making (MADM) under uncertainty. In this paper, we attempt to tackle some issues arising from quantifier-guided aggregation using OWA operators. This allows us to consider a more general case involving the generation of a quantifier targeted at a specified decision maker (DM) by using sample information. To do so, we first develop a repeatable interactive procedure in which, from the given sample values and the expected values that the DM provides according to personal preferences, we build nonlinear optimization models to extract information about the DM's decision attitude in the form of an OWA weighting vector. With the obtained attitudinal weighting vectors, we then suggest a suitable quantifier for this DM by means of piecewise linear interpolation. The resulting quantifier is derived entirely from the behavior of the DM involved and is thus inherently characterized by his/her own attitudinal character. Owing to the nature of this type of quantifier, we call it the subjective expected value of sample information-induced quantifier. We show some properties of the developed quantifier. We also prove the consistency of OWA aggregation guided by this type of quantifier. In contrast with parameterized quantifiers, our developed quantifiers are oriented toward the specified DMs with proper consideration of their decision attitudes or behavior characteristics, thus bringing about more intuitively appealing and convincing results in quantifier-guided OWA aggregation. PMID:25222722
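    The paper's SEVSI-induced quantifier is built interactively from a DM's responses; the sketch below only illustrates the standard quantifier-guided OWA machinery it plugs into (weights generated from a regular increasing monotone quantifier and applied to reordered arguments), using a generic example quantifier:

```python
# Quantifier-guided OWA aggregation: w_i = Q(i/n) - Q((i-1)/n), applied to the
# arguments sorted in descending order. The quantifier Q here is a generic example,
# not the SEVSI-induced quantifier constructed in the paper.
import numpy as np

def owa_weights_from_quantifier(Q, n):
    i = np.arange(1, n + 1)
    return Q(i / n) - Q((i - 1) / n)

def owa(values, weights):
    return np.sort(values)[::-1] @ weights

Q = lambda r: r ** 2                      # an example RIM quantifier ("most"-like attitude)
scores = np.array([0.9, 0.4, 0.7, 0.6])
w = owa_weights_from_quantifier(Q, len(scores))
print("weights:", w, " OWA value:", owa(scores, w))
```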

  8. Quantifying Health Across Populations.

    PubMed

    Kershnar, Stephen

    2016-07-01

    In this article, I argue that as a theoretical matter, a population's health-level is best quantified via averagism. Averagism asserts that the health of a population is the average of members' health-levels. This model is better because it does not fall prey to a number of objections, including the repugnant conclusion, and because it is not arbitrary. I also argue that as a practical matter, population health-levels are best quantified via totalism. Totalism asserts that the health of a population is the sum of members' health-levels. Totalism is better here because it fits better with cost-benefit analysis and such an analysis is the best practical way to value healthcare outcomes. The two results are compatible because the theoretical and practical need not always align, whether in general or in the context of population health. PMID:26766584

  9. Quantifying concordance in cosmology

    NASA Astrophysics Data System (ADS)

    Seehars, Sebastian; Grandis, Sebastian; Amara, Adam; Refregier, Alexandre

    2016-05-01

    Quantifying the concordance between different cosmological experiments is important for testing the validity of theoretical models and systematics in the observations. In earlier work, we thus proposed the Surprise, a concordance measure derived from the relative entropy between posterior distributions. We revisit the properties of the Surprise and describe how it provides a general, versatile, and robust measure for the agreement between data sets. We also compare it to other measures of concordance that have been proposed for cosmology. As an application, we extend our earlier analysis and use the Surprise to quantify the agreement between WMAP 9, Planck 13, and Planck 15 constraints on the Λ CDM model. Using a principal component analysis in parameter space, we find that the large Surprise between WMAP 9 and Planck 13 (S =17.6 bits, implying a deviation from consistency at 99.8% confidence) is due to a shift along a direction that is dominated by the amplitude of the power spectrum. The Planck 15 constraints deviate from the Planck 13 results (S =56.3 bits), primarily due to a shift in the same direction. The Surprise between WMAP and Planck consequently disappears when moving to Planck 15 (S =-5.1 bits). This means that, unlike Planck 13, Planck 15 is not in tension with WMAP 9. These results illustrate the advantages of the relative entropy and the Surprise for quantifying the disagreement between cosmological experiments and more generally as an information metric for cosmology.
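    The Surprise is built on the relative entropy between posterior distributions; a toy illustration of that ingredient for one-dimensional Gaussian posteriors (the numbers are invented, not the WMAP/Planck constraints) is:

```python
# Relative entropy (KL divergence) in bits between two 1-D Gaussian posteriors,
# the quantity underlying the Surprise statistic. Values are illustrative only.
import numpy as np

def kl_bits(mu1, sigma1, mu2, sigma2):
    """D_KL(p1 || p2) for 1-D Gaussians, converted from nats to bits."""
    nats = np.log(sigma2 / sigma1) + (sigma1**2 + (mu1 - mu2)**2) / (2 * sigma2**2) - 0.5
    return nats / np.log(2)

# e.g. a shift in an amplitude-like parameter between two hypothetical data releases
print(kl_bits(mu1=0.83, sigma1=0.02, mu2=0.80, sigma2=0.03), "bits")
```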

  10. Two heteronuclear dipolar results at the price of one: Quantifying Na/P contacts in phosphosilicate glasses and biomimetic hydroxy-apatite

    NASA Astrophysics Data System (ADS)

    Stevensson, Baltzar; Mathew, Renny; Yu, Yang; Edén, Mattias

    2015-02-01

    The analysis of S{I} recoupling experiments applied to amorphous solids yields a heteronuclear second moment M2 (S-I) that represents the effective through-space dipolar interaction between the detected S spins and the neighboring I-spin species. We show that both M2 (S-I) and M2 (I-S) values are readily accessible from a sole S{I} or I{S} experiment, which may involve either S or I detection, and is naturally selected as the most favorable option under the given experimental conditions. For the common case where I has half-integer spin, an I{S} REDOR implementation is preferred to the S{I} REAPDOR counterpart. We verify the procedure by 23Na{31P} REDOR and 31P{23Na} REAPDOR NMR applied to Na2O-CaO-SiO2-P2O5 glasses and biomimetic hydroxyapatite, where the M2 (P-Na) values directly determined by REAPDOR agree very well with those derived from the corresponding M2 (Na-P) results measured by REDOR. Moreover, we show that dipolar second moments are readily extracted from the REAPDOR NMR protocol by a straightforward numerical fitting of the initial dephasing data, in direct analogy with the well-established procedure to determine M2 (S-I) values from REDOR NMR experiments applied to amorphous materials; this avoids the problems with time-consuming numerically exact simulations whose accuracy is limited for describing the dynamics of a priori unknown multi-spin systems in disordered structures.

  11. Analysis of conservative tracer measurement results using the Frechet distribution at planted horizontal subsurface flow constructed wetlands filled with coarse gravel and showing the effect of clogging processes.

    PubMed

    Dittrich, Ernő; Klincsik, Mihály

    2015-11-01

    A mathematical process, developed in a Maple environment, has been successful in decreasing the error of measurement results and in the precise calculation of the moments of corrected tracer functions. It was proved that with this process, the measured tracer results of horizontal subsurface flow constructed wetlands filled with coarse gravel (HSFCW-C) can be fitted more accurately than with the conventionally used distribution functions (Gaussian, Lognormal, Fick (Inverse Gaussian) and Gamma). This statement is true only for the planted HSFCW-Cs. The analysis of unplanted HSFCW-Cs needs more research. The result of the analysis shows that the conventional solutions (completely stirred series tank reactor (CSTR) model and convection-dispersion transport (CDT) model) cannot describe these types of transport processes with sufficient accuracy. These outcomes can help in developing better process descriptions of very difficult transport processes in HSFCW-Cs. Furthermore, a new mathematical process can be developed for the calculation of real hydraulic residence time (HRT) and dispersion coefficient values. The presented method can be generalized to other kinds of hydraulic environments. PMID:26126688
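    A hedged sketch of the distribution-fitting step: fitting a Frechet distribution (SciPy's invweibull) to residence-time data and reading off its moments; the data here are synthetic, and the paper's Maple-based error-correction procedure is not reproduced:

```python
# Fit a Frechet (inverse Weibull) distribution to tracer residence times and derive
# moments from the fitted parameters. Residence times below are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
residence_times = stats.invweibull.rvs(c=3.0, loc=0.0, scale=40.0, size=500,
                                       random_state=rng)          # hours

c, loc, scale = stats.invweibull.fit(residence_times, floc=0.0)    # location fixed at zero
mean_hrt, var_hrt = stats.invweibull.stats(c, loc=loc, scale=scale, moments="mv")
print("shape c:", round(float(c), 2), " scale:", round(float(scale), 1))
print("mean HRT (h):", round(float(mean_hrt), 1), " variance:", round(float(var_hrt), 1))
```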

  12. Simple instruments used in monitoring ionospheric perturbations and some observational results showing the ionospheric responses to the perturbations mainly from the lower atmosphere

    NASA Astrophysics Data System (ADS)

    Xiao, Zuo; Hao, Yongqiang; Zhang, Donghe; Xiao, Sai-Guan; Huang, Weiquan

    Ionospheric disturbances such as SIDs and acoustic gravity waves of different scales are well known and commonly discussed topics. Some simple ground equipment was designed and used to continuously monitor the effects of these disturbances, especially SWF and SFD. Besides SIDs, the records also clearly reflect acoustic gravity waves of different scales and Spread-F, and these data are an important supplement to traditional ionosonde records. They are of significance for understanding the physical essentials of ionospheric disturbances and for applications in SID warning. In this paper, the design of the instruments is given and the results are discussed in detail. Some case studies are introduced as examples, which showed very clearly not only the immediate effects of solar flares, but also ionospheric responses to large-scale gravity waves from the lower atmosphere, such as those generated by typhoons, great earthquakes, and volcanic eruptions. In particular, the results showed that acoustic gravity waves play a significant role in seeding ionospheric Spread-F. These examples give evidence that lower atmospheric activities strongly influence the ionosphere.

  13. Transgene silencing of the Hutchinson-Gilford progeria syndrome mutation results in a reversible bone phenotype, whereas resveratrol treatment does not show overall beneficial effects.

    PubMed

    Strandgren, Charlotte; Nasser, Hasina Abdul; McKenna, Tomás; Koskela, Antti; Tuukkanen, Juha; Ohlsson, Claes; Rozell, Björn; Eriksson, Maria

    2015-08-01

    Hutchinson-Gilford progeria syndrome (HGPS) is a rare premature aging disorder that is most commonly caused by a de novo point mutation in exon 11 of the LMNA gene, c.1824C>T, which results in an increased production of a truncated form of lamin A known as progerin. In this study, we used a mouse model to study the possibility of recovering from HGPS bone disease upon silencing of the HGPS mutation, and the potential benefits from treatment with resveratrol. We show that complete silencing of the transgenic expression of progerin normalized bone morphology and mineralization already after 7 weeks. The improvements included lower frequencies of rib fractures and callus formation, an increased number of osteocytes in remodeled bone, and normalized dentinogenesis. The beneficial effects from resveratrol treatment were less significant and to a large extent similar to mice treated with sucrose alone. However, the reversal of the dental phenotype of overgrown and laterally displaced lower incisors in HGPS mice could be attributed to resveratrol. Our results indicate that the HGPS bone defects were reversible upon suppressed transgenic expression and suggest that treatments targeting aberrant progerin splicing give hope to patients who are affected by HGPS. PMID:25877214

  14. Magnetic Sphincter Augmentation for Gastroesophageal Reflux at 5 Years: Final Results of a Pilot Study Show Long-Term Acid Reduction and Symptom Improvement

    PubMed Central

    Saino, Greta; Bonavina, Luigi; Lipham, John C.; Dunn, Daniel

    2015-01-01

    Abstract Background: As previously reported, the magnetic sphincter augmentation device (MSAD) preserves gastric anatomy and results in less severe side effects than traditional antireflux surgery. The final 5-year results of a pilot study are reported here. Patients and Methods: A prospective, multicenter study evaluated safety and efficacy of the MSAD for 5 years. Prior to MSAD placement, patients had abnormal esophageal acid and symptoms poorly controlled by proton pump inhibitors (PPIs). Patients served as their own control, which allowed comparison between baseline and postoperative measurements to determine individual treatment effect. At 5 years, gastroesophageal reflux disease (GERD)-Health Related Quality of Life (HRQL) questionnaire score, esophageal pH, PPI use, and complications were evaluated. Results: Between February 2007 and October 2008, 44 patients (26 males) had an MSAD implanted by laparoscopy, and 33 patients were followed up at 5 years. Mean total percentage of time with pH <4 was 11.9% at baseline and 4.6% at 5 years (P < .001), with 85% of patients achieving pH normalization or at least a 50% reduction. Mean total GERD-HRQL score improved significantly from 25.7 to 2.9 (P < .001) when comparing baseline and 5 years, and 93.9% of patients had at least a 50% reduction in total score compared with baseline. Complete discontinuation of PPIs was achieved by 87.8% of patients. No complications occurred in the long term, including no device erosions or migrations at any point. Conclusions: Based on long-term reduction in esophageal acid, symptom improvement, and no late complications, this study shows the relative safety and efficacy of magnetic sphincter augmentation for GERD. PMID:26437027

  15. Modeling upward brine migration through faults as a result of CO2 storage in the Northeast German Basin shows negligible salinization in shallow aquifers

    NASA Astrophysics Data System (ADS)

    Kuehn, M.; Tillner, E.; Kempka, T.; Nakaten, B.

    2012-12-01

    The geological storage of CO2 in deep saline formations may cause salinization of shallower freshwater resources by upward flow of displaced brine from the storage formation into potable groundwater. In this regard, permeable faults or fractures can serve as potential leakage pathways for upward brine migration. The present study uses a regional-scale 3D model based on real structural data of a prospective CO2 storage site in Northeastern Germany to determine the impact of compartmentalization and fault permeability on upward brine migration as a result of pressure elevation by CO2 injection. To evaluate the degree of salinization in the shallower aquifers, different fault leakage scenarios were carried out using a newly developed workflow in which the model grid from the software package Petrel, applied for pre-processing, is transferred to the reservoir simulator TOUGH2-MP/ECO2N. A discrete fault description is achieved by using virtual elements. A static 3D geological model of the CO2 storage site with an areal size of 40 km x 40 km and a thickness of 766 m was implemented. Subsequently, large-scale numerical multi-phase multi-component (CO2, NaCl, H2O) flow simulations were carried out on a high performance computing system. The prospective storage site, located in the Northeast German Basin, is part of an anticline structure characterized by a saline multi-layer aquifer system. The NE and SW boundaries of the study area are confined by the Fuerstenwalde Gubener and the Lausitzer Abbruch fault zones, represented by four discrete faults in the model. Two formations of the Middle Bunter were chosen to assess brine migration through faults triggered by an annual injection rate of 1.7 Mt CO2 into the lowermost formation over a time span of 20 years. In addition to varying fault permeabilities, different boundary conditions were applied to evaluate the effects of reservoir compartmentalization. Simulation results show that the highest pressurization within the storage

  16. A high-density wireless underground sensor network (WUSN) to quantify hydro-ecological interactions for a UK floodplain; project background and initial results

    NASA Astrophysics Data System (ADS)

    Verhoef, A.; Choudhary, B.; Morris, P. J.; McCann, J.

    2012-04-01

    Floodplain meadows support some of the most diverse vegetation in the UK, and also perform key ecosystem services, such as flood storage and sediment retention. However, the UK now has less than 1500 ha of this unique habitat remaining. In order to conserve and better exploit the services provided by this grassland, an improved understanding of its functioning is essential. Vegetation functioning and species composition are known to be tightly correlated to the hydrological regime, and related temperature and nutrient regime, but the mechanisms controlling these relationships are not well established. The FUSE* project aims to investigate the spatiotemporal variability in vegetation functioning (e.g. photosynthesis and transpiration) and plant community composition in a floodplain meadow near Oxford, UK (Yarnton Mead), and their relationship to key soil physical variables (soil temperature and moisture content), soil nutrient levels and the water- and energy-balance. A distributed high density Wireless Underground Sensor Network (WUSN) is in the process of being established on Yarnton Mead. The majority, or ideally all, of the sensing and transmitting components will be installed below-ground because Yarnton Mead is a SSSI (Site of Special Scientific Interest, due to its unique plant community) and because occasionally sheep or cattle are grazing on it, and that could damage the nodes. This prerequisite has implications for the maximum spacing between UG nodes and their communications technologies; in terms of signal strength, path losses and requirements for battery life. The success of underground wireless communication is highly dependent on the soil type and water content. This floodplain environment is particularly challenging in this context because the soil contains a large amount of clay near the surface and is therefore less favourable to EM wave propagation than sandy soils. Furthermore, due to high relative saturation levels (as a result of high

  17. Quantifying and Reducing the Uncertainties in Future Projections of Droughts and Heat Waves for North America that Result from the Diversity of Models in CMIP5

    NASA Astrophysics Data System (ADS)

    Herrera-Estrada, J. E.; Sheffield, J.

    2014-12-01

    There are many sources of uncertainty regarding the future projections of our climate, including the multiple possible Representative Concentration Pathways (RCPs), the variety of climate models used, and the initial and boundary conditions with which they are run. Moreover, it has been shown that the internal variability of the climate system can sometimes be of the same order of magnitude as the climate change signal or even larger for some variables. Nonetheless, in order to help inform stakeholders in water resources and agriculture in North America when developing adaptation strategies, particularly for extreme events such as droughts and heat waves, it is necessary to study the plausible range of changes that the region might experience during the 21st century. We aim to understand and reduce the uncertainties associated with this range of possible scenarios by focusing on the diversity of climate models involved in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Data output from various CMIP5 models is compared against near surface climate and land-surface hydrological data from the North American Land Data Assimilation System (NLDAS)-2 to evaluate how well each climate model represents the land-surface processes associated with droughts and heat waves during the overlapping historical period (1979-2005). These processes include the representation of precipitation and radiation and their partitioning at the land surface, land-atmosphere interactions, and the propagation of signals of these extreme events through the land surface. The ability of the CMIP5 models to reproduce these important physical processes for regions of North America is used to inform a multi-model ensemble in which models that represent the processes relevant to droughts and heat waves better are given more importance. Furthermore, the future projections are clustered to identify possible dependencies in behavior across models. The results indicate a wide range in performance

  18. Quantifiable Lateral Flow Assay Test Strips

    NASA Technical Reports Server (NTRS)

    2003-01-01

    As easy to read as a home pregnancy test, three Quantifiable Lateral Flow Assay (QLFA) strips used to test water for E. coli show different results. The brightly glowing control line on the far right of each strip indicates that all three tests ran successfully. But the glowing test line on the middle left and bottom strips reveal their samples were contaminated with E. coli bacteria at two different concentrations. The color intensity correlates with concentration of contamination.

  19. Storytelling Slide Shows to Improve Diabetes and High Blood Pressure Knowledge and Self-Efficacy: Three-Year Results among Community Dwelling Older African Americans

    ERIC Educational Resources Information Center

    Bertera, Elizabeth M.

    2014-01-01

    This study combined the African American tradition of oral storytelling with the Hispanic medium of "Fotonovelas." A staggered pretest posttest control group design was used to evaluate four Storytelling Slide Shows on health that featured community members. A total of 212 participants were recruited for the intervention and 217 for the…

  20. Mathematical modelling in Matlab of the experimental results shows the electrochemical potential difference - temperature of the WC coatings immersed in a NaCl solution

    NASA Astrophysics Data System (ADS)

    Benea, M. L.; Benea, O. D.

    2016-02-01

    The method used for assessing the corrosion behaviour of the WC coatings deposited by plasma spraying on a martensitic stainless steel substrate consists in measuring the electrochemical potential of the coating and, respectively, that of the substrate, immersed in a NaCl solution as the corrosive agent. The mathematical processing of the obtained experimental results in Matlab allowed us to establish correlations between the electrochemical potential of the coating and the solution temperature, which are very well described by curves whose equations were obtained by fourth-order interpolation.
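    The paper works in Matlab; an equivalent NumPy sketch of the fourth-order fit of potential against temperature, with invented measurement values, is:

```python
# Fourth-order polynomial fit of electrochemical potential vs. solution temperature,
# mirroring the Matlab processing described above. All values are made up.
import numpy as np

temperature_C = np.array([20, 25, 30, 35, 40, 45, 50, 55, 60])
potential_mV = np.array([-310, -318, -329, -343, -355, -362, -371, -384, -401])

coeffs = np.polyfit(temperature_C, potential_mV, deg=4)    # 4th-order interpolation
fitted = np.polyval(coeffs, temperature_C)
print("polynomial coefficients:", coeffs)
print("max fit residual (mV):", np.abs(fitted - potential_mV).max())
```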

  1. Quantifying the Adaptive Cycle

    PubMed Central

    Angeler, David G.; Allen, Craig R.; Garmestani, Ahjond S.; Gunderson, Lance H.; Hjerne, Olle; Winder, Monika

    2015-01-01

    The adaptive cycle was proposed as a conceptual model to portray patterns of change in complex systems. Despite the model having potential for elucidating change across systems, it has been used mainly as a metaphor, describing system dynamics qualitatively. We use a quantitative approach for testing premises (reorganisation, conservatism, adaptation) in the adaptive cycle, using Baltic Sea phytoplankton communities as an example of such complex system dynamics. Phytoplankton organizes in recurring spring and summer blooms, a well-established paradigm in planktology and succession theory, with characteristic temporal trajectories during blooms that may be consistent with adaptive cycle phases. We used long-term (1994–2011) data and multivariate analysis of community structure to assess key components of the adaptive cycle. Specifically, we tested predictions about: reorganisation: spring and summer blooms comprise distinct community states; conservatism: community trajectories during individual adaptive cycles are conservative; and adaptation: phytoplankton species during blooms change in the long term. All predictions were supported by our analyses. Results suggest that traditional ecological paradigms such as phytoplankton successional models have potential for moving the adaptive cycle from a metaphor to a framework that can improve our understanding of how complex systems organize and reorganize following collapse. Quantifying reorganization, conservatism and adaptation provides opportunities to cope with the intricacies and uncertainties associated with fast ecological change, driven by shifting system controls. Ultimately, combining traditional ecological paradigms with heuristics of complex system dynamics using quantitative approaches may help refine ecological theory and improve our understanding of the resilience of ecosystems. PMID:26716453

  2. "First Things First" Shows Promising Results

    ERIC Educational Resources Information Center

    Hendrie, Caroline

    2005-01-01

    In this article, the author discusses a school improvement model, First Things First, developed by James P. Connell, a former tenured professor of psychology at the University of Rochester in New York. The model has three pillars for the high school level: (1) small, themed learning communities that each keep a group of students together…

  3. Quantifying Faculty Workloads.

    ERIC Educational Resources Information Center

    Archer, J. Andrew

    Teaching load depends on many variables, however most colleges define it strictly in terms of contact or credit hours. The failure to give weight to variables such as number of preparations, number of students served, committee and other noninstructional assignments is usually due to the lack of a formula that will quantify the effects of these…

  4. Catalysis: Quantifying charge transfer

    NASA Astrophysics Data System (ADS)

    James, Trevor E.; Campbell, Charles T.

    2016-02-01

    Improving the design of catalytic materials for clean energy production requires a better understanding of their electronic properties, which remains experimentally challenging. Researchers now quantify the number of electrons transferred from metal nanoparticles to an oxide support as a function of particle size.

  5. Quantifying tumour heterogeneity with CT

    PubMed Central

    Miles, Kenneth A.

    2013-01-01

    Abstract Heterogeneity is a key feature of malignancy associated with adverse tumour biology. Quantifying heterogeneity could provide a useful non-invasive imaging biomarker. Heterogeneity on computed tomography (CT) can be quantified using texture analysis which extracts spatial information from CT images (unenhanced, contrast-enhanced and derived images such as CT perfusion) that may not be perceptible to the naked eye. The main components of texture analysis can be categorized into image transformation and quantification. Image transformation filters the conventional image into its basic components (spatial, frequency, etc.) to produce derived subimages. Texture quantification techniques include structural-, model- (fractal dimensions), statistical- and frequency-based methods. The underlying tumour biology that CT texture analysis may reflect includes (but is not limited to) tumour hypoxia and angiogenesis. Emerging studies show that CT texture analysis has the potential to be a useful adjunct in clinical oncologic imaging, providing important information about tumour characterization, prognosis and treatment prediction and response. PMID:23545171
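    A hedged sketch of the two-stage idea described above (image transformation followed by quantification), here as a Laplacian-of-Gaussian filter and first-order histogram statistics on a stand-in image; this is one simple variant of texture analysis, not the specific implementations used clinically:

```python
# Filtration-histogram style texture analysis: band-pass filter the image, then
# quantify the histogram of the filtered image with simple statistics.
# The "image" here is random noise standing in for a CT slice.
import numpy as np
from scipy import ndimage, stats

image = np.random.default_rng(4).normal(size=(128, 128))       # stand-in for a CT slice
filtered = ndimage.gaussian_laplace(image, sigma=2.0)           # image transformation step

pixels = filtered.ravel()
features = {
    "mean": pixels.mean(),
    "sd": pixels.std(),
    "skewness": stats.skew(pixels),
    "kurtosis": stats.kurtosis(pixels),
    "entropy": stats.entropy(np.histogram(pixels, bins=64)[0] + 1e-12),
}
print(features)
```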

  6. Quantifying Ubiquitin Signaling

    PubMed Central

    Ordureau, Alban; Münch, Christian; Harper, J. Wade

    2015-01-01

    Ubiquitin (UB)-driven signaling systems permeate biology, and are often integrated with other types of post-translational modifications (PTMs), most notably phosphorylation. Flux through such pathways is typically dictated by the fractional stoichiometry of distinct regulatory modifications and protein assemblies as well as the spatial organization of pathway components. Yet, we rarely understand the dynamics and stoichiometry of rate-limiting intermediates along a reaction trajectory. Here, we review how quantitative proteomic tools and enrichment strategies are being used to quantify UB-dependent signaling systems, and to integrate UB signaling with regulatory phosphorylation events. A key regulatory feature of ubiquitylation is that the identity of UB chain linkage types can control downstream processes. We also describe how proteomic and enzymological tools can be used to identify and quantify UB chain synthesis and linkage preferences. The emergence of sophisticated quantitative proteomic approaches will set a new standard for elucidating biochemical mechanisms of UB-driven signaling systems. PMID:26000850

  7. Results.

    ERIC Educational Resources Information Center

    Zemsky, Robert; Shaman, Susan; Shapiro, Daniel B.

    2001-01-01

    Describes the Collegiate Results Instrument (CRI), which measures a range of collegiate outcomes for alumni 6 years after graduation. The CRI was designed to target alumni from institutions across market segments and assess their values, abilities, work skills, occupations, and pursuit of lifelong learning. (EV)

  8. Quantifying light pollution

    NASA Astrophysics Data System (ADS)

    Cinzano, P.; Falchi, F.

    2014-05-01

    In this paper we review new available indicators useful to quantify and monitor light pollution, defined as the alteration of the natural quantity of light in the night environment due to introduction of manmade light. With the introduction of recent radiative transfer methods for the computation of light pollution propagation, several new indicators become available. These indicators represent a primary step in light pollution quantification, beyond the bare evaluation of the night sky brightness, which is an observational effect integrated along the line of sight and thus lacking the three-dimensional information.

  9. Quantifying surface normal estimation

    NASA Astrophysics Data System (ADS)

    Reid, Robert B.; Oxley, Mark E.; Eismann, Michael T.; Goda, Matthew E.

    2006-05-01

    An inverse algorithm for surface normal estimation from thermal polarimetric imagery was developed and used to quantify the requirements on a priori information. Building on existing knowledge that calculates the degree of linear polarization (DOLP) and the angle of polarization (AOP) for a given surface normal in a forward model (from an object's characteristics to calculation of the DOLP and AOP), this research quantifies the impact of a priori information with the development of an inverse algorithm to estimate surface normals from thermal polarimetric emissions in long-wave infrared (LWIR). The inverse algorithm assumes a polarized infrared focal plane array capturing LWIR intensity images which are then converted to Stokes vectors. Next, the DOLP and AOP are calculated from the Stokes vectors. Last, the viewing angles, θv, to the surface normals are estimated assuming perfect material information about the imaged scene. A sensitivity analysis is presented to quantitatively describe the a priori information's impact on the amount of error in the estimation of surface normals, and a bound is determined given perfect information about an object. Simulations explored the impact of surface roughness (σ) and the real component (n) of a dielectric's complex index of refraction across a range of viewing angles (θv) for a given wavelength of observation.

  10. On quantifying insect movements

    SciTech Connect

    Wiens, J.A.; Crist, T.O. ); Milne, B.T. )

    1993-08-01

    We elaborate on methods described by Turchin, Odendaal, and Rausher for quantifying insect movement pathways. We note the need to scale measurement resolution to the study insects and the questions being asked, and we discuss the use of surveying instrumentation for recording sequential positions of individuals on pathways. We itemize several measures that may be used to characterize movement pathways and illustrate these by comparisons among several Eleodes beetles occurring in shortgrass steppe. The fractal dimension of pathways may provide insights not available from absolute measures of pathway configuration. Finally, we describe a renormalization procedure that may be used to remove sequential interdependence among locations of moving individuals while preserving the basic attributes of the pathway.

  11. A new index quantifying the precipitation extremes

    NASA Astrophysics Data System (ADS)

    Busuioc, Aristita; Baciu, Madalina; Stoica, Cerasela

    2015-04-01

    Meteorological Administration in Romania. These types of records contain the rainfall intensity (mm/minute) over various intervals for which it remains constant. The maximum intensity for each continuous rain over the May-August interval has been calculated for each year. The corresponding time series over the 1951-2008 period have been analysed in terms of their long-term trends and shifts in the mean; the results have been compared to those resulting from other rainfall indices based on daily and hourly data, computed over the same interval, such as: total rainfall amount, maximum daily amount, contribution of total hourly amounts exceeding 10 mm/day, contribution of daily amounts exceeding the 90th percentile, and the 90th, 99th and 99.9th percentiles of 1-hour data. The results show that the proposed index exhibits a coherent and stronger climate signal (significant increase) for all analysed stations compared to the other indices associated with precipitation extremes, which show either no significant change or a weaker signal. This finding indicates that the proposed index is the most appropriate for quantifying the climate change signal of precipitation extremes. We consider that this index is more naturally connected to the maximum intensity of a real rainfall event. The results presented in this study were funded by the Executive Agency for Higher Education, Research, Development and Innovation Funding (UEFISCDI) through the research project CLIMHYDEX, "Changes in climate extremes and associated impact in hydrological events in Romania", code PNII-ID-2011-2-0073 (http://climhydex.meteoromania.ro)
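    A minimal sketch of how such an annual index could be screened for a long-term trend: one maximum rainfall intensity per year and a simple linear trend test; the annual series below is synthetic, and the paper's shift-in-mean analysis is not reproduced:

```python
# Trend test on an annual series of maximum rainfall intensity (one value per year,
# the maximum over all continuous rains in May-August). Data are synthetic.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
years = np.arange(1951, 2009)
annual_max_intensity = (0.8 + 0.004 * (years - years[0])          # weak upward trend
                        + rng.gamma(2.0, 0.15, size=years.size))  # plus noise (mm/min)

slope, intercept, r, p, se = stats.linregress(years, annual_max_intensity)
print(f"trend = {slope * 10:.3f} mm/min per decade, p = {p:.3f}")
```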

  12. Quantifying the Wave Driving of the Stratosphere

    NASA Technical Reports Server (NTRS)

    Newman, Paul A.; Nash, Eric R.

    1999-01-01

    The zonal mean eddy heat flux is directly proportional to the wave activity that propagates from the troposphere into the stratosphere. This quantity is a simple eddy diagnostic which is easily calculated from conventional meteorological analyses. Because this "wave driving" of the stratosphere has a strong impact on the stratospheric temperature, it is necessary to compare the impact of the flux with respect to stratospheric radiative changes caused by greenhouse gas changes. Hence, we must understand the precision and accuracy of the heat flux derived from our global meteorological analyses. Herein, we quantify the stratospheric heat flux using five different meteorological analyses, and show that there are 30% differences between these analyses during the disturbed conditions of the northern hemisphere winter. Such large differences result from the planetary differences in the stationary temperature and meridional wind fields. In contrast, planetary transient waves show excellent agreement amongst these five analyses, and this transient heat flux appears to have a long term downward trend.
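    The diagnostic itself is straightforward to compute from gridded analyses; a sketch of the zonal-mean eddy heat flux [v'T'] on one pressure level, with random fields standing in for analysis data, is:

```python
# Zonal-mean eddy heat flux [v'T'], where primes are deviations from the zonal mean.
# Random (lat, lon) fields stand in for meridional wind and temperature analyses.
import numpy as np

rng = np.random.default_rng(6)
nlat, nlon = 73, 144
v = rng.normal(size=(nlat, nlon))            # meridional wind (m/s)
T = rng.normal(230, 5, size=(nlat, nlon))    # temperature (K)

v_prime = v - v.mean(axis=1, keepdims=True)  # deviation from the zonal mean
T_prime = T - T.mean(axis=1, keepdims=True)
eddy_heat_flux = (v_prime * T_prime).mean(axis=1)   # [v'T'] as a function of latitude
print(eddy_heat_flux.shape)                  # one value per latitude
```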

  13. Quantifier Comprehension in Corticobasal Degeneration

    ERIC Educational Resources Information Center

    McMillan, Corey T.; Clark, Robin; Moore, Peachie; Grossman, Murray

    2006-01-01

    In this study, we investigated patients with focal neurodegenerative diseases to examine a formal linguistic distinction between classes of generalized quantifiers, like "some X" and "less than half of X." Our model of quantifier comprehension proposes that number knowledge is required to understand both first-order and higher-order quantifiers.…

  14. How to quantify structural anomalies in fluids?

    PubMed

    Fomin, Yu D; Ryzhov, V N; Klumov, B A; Tsiok, E N

    2014-07-21

    Some fluids are known to behave anomalously. The so-called structural anomaly, which means that the fluid becomes less structured under isothermal compression, is among the most frequently discussed ones. Several methods for quantifying the degree of structural order are described in the literature and are used for calculating the region of structural anomaly. It is generally thought that all of the structural order determinations yield qualitatively identical results. However, no explicit comparison has been made. This paper presents such a comparison for the first time. The results of some definitions are shown to contradict the intuitive notion of a fluid. On the basis of this comparison, we show that the region of structural anomaly can be most reliably determined from the behavior of the excess entropy. PMID:25053327
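    One widely used structural-order measure of the kind compared in such studies is the two-body (pair-correlation) excess entropy; a hedged numerical sketch, using a crude model g(r) rather than simulation data, is:

```python
# Two-body excess entropy s2 = -2*pi*rho * Int [g ln g - (g - 1)] r^2 dr, evaluated
# numerically from a radial distribution function. The g(r) here is a toy model.
import numpy as np

rho = 0.8                                     # number density (reduced units)
r = np.linspace(0.01, 10.0, 2000)
g = np.where(r < 1.0, 1e-12,                  # hard core, then damped oscillations
             1.0 + np.exp(-(r - 1.0)) * np.cos(2.0 * np.pi * (r - 1.0)))

integrand = (g * np.log(g) - (g - 1.0)) * r**2
dr = r[1] - r[0]
s2 = -2.0 * np.pi * rho * integrand.sum() * dr
print("two-body excess entropy (k_B units):", s2)
```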

  15. Quantifying Loopy Network Architectures

    PubMed Central

    Katifori, Eleni; Magnasco, Marcelo O.

    2012-01-01

    Biology presents many examples of planar distribution and structural networks having dense sets of closed loops. An archetype of this form of network organization is the vasculature of dicotyledonous leaves, which showcases a hierarchically-nested architecture containing closed loops at many different levels. Although a number of approaches have been proposed to measure aspects of the structure of such networks, a robust metric to quantify their hierarchical organization is still lacking. We present an algorithmic framework, the hierarchical loop decomposition, that allows mapping loopy networks to binary trees, preserving in the connectivity of the trees the architecture of the original graph. We apply this framework to investigate computer generated graphs, such as artificial models and optimal distribution networks, as well as natural graphs extracted from digitized images of dicotyledonous leaves and vasculature of rat cerebral neocortex. We calculate various metrics based on the asymmetry, the cumulative size distribution and the Strahler bifurcation ratios of the corresponding trees and discuss the relationship of these quantities to the architectural organization of the original graphs. This algorithmic framework decouples the geometric information (exact location of edges and nodes) from the metric topology (connectivity and edge weight) and it ultimately allows us to perform a quantitative statistical comparison between predictions of theoretical models and naturally occurring loopy graphs. PMID:22701593

  16. Quantifying T Lymphocyte Turnover

    PubMed Central

    De Boer, Rob J.; Perelson, Alan S.

    2013-01-01

    Peripheral T cell populations are maintained by production of naive T cells in the thymus, clonal expansion of activated cells, cellular self-renewal (or homeostatic proliferation), and density dependent cell life spans. A variety of experimental techniques have been employed to quantify the relative contributions of these processes. In modern studies lymphocytes are typically labeled with 5-bromo-2′-deoxyuridine (BrdU), deuterium, or the fluorescent dye carboxy-fluorescein diacetate succinimidyl ester (CFSE), their division history has been studied by monitoring telomere shortening and the dilution of T cell receptor excision circles (TRECs) or the dye CFSE, and clonal expansion has been documented by recording changes in the population densities of antigen specific cells. Proper interpretation of such data in terms of the underlying rates of T cell production, division, and death has proven to be notoriously difficult and involves mathematical modeling. We review the various models that have been developed for each of these techniques, discuss which models seem most appropriate for what type of data, reveal open problems that require better models, and pinpoint how the assumptions underlying a mathematical model may influence the interpretation of data. Elaborating various successful cases where modeling has delivered new insights in T cell population dynamics, this review provides quantitative estimates of several processes involved in the maintenance of naive and memory, CD4+ and CD8+ T cell pools in mice and men. PMID:23313150

  17. Mountain torrents: Quantifying vulnerability and assessing uncertainties

    PubMed Central

    Totschnig, Reinhold; Fuchs, Sven

    2013-01-01

    Vulnerability assessment for elements at risk is an important component in the framework of risk assessment. The vulnerability of buildings affected by torrent processes can be quantified by vulnerability functions that express a mathematical relationship between the degree of loss of individual elements at risk and the intensity of the impacting process. Based on data from the Austrian Alps, we extended a vulnerability curve for residential buildings affected by fluvial sediment transport processes to other torrent processes and other building types. With respect to this goal to merge different data based on different processes and building types, several statistical tests were conducted. The calculation of vulnerability functions was based on a nonlinear regression approach applying cumulative distribution functions. The results suggest that there is no need to distinguish between different sediment-laden torrent processes when assessing vulnerability of residential buildings towards torrent processes. The final vulnerability functions were further validated with data from the Italian Alps and different vulnerability functions presented in the literature. This comparison showed the wider applicability of the derived vulnerability functions. The uncertainty inherent to regression functions was quantified by the calculation of confidence bands. The derived vulnerability functions may be applied within the framework of risk management for mountain hazards within the European Alps. The method is transferable to other mountain regions if the input data needed are available. PMID:27087696
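    A hedged sketch of the regression step described above: fitting a cumulative-distribution-type vulnerability function (degree of loss versus process intensity) by nonlinear least squares; the Weibull CDF form and the data points are illustrative, not the Austrian data set:

```python
# Nonlinear regression of a vulnerability function: degree of loss vs. process intensity,
# using a two-parameter Weibull CDF as the functional form. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def weibull_cdf(intensity, scale, shape):
    return 1.0 - np.exp(-(intensity / scale) ** shape)

intensity = np.array([0.2, 0.5, 0.8, 1.0, 1.5, 2.0, 2.5, 3.0])     # e.g. deposit depth (m)
loss = np.array([0.02, 0.05, 0.15, 0.25, 0.45, 0.65, 0.80, 0.90])  # degree of loss (0-1)

params, cov = curve_fit(weibull_cdf, intensity, loss, p0=[2.0, 1.5])
se = np.sqrt(np.diag(cov))
print("scale, shape =", params, "+/-", se)   # parameter uncertainty -> confidence bands
```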

  18. Quantifying Anderson's fault types

    USGS Publications Warehouse

    Simpson, R.W.

    1997-01-01

    Anderson [1905] explained three basic types of faulting (normal, strike-slip, and reverse) in terms of the shape of the causative stress tensor and its orientation relative to the Earth's surface. Quantitative parameters can be defined which contain information about both shape and orientation [Célérier, 1995], thereby offering a way to distinguish fault-type domains on plots of regional stress fields and to quantify, for example, the degree of normal-faulting tendencies within strike-slip domains. This paper offers a geometrically motivated generalization of Angelier's [1979, 1984, 1990] shape parameters φ and ψ to new quantities named Aφ and Aψ. In their simple forms, Aφ varies from 0 to 1 for normal, 1 to 2 for strike-slip, and 2 to 3 for reverse faulting, and Aψ ranges from 0° to 60°, 60° to 120°, and 120° to 180°, respectively. After scaling, Aφ and Aψ agree to within 2% (or 1°), a difference of little practical significance, although Aφ has smoother analytical properties. A formulation distinguishing horizontal axes as well as the vertical axis is also possible, yielding an Aφ ranging from -3 to +3 and Aψ from -180° to +180°. The geometrically motivated derivation in three-dimensional stress space presented here may aid intuition and offers a natural link with traditional ways of plotting yield and failure criteria. Examples are given, based on models of Bird [1996] and Bird and Kong [1994], of the use of Anderson fault parameters Aφ and Aψ for visualizing tectonic regimes defined by regional stress fields. Copyright 1997 by the American Geophysical Union.
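    The parameter most often quoted from this paper combines the faulting regime with Angelier's stress shape ratio; a hedged sketch using that commonly cited form (the exact expression is an assumption here, not stated in the abstract) is:

```python
# Commonly cited composite fault-type parameter on the 0-3 scale described above:
# A_phi = (n + 0.5) + (-1)^n * (phi - 0.5), with phi = (s2 - s3)/(s1 - s3) and
# n = 0 (normal), 1 (strike-slip), 2 (reverse). Stress values are illustrative.

def a_phi(sigma1, sigma2, sigma3, regime_n):
    phi = (sigma2 - sigma3) / (sigma1 - sigma3)     # Angelier's stress shape ratio
    return (regime_n + 0.5) + (-1) ** regime_n * (phi - 0.5)

# e.g. a strike-slip regime with the intermediate stress midway between the extremes
print(a_phi(sigma1=100.0, sigma2=60.0, sigma3=20.0, regime_n=1))   # -> 1.5
```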

  19. Quantifying the Arctic methane budget

    NASA Astrophysics Data System (ADS)

    Warwick, Nicola; Cain, Michelle; Pyle, John

    2014-05-01

    The Arctic is a major source of atmospheric methane, containing climate-sensitive emissions from natural wetlands and gas hydrates, as well as the fossil fuel industry. Both wetland and gas hydrate methane emissions from the Arctic may increase with increasing temperature, resulting in a positive feedback leading to enhancement of climate warming. It is important that these poorly-constrained sources are quantified by location and strength and their vulnerability to change be assessed. The MAMM project ('Methane and other greenhouse gases in the Arctic: Measurements, process studies and Modelling') addresses these issues as part of the UK NERC Arctic Programme. A global chemistry transport model has been used, along with MAMM and other long term observations, to assess our understanding of the different source and sink terms in the Arctic methane budget. Simulations including methane coloured by source and latitude are used to distinguish between Arctic seasonal variability arising from transport and that arising from changes in Arctic sources and sinks. Methane isotopologue tracers provide a further constraint on modelled methane variability, distinguishing between isotopically light and heavy sources (e.g. wetlands and gas fields). We focus on quantifying the magnitude and seasonal variability of Arctic wetland emissions.

  20. Quantifying actin wave modulation on periodic topography

    NASA Astrophysics Data System (ADS)

    Guven, Can; Driscoll, Meghan; Sun, Xiaoyu; Parker, Joshua; Fourkas, John; Carlsson, Anders; Losert, Wolfgang

    2014-03-01

    Actin is the essential builder of the cell cytoskeleton, whose dynamics are responsible for generating the necessary forces for the formation of protrusions. By exposing amoeboid cells to periodic topographical cues, we show that actin can be directionally guided via inducing preferential polymerization waves. To quantify the dynamics of these actin waves and their interaction with the substrate, we modify a technique from computer vision called ``optical flow.'' We obtain vectors that represent the apparent actin flow and cluster these vectors to obtain patches of newly polymerized actin, which represent actin waves. Using this technique, we compare experimental results, including speed distribution of waves and distance from the wave centroid to the closest ridge, with actin polymerization simulations. We hypothesize the modulation of the activity of nucleation promotion factors on ridges (elevated regions of the surface) as a potential mechanism for the wave-substrate coupling. Funded by NIH grant R01GM085574.
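
    The study modifies its own optical-flow formulation; the sketch below is only a generic illustration of the same idea, using OpenCV's Farneback dense optical flow between consecutive fluorescence frames plus connected-component labeling to pick out coherent patches of apparent actin flow. Frame data and the speed threshold are hypothetical.

    import numpy as np
    import cv2
    from scipy import ndimage

    def actin_wave_patches(frame_prev, frame_next, speed_threshold=1.0):
        """Return labeled patches of fast apparent flow and the dense flow field."""
        flow = cv2.calcOpticalFlowFarneback(frame_prev, frame_next, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        speed = np.linalg.norm(flow, axis=2)              # pixels per frame
        labels, n_patches = ndimage.label(speed > speed_threshold)
        return labels, n_patches, flow

    # Hypothetical 8-bit frames (normally loaded from the microscope time-lapse stack)
    rng = np.random.default_rng(0)
    f0 = (rng.random((128, 128)) * 255).astype(np.uint8)
    f1 = np.roll(f0, 2, axis=1)                           # fake 2-pixel rightward motion
    labels, n_patches, flow = actin_wave_patches(f0, f1)
    print("patches found:", n_patches)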

  1. Terahertz spectroscopy for quantifying refined oil mixtures.

    PubMed

    Li, Yi-nan; Li, Jian; Zeng, Zhou-mo; Li, Jie; Tian, Zhen; Wang, Wei-kui

    2012-08-20

    In this paper, the absorption coefficient spectra of samples prepared as mixtures of gasoline and diesel in different proportions are obtained by terahertz time-domain spectroscopy. To quantify the components of refined oil mixtures, a method is proposed to evaluate the best frequency band for regression analysis. With the data in this frequency band, two-variable (dualistic) linear regression fitting is used to determine the volume fractions of gasoline and diesel in the mixture based on the Beer-Lambert law. The minimum regression R-squared is 0.99967, and the mean error of the fitted volume fraction of 97# gasoline is 4.3%. Results show that refined oil mixtures can be quantitatively analyzed through absorption coefficient spectra at terahertz frequencies, which gives the approach promising application prospects in the storage and transportation of refined oil. PMID:22907017
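
    A sketch of the regression step, under the assumption (following the Beer-Lambert law) that the mixture's absorption-coefficient spectrum in the chosen band is a linear combination of the pure gasoline and pure diesel spectra. The spectra below are placeholder numbers, not measured data.

    import numpy as np

    def volume_fractions(alpha_mix, alpha_gasoline, alpha_diesel):
        A = np.column_stack([alpha_gasoline, alpha_diesel])    # (n_freq, 2) design matrix
        frac, *_ = np.linalg.lstsq(A, alpha_mix, rcond=None)   # unconstrained least squares
        return frac / frac.sum()                               # normalise so fractions sum to 1

    # Hypothetical absorption coefficients on a common frequency grid within the selected band
    alpha_g = np.array([10.0, 12.0, 15.0, 18.0])
    alpha_d = np.array([14.0, 17.0, 22.0, 27.0])
    alpha_m = 0.7 * alpha_g + 0.3 * alpha_d
    print(volume_fractions(alpha_m, alpha_g, alpha_d))   # -> approximately [0.7, 0.3]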

  2. Quantifying Information Flow between Two Chaotic Semiconductor Lasers Using Symbolic Transfer Entropy

    NASA Astrophysics Data System (ADS)

    Li, Nian-Qiang; Pan, Wei; Yan, Lian-Shan; Luo, Bin; Xu, Ming-Feng; Tang, Yi-Long

    2012-03-01

    Symbolic transfer entropy (STE) is employed to quantify the dominant direction of information flow between two chaotic-semiconductor-laser time series. The information flow in unidirectionally and bidirectionally coupled systems was analyzed systematically. Numerical results show that the dependence relationship can be revealed if there exists any coupling between two chaotic semiconductor lasers. More importantly, in both unsynchronized and good synchronization regimes, the STE can be used to quantify the direction of information flow between the lasers, although the former case leads to a better identification. The results thus establish STE as an effective tool for quantifying the direction of information flow between chaotic-laser-based systems.
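
    A self-contained sketch of symbolic transfer entropy, assuming ordinal-pattern (Bandt-Pompe style) symbolisation of each intensity series and a plain histogram estimate of transfer entropy between the symbol sequences; embedding parameters and the toy data are illustrative, not those of the paper.

    import numpy as np
    from collections import Counter
    from itertools import permutations

    def symbolize(x, dim=3, delay=1):
        """Map a time series to ordinal-pattern symbols of embedding dimension `dim`."""
        patterns = {p: i for i, p in enumerate(permutations(range(dim)))}
        n = len(x) - (dim - 1) * delay
        return np.array([patterns[tuple(np.argsort(x[i:i + dim * delay:delay]))] for i in range(n)])

    def transfer_entropy(sym_x, sym_y):
        """T(X -> Y): information that x_t adds about y_{t+1} beyond y_t (bits)."""
        triples = Counter(zip(sym_y[1:], sym_y[:-1], sym_x[:-1]))
        pairs_yy = Counter(zip(sym_y[1:], sym_y[:-1]))
        pairs_yx = Counter(zip(sym_y[:-1], sym_x[:-1]))
        singles_y = Counter(sym_y[:-1])
        n = len(sym_y) - 1
        te = 0.0
        for (y1, y0, x0), c in triples.items():
            p_joint = c / n
            p_cond_full = c / pairs_yx[(y0, x0)]
            p_cond_y = pairs_yy[(y1, y0)] / singles_y[y0]
            te += p_joint * np.log2(p_cond_full / p_cond_y)
        return te

    # Hypothetical example: y is partly a delayed copy of x, so STE(x->y) should exceed STE(y->x)
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    y = np.roll(x, 1) + 0.5 * rng.standard_normal(5000)
    sx, sy = symbolize(x), symbolize(y)
    m = min(len(sx), len(sy))
    print("STE x->y:", transfer_entropy(sx[:m], sy[:m]))
    print("STE y->x:", transfer_entropy(sy[:m], sx[:m]))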

  3. Quantifying Aggressive Behavior in Zebrafish.

    PubMed

    Teles, Magda C; Oliveira, Rui F

    2016-01-01

    Aggression is a complex behavior that influences social relationships and can be seen as adaptive or maladaptive depending on the context and intensity of expression. A model organism suitable for genetic dissection of the underlying neural mechanisms of aggressive behavior is still needed. Zebrafish has already proven to be a powerful vertebrate model organism for the study of normal and pathological brain function. Despite the fact that zebrafish is a gregarious species that forms shoals, when allowed to interact in pairs, both males and females express aggressive behavior and establish dominance hierarchies. Here, we describe two protocols that can be used to quantify aggressive behavior in zebrafish, using two different paradigms: (1) staged fights between real opponents and (2) mirror-elicited fights. We also discuss the methodology for the behavior analysis, the expected results for both paradigms, and the advantages and disadvantages of each paradigm in face of the specific goals of the study. PMID:27464816

  4. Quantifying water diffusion in secondary organic material

    NASA Astrophysics Data System (ADS)

    Price, Hannah; Murray, Benjamin; Mattsson, Johan; O'Sullivan, Daniel; Wilson, Theodore; Zhang, Yue; Martin, Scot

    2014-05-01

    Recent research suggests that some secondary organic aerosol (SOA) is highly viscous under certain atmospheric conditions. This may have important consequences for equilibration timescales, SOA growth, heterogeneous chemistry and ice nucleation. In order to quantify these effects, knowledge of the diffusion coefficients of relevant gas species within aerosol particles is vital. In this work, a Raman isotope tracer method is used to quantify water diffusion coefficients over a range of atmospherically relevant humidity and temperature conditions. D2O is observed as it diffuses from the gas phase into a disk of aqueous solution, without the disk changing in size or viscosity. An analytical solution of Fick's second law is then used with a fitting procedure to determine water diffusion coefficients in reference materials for method validation. The technique is then extended to compounds of atmospheric relevance and α-pinene secondary organic material. We produce water diffusion coefficients from 20 to 80 % RH at 23.5° C for sucrose, levoglucosan, M5AS and MgSO4. For levoglucosan we show that under conditions where a particle bounces, water diffusion in aqueous solutions can be fast (a fraction of a second for a 100 nm radius). For sucrose solutions, we also show that the Stokes-Einstein relation breaks down at high viscosity and cannot be used to predict water diffusion timescales with accuracy. In addition, we also quantify water diffusion coefficients in α-pinene SOM from 20-80% RH and over temperatures from 6 to -30° C. Our results suggest that, at 6° C, water diffusion in α-pinene SOA is not kinetically limited on the second timescale, even at 20% RH. As temperatures decrease, however, diffusion slows and may become an increasingly limiting factor for atmospheric processes. A parameterization for the diffusion coefficient of water in α-pinene secondary organic material, as a function of relative humidity and temperature, is presented. The implications for

  5. Quantifying the quiet epidemic

    PubMed Central

    2014-01-01

    During the late 20th century numerical rating scales became central to the diagnosis of dementia and helped transform attitudes about its causes and prevalence. Concentrating largely on the development and use of the Blessed Dementia Scale, I argue that rating scales served professional ends during the 1960s and 1970s. They helped old age psychiatrists establish jurisdiction over conditions such as dementia and present their field as a vital component of the welfare state, where they argued that ‘reliable modes of diagnosis’ were vital to the allocation of resources. I show how these arguments appealed to politicians, funding bodies and patient groups, who agreed that dementia was a distinct disease and claimed research on its causes and prevention should be designated ‘top priority’. But I also show that worries about the replacement of clinical acumen with technical and depersonalized methods, which could conceivably be applied by anyone, led psychiatrists to stress that rating scales had their limits and could be used only by trained experts. PMID:25866448

  6. Quantifying nonisothermal subsurface soil water evaporation

    NASA Astrophysics Data System (ADS)

    Deol, Pukhraj; Heitman, Josh; Amoozegar, Aziz; Ren, Tusheng; Horton, Robert

    2012-11-01

    Accurate quantification of energy and mass transfer during soil water evaporation is critical for improving understanding of the hydrologic cycle and for many environmental, agricultural, and engineering applications. Drying of soil under radiation boundary conditions results in formation of a dry surface layer (DSL), which is accompanied by a shift in the position of the latent heat sink from the surface to the subsurface. Detailed investigation of evaporative dynamics within this active near-surface zone has mostly been limited to modeling, with few measurements available to test models. Soil column studies were conducted to quantify nonisothermal subsurface evaporation profiles using a sensible heat balance (SHB) approach. Eleven-needle heat pulse probes were used to measure soil temperature and thermal property distributions at the millimeter scale in the near-surface soil. Depth-integrated SHB evaporation rates were compared with mass balance evaporation estimates under controlled laboratory conditions. The results show that the SHB method effectively measured total subsurface evaporation rates with only 0.01-0.03 mm h-1 difference from mass balance estimates. The SHB approach also quantified millimeter-scale nonisothermal subsurface evaporation profiles over a drying event, which has not been previously possible. Thickness of the DSL was also examined using measured soil thermal conductivity distributions near the drying surface. Estimates of the DSL thickness were consistent with observed evaporation profile distributions from SHB. Estimated thickness of the DSL was further used to compute diffusive vapor flux. The diffusive vapor flux also closely matched both mass balance evaporation rates and subsurface evaporation rates estimated from SHB.
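
    A rough sketch of the sensible-heat-balance idea for one subsurface layer (not the authors' code): conductive heat fluxes at the layer boundaries come from measured temperature gradients and thermal conductivities, and the residual of the layer heat balance is attributed to latent heat of evaporation. All numbers are hypothetical.

    LV = 2.45e6      # latent heat of vaporization, J kg-1 (approximate)

    def layer_evaporation(lambda_top, dTdz_top, lambda_bot, dTdz_bot,
                          heat_capacity, dT_dt, thickness, seconds):
        """Evaporation within the layer (mm of water) over `seconds`; z is positive downward."""
        h_in = -lambda_top * dTdz_top                    # conductive flux entering at layer top, W m-2
        h_out = -lambda_bot * dTdz_bot                   # conductive flux leaving at layer bottom, W m-2
        storage = heat_capacity * dT_dt * thickness      # change in stored sensible heat, W m-2
        latent = h_in - h_out - storage                  # heat consumed by evaporation, W m-2
        return latent * seconds / LV                     # kg m-2, i.e. mm of water

    # Hypothetical midday hour for a 6 mm layer just below the dry surface layer
    # (temperature decreases with depth, so dT/dz < 0 with z downward)
    print(layer_evaporation(lambda_top=0.8, dTdz_top=-300.0,
                            lambda_bot=1.2, dTdz_bot=-50.0,
                            heat_capacity=2.0e6, dT_dt=2.0 / 3600, thickness=0.006,
                            seconds=3600))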

  7. Public medical shows.

    PubMed

    Walusinski, Olivier

    2014-01-01

    In the second half of the 19th century, Jean-Martin Charcot (1825-1893) became famous for the quality of his teaching and his innovative neurological discoveries, bringing many French and foreign students to Paris. A hunger for recognition, together with progressive and anticlerical ideals, led Charcot to invite writers, journalists, and politicians to his lessons, during which he presented the results of his work on hysteria. These events became public performances, for which physicians and patients were transformed into actors. Major newspapers ran accounts of these consultations, more like theatrical shows in some respects. The resultant enthusiasm prompted other physicians in Paris and throughout France to try and imitate them. We will compare the form and substance of Charcot's lessons with those given by Jules-Bernard Luys (1828-1897), Victor Dumontpallier (1826-1899), Ambroise-Auguste Liébault (1823-1904), Hippolyte Bernheim (1840-1919), Joseph Grasset (1849-1918), and Albert Pitres (1848-1928). We will also note their impact on contemporary cinema and theatre. PMID:25273491

  8. Working Memory Mechanism in Proportional Quantifier Verification

    ERIC Educational Resources Information Center

    Zajenkowski, Marcin; Szymanik, Jakub; Garraffa, Maria

    2014-01-01

    The paper explores the cognitive mechanisms involved in the verification of sentences with proportional quantifiers (e.g. "More than half of the dots are blue"). The first study shows that the verification of proportional sentences is more demanding than the verification of sentences such as: "There are seven blue and eight yellow…

  9. Holographic mixing quantified

    SciTech Connect

    Batell, Brian; Gherghetta, Tony

    2007-08-15

    We compute the precise elementary/composite field content of mass eigenstates in holographic duals of warped models in a slice of AdS{sub 5}. This is accomplished by decomposing the bulk fields not in the usual Kaluza-Klein basis, but rather into a holographic basis of 4D fields, corresponding to purely elementary source or conformal field theory (CFT) composite fields. Generically, this decomposition yields kinetic and mass mixing between the elementary and composite sectors of the holographic theory. Depending on where the bulk zero mode is localized, the elementary/composite content may differ radically, which we show explicitly for several examples including the bulk Randall-Sundrum graviton, bulk gauge boson, and Higgs boson.

  10. Television Quiz Show Simulation

    ERIC Educational Resources Information Center

    Hill, Jonnie Lynn

    2007-01-01

    This article explores the simulation of four television quiz shows for students in China studying English as a foreign language (EFL). It discusses the adaptation and implementation of television quiz shows and how the students reacted to them.

  11. The Great Cometary Show

    NASA Astrophysics Data System (ADS)

    2007-01-01

    The ESO Very Large Telescope Interferometer, which allows astronomers to scrutinise objects with a precision equivalent to that of a 130-m telescope, is proving itself an unequalled success every day. One of the latest instruments installed, AMBER, has led to a flurry of scientific results, an anthology of which is being published this week as special features in the research journal Astronomy & Astrophysics. ESO PR Photo 06a/07: The AMBER Instrument. "With its unique capabilities, the VLT Interferometer (VLTI) has created itself a niche in which it provides answers to many astronomical questions, from the shape of stars, to discs around stars, to the surroundings of the supermassive black holes in active galaxies," says Jorge Melnick (ESO), the VLT Project Scientist. The VLTI has led to 55 scientific papers already and is in fact producing more than half of the interferometric results worldwide. "With the capability of AMBER to combine up to three of the 8.2-m VLT Unit Telescopes, we can really achieve what nobody else can do," added Fabien Malbet, from the LAOG (France) and the AMBER Project Scientist. Eleven articles will appear this week in Astronomy & Astrophysics' special AMBER section. Three of them describe the unique instrument, while the other eight reveal completely new results about the early and late stages in the life of stars. ESO PR Photo 06b/07: The Inner Winds of Eta Carinae. The first results presented in this issue cover various fields of stellar and circumstellar physics. Two papers deal with very young solar-like stars, offering new information about the geometry of the surrounding discs and associated outflowing winds. Other articles are devoted to the study of hot active stars of particular interest: Alpha Arae, Kappa Canis Majoris, and CPD -57°2874. They provide new, precise information about their rotating gas envelopes. An important new result concerns the enigmatic object Eta Carinae. Using AMBER with

  12. Children's interpretations of general quantifiers, specific quantifiers, and generics

    PubMed Central

    Gelman, Susan A.; Leslie, Sarah-Jane; Was, Alexandra M.; Koch, Christina M.

    2014-01-01

    Recently, several scholars have hypothesized that generics are a default mode of generalization, and thus that young children may at first treat quantifiers as if they were generic in meaning. To address this issue, the present experiment provides the first in-depth, controlled examination of the interpretation of generics compared to both general quantifiers ("all Xs", "some Xs") and specific quantifiers ("all of these Xs", "some of these Xs"). We provided children (3 and 5 years) and adults with explicit frequency information regarding properties of novel categories, to chart when "some", "all", and generics are deemed appropriate. The data reveal three main findings. First, even 3-year-olds distinguish generics from quantifiers. Second, when children make errors, they tend to be in the direction of treating quantifiers like generics. Third, children were more accurate when interpreting specific versus general quantifiers. We interpret these data as providing evidence for the position that generics are a default mode of generalization, especially when reasoning about kinds. PMID:25893205

  13. Quantifying the seismicity on Taiwan

    NASA Astrophysics Data System (ADS)

    Wu, Yi-Hsuan; Chen, Chien-Chih; Turcotte, Donald L.; Rundle, John B.

    2013-07-01

    We quantify the seismicity on the island of Taiwan using the frequency-magnitude statistics of earthquakes since 1900. A break in Gutenberg-Richter scaling for large earthquakes in global seismicity has been observed; this break is also observed in our Taiwan study. The seismic data from the Central Weather Bureau Seismic Network are in good agreement with the Gutenberg-Richter relation taking b ≈ 1 when M < 7. For large earthquakes, M ≥ 7, the seismic data fit Gutenberg-Richter scaling with b ≈ 1.5. If the Gutenberg-Richter scaling for M < 7 earthquakes is extrapolated to larger earthquakes, we would expect a M > 8 earthquake in the study region about every 25 yr. However, our analysis shows a lower frequency of occurrence of large earthquakes, so that the expected recurrence interval of M > 8 earthquakes is about 200 yr. The level of seismicity for smaller earthquakes on Taiwan is about 12 times greater than in Southern California, and the possibility of a M ≈ 9 earthquake north or south of Taiwan cannot be ruled out. In light of the Fukushima, Japan nuclear disaster, we also discuss the implications of our study for the three operating nuclear power plants on the coast of Taiwan.
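
    A small worked sketch of the Gutenberg-Richter extrapolation described above. The a-value is hypothetical (chosen so the b = 1 extrapolation reproduces the ~25 yr figure); the paper's steeper large-event branch is represented only by the quoted ~200 yr recurrence, not refitted here.

    def annual_rate(mag, a, b):
        """Cumulative annual number of earthquakes with magnitude >= mag: log10 N = a - b*M."""
        return 10.0 ** (a - b * mag)

    a = 6.6                                    # hypothetical regional a-value (events per year)
    rate_m8_extrap = annual_rate(8.0, a, b=1.0)
    print("extrapolated M>=8 recurrence: about %.0f yr" % (1.0 / rate_m8_extrap))   # ~25 yr

    # The observed steeper branch (b ~ 1.5 above M ~ 7) implies a lower M>=8 rate;
    # a rate of 0.005 events/yr corresponds to the ~200 yr recurrence quoted above.
    print("recurrence at 0.005 events/yr: %.0f yr" % (1.0 / 0.005))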

  14. Quantifying renewable groundwater stress with GRACE

    NASA Astrophysics Data System (ADS)

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min-Hui; Reager, John T.; Famiglietti, James S.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-07-01

    Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human-dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE-based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions.
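
    A sketch of the Renewable Groundwater Stress ratio described above: groundwater use divided by availability (mean annual recharge). The paper maps the four sign combinations of use and availability onto its four named regimes; that mapping is not reproduced here, and the numbers are hypothetical.

    def renewable_groundwater_stress(use_km3_yr, recharge_km3_yr):
        """Return the RGS ratio and the sign combination that determines the stress regime."""
        ratio = use_km3_yr / recharge_km3_yr
        signs = ("use > 0" if use_km3_yr > 0 else "use <= 0",
                 "recharge > 0" if recharge_km3_yr > 0 else "recharge <= 0")
        return ratio, signs

    # Hypothetical aquifers: GRACE-style net use estimates vs. mean annual recharge
    print(renewable_groundwater_stress(4.0, 1.5))    # use far exceeds recharge
    print(renewable_groundwater_stress(-0.3, 2.0))   # storage increasing (negative use)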

  15. Quantifying renewable groundwater stress with GRACE

    PubMed Central

    Richey, Alexandra S.; Thomas, Brian F.; Lo, Min‐Hui; Reager, John T.; Voss, Katalyn; Swenson, Sean; Rodell, Matthew

    2015-01-01

    Abstract Groundwater is an increasingly important water supply source globally. Understanding the amount of groundwater used versus the volume available is crucial to evaluate future water availability. We present a groundwater stress assessment to quantify the relationship between groundwater use and availability in the world's 37 largest aquifer systems. We quantify stress according to a ratio of groundwater use to availability, which we call the Renewable Groundwater Stress ratio. The impact of quantifying groundwater use based on nationally reported groundwater withdrawal statistics is compared to a novel approach to quantify use based on remote sensing observations from the Gravity Recovery and Climate Experiment (GRACE) satellite mission. Four characteristic stress regimes are defined: Overstressed, Variable Stress, Human‐dominated Stress, and Unstressed. The regimes are a function of the sign of use (positive or negative) and the sign of groundwater availability, defined as mean annual recharge. The ability to mitigate and adapt to stressed conditions, where use exceeds sustainable water availability, is a function of economic capacity and land use patterns. Therefore, we qualitatively explore the relationship between stress and anthropogenic biomes. We find that estimates of groundwater stress based on withdrawal statistics are unable to capture the range of characteristic stress regimes, especially in regions dominated by sparsely populated biome types with limited cropland. GRACE‐based estimates of use and stress can holistically quantify the impact of groundwater use on stress, resulting in both greater magnitudes of stress and more variability of stress between regions. PMID:26900185

  16. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2014-07-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with the large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (Coefficient of Variation of 12% for standards, 4% for ambient samples), and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution and Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, road-side, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per air volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in southeast US. However, the greater heterogeneity in the intrinsic DTT activity (per PM mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the instrument can also be used to determine oxidative potential with other acellular assays.

  17. The Wordpath Show.

    ERIC Educational Resources Information Center

    Anderton, Alice

    The Intertribal Wordpath Society is a nonprofit educational corporation formed to promote the teaching, status, awareness, and use of Oklahoma Indian languages. The Society produces "Wordpath," a weekly 30-minute public access television show about Oklahoma Indian languages and the people who are teaching and preserving them. The show aims to…

  18. Detecting, visualising, and quantifying mucins.

    PubMed

    Harrop, Ceri A; Thornton, David J; McGuckin, Michael A

    2012-01-01

    The extreme size, extensive glycosylation, and gel-forming nature of mucins make them a challenge to work with, and methodologies for the detection of mucins must take into consideration these features to ensure that one obtains both accurate and meaningful results. Understanding and appreciating the nature of mucins affords the researcher a valuable toolkit which can be used to full advantage in detecting, quantifying, and visualising mucins. The employment of a combinatorial approach to mucin detection, using antibody, chemical, and lectin detection methods, allows important information to be gleaned regarding the size, extent of glycosylation, specific mucin species, and distribution of mucins within a given sample. In this chapter, the researcher is guided through considerations into the structure of mucins and how this both affects the detection of mucins and can be used to full advantage. Techniques including ELISA, dot/slot blotting, and Western blotting, use of lectins and antibodies in mucin detection on membranes as well as immunohistochemistry and immunofluorescence on both tissues and cells grown on Transwell™ inserts are described. Notes along with each section advise the researcher on best practice and describe any associated limitations of a particular technique, from which the researcher can further develop a particular protocol. PMID:22259129

  19. A Holographic Road Show.

    ERIC Educational Resources Information Center

    Kirkpatrick, Larry D.; Rugheimer, Mac

    1979-01-01

    Describes the viewing sessions and the holograms of a holographic road show. The traveling exhibits, believed to stimulate interest in physics, include a wide variety of holograms and demonstrate several physical principles. (GA)

  20. A semi-automated system for quantifying the oxidative potential of ambient particles in aqueous extracts using the dithiothreitol (DTT) assay: results from the Southeastern Center for Air Pollution and Epidemiology (SCAPE)

    NASA Astrophysics Data System (ADS)

    Fang, T.; Verma, V.; Guo, H.; King, L. E.; Edgerton, E. S.; Weber, R. J.

    2015-01-01

    A variety of methods are used to measure the capability of particulate matter (PM) to catalytically generate reactive oxygen species (ROS) in vivo, also defined as the aerosol oxidative potential. A widely used measure of aerosol oxidative potential is the dithiothreitol (DTT) assay, which monitors the depletion of DTT (a surrogate for cellular antioxidants) as catalyzed by the redox-active species in PM. However, a major constraint in the routine use of the DTT assay for integrating it with large-scale health studies is its labor-intensive and time-consuming protocol. To specifically address this concern, we have developed a semi-automated system for quantifying the oxidative potential of aerosol liquid extracts using the DTT assay. The system, capable of unattended analysis at one sample per hour, has a high analytical precision (coefficient of variation of 15% for positive control, 4% for ambient samples) and reasonably low limit of detection (0.31 nmol min-1). Comparison of the automated approach with the manual method conducted on ambient samples yielded good agreement (slope = 1.08 ± 0.12, r2 = 0.92, N = 9). The system was utilized for the Southeastern Center for Air Pollution & Epidemiology (SCAPE) to generate an extensive data set on DTT activity of ambient particles collected from contrasting environments (urban, roadside, and rural) in the southeastern US. We find that water-soluble PM2.5 DTT activity on a per-air-volume basis was spatially uniform and often well correlated with PM2.5 mass (r = 0.49 to 0.88), suggesting regional sources contributing to the PM oxidative potential in the southeastern US. The correlation may also suggest a mechanistic explanation (oxidative stress) for observed PM2.5 mass-health associations. The heterogeneity in the intrinsic DTT activity (per-PM-mass basis) across seasons indicates variability in the DTT activity associated with aerosols from sources that vary with season. Although developed for the DTT assay, the

  1. A methodology for quantifying seated lumbar curvatures.

    PubMed

    Leitkam, Samuel T; Bush, Tamara Reid; Li, Mingfei

    2011-11-01

    To understand the role seating plays in the support of posture and spinal articulation, it is necessary to study the interface between a human and the seat. However, a method to quantify lumbar curvature in commercially available unmodified seats does not currently exist. This work sought to determine if the lumbar curvature for normal ranges of seated posture could be documented by using body landmarks located on the anterior portion of the body. The development of such a methodology will allow researchers to evaluate spinal articulation of a seated subject while in standard, commercially available seats and chairs. Anterior measurements of bony landmarks were used to quantify the relative positions of the ribcage and pelvis while simultaneous posterior measurements were made of lumbar curvature. The relationship between the anterior and the posterior measures was compared. The predictive capacity of this approach was evaluated by determining linear and second-order regressions for each of the four postures across all subjects and conducting a leave-one-out cross validation. The relationships between the anterior and posterior measures were approximated by linear and second-order polynomial regressions (r² = 0.829 and 0.935, respectively) across all postures. The quantitative analysis showed that openness had a significant relationship with lumbar curvature, and a first-order regression was superior to a second-order regression. Average standard errors in the prediction were 5.9° for the maximum kyphotic posture, 9.9° for the comfortable posture, 12.8° for the straight and tall, and 22.2° for the maximum lordotic posture. These results show predictions of lumbar curvature are possible in seated postures by using a motion capture system and anterior measures. This method of lumbar curvature prediction shows potential for use in the assessment of seated spinal curvatures and the corresponding design of seating to accommodate those curvatures; however
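
    A sketch, with made-up data, of the regression-plus-cross-validation idea above: predict posterior lumbar curvature from an anterior "openness" measure with first- and second-order fits, and estimate prediction error by leave-one-out cross validation.

    import numpy as np

    openness = np.array([95., 105., 115., 120., 130., 140., 150., 160.])   # degrees, hypothetical
    lumbar   = np.array([-25., -15., -5., 0., 10., 18., 28., 35.])         # degrees, hypothetical

    def loo_errors(x, y, degree=1):
        """Leave-one-out prediction errors for a polynomial fit of the given degree."""
        errs = []
        for i in range(len(x)):
            mask = np.arange(len(x)) != i
            coef = np.polyfit(x[mask], y[mask], degree)
            errs.append(np.polyval(coef, x[i]) - y[i])
        return np.array(errs)

    for deg in (1, 2):
        e = loo_errors(openness, lumbar, deg)
        print("order %d: RMS leave-one-out error = %.1f deg" % (deg, np.sqrt(np.mean(e ** 2))))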

  2. Show What You Know

    ERIC Educational Resources Information Center

    Eccleston, Jeff

    2007-01-01

    Big things come in small packages. This saying came to the mind of the author after he created a simple math review activity for his fourth grade students. Though simple, it has proven to be extremely advantageous in reinforcing math concepts. He uses this activity, which he calls "Show What You Know," often. This activity provides the perfect…

  3. The Ozone Show.

    ERIC Educational Resources Information Center

    Mathieu, Aaron

    2000-01-01

    Uses a talk show activity for a final assessment tool for students to debate about the ozone hole. Students are assessed on five areas: (1) cooperative learning; (2) the written component; (3) content; (4) self-evaluation; and (5) peer evaluation. (SAH)

  4. Honored Teacher Shows Commitment.

    ERIC Educational Resources Information Center

    Ratte, Kathy

    1987-01-01

    Part of the acceptance speech of the 1985 National Council for the Social Studies Teacher of the Year, this article describes the censorship experience of this honored social studies teacher. The incident involved the showing of a videotape version of the feature film entitled "The Seduction of Joe Tynan." (JDH)

  5. Talk Show Science.

    ERIC Educational Resources Information Center

    Moore, Mitzi Ruth

    1992-01-01

    Proposes having students perform skits in which they play the roles of the science concepts they are trying to understand. Provides the dialog for a skit in which hot and cold gas molecules are interviewed on a talk show to study how these properties affect wind, rain, and other weather phenomena. (MDH)

  6. Stage a Water Show

    ERIC Educational Resources Information Center

    Frasier, Debra

    2008-01-01

    In the author's book titled "The Incredible Water Show," the characters from "Miss Alaineus: A Vocabulary Disaster" used an ocean of information to stage an inventive performance about the water cycle. In this article, the author relates how she turned the story into hands-on science teaching for real-life fifth-grade students. The author also…

  7. Showing What They Know

    ERIC Educational Resources Information Center

    Cech, Scott J.

    2008-01-01

    Having students show their skills in three dimensions, known as performance-based assessment, dates back at least to Socrates. Individual schools such as Barrington High School--located just outside of Providence--have been requiring students to actively demonstrate their knowledge for years. Rhode Island's high school graduating class became…

  8. Taking in a Show.

    PubMed

    Boden, Timothy W

    2016-01-01

    Many medical practices have cut back on education and staff development expenses, especially those costs associated with conventions and conferences. But there are hard-to-value returns on your investment in these live events--beyond the obvious benefits of acquired knowledge and skills. Major vendors still exhibit their services and wares at many events, and the exhibit hall is a treasure-house of information and resources for the savvy physician or administrator. Make and stick to a purposeful plan to exploit the trade show. You can compare products, gain new insights and ideas, and even negotiate better deals with representatives anxious to realize returns on their exhibition investments. PMID:27249887

  9. Not a "reality" show.

    PubMed

    Wrong, Terence; Baumgart, Erica

    2013-01-01

    The authors of the preceding articles raise legitimate questions about patient and staff rights and the unintended consequences of allowing ABC News to film inside teaching hospitals. We explain why we regard their fears as baseless and not supported by what we heard from individuals portrayed in the filming, our decade-long experience making medical documentaries, and the full un-aired context of the scenes shown in the broadcast. The authors don't and can't know what conversations we had, what documents we reviewed, and what protections we put in place in each televised scene. Finally, we hope to correct several misleading examples cited by the authors as well as their offhand mischaracterization of our program as a "reality" show. PMID:23631336

  10. Quantifying uncertainty in stable isotope mixing models

    NASA Astrophysics Data System (ADS)

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-01

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, Stable Isotope Analysis in R (SIAR), a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated
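
    A simplified sketch of a pure Monte Carlo style mixing calculation for a two-tracer (δ15N, δ18O), three-source nitrate problem: source compositions are perturbed within assumed uncertainties, the exact mass balance is solved, and only physically valid fraction sets are kept. Source values, uncertainties, and the sample are hypothetical; the studies above use more sources.

    import numpy as np

    rng = np.random.default_rng(1)
    sources_mean = np.array([[+5.0,  -2.0],    # e.g. soil-derived nitrate (d15N, d18O)
                             [+15.0, +1.0],    # e.g. manure/septic
                             [-2.0, +60.0]])   # e.g. atmospheric deposition
    sources_sd = np.full_like(sources_mean, 1.5)
    sample = np.array([+7.0, +10.0])

    kept = []
    for _ in range(20000):
        src = rng.normal(sources_mean, sources_sd)       # perturbed source compositions
        A = np.vstack([src.T, np.ones(3)])               # two tracer balances + sum-to-one constraint
        b = np.append(sample, 1.0)
        f = np.linalg.solve(A, b)
        if np.all(f >= 0) and np.all(f <= 1):             # keep only physical mixing fractions
            kept.append(f)

    kept = np.array(kept)
    print("accepted", len(kept), "of 20000 draws")
    print("mean mixing fractions:", kept.mean(axis=0).round(3))
    print("2.5th-97.5th percentiles:", np.percentile(kept, [2.5, 97.5], axis=0).round(3))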

  11. Quantifying uncertainty in stable isotope mixing models

    SciTech Connect

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, SIAR [Parnell et al., 2010] a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the

  12. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES Beta

    Davis, Paul; Syme, James; Heikoop, Jeffrey; Fessenden-Rahn, Julianna; Perkins, George; Newman, Brent; Chrystal, Abbey E.; Hagerty, Shannon B.

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods, SIAR [Parnell et al., 2010] a pure Monte Carlo technique (PMC), and Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O) but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened where any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors where solutions included sources that were not contributing to the sample. In Phase III some sources were eliminated based on assumed site knowledge and assumed nitrate concentrations, substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated

  13. Incremental comprehension of spoken quantifier sentences: Evidence from brain potentials.

    PubMed

    Freunberger, Dominik; Nieuwland, Mante S

    2016-09-01

    Do people incrementally incorporate the meaning of quantifier expressions to understand an unfolding sentence? Most previous studies concluded that quantifiers do not immediately influence how a sentence is understood based on the observation that online N400-effects differed from offline plausibility judgments. Those studies, however, used serial visual presentation (SVP), which involves unnatural reading. In the current ERP-experiment, we presented spoken positive and negative quantifier sentences ("Practically all/practically no postmen prefer delivering mail, when the weather is good/bad during the day"). Different from results obtained in a previously reported SVP-study (Nieuwland, 2016) sentence truth-value N400 effects occurred in positive and negative quantifier sentences alike, reflecting fully incremental quantifier comprehension. This suggests that the prosodic information available during spoken language comprehension supports the generation of online predictions for upcoming words and that, at least for quantifier sentences, comprehension of spoken language may proceed more incrementally than comprehension during SVP reading. PMID:27346365

  14. Use of the Concept of Equivalent Biologically Effective Dose (BED) to Quantify the Contribution of Hyperthermia to Local Tumor Control in Radiohyperthermia Cervical Cancer Trials, and Comparison With Radiochemotherapy Results

    SciTech Connect

    Plataniotis, George A.; Dale, Roger G.

    2009-04-01

    Purpose: To express the magnitude of contribution of hyperthermia to local tumor control in radiohyperthermia (RT/HT) cervical cancer trials, in terms of the radiation-equivalent biologically effective dose (BED) and to explore the potential of the combined modalities in the treatment of this neoplasm. Materials and Methods: Local control rates of both arms of each study (RT vs. RT+HT) reported from randomized controlled trials (RCT) on concurrent RT/HT for cervical cancer were reviewed. By comparing the two tumor control probabilities (TCPs) from each study, we calculated the HT-related log cell-kill and then expressed it in terms of the number of 2 Gy fraction equivalents, for a range of tumor volumes and radiosensitivities. We have compared the contribution of each modality and made some exploratory calculations on the TCPs that might be expected from a combined trimodality treatment (RT+CT+HT). Results: The HT-equivalent number of 2-Gy fractions ranges from 0.6 to 4.8 depending on radiosensitivity. Opportunities for clinically detectable improvement by the addition of HT are only available in tumors with an alpha value in the approximate range of 0.22-0.28 Gy{sup -1}. A combined treatment (RT+CT+HT) is not expected to improve prognosis in radioresistant tumors. Conclusion: The most significant improvements in TCP, which may result from the combination of RT/CT/HT for locally advanced cervical carcinomas, are likely to be limited only to those patients with tumors of relatively low-intermediate radiosensitivity.
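
    A back-of-envelope sketch of this kind of calculation, assuming a Poisson TCP model and the linear-quadratic model (not necessarily the authors' exact formulation): an observed local-control improvement is converted into an equivalent number of extra 2-Gy fractions. The control rates, alpha, and alpha/beta values are illustrative.

    import numpy as np

    def extra_2gy_fractions(tcp_rt, tcp_rt_ht, alpha=0.25, alpha_beta=10.0, d=2.0):
        """Equivalent number of 2-Gy fractions producing the same extra log cell kill as hyperthermia."""
        ln_surviving_rt = np.log(-np.log(tcp_rt))          # ln of mean surviving clonogen number (Poisson TCP)
        ln_surviving_ht = np.log(-np.log(tcp_rt_ht))
        delta_ln_kill = ln_surviving_rt - ln_surviving_ht  # extra cell kill attributable to hyperthermia
        kill_per_fraction = alpha * d + (alpha / alpha_beta) * d ** 2
        return delta_ln_kill / kill_per_fraction

    # Hypothetical trial arms: 50% local control with RT alone vs 65% with RT + HT
    print("%.1f equivalent 2-Gy fractions" % extra_2gy_fractions(0.50, 0.65))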

  15. Quantifying reliability uncertainty : a proof of concept.

    SciTech Connect

    Diegert, Kathleen V.; Dvorack, Michael A.; Ringland, James T.; Mundt, Michael Joseph; Huzurbazar, Aparna; Lorio, John F.; Fatherley, Quinn; Anderson-Cook, Christine; Wilson, Alyson G.; Zurn, Rena M.

    2009-10-01

    This paper develops Classical and Bayesian methods for quantifying the uncertainty in reliability for a system of mixed series and parallel components for which both go/no-go and variables data are available. Classical methods focus on uncertainty due to sampling error. Bayesian methods can explore both sampling error and other knowledge-based uncertainties. To date, the reliability community has focused on qualitative statements about uncertainty because there was no consensus on how to quantify them. This paper provides a proof of concept that workable, meaningful quantification methods can be constructed. In addition, the application of the methods demonstrated that the results from the two fundamentally different approaches can be quite comparable. In both approaches, results are sensitive to the details of how one handles components for which no failures have been seen in relatively few tests.
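
    A conceptual sketch of the Bayesian half of such an approach (not the report's code): go/no-go test data per component give Beta posteriors on component reliability, which are propagated by Monte Carlo through a simple series/parallel structure. Test counts and the system layout are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    n_draws = 100_000

    def posterior_draws(successes, trials, a0=1.0, b0=1.0):
        """Beta(a0 + successes, b0 + failures) posterior draws for a component's success probability."""
        return rng.beta(a0 + successes, b0 + (trials - successes), n_draws)

    r_a = posterior_draws(48, 50)        # component A, in series
    r_b = posterior_draws(19, 20)        # component B, in series
    r_c1 = posterior_draws(9, 10)        # components C1 and C2 are redundant (parallel)
    r_c2 = posterior_draws(9, 10)

    r_parallel = 1.0 - (1.0 - r_c1) * (1.0 - r_c2)
    r_system = r_a * r_b * r_parallel    # series combination of A, B, and the parallel pair

    print("posterior mean system reliability: %.3f" % r_system.mean())
    print("90%% credible interval: %.3f - %.3f" % tuple(np.percentile(r_system, [5, 95])))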

  16. Quantifying and measuring cyber resiliency

    NASA Astrophysics Data System (ADS)

    Cybenko, George

    2016-05-01

    Cyber resiliency has become an increasingly attractive research and operational concept in cyber security. While several metrics have been proposed for quantifying cyber resiliency, a considerable gap remains between those metrics and operationally measurable and meaningful concepts that can be empirically determined in a scientific manner. This paper describes a concrete notion of cyber resiliency that can be tailored to meet specific needs of organizations that seek to introduce resiliency into their assessment of their cyber security posture.

  17. What Do Blood Tests Show?

    MedlinePlus

    ... shows the ranges for blood glucose levels after 8 to 12 hours of fasting (not eating). It shows the normal range and the abnormal ranges that are a sign of prediabetes or diabetes. Plasma Glucose Results (mg/dL)* Diagnosis 70 to 99 ...

  18. Children's school-breakfast reports and school-lunch reports (in 24-h dietary recalls): conventional and reporting-error-sensitive measures show inconsistent accuracy results for retention interval and breakfast location.

    PubMed

    Baxter, Suzanne D; Guinn, Caroline H; Smith, Albert F; Hitchcock, David B; Royer, Julie A; Puryear, Megan P; Collins, Kathleen L; Smith, Alyssa L

    2016-04-14

    Validation-study data were analysed to investigate retention interval (RI) and prompt effects on the accuracy of fourth-grade children's reports of school-breakfast and school-lunch (in 24-h recalls), and the accuracy of school-breakfast reports by breakfast location (classroom; cafeteria). Randomly selected fourth-grade children at ten schools in four districts were observed eating school-provided breakfast and lunch, and were interviewed under one of eight conditions created by crossing two RIs ('short'--prior-24-hour recall obtained in the afternoon and 'long'--previous-day recall obtained in the morning) with four prompts ('forward'--distant to recent, 'meal name'--breakfast, etc., 'open'--no instructions, and 'reverse'--recent to distant). Each condition had sixty children (half were girls). Of 480 children, 355 and 409 reported meals satisfying criteria for reports of school-breakfast and school-lunch, respectively. For breakfast and lunch separately, a conventional measure--report rate--and reporting-error-sensitive measures--correspondence rate and inflation ratio--were calculated for energy per meal-reporting child. Correspondence rate and inflation ratio--but not report rate--showed better accuracy for school-breakfast and school-lunch reports with the short RI than with the long RI; this pattern was not found for some prompts for each sex. Correspondence rate and inflation ratio showed better school-breakfast report accuracy for the classroom than for cafeteria location for each prompt, but report rate showed the opposite. For each RI, correspondence rate and inflation ratio showed better accuracy for lunch than for breakfast, but report rate showed the opposite. When choosing RI and prompts for recalls, researchers and practitioners should select a short RI to maximise accuracy. Recommendations for prompt selections are less clear. As report rates distort validation-study accuracy conclusions, reporting-error-sensitive measures are recommended. PMID

  19. Quantifying potential recharge in mantled sinkholes using ERT.

    PubMed

    Schwartz, Benjamin F; Schreiber, Madeline E

    2009-01-01

    Potential recharge through thick soils in mantled sinkholes was quantified using differential electrical resistivity tomography (ERT). Conversion of time series two-dimensional (2D) ERT profiles into 2D volumetric water content profiles using a numerically optimized form of Archie's law allowed us to monitor temporal changes in water content in soil profiles up to 9 m in depth. Combining Penman-Monteith daily potential evapotranspiration (PET) and daily precipitation data with potential recharge calculations for three sinkhole transects indicates that potential recharge occurred only during brief intervals over the study period and ranged from 19% to 31% of cumulative precipitation. Spatial analysis of ERT-derived water content showed that infiltration occurred both on sinkhole flanks and in sinkhole bottoms. Results also demonstrate that mantled sinkholes can act as regions of both rapid and slow recharge. Rapid recharge is likely the result of flow through macropores (such as root casts and thin gravel layers), while slow recharge is the result of unsaturated flow through fine-grained sediments. In addition to developing a new method for quantifying potential recharge at the field scale in unsaturated conditions, we show that mantled sinkholes are an important component of storage in a karst system. PMID:18823398
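
    A sketch of the Archie's-law step described above: converting inverted bulk resistivity to volumetric water content. The petrophysical parameters (a, m, n), porosity, and pore-water resistivity are site-specific values that the authors optimized numerically; the numbers below are placeholders.

    def water_content_from_resistivity(rho_bulk, rho_water=20.0, porosity=0.45,
                                       a=1.0, m=2.0, n=2.0):
        """Volumetric water content from Archie's law: rho_bulk = a * rho_w * phi**-m * Sw**-n."""
        sw = (a * rho_water / (rho_bulk * porosity ** m)) ** (1.0 / n)
        return porosity * min(sw, 1.0)          # theta = phi * Sw, capped at saturation

    for rho in (80.0, 200.0, 800.0):            # ohm-m, e.g. from a time-lapse ERT inversion
        print(rho, "ohm-m ->", round(water_content_from_resistivity(rho), 3), "m3/m3")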

  20. Tracking and Quantifying Objects and Non-Cohesive Substances

    ERIC Educational Resources Information Center

    van Marle, Kristy; Wynn, Karen

    2011-01-01

    The present study tested infants' ability to assess and compare quantities of a food substance. Contrary to previous findings, the results suggest that by 10 months of age infants can quantify non-cohesive substances, and that this ability is different in important ways from their ability to quantify discrete objects: (1) In contrast to even much…

  1. Quantifying Order in Poly(3-hexylthiophene)

    NASA Astrophysics Data System (ADS)

    Snyder, Chad; Nieuwendaal, Ryan; Delongchamp, Dean; Luscombe, Christine; Sista, Prakash; Boyd, Shane

    2014-03-01

    While poly(3-hexylthiophene) (P3HT) is one of the most studied polymers in organic electronics, it remains one of the most challenging in terms of quantitative measures of its order, e.g., crystallinity. To address this challenge, we prepared a series of highly regioregular P3HT fractions ranging from 3.3 kg/mol to 23 kg/mol. Using this series plus a high molar mass (62 kg/mol) commercial material, we compare different metrics for order in P3HT via calorimetry, solid state NMR, and x-ray diffraction. We reconcile the results of our work with those of recent studies on oligomeric (3-hexylthiophenes). One challenge of quantifying low molar mass P3HT samples via DSC is a thermal fractionation effect due to varying chain lengths. We quantify these effects in our molar mass series, and a clear crossover region from extended chain crystals to chain folded crystals is identified through the thermal fractionation process. New values for the enthalpy of fusion of high molar mass P3HT and its equilibrium melting temperature are established through our work. Another result of our research is the validation of high heating rate DSC methods for quantifying crystallinity in P3HT samples with device relevant film thicknesses.
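
    For context, DSC crystallinity is usually taken as the measured melting enthalpy divided by the enthalpy of fusion of a hypothetical 100% crystalline sample. The reference enthalpy in this sketch is a placeholder; the study above derives its own value for high molar mass P3HT.

    def dsc_crystallinity(delta_h_measured_J_per_g, delta_h_100pct_J_per_g):
        """Mass fraction crystallinity from DSC melting enthalpies."""
        return delta_h_measured_J_per_g / delta_h_100pct_J_per_g

    print("%.0f%% crystalline" % (100 * dsc_crystallinity(20.0, 50.0)))   # hypothetical numbers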

  2. Anaphoric reference to quantified antecedents: an event-related brain potential study.

    PubMed

    Filik, Ruth; Leuthold, Hartmut; Moxey, Linda M; Sanford, Anthony J

    2011-11-01

    We report an event-related brain potential (ERP) study examining how readers process sentences containing anaphoric reference to quantified antecedents. Previous studies indicate that positive (e.g. many) and negative (e.g. not many) quantifiers cause readers to focus on different sets of entities. For example in Many of the fans attended the game, focus is on the fans who attended (the reference set), and subsequent pronominal reference to this set, as in, Their presence was a boost to the team, is facilitated. In contrast, if many is replaced by not many, focus shifts to the fans who did not attend (the complement set), and reference to this set, as in, Their absence was disappointing, is preferred. In the current studies, the electroencephalogram (EEG) was recorded while participants read positive or negative quantified statements followed by anaphoric reference to the reference set or complement set. Results showed that the pronoun their elicited a larger N400 following negative than positive quantifiers. There was also a larger N400 on the disambiguating word (presence/absence) for complement set reference following a positive quantifier, and for reference set reference following a negative quantifier. Findings are discussed in relation to theoretical accounts of complement anaphora. PMID:21986293

  3. PARAMETERS FOR QUANTIFYING BEAM HALO

    SciTech Connect

    C.K. ALLEN; T.P. WANGLER

    2001-06-01

    Two different parameters for the quantitative description of beam halo are introduced, both based on moments of the particle distribution. One parameter is a measure of spatial halo formation and has been defined previously by Wangler and Crandall [3], termed the profile parameter. The second parameter relies on kinematic invariants to quantify halo formation in phase space; we call it the halo parameter. The profile parameter can be computed from experimental beam profile data. The halo parameter provides a theoretically more complete description of halo in phase space, but is difficult to obtain experimentally.
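
    The exact profile and halo parameter definitions are not reproduced here; the sketch below only illustrates the general idea of a moment-based indicator, using the dimensionless fourth moment of a 1D beam profile, which grows as a diffuse tail develops (about 1.8 for a uniform, halo-free profile and 3 for a Gaussian).

    import numpy as np

    def fourth_moment_ratio(x):
        """Dimensionless fourth moment <x^4>/<x^2>^2 of a centered particle coordinate sample."""
        x = x - x.mean()
        return np.mean(x ** 4) / np.mean(x ** 2) ** 2

    rng = np.random.default_rng(0)
    core = rng.uniform(-1.0, 1.0, 100_000)                              # compact core: ratio ~ 1.8
    with_halo = np.concatenate([core, rng.normal(0.0, 3.0, 5_000)])     # add a ~5% diffuse halo
    print(round(fourth_moment_ratio(core), 2), round(fourth_moment_ratio(with_halo), 2))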

  4. Results, Results, Results?

    ERIC Educational Resources Information Center

    Wallace, Dale

    2000-01-01

    Given the amount of time, energy, and money devoted to provincial achievement exams in Canada, it is disturbing that Alberta students and teachers feel so pressured and that the exams do not accurately reflect what students know. Research shows that intelligence has an (untested) emotional component. (MLH)

  5. Quantifying Connectivity in the Coastal Ocean

    NASA Astrophysics Data System (ADS)

    Mitarai, S.; Siegel, D.; Watson, J.; Dong, C.; McWilliams, J.

    2008-12-01

    The quantification of coastal connectivity is important for a wide range of real-world applications ranging from marine pollution to nearshore fisheries management. For these purposes, coastal connectivity is best defined as the probability that water parcels from one nearshore location are advected to another site over a given time interval. Here, we demonstrate how to quantify coastal connectivity using Lagrangian probability-density function (PDF) methods, a classic modeling approach for many turbulent applications, and numerical solutions of coastal circulation for the Southern California Bight. Mean dispersal patterns from a single release site (or Lagrangian PDFs) show a strong dependency on the particle-release location and seasonal variability, reflecting circulation patterns in the Southern California Bight. Strong interannual variations, responding to El Nino and La Nina transitions, are also observed. Mean connectivity patterns, deduced from Lagrangian PDFs, are spatially heterogeneous for advection times of around 30 days or less, resulting from distinctive circulation patterns, and become more homogeneous for longer advection times. A given realization of connectivity is stochastic because of eddy-driven transport and synoptic wind forcing changes. In general, mainland sites are good sources while both Northern and Southern Channel Islands are poor source sites, although they receive substantial fluxes of water parcels from the mainland. The predicted connectivity gives useful information to ecological and other applications for the Southern California Bight (e.g., designing marine protected areas, understanding gene structures, and predicting the impact of a pollution event) and provides a path for assessing connectivity for other regions of the coastal ocean.
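
    A sketch of the connectivity definition used above: the probability that water parcels released at one nearshore site are found near another site after a given advection time, estimated by counting Lagrangian particle end points. The end positions would normally come from a circulation-model particle integration; here they are random placeholders.

    import numpy as np

    rng = np.random.default_rng(0)
    sites = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 5.0]])   # nearshore site coordinates (km)
    n_particles, capture_radius = 1000, 3.0                     # particles per site; capture radius (km)

    def connectivity_matrix(end_positions, sites, radius):
        """C[i, j] = fraction of particles released at site i ending within `radius` of site j."""
        c = np.zeros((len(sites), len(sites)))
        for i, ends in enumerate(end_positions):
            dist = np.linalg.norm(ends[:, None, :] - sites[None, :, :], axis=2)
            c[i] = (dist <= radius).mean(axis=0)
        return c

    end_positions = [sites[i] + rng.normal(0, 6.0, size=(n_particles, 2)) for i in range(len(sites))]
    print(connectivity_matrix(end_positions, sites, capture_radius).round(2))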

  6. Quantifying crystal-melt segregation in dykes

    NASA Astrophysics Data System (ADS)

    Yamato, Philippe; Duretz, Thibault; May, Dave A.; Tartèse, Romain

    2015-04-01

    The dynamics of magma flow is highly affected by the presence of a crystalline load. During magma ascent, it has been demonstrated that crystal-melt segregation constitutes a viable mechanism for magmatic differentiation. However, the influences of crystal volume fraction, geometry, size and density on crystal-melt segregation are still not well constrained. In order to address these issues, we performed a parametric study using 2D direct numerical simulations, which model the ascent of crystal-bearing magma in a vertical dyke. Using these models, we have characterised the amount of segregation as a function of different quantities including the crystal fraction (φ), the density contrast between crystals and melt (Δρ), the size of the crystals (Ac) and their aspect ratio (R). Results show that crystal aspect ratio does not affect the segregation if R is small enough (long axis smaller than ~1/6 of the dyke width, Wd). Inertia within the system was also found not to influence crystal-melt segregation. The degree of segregation was, however, found to be highly dependent upon other parameters. Segregation is highest when Δρ and Ac are large, and lowest for a large pressure gradient (Pd) and/or large values of Wd. These four parameters can be combined into a single one, the S number, which can be used to quantify the segregation. Based on systematic numerical modelling and dimensional analysis, we provide a first-order scaling law which allows quantification of the segregation for an arbitrary S number and φ, encompassing a wide range of typical parameters encountered in terrestrial magmatic systems.

  7. In favour of the definition "adolescents with idiopathic scoliosis": juvenile and adolescent idiopathic scoliosis braced after ten years of age, do not show different end results. SOSORT award winner 2014

    PubMed Central

    2014-01-01

    Background The most important factor discriminating juvenile (JIS) from adolescent idiopathic scoliosis (AIS) is the risk of deformity progression. Brace treatment can change natural history, even when risk of progression is high. The aim of this study was to compare the end-of-growth results of JIS subjects, treated after 10 years of age, with the final results of AIS subjects. Methods Design: prospective observational controlled cohort study nested in a prospective database. Setting: outpatient tertiary referral clinic specialized in conservative treatment of spinal deformities. Inclusion criteria: idiopathic scoliosis; European Risser 0–2; 25 to 45 degrees Cobb; treatment start at age 10 years or more; never treated before. Exclusion criteria: secondary scoliosis, neurological etiology, prior treatment for scoliosis (brace or surgery). Groups: 27 patients met the inclusion criteria for the AJIS group (juvenile idiopathic scoliosis treated in adolescence), demonstrated by an x-ray before 10 years of age and treatment start after 10 years of age. The AIS group included 45 adolescents with a diagnostic x-ray made after the threshold of age 10 years. Results at the end of growth were analysed; a threshold of 5 Cobb degrees was used to define worsened, improved and stabilized curves. Statistics: mean and SD were used for descriptive statistics of clinical and radiographic changes. Relative risk of failure (RR), chi-square and t-tests of all data were calculated to find differences between the two groups. 95% confidence intervals (CI) of radiographic changes were calculated. Results We did not find any significant Cobb angle differences between groups at baseline or at the end of treatment. The only difference was in the number of patients who progressed above 45 degrees, found in the JIS group. The RR of progression for AJIS versus AIS was 1.35 (95% CI 0.57-3.17), which was not statistically significant (p = 0.5338). Conclusion

  8. Quantifying pulsed laser induced damage to graphene

    SciTech Connect

    Currie, Marc; Caldwell, Joshua D.; Bezares, Francisco J.; Robinson, Jeremy; Anderson, Travis; Chun, Hayden; Tadjer, Marko

    2011-11-21

    As an emerging optical material, graphene's ultrafast dynamics are often probed using pulsed lasers, yet the region in which optical damage takes place is largely uncharted. Here, femtosecond laser pulses induced localized damage in single-layer graphene on sapphire. Raman spatial mapping, SEM, and AFM were used to quantify the damage. The resulting size of the damaged area has a linear correlation with the optical fluence. These results demonstrate local modification of sp²-carbon bonding structures with optical pulse fluences as low as 14 mJ/cm², an order of magnitude lower than measured and theoretical ablation thresholds.

  9. Towards quantifying fuzzy stream power

    NASA Astrophysics Data System (ADS)

    Schwanghart, W.; Korup, O.

    2012-04-01

    Deterministic flow direction algorithms such as the D8 have wide application in numerical models of landscape evolution. These simple algorithms play a central role in quantifying drainage basin area, and hence in approximating—via empirically derived relationships from regional flood frequency and hydraulic geometry—stream power or fluvial erosion potential. Here we explore how alternative algorithms that employ a probabilistic choice of flow direction affect quantitative estimates of stream power. We test a probabilistic multi-flow direction algorithm within the MATLAB TopoToolbox in model and real landscapes of low topographic relief and minute gradients, where potentially fuzzy drainage divides are dictated by, among others, alluvial fan dynamics, playa infill, and groundwater fluxes and seepage. We employ a simple numerical landscape evolution model that simulates fluvial incision and hillslope diffusion and explicitly models the existence and capture of endorheic basins that prevail in (semi-)arid, low-relief landscapes. We discuss how using this probabilistic multi-flow direction algorithm helps represent and quantify uncertainty about spatio-temporal drainage divide locations and how this bears on quantitative estimates of downstream stream power and fluvial erosion potential as well as their temporal dynamics.
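
    A toy sketch of the idea behind a probabilistic flow-direction choice (the actual TopoToolbox algorithm differs in its weighting and in its handling of flats and sinks): for one interior DEM cell, each of the eight neighbours receives a probability proportional to its downslope gradient, and the flow direction is drawn at random from that distribution.

```python
import numpy as np

def probabilistic_flow_direction(dem, row, col, cellsize=1.0, rng=None):
    """Draw a flow direction for one interior DEM cell, weighted by downslope gradient.

    Toy analogue of a probabilistic multi-flow-direction rule; boundary cells,
    flats and sink filling are not handled here.
    """
    rng = rng or np.random.default_rng()
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    weights = []
    for dr, dc in offsets:
        drop = dem[row, col] - dem[row + dr, col + dc]
        dist = cellsize * np.hypot(dr, dc)
        weights.append(max(drop / dist, 0.0))      # only downslope neighbours get weight
    weights = np.array(weights)
    if weights.sum() == 0.0:
        return None                                # local sink or flat: no outflow
    p = weights / weights.sum()
    return offsets[rng.choice(len(offsets), p=p)]
```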

  10. Quantifying torso deformity in scoliosis

    NASA Astrophysics Data System (ADS)

    Ajemba, Peter O.; Kumar, Anish; Durdle, Nelson G.; Raso, V. James

    2006-03-01

    Scoliosis affects the alignment of the spine and the shape of the torso. Most scoliosis patients and their families are more concerned about the effect of scoliosis on the torso than its effect on the spine. There is a need to develop robust techniques for quantifying torso deformity based on full torso scans. In this paper, deformation indices obtained from orthogonal maps of full torso scans are used to quantify torso deformity in scoliosis. 'Orthogonal maps' are obtained by applying orthogonal transforms to 3D surface maps. (An 'orthogonal transform' maps a cylindrical coordinate system to a Cartesian coordinate system.) The technique was tested on 361 deformed computer models of the human torso and on 22 scans of volunteers (8 normal and 14 scoliosis). Deformation indices from the orthogonal maps correctly classified up to 95% of the volunteers with a specificity of 1.00 and a sensitivity of 0.91. In addition to classifying scoliosis, the system gives a visual representation of the entire torso in one view and is viable for use in a clinical environment for managing scoliosis.
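
    The following sketch illustrates one plausible reading of an 'orthogonal map': a torso point cloud expressed in cylindrical coordinates (angle, height, radius) is unwrapped onto a Cartesian (height, angle) grid whose pixel values are radial distances. The binning choices and the assumption that the torso's long axis lies along z are illustrative, not the authors' implementation.

```python
import numpy as np

def orthogonal_map(points, n_theta=180, n_z=256):
    """Unwrap a torso point cloud into a (height, angle) grid of radial distances.

    points : (N, 3) surface points, long axis of the torso assumed along z.
    Returns a 2D array whose pixel values are radii r(z, theta); empty bins are NaN.
    """
    x = points[:, 0] - points[:, 0].mean()
    y = points[:, 1] - points[:, 1].mean()
    z = points[:, 2]
    theta = np.arctan2(y, x)                       # angle around the torso axis
    r = np.hypot(x, y)                             # radial distance from the axis
    ti = np.clip(((theta + np.pi) / (2 * np.pi) * n_theta).astype(int), 0, n_theta - 1)
    zi = np.clip(((z - z.min()) / (np.ptp(z) + 1e-12) * n_z).astype(int), 0, n_z - 1)
    grid = np.full((n_z, n_theta), np.nan)
    for a, b, rr in zip(zi, ti, r):                # keep the outermost point per bin
        if np.isnan(grid[a, b]) or rr > grid[a, b]:
            grid[a, b] = rr
    return grid
```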

  11. SPACE: an algorithm to predict and quantify alternatively spliced isoforms using microarrays.

    PubMed

    Anton, Miguel A; Gorostiaga, Dorleta; Guruceaga, Elizabeth; Segura, Victor; Carmona-Saez, Pedro; Pascual-Montano, Alberto; Pio, Ruben; Montuenga, Luis M; Rubio, Angel

    2008-01-01

    Exon and exon+junction microarrays are promising tools for studying alternative splicing. Current analytical tools applied to these arrays lack two relevant features: the ability to predict unknown spliced forms and the ability to quantify the concentration of known and unknown isoforms. SPACE is an algorithm that has been developed to (1) estimate the number of different transcripts expressed under several conditions, (2) predict the precursor mRNA splicing structure and (3) quantify the transcript concentrations including unknown forms. The results presented here show its robustness and accuracy for real and simulated data. PMID:18312629

  12. Children's knowledge of hierarchical phrase structure: quantifier floating in Japanese.

    PubMed

    Suzuki, Takaaki; Yoshinaga, Naoko

    2013-06-01

    The interpretation of floating quantifiers in Japanese requires knowledge of hierarchical phrase structure. However, the input to children is insufficient or even misleading, as our analysis indicates. This presents an intriguing question on learnability: do children interpret floating quantifiers based on a structure-dependent rule which is not obvious in the input or do they employ a sentence comprehension strategy based on the available input? Two experiments examined four- to six-year-old Japanese-speaking children for their interpretations of floating quantifiers in SOV and OSV sentences. The results revealed that no child employed a comprehension strategy in terms of the linear ordering of constituents, and most five- and six-year-olds correctly interpreted floating quantifiers when word-order difficulty was reduced. These facts indicate that children's interpretation of floating quantifiers is structurally dependent on hierarchical phrase structure, suggesting that this knowledge is a part of children's grammar despite the insufficient input available to them. PMID:22850618

  13. Comprehension of simple quantifiers: empirical evaluation of a computational model.

    PubMed

    Szymanik, Jakub; Zajenkowski, Marcin

    2010-04-01

    We examine the verification of simple quantifiers in natural language from a computational model perspective. We refer to previous neuropsychological investigations of the same problem and suggest extending their experimental setting. Moreover, we give some direct empirical evidence linking computational complexity predictions with cognitive reality. In the empirical study we compare the time needed for understanding different types of quantifiers. We show that the computational distinction between quantifiers recognized by finite automata and push-down automata is psychologically relevant. Our research improves upon the hypotheses and explanatory power of recent neuroimaging studies as well as provides evidence for the claim that human linguistic abilities are constrained by computational complexity. PMID:21564222

  14. Quantifying entanglement with witness operators

    SciTech Connect

    Brandao, Fernando G.S.L.

    2005-08-15

    We present a unifying approach to the quantification of entanglement based on entanglement witnesses, which includes several already established entanglement measures such as the negativity, the concurrence, and the robustness of entanglement. We then introduce an infinite family of new entanglement quantifiers, having as its limits the best separable approximation measure and the generalized robustness. Gaussian states, states with symmetry, states constrained to super-selection rules, and states composed of indistinguishable particles are studied under the view of the witnessed entanglement. We derive new bounds to the fidelity of teleportation d_min, for the distillable entanglement E_D and for the entanglement of formation. A particular measure, the PPT-generalized robustness, stands out due to its easy calculability and provides sharper bounds to d_min and E_D than the negativity in most of the states. We illustrate our approach studying thermodynamical properties of entanglement in the Heisenberg XXX and dimerized models.

  15. QUANTIFIERS UNDONE: REVERSING PREDICTABLE SPEECH ERRORS IN COMPREHENSION

    PubMed Central

    Frazier, Lyn; Clifton, Charles

    2015-01-01

    Speakers predictably make errors during spontaneous speech. Listeners may identify such errors and repair the input, or their analysis of the input, accordingly. Two written questionnaire studies investigated error compensation mechanisms in sentences with doubled quantifiers such as Many students often turn in their assignments late. Results show a considerable number of undoubled interpretations for all items tested (though fewer for sentences containing doubled negation than for sentences containing many-often, every-always or few-seldom.) This evidence shows that the compositional form-meaning pairing supplied by the grammar is not the only systematic mapping between form and meaning. Implicit knowledge of the workings of the performance systems provides an additional mechanism for pairing sentence form and meaning. Alternate accounts of the data based on either a concord interpretation or an emphatic interpretation of the doubled quantifier don’t explain why listeners fail to apprehend the ‘extra meaning’ added by the potentially redundant material only in limited circumstances. PMID:26478637

  16. Quantifying properties of ICM inhomogeneities

    NASA Astrophysics Data System (ADS)

    Zhuravleva, I.; Churazov, E.; Kravtsov, A.; Lau, E. T.; Nagai, D.; Sunyaev, R.

    2013-02-01

    We present a new method to identify and characterize the structure of the intracluster medium (ICM) in simulated galaxy clusters. The method uses the median of gas properties, such as density and pressure, which we show to be very robust to the presence of gas inhomogeneities. In particular, we show that the radial profiles of median gas properties in cosmological simulations of clusters are smooth and do not exhibit fluctuations at locations of massive clumps, in contrast to mean and mode properties. Analysis of simulations shows that the distribution of gas properties in a given radial shell can be well described by a log-normal probability density function plus a tail. The former corresponds to a nearly hydrostatic bulk component, accounting for ~99 per cent of the volume, while the tail corresponds to high-density inhomogeneities. The clumps can thus be easily identified with the volume elements corresponding to the tail of the distribution. We show that this results in a simple and robust separation of the diffuse and clumpy components of the ICM. The full width at half-maximum of the density distribution in simulated clusters is a growing function of radius and varies from ~0.15 dex in the cluster centre to ~0.5 dex at 2 r500 in relaxed clusters. The small scatter in the width between relaxed clusters suggests that the degree of inhomogeneity is a robust characteristic of the ICM. It broadly agrees with the amplitude of density perturbations found in the Coma cluster core. We discuss the origin of ICM density variations in spherical shells and show that less than 20 per cent of the width can be attributed to the triaxiality of the cluster gravitational potential. As a link to X-ray observations of real clusters we evaluated the ICM clumping factor, weighted with the temperature-dependent X-ray emissivity, with and without high-density inhomogeneities. We argue that these two cases represent upper and lower limits on the departure of the observed X-ray emissivity
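
    A small illustration of why the median is robust to clumps: in a mock radial shell where 1% of the volume elements belong to a high-density log-normal tail, the mean density is pulled upward by the clumps while the median stays near the bulk component. The numbers below are synthetic and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Mock densities in one radial shell: a log-normal bulk plus a 1% high-density clump tail
bulk   = rng.lognormal(mean=0.0, sigma=0.3, size=9_900)
clumps = rng.lognormal(mean=3.0, sigma=0.5, size=100)
shell  = np.concatenate([bulk, clumps])

print(f"mean density   = {shell.mean():.2f}")      # pulled upward by the clumps
print(f"median density = {np.median(shell):.2f}")  # stays near the bulk component
```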

  17. Quantifying entanglement with scattering experiments

    NASA Astrophysics Data System (ADS)

    Marty, O.; Epping, M.; Kampermann, H.; Bruß, D.; Plenio, M. B.; Cramer, M.

    2014-03-01

    We show how the entanglement contained in states of spins arranged on a lattice may be lower bounded with observables arising in scattering experiments. We focus on the partial differential cross section obtained in neutron scattering from magnetic materials but our results are sufficiently general such that they may also be applied to, e.g., optical Bragg scattering from ultracold atoms in optical lattices or from ion chains. We discuss resonating valence bond states and ground and thermal states of experimentally relevant models—such as the Heisenberg, Majumdar-Ghosh, and XY models—in different geometries and with different spin numbers. As a by-product, we find that for the one-dimensional XY model in a transverse field such measurements reveal factorization and the quantum phase transition at zero temperature.

  18. QUANTIFYING ASSAY VARIATION IN NUTRIENT ANALYSIS OF FEEDSTUFFS

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Analytical results from different laboratories have greater variation than those from a single laboratory, and this variation differs by nutrient. Objectives of this presentation are to describe methods for quantifying the analytical reproducibility among and repeatability within laboratories, estim...

  19. Quantifying Evaporation in a Permeable Pavement System

    EPA Science Inventory

    Studies quantifying evaporation from permeable pavement systems are limited to a few laboratory studies and one field application. This research quantifies evaporation for a larger-scale field application by measuring the water balance from lined permeable pavement sections. Th...

  20. New Drug Shows Mixed Results Against Early Alzheimer's

    MedlinePlus

  1. National Orange Show Photovoltaic Demonstration

    SciTech Connect

    Dan Jimenez Sheri Raborn, CPA; Tom Baker

    2008-03-31

    The National Orange Show Photovoltaic Demonstration created a 400 kW photovoltaic self-generation plant at the National Orange Show Events Center (NOS). The NOS owns a 120-acre state fairground where it operates an events center and produces an annual citrus fair known as the Orange Show. The NOS governing board wanted to employ cost-saving programs for annual energy expenses. It is hoped the photovoltaic program will result in overall savings for the NOS, help reduce the State's electrical power demands, improve quality of life within the affected grid area, and increase the energy efficiency of buildings at the venue. In addition, the potential to reduce operational expenses would have a tremendous effect on the ability of the NOS to serve its community.

  2. Career and Technical Education: Show Us the Buck, We'll Show You the Bang!

    ERIC Educational Resources Information Center

    Whetstone, Ryan

    2011-01-01

    Adult and CTE programs in California have been cut by about 60 percent over the past three years. A number of school districts have summarily eliminated these programs to preserve funding for other educational endeavors. The author says part of the problem has been the community's inability to communicate quantifiable results. One of the hottest…

  3. Evaluation of two methods for quantifying passeriform lice

    PubMed Central

    Koop, Jennifer A. H.; Clayton, Dale H.

    2013-01-01

    Two methods commonly used to quantify ectoparasites on live birds are visual examination and dust-ruffling. Visual examination provides an estimate of ectoparasite abundance based on an observer’s timed inspection of various body regions on a bird. Dust-ruffling involves application of insecticidal powder to feathers that are then ruffled to dislodge ectoparasites onto a collection surface where they can then be counted. Despite the common use of these methods in the field, the proportion of actual ectoparasites they account for has only been tested with Rock Pigeons (Columba livia), a relatively large-bodied species (238–302 g) with dense plumage. We tested the accuracy of the two methods using European Starlings (Sturnus vulgaris; ~75 g). We first quantified the number of lice (Brueelia nebulosa) on starlings using visual examination, followed immediately by dust-ruffling. Birds were then euthanized and the proportion of lice accounted for by each method was compared to the total number of lice on each bird as determined with a body-washing method. Visual examination and dust-ruffling each accounted for a relatively small proportion of total lice (14% and 16%, respectively), but both were still significant predictors of abundance. The number of lice observed by visual examination accounted for 68% of the variation in total abundance. Similarly, the number of lice recovered by dust-ruffling accounted for 72% of the variation in total abundance. Our results show that both methods can be used to reliably quantify the abundance of lice on European Starlings and other similar-sized passerines. PMID:24039328

  4. Quantifying Emergent Behavior of Autonomous Robots

    NASA Astrophysics Data System (ADS)

    Martius, Georg; Olbrich, Eckehard

    2015-10-01

    Quantifying behaviors of robots which were generated autonomously from task-independent objective functions is an important prerequisite for objective comparisons of algorithms and movements of animals. The temporal sequence of such a behavior can be considered as a time series, and hence complexity measures developed for time series are natural candidates for its quantification. The predictive information and the excess entropy are such complexity measures. They measure the amount of information the past contains about the future and thus quantify the nonrandom structure in the temporal sequence. However, when using these measures for systems with continuous states, one has to deal with the fact that their values will depend on the resolution with which the system's states are observed. For deterministic systems both measures will diverge with increasing resolution. We therefore propose a new decomposition of the excess entropy into resolution-dependent and resolution-independent parts and discuss how they depend on the dimensionality of the dynamics, correlations and the noise level. For practical estimation we propose to use estimates based on the correlation integral, instead of direct estimation of the mutual information with the algorithm by Kraskov et al. (2004), which is based on nearest-neighbor statistics, because the latter allows less control of the scale dependencies. Using our algorithm we are able to show how autonomous learning generates behavior of increasing complexity with increasing learning duration.

  5. Quantifying cell behaviors during embryonic wound healing

    NASA Astrophysics Data System (ADS)

    Mashburn, David; Ma, Xiaoyan; Crews, Sarah; Lynch, Holley; McCleery, W. Tyler; Hutson, M. Shane

    2011-03-01

    During embryogenesis, internal forces induce motions in cells leading to widespread motion in tissues. We previously developed laser hole-drilling as a consistent, repeatable way to probe such epithelial mechanics. The initial recoil (less than 30s) gives information about physical properties (elasticity, force) of cells surrounding the wound, but the long-term healing process (tens of minutes) shows how cells adjust their behavior in response to stimuli. To study this biofeedback in many cells through time, we developed tools to quantify statistics of individual cells. By combining watershed segmentation with a powerful and efficient user interaction system, we overcome problems that arise in any automatic segmentation from poor image quality. We analyzed cell area, perimeter, aspect ratio, and orientation relative to wound for a wide variety of laser cuts in dorsal closure. We quantified statistics for different regions as well, i.e. cells near to and distant from the wound. Regional differences give a distribution of wound-induced changes, whose spatial localization provides clues into the physical/chemical signals that modulate the wound healing response. Supported by the Human Frontier Science Program (RGP0021/2007 C).
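
    A generic watershed-segmentation recipe in scikit-image (not the authors' interactive tool, which adds a user-correction layer on top): seeds are taken from peaks of the distance transform of a binary cell mask, and per-cell area, perimeter and orientation are then read off the labelled regions.

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.measure import regionprops
from skimage.segmentation import watershed

def segment_cells(binary_mask):
    """Label individual cells in a binary mask and measure simple shape statistics.

    Generic scikit-image recipe: seeds come from peaks of the distance transform,
    then the watershed transform assigns every foreground pixel to a cell.
    """
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=10)      # one seed per presumed cell
    seeds = np.zeros(binary_mask.shape, dtype=int)
    seeds[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    labels = watershed(-distance, seeds, mask=binary_mask)
    return [(p.label, p.area, p.perimeter, p.orientation) for p in regionprops(labels)]
```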

  6. Quantifying utricular stimulation during natural behavior

    PubMed Central

    Rivera, Angela R. V.; Davis, Julian; Grant, Wally; Blob, Richard W.; Peterson, Ellengene; Neiman, Alexander B.; Rowe, Michael

    2012-01-01

    The use of natural stimuli in neurophysiological studies has led to significant insights into the encoding strategies used by sensory neurons. To investigate these encoding strategies in vestibular receptors and neurons, we have developed a method for calculating the stimuli delivered to a vestibular organ, the utricle, during natural (unrestrained) behaviors, using the turtle as our experimental preparation. High-speed digital video sequences are used to calculate the dynamic gravito-inertial (GI) vector acting on the head during behavior. X-ray computed tomography (CT) scans are used to determine the orientation of the otoconial layer (OL) of the utricle within the head, and the calculated GI vectors are then rotated into the plane of the OL. Thus, the method allows us to quantify the spatio-temporal structure of stimuli to the OL during natural behaviors. In the future, these waveforms can be used as stimuli in neurophysiological experiments to understand how natural signals are encoded by vestibular receptors and neurons. We provide one example of the method which shows that turtle feeding behaviors can stimulate the utricle at frequencies higher than those typically used in vestibular studies. This method can be adapted to other species, to other vestibular end organs, and to other methods of quantifying head movements. PMID:22753360

  7. Detecting and Quantifying Topography in Neural Maps

    PubMed Central

    Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy

    2014-01-01

    Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
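
    Of the measures listed, the Pearson distance correlation is the simplest to sketch: correlate pairwise distances between recording sites with pairwise distances between their preferred stimulus values, and assess significance with a permutation test. The function below is an illustrative implementation under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

def distance_correlation_topography(positions, features, n_perm=1000, rng=None):
    """Correlate pairwise map distances with pairwise feature distances.

    positions : (n, 2) coordinates of recording sites on the map
    features  : (n, k) preferred stimulus values of the corresponding neurons
    Returns the observed Pearson correlation and a permutation-test p-value.
    """
    rng = rng or np.random.default_rng(0)
    d_pos = pdist(positions)                 # distances between sites
    d_feat = pdist(features)                 # distances between preferred stimuli
    r_obs, _ = pearsonr(d_pos, d_feat)
    null = np.empty(n_perm)
    for k in range(n_perm):
        perm = rng.permutation(len(features))
        null[k] = pearsonr(d_pos, pdist(features[perm]))[0]
    p = (np.sum(null >= r_obs) + 1) / (n_perm + 1)
    return r_obs, p
```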

  8. Quantifying of bactericide properties of medicinal plants

    PubMed Central

    Ács, András; Gölöncsér, Flóra; Barabás, Anikó

    2011-01-01

    Extended research has been carried out to clarify the ecological role of plant secondary metabolites (SMs). Although their primary ecological function is self-defense, bioactive compounds have long been used in alternative medicine or in biological control of pests. Several members of the family Labiatae are known to have strong antimicrobial capacity. For testing and quantifying antibacterial activity, standard microbial protocols are most often used, assessing inhibitory activity on a selected strain. In this study, the applicability of a microbial ecotoxicity test was evaluated to quantify the aggregate bactericide capacity of Labiatae species, based on the bioluminescence inhibition of the bacterium Vibrio fischeri. Striking differences, of up to 10-fold in toxicity, were found among the herbs. Glechoma hederacea L. proved to be the most toxic, with an EC50 of 0.4073 g dried plant/l. LC50 values generated by the standard bioassay seem to be a good indicator of the bactericide property of herbs. Traditional use of the selected herbs shows a good correlation with bioactivity expressed as bioluminescence inhibition, leading to the conclusion that the Vibrio fischeri bioassay can be a good indicator of the overall antibacterial capacity of herbs, at least at a screening level. PMID:21502819

  9. Quantifying the vitamin D economy.

    PubMed

    Heaney, Robert P; Armas, Laura A G

    2015-01-01

    Vitamin D enters the body through multiple routes and in a variety of chemical forms. Utilization varies with input, demand, and genetics. Vitamin D and its metabolites are carried in the blood on a Gc protein that has three principal alleles with differing binding affinities and ethnic prevalences. Three major metabolites are produced, which act via two routes, endocrine and autocrine/paracrine, and in two compartments, extracellular and intracellular. Metabolic consumption is influenced by physiological controls, noxious stimuli, and tissue demand. When administered as a supplement, varying dosing schedules produce major differences in serum metabolite profiles. To understand vitamin D's role in human physiology, it is necessary both to identify the foregoing entities, mechanisms, and pathways and, specifically, to quantify them. This review was performed to delineate the principal entities and transitions involved in the vitamin D economy, summarize the status of present knowledge of the applicable rates and masses, draw inferences about functions that are implicit in these quantifications, and point out implications for the determination of adequacy. PMID:26024057

  10. Quantifying macromolecular conformational transition pathways

    NASA Astrophysics Data System (ADS)

    Seyler, Sean; Kumar, Avishek; Thorpe, Michael; Beckstein, Oliver

    2015-03-01

    Diverse classes of proteins function through large-scale conformational changes that are challenging for computer simulations. A range of fast path-sampling techniques have been used to generate transitions, but it has been difficult to compare paths from (and assess the relative strengths of) different methods. We introduce a comprehensive method (pathway similarity analysis, PSA) for quantitatively characterizing and comparing macromolecular pathways. The Hausdorff and Fréchet metrics (known from computational geometry) are used to quantify the degree of similarity between polygonal curves in configuration space. A strength of PSA is its use of the full information available from the 3N-dimensional configuration space trajectory without requiring additional specific knowledge about the system. We compare a sample of eleven different methods for the closed-to-open transitions of the apo enzyme adenylate kinase (AdK) and also apply PSA to an ensemble of 400 AdK trajectories produced by dynamic importance sampling MD and the Geometrical Pathways algorithm. We discuss the method's potential to enhance our understanding of transition path sampling methods, validate them, and help guide future research toward deeper physical insights into conformational transitions.
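
    The Hausdorff metric itself is readily available in SciPy; the sketch below applies it to two paths treated as point sets of configurations (the published PSA method adds frame alignment and further tooling on top of this).

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_distance(path_a, path_b):
    """Symmetric Hausdorff distance between two transition paths.

    Each path is an (n_frames, d) array of configurations, e.g. flattened
    3N-dimensional coordinates after superposition onto a common reference.
    """
    return max(directed_hausdorff(path_a, path_b)[0],
               directed_hausdorff(path_b, path_a)[0])

# Toy example with two 2D "paths"
t = np.linspace(0.0, 1.0, 100)[:, None]
path1 = np.hstack([t, t**2])
path2 = np.hstack([t, t**2 + 0.1])
print(hausdorff_distance(path1, path2))    # ~0.1
```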

  11. Quantifying Uncertainty in Epidemiological Models

    SciTech Connect

    Ramanathan, Arvind; Jha, Sumit Kumar

    2012-01-01

    Modern epidemiology has made use of a number of mathematical models, including ordinary differential equation (ODE) based models and agent based models (ABMs) to describe the dynamics of how a disease may spread within a population and enable the rational design of strategies for intervention that effectively contain the spread of the disease. Although such predictions are of fundamental importance in preventing the next global pandemic, there is a significant gap in trusting the outcomes/predictions solely based on such models. Hence, there is a need to develop approaches such that mathematical models can be calibrated against historical data. In addition, there is a need to develop rigorous uncertainty quantification approaches that can provide insights into when a model will fail and characterize the confidence in the (possibly multiple) model outcomes/predictions, when such retrospective analysis cannot be performed. In this paper, we outline an approach to develop uncertainty quantification approaches for epidemiological models using formal methods and model checking. By specifying the outcomes expected from a model in a suitable spatio-temporal logic, we use probabilistic model checking methods to quantify the probability with which the epidemiological model satisfies the specification. We argue that statistical model checking methods can solve the uncertainty quantification problem for complex epidemiological models.
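
    Formal probabilistic model checking is considerably more involved, but a plain Monte Carlo stand-in conveys the idea of quantifying the probability that an ODE-based model satisfies a specification: sample the uncertain parameters, simulate an SIR model, and count the runs in which the peak infected fraction stays below a threshold. Parameter ranges and the threshold below are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import odeint

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def prob_spec_satisfied(n_samples=2000, threshold=0.20, seed=0):
    """Monte Carlo estimate of P(peak infected fraction < threshold).

    Simplified stand-in for statistical model checking: sample uncertain
    parameters, simulate, and count the runs that satisfy the specification.
    Parameter ranges are illustrative, not calibrated to any real disease.
    """
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, 160.0, 400)
    satisfied = 0
    for _ in range(n_samples):
        beta = rng.uniform(0.15, 0.35)     # uncertain transmission rate
        gamma = rng.uniform(0.05, 0.15)    # uncertain recovery rate
        traj = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma))
        if traj[:, 1].max() < threshold:
            satisfied += 1
    return satisfied / n_samples

print(prob_spec_satisfied())
```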

  12. Quantifying the Shape of Aging

    PubMed Central

    Wrycza, Tomasz F.; Missov, Trifon I.; Baudisch, Annette

    2015-01-01

    In Biodemography, aging is typically measured and compared based on aging rates. We argue that this approach may be misleading, because it confounds the time aspect with the mere change aspect of aging. To disentangle these aspects, here we utilize a time-standardized framework and, instead of aging rates, suggest the shape of aging as a novel and valuable alternative concept for comparative aging research. The concept of shape captures the direction and degree of change in the force of mortality over age, which—on a demographic level—reflects aging. We 1) provide a list of shape properties that are desirable from a theoretical perspective, 2) suggest several demographically meaningful and non-parametric candidate measures to quantify shape, and 3) evaluate performance of these measures based on the list of properties as well as based on an illustrative analysis of a simple dataset. The shape measures suggested here aim to provide a general means to classify aging patterns independent of any particular mortality model and independent of any species-specific time-scale. Thereby they support systematic comparative aging research across different species or between populations of the same species under different conditions and constitute an extension of the toolbox available to comparative research in Biodemography. PMID:25803427

  13. Quantifying Scheduling Challenges for Exascale System Software

    SciTech Connect

    Mondragon, Oscar; Bridges, Patrick G.; Jones, Terry R

    2015-01-01

    The move towards high-performance computing (HPC) applications comprised of coupled codes and the need to dramatically reduce data movement is leading to a reexamination of time-sharing vs. space-sharing in HPC systems. In this paper, we discuss and begin to quantify the performance impact of a move away from strict space-sharing of nodes for HPC applications. Specifically, we examine the potential performance cost of time-sharing nodes between application components, we determine whether a simple coordinated scheduling mechanism can address these problems, and we research how suitable simple constraint-based optimization techniques are for solving scheduling challenges in this regime. Our results demonstrate that current general-purpose HPC system software scheduling and resource allocation systems are subject to significant performance deficiencies, which we quantify for six representative applications. Based on these results, we discuss areas in which additional research is needed to meet the scheduling challenges of next-generation HPC systems.

  14. A stochastic approach for quantifying immigrant integration: the Spanish test case

    NASA Astrophysics Data System (ADS)

    Agliari, Elena; Barra, Adriano; Contucci, Pierluigi; Sandell, Richard; Vernia, Cecilia

    2014-10-01

    We apply stochastic process theory to the analysis of immigrant integration. Using a unique and detailed data set from Spain, we study the relationship between local immigrant density and two social and two economic immigration quantifiers for the period 1999-2010. As opposed to the classic time-series approach, by letting immigrant density play the role of ‘time’ and the quantifier the role of ‘space,’ it becomes possible to analyse the behavior of the quantifiers by means of continuous-time random walks. Two classes of results are then obtained. First, we show that social integration quantifiers evolve following a diffusion law, while the evolution of economic quantifiers exhibits ballistic dynamics. Second, we make predictions of best- and worst-case scenarios taking into account large local fluctuations. Our stochastic process approach to integration lends itself to interesting forecasting scenarios which, in the hands of policy makers, have the potential to improve political responses to integration problems. For instance, estimating the standard first-passage time and maximum-span walk reveals local differences in integration performance for different immigration scenarios. Thus, by recognizing the importance of local fluctuations around national means, this research constitutes an important tool to assess the impact of immigration phenomena on municipal budgets and to set up solid multi-ethnic plans at the municipal level as immigration pressures build.

  15. Quantifying lateral femoral condyle ellipticalness in chimpanzees, gorillas, and humans.

    PubMed

    Sylvester, Adam D; Pfisterer, Theresa

    2012-11-01

    Articular surfaces of limb bones provide information for understanding animal locomotion because their size and shape are a reflection of habitual postures and movements. Here we present a novel method for quantifying the ellipticalness (i.e., departure from perfectly circular) of the lateral femoral condyle (LFC), applying this technique to hominid femora. Three-dimensional surface models were created for 49 Homo sapiens, 34 Pan troglodytes and 25 Gorilla gorilla femora. Software was developed that fit separate cylinders to each of the femoral condyles. These cylinders were constrained to have a single axis, but could have different radii. The cylinder fit to the LFC was allowed to assume an elliptical cross-section, while the cylinder fit to the medial condyle was constrained to remain circular. The shape of the elliptical cylinder (ratio of the major and minor axes of the ellipse) was recorded, and the orientation of the elliptical cylinder quantified as angles between the major axis of the ellipse and the anatomical and mechanical axes of the femur. Species were compared using analysis of variance and post hoc multiple comparisons tests. Confirming qualitative descriptions, human LFCs are more elliptical than those of chimpanzees and gorillas. Human femora exhibit a narrow range for the angle between the major axis of the elliptical cylinder and femoral axes. Conversely, the chimpanzee sample is bimodal for these angles, exhibiting two ellipse orientations, while Gorilla shows no preferred angle. Our results suggest that like modern human femora, chimpanzee femoral condyles have preferentially used regions. PMID:23042636

  16. Quantifying uncertainty from material inhomogeneity.

    SciTech Connect

    Battaile, Corbett Chandler; Emery, John M.; Brewer, Luke N.; Boyce, Brad Lee

    2009-09-01

    Most engineering materials are inherently inhomogeneous in their processing, internal structure, properties, and performance. Their properties are therefore statistical rather than deterministic. These inhomogeneities manifest across multiple length and time scales, leading to variabilities, i.e. statistical distributions, that are necessary to accurately describe each stage in the process-structure-properties hierarchy, and are ultimately the primary source of uncertainty in performance of the material and component. When localized events are responsible for component failure, or when component dimensions are on the order of microstructural features, this uncertainty is particularly important. For ultra-high reliability applications, the uncertainty is compounded by a lack of data describing the extremely rare events. Hands-on testing alone cannot supply sufficient data for this purpose. To date, there is no robust or coherent method to quantify this uncertainty so that it can be used in a predictive manner at the component length scale. The research presented in this report begins to address this lack of capability through a systematic study of the effects of microstructure on the strain concentration at a hole. To achieve the strain concentration, small circular holes (approximately 100 µm in diameter) were machined into brass tensile specimens using a femtosecond laser. The brass was annealed at 450 °C, 600 °C, and 800 °C to produce three hole-to-grain size ratios of approximately 7, 1, and 1/7. Electron backscatter diffraction experiments were used to guide the construction of digital microstructures for finite element simulations of uniaxial tension. Digital image correlation experiments were used to qualitatively validate the numerical simulations. The simulations were performed iteratively to generate statistics describing the distribution of plastic strain at the hole in varying microstructural environments. In both the experiments and simulations, the

  17. Quantifying a cellular automata simulation of electric vehicles

    NASA Astrophysics Data System (ADS)

    Hill, Graeme; Bell, Margaret; Blythe, Phil

    2014-12-01

    Within this work the Nagel-Schreckenberg (NS) cellular automaton is used to simulate a basic cyclic road network. Results from SwitchEV, a real-world electric vehicle trial that collected more than two years of detailed electric vehicle data, are used to quantify the results of the NS automaton, demonstrating power consumption behavior similar to that observed in the experimental results. In particular, the efficiency of the electric vehicles decreases as the vehicle density increases, due in part to the reduced efficiency of EVs at low speeds, but also due to the energy consumption inherent in changing speeds. Further work shows the results of introducing spatially localized speed restrictions. In general, it can be seen that induced congestion from spatially transient events propagates back through the road network and alters the energy and efficiency profile of the simulated vehicles, both before and after the speed restriction. Vehicles upstream from the restriction show reduced energy usage and increased efficiency, and vehicles downstream show an initial large increase in energy usage as they accelerate away from the speed restriction.
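
    For reference, the standard Nagel-Schreckenberg update on a cyclic road consists of four rules applied in parallel to every vehicle: accelerate, brake to the gap ahead, randomly slow down, and move. The sketch below implements only these rules; the EV energy-consumption bookkeeping used in the study is not reproduced.

```python
import numpy as np

def ns_step(positions, speeds, road_length, v_max=5, p_slow=0.3, rng=None):
    """One parallel update of the Nagel-Schreckenberg automaton on a cyclic road.

    positions, speeds : integer arrays of equal length (cells and cells/step).
    Only the standard NS rules are implemented; no energy bookkeeping.
    """
    rng = rng or np.random.default_rng()
    order = np.argsort(positions)                      # sort cars along the ring
    positions, speeds = positions[order], speeds[order]
    n = len(positions)
    gaps = (positions[(np.arange(n) + 1) % n] - positions - 1) % road_length
    speeds = np.minimum(speeds + 1, v_max)             # 1. accelerate
    speeds = np.minimum(speeds, gaps)                  # 2. brake to the gap ahead
    slow = rng.random(n) < p_slow
    speeds = np.where(slow, np.maximum(speeds - 1, 0), speeds)   # 3. random slowdown
    positions = (positions + speeds) % road_length     # 4. move
    return positions, speeds
```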

  18. Quantifying, Visualizing, and Monitoring Lead Optimization.

    PubMed

    Maynard, Andrew T; Roberts, Christopher D

    2016-05-12

    Although lead optimization (LO) is by definition a process, process-centric analysis and visualization of this important phase of pharmaceutical R&D has been lacking. Here we describe a simple statistical framework to quantify and visualize the progression of LO projects so that the vital signs of LO convergence can be monitored. We refer to the resulting visualizations generated by our methodology as the "LO telemetry" of a project. These visualizations can be automated to provide objective, holistic, and instantaneous analysis and communication of LO progression. This enhances the ability of project teams to more effectively drive LO process, while enabling management to better coordinate and prioritize LO projects. We present the telemetry of five LO projects comprising different biological targets and different project outcomes, including clinical compound selection, termination due to preclinical safety/tox, and termination due to lack of tractability. We demonstrate that LO progression is accurately captured by the telemetry. We also present metrics to quantify LO efficiency and tractability. PMID:26262898

  19. A Generalizable Methodology for Quantifying User Satisfaction

    NASA Astrophysics Data System (ADS)

    Huang, Te-Yuan; Chen, Kuan-Ta; Huang, Polly; Lei, Chin-Laung

    Quantifying user satisfaction is essential, because the results can help service providers deliver better services. In this work, we propose a generalizable methodology, based on survival analysis, to quantify user satisfaction in terms of session times, i.e., the length of time users stay with an application. Unlike subjective human surveys, our methodology is based solely on passive measurement, which is more cost-efficient and better able to capture subconscious reactions. Furthermore, by using session times rather than a specific performance indicator, such as the level of distortion of voice signals, the effects of other factors, like loudness and sidetone, can also be captured by the developed models. Like survival analysis, our methodology is characterized by low complexity and a simple model-developing process. The feasibility of our methodology is demonstrated through case studies of ShenZhou Online, a commercial MMORPG in Taiwan, and the most prevalent VoIP application in the world, namely Skype. Through the model development process, we can also identify the most significant performance factors and their impacts on user satisfaction and discuss how they can be exploited to improve user experience and optimize resource allocation.
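
    A minimal sketch of the survival-analysis viewpoint using the lifelines package (the paper builds its own models; this is only the generic starting point): session lengths are the durations, and sessions still in progress when measurement stops are treated as right-censored. The numbers are hypothetical.

```python
import numpy as np
from lifelines import KaplanMeierFitter

# Hypothetical session lengths in minutes; event_observed = 0 marks sessions
# still in progress when measurement stopped (right-censored observations).
durations = np.array([5, 12, 7, 30, 45, 3, 22, 60, 18, 9])
observed  = np.array([1, 1, 1, 1, 0, 1, 1, 0, 1, 1])

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=observed, label="session time")
print(kmf.median_survival_time_)      # typical session length
print(kmf.survival_function_.head())  # P(session lasts longer than t)
```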

  20. Quantifying Significance of MHC II Residues.

    PubMed

    Fan, Ying; Lu, Ruoshui; Wang, Lusheng; Andreatta, Massimo; Li, Shuai Cheng

    2014-01-01

    The major histocompatibility complex (MHC), a cell-surface protein mediating immune recognition, plays important roles in the immune response system of all higher vertebrates. MHC molecules are highly polymorphic and they are grouped into serotypes according to the specificity of the response. It is a common belief that a protein sequence determines its three dimensional structure and function. Hence, the protein sequence determines the serotype. Residues play different levels of importance. In this paper, we quantify the residue significance with the available serotype information. Knowing the significance of the residues will deepen our understanding of the MHC molecules and yield us a concise representation of the molecules. In this paper we propose a linear programming-based approach to find significant residue positions as well as quantifying their significance in MHC II DR molecules. Among all the residues in MHC II DR molecules, 18 positions are of particular significance, which is consistent with the literature on MHC binding sites, and succinct pseudo-sequences appear to be adequate to capture the whole sequence features. When the result is used for classification of MHC molecules with serotype assigned by WHO, a 98.4 percent prediction performance is achieved. The methods have been implemented in java (http://code.google.com/p/quassi/). PMID:26355503

  1. Computed tomography to quantify tooth abrasion

    NASA Astrophysics Data System (ADS)

    Kofmehl, Lukas; Schulz, Georg; Deyhle, Hans; Filippi, Andreas; Hotz, Gerhard; Berndt-Dagassan, Dorothea; Kramis, Simon; Beckmann, Felix; Müller, Bert

    2010-09-01

    Cone-beam computed tomography, also termed digital volume tomography, has become a standard technique in dentistry, allowing for fast 3D jaw imaging including denture at moderate spatial resolution. More detailed X-ray images of restricted volumes for post-mortem studies in dental anthropology are obtained by means of micro computed tomography. The present study evaluates the impact of pipe-smoking wear on tooth morphology by comparing the abraded tooth with its contralateral counterpart. A set of 60 teeth, loose or anchored in the jaw, from 12 dentitions were analyzed. After the two contralateral teeth were scanned, one dataset was mirrored before the two datasets were registered using affine and rigid registration algorithms. Rigid registration provides three translational and three rotational parameters to maximize the overlap of two rigid bodies. For the affine registration, three scaling factors are incorporated. Within the present investigation, affine and rigid registrations yield comparable values. The restriction to the six parameters of the rigid registration is not a limitation. The differences in size and shape between a tooth and its contralateral counterpart generally amount to only a few percent in the non-abraded volume, validating that the contralateral tooth is a reasonable approximation to quantify, for example, the volume loss resulting from long-term clay pipe smoking. Therefore, this approach allows quantifying the impact of the pipe abrasion on the internal tooth morphology including root canal, dentin, and enamel volumes.

  2. Stimfit: quantifying electrophysiological data with Python

    PubMed Central

    Guzman, Segundo J.; Schlögl, Alois; Schmidt-Hieber, Christoph

    2013-01-01

    Intracellular electrophysiological recordings provide crucial insights into elementary neuronal signals such as action potentials and synaptic currents. Analyzing and interpreting these signals is essential for a quantitative understanding of neuronal information processing, and requires both fast data visualization and ready access to complex analysis routines. To achieve this goal, we have developed Stimfit, a free software package for cellular neurophysiology with a Python scripting interface and a built-in Python shell. The program supports most standard file formats for cellular neurophysiology and other biomedical signals through the Biosig library. To quantify and interpret the activity of single neurons and communication between neurons, the program includes algorithms to characterize the kinetics of presynaptic action potentials and postsynaptic currents, estimate latencies between pre- and postsynaptic events, and detect spontaneously occurring events. We validate and benchmark these algorithms, give estimation errors, and provide sample use cases, showing that Stimfit represents an efficient, accessible and extensible way to accurately analyze and interpret neuronal signals. PMID:24600389

  3. Quantifying asymmetry of quantum states using entanglement

    NASA Astrophysics Data System (ADS)

    Toloui, Borzu

    2013-03-01

    For open systems, symmetric dynamics do not always lead to conservation laws. We show that, for a dynamic symmetry associated with a compact Lie group, one can derive new selection rules from entanglement theory. These selection rules apply to both closed and open systems as well as reversible and irreversible time evolutions. Our approach is based on an embedding of the system's Hilbert space into a tensor product of two Hilbert spaces allowing for the symmetric dynamics to be simulated with local operations. The entanglement of the embedded states determines which transformations are forbidden because of the symmetry. In fact, every bipartite entanglement monotone can be used to quantify the asymmetry of the initial states. Moreover, where the dynamics is reversible, each of these monotones becomes a new conserved quantity. This research has been supported by the Institute for Quantum Information Science (IQIS) at the University of Calgary, Alberta Innovates, NSERC, General Dynamics Canada, and MITACS.

  4. Measuring political polarization: Twitter shows the two sides of Venezuela

    NASA Astrophysics Data System (ADS)

    Morales, A. J.; Borondo, J.; Losada, J. C.; Benito, R. M.

    2015-03-01

    We say that a population is perfectly polarized when divided in two groups of the same size and opposite opinions. In this paper, we propose a methodology to study and measure the emergence of polarization from social interactions. We begin by proposing a model to estimate opinions in which a minority of influential individuals propagate their opinions through a social network. The result of the model is an opinion probability density function. Next, we propose an index to quantify the extent to which the resulting distribution is polarized. Finally, we apply the proposed methodology to a Twitter conversation about the late Venezuelan president, Hugo Chávez, finding a good agreement between our results and offline data. Hence, we show that our methodology can detect different degrees of polarization, depending on the structure of the network.

  5. Fuzzy Entropy Method for Quantifying Supply Chain Networks Complexity

    NASA Astrophysics Data System (ADS)

    Zhang, Jihui; Xu, Junqin

    A supply chain is a special kind of complex network. Its complexity and uncertainty make it very difficult to control and manage. Supply chains are faced with a rising complexity of products, structures, and processes. Because of the strong link between a supply chain’s complexity and its efficiency, supply chain complexity management has become a major challenge of today’s business management. The aim of this paper is to quantify the complexity and organization level of an industrial network, working towards the development of a ‘Supply Chain Network Analysis’ (SCNA). By measuring flows of goods and interaction costs between different sectors of activity within the supply chain borders, a network of flows is built and successively investigated by network analysis. The result of this study shows that our approach can provide an interesting conceptual perspective in which the modern supply network can be framed, and that network analysis can handle these issues in practice.

  6. Quantifying consumption rates of dissolved oxygen along bed forms

    NASA Astrophysics Data System (ADS)

    Boano, Fulvio; De Falco, Natalie; Arnon, Shai

    2016-04-01

    Streambed interfaces represent hotspots for nutrient transformations because they host different microbial species, and the evaluation of these reaction rates is important to assess the fate of nutrients in riverine environments. In this work we analyze a series of flume experiments on oxygen demand in dune-shaped hyporheic sediments under losing and gaining flow conditions. We employ a new modeling code to quantify oxygen consumption rates from observed vertical profiles of oxygen concentration. The code accounts for transport by molecular diffusion and water advection, and automatically determines the reaction rates that provide the best fit between observed and modeled concentration values. The results show that reaction rates are not uniformly distributed across the streambed, in agreement with the expected behavior predicted by hyporheic exchange theory. Oxygen consumption was found to be highly influenced by the presence of gaining or losing flow conditions, which controlled the delivery of labile DOC to streambed microorganisms.

  7. Quantifying Global Uncertainties in a Simple Microwave Rainfall Algorithm

    NASA Technical Reports Server (NTRS)

    Kummerow, Christian; Berg, Wesley; Thomas-Stahle, Jody; Masunaga, Hirohiko

    2006-01-01

    While a large number of methods exist in the literature for retrieving rainfall from passive microwave brightness temperatures, little has been written about the quantitative assessment of the expected uncertainties in these rainfall products at various time and space scales. The latter is the result of two factors: sparse validation sites over most of the world's oceans, and algorithm sensitivities to rainfall regimes that cause inconsistencies against validation data collected at different locations. To make progress in this area, a simple probabilistic algorithm is developed. The algorithm uses an a priori database constructed from the Tropical Rainfall Measuring Mission (TRMM) radar data coupled with radiative transfer computations. Unlike efforts designed to improve rainfall products, this algorithm takes a step backward in order to focus on uncertainties. In addition to inversion uncertainties, the construction of the algorithm allows errors resulting from incorrect databases, incomplete databases, and time- and space-varying databases to be examined. These are quantified. Results show that the simple algorithm reduces errors introduced by imperfect knowledge of precipitation radar (PR) rain by a factor of 4 relative to an algorithm that is tuned to the PR rainfall. Database completeness does not introduce any additional uncertainty at the global scale, while climatologically distinct space/time domains add approximately 25% uncertainty that cannot be detected by a radiometer alone. Of this value, 20% is attributed to changes in cloud morphology and microphysics, while 5% is a result of changes in the rain/no-rain thresholds. All but 2%-3% of this variability can be accounted for by considering the implicit assumptions in the algorithm. Additional uncertainties introduced by the details of the algorithm formulation are not quantified in this study because of the need for independent measurements that are beyond the scope of this paper. A validation strategy

  8. Quantifying strain variability in modeling growth of Listeria monocytogenes.

    PubMed

    Aryani, D C; den Besten, H M W; Hazeleger, W C; Zwietering, M H

    2015-09-01

    Prediction of microbial growth kinetics can differ from the actual behavior of the target microorganisms. In the present study, the impact of strain variability on maximum specific growth rate (μmax, h(-1)) was quantified using twenty Listeria monocytogenes strains. The μmax was determined as a function of four different variables, namely pH, water activity (aw)/NaCl concentration [NaCl], undissociated lactic acid concentration ([HA]), and temperature (T). The strain variability was compared to biological and experimental variabilities to determine their importance. The experiment was done in duplicate at the same time to quantify experimental variability and reproduced at least twice on different experimental days to quantify biological (reproduction) variability. For all variables, experimental variability was clearly lower than biological variability and strain variability; and remarkably, biological variability was similar to strain variability. Strain variability in cardinal growth parameters, namely pHmin, [NaCl]max, [HA]max, and Tmin, was further investigated by fitting secondary growth models to the μmax data, including a modified secondary pH model. The fitting results showed that L. monocytogenes had an average pHmin of 4.5 (5-95% prediction interval (PI) 4.4-4.7), [NaCl]max of 2.0 M (PI 1.8-2.1), [HA]max of 5.1 mM (PI 4.2-5.9), and Tmin of -2.2°C (PI (-3.3)-(-1.1)). The strain variability in cardinal growth parameters was benchmarked to available literature data, showing that the effect of strain variability explained around 1/3 or less of the variability found in literature. The cardinal growth parameters and their prediction intervals were used as input to illustrate the effect of strain variability on the growth of L. monocytogenes in food products with various characteristics, resulting in 2-4 log CFU/ml (g) difference in growth prediction between the most and least robust strains, depending on the type of food product. This underlined the importance
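
    A gamma-concept secondary model of this general kind can be sketched as follows; the term shapes (a Ratkowsky-type temperature term, a 1 - 10^(pHmin - pH) pH term, and linear NaCl and lactic acid terms), the reference rate, and the reference temperature are textbook simplifications rather than the modified models fitted in the study, with cardinal values taken loosely from the averages quoted above:

        import numpy as np

        T_MIN, PH_MIN, NACL_MAX, HA_MAX = -2.2, 4.5, 2.0, 5.1   # degC, -, M, mM
        MU_REF, T_REF = 1.0, 30.0                                # reference rate (1/h) at 30 degC (assumed)

        def mu_max(T, pH, nacl, ha):
            """Illustrative gamma-concept prediction of the maximum specific growth rate (1/h)."""
            g_T = ((T - T_MIN) / (T_REF - T_MIN)) ** 2   # temperature term
            g_pH = 1.0 - 10.0 ** (PH_MIN - pH)           # pH term
            g_nacl = 1.0 - nacl / NACL_MAX               # NaCl term (linear simplification)
            g_ha = 1.0 - ha / HA_MAX                     # undissociated lactic acid term
            gammas = np.array([g_T, g_pH, g_nacl, g_ha])
            return 0.0 if np.any(gammas <= 0.0) else MU_REF * float(np.prod(gammas))

        print(f"mu_max at 7 degC, pH 6.0, 0.5 M NaCl, 1 mM HA: {mu_max(7.0, 6.0, 0.5, 1.0):.3f} 1/h")

    Propagating the 5-95% prediction intervals of the cardinal parameters through the fitted secondary models is what produces the reported 2-4 log CFU/ml (g) spread in predicted growth between the most and least robust strains.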

  9. Quantifying recrystallization by electron backscatter diffraction.

    PubMed

    Jazaeri, H; Humphreys, F J

    2004-03-01

    The use of high-resolution electron backscatter diffraction in the scanning electron microscope to quantify the volume fraction of recrystallization and the recrystallization kinetics is discussed. Monitoring the changes of high-angle grain boundary (HAGB) content during annealing is shown to be a reliable method of determining the volume fraction of recrystallization during discontinuous recrystallization, where a large increase in the percentage of high-angle boundaries occurs during annealing. The results are shown to be consistent with the standard methods of studying recrystallization, such as quantitative metallography and hardness testing. Application of the method to a highly deformed material has shown that it can be used to identify the transition from discontinuous to continuous recrystallization during which there is no significant change in the percentage of HAGB during annealing. PMID:15009691
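
    The bookkeeping behind this is simple: the HAGB content is the (length-weighted) share of boundary segments whose misorientation exceeds the conventional 15 degree threshold, tracked as a function of annealing time. A minimal sketch on synthetic segment data (not the authors' EBSD processing chain):

        import numpy as np

        rng = np.random.default_rng(1)
        misorientation = rng.uniform(0.5, 62.8, 2000)   # boundary misorientations [deg] (synthetic)
        seg_length = rng.uniform(0.1, 2.0, 2000)        # boundary segment lengths [um] (synthetic)

        hagb = misorientation > 15.0                    # conventional HAGB threshold
        hagb_fraction = seg_length[hagb].sum() / seg_length.sum()
        print(f"length-weighted HAGB fraction: {hagb_fraction:.2f}")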

  10. Quantifying capital goods for waste incineration

    SciTech Connect

    Brogaard, L.K.; Riber, C.; Christensen, T.H.

    2013-06-15

    Highlights: • Materials and energy used for the construction of waste incinerators were quantified. • The data was collected from five incineration plants in Scandinavia. • Included were six main materials, electronic systems, cables and all transportation. • The capital goods contributed 2–3% compared to the direct emissions impact on GW. - Abstract: Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000–240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000–26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000–5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7–14 kg CO{sub 2} per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2–3% with respect to kg CO{sub 2} per tonne of waste combusted.

  11. Quantifying diet for nutrigenomic studies

    Technology Transfer Automated Retrieval System (TEKTRAN)

    The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nu...

  12. Quantifying anatomical shape variations in neurological disorders.

    PubMed

    Singh, Nikhil; Fletcher, P Thomas; Preston, J Samuel; King, Richard D; Marron, J S; Weiner, Michael W; Joshi, Sarang

    2014-04-01

    We develop a multivariate analysis of brain anatomy to identify the relevant shape deformation patterns and quantify the shape changes that explain corresponding variations in clinical neuropsychological measures. We use kernel Partial Least Squares (PLS) and formulate a regression model in the tangent space of the manifold of diffeomorphisms characterized by deformation momenta. The scalar deformation momenta completely encode the diffeomorphic changes in anatomical shape. In this model, the clinical measures are the response variables, while the anatomical variability is treated as the independent variable. To better understand the "shape-clinical response" relationship, we also control for demographic confounders, such as age, gender, and years of education in our regression model. We evaluate the proposed methodology on the Alzheimer's Disease Neuroimaging Initiative (ADNI) database using baseline structural MR imaging data and neuropsychological evaluation test scores. We demonstrate the ability of our model to quantify the anatomical deformations in units of clinical response. Our results also demonstrate that the proposed method is generic and generates reliable shape deformations both in terms of the extracted patterns and the amount of shape changes. We found that while the hippocampus and amygdala emerge as mainly responsible for changes in test scores for global measures of dementia and memory function, they are not a determinant factor for executive function. Another critical finding was the appearance of thalamus and putamen as most important regions that relate to executive function. These resulting anatomical regions were consistent with very high confidence irrespective of the size of the population used in the study. This data-driven global analysis of brain anatomy was able to reach similar conclusions as other studies in Alzheimer's disease based on predefined ROIs, together with the identification of other new patterns of deformation. The
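
    As a simplified stand-in for the regression step (ordinary linear PLS on synthetic features rather than kernel PLS on deformation momenta, and without the adjustment for age, gender, and education), the shape-to-score model can be sketched with scikit-learn:

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_subjects, n_features = 200, 500                 # hypothetical subjects x shape features
        X = rng.normal(size=(n_subjects, n_features))     # stand-in for deformation momenta features
        beta = np.zeros(n_features)
        beta[:10] = 1.0                                   # only a few features carry signal
        y = X @ beta + rng.normal(scale=0.5, size=n_subjects)   # stand-in for a clinical test score

        pls = PLSRegression(n_components=3)
        pls.fit(X, y)
        print(f"PLS R^2 (in-sample): {pls.score(X, y):.2f}")
        # pls.x_loadings_ holds the feature patterns most associated with the score,
        # the analogue of the shape-deformation patterns extracted in the paper.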

  13. VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    VIEW SHOWING WEST ELEVATION, EAST SIDE OF MEYER AVENUE. SHOWS 499-501, MUNOZ HOUSE (AZ-73-37) ON FAR RIGHT - Antonio Bustamente House, 485-489 South Meyer Avenue & 186 West Kennedy Street, Tucson, Pima County, AZ

  14. 15. Detail showing lower chord pin-connected to vertical member, showing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    15. Detail showing lower chord pin-connected to vertical member, showing floor beam riveted to extension of vertical member below pin-connection, and showing brackets supporting cantilevered sidewalk. View to southwest. - Selby Avenue Bridge, Spanning Short Line Railways track at Selby Avenue between Hamline & Snelling Avenues, Saint Paul, Ramsey County, MN

  15. Processing of Numerical and Proportional Quantifiers

    ERIC Educational Resources Information Center

    Shikhare, Sailee; Heim, Stefan; Klein, Elise; Huber, Stefan; Willmes, Klaus

    2015-01-01

    Quantifier expressions like "many" and "at least" are part of a rich repository of words in language representing magnitude information. The role of numerical processing in comprehending quantifiers was studied in a semantic truth value judgment task, asking adults to quickly verify sentences about visual displays using…

  16. Deaf Learners' Knowledge of English Universal Quantifiers

    ERIC Educational Resources Information Center

    Berent, Gerald P.; Kelly, Ronald R.; Porter, Jeffrey E.; Fonzi, Judith

    2008-01-01

    Deaf and hearing students' knowledge of English sentences containing universal quantifiers was compared through their performance on a 50-item, multiple-picture task that required students to decide whether each of five pictures represented a possible meaning of a target sentence. The task assessed fundamental knowledge of quantifier sentences,…

  17. Scalar Quantifiers: Logic, Acquisition, and Processing

    ERIC Educational Resources Information Center

    Geurts, Bart; Katsos, Napoleon; Cummins, Chris; Moons, Jonas; Noordman, Leo

    2010-01-01

    Superlative quantifiers ("at least 3", "at most 3") and comparative quantifiers ("more than 2", "fewer than 4") are traditionally taken to be interdefinable: the received view is that "at least n" and "at most n" are equivalent to "more than n-1" and "fewer than n+1", respectively. Notwithstanding the prima facie plausibility of this claim, Geurts…

  18. Dust as interstellar catalyst. I. Quantifying the chemical desorption process

    NASA Astrophysics Data System (ADS)

    Minissale, M.; Dulieu, F.; Cazaux, S.; Hocuk, S.

    2016-01-01

    Context. The presence of dust in the interstellar medium has profound consequences on the chemical composition of regions where stars are forming. Recent observations show that many species formed onto dust are populating the gas phase, especially in cold environments where UV- and cosmic-ray-induced photons do not account for such processes. Aims: The aim of this paper is to understand and quantify the process that releases solid species into the gas phase, the so-called chemical desorption process, so that an explicit formula can be derived that can be included in astrochemical models. Methods: We present a collection of experimental results of more than ten reactive systems. For each reaction, different substrates such as oxidized graphite and compact amorphous water ice were used. We derived a formula for reproducing the efficiencies of the chemical desorption process that considers the equipartition of the energy of newly formed products, followed by classical bounce on the surface. In part II of this study we extend these results to astrophysical conditions. Results: The equipartition of energy correctly describes the chemical desorption process on bare surfaces. On icy surfaces, the chemical desorption process is much less efficient, and a better description of the interaction with the surface is still needed. Conclusions: We show that the mechanism that directly transforms solid species into gas phase species is efficient for many reactions.

  19. Quantifying and Predicting Reactive Transport

    SciTech Connect

    Peter C. Burns, Department of Civil Engineering and Geological Sciences, University of Notre Dame

    2009-12-04

    This project was led by Dr. Jiamin Wan at Lawrence Berkeley National Laboratory. Peter Burns provided expertise in uranium mineralogy and in identification of uranium minerals in test materials. Dr. Wan conducted column tests regarding uranium transport at LBNL, and samples of the resulting columns were sent to Dr. Burns for analysis. Samples were analyzed for uranium mineralogy by X-ray powder diffraction and by scanning electron microscopy, and results were provided to Dr. Wan for inclusion in the modeling effort. Full details of the project can be found in Dr. Wan's final reports for the associated effort at LBNL.

  20. Quantifying chaos for ecological stoichiometry

    NASA Astrophysics Data System (ADS)

    Duarte, Jorge; Januário, Cristina; Martins, Nuno; Sardanyés, Josep

    2010-09-01

    The theory of ecological stoichiometry considers ecological interactions among species with different chemical compositions. Both experimental and theoretical investigations have shown the importance of species composition in the outcome of the population dynamics. A recent study of a theoretical three-species food chain model considering stoichiometry [B. Deng and I. Loladze, Chaos 17, 033108 (2007)] shows that coexistence between two consumers predating on the same prey is possible via chaos. In this work we study the topological and dynamical measures of the chaotic attractors found in such a model under ecologically relevant parameters. By using the theory of symbolic dynamics, we first compute the topological entropy associated with unimodal Poincaré return maps obtained by Deng and Loladze from a dimension reduction. With this measure we numerically prove chaotic competitive coexistence, which is characterized by positive topological entropy and positive Lyapunov exponents, achieved when the first predator reduces its maximum growth rate, as happens at increasing δ1. However, for higher values of δ1 the dynamics become stable again due to an asymmetric bubble-like bifurcation scenario. We also show that a decrease in the efficiency of the predator sensitive to prey's quality (increasing parameter ζ) stabilizes the dynamics. Finally, we estimate the fractal dimension of the chaotic attractors for the stoichiometric ecological model.
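
    One standard numerical route to the topological entropy of a unimodal map uses the lap-number result h_top = lim (1/n) log l(f^n), where l(f^n) is the number of monotone branches of the n-th iterate. The sketch below applies it to the logistic map purely to illustrate the computation; it is not the Poincaré return map of the stoichiometric food-chain model:

        import numpy as np

        def logistic(x, r=4.0):
            return r * x * (1.0 - x)

        def lap_number(n, grid=400_000, r=4.0):
            """Count the monotone laps of the n-th iterate of the map on a fine grid."""
            y = np.linspace(0.0, 1.0, grid)
            for _ in range(n):
                y = logistic(y, r)
            dy = np.diff(y)
            sign_changes = np.sum(np.sign(dy[1:]) * np.sign(dy[:-1]) < 0)
            return sign_changes + 1

        ns = np.arange(4, 13)
        laps = np.array([lap_number(int(n)) for n in ns])
        slope = np.polyfit(ns, np.log(laps), 1)[0]   # h_top is the growth rate of log(laps)
        print(f"estimated topological entropy: {slope:.3f} (log 2 = {np.log(2):.3f})")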

  1. Quantifying Coral Reef Ecosystem Services

    EPA Science Inventory

    Coral reefs have been declining during the last four decades as a result of both local and global anthropogenic stresses. Numerous research efforts to elucidate the nature, causes, magnitude, and potential remedies for the decline have led to the widely held belief that the recov...

  2. Quantifying Diet for Nutrigenomic Studies

    PubMed Central

    Tucker, Katherine L.; Smith, Caren E.; Lai, Chao-Qiang; Ordovas, Jose M.

    2015-01-01

    The field of nutrigenomics shows tremendous promise for improved understanding of the effects of dietary intake on health. The knowledge that metabolic pathways may be altered in individuals with genetic variants in the presence of certain dietary exposures offers great potential for personalized nutrition advice. However, although considerable resources have gone into improving technology for measurement of the genome and biological systems, dietary intake assessment remains inadequate. Each of the methods currently used has limitations that may be exaggerated in the context of gene x nutrient interaction in large multiethnic studies. Because of the specificity of most gene x nutrient interactions, valid data are needed for nutrient intakes at the individual level. Most statistical adjustment efforts are designed to improve estimates of nutrient intake distributions in populations and are unlikely to solve this problem. An improved method of direct measurement of individual usual dietary intake that is unbiased across populations is urgently needed. PMID:23642200

  3. Diagnostic measure to quantify loss of clinical components in multi-lead electrocardiogram.

    PubMed

    Tripathy, R K; Sharma, L N; Dandapat, S

    2016-03-01

    In this Letter, a novel principal component (PC)-based diagnostic measure (PCDM) is proposed to quantify loss of clinical components in the multi-lead electrocardiogram (MECG) signals. The analysis of MECG shows that the clinical components are captured in a few PCs. The proposed diagnostic measure is defined as the sum of weighted percentage root mean square difference (PRD) between the PCs of original and processed MECG signals. The values of the weights depend on the clinical importance of the PCs. The PCDM is tested over MECG enhancement and a novel MECG data reduction scheme. The proposed measure is compared with weighted diagnostic distortion, wavelet energy diagnostic distortion and PRD. The qualitative evaluation is performed using Spearman rank-order correlation coefficient (SROCC) and Pearson linear correlation coefficient. The simulation results demonstrate that the PCDM performs better at quantifying loss of clinical components in MECG, with an SROCC value of 0.9686 against the subjective measure. PMID:27222735
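
    As described, the measure reduces to a weighted sum of PRD values computed between corresponding principal components of the original and processed signals. A minimal sketch (the number of PCs, the weight values, and the synthetic signals are illustrative assumptions, not the Letter's choices):

        import numpy as np

        def prd(x, y):
            """Percentage root-mean-square difference between two signals."""
            return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

        def pcdm(pcs_original, pcs_processed, weights):
            """Weighted sum of PRDs between corresponding principal components."""
            return sum(w * prd(po, pp)
                       for w, po, pp in zip(weights, pcs_original, pcs_processed))

        rng = np.random.default_rng(0)
        pcs_orig = [rng.normal(size=1000) for _ in range(3)]                   # PCs of the original MECG
        pcs_proc = [p + rng.normal(scale=0.05, size=1000) for p in pcs_orig]   # PCs after processing
        weights = [0.6, 0.3, 0.1]   # clinically most important PC weighted highest (assumed values)
        print(f"PCDM = {pcdm(pcs_orig, pcs_proc, weights):.2f}")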

  4. Is it Logical to Count on Quantifiers? Dissociable Neural Networks Underlying Numerical and Logical Quantifiers

    PubMed Central

    Troiani, Vanessa; Peelle, Jonathan E.; Clark, Robin; Grossman, Murray

    2009-01-01

    The present study examined the neural substrate of two classes of quantifiers: Numerical quantifiers like “at least three” which require magnitude processing, and logical quantifiers like “some” which can be satisfied using a simple form of perceptual logic. We assessed these distinct classes of quantifiers with converging observations from two sources: functional imaging data from healthy adults, and behavioral and structural data from patients with corticobasal degeneration, who have acalculia. Our findings are consistent with the claim that numerical quantifier comprehension depends on a parietal-dorsolateral prefrontal network, but logical quantifier comprehension depends instead on a rostral medial prefrontal-posterior cingulate network. These observations emphasize the important contribution of abstract number knowledge to the meaning of numerical quantifiers in semantic memory and the potential role of a logic-based evaluation in the service of non-numerical quantifiers. PMID:18789346

  5. 28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    28. MAP SHOWING LOCATION OF ARVFS FACILITY AS BUILT. SHOWS LINCOLN BOULEVARD, BIG LOST RIVER, AND NAVAL REACTORS FACILITY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-101-2. DATED OCTOBER 12, 1965. INEL INDEX CODE NUMBER: 075 0101 851 151969. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  6. 8. Detail showing concrete abutment, showing substructure of bridge, specifically ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. Detail showing concrete abutment, showing substructure of bridge, specifically west side of arch and substructure. - Presumpscot Falls Bridge, Spanning Presumptscot River at Allen Avenue extension, 0.75 mile west of U.S. Interstate 95, Falmouth, Cumberland County, ME

  7. Pea Plants Show Risk Sensitivity.

    PubMed

    Dener, Efrat; Kacelnik, Alex; Shemesh, Hagai

    2016-07-11

    Sensitivity to variability in resources has been documented in humans, primates, birds, and social insects, but the fit between empirical results and the predictions of risk sensitivity theory (RST), which aims to explain this sensitivity in adaptive terms, is weak [1]. RST predicts that agents should switch between risk proneness and risk aversion depending on state and circumstances, especially according to the richness of the least variable option [2]. Unrealistic assumptions about agents' information processing mechanisms and poor knowledge of the extent to which variability imposes specific selection in nature are strong candidates to explain the gap between theory and data. RST's rationale also applies to plants, where it has not hitherto been tested. Given the differences between animals' and plants' information processing mechanisms, such tests should help unravel the conflicts between theory and data. Measuring root growth allocation by split-root pea plants, we show that they favor variability when mean nutrient levels are low and the opposite when they are high, supporting the most widespread RST prediction. However, the combination of non-linear effects of nitrogen availability at local and systemic levels may explain some of these effects as a consequence of mechanisms not necessarily evolved to cope with variance [3, 4]. This resembles animal examples in which properties of perception and learning cause risk sensitivity even though they are not risk adaptations [5]. PMID:27374342

  8. Quantifying gyrotropy in magnetic reconnection

    NASA Astrophysics Data System (ADS)

    Swisdak, M.

    2016-01-01

    A new scalar measure of the gyrotropy of a pressure tensor is defined. Previously suggested measures are shown to be incomplete by means of examples for which they give unphysical results. To demonstrate its usefulness as an indicator of magnetic topology, the new measure is calculated for electron data taken from numerical simulations of magnetic reconnection, shown to peak at separatrices and X points, and compared to the other measures. The new diagnostic has potential uses in analyzing spacecraft observations, and so a method for calculating it from measurements performed in an arbitrary coordinate system is derived.

  9. Common ecology quantifies human insurgency.

    PubMed

    Bohorquez, Juan Camilo; Gourley, Sean; Dixon, Alexander R; Spagat, Michael; Johnson, Neil F

    2009-12-17

    Many collective human activities, including violence, have been shown to exhibit universal patterns. The size distributions of casualties both in whole wars from 1816 to 1980 and terrorist attacks have separately been shown to follow approximate power-law distributions. However, the possibility of universal patterns ranging across wars in the size distribution or timing of within-conflict events has barely been explored. Here we show that the sizes and timing of violent events within different insurgent conflicts exhibit remarkable similarities. We propose a unified model of human insurgency that reproduces these commonalities, and explains conflict-specific variations quantitatively in terms of underlying rules of engagement. Our model treats each insurgent population as an ecology of dynamically evolving, self-organized groups following common decision-making processes. Our model is consistent with several recent hypotheses about modern insurgency, is robust to many generalizations, and establishes a quantitative connection between human insurgency, global terrorism and ecology. Its similarity to financial market models provides a surprising link between violent and non-violent forms of human behaviour. PMID:20016600

  10. Quantifying hybridization in realistic time.

    PubMed

    Collins, Joshua; Linz, Simone; Semple, Charles

    2011-10-01

    Recently, numerous practical and theoretical studies in evolutionary biology aim at calculating the extent to which reticulation-for example, horizontal gene transfer, hybridization, or recombination-has influenced the evolution of a set of present-day species. It has been shown that inferring the minimum number of hybridization events that is needed to simultaneously explain the evolutionary history for a set of trees is an NP-hard and also fixed-parameter tractable problem. In this article, we give a new fixed-parameter algorithm for computing the minimum number of hybridization events when two rooted binary phylogenetic trees are given. This newly developed algorithm is based on interleaving-a technique using repeated kernelization steps that are applied throughout the exhaustive search part of a fixed-parameter algorithm. To show that our algorithm runs efficiently enough to be applicable to a wide range of practical problem instances, we apply it to a grass data set and highlight the significant improvements in terms of running times in comparison to an algorithm that has previously been implemented. PMID:21210735

  11. Quantifying the value of redundant measurements at GRUAN sites

    NASA Astrophysics Data System (ADS)

    Madonna, F.; Rosoldi, M.; Güldner, J.; Haefele, A.; Kivi, R.; Cadeddu, M. P.; Sisterson, D.; Pappalardo, G.

    2014-06-01

    The potential for measurement redundancy to reduce uncertainty in atmospheric variables has not been investigated comprehensively for climate observations. We evaluated the usefulness of entropy and mutual correlation concepts, as defined in information theory, for quantifying random uncertainty and redundancy in time series of atmospheric water vapor provided by five highly instrumented GRUAN (GCOS [Global Climate Observing System] Reference Upper-Air Network) Stations in 2010-2012. Results show that the random uncertainties for radiosonde, frost-point hygrometer, Global Positioning System, microwave and infrared radiometers, and Raman lidar measurements differed by less than 8%. Comparisons of time series of the Integrated Water Vapor (IWV) content from ground-based remote sensing instruments with in situ soundings showed that microwave radiometers have the highest redundancy and therefore the highest potential to reduce random uncertainty of IWV time series estimated by radiosondes. Moreover, the random uncertainty of a time series from one instrument can be reduced by ~60% by constraining the measurements with those from another instrument. The best reduction of random uncertainty resulted from conditioning of Raman lidar measurements with microwave radiometer measurements. Specific instruments are recommended for atmospheric water vapor measurements at GRUAN sites. This approach can be applied to the study of redundant measurements for other climate variables.
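
    The entropy and mutual-information bookkeeping behind such a redundancy analysis can be sketched with simple histogram estimators (synthetic IWV series standing in for two co-located GRUAN instruments; bin count and noise levels are arbitrary):

        import numpy as np

        def entropy_mi(x, y, bins=20):
            """Histogram estimates of H(x), H(y) and the mutual information I(x; y), in bits."""
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy = pxy / pxy.sum()
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)
            hx = -np.sum(px[px > 0] * np.log2(px[px > 0]))
            hy = -np.sum(py[py > 0] * np.log2(py[py > 0]))
            hxy = -np.sum(pxy[pxy > 0] * np.log2(pxy[pxy > 0]))
            return hx, hy, hx + hy - hxy

        rng = np.random.default_rng(0)
        iwv_sonde = rng.gamma(4.0, 5.0, size=2000)              # synthetic radiosonde IWV [kg/m^2]
        iwv_mwr = iwv_sonde + rng.normal(0.0, 1.5, size=2000)   # synthetic microwave radiometer IWV
        hx, hy, mi = entropy_mi(iwv_sonde, iwv_mwr)
        print(f"H(sonde) = {hx:.2f} bits, H(MWR) = {hy:.2f} bits, I = {mi:.2f} bits")
        # Mutual information close to the marginal entropies signals strong redundancy
        # between the two time series; it is this overlap that can be exploited to
        # constrain one instrument's series with the other.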

  12. Quantifying climatological ranges and anomalies for Pacific coral reef ecosystems.

    PubMed

    Gove, Jamison M; Williams, Gareth J; McManus, Margaret A; Heron, Scott F; Sandin, Stuart A; Vetter, Oliver J; Foley, David G

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic-biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will help
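
    The climatology-and-anomaly bookkeeping behind such metrics can be sketched as follows (a synthetic monthly SST series; the spatially constrained, island- and atoll-scale averaging described in the paper is not reproduced):

        import numpy as np

        rng = np.random.default_rng(0)
        years, months = 15, 12
        t = np.arange(years * months)
        sst = 27.0 + 1.5 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 0.3, t.size)   # monthly SST [degC]

        monthly = sst.reshape(years, months)
        climatology = monthly.mean(axis=0)                    # long-term mean for each calendar month
        clim_range = climatology.max() - climatology.min()    # climatological range
        anomaly = monthly - climatology                       # departure from the monthly climatology
        print(f"climatological range: {clim_range:.2f} degC, "
              f"largest anomaly: {np.abs(anomaly).max():.2f} degC")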

  13. Quantifying Climatological Ranges and Anomalies for Pacific Coral Reef Ecosystems

    PubMed Central

    Gove, Jamison M.; Williams, Gareth J.; McManus, Margaret A.; Heron, Scott F.; Sandin, Stuart A.; Vetter, Oliver J.; Foley, David G.

    2013-01-01

    Coral reef ecosystems are exposed to a range of environmental forcings that vary on daily to decadal time scales and across spatial scales spanning from reefs to archipelagos. Environmental variability is a major determinant of reef ecosystem structure and function, including coral reef extent and growth rates, and the abundance, diversity, and morphology of reef organisms. Proper characterization of environmental forcings on coral reef ecosystems is critical if we are to understand the dynamics and implications of abiotic–biotic interactions on reef ecosystems. This study combines high-resolution bathymetric information with remotely sensed sea surface temperature, chlorophyll-a and irradiance data, and modeled wave data to quantify environmental forcings on coral reefs. We present a methodological approach to develop spatially constrained, island- and atoll-scale metrics that quantify climatological range limits and anomalous environmental forcings across U.S. Pacific coral reef ecosystems. Our results indicate considerable spatial heterogeneity in climatological ranges and anomalies across 41 islands and atolls, with emergent spatial patterns specific to each environmental forcing. For example, wave energy was greatest at northern latitudes and generally decreased with latitude. In contrast, chlorophyll-a was greatest at reef ecosystems proximate to the equator and northern-most locations, showing little synchrony with latitude. In addition, we find that the reef ecosystems with the highest chlorophyll-a concentrations (Jarvis, Howland, Baker, Palmyra and Kingman) are each uninhabited and are characterized by high hard coral cover and large numbers of predatory fishes. Finally, we find that scaling environmental data to the spatial footprint of individual islands and atolls is more likely to capture local environmental forcings, as chlorophyll-a concentrations decreased at relatively short distances (>7 km) from 85% of our study locations. These metrics will

  14. Quantifying capital goods for waste incineration.

    PubMed

    Brogaard, L K; Riber, C; Christensen, T H

    2013-06-01

    Materials and energy used for the construction of modern waste incineration plants were quantified. The data was collected from five incineration plants (72,000-240,000 tonnes per year) built in Scandinavia (Norway, Finland and Denmark) between 2006 and 2012. Concrete for the buildings was the main material used amounting to 19,000-26,000 tonnes per plant. The quantification further included six main materials, electronic systems, cables and all transportation. The energy used for the actual on-site construction of the incinerators was in the range 4000-5000 MW h. In terms of the environmental burden of producing the materials used in the construction, steel for the building and the machinery contributed the most. The material and energy used for the construction corresponded to the emission of 7-14 kg CO2 per tonne of waste combusted throughout the lifetime of the incineration plant. The assessment showed that, compared to data reported in the literature on direct emissions from the operation of incinerators, the environmental impacts caused by the construction of buildings and machinery (capital goods) could amount to 2-3% with respect to kg CO2 per tonne of waste combusted. PMID:23561797

  15. Asteroid Geophysics and Quantifying the Impact Hazard

    NASA Technical Reports Server (NTRS)

    Sears, D.; Wooden, D. H.; Korycanksy, D. G.

    2015-01-01

    Probably the major challenge in understanding, quantifying, and mitigating the effects of an impact on Earth is understanding the nature of the impactor. Of the roughly 25 meteorite craters on the Earth that have associated meteorites, all but one was produced by an iron meteorite and only one was produced by a stony meteorite. Equally important, even meteorites of a given chemical class produce a wide variety of behavior in the atmosphere. This is because they show considerable diversity in their mechanical properties, which have a profound influence on the behavior of meteorites during atmospheric passage. Some stony meteorites are weak and do not reach the surface or reach the surface as thousands of relatively harmless pieces. Some stony meteorites roll into a maximum drag configuration and are strong enough to remain intact so a large single object reaches the surface. Others have high concentrations of water that may facilitate disruption. However, while meteorite falls and meteorites provide invaluable information on the physical nature of the objects entering the atmosphere, there are many unknowns concerning size and scale that can only be determined from the pre-atmospheric properties of the asteroids. Their internal structure, their thermal properties, their internal strength and composition, will all play a role in determining the behavior of the object as it passes through the atmosphere, whether it produces an airblast and at what height, and the nature of the impact and amount and distribution of ejecta.

  16. The missing metric: quantifying contributions of reviewers

    PubMed Central

    Cantor, Maurício; Gero, Shane

    2015-01-01

    The number of contributing reviewers often outnumbers the authors of publications. This has led to apathy towards reviewing and the conclusion that the peer-review system is broken. Given the trade-offs between submitting and reviewing manuscripts, reviewers and authors naturally want visibility for their efforts. While study after study has called for revolutionizing publication practices, the current paradigm does not recognize reviewers' time and expertise. We propose the R-index as a simple way to quantify scientists' contributions as reviewers. We modelled its performance using simulations based on real data to show that early–mid career scientists, who complete high-quality reviews of longer manuscripts within their field, can perform as well as leading scientists reviewing only for high-impact journals. By giving citeable academic recognition for reviewing, R-index will encourage more participation with better reviews, regardless of the career stage. Moreover, the R-index will allow editors to exploit scores to manage and improve their review team, and for journals to promote high average scores as signals of a practical and efficient service to authors. Peer-review is a pervasive necessity across disciplines and the simple utility of this missing metric will credit a valuable aspect of academic productivity without having to revolutionize the current peer-review system. PMID:26064609

  17. Deciphering faces: quantifiable visual cues to weight.

    PubMed

    Coetzee, Vinet; Chen, Jingying; Perrett, David I; Stephen, Ian D

    2010-01-01

    Body weight plays a crucial role in mate choice, as weight is related to both attractiveness and health. People are quite accurate at judging weight in faces, but the cues used to make these judgments have not been defined. This study consisted of two parts. First, we wanted to identify quantifiable facial cues that are related to body weight, as defined by body mass index (BMI). Second, we wanted to test whether people use these cues to judge weight. In study 1, we recruited two groups of Caucasian and two groups of African participants, determined their BMI and measured their 2-D facial images for: width-to-height ratio, perimeter-to-area ratio, and cheek-to-jaw-width ratio. All three measures were significantly related to BMI in males, while the width-to-height and cheek-to-jaw-width ratios were significantly related to BMI in females. In study 2, these images were rated for perceived weight by Caucasian observers. We showed that these observers use all three cues to judge weight in African and Caucasian faces of both sexes. These three facial cues, width-to-height ratio, perimeter-to-area ratio, and cheek-to-jaw-width ratio, are therefore not only related to actual weight but provide a basis for perceptual attributes as well. PMID:20301846

  18. Quantifying asymmetry: ratios and alternatives.

    PubMed

    Franks, Erin M; Cabo, Luis L

    2014-08-01

    Traditionally, the study of metric skeletal asymmetry has relied largely on univariate analyses, utilizing ratio transformations when the goal is comparing asymmetries in skeletal elements or populations of dissimilar dimensions. Under this approach, raw asymmetries are divided by a size marker, such as a bilateral average, in an attempt to produce size-free asymmetry indices. Henceforth, this will be referred to as "controlling for size" (see Smith: Curr Anthropol 46 (2005) 249-273). Ratios obtained in this manner often require further transformations to interpret the meaning and sources of asymmetry. This model frequently ignores the fundamental assumption of ratios: the relationship between the variables entered in the ratio must be isometric. Violations of this assumption can obscure existing asymmetries and render spurious results. In this study, we examined the performance of the classic indices in detecting and portraying the asymmetry patterns in four human appendicular bones and explored potential methodological alternatives. Examination of the ratio model revealed that it does not fulfill its intended goals in the bones examined, as the numerator and denominator are independent in all cases. The ratios also introduced strong biases in the comparisons between different elements and variables, generating spurious asymmetry patterns. Multivariate analyses strongly suggest that any transformation to control for overall size or variable range must be conducted before, rather than after, calculating the asymmetries. A combination of exploratory multivariate techniques, such as Principal Components Analysis, and confirmatory linear methods, such as regression and analysis of covariance, appear as a promising and powerful alternative to the use of ratios. PMID:24842694

  19. Planning a Successful Tech Show

    ERIC Educational Resources Information Center

    Nikirk, Martin

    2011-01-01

    Tech shows are a great way to introduce prospective students, parents, and local business and industry to a technology and engineering or career and technical education program. In addition to showcasing instructional programs, a tech show allows students to demonstrate their professionalism and skills, practice public presentations, and interact…

  20. Hey Teacher, Your Personality's Showing!

    ERIC Educational Resources Information Center

    Paulsen, James R.

    1977-01-01

    A study of 30 fourth, fifth, and sixth grade teachers and 300 of their students showed that a teacher's age, sex, and years of experience did not relate to students' mathematics achievement, but that more effective teachers showed greater "freedom from defensive behavior" than did less effective teachers. (DT)

  1. Quantifying drug-protein binding in vivo.

    SciTech Connect

    Buchholz, B; Bench, G; Keating III, G; Palmblad, M; Vogel, J; Grant, P G; Hillegonds, D

    2004-02-17

    Accelerator mass spectrometry (AMS) provides precise quantitation of isotope labeled compounds that are bound to biological macromolecules such as DNA or proteins. The sensitivity is high enough to allow for sub-pharmacological ('micro-') dosing to determine macromolecular targets without inducing toxicities or altering the system under study, whether it is healthy or diseased. We demonstrated an application of AMS in quantifying the physiologic effects of one dosed chemical compound upon the binding level of another compound in vivo at sub-toxic doses [4]. We are using tissues left from this study to develop protocols for quantifying specific binding to isolated and identified proteins. We also developed a new technique to quantify nanogram to milligram amounts of isolated protein at precisions that are comparable to those for quantifying the bound compound by AMS.

  2. Satellite Movie Shows Erika Dissipate

    NASA Video Gallery

    This animation of visible and infrared imagery from NOAA's GOES-West satellite from Aug. 27 to 29 shows Tropical Storm Erika move through the Eastern Caribbean Sea and dissipate near eastern Cuba. ...

  3. Portable XRF Technology to Quantify Pb in Bone In Vivo.

    PubMed

    Specht, Aaron James; Weisskopf, Marc; Nie, Linda Huiling

    2014-01-01

    Lead is a ubiquitous toxicant. Bone lead has been established as an important biomarker for cumulative lead exposures and has been correlated with adverse health effects on many systems in the body. K-shell X-ray fluorescence (KXRF) is the standard method for measuring bone lead, but this approach has many difficulties that have limited the widespread use of this exposure assessment method. With recent advancements in X-ray fluorescence (XRF) technology, we have developed a portable system that can quantify lead in bone in vivo within 3 minutes. Our study investigated improvements to the system, four calibration methods, and system validation for in vivo measurements. Our main results show that the detection limit of the system is 2.9 ppm with 2 mm soft tissue thickness, the best calibration method for in vivo measurement is background subtraction, and there is strong correlation between KXRF and portable LXRF bone lead results. Our results indicate that the technology is ready to be used in large human population studies to investigate adverse health effects of lead exposure. The portability of the system and fast measurement time should allow for this technology to greatly advance the research on lead exposure and public/environmental health. PMID:26317033
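
    A background-subtraction calibration of the kind identified here as best for in vivo use can be sketched as a straight-line fit of net (background-subtracted) counts against known phantom concentrations, with the detection limit taken as three times the background noise divided by the sensitivity; the counts, concentrations, and the 3-sigma convention below are illustrative assumptions, not the study's data:

        import numpy as np

        conc = np.array([0.0, 10.0, 25.0, 50.0, 100.0])              # Pb in calibration phantoms [ppm]
        gross = np.array([1520.0, 1890.0, 2480.0, 3430.0, 5350.0])   # gross counts in the Pb peak region
        background = 1500.0                                          # counts from a blank measurement
        sigma_bkg = np.sqrt(background)                              # Poisson estimate of background noise

        net = gross - background                                     # background subtraction
        slope, intercept = np.polyfit(conc, net, 1)                  # sensitivity [counts/ppm] and offset
        detection_limit = 3.0 * sigma_bkg / slope

        unknown_net = 2100.0 - background                            # an unknown sample
        estimate = (unknown_net - intercept) / slope
        print(f"sensitivity: {slope:.1f} counts/ppm, detection limit: {detection_limit:.1f} ppm, "
              f"estimated concentration: {estimate:.1f} ppm")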

  4. Portable XRF Technology to Quantify Pb in Bone In Vivo

    PubMed Central

    Specht, Aaron James; Weisskopf, Marc; Nie, Linda Huiling

    2014-01-01

    Lead is a ubiquitous toxicant. Bone lead has been established as an important biomarker for cumulative lead exposures and has been correlated with adverse health effects on many systems in the body. K-shell X-ray fluorescence (KXRF) is the standard method for measuring bone lead, but this approach has many difficulties that have limited the widespread use of this exposure assessment method. With recent advancements in X-ray fluorescence (XRF) technology, we have developed a portable system that can quantify lead in bone in vivo within 3 minutes. Our study investigated improvements to the system, four calibration methods, and system validation for in vivo measurements. Our main results show that the detection limit of the system is 2.9 ppm with 2 mm soft tissue thickness, the best calibration method for in vivo measurement is background subtraction, and there is strong correlation between KXRF and portable LXRF bone lead results. Our results indicate that the technology is ready to be used in large human population studies to investigate adverse health effects of lead exposure. The portability of the system and fast measurement time should allow for this technology to greatly advance the research on lead exposure and public/environmental health. PMID:26317033

  5. Quantifying capital goods for biological treatment of organic waste.

    PubMed

    Brogaard, Line K; Petersen, Per H; Nielsen, Peter D; Christensen, Thomas H

    2015-02-01

    Materials and energy used for construction of anaerobic digestion (AD) and windrow composting plants were quantified in detail. The two technologies were quantified in collaboration with consultants and producers of the parts used to construct the plants. The composting plants were quantified based on the different sizes for the three different types of waste (garden and park waste, food waste and sludge from wastewater treatment) in amounts of 10,000 or 50,000 tonnes per year. The AD plant was quantified for a capacity of 80,000 tonnes per year. Concrete and steel for the tanks were the main materials for the AD plant. For the composting plants, gravel and concrete slabs for the pavement were used in large amounts. To frame the quantification, environmental impact assessments (EIAs) showed that the steel used for tanks at the AD plant and the concrete slabs at the composting plants made the highest contribution to Global Warming. The total impact on Global Warming from the capital goods compared to the operation reported in the literature on the AD plant showed an insignificant contribution of 1-2%. For the composting plants, the capital goods accounted for 10-22% of the total impact on Global Warming from composting. PMID:25595291

  6. Quantifying Urban Groundwater in Environmental Field Observatories

    NASA Astrophysics Data System (ADS)

    Welty, C.; Miller, A. J.; Belt, K.; Smith, J. A.; Band, L. E.; Groffman, P.; Scanlon, T.; Warner, J.; Ryan, R. J.; Yeskis, D.; McGuire, M. P.

    2006-12-01

    Despite the growing footprint of urban landscapes and their impacts on hydrologic and biogeochemical cycles, comprehensive field studies of urban water budgets are few. The cumulative effects of urban infrastructure (buildings, roads, culverts, storm drains, detention ponds, leaking water supply and wastewater pipe networks) on temporal and spatial patterns of groundwater stores, fluxes, and flowpaths are poorly understood. The goal of this project is to develop expertise and analytical tools for urban groundwater systems that will inform future environmental observatory planning and that can be shared with research teams working in urban environments elsewhere. The work plan for this project draws on a robust set of information resources in Maryland provided by ongoing monitoring efforts of the Baltimore Ecosystem Study (BES), USGS, and the U.S. Forest Service working together with university scientists and engineers from multiple institutions. A key concern is to bridge the gap between small-scale intensive field studies and larger-scale and longer-term hydrologic patterns using synoptic field surveys, remote sensing, numerical modeling, data mining and visualization tools. Using the urban water budget as a unifying theme, we are working toward estimating the various elements of the budget in order to quantify the influence of urban infrastructure on groundwater. Efforts include: (1) comparison of base flow behavior from stream gauges in a nested set of watersheds at four different spatial scales from 0.8 to 171 km2, with diverse patterns of impervious cover and urban infrastructure; (2) synoptic survey of well water levels to characterize the regional water table; (3) use of airborne thermal infrared imagery to identify locations of groundwater seepage into streams across a range of urban development patterns; (4) use of seepage transects and tracer tests to quantify the spatial pattern of groundwater fluxes to the drainage network in selected subwatersheds; (5

  7. Quantifying Uncertainties in Land Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2012-01-01

    Uncertainties in the retrievals of microwave land surface emissivities were quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including SSM/I, TMI and AMSR-E, were studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors in the retrievals. Generally these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  8. Quantifying dynamical spillover in co-evolving multiplex networks

    PubMed Central

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D’Souza, Raissa M.

    2015-01-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways. PMID:26459949
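
    The core bookkeeping of a Multiplex Markov chain can be sketched by encoding each node pair's joint edge state across two layers and counting transitions between consecutive snapshots (toy dynamics below, not the trade/alliance or email/co-commit data); comparing the resulting transition matrix with the product of independent single-layer chains is what flags correlated, "spillover"-type dynamics:

        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_steps = 30, 50
        n_pairs = n_nodes * (n_nodes - 1) // 2

        # Toy multiplex dynamics: layer 1 flips edges at random, layer 2 partially copies layer 1
        a = rng.random(n_pairs) < 0.2          # edges in layer 1
        b = rng.random(n_pairs) < 0.2          # edges in layer 2
        snapshots = [(a.copy(), b.copy())]
        for _ in range(n_steps):
            a = np.where(rng.random(n_pairs) < 0.1, ~a, a)
            b = np.where(rng.random(n_pairs) < 0.3, a, b)
            snapshots.append((a.copy(), b.copy()))

        def states(snap):
            """Joint edge state of each node pair, (layer 1, layer 2) encoded as 0..3."""
            return (snap[0].astype(int) << 1) | snap[1].astype(int)

        counts = np.zeros((4, 4))
        for prev, curr in zip(snapshots[:-1], snapshots[1:]):
            np.add.at(counts, (states(prev), states(curr)), 1)

        row_sums = counts.sum(axis=1, keepdims=True)
        transition = counts / np.where(row_sums == 0, 1.0, row_sums)   # the multiplex transition matrix
        print(np.round(transition, 2))
        # Comparing 'transition' with the product of the two single-layer chains (the
        # independent-layer null model) quantifies how strongly edge dynamics in one
        # layer spill over into the other.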

  9. Quantifying dynamical spillover in co-evolving multiplex networks

    NASA Astrophysics Data System (ADS)

    Vijayaraghavan, Vikram S.; Noël, Pierre-André; Maoz, Zeev; D'Souza, Raissa M.

    2015-10-01

    Multiplex networks (a system of multiple networks that have different types of links but share a common set of nodes) arise naturally in a wide spectrum of fields. Theoretical studies show that in such multiplex networks, correlated edge dynamics between the layers can have a profound effect on dynamical processes. However, how to extract the correlations from real-world systems is an outstanding challenge. Here we introduce the Multiplex Markov chain to quantify correlations in edge dynamics found in longitudinal data of multiplex networks. By comparing the results obtained from the multiplex perspective to a null model which assumes layers in a network are independent, we can identify real correlations as distinct from simultaneous changes that occur due to random chance. We use this approach on two different data sets: the network of trade and alliances between nation states, and the email and co-commit networks between developers of open source software. We establish the existence of “dynamical spillover” showing the correlated formation (or deletion) of edges of different types as the system evolves. The details of the dynamics over time provide insight into potential causal pathways.

  10. Quantifying evolutionary dynamics from variant-frequency time series.

    PubMed

    Khatri, Bhavin S

    2016-01-01

    From Kimura's neutral theory of protein evolution to Hubbell's neutral theory of biodiversity, quantifying the relative importance of neutrality versus selection has long been a basic question in evolutionary biology and ecology. With deep sequencing technologies, this question is taking on a new form: given a time-series of the frequency of different variants in a population, what is the likelihood that the observation has arisen due to selection or neutrality? To tackle the 2-variant case, we exploit Fisher's angular transformation, which, despite being discovered by Ronald Fisher a century ago, has remained an intellectual curiosity. We show that, together with a heuristic approach, it provides a simple solution for the transition probability density at short times, including drift, selection and mutation. Our results show that under strong selection and sufficiently frequent sampling these evolutionary parameters can be accurately determined from simulation data, and so they provide a theoretical basis for techniques to detect selection from variant or polymorphism frequency time-series. PMID:27616332
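
    The transformation itself is elementary: a variant frequency p is mapped to theta = arcsin(sqrt(p)), which approximately stabilizes the variance of neutral drift so that it no longer depends on p. The toy Wright-Fisher simulation below (an assumed diploid population of size N; selection and mutation, which the paper treats, are omitted) illustrates that property:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 500   # diploid population size (toy value)

        def drift_variance(p0, reps=20000):
            """One generation of neutral binomial drift, in frequency and angular coordinates."""
            p1 = rng.binomial(2 * N, p0, size=reps) / (2 * N)
            theta1 = np.arcsin(np.sqrt(p1))
            return np.var(p1), np.var(theta1)

        for p0 in (0.1, 0.5, 0.9):
            var_p, var_theta = drift_variance(p0)
            print(f"p0={p0}: var(p)={var_p:.2e}, var(theta)={var_theta:.2e}")
        # var(p) depends strongly on p0, whereas var(theta) is nearly constant
        # (about 1/(8N) here), which is what makes the angular coordinate convenient
        # for writing a simple short-time transition probability density.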

  11. Quantifying Uncertainties in Land-Surface Microwave Emissivity Retrievals

    NASA Technical Reports Server (NTRS)

    Tian, Yudong; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Prigent, Catherine; Norouzi, Hamidreza; Aires, Filipe; Boukabara, Sid-Ahmed; Furuzawa, Fumie A.; Masunaga, Hirohiko

    2013-01-01

    Uncertainties in the retrievals of microwaveland-surface emissivities are quantified over two types of land surfaces: desert and tropical rainforest. Retrievals from satellite-based microwave imagers, including the Special Sensor Microwave Imager, the Tropical Rainfall Measuring Mission Microwave Imager, and the Advanced Microwave Scanning Radiometer for Earth Observing System, are studied. Our results show that there are considerable differences between the retrievals from different sensors and from different groups over these two land-surface types. In addition, the mean emissivity values show different spectral behavior across the frequencies. With the true emissivity assumed largely constant over both of the two sites throughout the study period, the differences are largely attributed to the systematic and random errors inthe retrievals. Generally, these retrievals tend to agree better at lower frequencies than at higher ones, with systematic differences ranging 1%-4% (3-12 K) over desert and 1%-7% (3-20 K) over rainforest. The random errors within each retrieval dataset are in the range of 0.5%-2% (2-6 K). In particular, at 85.5/89.0 GHz, there are very large differences between the different retrieval datasets, and within each retrieval dataset itself. Further investigation reveals that these differences are most likely caused by rain/cloud contamination, which can lead to random errors up to 10-17 K under the most severe conditions.

  12. Quantifying Ant Activity Using Vibration Measurements

    PubMed Central

    Oberst, Sebastian; Baro, Enrique Nava; Lai, Joseph C. S.; Evans, Theodore A.

    2014-01-01

    Ant behaviour is of great interest due to their sociality. Ant behaviour is typically observed visually, however there are many circumstances where visual observation is not possible. It may be possible to assess ant behaviour using vibration signals produced by their physical movement. We demonstrate through a series of bioassays with different stimuli that the level of activity of meat ants (Iridomyrmex purpureus) can be quantified using vibrations, corresponding to observations with video. We found that ants exposed to physical shaking produced the highest average vibration amplitudes followed by ants with stones to drag, then ants with neighbours, illuminated ants and ants in darkness. In addition, we devised a novel method based on wavelet decomposition to separate the vibration signal owing to the initial ant behaviour from the substrate response, which will allow signals recorded from different substrates to be compared directly. Our results indicate the potential to use vibration signals to classify some ant behaviours in situations where visual observation could be difficult. PMID:24658467

  13. Quantifying the Complexity of Flaring Active Regions

    NASA Astrophysics Data System (ADS)

    Stark, B.; Hagyard, M. J.

    1997-05-01

    While solar physicists have a better understanding of the importance magnetic fields play in the solar heating mechanism, it is still not possible to predict whether or when an active region will flare. In recent decades, qualitative studies of the changes in active region morphology have shown that there is generally an increase in the complexity of the spatial configuration of a solar active region leading up to a flare event. In this study, we quantify the spatial structure of the region using the Differential Box-Counting Method (DBC) of fractal analysis. We analyze data from NASA/Marshall Space Flight Center's vector magnetograph from two flaring active regions: AR 6089 from June 10, 1990, which produced one M1.7 flare, and AR 6659 from June 8, 9 and 10, 1991, this data set including one C5.7 and two M(6.4 and 3.2) flares. (AR 6659 produced several other flares). Several magnetic parameters are studied, including the transverse and longitudinal magnetic field components (Bt and Bl), the total field (Bmag), and the magnetic shear, which describes the non-potentiality of the field. Results are presented for the time series of magnetograms in relation to the timing of flare events.
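
    A simplified sketch of the differential box-counting computation (not the exact implementation applied to the MSFC vector magnetograms, and with a simplified box-height convention): the field is tiled with boxes of side s, the number of intensity boxes needed to span each tile's range is summed to give N(s), and the dimension is the slope of log N(s) against log(1/s):

        import numpy as np

        def dbc_dimension(img, sizes=(2, 4, 8, 16, 32)):
            """Differential box-counting estimate of the fractal dimension of a 2-D field."""
            m = img.shape[0]
            g = img.max() - img.min() + 1e-12        # total intensity range
            ns = []
            for s in sizes:
                h = s * g / m                        # box height scales with box size
                count = 0
                for i in range(0, m, s):
                    for j in range(0, m, s):
                        block = img[i:i + s, j:j + s]
                        count += int((block.max() - block.min()) // h) + 1
                ns.append(count)
            return np.polyfit(np.log(1.0 / np.array(sizes)), np.log(ns), 1)[0]

        rng = np.random.default_rng(0)
        field = rng.normal(size=(128, 128)).cumsum(axis=0).cumsum(axis=1)   # synthetic rough surface
        print(f"DBC fractal dimension: {dbc_dimension(field):.2f}")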

  14. Quantifying the Complexity of Flaring Active Regions

    NASA Technical Reports Server (NTRS)

    Stark, B.; Hagyard, M. J.

    1997-01-01

    While solar physicists have a better understanding of the importance that magnetic fields play in the solar heating mechanism, it is still not possible to predict whether or when an active region will flare. In recent decades, qualitative studies of the changes in active region morphology have shown that there is generally an increase in the complexity of the spatial configuration of a solar active region leading up to a flare event. In this study, we quantify the spatial structure of the region using the Differential Box-Counting Method (DBC) of fractal analysis. We analyze data from NASA/Marshall Space Flight Center's vector magnetograph for two flaring active regions: AR 6089 from June 10, 1990, which produced one M1.7 flare, and AR 6659 from June 8, 9 and 10, 1991, a data set that includes one C5.7 flare and two M-class flares (M6.4 and M3.2). (AR 6659 produced several other flares.) Several magnetic parameters are studied, including the transverse and longitudinal magnetic field components (Bt and Bl), the total field (Bmag), and the magnetic shear, which describes the non-potentiality of the field. Results are presented for the time series of magnetograms in relation to the timing of flare events.

  15. Identifying and quantifying urban recharge: a review

    NASA Astrophysics Data System (ADS)

    Lerner, David N.

    2002-02-01

    The sources of and pathways for groundwater recharge in urban areas are more numerous and complex than in rural environments. Buildings, roads, and other surface infrastructure combine with man-made drainage networks to change the pathways for precipitation. Some direct recharge is lost, but additional recharge can occur from storm drainage systems. Large amounts of water are imported into most cities for supply, distributed through underground pipes, and collected again in sewers or septic tanks. The leaks from these pipe networks often provide substantial recharge. Sources of recharge in urban areas are identified through piezometry, chemical signatures, and water balances. All three approaches have problems. Recharge is quantified either by individual components (direct recharge, water-mains leakage, septic tanks, etc.) or holistically. Working with individual components requires large amounts of data, much of which is uncertain and is likely to lead to large uncertainties in the final result. Recommended holistic approaches include the use of groundwater modelling and solute balances, where various types of data are integrated. Urban recharge remains an under-researched topic, with few high-quality case studies reported in the literature.

  16. Quantifying Wrinkle Features of Thin Membrane Structures

    NASA Technical Reports Server (NTRS)

    Jacobson, Mindy B.; Iwasa, Takashi; Naton, M. C.

    2004-01-01

    For future micro-systems utilizing membrane-based structures, quantified predictions of wrinkling behavior in terms of amplitude, angle and wavelength are needed to optimize the efficiency and integrity of such structures, as well as their associated control systems. For numerical analyses performed in the past, limitations on the accuracy of membrane distortion simulations have often been related to the assumptions made. This work demonstrates that critical assumptions include: effects of gravity, assumed initial or boundary conditions, and the type of element used to model the membrane. In this work, a 0.2 m x 0.2 m membrane is treated as a structural material with non-negligible bending stiffness. Finite element modeling is used to simulate wrinkling behavior due to a constant applied in-plane shear load. Membrane thickness, gravity effects, and initial imperfections with respect to flatness were varied in numerous nonlinear analysis cases. Significant findings include notable variations in wrinkle modes for thicknesses in the range of 50 microns to 1000 microns, which also depend on the presence of an applied gravity field. However, it is revealed that relationships between overall strain energy density and thickness for cases with differing initial conditions are independent of assumed initial conditions. In addition, analysis results indicate that the relationship between wrinkle amplitude scale (W/t) and structural scale (L/t) is independent of the nonlinear relationship between thickness and stiffness.

  17. Quantifying Access Disparities in Response Plans

    PubMed Central

    Indrakanti, Saratchandra; Mikler, Armin R.; O’Neill, Martin; Tiwari, Chetan

    2016-01-01

    Effective response planning and preparedness are critical to the health and well-being of communities in the face of biological emergencies. Response plans involving mass prophylaxis may seem feasible when considering the choice of points of dispensing (PODs) within a region, overall population density, and estimated traffic demands. However, the plan may fail to serve particular vulnerable subpopulations, resulting in access disparities during emergency response. For a response plan to be effective, sufficient mitigation resources must be made accessible to target populations within short, federally-mandated time frames. A major challenge in response plan design is to establish a balance between the allocation of available resources and the provision of equal access to PODs for all individuals in a given geographic region. Limitations on the availability, granularity, and currency of data to identify vulnerable populations further complicate the planning process. To address these challenges and limitations, data driven methods to quantify vulnerabilities in the context of response plans have been developed and are explored in this article. PMID:26771551

  18. Quantifying pressure variations from petrographic observations

    NASA Astrophysics Data System (ADS)

    Vrijmoed, Johannes C.; Podladchikov, Yuri Y.

    2015-04-01

    The existence of grain-scale pressure variations has been established over recent decades. Mineral reactions are often accompanied by volume and shape changes in a system where much heterogeneity in material properties exists. This gives rise to internal stresses and pressure variations during metamorphic reactions. The residual pressure in inclusions can be measured by Raman spectroscopy, but this is restricted to a narrow range of minerals that (potentially) have a well calibrated Raman shift with pressure. Several alternative methods to quantify pressure variations from petrographic observations are presented. We distinguish equilibrium and non-equilibrium methods. Equilibrium methods are based on a newly developed method to predict phase equilibria and composition under a given pressure gradient. The pressure gradient can be found by iteratively matching predicted phase assemblages and compositions with petrographic observations. Non-equilibrium methods involve the estimation of pressure variations in the initial stages of reaction, in which the system may still be isochoric. This yields the potential pressure buildup for a given unreacted rock, for example in the initial stages of dehydration of serpentinite in subduction settings.

  19. Quantifying Potential Groundwater Recharge In South Texas

    NASA Astrophysics Data System (ADS)

    Basant, S.; Zhou, Y.; Leite, P. A.; Wilcox, B. P.

    2015-12-01

    Groundwater in South Texas is heavily relied on for human consumption and irrigation of food crops. As in much of the southwestern US, woody encroachment has altered the grassland ecosystems here. While brush removal has been widely implemented in Texas with the objective of increasing groundwater recharge, the linkage between vegetation and groundwater recharge in South Texas is still unclear. Studies have been conducted to understand plant-root-water dynamics at the scale of individual plants. However, little work has been done to quantify the changes in soil water and deep percolation at the landscape scale. Modeling water flow through soil profiles can provide an estimate of the total water flowing into deep percolation. These models are especially powerful when parameterized and calibrated with long-term soil water data. In this study we parameterize the HYDRUS soil water model using long-term soil water data collected in Jim Wells County in South Texas. Soil water was measured at 20 cm intervals down to a depth of 200 cm. The parameterized model will be used to simulate soil water dynamics under a variety of precipitation regimes ranging from well above normal to severe drought conditions. The results from the model will be compared with the changes in the soil moisture profile observed in response to vegetation cover and treatments from a study in a similar setting. Comparative studies like this can be used to build new and strengthen existing hypotheses regarding deep percolation and the role of soil texture and vegetation in groundwater recharge.

  20. Quantifying Access Disparities in Response Plans.

    PubMed

    Indrakanti, Saratchandra; Mikler, Armin R; O'Neill, Martin; Tiwari, Chetan

    2016-01-01

    Effective response planning and preparedness are critical to the health and well-being of communities in the face of biological emergencies. Response plans involving mass prophylaxis may seem feasible when considering the choice of points of dispensing (PODs) within a region, overall population density, and estimated traffic demands. However, the plan may fail to serve particular vulnerable subpopulations, resulting in access disparities during emergency response. For a response plan to be effective, sufficient mitigation resources must be made accessible to target populations within short, federally-mandated time frames. A major challenge in response plan design is to establish a balance between the allocation of available resources and the provision of equal access to PODs for all individuals in a given geographic region. Limitations on the availability, granularity, and currency of data to identify vulnerable populations further complicate the planning process. To address these challenges and limitations, data driven methods to quantify vulnerabilities in the context of response plans have been developed and are explored in this article. PMID:26771551

  1. Quantifying Annual Aboveground Net Primary Production in the Intermountain West

    Technology Transfer Automated Retrieval System (TEKTRAN)

    As part of a larger project, methods were developed to quantify current year growth on grasses, forbs, and shrubs. Annual aboveground net primary production (ANPP) data are needed for this project to calibrate results from computer simulation models and remote-sensing data. Measuring annual ANPP of ...

  2. Quantifying the Risk of Blood Exposure in Optometric Clinical Education.

    ERIC Educational Resources Information Center

    Hoppe, Elizabeth

    1997-01-01

    A study attempted to quantify risk of blood exposure in optometric clinical education by surveying optometric interns in their fourth year at the Southern California College of Optometry concerning their history of exposure or use of a needle. Results indicate blood exposure or needle use ranged from 0.95 to 18.71 per 10,000 patient encounters.…

  3. Creating Slide Show Book Reports.

    ERIC Educational Resources Information Center

    Taylor, Harriet G.; Stuhlmann, Janice M.

    1995-01-01

    Describes the use of "Kid Pix 2" software by fourth grade students to develop slide-show book reports. Highlights include collaboration with education majors from Louisiana State University, changes in attitudes of the education major students and elementary students, and problems with navigation and disk space. (LRW)

  4. Producing Talent and Variety Shows.

    ERIC Educational Resources Information Center

    Szabo, Chuck

    1995-01-01

    Identifies key aspects of producing talent shows and outlines helpful hints for avoiding pitfalls and ensuring a smooth production. Presents suggestions concerning publicity, scheduling, and support personnel. Describes types of acts along with special needs and problems specific to each act. Includes a list of resources. (MJP)

  5. Shakespeare and other English Renaissance authors as characterized by Information Theory complexity quantifiers

    NASA Astrophysics Data System (ADS)

    Rosso, Osvaldo A.; Craig, Hugh; Moscato, Pablo

    2009-03-01

    We introduce novel Information Theory quantifiers in a computational linguistic study that involves a large corpus of English Renaissance literature. The 185 texts studied (136 plays and 49 poems in total), with first editions that range from 1580 to 1640, form a representative set of its period. Our data set includes 30 texts unquestionably attributed to Shakespeare; in addition we also included A Lover’s Complaint, a poem which generally appears in Shakespeare collected editions but whose authorship is currently in dispute. Our statistical complexity quantifiers combine the power of Jensen-Shannon’s divergence with the entropy variations as computed from a probability distribution function of the observed word use frequencies. Our results show, among other things, that for a given entropy poems display higher complexity than plays, that Shakespeare’s work falls into two distinct clusters in entropy, and that his work is remarkable for its homogeneity and for its closeness to overall means.
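
    A minimal sketch of the entropy/complexity pair underlying this kind of analysis: the normalised Shannon entropy of the word-frequency distribution, and a statistical complexity built from the Jensen-Shannon divergence between that distribution and the uniform one. The normalisation details here are standard choices and may differ from the authors' exact quantifiers.

    ```python
    # Hedged sketch: normalised entropy H and complexity C = H * Q, where Q is
    # the Jensen-Shannon divergence to the uniform distribution, scaled by its
    # maximum possible value for the given number of distinct words.
    import numpy as np
    from collections import Counter

    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))

    def entropy_complexity(tokens):
        counts = np.array(list(Counter(tokens).values()), dtype=float)
        p = counts / counts.sum()
        n = p.size
        u = np.full(n, 1.0 / n)                       # uniform reference distribution
        H = entropy(p) / np.log(n)                    # normalised Shannon entropy
        js = entropy((p + u) / 2) - 0.5 * entropy(p) - 0.5 * entropy(u)
        # maximum JS divergence between any distribution and the uniform one
        js_max = -0.5 * ((n + 1) / n * np.log(n + 1) + np.log(n) - 2 * np.log(2 * n))
        return H, H * js / js_max

    text = "to be or not to be that is the question".split()
    print(entropy_complexity(text))
    ```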

  6. Rectal Swabs Are Suitable for Quantifying the Carriage Load of KPC-Producing Carbapenem-Resistant Enterobacteriaceae

    PubMed Central

    Lerner, A.; Romano, J.; Chmelnitsky, I.; Navon-Venezia, S.; Edgar, R.

    2013-01-01

    It is more convenient and practical to collect rectal swabs than stool specimens to study carriage of colon pathogens. In this study, we examined the ability to use rectal swabs rather than stool specimens to quantify Klebsiella pneumoniae carbapenemase (KPC)-producing carbapenem-resistant Enterobacteriaceae (CRE). We used a quantitative real-time PCR (qPCR) assay to determine the concentration of the blaKPC gene relative to the concentration of 16S rRNA genes and a quantitative culture-based method to quantify CRE relative to total aerobic bacteria. Our results demonstrated that rectal swabs are suitable for quantifying the concentration of KPC-producing CRE and that qPCR showed higher correlation between rectal swabs and stool specimens than the culture-based method. PMID:23295937
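
    A minimal sketch of the relative-quantification arithmetic that this kind of qPCR assay rests on: the blaKPC signal is expressed relative to total 16S rRNA gene copies via a delta-Ct calculation. The Ct values and the assumption of ideal amplification efficiency are illustrative, not the study's calibrated values.

    ```python
    # Hedged sketch of delta-Ct relative quantification, assuming perfect
    # doubling per cycle; a calibrated assay would use measured efficiencies.
    def relative_load(ct_target, ct_reference, efficiency=2.0):
        """Target abundance relative to the reference gene, E^-(Ct_t - Ct_ref)."""
        return efficiency ** -(ct_target - ct_reference)

    # Example: paired swab and stool measurements (hypothetical Ct values).
    swab = relative_load(ct_target=27.4, ct_reference=18.1)
    stool = relative_load(ct_target=26.9, ct_reference=17.8)
    print(f"swab: {swab:.2e}, stool: {stool:.2e}")
    ```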

  7. Quantifying the provenance of aeolian sediments using multiple composite fingerprints

    NASA Astrophysics Data System (ADS)

    Liu, Benli; Niu, Qinghe; Qu, Jianjun; Zu, Ruiping

    2016-09-01

    We introduce a new fingerprinting method that uses multiple composite fingerprints for studies of aeolian sediment provenance. We used this method to quantify the provenance of sediments on both sides of the Qinghai-Tibetan Railway (QTR) in the Cuona Lake section of the Tibetan Plateau (TP), in an environment characterized by aeolian and fluvial interactions. The method involves repeatedly solving a linear mixing model based on mass conservation; the model is not limited to spatial scale or transport types and uses all the tracer groups that passed the range check, Kruskal-Wallis H-test, and a strict analytical solution screening. The proportional estimates that result from using different composite fingerprints are highly variable; however, the average of these fingerprints has a greater accuracy and certainty than any single fingerprint. The results show that sand from the lake beach, hilly surface, and gullies contribute, respectively, 48%, 31% and 21% to the western railway sediments and 43%, 33% and 24% to the eastern railway sediments. The difference between contributions from various sources on either side of the railway, which may increase in the future, was clearly related to variations in local transport characteristics, a conclusion that is supported by grain size analysis. The construction of the QTR changed the local cycling of materials, and the difference in provenance between the sediments that are separated by the railway reflects the changed sedimentary conditions on either side of the railway. The effectiveness of this method suggests that it will be useful in other studies of aeolian sediments.
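
    A minimal sketch of the mass-conservation mixing model at the core of composite fingerprinting: find non-negative source proportions, summing to one, that best reproduce the tracer concentrations measured in the mixture. The tracer values and the soft sum-to-one constraint are illustrative; the study repeats such solutions over many composite fingerprints and averages them.

    ```python
    # Hedged sketch: unmix a sediment sample into source proportions with
    # non-negative least squares and a heavily weighted sum-to-one row.
    import numpy as np
    from scipy.optimize import nnls

    def unmix(source_tracers, mixture_tracers, weight=100.0):
        """source_tracers: (n_tracers, n_sources); mixture_tracers: (n_tracers,)."""
        A = np.vstack([source_tracers, weight * np.ones(source_tracers.shape[1])])
        b = np.append(mixture_tracers, weight)
        p, _ = nnls(A, b)
        return p / p.sum()

    sources = np.array([[12.0, 30.0, 22.0],    # tracer 1 in beach, hill, gully sand
                        [ 4.0,  9.0,  6.0],    # tracer 2
                        [55.0, 20.0, 35.0]])   # tracer 3
    mixture = np.array([20.0, 6.3, 38.0])
    print(unmix(sources, mixture))             # estimated source proportions
    ```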

  8. Quantifying Sentiment and Influence in Blogspaces

    SciTech Connect

    Hui, Peter SY; Gregory, Michelle L.

    2010-07-25

    The weblog, or blog, has become a popular form of social media, through which authors can write posts, which can in turn generate feedback in the form of user comments. When considered in totality, a collection of blogs can thus be viewed as a sort of informal collection of mass sentiment and opinion. An obvious topic of interest might be to mine this collection to obtain some gauge of public sentiment over the wide variety of topics contained therein. However, the sheer size of the so-called blogosphere, combined with the fact that the subjects of posts can vary over a practically limitless number of topics, poses some serious challenges when any meaningful analysis is attempted. Namely, the fact that largely anyone with access to the Internet can author their own blog raises the serious issue of credibility: should some blogs be considered to be more influential than others, and consequently, when gauging sentiment with respect to a topic, should some blogs be weighted more heavily than others? In addition, as new posts and comments can be made on almost a constant basis, any blog analysis algorithm must be able to handle such updates efficiently. In this paper, we give a formalization of the blog model. We give formal methods of quantifying sentiment and influence with respect to a hierarchy of topics, with the specific aim of facilitating the computation of a per-topic, influence-weighted sentiment measure. Finally, as efficiency is a specific end goal, we give upper bounds on the time required to update these values with new posts, showing that our analysis and algorithms are scalable.
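
    A minimal sketch of the weighting idea: each post's sentiment contributes to its topic in proportion to an influence score for the originating blog. The data, the flat topic labels and the given influence scores are illustrative; the paper's formalization covers a topic hierarchy and efficient incremental updates.

    ```python
    # Hedged sketch of an influence-weighted, per-topic sentiment measure.
    from collections import defaultdict

    posts = [  # (topic, blog influence, post sentiment in [-1, 1]); made-up data
        ("energy", 0.9, +0.4), ("energy", 0.2, -0.8), ("health", 0.5, +0.1),
    ]

    def weighted_sentiment(posts):
        acc = defaultdict(lambda: [0.0, 0.0])        # topic -> [weighted sum, weight]
        for topic, influence, sentiment in posts:
            acc[topic][0] += influence * sentiment
            acc[topic][1] += influence
        return {t: s / w for t, (s, w) in acc.items()}

    print(weighted_sentiment(posts))
    ```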

  9. Magic Carpet Shows Its Colors

    NASA Technical Reports Server (NTRS)

    2004-01-01

    The upper left image in this display is from the panoramic camera on the Mars Exploration Rover Spirit, showing the 'Magic Carpet' region near the rover at Gusev Crater, Mars, on Sol 7, the seventh martian day of its journey (Jan. 10, 2004). The lower image, also from the panoramic camera, is a monochrome (single filter) image of a rock in the 'Magic Carpet' area. Note that colored portions of the rock correlate with extracted spectra shown in the plot to the side. Four different types of materials are shown: the rock itself, the soil in front of the rock, some brighter soil on top of the rock, and some dust that has collected in small recesses on the rock face ('spots'). Each color on the spectra matches a line on the graph, showing how the panoramic camera's different colored filters are used to broadly assess the varying mineral compositions of martian rocks and soils.

  10. Uncertainty of natural tracer methods for quantifying river-aquifer interaction in a large river

    NASA Astrophysics Data System (ADS)

    Xie, Yueqing; Cook, Peter G.; Shanafield, Margaret; Simmons, Craig T.; Zheng, Chunmiao

    2016-04-01

    The quantification of river-aquifer interaction is critical to the conjunctive management of surface water and groundwater, in particular in arid and semiarid environments where potential evapotranspiration is much higher than precipitation. A variety of natural tracer methods are available to quantify river-aquifer interaction at different scales. These methods, however, have only been tested in rivers with relatively low flow rates (mostly less than 5 m3 s-1). In this study, several natural tracers including heat, radon-222 and electrical conductivity were measured both on vertical riverbed profiles and on longitudinal river samples to quantify river-aquifer exchange flux at both point and regional scales in the Heihe River (northwest China; flow rate 63 m3 s-1). Results show that the radon-222 profile method can estimate a narrower range of point-scale flux than the temperature profile method. In particular, three vertical radon-222 profiles failed to estimate the upper bounds of plausible flux ranges. Results also show that when quantifying regional-scale river-aquifer exchange flux, the river chemistry method constrained the flux (5.20-10.39 m2 d-1) better than the river temperature method (-100 to 100 m2 d-1). The river chemistry method also identified spatial variability of flux, whereas the river temperature method did not have sufficient resolution. Overall, for quantifying river-aquifer exchange flux in a large river, the temperature profile method and the radon-222 profile method provide useful complementary information at the point scale, whereas the river chemistry method is recommended over the river temperature method at the regional scale.

  11. Quantifying consistent individual differences in habitat selection.

    PubMed

    Leclerc, Martin; Vander Wal, Eric; Zedrosser, Andreas; Swenson, Jon E; Kindberg, Jonas; Pelletier, Fanie

    2016-03-01

    Habitat selection is a fundamental behaviour that links individuals to the resources required for survival and reproduction. Although natural selection acts on an individual's phenotype, research on habitat selection often pools inter-individual patterns to provide inferences on the population scale. Here, we expanded a traditional approach of quantifying habitat selection at the individual level to explore the potential for consistent individual differences of habitat selection. We used random coefficients in resource selection functions (RSFs) and repeatability estimates to test for variability in habitat selection. We applied our method to a detailed dataset of GPS relocations of brown bears (Ursus arctos) taken over a period of 6 years, and assessed whether they displayed repeatable individual differences in habitat selection toward two habitat types: bogs and recent timber-harvest cut blocks. In our analyses, we controlled for the availability of habitat, i.e. the functional response in habitat selection. Repeatability estimates of habitat selection toward bogs and cut blocks were 0.304 and 0.420, respectively. Therefore, 30.4 and 42.0 % of the population-scale habitat selection variability for bogs and cut blocks, respectively, was due to differences among individuals, suggesting that consistent individual variation in habitat selection exists in brown bears. Using simulations, we posit that repeatability values of habitat selection are not related to the value and significance of β estimates in RSFs. Although individual differences in habitat selection could be the results of non-exclusive factors, our results illustrate the evolutionary potential of habitat selection. PMID:26597548
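
    A minimal sketch of a repeatability calculation of the kind reported: given a habitat-selection coefficient estimated for each individual in each year, repeatability is the among-individual variance divided by the total variance. The variance components here come from a simple one-way ANOVA on made-up coefficients; the study itself estimated them with random coefficients in resource selection functions, which also control for habitat availability.

    ```python
    # Hedged sketch: ANOVA-based repeatability (intraclass correlation) of
    # per-individual, per-year selection coefficients.
    import numpy as np

    def repeatability(values_by_individual):
        groups = [np.asarray(v, dtype=float) for v in values_by_individual]
        k = len(groups)
        n = np.array([g.size for g in groups])
        grand = np.concatenate(groups).mean()
        ms_among = sum(g.size * (g.mean() - grand) ** 2 for g in groups) / (k - 1)
        ms_within = sum(((g - g.mean()) ** 2).sum() for g in groups) / (n.sum() - k)
        n0 = (n.sum() - (n ** 2).sum() / n.sum()) / (k - 1)   # mean group-size correction
        var_among = (ms_among - ms_within) / n0
        return var_among / (var_among + ms_within)

    # Hypothetical yearly selection coefficients for bogs, one list per bear.
    coefs = [[-0.8, -0.6, -0.7], [0.2, 0.4, 0.1], [-0.1, 0.0, -0.2]]
    print(round(repeatability(coefs), 3))
    ```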

  12. Quantifying the curvilinear metabolic scaling in mammals.

    PubMed

    Packard, Gary C

    2015-10-01

    A perplexing problem confronting students of metabolic allometry concerns the convex curvature that seemingly occurs in log-log plots of basal metabolic rate (BMR) vs. body mass in mammals. This putative curvilinearity has typically been interpreted in the context of a simple power function, Y = a*X^b, on the arithmetic scale, with the allometric exponent, b, supposedly increasing steadily as a dependent function of body size. The relationship can be quantified in arithmetic domain by exponentiating a quadratic equation fitted to logarithmic transformations of the original data, but the resulting model is not in the form of a power function and it is unlikely to describe accurately the pattern in the original distribution. I therefore re-examined a dataset for 636 species of mammal and discovered that the relationship between BMR and body mass is well-described by a power function with an explicit, non-zero intercept and lognormal, heteroscedastic error. The model has an invariant allometric exponent of 0.75, so the appearance in prior investigations of a steadily increasing exponent probably was an aberration resulting from undue reliance on logarithmic transformations to estimate statistical models in arithmetic domain. Theoretical constructs relating BMR to body mass in mammals may need to be modified to accommodate a positive intercept in the statistical model, but they do not need to be revised, or rejected, at the present time on grounds that the allometric exponent varies with body size. New data from planned experiments will be needed to confirm any hypothesis based on data currently available. PMID:26173580
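
    A minimal sketch of fitting a power function with an explicit non-zero intercept, y = c + a*x^b, directly in arithmetic domain rather than regressing log(y) on log(x). The synthetic data and the plain least-squares fit are illustrative; the paper additionally models lognormal, heteroscedastic error, which this sketch ignores.

    ```python
    # Hedged sketch: nonlinear fit of a power function with intercept.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_with_intercept(x, a, b, c):
        return c + a * np.power(x, b)

    rng = np.random.default_rng(1)
    mass = np.logspace(0, 6, 200)                        # body mass, synthetic
    bmr = 2.0 + 0.05 * mass ** 0.75
    bmr *= np.exp(rng.normal(0.0, 0.1, mass.size))       # multiplicative noise

    params, _ = curve_fit(power_with_intercept, mass, bmr, p0=(0.1, 0.7, 1.0))
    print("a=%.3f  b=%.3f  c=%.3f" % tuple(params))
    ```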

  13. ENVITEC shows off air technologies

    SciTech Connect

    McIlvaine, R.W.

    1995-08-01

    The ENVITEC International Trade Fair for Environmental Protection and Waste Management Technologies, held in June in Duesseldorf, Germany, is the largest air pollution exhibition in the world and may be the largest environmental technology show overall. Visitors saw thousands of environmental solutions from 1,318 companies representing 29 countries and occupying roughly 43,000 square meters of exhibit space. Many innovations were displayed under the category "thermal treatment of air pollutants." New technologies include the following: regenerative thermal oxidizers; wet systems for removing pollutants; biological scrubbers; electrostatic precipitators; selective adsorption systems; activated-coke adsorbers; optimization of scrubber systems; and air pollution monitors.

  14. Quantifying forest mortality with the remote sensing of snow

    NASA Astrophysics Data System (ADS)

    Baker, Emily Hewitt

    Greenhouse gas emissions have altered global climate significantly, increasing the frequency of drought, fire, and pest-related mortality in forests across the western United States, with increasing area affected each year. Associated changes in forests are of great concern for the public, land managers, and the broader scientific community. These increased stresses have resulted in a widespread, spatially heterogeneous decline of forest canopies, which in turn exerts strong controls on the accumulation and melt of the snowpack, and changes forest-atmosphere exchanges of carbon, water, and energy. Most satellite-based retrievals of summer-season forest data are insufficient to quantify canopy, as opposed to the combination of canopy and undergrowth, since the signals of the two types of vegetation greenness have proven persistently difficult to distinguish. To overcome this issue, this research develops a method to quantify forest canopy cover using winter-season fractional snow covered area (FSCA) data from NASA's Moderate Resolution Imaging Spectroradiometer (MODIS) snow covered area and grain size (MODSCAG) algorithm. In areas where the ground surface and undergrowth are completely snow-covered, a pixel comprises only forest canopy and snow. Following a snowfall event, FSCA initially rises, as snow is intercepted in the canopy, and then falls, as snow unloads. A select set of local minima in a winter FSCA time series form a threshold where the canopy is snow-free but the forest understory is snow-covered. This serves as a spatially explicit measurement of forest canopy, and viewable gap fraction (VGF), on a yearly basis. Using this method, we determine that MODIS-observed VGF is significantly correlated with an independent product of yearly crown mortality derived from spectral analysis of Landsat imagery at 25 high-mortality sites in northern Colorado (r = 0.96 +/- 0.03, p = 0.03). Additionally, we determine the lag timing between green-stage tree mortality and
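
    A minimal sketch of the thresholding idea: pick out local minima in a winter FSCA time series, where the canopy has shed its snow but the ground is still snow-covered, and treat the lowest of them as an estimate of the viewable gap fraction. The toy series and the rule of averaging the lowest few minima are illustrative stand-ins for the paper's selection criteria.

    ```python
    # Hedged sketch: estimate viewable gap fraction (VGF) from FSCA minima.
    import numpy as np
    from scipy.signal import find_peaks

    def estimate_vgf(fsca, n_minima=3):
        fsca = np.asarray(fsca, dtype=float)
        minima_idx, _ = find_peaks(-fsca)          # indices of local minima
        candidates = np.sort(fsca[minima_idx])
        return candidates[:n_minima].mean()

    fsca = [0.95, 0.90, 0.72, 0.88, 0.93, 0.70, 0.69, 0.86, 0.91, 0.74, 0.92]
    print(round(estimate_vgf(fsca), 2))            # ~0.72 as the winter VGF estimate
    ```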

  15. Quantifying and Communicating Uncertainty in Preclinical Human Dose-Prediction

    PubMed Central

    Sundqvist, M; Lundahl, A; Någård, MB; Bredberg, U; Gennemark, P

    2015-01-01

    Human dose-prediction is fundamental for ranking lead-optimization compounds in drug discovery and to inform design of early clinical trials. This tutorial describes how uncertainty in such predictions can be quantified and efficiently communicated to facilitate decision-making. Using three drug-discovery case studies, we show how several uncertain pieces of input information can be integrated into one single uncomplicated plot with key predictions, including their uncertainties, for many compounds or for many scenarios, or both. PMID:26225248

  16. ShowMe3D

    Energy Science and Technology Software Center (ESTSC)

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  17. ShowMe3D

    SciTech Connect

    Sinclair, Michael B

    2012-01-05

    ShowMe3D is a data visualization graphical user interface specifically designed for use with hyperspectral image obtained from the Hyperspectral Confocal Microscope. The program allows the user to select and display any single image from a three dimensional hyperspectral image stack. By moving a slider control, the user can easily move between images of the stack. The user can zoom into any region of the image. The user can select any pixel or region from the displayed image and display the fluorescence spectrum associated with that pixel or region. The user can define up to 3 spectral filters to apply to the hyperspectral image and view the image as it would appear from a filter-based confocal microscope. The user can also obtain statistics such as intensity average and variance from selected regions.

  18. Quantifying the reheating temperature of the universe

    NASA Astrophysics Data System (ADS)

    Mazumdar, Anupam; Zaldívar, Bryan

    2014-09-01

    The aim of this paper is to determine an exact definition of the reheat temperature for a generic perturbative decay of the inflaton. In order to estimate the reheat temperature, there are two important conditions one needs to satisfy: (a) the decay products of the inflaton must dominate the energy density of the universe, i.e. the universe becomes completely radiation dominated, and (b) the decay products of the inflaton have attained local thermodynamical equilibrium. For some choices of parameters, the latter is a more stringent condition, such that the decay products may thermalise much after the beginning of radiation-domination. Consequently, we have obtained that the reheat temperature can be much lower than the standard-lore estimation. In this paper we describe under what conditions our universe could have efficient or inefficient thermalisation, and quantify the reheat temperature for both the scenarios. This result has an immediate impact on many applications which rely on the thermal history of the universe, in particular gravitino abundance. Instant thermalisation: when the inflaton decay products instantly thermalise upon decay. Efficient thermalisation: when the inflaton decay products thermalise right at the instant when radiation epoch starts dominating the universe. Delayed thermalisation: when the inflaton decay products thermalise deep inside the radiation dominated epoch after the transition from inflaton-to-radiation domination had occurred. This paper is organised as follows. In Section 2 we set the stage and write down the relevant equations for our analysis. The standard lore about the reheating epoch is briefly commented in Section 3. Section 4 is devoted to present our analysis, in which we study the conditions under which the plasma attains thermalisation. Later on, in Section 5 we discuss the concept of reheat temperature such as to properly capture the issues of thermalisation. Finally, we conclude in Section 6.
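
    For reference, a minimal sketch of the standard-lore estimate that the paper refines: set the Hubble rate equal to the inflaton decay rate, H = Γ, assume instant thermalisation, and equate 3Γ²M_pl² (reduced Planck mass) to the radiation energy density (π²/30)g_*T⁴. The decay rate and g_* values below are illustrative; the paper's point is that delayed thermalisation can push the true reheat temperature well below this number.

    ```python
    # Hedged sketch of the standard-lore reheat temperature estimate.
    import math

    M_PL = 2.435e18          # reduced Planck mass in GeV

    def t_reheat_standard(gamma_gev, g_star=106.75):
        return (90.0 / (math.pi ** 2 * g_star)) ** 0.25 * math.sqrt(gamma_gev * M_PL)

    print(f"T_reh ~ {t_reheat_standard(1.0e-18):.2e} GeV")   # for Gamma = 1e-18 GeV
    ```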

  19. Taking the high (or low) road: a quantifier priming perspective on basic anchoring effects.

    PubMed

    Sleeth-Keppler, David

    2013-01-01

    Current explanations of basic anchoring effects, defined as the influence of an arbitrary number standard on an uncertain judgment, confound numerical values with vague quantifiers. I show that the consideration of numerical anchors may bias subsequent judgments primarily through the priming of quantifiers, rather than the numbers themselves. Study 1 varied the target of a numerical comparison judgment in a between-participants design, while holding the numerical anchor value constant. This design yielded an anchoring effect consistent with a quantifier priming hypothesis. Study 2 included a direct manipulation of vague quantifiers in the traditional anchoring paradigm. Finally, Study 3 examined the notion that specific associations between quantifiers, reflecting values on separate judgmental dimensions (i.e., the price and height of a target) can affect the direction of anchoring effects. Discussion focuses on the nature of vague quantifier priming in numerically anchored judgments. PMID:23951950

  20. Quantifying singlet fission in novel organic materials using nonlinear optics

    NASA Astrophysics Data System (ADS)

    Busby, Erik; Xia, Jianlong; Yaffe, Omer; Kumar, Bharat; Berkelbach, Timothy; Wu, Qin; Miller, John; Nuckolls, Colin; Zhu, Xiaoyang; Reichman, David; Campos, Luis; Sfeir, Matthew Y.

    2014-10-01

    Singlet fission is a form of multiple exciton generation in which two triplet excitons are produced from the decay of a photoexcited singlet exciton. In a small number of organic materials, most notably pentacene, this conversion process has been shown to occur with unity quantum yield on sub-ps timescales. However, a poorly understood mechanism for fission along with strict energy and geometry requirements have so far limited the observation of this process to a few classes of organic materials, with only a subset of these (most notably the polyacenes) showing both efficient fission and long-lived triplets. Here, we utilize novel organic materials to investigate how the efficiency of the fission process depends on the coupling and the energetic driving force between chromophores in both intra- and intermolecular singlet fission materials. We demonstrate how the triplet yield can be accurately quantified using a combination of traditional transient spectroscopies and recently developed excited state saturable absorption techniques. These results allow us to gain mechanistic insight into the fission process and suggest general strategies for generating new materials that can undergo efficient fission.

  1. Quantifying subsurface mixing of groundwater from lowland stream perspective.

    NASA Astrophysics Data System (ADS)

    van der Velde, Ype; Torfs, Paul; van der Zee, Sjoerd; Uijlenhoet, Remko

    2013-04-01

    The distribution of time it takes water from the moment of precipitation to reach the catchment outlet is widely used as a characteristic for catchment discharge behaviour, catchment vulnerability to pollution spreading and pollutant loads from catchments to downstream waters. However, this distribution tends to vary in time, driven by variability in precipitation and evapotranspiration. Subsurface mixing controls to what extent dynamics in rainfall and evapotranspiration are translated into dynamics of travel time distributions. This insight into the hydrologic functioning of catchments requires new definitions and concepts that link dynamics of catchment travel time distributions to the degree of subsurface mixing. In this presentation we propose the concept of STorage Outflow Probability (STOP) functions, which quantify the probability of water parcels stored in a catchment to leave this catchment by discharge or evapotranspiration. We will show how STOPs relate to the topography and subsurface and how they can be used for deriving time-varying travel time distributions of a catchment. The presented analyses will combine a unique dataset of high-frequency discharge and nitrate concentration measurements with results of a spatially distributed groundwater model and conceptual models of water flow and solute transport. Remarkable findings are the large contrasts in discharge behaviour, expressed in travel time, between lowland and sloping catchments and the strong relationship between evapotranspiration and stream water nutrient concentration dynamics.

  2. Toward quantifying the effectiveness of water trading under uncertainty.

    PubMed

    Luo, B; Huang, G H; Zou, Y; Yin, Y Y

    2007-04-01

    This paper presents a methodology for quantifying the effectiveness of water-trading under uncertainty, by developing an optimization model based on the interval-parameter two-stage stochastic program (TSP) technique. In the study, the effectiveness of a water-trading program is measured by the water volume that can be released through trading from a statistical point of view. The methodology can also deal with recourse water allocation problems generated by randomness in water availability and, at the same time, tackle uncertainties expressed as intervals in the trading system. The developed methodology was tested with a hypothetical water-trading program in an agricultural system in the Swift Current Creek watershed, Canada. Study results indicate that the methodology can effectively measure the effectiveness of a trading program through estimating the water volume being released through trading in a long-term view. A sensitivity analysis was also conducted to analyze the effects of different trading costs on the trading program. It shows that the trading efforts would become ineffective when the trading costs are too high. The case study also demonstrates that the trading program is more effective in a dry season when total water availability is in shortage. PMID:16624478

  3. Microfluidic experiments to quantify microbes encountering oil water interfaces

    NASA Astrophysics Data System (ADS)

    Sheng, Jian; Jalali, Maryam; Molaei, Mehdi

    2015-11-01

    It is known that marine microbes are one of the agents of biodegradation of crude oil. Biodegradation of crude oil is initiated by microbes encountering a droplet. To elucidate the key processes involved when bacteria encounter rising oil droplets, we have established microfluidic devices with hydrophilic surfaces to create micro oil droplets of controlled sizes. To quantify the effect of bacterial motility on the encounter rate, we used high-speed microscopy to simultaneously track motile bacteria and solid particles of equivalent size encountering oil droplets. The results show that in the advection-dominant regime, where the droplet size and rising velocity are large, bacterial motility plays no role in the encounter rate; however, in the diffusion-dominant regime, where the swimming velocity of the cells is comparable with the rising velocity and the Peclet number of the particles is small, motility of the cells increases their encounter rate. Ongoing analysis focuses on developing a mathematical model to predict the encounter rate of the cells based on their size, swimming speed, and dispersion rate, and on the size of the oil droplets. GoMRI.

  4. Statistical physics approach to quantifying differences in myelinated nerve fibers

    PubMed Central

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-01-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross–sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum. PMID:24676146

  5. Statistical physics approach to quantifying differences in myelinated nerve fibers

    NASA Astrophysics Data System (ADS)

    Comin, César H.; Santos, João R.; Corradini, Dario; Morrison, Will; Curme, Chester; Rosene, Douglas L.; Gabrielli, Andrea; da F. Costa, Luciano; Stanley, H. Eugene

    2014-03-01

    We present a new method to quantify differences in myelinated nerve fibers. These differences range from morphologic characteristics of individual fibers to differences in macroscopic properties of collections of fibers. Our method uses statistical physics tools to improve on traditional measures, such as fiber size and packing density. As a case study, we analyze cross-sectional electron micrographs from the fornix of young and old rhesus monkeys using a semi-automatic detection algorithm to identify and characterize myelinated axons. We then apply a feature selection approach to identify the features that best distinguish between the young and old age groups, achieving a maximum accuracy of 94% when assigning samples to their age groups. This analysis shows that the best discrimination is obtained using the combination of two features: the fraction of occupied axon area and the effective local density. The latter is a modified calculation of axon density, which reflects how closely axons are packed. Our feature analysis approach can be applied to characterize differences that result from biological processes such as aging, damage from trauma or disease or developmental differences, as well as differences between anatomical regions such as the fornix and the cingulum bundle or corpus callosum.

  6. Casimir experiments showing saturation effects

    SciTech Connect

    Sernelius, Bo E.

    2009-10-15

    We address several different Casimir experiments where theory and experiment disagree. First out is the classical Casimir force measurement between two metal half spaces; here both in the form of the torsion pendulum experiment by Lamoreaux and in the form of the Casimir pressure measurement between a gold sphere and a gold plate as performed by Decca et al.; theory predicts a large negative thermal correction, absent in the high precision experiments. The third experiment is the measurement of the Casimir force between a metal plate and a laser irradiated semiconductor membrane as performed by Chen et al.; the change in force with laser intensity is larger than predicted by theory. The fourth experiment is the measurement of the Casimir force between an atom and a wall in the form of the measurement by Obrecht et al. of the change in oscillation frequency of a {sup 87}Rb Bose-Einstein condensate trapped to a fused silica wall; the change is smaller than predicted by theory. We show that saturation effects can explain the discrepancies between theory and experiment observed in all these cases.

  7. Quantifying metastatic inefficiency: rare genotypes versus rare dynamics

    NASA Astrophysics Data System (ADS)

    Cisneros, Luis H.; Newman, Timothy J.

    2014-08-01

    We introduce and solve a ‘null model’ of stochastic metastatic colonization. The model is described by a single parameter θ: the ratio of the rate of cell division to the rate of cell death for a disseminated tumour cell in a given secondary tissue environment. We are primarily interested in the case in which colonizing cells are poorly adapted for proliferation in the local tissue environment, so that cell death is more likely than cell division, i.e. θ < 1. We quantify the rare event statistics for the successful establishment of a metastatic colony of size N. For N ≫ 1, we find that the probability of establishment is exponentially rare, as expected, and yet the mean time for such rare events is of the form ~ log(N)/(1 − θ) while the standard deviation of colonization times is ~ 1/(1 − θ). Thus, counter to naive expectation, for θ < 1, the average time for establishment of successful metastatic colonies decreases with decreasing cell fitness, and colonies seeded from lower fitness cells show less stochastic variation in their growth. These results indicate that metastatic growth from poorly adapted cells is rare, exponentially explosive and essentially deterministic. These statements are brought into sharper focus by the finding that the temporal statistics of the early stages of metastatic colonization from low-fitness cells (θ < 1) are statistically indistinguishable from those initiated from high-fitness cells (θ > 1), i.e. the statistics show a duality mapping (1 − θ) → (θ − 1). We conclude our analysis with a study of heterogeneity in the fitness of colonising cells, and describe a phase diagram delineating parameter regions in which metastatic colonization is dominated either by low or high fitness cells, showing that both are plausible given our current knowledge of physiological conditions in human cancer.
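
    A minimal Gillespie-style simulation of a linear birth-death process matching the model just described: each cell divides at rate θ and dies at rate 1, starting from a single disseminated cell, and we tally how often a colony of size N is founded and when. The parameter values (θ = 0.8, N = 20) and the trial count are illustrative.

    ```python
    # Hedged sketch: estimate the rare colonization probability and the timing
    # statistics of successful colonies for the birth-death 'null model'.
    import numpy as np

    def colonize_once(theta, N, rng):
        n, t = 1, 0.0
        while 0 < n < N:
            total_rate = n * (theta + 1.0)            # n*theta births + n*1 deaths
            t += rng.exponential(1.0 / total_rate)
            n += 1 if rng.random() < theta / (theta + 1.0) else -1
        return n == N, t

    def colonization_stats(theta=0.8, N=20, trials=200_000, seed=0):
        rng = np.random.default_rng(seed)
        times = [t for ok, t in (colonize_once(theta, N, rng) for _ in range(trials)) if ok]
        return len(times) / trials, np.mean(times), np.std(times)

    p, mean_t, sd_t = colonization_stats()
    print(f"P(success)={p:.4f}, mean time={mean_t:.2f}, sd={sd_t:.2f}")
    ```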

  8. Subtleties of Hidden Quantifiers in Implication

    ERIC Educational Resources Information Center

    Shipman, Barbara A.

    2016-01-01

    Mathematical conjectures and theorems are most often of the form P(x) ? Q(x), meaning ?x,P(x) ? Q(x). The hidden quantifier ?x is crucial in understanding the implication as a statement with a truth value. Here P(x) and Q(x) alone are only predicates, without truth values, since they contain unquantified variables. But standard textbook…

  9. Quantifying the Thermal Fatigue of CPV Modules

    SciTech Connect

    Bosco, N.; Kurtz, S.

    2011-02-01

    A method is presented to quantify thermal fatigue in the CPV die-attach from meteorological data. A comparative study between cities demonstrates a significant difference in the accumulated damage. These differences are most sensitive to the number of larger thermal cycles (ΔT) experienced at a location. High frequency data (<1/min) may be required to most accurately employ this method.
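
    A minimal sketch of how such a site comparison might be set up: derive a cycle depth ΔT per day from a module-temperature series and accumulate Coffin-Manson-style damage proportional to the sum of ΔT^m. The exponent, the synthetic temperature traces and the one-cycle-per-day simplification are illustrative assumptions, not the paper's procedure, which would count sub-daily cycles from measured data.

    ```python
    # Hedged sketch: relative accumulated thermal-fatigue damage at two sites.
    import numpy as np

    def daily_delta_t(temps, samples_per_day):
        days = np.asarray(temps, dtype=float).reshape(-1, samples_per_day)
        return days.max(axis=1) - days.min(axis=1)

    def relative_damage(temps, samples_per_day=24, m=4.0):
        # Coffin-Manson-style weighting: damage per cycle proportional to dT**m
        return float(np.sum(daily_delta_t(temps, samples_per_day) ** m))

    hours = np.linspace(0, 365 * 2 * np.pi, 365 * 24)     # one synthetic year, hourly
    site_a = 25 + 20 * np.clip(np.sin(hours), 0, None)    # milder daily cycles
    site_b = 25 + 30 * np.clip(np.sin(hours), 0, None)    # deeper daily cycles
    print(relative_damage(site_b) / relative_damage(site_a))   # damage ratio
    ```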

  10. Quantifying the Reuse of Learning Objects

    ERIC Educational Resources Information Center

    Elliott, Kristine; Sweeney, Kevin

    2008-01-01

    This paper reports the findings of one case study from a larger project, which aims to quantify the claimed efficiencies of reusing learning objects to develop e-learning resources. The case study describes how an online inquiry project "Diabetes: A waste of energy" was developed by searching for, evaluating, modifying and then integrating as many…

  11. Classifying and quantifying basins of attraction

    SciTech Connect

    Sprott, J. C.; Xiong, Anda

    2015-08-15

    A scheme is proposed to classify the basins for attractors of dynamical systems in arbitrary dimensions. There are four basic classes depending on their size and extent, and each class can be further quantified to facilitate comparisons. The calculation uses a Monte Carlo method and is applied to numerous common dissipative chaotic maps and flows in various dimensions.
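
    A minimal sketch of the Monte Carlo ingredient: sample initial conditions uniformly from a box, iterate the map, and record the fraction whose orbits remain bounded (i.e. lie in the basin of the attractor) rather than escaping. The Hénon map, the sampling box and the escape threshold are illustrative; the paper's classification additionally examines how this fraction scales with the size of the sampled region.

    ```python
    # Hedged sketch: Monte Carlo estimate of the basin fraction of the Henon map.
    import numpy as np

    def henon_stays_bounded(x, y, a=1.4, b=0.3, n_iter=300, bound=100.0):
        for _ in range(n_iter):
            x, y = 1.0 - a * x * x + y, b * x
            if abs(x) > bound or abs(y) > bound:
                return False
        return True

    def basin_fraction(n_samples=5000, box=2.0, seed=0):
        rng = np.random.default_rng(seed)
        pts = rng.uniform(-box, box, size=(n_samples, 2))
        hits = sum(henon_stays_bounded(x, y) for x, y in pts)
        return hits / n_samples

    print(basin_fraction())    # fraction of the box lying in the basin of attraction
    ```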

  12. Quantify Prostate Cancer by Automated Histomorphometry

    NASA Astrophysics Data System (ADS)

    Braumann, Ulf-Dietrich; Kuska, Jens-Peer; Löffler, Markus; Wernert, Nicolas

    A new method is presented to quantify malignant changes in histological sections of prostate tissue immunohistochemically stained for prostate-specific antigen (PSA) by means of image processing. The morphological analysis of the prostate tissue uses the solidity of PSA-positive prostate tissue segments to compute a quantitative measure that turns out to be highly correlated with scores obtained from routine diagnosis (Gleason, Dhom).
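
    A minimal sketch of the shape measure named above: the solidity of a segmented region is its area divided by the area of its convex hull, computed here with scikit-image on a synthetic binary mask. How the PSA-positive segments are obtained and aggregated into the diagnostic measure is specific to the paper and not reproduced here.

    ```python
    # Hedged sketch: solidity (area / convex-hull area) of a segmented region.
    import numpy as np
    from skimage.measure import label, regionprops

    mask = np.zeros((60, 60), dtype=bool)
    mask[10:50, 10:50] = True          # a square region...
    mask[20:40, 20:40] = False         # ...with a hole, so its solidity drops below 1

    for r in regionprops(label(mask)):
        print(f"area={r.area}, solidity={r.solidity:.3f}")
    ```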

  13. Protocol comparison for quantifying in situ mineralization

    Technology Transfer Automated Retrieval System (TEKTRAN)

    In situ mineralization methods are intended to quantify mineralization under realistic environmental conditions. This study was conducted to compare soil moisture and temperature in intact soil cores contained in cylinders to those in adjacent bulk soil, and to compare the effect of two resin bag techniques...

  14. Quantifiable diagnosis of muscular dystrophies and neurogenic atrophies through network analysis

    PubMed Central

    2013-01-01

    Background The diagnosis of neuromuscular diseases is strongly based on the histological characterization of muscle biopsies. However, this morphological analysis is mostly a subjective process and difficult to quantify. We have tested if network science can provide a novel framework to extract useful information from muscle biopsies, developing a novel method that analyzes muscle samples in an objective, automated, fast and precise manner. Methods Our database consisted of 102 muscle biopsy images from 70 individuals (including controls, patients with neurogenic atrophies and patients with muscular dystrophies). We used this to develop a new method, Neuromuscular DIseases Computerized Image Analysis (NDICIA), that uses network science analysis to capture the defining signature of muscle biopsy images. NDICIA characterizes muscle tissues by representing each image as a network, with fibers serving as nodes and fiber contacts as links. Results After a ‘training’ phase with control and pathological biopsies, NDICIA was able to quantify the degree of pathology of each sample. We validated our method by comparing NDICIA quantification of the severity of muscular dystrophies with a pathologist’s evaluation of the degree of pathology, resulting in a strong correlation (R = 0.900, P <0.00001). Importantly, our approach can be used to quantify new images without the need for prior ‘training’. Therefore, we show that network science analysis captures the useful information contained in muscle biopsies, helping the diagnosis of muscular dystrophies and neurogenic atrophies. Conclusions Our novel network analysis approach will serve as a valuable tool for assessing the etiology of muscular dystrophies or neurogenic atrophies, and has the potential to quantify treatment outcomes in preclinical and clinical trials. PMID:23514382
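
    A minimal sketch of the network representation described: each muscle fibre becomes a node and fibres in contact share an edge, after which standard graph statistics can feed a classifier. The contact rule used here (centroid distance below the sum of equivalent radii plus a tolerance) and the coordinates are illustrative stand-ins for NDICIA's image-based contact detection.

    ```python
    # Hedged sketch: build a fibre-contact graph and report simple statistics.
    import itertools
    import networkx as nx
    import numpy as np

    def fibre_network(centroids, radii, tolerance=2.0):
        g = nx.Graph()
        for i, (c, r) in enumerate(zip(centroids, radii)):
            g.add_node(i, centroid=c, radius=r)
        for i, j in itertools.combinations(range(len(centroids)), 2):
            dist = np.linalg.norm(np.subtract(centroids[i], centroids[j]))
            if dist <= radii[i] + radii[j] + tolerance:
                g.add_edge(i, j)
        return g

    centroids = [(0, 0), (20, 0), (40, 5), (10, 30)]   # hypothetical fibre centres (um)
    radii = [9, 10, 11, 8]
    g = fibre_network(centroids, radii)
    print(g.number_of_edges(), nx.average_clustering(g), sorted(d for _, d in g.degree()))
    ```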

  15. Mimas Showing False Colors #1

    NASA Technical Reports Server (NTRS)

    2005-01-01

    False color images of Saturn's moon, Mimas, reveal variation in either the composition or texture across its surface.

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    The image at the left is a narrow angle clear-filter image, which was separately processed to enhance the contrast in brightness and sharpness of visible features. The image at the right is a color composite of narrow-angle ultraviolet, green, infrared and clear filter images, which have been specially processed to accentuate subtle changes in the spectral properties of Mimas' surface materials. To create this view, three color images (ultraviolet, green and infrared) were combined into a single black and white picture that isolates and maps regional color differences. This 'color map' was then superimposed over the clear-filter image at the left.

    The combination of color map and brightness image shows how the color differences across the Mimas surface materials are tied to geological features. Shades of blue and violet in the image at the right are used to identify surface materials that are bluer in color and have a weaker infrared brightness than average Mimas materials, which are represented by green.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of each image. The unusual bluer materials are seen to broadly surround Herschel crater. However, the bluer material is not uniformly distributed in and around the crater. Instead, it appears to be concentrated on the outside of the crater and more to the west than to the north or south. The origin of the color differences is not yet understood. It may represent ejecta material that was excavated from inside Mimas when the Herschel impact occurred. The bluer color of these materials may be caused by subtle differences in

  16. Entropy generation method to quantify thermal comfort

    NASA Technical Reports Server (NTRS)

    Boregowda, S. C.; Tiwari, S. N.; Chaturvedi, S. K.

    2001-01-01

    This paper presents a thermodynamic approach to assess the quality of human-thermal environment interaction and to quantify thermal comfort. The approach involves developing an entropy generation term by applying the second law of thermodynamics to the combined human-environment system. The entropy generation term combines human thermal physiological responses and thermal environmental variables to provide an objective measure of thermal comfort. The original concepts and definitions form the basis for establishing the mathematical relationship between thermal comfort and the entropy generation term. As a result of this logical and deterministic approach, an Objective Thermal Comfort Index (OTCI) is defined and established as a function of entropy generation. In order to verify the entropy-based thermal comfort model, human thermal physiological responses due to changes in ambient conditions are simulated using a well established and validated human thermal model developed at the Institute of Environmental Research of Kansas State University (KSU). The finite element based KSU human thermal computer model is utilized as a "Computational Environmental Chamber" to conduct a series of simulations examining human thermal responses to different environmental conditions. The outputs from the simulation, which include human thermal responses, together with the input data consisting of environmental conditions, are fed into the thermal comfort model. Continuous monitoring of thermal comfort in comfortable and extreme environmental conditions is demonstrated. The Objective Thermal Comfort values obtained from the entropy-based model are validated against regression-based Predicted Mean Vote (PMV) values; the PMV values are generated by using the corresponding air temperatures and vapor pressures from the computer simulation in the regression equation. The preliminary results indicate that the OTCI and PMV values correlate well under ideal conditions. However, an experimental study

  17. Quantifying Wetland Functions: A Case Study

    NASA Astrophysics Data System (ADS)

    Potter, K. W.; Rogers, J. S.; Hoffman, A. R.; Wu, C.; Hoopes, J. A.; Armstrong, D. E.

    2007-05-01

    Wetlands are reputed to reduce peak flows and improve water quality by trapping sediment and phosphorus. However, there are relatively few studies that quantify these wetland functions. This paper reports on a study of a 45-hectare wetland in southern Wisconsin. The wetland is traversed by a stream channel that drains a predominantly agricultural 17.4 km2 watershed. During the spring and summer of 2006, we collected stage data and water samples at stations upstream and downstream of the wetland, with the former accounting for 82% of the contributing area. Continuous measurements of water stage at these stations were used to construct a streamflow record. During storm events water samples were taken automatically at 2-hour intervals for the first 12 samples and 8-hour intervals for the next 12 samples. Samples were analyzed for total suspended solids, total phosphorus, and dissolved reactive phosphorus. Ten events were observed during the observation period; the two largest events were 1 to 2-year storms. One-dimensional unsteady flow routing was used to estimate the maximum extent of wetland inundation for each event. When normalized for flow volume, all peak flows were attenuated by the wetland, with the maximum attenuation corresponding to the intermediate events. The reduced attenuation of the larger events appears to be due to filling of storage, either due to antecedent conditions or the event itself. In the case of sediment, the amount leaving the wetland in the two largest storms, which accounted for 96% of the exported sediment during the period of observation, was twice the amount entering the wetland. The failure of the wetland to trap sediment is apparently due to the role of drainage ditches, which trap sediment during the wetland-filling phase and release it during drainage. The export of sediment during the largest events appears to result from remobilization of sediment deposited in the low-gradient stream channel during smaller events. This

  18. DOE: Quantifying the Value of Hydropower in the Electric Grid

    SciTech Connect

    2012-12-31

    The report summarizes research to Quantify the Value of Hydropower in the Electric Grid. This 3-year DOE study focused on defining the value of hydropower assets in a changing electric grid. Methods are described for valuation and planning of pumped storage and conventional hydropower. The project team conducted plant case studies, electric system modeling, market analysis, cost data gathering, and evaluations of operating strategies and constraints. Five other reports detailing these research results are available at the project website, www.epri.com/hydrogrid. With increasing deployment of wind and solar renewable generation, many owners, operators, and developers of hydropower have recognized the opportunity to provide more flexibility and ancillary services to the electric grid. To quantify the value of these services, this study focused on the Western Electric Coordinating Council region. A security-constrained, unit commitment and economic dispatch model was used to quantify the role of hydropower for several future energy scenarios up to 2020. This hourly production simulation considered transmission requirements to deliver energy, including future expansion plans. Both energy and ancillary service values were considered. Addressing specifically the quantification of pumped storage value, no single value stream dominated predicted plant contributions in various energy futures. Modeling confirmed that service value depends greatly on location and on competition with other available grid support resources. In this summary, ten different value streams related to hydropower are described. These fell into three categories: operational improvements, new technologies, and electricity market opportunities. Of these ten, the study was able to quantify a monetary value in six by applying both present-day and future scenarios for operating the electric grid. This study confirmed that hydropower resources across the United States contribute significantly to operation of the grid in terms

  19. Digital Optical Method to quantify the visual opacity of fugitive plumes

    NASA Astrophysics Data System (ADS)

    Du, Ke; Shi, Peng; Rood, Mark J.; Wang, Kai; Wang, Yang; Varma, Ravi M.

    2013-10-01

    Fugitive emissions of particulate matter (PM) raise public concerns due to their adverse impacts on human health and atmospheric visibility. Although the United States Environmental Protection Agency (USEPA) has not developed a standard method for quantifying the opacities of fugitive plumes, select states have developed human vision-based opacity methods for such applications. A digital photographic method, Digital Optical Method for fugitive plumes (DOMfugitive), is described herein for quantifying the opacities of fugitive plume emissions. Field campaigns were completed to evaluate this method by driving vehicles on unpaved roads to generate dust plumes. DOMfugitive was validated by performing simultaneous measurements using a co-located laser transmissometer. For 84% of the measurements, the individual absolute opacity difference values between the two methods were ≤15%. The average absolute opacity difference for all the measurements was 8.5%. The paired t-test showed no significant difference between the two methods at the 99% confidence level. Comparisons of wavelength-dependent opacities with grayscale opacities indicated that DOMfugitive was not sensitive to the wavelength in the visible spectrum evaluated during these field campaigns. These results encourage the development of a USEPA standard method for quantifying the opacities of fugitive PM plumes using digital photography, as an alternative to human-vision-based approaches.
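
    The published DOMfugitive algorithm is not reproduced in this abstract; the sketch below uses a simplified single-background contrast estimate of opacity from pixel intensities and then runs the paired t-test comparison against transmissometer readings. All pixel values and opacities are invented.

        # Simplified sketch (Python): contrast-based plume opacity from image pixel values,
        # plus the paired t-test comparison with a co-located transmissometer.
        # The single-background formula is a simplification, not the published DOM algorithm.
        import numpy as np
        from scipy import stats

        def opacity_from_pixels(i_plume, i_background):
            """Apparent opacity (%) of a plume viewed against a brighter background."""
            return 100.0 * (1.0 - i_plume / i_background)

        # Hypothetical pixel intensities for ten dust-plume events (plume vs adjacent sky)
        i_plume = np.array([180., 160., 140., 195., 130., 175., 150., 200., 135., 170.])
        i_sky = np.array([205., 200., 190., 215., 185., 205., 195., 218., 182., 203.])
        camera = opacity_from_pixels(i_plume, i_sky)

        # Hypothetical co-located transmissometer opacities for the same events
        transmissometer = camera + np.array([1.2, -2.0, 0.8, -1.5, 2.3, -0.7, 1.9, -0.4, -2.6, 1.1])

        diff = np.abs(camera - transmissometer)
        print(f"Mean absolute opacity difference: {diff.mean():.1f}%")
        print(f"Fraction of pairs within 15%: {(diff <= 15).mean():.0%}")

        t_stat, p_val = stats.ttest_rel(camera, transmissometer)
        print(f"Paired t-test: t = {t_stat:.2f}, p = {p_val:.3f}")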

  20. Quantifiers more or less quantify online: ERP evidence for partial incremental interpretation

    PubMed Central

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge (Farmers grow crops/worms as their primary source of income), Experiment 1 found larger N400s for atypical (worms) than typical objects (crops). Experiment 2 crossed object typicality with non-logical subject-noun phrase quantifiers (most, few). Off-line plausibility ratings exhibited the crossover interaction predicted by full quantifier interpretation: Most farmers grow crops and Few farmers grow worms were rated more plausible than Most farmers grow worms and Few farmers grow crops. Object N400s, although modulated in the expected direction, did not reverse. Experiment 3 replicated these findings with adverbial quantifiers (Farmers often/rarely grow crops/worms). Interpretation of quantifier expressions thus is neither fully immediate nor fully delayed. Furthermore, object atypicality was associated with a frontal slow positivity in few-type/rarely quantifier contexts, suggesting systematic processing differences among quantifier types. PMID:20640044

  1. Quantifying Coastal Change Patterns Using LIDAR

    NASA Astrophysics Data System (ADS)

    Tebbens, S. F.; Murray, A.; Ashton, A. D.

    2005-12-01

    Shorelines undergo continuous change, primarily in response to the action of waves. New technologies including LIDAR surveys are just beginning to reveal surprising shoreline behaviors over a range of space and time scales (e.g. List and Farris, 1999; Tebbens et al., 2002). This early stage of coastal physical science calls for further documentation and analysis of the range of phenomena involved. Wavelet analysis of the changes along the North Carolina Outer Banks, USA, over a single annual interval (Tebbens et al., 2002) quantified statistics including: 1) the amount of shoreline change as a function of alongshore length scale; 2) the distribution of the alongshore-lengths of contiguous zones of erosion and accretion; and 3) the distribution of the magnitudes of erosion and accretion occurring during a time interval. The statistics of the patterns of shoreline change varied among the different coastline segments measured. Because these shoreline segments have different orientations, they are affected by different effective wave climates. Analyses over other time intervals test whether the statistics and the variations from one coastline segment to another are robust. The work also tests a hypothesis and potential model for the main cause of these observed shoreline behaviors. The statistics describing the patterns of shoreline change vary as a function of regional wave climate, suggesting the hypothesis that these changes are driven chiefly by gradients in alongshore transport associated with subtle deviations from a smooth shoreline. Recent work has shown that when waves approach shore from deep water at relative angles greater than approximately 45°, shoreline perturbations grow, causing alongshore-heterogeneous shoreline changes on any scale at which perturbations exist (Ashton et al., 2001). Waves approaching from deep-water angles closer to shore-normal tend to smooth out the shoreline. The patterns of alongshore change over some extended time period will result
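
    Neither the wavelet family nor the software used is given in this abstract; the sketch below assumes PyWavelets and a 'db4' wavelet to decompose a synthetic alongshore shoreline-change profile and summarize change magnitude by length scale.

        # Sketch (Python): wavelet decomposition of an alongshore shoreline-change profile
        # to quantify change as a function of alongshore length scale.
        # PyWavelets, the 'db4' wavelet and the synthetic profile are assumptions.
        import numpy as np
        import pywt

        dx_m = 50.0                            # alongshore spacing of LIDAR transects (assumed)
        x = np.arange(2048) * dx_m
        rng = np.random.default_rng(0)
        # Synthetic change profile: broad erosion/accretion zones plus small-scale noise
        change_m = 5.0 * np.sin(2 * np.pi * x / 20000.0) + rng.normal(0.0, 1.0, x.size)

        coeffs = pywt.wavedec(change_m, 'db4', level=6)      # [cA6, cD6, ..., cD1]
        for j, detail in enumerate(coeffs[:0:-1], start=1):  # cD1 (finest) ... cD6 (coarsest)
            scale_m = dx_m * 2 ** j
            print(f"~{scale_m:6.0f} m scale: RMS change {np.sqrt(np.mean(detail**2)):.2f} m")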

  2. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1988-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center over the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach was developed to quantify the effects of the random uncertainties. The results indicate that only the variations in geometry have significant effects.
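
    The blade model itself is not described in this abstract; the hedged Monte Carlo sketch below propagates assumed geometric and material scatter through a placeholder response function, and separates the contributions of each input in the spirit of the study's finding that geometry dominates.

        # Hedged Monte Carlo sketch (Python) of propagating geometry and material-property
        # uncertainty into a structural response. The response function and scatter levels
        # are placeholders, not the NASA Lewis blade model.
        import numpy as np

        rng = np.random.default_rng(42)
        n = 20000

        thickness = rng.normal(loc=3.0e-3, scale=0.15e-3, size=n)   # blade thickness (m), ~5% scatter
        modulus = rng.normal(loc=110e9, scale=2.2e9, size=n)         # elastic modulus (Pa), ~2% scatter

        def tip_deflection(t, e, load=500.0, length=0.08, width=0.02):
            """Placeholder response: cantilever tip deflection under a static tip load."""
            inertia = width * t**3 / 12.0
            return load * length**3 / (3.0 * e * inertia)

        resp = tip_deflection(thickness, modulus)
        print(f"Mean deflection: {resp.mean()*1e3:.3f} mm, CoV: {resp.std()/resp.mean():.1%}")

        # Crude sensitivity check: scatter in the response when only one input is varied
        only_geom = tip_deflection(thickness, np.full(n, 110e9))
        only_mat = tip_deflection(np.full(n, 3.0e-3), modulus)
        print(f"CoV from geometry alone: {only_geom.std()/only_geom.mean():.1%}, "
              f"from material alone: {only_mat.std()/only_mat.mean():.1%}")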

  3. Quantifying Stock Return Distributions in Financial Markets

    PubMed Central

    Botta, Federico; Moat, Helen Susannah; Stanley, H. Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593
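
    A hedged sketch of the tail analysis follows: log returns at a fixed time scale and a simple Hill estimate of the power-law exponent. The price series is synthetic (Student-t innovations) and stands in for the second-by-second DJIA data analysed in the paper.

        # Sketch (Python): tail behaviour of log returns at one time scale, using a Hill
        # estimator for the power-law exponent. Data are synthetic, not the DJIA dataset.
        import numpy as np

        rng = np.random.default_rng(1)
        prices = 100.0 * np.exp(np.cumsum(rng.standard_t(df=3, size=200_000) * 1e-4))

        dt = 300                                        # time scale in "seconds" (samples)
        returns = np.diff(np.log(prices[::dt]))
        tail = np.sort(np.abs(returns))[::-1]           # largest absolute returns first

        k = 200                                         # number of tail observations used
        hill_alpha = k / np.sum(np.log(tail[:k] / tail[k]))
        print(f"Estimated tail exponent at dt={dt}: alpha ~ {hill_alpha:.2f}")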

  4. Quantifying Semantic Linguistic Maturity in Children.

    PubMed

    Hansson, Kristina; Bååth, Rasmus; Löhndorf, Simone; Sahlén, Birgitta; Sikström, Sverker

    2016-10-01

    We propose a method to quantify semantic linguistic maturity (SELMA) based on a high dimensional semantic representation of words created from the co-occurrence of words in a large text corpus. The method was applied to oral narratives from 108 children aged 4;0-12;10. By comparing the SELMA measure with maturity ratings made by human raters we found that SELMA predicted the rating of semantic maturity made by human raters over and above the prediction made using a child's age and number of words produced. We conclude that the semantic content of narratives changes in a predictable pattern with children's age and argue that SELMA is a measure quantifying semantic linguistic maturity. The study opens up the possibility of using quantitative measures for studying the development of semantic representation in children's narratives, and emphasizes the importance of word co-occurrences for understanding the development of meaning. PMID:26440529

  5. Portable device for quantifying parkinsonian wrist rigidity.

    PubMed

    Caligiuri, M P

    1994-01-01

    The need for objectivity in the assessment of parkinsonism prompted the development of a portable transducer capable of quantifying muscular rigidity. This paper describes the development and use of a device for measuring wrist rigidity and reports the preliminary findings from 25 normal healthy controls and 29 patients, many of whom were undergoing antiparkinsonian treatment to alleviate rigidity or antipsychotic treatment, which produced parkinsonian rigidity. An objective rigidity score, representing the degree to which motor activity increases muscular stiffness in the wrist, correlates highly with clinical ratings of parkinsonian rigidity and demonstrates 89% specificity and 82% sensitivity. Unlike previous techniques for quantifying rigidity, this transducer offers greater portability and apparent face validity. PMID:7908119

  6. Quantifying Stock Return Distributions in Financial Markets.

    PubMed

    Botta, Federico; Moat, Helen Susannah; Stanley, H Eugene; Preis, Tobias

    2015-01-01

    Being able to quantify the probability of large price changes in stock markets is of crucial importance in understanding financial crises that affect the lives of people worldwide. Large changes in stock market prices can arise abruptly, within a matter of minutes, or develop across much longer time scales. Here, we analyze a dataset comprising the stocks forming the Dow Jones Industrial Average at a second by second resolution in the period from January 2008 to July 2010 in order to quantify the distribution of changes in market prices at a range of time scales. We find that the tails of the distributions of logarithmic price changes, or returns, exhibit power law decays for time scales ranging from 300 seconds to 3600 seconds. For larger time scales, we find that the distributions' tails exhibit exponential decay. Our findings may inform the development of models of market behavior across varying time scales. PMID:26327593

  7. COMPLEXITY & APPROXIMABILITY OF QUANTIFIED & STOCHASTIC CONSTRAINT SATISFACTION PROBLEMS

    SciTech Connect

    Hunt, H. B.; Marathe, M. V.; Stearns, R. E.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S and T be arbitrary finite sets of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (respectively, SATc(S)). Here, we study simultaneously the complexity of decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. We present simple yet general techniques to characterize simultaneously the complexity or efficient approximability of a number of versions/variants of the problems SAT(S), Q-SAT(S), S-SAT(S), MAX-Q-SAT(S) etc., for many different such D, C, S, T. These versions/variants include decision, counting, maximization and approximate maximization problems, for unquantified, quantified and stochastically quantified formulas. Our unified approach is based on the following two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Some of the results extend earlier results in [Pa85, LMP99, CF+93, CF+94]. Our techniques and results reported here also provide significant steps towards obtaining dichotomy theorems for a number of the problems above, including the problems MAX-&-SAT(S) and MAX-S-SAT(S). The discovery of such dichotomy theorems, for unquantified formulas, has received significant recent attention in the literature [CF+93,CF+94,Cr95,KSW97
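
    To make the notion of a quantified satisfiability problem concrete, the toy sketch below evaluates a quantified Boolean formula by brute force; it is only an illustration of Q-SAT and is unrelated to the reduction and representability techniques developed in the report.

        # Toy illustration (Python): brute-force evaluation of a quantified Boolean formula.
        # This only makes the Q-SAT notion concrete; it is not the report's methodology.
        def eval_qbf(quantifiers, formula, assignment=()):
            """quantifiers: list of 'forall'/'exists'; formula: callable on a bool tuple."""
            if len(assignment) == len(quantifiers):
                return formula(assignment)
            branches = (eval_qbf(quantifiers, formula, assignment + (v,)) for v in (False, True))
            return all(branches) if quantifiers[len(assignment)] == 'forall' else any(branches)

        # For all x there exists y with (x XOR y): true
        print(eval_qbf(['forall', 'exists'], lambda a: a[0] != a[1]))
        # There exists x such that for all y, (x AND y): false
        print(eval_qbf(['exists', 'forall'], lambda a: a[0] and a[1]))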

  8. The Arizona Sun Corridor: Quantifying climatic implications of megapolitan development

    NASA Astrophysics Data System (ADS)

    Georgescu, M.; Moustaoui, M.; Mahalov, A.

    2010-12-01

    The local and regional-scale hydro-climatic impacts of land use and land cover change (LULCC) that result from urbanization require attention in light of future urban growth projections and related concerns for environmental sustainability. This is an especially serious issue over the southwestern U.S. where mounting pressure on the area’s natural desert environment and increasingly limited resources (e.g. water) exists, and is likely to worsen, due to unrelenting sprawl and associated urbanization. While previous modeling results have shown the degree to which the built environment has contributed to the region’s warming summertime climate, we use projections of future landscape change over the rapidly urbanizing Arizona Sun Corridor - an anticipated stretch of urban expanse that includes current metro Phoenix and Tucson - as surface boundary conditions to conduct high-resolution (order of 1-km) numerical simulations, over the seasonal timescale, to quantify the climatic effect of this relentlessly growing and increasingly vulnerable region. We use the latest version of the WRF modeling system to take advantage of several new capabilities, including a newly implemented nesting method used to refine the vertical mesh, and a comprehensive multi-story urban canopy scheme. We quantify the impact of the projected (circa 2050) Sun Corridor megapolitan area on further development of the urban heat island (UHI), assess changes in the surface energy budget, with important implications for the near surface temperature and stability, and discuss modeled impacts on regional rainfall. Lastly, simulated effects are compared with projected warming due to increasing greenhouse gases (the GCMs from which these results are obtained currently do not take into account effects of urbanizing regions), and we quantify the degree to which LULCC over the Arizona Sun Corridor will exacerbate regional anthropogenic climate change. A number of potential mitigation strategies are discussed

  9. A novel real time imaging platform to quantify macrophage phagocytosis.

    PubMed

    Kapellos, Theodore S; Taylor, Lewis; Lee, Heyne; Cowley, Sally A; James, William S; Iqbal, Asif J; Greaves, David R

    2016-09-15

    Phagocytosis of pathogens, apoptotic cells and debris is a key feature of macrophage function in host defense and tissue homeostasis. Quantification of macrophage phagocytosis in vitro has traditionally been technically challenging. Here we report the optimization and validation of the IncuCyte ZOOM® real time imaging platform for macrophage phagocytosis based on pHrodo® pathogen bioparticles, which only fluoresce when localized in the acidic environment of the phagolysosome. Image analysis and fluorescence quantification were performed with the automated IncuCyte™ Basic Software. Titration of the bioparticle number showed that the system is more sensitive than a spectrofluorometer, as it can detect phagocytosis when using 20× fewer E. coli bioparticles. We exemplified the power of this real time imaging platform by studying phagocytosis of murine alveolar, bone marrow and peritoneal macrophages. We further demonstrate the ability of this platform to study modulation of the phagocytic process, as pharmacological inhibitors of phagocytosis suppressed bioparticle uptake in a concentration-dependent manner, whereas opsonins augmented phagocytosis. We also investigated the effects of macrophage polarization on E. coli phagocytosis. Bone marrow-derived macrophage (BMDM) priming with M2 stimuli, such as IL-4 and IL-10, resulted in higher engulfment of bioparticles in comparison with M1 polarization. Moreover, we demonstrated that tolerization of BMDMs with lipopolysaccharide (LPS) results in impaired E. coli bioparticle phagocytosis. This novel real time assay will enable researchers to quantify macrophage phagocytosis with a higher degree of accuracy and sensitivity and will allow investigation of limited populations of primary phagocytes in vitro. PMID:27475716

  10. Quantifying thermal modifications on laser welded skin tissue

    NASA Astrophysics Data System (ADS)

    Tabakoglu, Hasim Ö.; Gülsoy, Murat

    2011-02-01

    Laser tissue welding is a potential medical treatment method, especially for closing cuts made during any kind of surgery. Photothermal effects of the laser on tissue should be quantified in order to determine optimal dosimetry parameters. Polarized light and phase contrast techniques reveal information about the extent of thermal change that occurred in the tissue during laser welding. Changes in collagen structure can be detected in skin samples stained with hematoxylin and eosin. In this study, three different near-infrared laser wavelengths (809 nm, 980 nm and 1070 nm) were compared for skin welding efficiency. 1 cm long cuts were treated by spot-by-spot laser application on Wistar rats' dorsal skin, in vivo. In all laser applications, 0.5 W of optical power was delivered to the tissue for 5 s continuously, resulting in 79.61 J/cm2 energy density (15.92 W/cm2 power density) for each spot. The 1st, 4th, 7th, 14th, and 21st days of the recovery period were designated as control days, and skin samples needed for histology were removed on these particular days. The stained samples were examined under a light microscope. Images were taken with a CCD camera and examined with imaging software. The 809 nm laser was found to be capable of creating strong full-thickness closure, but thermal damage was evident. The thermal damage from 980 nm laser welding was found to be more tolerable. The results showed that 1070 nm laser welding produced noticeably stronger bonds with minimal scar formation.

  11. Progress toward quantifying landscape-scale movement patterns of the glassy-winged sharpshooter and its natural enemies using a novel mark-capture technique

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Here we present the results of the first year of our research targeted at quantifying the landscape-level movement patterns of GWSS and its natural enemies. We showed that protein markers can be rapidly acquired and retained on insects for several weeks after marking directly in the field. Specifica...

  12. The logic in language: How all quantifiers are alike, but each quantifier is different.

    PubMed

    Feiman, Roman; Snedeker, Jesse

    2016-06-01

    Quantifier words like each, every, all and three are among the most abstract words in language. Unlike nouns, verbs and adjectives, the meanings of quantifiers are not related to a referent out in the world. Rather, quantifiers specify what relationships hold between the sets of entities, events and properties denoted by other words. When two quantifiers are in the same clause, they create a systematic ambiguity. "Every kid climbed a tree" could mean that there was only one tree, climbed by all, or many different trees, one per climbing kid. In the present study, participants chose a picture to indicate their preferred reading of different ambiguous sentences - those containing every, as well as the other three quantifiers. In Experiment 1, we found large systematic differences in preference, depending on the quantifier word. In Experiment 2, we then manipulated the choice of a particular reading of one sentence, and tested how this affected participants' reading preference on a subsequent target sentence. We found a priming effect for all quantifiers, but only when the prime and target sentences contained the same quantifier. For example, all-a sentences prime other all-a sentences, while each-a primes each-a, but sentences with each do not prime sentences with all or vice versa. In Experiment 3, we ask whether the lack of priming across quantifiers could be due to the two sentences sharing one fewer word. We find that changing the verb between the prime and target sentence does not reduce the priming effect. In Experiment 4, we discover one case where there is priming across quantifiers - when one number (e.g. three) is in the prime, and a different one (e.g. four) is in the target. We discuss how these findings relate to linguistic theories of quantifier meaning and what they tell us about the division of labor between conceptual content and combinatorial semantics, as well as the mental representations of quantification and of the abstract logical structure of

  13. Quantifying Error in the CMORPH Satellite Precipitation Estimates

    NASA Astrophysics Data System (ADS)

    Xu, B.; Yoo, S.; Xie, P.

    2010-12-01

    As part of the collaboration between the China Meteorological Administration (CMA) National Meteorological Information Centre (NMIC) and the NOAA Climate Prediction Center (CPC), a new system is being developed to construct an hourly precipitation analysis on a 0.25° lat/lon grid over China by merging information derived from gauge observations and CMORPH satellite precipitation estimates. Foundational to the development of the gauge-satellite merging algorithm is the definition of the systematic and random error inherent in the CMORPH satellite precipitation estimates. In this study, we quantify the CMORPH error structures through comparisons against a gauge-based analysis of hourly precipitation derived from station reports from a dense network over China. First, the systematic error (bias) of the CMORPH satellite estimates is examined against the co-located hourly gauge precipitation analysis over 0.25° lat/lon grid boxes with at least one reporting station. The CMORPH bias exhibits regional variations, with over-estimates over eastern China, and seasonal changes, with over-estimates during warm seasons and under-estimates during cold seasons. The CMORPH bias is also intensity-dependent: in general, the CMORPH tends to over-estimate weak rainfall and under-estimate strong rainfall. The bias, when expressed as the ratio between the gauge observations and the CMORPH satellite estimates, increases with the rainfall intensity but tends to saturate at a certain level for high rainfall. Based on the above results, a prototype algorithm is developed to remove the CMORPH bias by matching the PDF of the original CMORPH estimates against that of the gauge analysis, using data pairs co-located over grid boxes with at least one reporting gauge over a 30-day period ending at the target date. The spatial domain for collecting the co-located data pairs is expanded so that at least 5000 pairs of data are available to ensure statistical robustness. The bias-corrected CMORPH is then compared against the gauge data to quantify the
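
    A hedged sketch of the PDF-matching step follows: each satellite estimate is mapped onto the gauge value at the same empirical non-exceedance probability. The synthetic data simply mimic the "over-estimate weak / under-estimate strong rain" behaviour; they are not CMORPH or gauge values.

        # Sketch (Python) of PDF matching (quantile mapping) for bias removal.
        # The data are synthetic; only the matching logic follows the description above.
        import numpy as np

        rng = np.random.default_rng(7)
        gauge = rng.gamma(shape=0.6, scale=5.0, size=5000)   # hourly gauge analysis (mm)
        cmorph = 1.5 * gauge ** 0.8                          # synthetically biased satellite field

        probs = np.linspace(0.01, 0.99, 99)
        sat_q = np.quantile(cmorph, probs)                   # satellite quantiles
        gauge_q = np.quantile(gauge, probs)                  # gauge quantiles

        def pdf_match(x):
            """Quantile-map satellite values onto the gauge distribution."""
            return np.interp(x, sat_q, gauge_q)

        corrected = pdf_match(cmorph)
        print(f"Mean bias before: {np.mean(cmorph - gauge):+.2f} mm, "
              f"after: {np.mean(corrected - gauge):+.2f} mm")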

  14. Quantifying the Impact of Scenic Environments on Health

    PubMed Central

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-01-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of “scenicness” for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing. PMID:26603464

  15. Quantifying the Impact of Scenic Environments on Health

    NASA Astrophysics Data System (ADS)

    Seresinhe, Chanuki Illushka; Preis, Tobias; Moat, Helen Susannah

    2015-11-01

    Few people would deny an intuitive sense of increased wellbeing when spending time in beautiful locations. Here, we ask: can we quantify the relationship between environmental aesthetics and human health? We draw on data from Scenic-Or-Not, a website that crowdsources ratings of “scenicness” for geotagged photographs across Great Britain, in combination with data on citizen-reported health from the Census for England and Wales. We find that inhabitants of more scenic environments report better health, across urban, suburban and rural areas, even when taking core socioeconomic indicators of deprivation into account, such as income, employment and access to services. Our results provide evidence in line with the striking hypothesis that the aesthetics of the environment may have quantifiable consequences for our wellbeing.

  16. Quantifying variances in comparative RNA secondary structure prediction

    PubMed Central

    2013-01-01

    Background: With the advancement of next-generation sequencing and transcriptomics technologies, regulatory effects involving RNA, in particular RNA structural changes, are being detected. These results often rely on RNA secondary structure predictions. However, current approaches to RNA secondary structure modelling produce predictions with a high variance in predictive accuracy, and we have little quantifiable knowledge about the reasons for these variances. Results: In this paper we explore a number of factors which can contribute to poor RNA secondary structure prediction quality. We establish a quantified relationship between alignment quality and loss of accuracy. Furthermore, we define two new measures to quantify uncertainty in alignment-based structure predictions. One of the measures improves on the “reliability score” reported by PPfold, and considers alignment uncertainty as well as base-pair probabilities. The other measure considers the information entropy for SCFGs over a space of input alignments. Conclusions: Our measure of predictive accuracy improves on the PPfold reliability score. We can successfully characterize many of the underlying reasons for, and variances in, poor prediction. However, there is still variability unaccounted for, which we therefore suggest comes from the RNA secondary structure predictive model itself. PMID:23634662
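
    The abstract does not give the exact form of the uncertainty measures; the sketch below computes a simple positional Shannon entropy from a base-pairing probability matrix, which mirrors the general idea of entropy-based uncertainty but is not PPfold's implementation.

        # Illustrative uncertainty measure (Python): per-position Shannon entropy from a
        # base-pairing probability matrix. Not the PPfold reliability score itself.
        import numpy as np

        def positional_entropy(bpp):
            """bpp[i, j]: probability that bases i and j pair (symmetric, zero diagonal)."""
            n = bpp.shape[0]
            p_unpaired = np.clip(1.0 - bpp.sum(axis=1), 1e-12, 1.0)
            ent = np.zeros(n)
            for i in range(n):
                probs = np.concatenate((bpp[i][bpp[i] > 0], [p_unpaired[i]]))
                ent[i] = -np.sum(probs * np.log2(probs))
            return ent

        # Tiny synthetic example: positions 0/3 pair confidently, positions 1/2 are uncertain
        bpp = np.zeros((4, 4))
        bpp[0, 3] = bpp[3, 0] = 0.95
        bpp[1, 2] = bpp[2, 1] = 0.40
        ent = positional_entropy(bpp)
        print("Per-position entropy (bits):", np.round(ent, 3))
        print("Mean structural uncertainty:", round(float(ent.mean()), 3))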

  17. Quantifying the underlying landscape and paths of cancer

    PubMed Central

    Li, Chunhe; Wang, Jin

    2014-01-01

    Cancer is a disease regulated by the underlying gene networks. The emergence of normal and cancer states as well as the transformation between them can be thought of as a result of the gene network interactions and associated changes. We developed a global potential landscape and path framework to quantify cancer and associated processes. We constructed a cancer gene regulatory network based on the experimental evidences and uncovered the underlying landscape. The resulting tristable landscape characterizes important biological states: normal, cancer and apoptosis. The landscape topography in terms of barrier heights between stable state attractors quantifies the global stability of the cancer network system. We propose two mechanisms of cancerization: one is by the changes of landscape topography through the changes in regulation strengths of the gene networks. The other is by the fluctuations that help the system to go over the critical barrier at fixed landscape topography. The kinetic paths from least action principle quantify the transition processes among normal state, cancer state and apoptosis state. The kinetic rates provide the quantification of transition speeds among normal, cancer and apoptosis attractors. By the global sensitivity analysis of the gene network parameters on the landscape topography, we uncovered some key gene regulations determining the transitions between cancer and normal states. This can be used to guide the design of new anti-cancer tactics, through cocktail strategy of targeting multiple key regulation links simultaneously, for preventing cancer occurrence or transforming the early cancer state back to normal state. PMID:25232051

  18. Gains and Pitfalls of Quantifier Elimination as a Teaching Tool

    ERIC Educational Resources Information Center

    Oldenburg, Reinhard

    2015-01-01

    Quantifier Elimination is a procedure that allows simplification of logical formulas that contain quantifiers. Many mathematical concepts are defined in terms of quantifiers and especially in calculus their use has been identified as an obstacle in the learning process. The automatic deduction provided by quantifier elimination thus allows…

  19. Winter wren populations show adaptation to local climate

    PubMed Central

    Morrison, Catriona A.; Robinson, Robert A.; Pearce-Higgins, James W.

    2016-01-01

    Most studies of evolutionary responses to climate change have focused on phenological responses to warming, and provide only weak evidence for evolutionary adaptation. This could be because phenological changes are more weakly linked to fitness than more direct mechanisms of climate change impacts, such as selective mortality during extreme weather events which have immediate fitness consequences for the individuals involved. Studies examining these other mechanisms may be more likely to show evidence for evolutionary adaptation. To test this, we quantify regional population responses of a small resident passerine (winter wren Troglodytes troglodytes) to a measure of winter severity (number of frost days). Annual population growth rate was consistently negatively correlated with this measure, but the point at which different populations achieved stability (λ = 1) varied across regions and was closely correlated with the historic average number of frost days, providing strong evidence for local adaptation. Despite this, regional variation in abundance remained negatively related to the regional mean number of winter frost days, potentially as a result of a time-lag in the rate of evolutionary response to climate change. As expected from Bergmann's rule, individual wrens were heavier in colder regions, suggesting that local adaptation may be mediated through body size. However, there was no evidence for selective mortality of small individuals in cold years, with annual variation in mean body size uncorrelated with the number of winter frost days, so the extent to which local adaptation occurs through changes in body size, or another mechanism remains uncertain. PMID:27429782

  20. Winter wren populations show adaptation to local climate.

    PubMed

    Morrison, Catriona A; Robinson, Robert A; Pearce-Higgins, James W

    2016-06-01

    Most studies of evolutionary responses to climate change have focused on phenological responses to warming, and provide only weak evidence for evolutionary adaptation. This could be because phenological changes are more weakly linked to fitness than more direct mechanisms of climate change impacts, such as selective mortality during extreme weather events which have immediate fitness consequences for the individuals involved. Studies examining these other mechanisms may be more likely to show evidence for evolutionary adaptation. To test this, we quantify regional population responses of a small resident passerine (winter wren Troglodytes troglodytes) to a measure of winter severity (number of frost days). Annual population growth rate was consistently negatively correlated with this measure, but the point at which different populations achieved stability (λ = 1) varied across regions and was closely correlated with the historic average number of frost days, providing strong evidence for local adaptation. Despite this, regional variation in abundance remained negatively related to the regional mean number of winter frost days, potentially as a result of a time-lag in the rate of evolutionary response to climate change. As expected from Bergmann's rule, individual wrens were heavier in colder regions, suggesting that local adaptation may be mediated through body size. However, there was no evidence for selective mortality of small individuals in cold years, with annual variation in mean body size uncorrelated with the number of winter frost days, so the extent to which local adaptation occurs through changes in body size, or another mechanism remains uncertain. PMID:27429782

  1. Oxygen-enhanced MRI accurately identifies, quantifies, and maps tumor hypoxia in preclinical cancer models

    PubMed Central

    O’Connor, James PB; Boult, Jessica KR; Jamin, Yann; Babur, Muhammad; Finegan, Katherine G; Williams, Kaye J; Little, Ross A; Jackson, Alan; Parker, Geoff JM; Reynolds, Andrew R; Waterton, John C; Robinson, Simon P

    2015-01-01

    There is a clinical need for non-invasive biomarkers of tumor hypoxia for prognostic and predictive studies, radiotherapy planning and therapy monitoring. Oxygen enhanced MRI (OE-MRI) is an emerging imaging technique for quantifying the spatial distribution and extent of tumor oxygen delivery in vivo. In OE-MRI, the longitudinal relaxation rate of protons (ΔR1) changes in proportion to the concentration of molecular oxygen dissolved in plasma or interstitial tissue fluid. Therefore, well-oxygenated tissues show positive ΔR1. We hypothesized that the fraction of tumor tissue refractory to oxygen challenge (lack of positive ΔR1, termed “Oxy-R fraction”) would be a robust biomarker of hypoxia in models with varying vascular and hypoxic features. Here we demonstrate that OE-MRI signals are accurate, precise and sensitive to changes in tumor pO2 in highly vascular 786-0 renal cancer xenografts. Furthermore, we show that Oxy-R fraction can quantify the hypoxic fraction in multiple models with differing hypoxic and vascular phenotypes, when used in combination with measurements of tumor perfusion. Finally, Oxy-R fraction can detect dynamic changes in hypoxia induced by the vasomodulator agent hydralazine. In contrast, more conventional biomarkers of hypoxia (derived from blood oxygenation-level dependent MRI and dynamic contrast-enhanced MRI) did not relate to tumor hypoxia consistently. Our results show that the Oxy-R fraction accurately quantifies tumor hypoxia non-invasively and is immediately translatable to the clinic. PMID:26659574
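
    As a hedged illustration of the Oxy-R summary statistic described above, the sketch below computes the fraction of tumour voxels showing no positive ΔR1 on oxygen challenge from a synthetic ΔR1 map; the zero threshold and the data are illustrative, not the study's estimation procedure.

        # Sketch (Python) of an Oxy-R-style summary: fraction of tumour voxels with no
        # positive change in R1 (deltaR1) under oxygen challenge. Synthetic data only.
        import numpy as np

        rng = np.random.default_rng(3)
        # Synthetic deltaR1 map (s^-1) for a tumour ROI: most voxels respond, some do not
        delta_r1 = np.concatenate((rng.normal(0.04, 0.02, 800),    # oxygen-responsive voxels
                                   rng.normal(0.00, 0.01, 200)))   # refractory (hypoxic) voxels

        oxy_r_fraction = np.mean(delta_r1 <= 0.0)
        print(f"Oxy-R (oxygen-refractory) fraction: {oxy_r_fraction:.1%}")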

  2. An index for quantifying flocking behavior.

    PubMed

    Quera, Vicenç; Herrando, Salvador; Beltran, Francesc S; Salas, Laura; Miñano, Meritxell

    2007-12-01

    One of the classic research topics in adaptive behavior is the collective displacement of groups of organisms such as flocks of birds, schools of fish, herds of mammals, and crowds of people. However, most agent-based simulations of group behavior do not provide a quantitative index for determining the point at which the flock emerges. An index of the aggregation of moving individuals in a flock was developed, and an example was provided of how it can be used to quantify the degree to which a group of moving individuals actually forms a flock. PMID:18229552
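
    The article's own index is not reproduced in this abstract; as a hedged stand-in, the sketch below computes a Clark-Evans-style ratio of observed to expected mean nearest-neighbour distance, where values well below 1 indicate that the agents have aggregated into a flock.

        # Stand-in aggregation index (Python): Clark-Evans-style nearest-neighbour ratio.
        # This is an illustrative index, not the one defined in the article.
        import numpy as np

        def aggregation_index(positions, arena_area):
            """positions: (n, 2) agent coordinates; arena_area: area of the arena."""
            n = len(positions)
            d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            mean_nn = d.min(axis=1).mean()
            expected_nn = 0.5 * np.sqrt(arena_area / n)   # expectation under random scatter
            return mean_nn / expected_nn

        rng = np.random.default_rng(5)
        scattered = rng.uniform(0, 100, size=(50, 2))                  # no flock
        flocked = rng.normal(loc=(50, 50), scale=3.0, size=(50, 2))    # tight cluster
        print(f"Scattered group: {aggregation_index(scattered, 100 * 100):.2f}")
        print(f"Flocked group:   {aggregation_index(flocked, 100 * 100):.2f}")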

  3. Inducing and Quantifying Clostridium difficile Spore Formation.

    PubMed

    Shen, Aimee; Fimlaid, Kelly A; Pishdadian, Keyan

    2016-01-01

    The Gram-positive nosocomial pathogen Clostridium difficile induces sporulation during growth in the gastrointestinal tract. Sporulation is necessary for this obligate anaerobe to form metabolically dormant spores that can resist antibiotic treatment, survive exit from the mammalian host, and transmit C. difficile infections. In this chapter, we describe a method for inducing C. difficile sporulation in vitro. This method can be used to study sporulation and maximize spore purification yields for a number of C. difficile strain backgrounds. We also describe procedures for visualizing spore formation using phase-contrast microscopy and for quantifying the efficiency of sporulation using heat resistance as a measure of functional spore formation. PMID:27507338

  4. Quantifying and scaling airplane performance in turbulence

    NASA Astrophysics Data System (ADS)

    Richardson, Johnhenri R.

    This dissertation studies the effects of turbulent wind on airplane airspeed and normal load factor, determining how these effects scale with airplane size and developing envelopes to account for them. The results have applications in design and control of aircraft, especially small scale aircraft, for robustness with respect to turbulence. Using linearized airplane dynamics and the Dryden gust model, this dissertation presents analytical and numerical scaling laws for airplane performance in gusts, safety margins that guarantee, with specified probability, that steady flight can be maintained when stochastic wind gusts act upon an airplane, and envelopes to visualize these safety margins. Presented here for the first time are scaling laws for the phugoid natural frequency, phugoid damping ratio, airspeed variance in turbulence, and flight path angle variance in turbulence. The results show that small aircraft are more susceptible to high frequency gusts, that the phugoid damping ratio does not depend directly on airplane size, that the airspeed and flight path angle variances can be parameterized by the ratio of the phugoid natural frequency to a characteristic turbulence frequency, and that the coefficient of variation of the airspeed decreases with increasing airplane size. Accompanying numerical examples validate the results using eleven different airplane models, focusing on NASA's hypothetical Boeing 757 analog, the Generic Transport Model, and its operational 5.5% scale model, the NASA T2. Also presented here for the first time are stationary flight, where the flight state is a stationary random process, and the stationary flight envelope, an adjusted steady flight envelope to visualize safety margins for stationary flight. The dissertation shows that driving the linearized airplane equations of motion with stationary, stochastic gusts results in stationary flight. It also shows how feedback control can enlarge the stationary flight envelope by alleviating
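
    The dissertation's scaling laws are not reproduced here; the sketch below only shows the standard Dryden longitudinal gust spectrum and a numerical check that it integrates to the prescribed gust variance. The airspeed, turbulence length scale and intensity are assumed values.

        # Sketch (Python): Dryden longitudinal gust PSD and a check that it integrates
        # to the prescribed gust variance. Parameter values are assumptions.
        import numpy as np

        V = 50.0        # true airspeed (m/s), assumed
        L_u = 200.0     # longitudinal turbulence length scale (m), assumed
        sigma_u = 1.5   # gust intensity (m/s), assumed

        def dryden_psd(omega):
            """One-sided Dryden longitudinal gust PSD vs angular frequency (rad/s)."""
            return sigma_u**2 * (2.0 * L_u / (np.pi * V)) / (1.0 + (L_u * omega / V) ** 2)

        omega = np.linspace(0.0, 200.0, 200_001)   # rad/s grid, wide enough to capture the tail
        variance = np.trapz(dryden_psd(omega), omega)
        print(f"Recovered gust variance: {variance:.3f} (m/s)^2, target {sigma_u**2:.3f}")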

  5. Quantifying the Determinants of Evolutionary Dynamics Leading to Drug Resistance

    PubMed Central

    Chevereau, Guillaume; Dravecká, Marta; Batur, Tugce; Guvenek, Aysegul; Ayhan, Dilay Hazal; Toprak, Erdal; Bollenbach, Tobias

    2015-01-01

    The emergence of drug resistant pathogens is a serious public health problem. It is a long-standing goal to predict rates of resistance evolution and design optimal treatment strategies accordingly. To this end, it is crucial to reveal the underlying causes of drug-specific differences in the evolutionary dynamics leading to resistance. However, it remains largely unknown why the rates of resistance evolution via spontaneous mutations and the diversity of mutational paths vary substantially between drugs. Here we comprehensively quantify the distribution of fitness effects (DFE) of mutations, a key determinant of evolutionary dynamics, in the presence of eight antibiotics representing the main modes of action. Using precise high-throughput fitness measurements for genome-wide Escherichia coli gene deletion strains, we find that the width of the DFE varies dramatically between antibiotics and, contrary to conventional wisdom, for some drugs the DFE width is lower than in the absence of stress. We show that this previously underappreciated divergence in DFE width among antibiotics is largely caused by their distinct drug-specific dose-response characteristics. Unlike the DFE, the magnitude of the changes in tolerated drug concentration resulting from genome-wide mutations is similar for most drugs but exceptionally small for the antibiotic nitrofurantoin, i.e., mutations generally have considerably smaller resistance effects for nitrofurantoin than for other drugs. A population genetics model predicts that resistance evolution for drugs with this property is severely limited and confined to reproducible mutational paths. We tested this prediction in laboratory evolution experiments using the “morbidostat”, a device for evolving bacteria in well-controlled drug environments. Nitrofurantoin resistance indeed evolved extremely slowly via reproducible mutations—an almost paradoxical behavior since this drug causes DNA damage and increases the mutation rate. Overall

  6. Quantifying uncertainties in U.S. wildland fire emissions across space and time scales

    NASA Astrophysics Data System (ADS)

    Larkin, N. K.; Strand, T. T.; Raffuse, S. M.; Drury, S.

    2011-12-01

    Smoke from wildland fire is a growing concern as air quality regulations tighten and public acceptance declines. Wildland fire emissions inventories are not only important for understanding smoke impacts on air quality but also for quantifying sources of greenhouse gas emissions. Wildland fire emissions can be calculated using a number of models and methods. We present an overview of results from the Smoke and Emissions Model Intercomparison Project (SEMIP) describing uncertainties in calculations of U.S. wildland fire emissions across space and time scales, from single fires to annual national totals. Differences between emissions calculated from different models and systems, and between satellite algorithms and ground-based systems, are shown. The relative importance of uncertainties in fire size and available fuel data, consumption modeling techniques, and emissions factors is compared and quantified and can be applied to various use cases that include air quality impact modeling and greenhouse gas accounting. The results of this work show where additional information and updated models can most improve wildland fire emission inventories.

  7. Quantifying the surface chemistry of 3D matrices in situ

    NASA Astrophysics Data System (ADS)

    Tzeranis, Dimitrios S.; So, Peter T. C.; Yannas, Ioannis V.

    2014-03-01

    Despite the major role of the matrix (the insoluble environment around cells) in physiology and pathology, there are very few and limited methods that can quantify the surface chemistry of a 3D matrix such as a biomaterial or tissue ECM. This study describes a novel optical-based methodology that can quantify the surface chemistry (density of adhesion ligands for particular cell adhesion receptors) of a matrix in situ. The methodology utilizes fluorescent analogs (markers) of the receptor of interest and a series of binding assays, where the amount of bound markers on the matrix is quantified via spectral multi-photon imaging. The study provides preliminary results for the quantification of the ligands for the two major collagen-binding integrins (α1β1, α2β1) in porous collagen scaffolds that have been shown to be able to induce maximum regeneration in transected peripheral nerves. The developed methodology opens the way for quantitative descriptions of the insoluble microenvironment of cells in physiology and pathology, and for integrating the matrix in quantitative models of cell signaling.

  8. Precise thermal NDE for quantifying structural damage

    SciTech Connect

    Del Grande, N.K.; Durbin, P.F.

    1995-09-18

    The authors demonstrated a fast, wide-area, precise thermal NDE imaging system to quantify aircraft corrosion damage, such as percent metal loss, above a threshold of 5% with 3% overall uncertainties. The DBIR precise thermal imaging and detection method has been used successfully to characterize defect types, and their respective depths, in aircraft skins, and multi-layered composite materials used for wing patches, doublers and stiffeners. This precise thermal NDE inspection tool has long-term potential benefits to evaluate the structural integrity of airframes, pipelines and waste containers. They proved the feasibility of the DBIR thermal NDE imaging system to inspect concrete and asphalt-concrete bridge decks. As a logical extension to the successful feasibility study, they plan to inspect a concrete bridge deck from a moving vehicle to quantify the volumetric damage within the deck and the percent of the deck which has subsurface delaminations. Potential near-term benefits are in-service monitoring from a moving vehicle to inspect the structural integrity of the bridge deck. This would help prioritize the repair schedule for a reported 200,000 bridge decks in the US which need substantive repairs. Potential long-term benefits are affordable, and reliable, rehabilitation for bridge decks.

  9. Quantifying the micrometeorological controls on fog deposition

    NASA Astrophysics Data System (ADS)

    Farlin, J. P.; Paw U, K. T.; Underwood, J.

    2014-12-01

    Fog deposition has been shown to be a significant water input into many arid ecosystems. However, deposition of fog onto foliage depends on many factors. Previously, characterizing fog droplet size distributions was labor intensive, but currently we can characterize changes in fog droplet composition in the 2-50 μm range, in 2 μm intervals, in real time. Evaluating how droplet size and ambient micrometeorological conditions affect deposition rates will allow tremendous new insight into fog formation and deposition processes. Previous work has characterized fog deposition as it varies with wind speed in natural systems, but extensively testing how droplet size, wind speed, and angle of interception all co-vary would be impossible in a natural setting. We utilized a wind tunnel with artificial fog-generating nebulizers to simulate fog events across micrometeorological conditions. Using a weighing lysimeter, we were able to quantify the differential rates of deposition on different theoretical leaf types as droplet size and micrometeorological conditions vary. We hope to inform fog collector designs with this information to ensure we are accurately quantifying the fluxes of fog-derived water into these systems.

  10. Quantifying meta-correlations in financial markets

    NASA Astrophysics Data System (ADS)

    Kenett, Dror Y.; Preis, Tobias; Gur-Gershgoren, Gitit; Ben-Jacob, Eshel

    2012-08-01

    Financial markets are modular multi-level systems, in which the relationships between the individual components are not constant in time. Sudden changes in these relationships significantly affect the stability of the entire system, and vice versa. Our analysis is based on historical daily closing prices of the 30 components of the Dow Jones Industrial Average (DJIA) from March 15th, 1939 until December 31st, 2010. We quantify the correlation among these components by determining Pearson correlation coefficients, to investigate whether the mean correlation of the entire portfolio can be used as a precursor for changes in the index return. To this end, we quantify the meta-correlation - the correlation of mean correlation and index return. We find that changes in index returns are significantly correlated with changes in mean correlation. Furthermore, we study the relationship between the index return and correlation volatility - the standard deviation of correlations for a given time interval. This parameter provides further evidence of the effect of the index on market correlations and their fluctuations. Our empirical findings provide new information and quantification of the index leverage effect, and have implications for risk management, portfolio optimization, and the increased stability of financial markets.
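
    A hedged sketch of the meta-correlation idea follows: a rolling mean pairwise correlation of the constituents is computed and its changes are correlated with the index return. The price series are synthetic stand-ins for the 30 DJIA components, so the empirical relationship reported above is not expected to be reproduced.

        # Sketch (Python): rolling mean pairwise correlation vs index return ("meta-correlation").
        # Returns are synthetic; only the computational steps follow the description above.
        import numpy as np

        rng = np.random.default_rng(11)
        n_days, n_stocks, window = 2000, 30, 22
        market = rng.normal(0.0, 0.01, n_days)                       # shared market factor
        returns = 0.7 * market[:, None] + rng.normal(0.0, 0.01, (n_days, n_stocks))
        index_ret = returns.mean(axis=1)

        mean_corr = np.full(n_days, np.nan)
        for t in range(window, n_days):
            c = np.corrcoef(returns[t - window:t].T)                 # 30 x 30 correlation matrix
            mean_corr[t] = c[np.triu_indices(n_stocks, k=1)].mean()

        valid = np.arange(window + 1, n_days)
        d_corr = mean_corr[valid] - mean_corr[valid - 1]             # changes in mean correlation
        meta_corr = np.corrcoef(d_corr, index_ret[valid])[0, 1]
        print(f"Meta-correlation (index return vs change in mean correlation): {meta_corr:+.2f}")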

  11. Obtaining Laws Through Quantifying Experiments: Justifications of Pre-service Physics Teachers in the Case of Electric Current, Voltage and Resistance

    NASA Astrophysics Data System (ADS)

    Mäntylä, Terhi; Hämäläinen, Ari

    2015-07-01

    The language of physics is mathematics, and physics ideas, laws and models describing phenomena are usually represented in mathematical form. Therefore, an understanding of how to navigate between phenomena and the models representing them in mathematical form is important for a physics teacher so that the teacher can make physics understandable to students. Here, the focus is on "experimental mathematization," that is, how laws are established through quantifying experiments. A sequence from qualitative experiments to mathematical formulations through quantifying experiments on electric current, voltage and resistance in pre-service physics teachers' laboratory reports is examined. The way students reason about and justify the mathematical formulation of the measurement results, and how they combine the treatment and presentation of empirical data with their justifications, is analyzed. The results show that pre-service physics teachers understand the basic idea of how quantifying experiments establish the quantities and laws but are not able to argue it in a justified manner.
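
    In the spirit of the "experimental mathematization" discussed above, the sketch below fits invented current-voltage measurements with a least-squares line to recover the law U = R·I; the measurement values are illustrative only.

        # Example (Python): establishing a law from a quantifying experiment by fitting
        # measured voltage against current. The measurements are invented.
        import numpy as np

        current_a = np.array([0.10, 0.20, 0.30, 0.40, 0.50])   # A
        voltage_v = np.array([0.52, 0.99, 1.53, 2.02, 2.49])   # V

        slope, intercept = np.polyfit(current_a, voltage_v, deg=1)
        print(f"Fitted law: U = {slope:.2f} ohm * I  (intercept {intercept:+.3f} V, ~0 as expected)")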

  12. Quantifying uncertainty in observational rainfall datasets

    NASA Astrophysics Data System (ADS)

    Lennard, Chris; Dosio, Alessandro; Nikulin, Grigory; Pinto, Izidine; Seid, Hussen

    2015-04-01

    rainfall datasets available over Africa on monthly, daily and sub-daily time scales as appropriate to quantify spatial and temporal differences between the datasets. We find regional wet and dry biases between datasets (using the ensemble mean as a reference), with generally larger biases in reanalysis products. Rainfall intensity is poorly represented in some datasets, which demonstrates that some datasets should not be used for rainfall intensity analyses. Using 10 CORDEX models we show that in East Africa the spread between observed datasets is often similar to the spread between models. We recommend that specific observational rainfall datasets be used for specific investigations and also that, where many datasets are applicable to an investigation, a probabilistic view be adopted for rainfall studies over Africa.

  13. Effect of soil structure on the growth of bacteria in soil quantified using CARD-FISH

    NASA Astrophysics Data System (ADS)

    Juyal, Archana; Eickhorst, Thilo; Falconer, Ruth; Otten, Wilfred

    2014-05-01

    It has been reported that compaction of soil due to the use of heavy machinery has resulted in the reduction of crop yield. Compaction affects the physical properties of soil such as bulk density, soil strength and porosity. This causes an alteration in the soil structure which limits the mobility of nutrients, water and air infiltration and root penetration in soil. Several studies have been conducted to explore the effect of soil compaction on plant growth and development. However, there is scant information on the effect of soil compaction on the microbial community and its activities in soil. Understanding the effect of soil compaction on the microbial community is essential, as microbial activities are very sensitive to abrupt environmental changes in soil. Therefore, the aim of this work was to investigate the effect of soil structure on the growth of bacteria in soil. The bulk density of soil was used as a soil physical parameter to quantify the effect of soil compaction. To detect and quantify bacteria in soil, the method of catalyzed reporter deposition-fluorescence in situ hybridization (CARD-FISH) was used. This technique results in high-intensity fluorescent signals which make it easy to quantify bacteria against high levels of autofluorescence emitted by soil particles and organic matter. In this study, the bacterial strains Pseudomonas fluorescens SBW25 and Bacillus subtilis DSM10 were used. Soils of aggregate size 2-1 mm were packed at five different bulk densities in polyethylene rings (4.25 cm3). The soil rings were sampled on four different days. Results showed that total bacterial counts were reduced significantly (P

  14. Quantifying near-surface water exchange to assess hydrometeorological models

    NASA Astrophysics Data System (ADS)

    Parent, Annie-Claude; Anctil, François; Morais, Anne

    2013-04-01

    Modelling water exchange in the lower atmosphere-crop-soil system using hydrometeorological models allows estimating actual evapotranspiration (ETa), which is a complex but critical value for numerous hydrological purposes, e.g. hydrological modelling and crop irrigation. This poster presents a summary of the hydrometeorological research activity conducted by our research group. The first purpose of this research is to quantify ETa and drainage of a rainfed potato crop located in South-Eastern Canada. Then, the outputs of the hydrometeorological models under study are compared with the observed turbulent fluxes. Afterwards, the sensitivity of the hydrometeorological models to different inputs is assessed for an environment under a changing climate. ETa was measured with micrometeorological instrumentation (CSAT3, Campbell SCI Inc.; Li7500, LiCor Inc.) and the eddy covariance technique. Near-surface soil heat flux and soil water content at different layers from 10 cm to 100 cm were also measured. Other parameters required by the hydrometeorological models were observed using standard meteorological instrumentation: shortwave and longwave solar radiation, wind speed, air temperature, atmospheric pressure and precipitation. The cumulative ETa during the growth season (123 days) was 331.5 mm, with a daily maximum of 6.5 mm at full coverage; precipitation was 350.6 mm, which is rather small compared with the historical mean (563.3 mm). This experimentation allowed calculating crop coefficients that vary over the growth season for a rainfed potato crop. Land surface schemes such as CLASS (Canadian Land Surface Scheme) and c-ISBA (a Canadian version of the model Interaction Sol-Biosphère-Atmosphère) are 1-D physical hydrometeorological models that produce turbulent fluxes (including ETa) for a given crop. The schemes' performances were assessed for both energy and water balance, based on the resulting turbulent fluxes and the given observations. CLASS showed

  15. Quantifying proteinuria in hypertensive disorders of pregnancy.

    PubMed

    Amin, Sapna V; Illipilla, Sireesha; Hebbar, Shripad; Rai, Lavanya; Kumar, Pratap; Pai, Muralidhar V

    2014-01-01

    Background. Progressive proteinuria indicates worsening of the condition in hypertensive disorders of pregnancy and hence its quantification guides the clinician in decision making and treatment planning. Objective. To evaluate the efficacy of spot dipstick analysis and urinary protein-creatinine ratio (UPCR) in hypertensive disease of pregnancy for predicting 24-hour proteinuria. Subjects and Methods. A total of 102 patients qualifying the inclusion criteria were evaluated with a preadmission urine dipstick test and UPCR performed on a spot voided sample. After admission, the entire 24-hour urine sample was collected and analysed for daily protein excretion. Dipstick estimation and UPCR were compared to the 24-hour results. Results. Seventy-eight patients (76.5%) had significant proteinuria of more than 300 mg/24 h. The dipstick method showed 59% sensitivity and 67% specificity for prediction of significant proteinuria. The area under the curve for UPCR was 0.89 (95% CI: 0.83 to 0.95, P < 0.001), showing 82% sensitivity and a 12.5% false positive rate for a cutoff value of 0.45. Higher cutoff values (1.46 and 1.83) predicted heavy proteinuria (2 g and 3 g/24 h, resp.). Conclusion. This study suggests that the random urinary protein : creatinine ratio is a reliable investigation compared to the dipstick method to assess proteinuria in hypertensive pregnant women. However, clinical laboratories should standardize the reference values for their setup. PMID:25302114
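
    A minimal sketch of the diagnostic statistics reported above (sensitivity, specificity and false-positive rate of the 0.45 UPCR cutoff against the 300 mg/24 h threshold) is given below; the patient values are hypothetical, only the cutoffs come from the abstract.

        # Sensitivity/specificity of a UPCR cutoff against 24-hour proteinuria,
        # illustrated with hypothetical patient values (cutoffs from the abstract).
        import numpy as np

        upcr = np.array([0.2, 0.5, 1.1, 0.3, 0.7, 0.4, 2.0, 0.1])         # spot protein:creatinine ratio
        protein_24h = np.array([150, 420, 900, 180, 500, 350, 2500, 90])  # mg per 24 h

        significant = protein_24h > 300   # significant proteinuria (> 300 mg/24 h)
        predicted = upcr > 0.45           # UPCR cutoff evaluated in the study

        tp = np.sum(predicted & significant)
        fp = np.sum(predicted & ~significant)
        tn = np.sum(~predicted & ~significant)
        fn = np.sum(~predicted & significant)

        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, "
              f"false positive rate={1 - specificity:.2f}")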

  16. Crisis of Japanese vascular flora shown by quantifying extinction risks for 1618 taxa.

    PubMed

    Kadoya, Taku; Takenaka, Akio; Ishihama, Fumiko; Fujita, Taku; Ogawa, Makoto; Katsuyama, Teruo; Kadono, Yasuro; Kawakubo, Nobumitsu; Serizawa, Shunsuke; Takahashi, Hideki; Takamiya, Masayuki; Fujii, Shinji; Matsuda, Hiroyuki; Muneda, Kazuo; Yokota, Masatsugu; Yonekura, Koji; Yahara, Tetsukazu

    2014-01-01

    Although many people have expressed alarm that we are witnessing a mass extinction, few projections have been quantified, owing to limited availability of time-series data on threatened organisms, especially plants. To quantify the risk of extinction, we need to monitor changes in population size over time for as many species as possible. Here, we present the world's first quantitative projection of plant species loss at a national level, with stochastic simulations based on the results of population censuses of 1618 threatened plant taxa in 3574 map cells of ca. 100 km2. More than 500 lay botanists helped monitor those taxa in 1994-1995 and in 2003-2004. We projected that between 370 and 561 vascular plant taxa will go extinct in Japan during the next century if past trends of population decline continue. This extinction rate is approximately two to three times the global rate. Using time-series data, we show that existing national protected areas (PAs) covering ca. 7% of Japan will not adequately prevent population declines: even core PAs can protect at best <60% of local populations from decline. Thus, the Aichi Biodiversity Target to expand PAs to 17% of land (and inland water) areas, as committed to by many national governments, is not enough: only 29.2% of currently threatened species will become non-threatened under the assumption that probability of protection success by PAs is 0.5, which our assessment shows is realistic. In countries where volunteers can be organized to monitor threatened taxa, censuses using our method should be able to quantify how fast we are losing species and to assess how effective current conservation measures such as PAs are in preventing species extinction. PMID:24922311
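
    The sketch below is an illustrative stochastic projection in the spirit of the census-based simulations described above, not the authors' model: each taxon holds several local populations, each population is lost per decade with a probability inferred from past declines, and a taxon is counted extinct once all its populations are gone. All parameters are hypothetical.

        # Illustrative stochastic extinction projection (not the authors' model).
        import numpy as np

        rng = np.random.default_rng(0)
        n_taxa = 1000
        decades = 10                                   # one century
        n_pops = rng.integers(1, 20, n_taxa)           # local populations per taxon (hypothetical)
        p_loss = rng.uniform(0.05, 0.4, n_taxa)        # per-decade loss probability (hypothetical)

        extinct = 0
        for pops, p in zip(n_pops, p_loss):
            surviving = pops
            for _ in range(decades):
                surviving = rng.binomial(surviving, 1.0 - p)   # populations surviving this decade
            if surviving == 0:
                extinct += 1

        print(f"projected extinctions over a century: {extinct} of {n_taxa} taxa")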

  17. Quantifying fault recovery in multiprocessor systems

    NASA Technical Reports Server (NTRS)

    Malek, Miroslaw; Harary, Frank

    1990-01-01

    Various aspects of reliable computing are formalized and quantified with emphasis on efficient fault recovery. The mathematical model which proves to be most appropriate is provided by the theory of graphs. New measures for fault recovery are developed and the values of the elements of the fault recovery vector are observed to depend not only on the computation graph H and the architecture graph G, but also on the specific location of a fault. In the examples, a hypercube is chosen as a representative of parallel computer architecture, and a pipeline as a typical configuration for program execution. Dependability qualities of such a system are defined with or without a fault. These qualities are determined by the resiliency triple defined by three parameters: multiplicity, robustness, and configurability. Parameters for measuring the recovery effectiveness are also introduced in terms of distance, time, and the number of new, used, and moved nodes and edges.

  18. Quantifying the Anthropogenic Footprint in Eastern China

    PubMed Central

    Meng, Chunlei; Dou, Youjun

    2016-01-01

    Urban heat island (UHI) is one of the main focuses of urban climate studies. The parameterization of anthropogenic heat (AH) is crucially important in UHI studies, but a universal method to parameterize the spatial pattern of AH is still lacking. This paper uses the NOAA DMSP/OLS nighttime light data to parameterize the spatial pattern of the AH. Two experiments were designed and performed to quantify the influence of AH on land surface temperature (LST) in eastern China and 24 big cities. The annual mean heating caused by AH is up to 1 K in eastern China. This paper uses the relative LST differences rather than the absolute LST differences between the control run and contrast run of the Common Land Model (CoLM) to find the drivers. The heating effect of the anthropogenic footprint has less influence on relatively warm and wet cities. PMID:27067132

  20. Quantifying International Travel Flows Using Flickr

    PubMed Central

    Barchiesi, Daniele; Moat, Helen Susannah; Alis, Christian; Bishop, Steven; Preis, Tobias

    2015-01-01

    Online social media platforms are opening up new opportunities to analyse human behaviour on an unprecedented scale. In some cases, the fast, cheap measurements of human behaviour gained from these platforms may offer an alternative to gathering such measurements using traditional, time-consuming and expensive surveys. Here, we use geotagged photographs uploaded to the photo-sharing website Flickr to quantify international travel flows, by extracting the location of users and inferring trajectories to track their movement across time. We find that Flickr-based estimates of the number of visitors to the United Kingdom significantly correlate with the official estimates released by the UK Office for National Statistics, for 28 countries for which official estimates are calculated. Our findings underline the potential for indicators of key aspects of human behaviour, such as mobility, to be generated from data attached to the vast volumes of photographs posted online. PMID:26147500
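
    The core comparison described above can be sketched as a simple correlation between Flickr-derived visitor counts and official statistics; the per-country numbers below are hypothetical stand-ins for the real data.

        # Correlating Flickr-derived visitor estimates with official statistics
        # (hypothetical per-country counts).
        import numpy as np
        from scipy import stats

        flickr_visitors = np.array([1200, 800, 300, 150, 2200, 90, 400])                 # geotagged users
        official_visitors = np.array([3.1e6, 2.0e6, 0.8e6, 0.4e6, 5.5e6, 0.2e6, 1.1e6])  # official estimates

        r, p_value = stats.pearsonr(flickr_visitors, official_visitors)
        print(f"Pearson r = {r:.2f} (p = {p_value:.3g})")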

  1. Quantifying the Anthropogenic Footprint in Eastern China

    NASA Astrophysics Data System (ADS)

    Meng, Chunlei; Dou, Youjun

    2016-04-01

    Urban heat island (UHI) is one of the main focuses of urban climate studies. The parameterization of anthropogenic heat (AH) is crucially important in UHI studies, but a universal method to parameterize the spatial pattern of AH is still lacking. This paper uses the NOAA DMSP/OLS nighttime light data to parameterize the spatial pattern of the AH. Two experiments were designed and performed to quantify the influence of AH on land surface temperature (LST) in eastern China and 24 big cities. The annual mean heating caused by AH is up to 1 K in eastern China. This paper uses the relative LST differences rather than the absolute LST differences between the control run and contrast run of the Common Land Model (CoLM) to find the drivers. The heating effect of the anthropogenic footprint has less influence on relatively warm and wet cities.
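
    One simple way to spread a regional anthropogenic-heat value over space using nighttime-light digital numbers is sketched below. The proportional scaling and all values are illustrative assumptions, not necessarily the exact parameterization used in the paper.

        # Anthropogenic heat (AH) spatialized by normalized nighttime-light intensity
        # (illustrative assumption, not necessarily the paper's exact scheme).
        import numpy as np

        dn = np.array([[0, 10, 30],
                       [5, 40, 63],
                       [0, 20, 55]], dtype=float)   # DMSP/OLS digital numbers (0-63), hypothetical

        ah_max = 60.0                  # peak anthropogenic heat flux (W m-2), hypothetical
        ah = ah_max * dn / dn.max()    # AH scaled by normalized light intensity

        print(np.round(ah, 1))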

  2. How to quantify conduits in wood?

    PubMed

    Scholz, Alexander; Klepsch, Matthias; Karimi, Zohreh; Jansen, Steven

    2013-01-01

    Vessels and tracheids represent the most important xylem cells with respect to long-distance water transport in plants. Wood anatomical studies frequently provide several quantitative details of these cells, such as vessel diameter, vessel density, vessel element length, and tracheid length, while important information on the three-dimensional structure of the hydraulic network is not considered. This paper aims to provide an overview of various techniques, although there is no standard protocol to quantify conduits due to high anatomical variation and a wide range of techniques available. Despite recent progress in image analysis programs and automated methods for measuring cell dimensions, density, and spatial distribution, various characters remain time-consuming and tedious. Quantification of vessels and tracheids is not only important to better understand functional adaptations of tracheary elements to environmental parameters, but will also be essential for linking wood anatomy with other fields such as wood development, xylem physiology, palaeobotany, and dendrochronology. PMID:23507674

  3. Quantifying and predicting interpretational uncertainty in cross-sections

    NASA Astrophysics Data System (ADS)

    Randle, Charles; Bond, Clare; Monaghan, Alison; Lark, Murray

    2015-04-01

    Cross-sections are often constructed from data to create a visual impression of the geologist's interpretation of the sub-surface geology. However, as with all interpretations, this vision of the sub-surface geology is uncertain. We have designed and carried out an experiment with the aim of quantifying the uncertainty in geological cross-sections created by experts interpreting borehole data. By analysing different attributes of the data and interpretations we reflect on the main controls on uncertainty. A group of ten expert modellers at the British Geological Survey were asked to interpret an 11.4 km long cross-section from south-east Glasgow, UK. The data provided consisted of map and borehole data of the superficial deposits and shallow bedrock. Each modeller had a unique set of 11 boreholes removed from their dataset, to which their interpretations of the top of the bedrock were compared. This methodology allowed quantification of how far from the 'correct answer' each interpretation is at 11 points along each interpreted cross-section line, through comparison of the interpreted and actual bedrock elevations in the boreholes. This resulted in the collection of 110 measurements of the error to use in further analysis. To determine the potential controls on uncertainty, various attributes relating to the modeller, the interpretation and the data were recorded. Modellers were asked to fill out a questionnaire asking for information, such as how much 3D modelling experience they had, and how long it took them to complete the interpretation. They were also asked to record their confidence in their interpretations graphically, in the form of a confidence level drawn onto the cross-section. Initial analysis showed that the majority of the experts' interpreted bedrock elevations were within 5 metres of those recorded in the withheld boreholes. Their distribution is peaked and symmetrical about a mean of zero, indicating that there was no tendency for the experts to either under

  4. Quantifying structural states of soft mudrocks

    NASA Astrophysics Data System (ADS)

    Li, B.; Wong, R. C. K.

    2016-05-01

    In this paper, a cm model is proposed to quantify structural states of soft mudrocks, which are dependent on clay fractions and porosities. Physical properties of natural and reconstituted soft mudrock samples are used to derive two parameters in the cm model. With the cm model, a simplified homogenization approach is proposed to estimate geomechanical properties and fabric orientation distributions of soft mudrocks based on the mixture theory. Soft mudrocks are treated as a mixture of nonclay minerals and clay-water composites. Nonclay minerals have a high stiffness and serve as a structural framework of mudrocks when they have a high volume fraction. Clay-water composites occupy the void space among nonclay minerals and serve as an in-fill matrix. With the increase of volume fraction of clay-water composites, there is a transition in the structural state from the state of framework supported to the state of matrix supported. The decreases in shear strength and pore size as well as increases in compressibility and anisotropy in fabric are quantitatively related to this transition. The new homogenization approach based on the proposed cm model yields better performance evaluation than common effective medium modeling approaches because the interactions among nonclay minerals and clay-water composites are considered. With wireline logging data, the cm model is applied to quantify the structural states of Colorado shale formations at different depths in the Cold Lake area, Alberta, Canada. Key geomechanical parameters are estimated based on the proposed homogenization approach, and the critical intervals with low-strength shale formations are identified.

  5. A mass-balance model to separate and quantify colloidal and solute redistributions in soil

    USGS Publications Warehouse

    Bern, C.R.; Chadwick, O.A.; Hartshorn, A.S.; Khomo, L.M.; Chorover, J.

    2011-01-01

    Studies of weathering and pedogenesis have long used calculations based upon low solubility index elements to determine mass gains and losses in open systems. One of the questions currently unanswered in these settings is the degree to which mass is transferred in solution (solutes) versus suspension (colloids). Here we show that differential mobility of the low solubility, high field strength (HFS) elements Ti and Zr can trace colloidal redistribution, and we present a model for distinguishing between mass transfer in suspension and solution. The model is tested on a well-differentiated granitic catena located in Kruger National Park, South Africa. Ti and Zr ratios from parent material, soil and colloidal material are substituted into a mixing equation to quantify colloidal movement. The results show zones of both colloid removal and augmentation along the catena. Colloidal losses of 110 kg m-2 (-5% relative to parent material) are calculated for one eluviated soil profile. A downslope illuviated profile has gained 169 kg m-2 (10%) colloidal material. Elemental losses by mobilization in true solution are ubiquitous across the catena, even in zones of colloidal accumulation, and range from 1418 kg m-2 (-46%) for an eluviated profile to 195 kg m-2 (-23%) at the bottom of the catena. Quantification of simultaneous mass transfers in solution and suspension provides greater specificity on processes within soils and across hillslopes. Additionally, because colloids include both HFS and other elements, the ability to quantify their redistribution has implications for standard calculations of soil mass balances using such index elements. © 2011.
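
    A schematic two-endmember mixing calculation in the spirit of the Ti/Zr approach is sketched below; it is not the authors' full mass-balance model, and all ratio values are hypothetical.

        # Schematic two-endmember mixing with Ti/Zr ratios (hypothetical values).
        # The soil ratio is treated as a mixture of parent material and colloidal
        # material: R_soil = f * R_colloid + (1 - f) * R_parent, solved for f.
        ti_zr_parent = 28.0    # Ti/Zr of parent material
        ti_zr_colloid = 45.0   # Ti/Zr of colloidal material
        ti_zr_soil = 33.0      # Ti/Zr measured in a soil horizon

        f_colloid = (ti_zr_soil - ti_zr_parent) / (ti_zr_colloid - ti_zr_parent)
        print(f"fraction of colloid-derived material: {f_colloid:.2f}")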

  6. Quantifying nonverbal communicative behavior in face-to-face human dialogues

    NASA Astrophysics Data System (ADS)

    Skhiri, Mustapha; Cerrato, Loredana

    2002-11-01

    This study is based on the assumption that understanding how humans use nonverbal behavior in dialogues can be very useful in the design of more natural-looking animated talking heads. The goal of the study is twofold: (1) to explore how people use specific facial expressions and head movements to serve important dialogue functions, and (2) to show evidence that it is possible to measure and quantify the extent of these movements with the Qualisys MacReflex motion tracking system. Naturally elicited dialogues between humans have been analyzed, with attention focused on those nonverbal behaviors that serve the very relevant functions of regulating the conversational flow (i.e., turn taking) and producing information about the state of communication (i.e., feedback). The results show that eyebrow raising, head nods, and head shakes are typical signals involved during the exchange of speaking turns, as well as in the production and elicitation of feedback. These movements can be easily measured and quantified, and these measures can be implemented in animated talking heads.

  7. Quantifying the statistical complexity of low-frequency fluctuations in semiconductor lasers with optical feedback

    SciTech Connect

    Tiana-Alsina, J.; Torrent, M. C.; Masoller, C.; Garcia-Ojalvo, J.

    2010-07-15

    Low-frequency fluctuations (LFFs) represent a dynamical instability that occurs in semiconductor lasers when they are operated near the lasing threshold and subject to moderate optical feedback. LFFs consist of sudden power dropouts followed by gradual, stepwise recoveries. We analyze experimental time series of intensity dropouts and quantify the complexity of the underlying dynamics employing two tools from information theory, namely, Shannon's entropy and the Martin, Plastino, and Rosso statistical complexity measure. These measures are computed using a method based on ordinal patterns, by which the relative length and ordering of consecutive interdropout intervals (i.e., the time intervals between consecutive intensity dropouts) are analyzed, disregarding the precise timing of the dropouts and the absolute durations of the interdropout intervals. We show that this methodology is suitable for quantifying subtle characteristics of the LFFs, and in particular the transition to fully developed chaos that takes place when the laser's pump current is increased. Our method shows that the statistical complexity of the laser does not increase continuously with the pump current, but levels off before reaching the coherence collapse regime. This behavior coincides with that of the first- and second-order correlations of the interdropout intervals, suggesting that these correlations, and not the chaotic behavior, are what determine the level of complexity of the laser's dynamics. These results hold for two different dynamical regimes, namely, sustained LFFs and coexistence between LFFs and steady-state emission.
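
    A minimal sketch of the ordinal-pattern calculation described above is given below: consecutive interdropout intervals are mapped to their rank-order pattern and the Shannon entropy of the pattern histogram is computed. The interval series is synthetic, not experimental data.

        # Ordinal-pattern (permutation) Shannon entropy of interdropout intervals.
        from collections import Counter
        from math import factorial, log

        import numpy as np

        rng = np.random.default_rng(1)
        intervals = rng.exponential(1.0, size=2000)    # synthetic interdropout intervals

        order = 3                                      # ordinal pattern length
        patterns = Counter(
            tuple(np.argsort(intervals[i:i + order]))
            for i in range(len(intervals) - order + 1)
        )
        total = sum(patterns.values())

        entropy = -sum((c / total) * log(c / total) for c in patterns.values())
        h_max = log(factorial(order))                  # maximum entropy, log(order!)
        print(f"normalized permutation entropy: {entropy / h_max:.3f}")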

  8. Quantifying touch feel perception: tribological aspects

    NASA Astrophysics Data System (ADS)

    Liu, X.; Yue, Z.; Cai, Z.; Chetwynd, D. G.; Smith, S. T.

    2008-08-01

    We report a new investigation into how surface topography and friction affect human touch-feel perception. In contrast with previous work based on micro-scale mapping of surface mechanical and tribological properties, this investigation focuses on the direct measurement of the friction generated when a fingertip is stroked on a test specimen. A special friction apparatus was built for the in situ testing, based on a linear flexure mechanism with both contact force and frictional force measured simultaneously. Ten specimens, already independently assessed in a 'perception clinic', with materials including natural wood, leather, engineered plastics and metal were tested and the results compared with the perceived rankings. Because surface geometrical features are suspected to play a significant role in perception, a second set of samples, all of one material, were prepared and tested in order to minimize the influence of properties such as hardness and thermal conductivity. To minimize subjective effects, all specimens were also tested in a roller-on-block configuration based upon the same friction apparatus, with the roller materials being steel, brass and rubber. This paper reports the detailed design and instrumentation of the friction apparatus, the experimental set-up and the friction test results. Attempts have been made to correlate the measured properties and the perceived feelings for both roughness and friction. The results show that the measured roughness and friction coefficient both have a strong correlation with the rough-smooth and grippy-slippery feelings.

  9. Quantifying Nitrate Uptake in an Anabranching, Unsteady, Antarctic Stream

    NASA Astrophysics Data System (ADS)

    Koch, J. C.; McKnight, D. M.

    2006-12-01

    We conducted a thirty-three-hour nitrate tracer injection in Huey Creek, a high-gradient, first-order stream located in the McMurdo Dry Valleys of Antarctica. Bacteria are the dominant life form in Huey Creek. A nitrate injection will allow us to quantify the rates and processes associated with bacterial activity, as well as to further constrain the location of the communities. Transient storage models (TSMs) are often used to analyze stream tracer injections, but such analyses are hindered by unsteady flow in Huey Creek. Daily flood pulses result in branching of the stream channel, and subsequent infiltration of branch waters into the subsurface as streamflow declines. Preliminary analyses suggest that subsurface residence times of infiltrating water are on the order of tens of hours, which is significantly longer than storage resulting from near-channel hyporheic exchange. Channel branching creates a second storage zone that must be added to a TSM in order to correctly quantify model parameters. Furthermore, the diel flood cycle may result in temporal variations in model parameters including subsurface storage area (As), exchange rate (α), and bacterial activity (λ). In order to avoid these complexities, the Huey Creek nutrient injection was simulated using a groundwater flow model. The system's hydrology was modeled using MODFLOW and the streamflow routing package, DAFLOW. Solute movement was quantified with MT3DMS. These models were calibrated by comparing simulated and observed solute breakthrough curves. We believe that using a groundwater flow model will more accurately describe this system's hydrology, leading to greater confidence in nutrient uptake rates.

  10. Can we quantify local groundwater recharge using electrical resistivity tomography?

    NASA Astrophysics Data System (ADS)

    Noell, U.; Günther, T.; Ganz, C.; Lamparter, A.

    2012-04-01

    Electrical resistivity tomography (ERT) has become a common tool to observe flow processes within the saturated/unsaturated zones. While it is still doubtful whether the method can reliably yield quantitative results, the qualitative success has been shown in "numerous" examples. To quantify the rate of rainfall which reaches the groundwater table is still a problematic venture due to a combination of several physical and mathematical obstacles that may lead to huge errors. In 2007 an infiltration experiment was performed and observed using 3D array ERT. The site is located close to Hannover, Germany, on a well-studied sandy soil. The groundwater table at this site was at a depth of about 1.3 m. The inversion results of the ERT data yield reliable-looking images of the infiltration process. Later experiments nearby using tracer fluid and combined TDR and resistivity measurements in the subsurface strongly supported the assumption that the resistivity images indeed reliably depict the water distribution during infiltration. The quantitative interpretation shows that two days after infiltration about 40% of the water has reached the groundwater. However, the question remains how reliable this quantitative interpretation actually is. The first obstacle: The inversion of the ERT data gives one possible resistivity distribution within the subsurface that can explain the data. It is not necessarily the right one and the result depends on the error model and the inversion parameters and method. For these measurements we assumed the same error for every single quadrupole (3%) and applied the Gauss-Newton method with minimum-length constraints in order to reduce the smoothing to a minimum (very small lambda). Numerical experiments showed little smoothing using this approach, and smoothing must be suppressed if preferential flow is to be seen. The inversion showed artefacts of minor amplitude compared with other inversion parameter settings. The second obstacle: The

  11. Probabilistic structural analysis to quantify uncertainties associated with turbopump blades

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.; Rubinstein, Robert; Chamis, Christos C.

    1987-01-01

    A probabilistic study of turbopump blades has been in progress at NASA Lewis Research Center over the last two years. The objectives of this study are to evaluate the effects of uncertainties in geometry and material properties on the structural response of the turbopump blades and to evaluate the tolerance limits on the design. A methodology based on a probabilistic approach has been developed to quantify the effects of the random uncertainties. The results of this study indicate that only the variations in geometry have significant effects.

  12. Quantifiers More or Less Quantify On-Line: ERP Evidence for Partial Incremental Interpretation

    ERIC Educational Resources Information Center

    Urbach, Thomas P.; Kutas, Marta

    2010-01-01

    Event-related brain potentials were recorded during RSVP reading to test the hypothesis that quantifier expressions are incrementally interpreted fully and immediately. In sentences tapping general knowledge ("Farmers grow crops/worms as their primary source of income"), Experiment 1 found larger N400s for atypical ("worms") than typical objects…

  13. Mapping the Galactic Halo. VIII. Quantifying Substructure

    NASA Astrophysics Data System (ADS)

    Starkenburg, Else; Helmi, Amina; Morrison, Heather L.; Harding, Paul; van Woerden, Hugo; Mateo, Mario; Olszewski, Edward W.; Sivarani, Thirupathi; Norris, John E.; Freeman, Kenneth C.; Shectman, Stephen A.; Dohm-Palmer, R. C.; Frey, Lucy; Oravetz, Dan

    2009-06-01

    We have measured the amount of kinematic substructure in the Galactic halo using the final data set from the Spaghetti project, a pencil-beam high-latitude sky survey. Our sample contains 101 photometrically selected and spectroscopically confirmed giants with accurate distance, radial velocity, and metallicity information. We have developed a new clustering estimator: the "4distance" measure, which when applied to our data set leads to the identification of one group and seven pairs of clumped stars. The group, with six members, can confidently be matched to tidal debris of the Sagittarius dwarf galaxy. Two pairs match the properties of known Virgo structures. Using models of the disruption of Sagittarius in Galactic potentials with different degrees of dark halo flattening, we show that this favors a spherical or prolate halo shape, as demonstrated by Newberg et al. using the Sloan Digital Sky Survey data. One additional pair can be linked to older Sagittarius debris. We find that 20% of the stars in the Spaghetti data set are in substructures. From comparison with random data sets, we derive a very conservative lower limit of 10% to the amount of substructure in the halo. However, comparison to numerical simulations shows that our results are also consistent with a halo entirely built up from disrupted satellites, provided that the dominating features are relatively broad due to early merging or relatively heavy progenitor satellites.

  14. Quantifying Hierarchy Stimuli in Systematic Desensitization Via GSR: A Preliminary Investigation

    ERIC Educational Resources Information Center

    Barabasz, Arreed F.

    1974-01-01

    The aim of the method for quantifying hierarchy stimuli by Galvanic Skin Resistance recordings is to improve the results of systematic desensitization by attenuating the subjective influences in hierarchy construction which are common in traditional procedures. (Author/CS)

  15. Quantifying Uncertainties in Rainfall Maps from Cellular Communication Networks

    NASA Astrophysics Data System (ADS)

    Uijlenhoet, R.; Rios Gaona, M. F.; Overeem, A.; Leijnse, H.

    2014-12-01

    The core idea behind rainfall retrievals from commercial microwave link networks is to measure the decrease in power due to attenuation of the electromagnetic signal by raindrops along the link path. Accurate rainfall measurements are of vital importance in hydrological applications, for instance, flash-flood early-warning systems, agriculture, and climate modeling. Hence, such an alternative technique fulfills the need for measurements with higher resolution in time and space, especially in places where standard rain gauge-networks are scarce or poorly maintained. Rainfall estimation via commercial microwave link networks, at country-wide scales, has recently been demonstrated. Despite their potential applicability in rainfall estimation at higher spatiotemporal resolutions, the uncertainties present in link-based rainfall maps are not yet fully comprehended. Now we attempt to quantify the inherent sources of uncertainty present in interpolated maps computed from commercial microwave link rainfall retrievals. In order to disentangle these sources of uncertainty we identified four main sources of error: 1) microwave link measurements, 2) availability of microwave link measurements, 3) spatial distribution of the network, and 4) interpolation methodology. We computed more than 1000 rainfall fields, for The Netherlands, from real and simulated microwave link data. These rainfall fields were compared to quality-controlled gauge-adjusted radar rainfall maps considered as ground-truth. Thus we were able to quantify the contribution of errors in microwave link measurements to the overall uncertainty. The actual performance of the commercial microwave link network is affected by the intermittent availability of the links, not only in time but also in space. We simulated a fully-operational network in time and space, and thus we quantified the role of the availability of microwave link measurements to the overall uncertainty. This research showed that the largest source of
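
    The basic retrieval step behind such link-based rainfall maps can be sketched as follows: the rain-induced power loss over a link is converted to a path-averaged rain rate through the power law k = a * R**b. The coefficients and link readings below are placeholders, not calibrated values.

        # Path-averaged rain rate from the attenuation of a microwave link,
        # using the power law k = a * R**b (specific attenuation in dB/km).
        baseline_power_dbm = -42.0    # received power in dry conditions (hypothetical)
        wet_power_dbm = -50.5         # received power during rain (hypothetical)
        link_length_km = 3.2

        a, b = 0.33, 0.90             # hypothetical power-law coefficients (frequency dependent)

        attenuation_db = baseline_power_dbm - wet_power_dbm      # total path attenuation
        k = attenuation_db / link_length_km                      # specific attenuation (dB/km)
        rain_rate = (k / a) ** (1.0 / b)                         # mm/h

        print(f"path-averaged rain rate: {rain_rate:.1f} mm/h")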

  16. Quantifying Different Tactile Sensations Evoked by Cutaneous Electrical Stimulation Using Electroencephalography Features.

    PubMed

    Zhang, Dingguo; Xu, Fei; Xu, Heng; Shull, Peter B; Zhu, Xiangyang

    2016-03-01

    Psychophysical tests and standardized questionnaires are often used to analyze tactile sensation based on subjective judgment in conventional studies. In contrast with the subjective evaluation, a novel method based on electroencephalography (EEG) is proposed to explore the possibility of quantifying tactile sensation in an objective way. The proposed experiments adopt cutaneous electrical stimulation to generate two kinds of sensations (vibration and pressure) with three grades (low/medium/strong) on eight subjects. Event-related potentials (ERPs) and event-related synchronization/desynchronization (ERS/ERD) are extracted from the EEG, which are used as evaluation indexes to distinguish between vibration and pressure, and also to discriminate sensation grades. Results show that a five-phase P1–N1–P2–N2–P3 deflection is induced in the EEG. Using the amplitudes of the later ERP components (N2 and P3), vibration and pressure sensations can be discriminated on both individual and grand-averaged ERPs (p < 0.05). The grand-average ERPs can distinguish the three sensation grades, but there is no significant difference for individuals. In addition, ERS/ERD features of the mu rhythm (8–13 Hz) are adopted. Vibration and pressure sensations can be discriminated on grand-average ERS/ERD (p < 0.05), but only some individuals show a significant difference. The grand-averaged results show that most sensation grades can be differentiated, and most pairwise comparisons show significant differences for individuals (p < 0.05). The work suggests that ERP- and ERS/ERD-based EEG features may have potential to quantify tactile sensations for medical diagnosis or engineering applications. PMID:26762865
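
    A minimal sketch of the ERS/ERD feature used above, computed as the percentage change of mu-band (8-13 Hz) power in a post-stimulus window relative to a pre-stimulus baseline, is given below. The signals are synthetic, not recorded EEG.

        # ERS/ERD as percentage change of mu-band power relative to baseline.
        import numpy as np
        from scipy.signal import welch

        fs = 500.0
        rng = np.random.default_rng(2)
        baseline = rng.normal(size=int(2 * fs))       # 2 s pre-stimulus EEG (synthetic)
        post = 0.7 * rng.normal(size=int(2 * fs))     # 2 s post-stimulus EEG (synthetic)

        def mu_band_power(x, fs, lo=8.0, hi=13.0):
            f, pxx = welch(x, fs=fs, nperseg=256)
            mask = (f >= lo) & (f <= hi)
            return np.sum(pxx[mask]) * (f[1] - f[0])  # approximate band power

        p_base = mu_band_power(baseline, fs)
        p_post = mu_band_power(post, fs)
        erd_ers = 100.0 * (p_post - p_base) / p_base  # negative -> ERD, positive -> ERS
        print(f"mu-band ERS/ERD: {erd_ers:.1f} %")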

  17. Quantifying the Robustness of the English Sibilant Fricative Contrast in Children

    PubMed Central

    Reidy, Patrick F.; Beckman, Mary E.; Edwards, Jan

    2015-01-01

    Purpose Four measures of children's developing robustness of phonological contrast were compared to see how they correlated with age, vocabulary size, and adult listeners' correctness ratings. Method Word-initial sibilant fricative productions from eighty-one 2- to 5-year-old children and 20 adults were phonetically transcribed and acoustically analyzed. Four measures of robustness of contrast were calculated for each speaker on the basis of the centroid frequency measured from each fricative token. Productions that were transcribed as correct from different children were then used as stimuli in a perception experiment in which adult listeners rated the goodness of each production. Results Results showed that the degree of category overlap, quantified as the percentage of a child's productions whose category could be correctly predicted from the output of a mixed-effects logistic regression model, was the measure that correlated best with listeners' goodness judgments. Conclusions Even when children's productions have been transcribed as correct, adult listeners are sensitive to within-category variation quantified by the child's degree of category overlap. Further research is needed to explore the relationship between the age of a child and adults' sensitivity to different types of within-category variation in children's speech. PMID:25766040

  18. New computational solution to quantify synthetic material porosity from optical microscopic images.

    PubMed

    De Albuquerque, V H C; Filho, P P Rebouças; Cavalcante, T S; Tavares, J M R S

    2010-10-01

    This paper presents a new computational solution to quantify the porosity of synthetic materials from optical microscopic images. The solution is based on an artificial neural network of the multilayer perceptron type, and a backpropagation algorithm is used for training. To evaluate this new solution, 40 sample images of a synthetic material were analysed and the quality of the results was confirmed by human visual analysis. In addition, these results were compared with ones obtained with a commonly used commercial system, confirming their superior quality and the shorter time needed. The effect of images with noise was also studied and the new solution showed itself to be more reliable. The training phase of the new solution was analysed, confirming that it can be performed in a very easy and straightforward manner. Thus, the new solution demonstrated that it is a valid and adequate option for researchers, engineers, specialists and other professionals to quantify the porosity of materials from microscopic images in an automatic, fast, efficient and reliable manner. PMID:21050213
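
    As a much simpler stand-in for illustration (not the neural-network solution of the paper), porosity can be estimated from a segmented micrograph as the fraction of pixels classified as pore; the image and threshold below are synthetic assumptions.

        # Porosity as the fraction of pore pixels after a grey-level threshold
        # (a simplified stand-in, not the MLP approach of the paper).
        import numpy as np

        rng = np.random.default_rng(3)
        image = rng.integers(0, 256, size=(512, 512))   # synthetic 8-bit micrograph

        threshold = 60                                  # pores assumed darker than this value
        porosity = np.mean(image < threshold)           # fraction of pore pixels

        print(f"estimated porosity: {100 * porosity:.1f} %")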

  19. Using nitrate to quantify quick flow in a karst aquifer

    USGS Publications Warehouse

    Mahler, B.J.; Garner, B.D.

    2009-01-01

    In karst aquifers, contaminated recharge can degrade spring water quality, but quantifying the rapid recharge (quick flow) component of spring flow is challenging because of its temporal variability. Here, we investigate the use of nitrate in a two-endmember mixing model to quantify quick flow in Barton Springs, Austin, Texas. Historical nitrate data from recharging creeks and Barton Springs were evaluated to determine a representative nitrate concentration for the aquifer water endmember (1.5 mg/L) and the quick flow endmember (0.17 mg/L for nonstormflow conditions and 0.25 mg/L for stormflow conditions). Under nonstormflow conditions for 1990 to 2005, model results indicated that quick flow contributed from 0% to 55% of spring flow. The nitrate-based two-endmember model was applied to the response of Barton Springs to a storm and results compared to those produced using the same model with δ18O and specific conductance (SC) as tracers. Additionally, the mixing model was modified to allow endmember quick flow values to vary over time. Of the three tracers, nitrate appears to be the most advantageous because it is conservative and because the difference between the concentrations in the two endmembers is large relative to their variance. The δ18O-based model was very sensitive to variability within the quick flow endmember, and SC was not conservative over the timescale of the storm response. We conclude that a nitrate-based two-endmember mixing model might provide a useful approach for quantifying the temporally variable quick flow component of spring flow in some karst systems. © 2008 National Ground Water Association.
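
    The two-endmember mixing calculation implied above treats the spring nitrate concentration as a mixture of aquifer water and quick flow; the endmember concentrations below are those quoted in the abstract, while the spring value is hypothetical.

        # Two-endmember mixing: C_spring = f * C_quick + (1 - f) * C_aquifer,
        # solved for the quick-flow fraction f.
        c_aquifer = 1.5      # mg/L nitrate, aquifer-water endmember (from the abstract)
        c_quick = 0.17       # mg/L nitrate, quick-flow endmember, nonstormflow (from the abstract)
        c_spring = 1.1       # mg/L nitrate measured at the spring (hypothetical)

        f_quick = (c_aquifer - c_spring) / (c_aquifer - c_quick)
        print(f"quick flow fraction of spring discharge: {100 * f_quick:.0f} %")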

  20. Quantifying Volume of Groundwater in High Elevation Meadows

    NASA Astrophysics Data System (ADS)

    Ciruzzi, D.; Lowry, C.

    2013-12-01

    Assessing the current and future water needs of high elevation meadows is dependent on quantifying the volume of groundwater stored within the meadow sediment. As groundwater dependent ecosystems, these meadows rely on their ability to capture and store water in order to support ecologic function and base flow to streams. Previous research of these meadows simplified storage by assuming a homogeneous reservoir of constant thickness. These previous storage models were able to close the water mass balance, but it is unclear if these assumptions will be successful under future anthropogenic impacts, such as increased air temperature resulting in drier and longer growing seasons. Applying a geophysical approach, ground-penetrating radar was used at Tuolumne Meadows, CA, to qualitatively and quantitatively identify the controls on the volume of groundwater storage. From the geophysical results, a three-dimensional model of Tuolumne Meadows was created, which identified meadow thickness and bedrock geometry. This physical model was used in a suite of numerical models simulating high elevation meadows in order to quantify the volume of groundwater stored, with temporal and spatial variability. Modeling efforts tested both wet and dry water years in order to quantify the variability in the volume of groundwater storage for a range of aquifer properties. Each model was evaluated based on the seasonal depth to water in order to evaluate a particular scenario's ability to support ecological function and base flow. Depending on the simulated meadow's ability or inability to support its ecosystem, each representative meadow was categorized as successful or unsuccessful. Restoration techniques to increase active storage volume were suggested at unsuccessful meadows.

  1. Quantifying the BICEP2-Planck tension over gravitational waves.

    PubMed

    Smith, Kendrick M; Dvorkin, Cora; Boyle, Latham; Turok, Neil; Halpern, Mark; Hinshaw, Gary; Gold, Ben

    2014-07-18

    The recent BICEP2 measurement of B-mode polarization in the cosmic microwave background (r = 0.2, +0.07/-0.05), a possible indication of primordial gravity waves, appears to be in tension with the upper limit from WMAP (r < 0.13 at 95% C.L.) and Planck (r < 0.11 at 95% C.L.). We carefully quantify the level of tension and show that it is very significant (around 0.1% unlikely) when the observed deficit of large-scale temperature power is taken into account. We show that measurements of TE and EE power spectra in the near future will discriminate between the hypotheses that this tension is either a statistical fluke or a sign of new physics. We also discuss extensions of the standard cosmological model that relieve the tension and some novel ways to constrain them. PMID:25083631

  2. Quantifying Nitrogen Loss From Flooded Hawaiian Taro Fields

    NASA Astrophysics Data System (ADS)

    Deenik, J. L.; Penton, C. R.; Bruland, G. L.; Popp, B. N.; Engstrom, P.; Mueller, J. A.; Tiedje, J.

    2010-12-01

    In 2004 a field fertilization experiment showed that approximately 80% of the fertilizer nitrogen (N) added to flooded Hawaiian taro (Colocasia esculenta) fields could not be accounted for using classic N balance calculations. To quantify N loss through denitrification and anaerobic ammonium oxidation (anammox) pathways in these taro systems we utilized a slurry-based isotope pairing technique (IPT). Measured nitrification rates and porewater N profiles were also used to model ammonium and nitrate fluxes through the top 10 cm of soil. Quantitative PCR of nitrogen cycling functional genes was used to correlate porewater N dynamics with potential microbial activity. Rates of denitrification calculated using porewater profiles were compared to those obtained using the slurry method. Potential denitrification rates of surficial sediments obtained with the slurry method were found to drastically overestimate the calculated in-situ rates. The largest discrepancies were present in fields greater than one month after initial fertilization, reflecting a microbial community poised to denitrify the initial N pulse. Potential surficial nitrification rates varied between 1.3% of the slurry-measured denitrification potential in a heavily-fertilized site to 100% in an unfertilized site. Compared to the use of urea, fish bone meal fertilizer use resulted in decreased N loss through denitrification in the surface sediment, according to both porewater modeling and IPT measurements. In addition, sub-surface porewater profiles point to root-mediated coupled nitrification/denitrification as a potential N loss pathway that is not captured in surface-based incubations. Profile-based surface plus subsurface coupled nitrification/denitrification estimates were between 1.1 and 12.7 times denitrification estimates from the surface only. These results suggest that the use of a ‘classic’ isotope pairing technique that employs 15NO3- in fertilized agricultural systems can lead to a drastic

  3. Quantifying Biofilm in Porous Media Using Rock Physics Models

    NASA Astrophysics Data System (ADS)

    Alhadhrami, F. M.; Jaiswal, P.; Atekwana, E. A.

    2012-12-01

    Biofilm formation and growth in porous rocks can change their material properties, such as porosity and permeability, which in turn impact fluid flow. Finding a non-intrusive method to quantify biofilms and their byproducts in rocks is a key to understanding and modeling bioclogging in porous media. Previous geophysical investigations have documented that seismic techniques are sensitive to biofilm growth. These studies pointed to the fact that microbial growth and biofilm formation induce heterogeneity in the seismic properties. Currently there are no rock physics models to explain these observations and to provide quantitative interpretation of the seismic data. Our objectives are to develop a new class of rock physics models that incorporate microbial processes and their effect on seismic properties. Using the assumption that biofilms can grow within pore-spaces or as a layer coating the mineral grains, P-wave (Vp) and S-wave (Vs) velocity models were constructed using travel-time and waveform tomography techniques. We used generic rock physics schematics to represent our rock system numerically. We simulated the arrival times as well as waveforms by treating biofilms either as fluid (filling pore spaces) or as part of the matrix (coating sand grains). The preliminary results showed that there is a 1% change in Vp and 3% change in Vs when biofilms are represented as discrete structures in pore spaces. On the other hand, a 30% change in Vp and 100% change in Vs was observed when biofilm was represented as part of the matrix coating sand grains. Therefore, Vp and Vs changes are more rapid when biofilm grows as a grain-coating phase. The significant change in Vs associated with biofilms suggests that shear velocity can be used as a diagnostic tool for imaging zones of bioclogging in the subsurface. The results obtained from this study have significant implications for the study of the rheological properties of biofilms in geological media. Other applications include

  4. Quantifying the role of mitigation hills in reducing tsunami runup

    NASA Astrophysics Data System (ADS)

    Marras, S.; Suckale, J.; Lunghino, B.; Giraldo, F.; Hood, K. M.

    2015-12-01

    Coastal communities around the world are being encouraged to plant or restore vegetation along their shores for the purpose of mitigating tsunami damage. A common setup for these projects is to develop 'mitigation hills' - an ensemble of vegetated hills along the coast - instead of one continuous stretch of vegetation. The rationale behind a staggered-hill setup is to give tree roots more space to grow and deepen. From a fluid-dynamical point of view, however, staggered mitigation hills may have significant drawbacks such as diverting the flow into the low-lying areas of the park, which could entail strong currents in the narrow channels between the hills and lead to erosion of the hills from the sides. The goal of this study is to quantify how mitigation hills affect tsunami runup and to use numerical simulations to provide constraints on hill designs that reduce tsunami damage. Our computations of tsunami runup are based on the non-linear shallow water equations solved through a fully implicit, high-order, discontinuous Galerkin method. The adaptive computational grid is fitted to the hill topography to capture geometric effects accurately. A new dynamic subgrid-scale eddy viscosity originally designed for large eddy simulation of compressible flows is used for stabilization and to capture the obstacle-generated turbulence. We have carefully benchmarked our model in 1D and 2D against classical test cases. The included figure shows an example run of tsunami runup through coastal mitigation hills. In the interest of providing generalizable results, we perform a detailed scaling analysis of our model runs. We find that the protective value of mitigation hills depends sensitively on the non-linearity of the incoming wave and the height of the wave relative to the hills. Our simulations also suggest that the assumed initial condition is consequential, and we hence consider a range of incoming waves ranging from a simple soliton to a more realistic N

  5. The Physics of Equestrian Show Jumping

    NASA Astrophysics Data System (ADS)

    Stinner, Art

    2014-04-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this one, but when I searched the Internet for information and looked at YouTube presentations, I could only find simplistic references to Newton's laws and the conservation of mechanical energy principle. Nowhere could I find detailed calculations. On the other hand, there were several biomechanical articles with empirical reports of the results of kinetic and dynamic investigations of show jumping using high-speed digital cameras and force plates. They summarize their results in tables that give information about the motion of a horse jumping over high fences (1.40 m) and the magnitudes of the forces encountered when landing. However, they do not describe the physics of these results.
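
    A back-of-the-envelope example of the kind of calculation the article develops is sketched below: the vertical take-off speed needed to raise the centre of mass enough to clear a 1.40 m fence. The assumed 0.8 m rise of the centre of mass is an illustrative value, not a figure from the article.

        # Projectile estimate for clearing a 1.40 m fence: vertical take-off speed
        # needed to raise the centre of mass by delta_h, and the time of flight.
        import math

        g = 9.81          # m/s^2
        delta_h = 0.8     # assumed rise of the centre of mass over the fence (m)

        v_vertical = math.sqrt(2 * g * delta_h)   # from v^2 = 2 * g * delta_h
        t_flight = 2 * v_vertical / g             # up and down, level landing

        print(f"vertical take-off speed ~ {v_vertical:.1f} m/s, flight time ~ {t_flight:.2f} s")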

  6. Quantifying magma mixing with the Shannon entropy: Application to simulations and experiments

    NASA Astrophysics Data System (ADS)

    Perugini, D.; De Campos, C. P.; Petrelli, M.; Morgavi, D.; Vetere, F. P.; Dingwell, D. B.

    2015-11-01

    We introduce a new quantity to petrology, the Shannon entropy, as a tool for quantifying mixing as well as the rate of production of hybrid compositions in the mixing system. The Shannon entropy approach is applied to time series numerical simulations and high-temperature experiments performed with natural melts. We note that in both cases the Shannon entropy increases linearly during the initial stages of mixing and then saturates toward constant values. Furthermore, chemical elements with different mobilities display different rates of increase of the Shannon entropy. This indicates that the hybrid composition for the different elements is attained at different times generating a wide range of spatio-compositional domains which further increase the apparent complexity of the mixing process. Results from the application of the Shannon entropy analysis are compared with the concept of Relaxation of Concentration Variance (RCV), a measure recently introduced in petrology to quantify chemical exchanges during magma mixing. We derive a linear expression relating the change of concentration variance during mixing and the Shannon entropy. We show that the combined use of Shannon entropy and RCV provides the most complete information about the space and time complexity of magma mixing. As a consequence, detailed information about this fundamental petrogenetic and volcanic process can be gathered. In particular, the Shannon entropy can be used as complement to the RCV method to quantify the mobility of chemical elements in magma mixing systems, to obtain information about the rate of production of compositional heterogeneities, and to derive empirical relationships linking the rate of chemical exchanges between interacting magmas and mixing time.
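
    A minimal sketch of a Shannon entropy calculation of the kind described above, applied to a binned distribution of concentrations, is given below; the field values are synthetic and the binning choice is an assumption, not the authors' exact implementation.

        # Shannon entropy of a binned concentration distribution, H = -sum(p * ln p).
        import numpy as np

        def shannon_entropy(values, bins=32):
            counts, _ = np.histogram(values, bins=bins)
            p = counts[counts > 0] / counts.sum()
            return -np.sum(p * np.log(p))

        rng = np.random.default_rng(4)
        unmixed = np.concatenate([np.full(500, 0.0), np.full(500, 1.0)])   # two end-member melts
        partly_mixed = rng.beta(2, 2, size=1000)                           # intermediate (hybrid) compositions

        print(f"entropy before mixing: {shannon_entropy(unmixed):.2f}")
        print(f"entropy while mixing:  {shannon_entropy(partly_mixed):.2f}")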

  7. Quantifying individual performance in Cricket — A network analysis of batsmen and bowlers

    NASA Astrophysics Data System (ADS)

    Mukherjee, Satyam

    2014-01-01

    Quantifying individual performance in the game of Cricket is critical for team selection in International matches. The number of runs scored by batsmen and wickets taken by bowlers serves as a natural way of quantifying the performance of a cricketer. Traditionally the batsmen and bowlers are rated on their batting or bowling average respectively. However, in a game like Cricket, the manner in which one scores the runs or claims a wicket is always important. Scoring runs against a strong bowling line-up or delivering a brilliant performance against a team with a strong batting line-up deserves more credit. A player’s average is not able to capture this aspect of the game. In this paper we present a refined method to quantify the ‘quality’ of runs scored by a batsman or wickets taken by a bowler. We explore the application of Social Network Analysis (SNA) to rate the players in a team's performance. We generate a directed and weighted network of batsmen-bowlers using the player-vs-player information available for Test cricket and ODI cricket. Additionally we generate a network of batsmen and bowlers based on the dismissal record of batsmen in the history of cricket: Test (1877-2011) and ODI (1971-2011). Our results show that M. Muralitharan is the most successful bowler in the history of Cricket. Our approach could potentially be applied in domestic matches to judge a player’s performance, which in turn paves the way for a balanced team selection for International matches.
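
    One way to turn such a weighted batsman-bowler network into player ratings is a PageRank-style power iteration, sketched below on a toy dismissal network; this is in the spirit of the SNA approach, not necessarily the paper's exact rating scheme, and the network is hypothetical.

        # PageRank-style rating on a toy weighted batsman-bowler dismissal network.
        # w[i][j] = number of times player j dismissed batsman i, so credit flows
        # from batsmen to the bowlers who dismissed them.
        import numpy as np

        players = ["batsman_A", "batsman_B", "bowler_X", "bowler_Y"]
        w = np.array([
            [0, 0, 3, 1],    # batsman_A: dismissed 3 times by X, once by Y
            [0, 0, 1, 4],    # batsman_B: dismissed once by X, 4 times by Y
            [0, 0, 0, 0],
            [0, 0, 0, 0],
        ], dtype=float)

        n = len(players)
        out = w.sum(axis=1, keepdims=True)
        # Row-stochastic transition matrix; nodes with no outgoing edges spread evenly.
        p = np.where(out > 0, w / np.where(out > 0, out, 1), 1.0 / n)

        d = 0.85                              # damping factor
        rank = np.full(n, 1.0 / n)
        for _ in range(100):
            rank = (1 - d) / n + d * (p.T @ rank)

        for name, score in sorted(zip(players, rank), key=lambda t: -t[1]):
            print(f"{name}: {score:.3f}")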

  8. Model for quantifying photoelastic fringe degradation by imperfect retroreflective backings.

    PubMed

    Woolard, D; Hinders, M

    2000-05-01

    In any automated algorithm for interpreting photoelastic fringe patterns it is necessary to understand and quantify sources of error in the measurement system. We have been considering how the various components of the coating affect the photoelastic measurement, because this source of error has received fairly little attention in the literature. Because the reflective backing is not a perfect retroreflector, it does not preserve the polarization of light and thereby introduces noise into the measurement that depends on the angle of obliqueness and roughness of the reflective surface. This is of particular concern in resolving the stress tensor through the combination of thermoelasticity and photoelasticity where the components are sensitive to errors in the principal angle and difference of the principal stresses. We have developed a physical model that accounts for this and other sources of measurement error to be introduced in a systematic way so that the individual effects on the fringe patterns can be quantified. Simulations show altered photoelastic fringes when backing roughness and oblique incident angles are incorporated into the model. PMID:18345104

  9. A revised metric for quantifying body shape in vertebrates.

    PubMed

    Collar, David C; Reynaga, Crystal M; Ward, Andrea B; Mehta, Rita S

    2013-08-01

    Vertebrates exhibit tremendous diversity in body shape, though quantifying this variation has been challenging. In the past, researchers have used simplified metrics that either describe overall shape but reveal little about its anatomical basis or that characterize only a subset of the morphological features that contribute to shape variation. Here, we present a revised metric of body shape, the vertebrate shape index (VSI), which combines the four primary morphological components that lead to shape diversity in vertebrates: head shape, length of the second major body axis (depth or width), and shape of the precaudal and caudal regions of the vertebral column. We illustrate the usefulness of VSI on a data set of 194 species, primarily representing five major vertebrate clades: Actinopterygii, Lissamphibia, Squamata, Aves, and Mammalia. We quantify VSI diversity within each of these clades and, in the course of doing so, show how measurements of the morphological components of VSI can be obtained from radiographs, articulated skeletons, and cleared and stained specimens. We also demonstrate that head shape, secondary body axis, and vertebral characteristics are important independent contributors to body shape diversity, though their importance varies across vertebrate groups. Finally, we present a functional application of VSI to test a hypothesized relationship between body shape and the degree of axial bending associated with locomotor modes in ray-finned fishes. Altogether, our study highlights the promise VSI holds for identifying the morphological variation underlying body shape diversity as well as the selective factors driving shape evolution. PMID:23746908

  10. Using multiscale norms to quantify mixing and transport

    NASA Astrophysics Data System (ADS)

    Thiffeault, Jean-Luc

    2012-02-01

    Mixing is relevant to many areas of science and engineering, including the pharmaceutical and food industries, oceanography, atmospheric sciences and civil engineering. In all these situations one goal is to quantify and often then to improve the degree of homogenization of a substance being stirred, referred to as a passive scalar or tracer. A classical measure of mixing is the variance of the concentration of the scalar, which is the L2 norm of a mean-zero concentration field. Recently, other norms have been used to quantify mixing, in particular the mix-norm as well as negative Sobolev norms. These norms have the advantage that unlike variance they decay even in the absence of diffusion, and their decay corresponds to the flow being mixing in the sense of ergodic theory. General Sobolev norms weigh scalar gradients differently, and are known as multiscale norms for mixing. We review the applications of such norms to mixing and transport, and show how they can be used to optimize the stirring and mixing of a decaying passive scalar. We then review recent work on the less-studied case of a continuously replenished scalar field—the source-sink problem. In that case the flows that optimally reduce the norms are associated with transport rather than mixing: they push sources onto sinks, and vice versa.
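
    A minimal sketch of two of the norms discussed above, the variance (squared L2 norm) and one common negative Sobolev norm, computed spectrally for a mean-zero periodic scalar field, is given below; the field is synthetic.

        # Variance and H^{-1} norm of a mean-zero periodic field via the FFT:
        #   variance = sum |c_k|^2,   ||c||_{H^-1}^2 = sum |c_k|^2 / |k|^2.
        import numpy as np

        n = 128
        x = np.linspace(0, 2 * np.pi, n, endpoint=False)
        X, Y = np.meshgrid(x, x, indexing="ij")
        c = np.sin(3 * X) * np.cos(5 * Y)          # mean-zero test scalar field

        c_hat = np.fft.fft2(c) / n**2              # normalized Fourier coefficients
        k = np.fft.fftfreq(n, d=1.0 / n)           # integer wavenumbers
        KX, KY = np.meshgrid(k, k, indexing="ij")
        k2 = KX**2 + KY**2
        k2[0, 0] = np.inf                          # exclude the (zero) mean mode

        variance = np.sum(np.abs(c_hat) ** 2)                  # Parseval: spatial mean of c^2
        h_minus_1 = np.sqrt(np.sum(np.abs(c_hat) ** 2 / k2))   # negative Sobolev norm

        print(f"variance: {variance:.4f}, H^-1 norm: {h_minus_1:.4f}")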

  11. A framework for quantifying net benefits of alternative prognostic models.

    PubMed

    Rapsomaniki, Eleni; White, Ian R; Wood, Angela M; Thompson, Simon G

    2012-01-30

    New prognostic models are traditionally evaluated using measures of discrimination and risk reclassification, but these do not take full account of the clinical and health economic context. We propose a framework for comparing prognostic models by quantifying the public health impact (net benefit) of the treatment decisions they support, assuming a set of predetermined clinical treatment guidelines. The change in net benefit is more clinically interpretable than changes in traditional measures and can be used in full health economic evaluations of prognostic models used for screening and allocating risk reduction interventions. We extend previous work in this area by quantifying net benefits in life years, thus linking prognostic performance to health economic measures; by taking full account of the occurrence of events over time; and by considering estimation and cross-validation in a multiple-study setting. The method is illustrated in the context of cardiovascular disease risk prediction using an individual participant data meta-analysis. We estimate the number of cardiovascular-disease-free life years gained when statin treatment is allocated based on a risk prediction model with five established risk factors instead of a model with just age, gender and region. We explore methodological issues associated with the multistudy design and show that cost-effectiveness comparisons based on the proposed methodology are robust against a range of modelling assumptions, including adjusting for competing risks. PMID:21905066

  12. Automated Counting of Particles To Quantify Cleanliness

    NASA Technical Reports Server (NTRS)

    Rhode, James

    2005-01-01

    A machine vision system, similar to systems used in microbiological laboratories to count cultured microbes, has been proposed for quantifying the cleanliness of nominally precisely cleaned hardware by counting residual contaminant particles. The system would include a microscope equipped with an electronic camera and circuitry to digitize the camera output, a personal computer programmed with machine-vision and interface software, and digital storage media. A filter pad, through which had been aspirated solvent from rinsing the hardware in question, would be placed on the microscope stage. A high-resolution image of the filter pad would be recorded. The computer would analyze the image and present a histogram of sizes of particles on the filter. On the basis of the histogram and a measure of the desired level of cleanliness, the hardware would be accepted or rejected. If the hardware were accepted, the image would be saved, along with other information, as a quality record. If the hardware were rejected, the histogram and ancillary information would be recorded for analysis of trends. The software would perceive particles that are too large or too numerous to meet a specified particle-distribution profile. Anomalous particles or fibrous material would be flagged for inspection.
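
    The proposed analysis reduces to thresholding the filter-pad image, labeling connected particles, and histogramming their sizes against a cleanliness specification. The sketch below illustrates that pipeline with scipy; the threshold, bin edges and synthetic image are assumptions for illustration, not parameters of the proposed system.

    ```python
    import numpy as np
    from scipy import ndimage

    def particle_size_histogram(image, threshold, bin_edges):
        """Count particles in a grayscale filter-pad image and histogram their sizes.

        image      : 2D array of pixel intensities
        threshold  : intensity cutoff separating particle pixels from the pad
        bin_edges  : particle-area bins (in pixels) for the histogram
        """
        mask = image > threshold                       # candidate particle pixels
        labels, n_particles = ndimage.label(mask)      # connected components
        sizes = ndimage.sum(mask, labels, index=range(1, n_particles + 1))
        hist, _ = np.histogram(sizes, bins=bin_edges)
        return n_particles, hist

    # illustrative use with a synthetic image containing one bright "particle"
    rng = np.random.default_rng(0)
    img = rng.normal(0.1, 0.02, (512, 512))
    img[100:104, 200:204] += 1.0
    count, hist = particle_size_histogram(img, threshold=0.5, bin_edges=[1, 4, 16, 64, 256])
    print(count, hist)
    ```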

  13. Quantifying instantaneous performance in alpine ski racing.

    PubMed

    Federolf, Peter Andreas

    2012-01-01

    Alpine ski racing is a popular sport in many countries and a lot of research has gone into optimising athlete performance. Two factors influence athlete performance in a ski race: speed and the chosen path between the gates. However, to date there is no objective, quantitative method to determine instantaneous skiing performance that takes both of these factors into account. The purpose of this short communication was to define a variable quantifying instantaneous skiing performance and to study how this variable depended on the skiers' speed and on their chosen path. Instantaneous skiing performance was defined as time loss per elevation difference dt/dz, which depends on the skier's speed v(z), and the distance travelled per elevation difference ds/dz. Using kinematic data collected in an earlier study, it was evaluated how these variables can be used to assess the individual performance of six ski racers in two slalom turns. The performance analysis conducted in this study might be a useful tool not only for athletes and coaches preparing for competition, but also for sports scientists investigating skiing techniques or engineers developing and testing skiing equipment. PMID:22620279
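
    The definition above can be written as dt/dz = (ds/dz) / v(z), so instantaneous performance follows directly from a sampled trajectory. A minimal finite-difference sketch is shown below; the trajectory samples are synthetic and the function name is a stand-in, not code from the study.

    ```python
    import numpy as np

    def instantaneous_performance(x, y, z, t):
        """Time spent per elevation difference, dt/dz, along a sampled trajectory.

        dt/dz = (ds/dz) / v, where ds is path length travelled and v = ds/dt is speed.
        dz is negative downhill, so |dt/dz| is the time spent per metre of drop.
        """
        ds = np.sqrt(np.diff(x)**2 + np.diff(y)**2 + np.diff(z)**2)  # path increments
        dz = np.diff(z)                                               # elevation increments
        dt = np.diff(t)
        v = ds / dt                                                   # speed along the path
        return dt / dz, ds / dz, v

    # illustrative trajectory samples (not measurement data)
    t = np.linspace(0, 2, 50)
    x = 10 * t; y = np.sin(t); z = 50 - 5 * t
    dtdz, dsdz, v = instantaneous_performance(x, y, z, t)
    ```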

  14. Quantifying and Mapping Global Data Poverty

    PubMed Central

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this ‘proof of concept’ study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinfomatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  15. Quantifying the transmission potential of pandemic influenza

    NASA Astrophysics Data System (ADS)

    Chowell, Gerardo; Nishiura, Hiroshi

    2008-03-01

    This article reviews quantitative methods to estimate the basic reproduction number of pandemic influenza, a key threshold quantity to help determine the intensity of interventions required to control the disease. Although it is difficult to assess the transmission potential of a probable future pandemic, historical epidemiologic data is readily available from previous pandemics, and as a reference quantity for future pandemic planning, mathematical and statistical analyses of historical data are crucial. In particular, because many historical records tend to document only the temporal distribution of cases or deaths (i.e. epidemic curve), our review focuses on methods to maximize the utility of time-evolution data and to clarify the detailed mechanisms of the spread of influenza. First, we highlight structured epidemic models and their parameter estimation method which can quantify the detailed disease dynamics including those we cannot observe directly. Duration-structured epidemic systems are subsequently presented, offering firm understanding of the definition of the basic and effective reproduction numbers. When the initial growth phase of an epidemic is investigated, the distribution of the generation time is key statistical information to appropriately estimate the transmission potential using the intrinsic growth rate. Applications of stochastic processes are also highlighted to estimate the transmission potential using similar data. Critically important characteristics of influenza data are subsequently summarized, followed by our conclusions to suggest potential future methodological improvements.
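
    One of the standard estimators alluded to here links the basic reproduction number to the intrinsic growth rate r and the generation-time distribution g via the Lotka-Euler relation 1/R0 = integral of exp(-r a) g(a) da. A numerical sketch follows, with a purely illustrative gamma generation-time distribution and growth rate, not estimates for any historical pandemic.

    ```python
    import numpy as np
    from scipy import stats

    def reproduction_number(r, gen_time_dist, a_max=60.0, n=6000):
        """Estimate R0 from the intrinsic growth rate r via the Lotka-Euler
        relation 1/R0 = integral exp(-r a) g(a) da, with g the generation-time
        density (truncated and renormalized on [0, a_max])."""
        a = np.linspace(0, a_max, n)
        g = gen_time_dist.pdf(a)
        g = g / np.trapz(g, a)
        return 1.0 / np.trapz(np.exp(-r * a) * g, a)

    # illustrative numbers: mean generation time ~3 days, growth rate 0.2 per day
    gen = stats.gamma(a=3.0, scale=1.0)          # shape 3, scale 1 day
    print(reproduction_number(0.2, gen))         # analytic check: (1 + 0.2*1)**3 = 1.728
    ```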

  16. Smartphone quantifies Salmonella from paper microfluidics.

    PubMed

    Park, Tu San; Li, Wenyue; McCracken, Katherine E; Yoon, Jeong-Yeol

    2013-12-21

    Smartphone-based optical detection is a potentially easy-to-use, handheld, true point-of-care diagnostic tool for the early and rapid detection of pathogens. Paper microfluidics is a low-cost, field-deployable, and easy-to-use alternative to conventional microfluidic devices. Most paper-based microfluidic assays utilize dyes or enzyme-substrate binding, while bacterial detection on paper microfluidics is rare. We demonstrate a novel application of smartphone-based detection of Salmonella on paper microfluidics. Each paper microfluidic channel was pre-loaded with anti-Salmonella Typhimurium and anti-Escherichia coli conjugated submicroparticles. Dipping the paper microfluidic device into the Salmonella solutions caused the antibody-conjugated particles, still confined within the paper fibers, to immunoagglutinate. The extent of immunoagglutination was quantified by evaluating Mie scattering from the digital images taken at an optimized angle and distance with a smartphone. A smartphone application was designed and programmed to allow the user to position the smartphone at an optimized angle and distance from the paper microfluidic device, and a simple image processing algorithm was implemented to calculate and display the bacterial concentration on the smartphone. The detection limit was single-cell-level and the total assay time was less than one minute. PMID:24162816

  17. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with long wave ultraviolet (UV) radiation and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
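
    The residue-cover calculation described here is a pixel-fraction threshold on the fluorescence image; a minimal sketch is shown below, with a synthetic image and an assumed threshold rather than the study's calibrated values.

    ```python
    import numpy as np

    def residue_cover_fraction(fluorescence_image, threshold):
        """Fraction of pixels whose fluorescence exceeds a threshold.

        Soil pixels have low gray levels under UV excitation, so pixels above
        the threshold are counted as crop residue.
        """
        pixels = np.asarray(fluorescence_image)
        return np.count_nonzero(pixels > threshold) / pixels.size

    # illustrative 8-bit image and threshold (assumed values, not from the study)
    rng = np.random.default_rng(1)
    img = rng.integers(0, 60, size=(480, 640))       # mostly dark soil background
    img[:100, :200] = 200                            # a bright residue patch
    print(residue_cover_fraction(img, threshold=100))
    ```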

  18. Data Used in Quantified Reliability Models

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Kleinhammer, Roger K.; Kahn, C. J.

    2014-01-01

    Data are the crux of developing quantitative risk and reliability models; without data there is no quantification. Finding and identifying reliability data or failure numbers to quantify fault tree models during conceptual and design phases is often the quagmire that precludes early decision makers' consideration of potential risk drivers that will influence design. The analyst tasked with addressing system or product reliability depends on the availability of data. But where does that data come from, and what does it really apply to? Commercial industries, government agencies, and other international sources might have data similar to what you are looking for. In general, internal and external technical reports and data based on similar and dissimilar equipment are often the first and only places checked. A common philosophy is "I have a number - that is good enough". But is it? Have you ever considered the difference in reported data from various federal datasets and technical reports when compared to similar sources from national and/or international datasets? Just how well does your data compare? Understanding how the reported data were derived, and interpreting the information and details associated with the data, is as important as the data itself.

  19. Quantifying Climate Risks for Urban Environments

    NASA Astrophysics Data System (ADS)

    Hayhoe, K.; Stoner, A. K.; Dickson, L.

    2013-12-01

    High-density urban areas are both uniquely vulnerable and uniquely able to adapt to climate change. Enabling this potential requires identifying the vulnerabilities, however, and these depend strongly on location: the challenges climate change poses for a southern coastal city such as Miami, for example, have little in common with those facing a northern inland city such as Chicago. By combining local knowledge with climate science, risk assessment, engineering analysis, and adaptation planning, it is possible to develop relevant climate information that feeds directly into vulnerability assessment and long-term planning. Key steps include: developing climate projections tagged to long-term weather stations within the city itself that reflect local characteristics; mining local knowledge to identify existing vulnerabilities to, and costs of, weather and climate extremes; understanding how future projections can be integrated into the planning process; and identifying ways in which the city may adapt. Using examples from our work in the cities of Boston, Chicago, and Mobile we illustrate the practical application of this approach to quantify the impacts of climate change on these cities and identify robust adaptation options as diverse as reducing the urban heat island effect, protecting essential infrastructure, changing design standards and building codes, developing robust emergency management plans, and rebuilding storm sewer systems.

  20. Quantifying the evolutionary dynamics of language

    PubMed Central

    Lieberman, Erez; Michel, Jean-Baptiste; Jackson, Joe; Tang, Tina; Nowak, Martin A.

    2008-01-01

    Human language is based on grammatical rules1–4. Cultural evolution allows these rules to change over time5. Rules compete with each other: as new rules rise to prominence, old ones die away. To quantify the dynamics of language evolution, we studied the regularization of English verbs over the last 1200 years. Although an elaborate system of productive conjugations existed in English’s proto-Germanic ancestor, modern English uses the dental suffix, -ed, to signify past tense6. Here, we describe the emergence of this linguistic rule amidst the evolutionary decay of its exceptions, known to us as irregular verbs. We have generated a dataset of verbs whose conjugations have been evolving for over a millennium, tracking inflectional changes to 177 Old English irregulars. Of these irregulars, 145 remained irregular in Middle English and 98 are still irregular today. We study how the rate of regularization depends on the frequency of word usage. The half-life of an irregular verb scales as the square root of its usage frequency: a verb that is 100 times less frequent regularizes 10 times as fast. Our study provides a quantitative analysis of the regularization process by which ancestral forms gradually yield to an emerging linguistic rule. PMID:17928859
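
    The reported scaling can be stated compactly: if the half-life of an irregular verb grows as the square root of its usage frequency, then a verb used a factor f less often regularizes a factor sqrt(f) faster. A one-line sketch of that arithmetic:

    ```python
    import numpy as np

    def regularization_speedup(freq_ratio):
        """If half-life scales as sqrt(frequency), a verb `freq_ratio` times less
        frequent regularizes sqrt(freq_ratio) times as fast."""
        return np.sqrt(freq_ratio)

    print(regularization_speedup(100))   # -> 10.0, matching the example in the abstract
    ```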

  1. Choosing appropriate techniques for quantifying groundwater recharge

    USGS Publications Warehouse

    Scanlon, B.R.; Healy, R.W.; Cook, P.G.

    2002-01-01

    Various techniques are available to quantify recharge; however, choosing appropriate techniques is often difficult. Important considerations in choosing a technique include space/time scales, range, and reliability of recharge estimates based on different techniques; other factors may limit the application of particular techniques. The goal of the recharge study is important because it may dictate the required space/time scales of the recharge estimates. Typical study goals include water-resource evaluation, which requires information on recharge over large spatial scales and on decadal time scales; and evaluation of aquifer vulnerability to contamination, which requires detailed information on spatial variability and preferential flow. The range of recharge rates that can be estimated using different approaches should be matched to expected recharge rates at a site. The reliability of recharge estimates using different techniques is variable. Techniques based on surface-water and unsaturated-zone data provide estimates of potential recharge, whereas those based on groundwater data generally provide estimates of actual recharge. Uncertainties in each approach to estimating recharge underscore the need for application of multiple techniques to increase reliability of recharge estimates.

  2. Quantifying and Mapping Global Data Poverty.

    PubMed

    Leidig, Mathias; Teeuw, Richard M

    2015-01-01

    Digital information technologies, such as the Internet, mobile phones and social media, provide vast amounts of data for decision-making and resource management. However, access to these technologies, as well as their associated software and training materials, is not evenly distributed: since the 1990s there has been concern about a "Digital Divide" between the data-rich and the data-poor. We present an innovative metric for evaluating international variations in access to digital data: the Data Poverty Index (DPI). The DPI is based on Internet speeds, numbers of computer owners and Internet users, mobile phone ownership and network coverage, as well as provision of higher education. The datasets used to produce the DPI are provided annually for almost all the countries of the world and can be freely downloaded. The index that we present in this 'proof of concept' study is the first to quantify and visualise the problem of global data poverty, using the most recent datasets, for 2013. The effects of severe data poverty, particularly limited access to geoinformatic data, free software and online training materials, are discussed in the context of sustainable development and disaster risk reduction. The DPI highlights countries where support is needed for improving access to the Internet and for the provision of training in geoinfomatics. We conclude that the DPI is of value as a potential metric for monitoring the Sustainable Development Goals of the Sendai Framework for Disaster Risk Reduction. PMID:26560884

  3. Quantifying prosthetic gait deviation using simple outcome measures

    PubMed Central

    Kark, Lauren; Odell, Ross; McIntosh, Andrew S; Simmons, Anne

    2016-01-01

    AIM: To develop a subset of simple outcome measures to quantify prosthetic gait deviation without needing three-dimensional gait analysis (3DGA). METHODS: Eight unilateral, transfemoral amputees and 12 unilateral, transtibial amputees were recruited. Twenty-eight able-bodied controls were recruited. All participants underwent 3DGA, the timed-up-and-go test and the six-minute walk test (6MWT). The lower-limb amputees also completed the Prosthesis Evaluation Questionnaire. Results from 3DGA were summarised using the gait deviation index (GDI), which was subsequently regressed, using stepwise regression, against the other measures. RESULTS: Step-length (SL), self-selected walking speed (SSWS) and the distance walked during the 6MWT (6MWD) were significantly correlated with GDI. The 6MWD was the strongest single predictor of the GDI, followed by SL and SSWS. The predictive ability of the regression equations was improved following inclusion of self-report data related to mobility and prosthetic utility. CONCLUSION: This study offers a practicable means of quantifying kinematic deviation without the need to conduct complete 3DGA. PMID:27335814
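
    A regression of the kind described, relating the gait deviation index to the simple outcome measures, can be sketched with ordinary least squares; the data frame below is hypothetical and its values are illustrative stand-ins for the study's measurements. A full stepwise procedure would additionally add or drop predictors by a significance or information criterion.

    ```python
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical per-participant summary measures (illustrative values only).
    df = pd.DataFrame({
        "GDI":  [72.1, 85.3, 90.4, 78.8, 95.2, 88.0],
        "6MWD": [310., 420., 455., 360., 510., 470.],   # six-minute walk distance (m)
        "SL":   [0.48, 0.62, 0.66, 0.55, 0.71, 0.68],   # step length (m)
        "SSWS": [0.85, 1.10, 1.18, 0.95, 1.30, 1.22],   # self-selected walking speed (m/s)
    })

    # Ordinary least squares with the three predictors reported as correlated with GDI.
    X = sm.add_constant(df[["6MWD", "SL", "SSWS"]])
    model = sm.OLS(df["GDI"], X).fit()
    print(model.summary())
    ```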

  4. Quantifying the Decanalizing Effects of Spontaneous Mutations in Rhabditid Nematodes

    PubMed Central

    Baer, Charles F.

    2013-01-01

    The evolution of canalization, the robustness of the phenotype to environmental or genetic perturbation, has attracted considerable recent interest. A key step toward understanding the evolution of any phenotype is characterizing the rate at which mutation introduces genetic variation for the trait (the mutational variance, VM) and the average directional effects of mutations on the trait mean (ΔM). In this study, the mutational parameters for canalization of productivity and body volume are quantified in two sets of mutation accumulation lines of nematodes in the genus Caenorhabditis and are compared with the mutational parameters for the traits themselves. Four results emerge: (1) spontaneous mutations consistently decanalize the phenotype; (2) the mutational parameters for decanalization, VM (quantified as mutational heritability) and ΔM, are of the same order of magnitude as the same parameters for the traits themselves; (3) the mutational parameters for canalization are roughly correlated with the parameters for the traits themselves across taxa; and (4) there is no evidence that residual segregating overdominant loci contribute to the decay of canalization. These results suggest that canalization is readily evolvable and that any evolutionary factor that causes mutations to accumulate will, on average, decanalize the phenotype. PMID:18582167

  5. Quantifying weld solidification cracking susceptibility using the varestraint test

    SciTech Connect

    Lin, W.; Lippold, J.C.; Nelson, T.W.

    1994-12-31

    Since the introduction of the original Varestraint concept in the 1960s, the longitudinal- and transverse-type Varestraint tests have become the most widely utilized techniques for quantifying weld solidification cracking susceptibility. Conventionally, cracking susceptibility is assessed by the threshold strain to cause cracking and by the degree of cracking as quantified by total crack length or maximum crack length. Although material-specific quantifications such as the brittle temperature range (BTR) have been proposed for the transverse-type test, similar quantifications have not been developed for the longitudinal-type test. Various alloys, including 304, 310, 316L, A-286, AL6XN, 20Cb-3, RA253, and RA333 stainless steels; 625, 690, and 718 nickel-base alloys; and 2090, 2219, 5083, and 6061 aluminum alloys, were investigated using both longitudinal- and transverse-type Varestraint tests. Tests were performed using a newly developed, computer-controlled Varestraint unit equipped with a 3-axis movable torch, a spring-loaded fixture, and a servo-hydraulic loading system. Extensive cracking was observed in the fusion zone emanating radially from the solid-liquid interface toward the fusion boundary in the longitudinal-type test, while weld centerline cracking was prevalent in the transverse-type test. The theoretical basis for the formation of the CSR is that liquation-related cracking only occurs in a certain temperature range known as the BTR. The detailed procedure for developing the CSR in the fusion zone is described and discussed. This approach allows a weldability database to be created and enables the comparison of results from different laboratories using different test techniques.

  6. Quantifying VOC emissions for the strategic petroleum reserve.

    SciTech Connect

    Knowlton, Robert G.; Lord, David L.

    2013-06-01

    A very important aspect of the Department of Energy's (DOE's) Strategic Petroleum Reserve (SPR) program is regulatory compliance. One of the regulatory compliance issues deals with limiting the amount of volatile organic compounds (VOCs) that are emitted into the atmosphere from brine wastes when they are discharged to brine holding ponds. The US Environmental Protection Agency (USEPA) has set limits on the amount of VOCs that can be discharged to the atmosphere. Several attempts have been made to quantify the VOC emissions associated with the brine ponds going back to the late 1970s. There are potential issues associated with each of these quantification efforts. Two efforts were made to quantify VOC emissions by analyzing VOC content of brine samples obtained from wells. Efforts to measure air concentrations were mentioned in historical reports but no data have been located to confirm these assertions. A modeling effort was also performed to quantify the VOC emissions. More recently, in 2011-2013, additional brine sampling was performed to update the VOC emissions estimate. An analysis of the statistical confidence in these results is presented here. Arguably, there are uncertainties associated with each of these efforts. The analysis herein indicates that the upper confidence limit in VOC emissions based on recent brine sampling is very close to the 0.42 ton/MMB limit used historically on the project. Refining this estimate would require considerable investment in additional sampling, analysis, and monitoring. An analysis of the VOC emissions at each site suggests that additional discharges could be made and stay within current regulatory limits.

  7. Quantifying Floods of a Flood Regime in Space and Time

    NASA Astrophysics Data System (ADS)

    Whipple, A. A.; Fleenor, W. E.; Viers, J. H.

    2015-12-01

    Interaction between a flood hydrograph and floodplain topography results in spatially and temporally variable conditions important for ecosystem process and function. Individual floods whose frequency and dimensionality comprise a river's flood regime contribute to that variability and in aggregate are important drivers of floodplain ecosystems. Across the globe, water management actions, land use changes as well as hydroclimatic change associated with climate change have profoundly affected natural flood regimes and their expression within the floodplain landscape. Homogenization of riverscapes has degraded once highly diverse and productive ecosystems. Improved understanding of the range of flood conditions and spatial variability within floodplains, or hydrospatial conditions, is needed to improve water and land management and restoration activities to support the variable conditions under which species adapted. This research quantifies the flood regime of a floodplain site undergoing restoration through levee breaching along the lower Cosumnes River of California. One of the few lowland alluvial rivers of California with an unregulated hydrograph and regular floodplain connectivity, the Cosumnes River provides a useful test-bed for exploring river-floodplain interaction. Representative floods of the Cosumnes River are selected from previously-established flood types comprising the flood regime and applied within a 2D hydrodynamic model representing the floodplain restoration site. Model output is analyzed and synthesized to quantify and compare conditions in space and time, using metrics such as depth and velocity. This research establishes methods for quantifying a flood regime's floodplain inundation characteristics, illustrates the role of flow variability and landscape complexity in producing heterogeneous floodplain conditions, and suggests important implications for managing more ecologically functional floodplains.

  8. Quantifying and transferring contextual information in object detection.

    PubMed

    Zheng, Wei-Shi; Gong, Shaogang; Xiang, Tao

    2012-04-01

    Context is critical for reducing the uncertainty in object detection. However, context modeling is challenging because there are often many different types of contextual information coexisting with different degrees of relevance to the detection of target object(s) in different images. It is therefore crucial to devise a context model to automatically quantify and select the most effective contextual information for assisting in detecting the target object. Nevertheless, the diversity of contextual information means that learning a robust context model requires a larger training set than learning the target object appearance model, which may not be available in practice. In this work, a novel context modeling framework is proposed without the need for any prior scene segmentation or context annotation. We formulate a polar geometric context descriptor for representing multiple types of contextual information. In order to quantify context, we propose a new maximum margin context (MMC) model to evaluate and measure the usefulness of contextual information directly and explicitly through a discriminant context inference method. Furthermore, to address the problem of context learning with limited data, we exploit the idea of transfer learning based on the observation that although two categories of objects can have very different visual appearance, there can be similarity in their context and/or the way contextual information helps to distinguish target objects from nontarget objects. To that end, two novel context transfer learning models are proposed which utilize training samples from source object classes to improve the learning of the context model for a target object class based on a joint maximum margin learning framework. Experiments are carried out on PASCAL VOC2005 and VOC2007 data sets, a luggage detection data set extracted from the i-LIDS data set, and a vehicle detection data set extracted from outdoor surveillance footage. Our results validate the

  9. Quantifying compositional impacts of ambient aerosol on cloud droplet formation

    NASA Astrophysics Data System (ADS)

    Lance, Sara

    It has been historically assumed that most of the uncertainty associated with the aerosol indirect effect on climate can be attributed to the unpredictability of updrafts. In Chapter 1, we analyze the sensitivity of cloud droplet number density to realistic variations in aerosol chemical properties and to variable updraft velocities using a 1-dimensional cloud parcel model in three important environmental cases (continental, polluted and remote marine). The results suggest that aerosol chemical variability may be as important to the aerosol indirect effect as the effect of unresolved cloud dynamics, especially in polluted environments. We next used a continuous flow streamwise thermal gradient Cloud Condensation Nuclei counter (CCNc) to study the water-uptake properties of the ambient aerosol, by exposing an aerosol sample to a controlled water vapor supersaturation and counting the resulting number of droplets. In Chapter 2, we modeled and experimentally characterized the heat transfer properties and droplet growth within the CCNc. Chapter 3 describes results from the MIRAGE field campaign, in which the CCNc and a Hygroscopicity Tandem Differential Mobility Analyzer (HTDMA) were deployed at a ground-based site during March, 2006. Size-resolved CCN activation spectra and growth factor distributions of the ambient aerosol in Mexico City were obtained, and an analytical technique was developed to quantify a probability distribution of solute volume fractions for the CCN in addition to the aerosol mixing-state. The CCN were shown to be much less CCN active than ammonium sulfate, with water uptake properties more consistent with low molecular weight organic compounds. The pollution outflow from Mexico City was shown to have CCN with an even lower fraction of soluble material. "Chemical Closure" was attained for the CCN, by comparing the inferred solute volume fraction with that from direct chemical measurements. A clear diurnal pattern was observed for the CCN solute

  10. Constraining Habitable Environments on Mars by Quantifying Available Geochemical Energy

    NASA Astrophysics Data System (ADS)

    Tierney, L. L.; Jakosky, B. M.

    2009-12-01

    The search for life on Mars includes the availability of liquid water, access to biogenic elements and an energy source. In the past, when water was more abundant on Mars, a source of energy may have been the limiting factor for potential life. Energy, either from photosynthesis or chemosynthesis, is required in order to drive metabolism. Potential martian organisms most likely took advantage of chemosynthetic reactions at and below the surface. Terrestrial chemolithoautotrophs, for example, thrive off of chemical disequilibrium that exists in many environments and use inorganic redox (reduction-oxidation) reactions to drive metabolism and create cellular biomass. The chemical disequilibrium of six different martian environments were modeled in this study and analyzed incorporating a range of water and rock compositions, water:rock mass ratios, atmospheric fugacities, pH, and temperatures. All of these models can be applied to specific sites on Mars including environments similar to Meridiani Planum and Gusev Crater. Both a mass transfer geochemical model of groundwater-basalt interaction and a mixing model of groundwater-hydrothermal fluid interaction were used to estimate hypothetical martian fluid compositions that result from mixing over the entire reaction path. By determining the overall Gibbs free energy yields for redox reactions in the H-O-C-S-Fe-Mn system, the amount of geochemical energy that was available for potential chemolithoautotrophic microorganisms was quantified and the amount of biomass that could have been sustained was estimated. The quantity of biomass that can be formed and supported within a system depends on energy availability, thus sites that have higher levels and fluxes of energy have greater potential to support life. Results show that iron- and sulfur-oxidation reactions would have been the most favorable redox reactions in aqueous systems where groundwater and rock interacted at or near the surface. These types of reactions could
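
    The energy quantification described here rests on the standard relation ΔG = ΔG° + RT ln Q evaluated along the reaction path. A minimal sketch follows, with placeholder stoichiometry, activities and ΔG° rather than the modelled martian fluid compositions.

    ```python
    import numpy as np

    R = 8.314e-3  # gas constant, kJ mol^-1 K^-1

    def gibbs_energy(delta_g0, T, activities, stoich):
        """Overall Gibbs free energy of a reaction, dG = dG0 + R*T*ln(Q),
        where Q is the activity product with stoichiometric coefficients
        (positive for products, negative for reactants)."""
        ln_q = sum(nu * np.log(a) for a, nu in zip(activities, stoich))
        return delta_g0 + R * T * ln_q

    # Illustrative numbers only (not modelled martian fluid compositions):
    # a hypothetical reaction with dG0 = -100 kJ/mol at 278 K and assumed activities.
    print(gibbs_energy(-100.0, 278.0, activities=[1e-6, 1e-3, 1e-2], stoich=[-1, -2, +1]))
    ```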

  11. Species determination - Can we detect and quantify meat adulteration?

    PubMed

    Ballin, Nicolai Z; Vogensen, Finn K; Karlsson, Anders H

    2009-10-01

    Proper labelling of meat products is important to support fair trade and to enable consumers to make informed choices. However, it has been shown that labelling of species, expressed as weight/weight (w/w), on meat product labels was incorrect in more than 20% of cases. Enforcement of labelling regulations requires reliable analytical methods. Analytical methods are often based on protein or DNA measurements, which are not directly comparable to labelled meat expressed as w/w. This review discusses a wide range of analytical methods with focus on their ability to quantify and their limits of detection (LOD). In particular, problems associated with a correlation from quantitative DNA based results to meat content (w/w) are discussed. The hope is to make researchers aware of the problems of expressing DNA results as meat content (w/w) in order to find better alternatives. One alternative is to express DNA results as genome/genome equivalents. PMID:20416768

  12. Understanding and quantifying foliar temperature acclimation for Earth System Models

    NASA Astrophysics Data System (ADS)

    Smith, N. G.; Dukes, J.

    2015-12-01

    Photosynthesis and respiration on land are the two largest carbon fluxes between the atmosphere and Earth's surface. The parameterization of these processes represent major uncertainties in the terrestrial component of the Earth System Models used to project future climate change. Research has shown that much of this uncertainty is due to the parameterization of the temperature responses of leaf photosynthesis and autotrophic respiration, which are typically based on short-term empirical responses. Here, we show that including longer-term responses to temperature, such as temperature acclimation, can help to reduce this uncertainty and improve model performance, leading to drastic changes in future land-atmosphere carbon feedbacks across multiple models. However, these acclimation formulations have many flaws, including an underrepresentation of many important global flora. In addition, these parameterizations were done using multiple studies that employed differing methodology. As such, we used a consistent methodology to quantify the short- and long-term temperature responses of maximum Rubisco carboxylation (Vcmax), maximum rate of Ribulose-1,5-bisphosphate regeneration (Jmax), and dark respiration (Rd) in multiple species representing each of the plant functional types used in global-scale land surface models. Short-term temperature responses of each process were measured in individuals acclimated for 7 days at one of 5 temperatures (15-35°C). The comparison of short-term curves in plants acclimated to different temperatures was used to evaluate long-term responses. Our analyses indicated that the instantaneous response of each parameter was highly sensitive to the temperature at which they were acclimated. However, we found that this sensitivity was larger in species whose leaves typically experience a greater range of temperatures over the course of their lifespan. These data indicate that models using previous acclimation formulations are likely incorrectly

  13. Quantifying methane flux from lake sediments using multibeam sonar

    NASA Astrophysics Data System (ADS)

    Scandella, B.; Urban, P.; Delwiche, K.; Greinert, J.; Hemond, H.; Ruppel, C. D.; Juanes, R.

    2013-12-01

    Methane is a potent greenhouse gas, and the production and emission of methane from sediments in wetlands, lakes and rivers both contributes to and may be exacerbated by climate change. In some of these shallow-water settings, methane fluxes may be largely controlled by episodic venting that can be triggered by drops in hydrostatic pressure. Even with better constraints on the mechanisms for gas release, quantifying these fluxes has remained a challenge due to rapid spatiotemporal changes in the patterns of bubble emissions from the sediments. The research presented here uses a fixed-location Imagenex DeltaT 837B multibeam sonar to estimate methane-venting fluxes from organic-rich lake sediments over a large area (~400 m2) and over a multi-season deployment period with unprecedented spatial and temporal resolution. Simpler, single-beam sonar systems have been used in the past to estimate bubble fluxes in a variety of settings. Here we extend this methodology to a multibeam system by means of: (1) detailed calibration of the sonar signal against imposed bubble streams, and (2) validation against an in situ independent record of gas flux captured by overlying bubble traps. The calibrated sonar signals then yield estimates of the methane flux with high spatial resolution (~1 m) and temporal frequency (6 Hz) from a portion of the deepwater basin of Upper Mystic Lake, MA, USA, a temperate eutrophic kettle lake. These results in turn inform mathematical models of methane transport and release from the sediments, which reproduce with high fidelity the ebullitive response to hydrostatic pressure variations. In addition, the detailed information about spatial variability of methane flux derived from sonar records is used to estimate the uncertainty associated with upscaling flux measurements from bubble traps to the scale of the sonar observation area. Taken together, these multibeam sonar measurements and analysis provide a novel quantitative approach for the assessment of

  14. Quantifiers are incrementally interpreted in context, more than less

    PubMed Central

    Urbach, Thomas P.; DeLong, Katherine A.; Kutas, Marta

    2015-01-01

    Language interpretation is often assumed to be incremental. However, our studies of quantifier expressions in isolated sentences found N400 event-related brain potential (ERP) evidence for partial but not full immediate quantifier interpretation (Urbach & Kutas, 2010). Here we tested similar quantifier expressions in pragmatically supporting discourse contexts (Alex was an unusual toddler. Most/Few kids prefer sweets/vegetables…) while participants made plausibility judgments (Experiment 1) or read for comprehension (Experiment 2). Control Experiments 3A (plausibility) and 3B (comprehension) removed the discourse contexts. Quantifiers always modulated typical and/or atypical word N400 amplitudes. However, only the real-time N400 effects in Experiment 2 mirrored the offline quantifier by typicality crossover interaction effects for plausibility ratings and cloze probabilities. We conclude that quantifier expressions can be interpreted fully and immediately, though pragmatic and task variables appear to impact the speed and/or depth of quantifier interpretation. PMID:26005285

  15. Quantifying fluvial bedrock erosion using repeat terrestrial Lidar

    NASA Astrophysics Data System (ADS)

    Cook, Kristen

    2013-04-01

    The Da'an River Gorge in western Taiwan provides a unique opportunity to observe the formation and evolution of a natural bedrock gorge. The 1.2 km long and up to 20 m deep gorge has formed since 1999 in response to uplift of the riverbed during the Chi-Chi earthquake. The extremely rapid pace of erosion enables us to observe both downcutting and channel widening over short time periods. We have monitored the evolution of the gorge since 2009 using repeat RTK GPS surveys and terrestrial Lidar scans. GPS surveys of the channel profile are conducted frequently, with 24 surveys to date, while Lidar scans are conducted after major floods, or after 5-9 months without a flood, for a total of 8 scans to date. The Lidar data are most useful for recording erosion of channel walls, which is quite episodic and highly variable along the channel. By quantifying the distribution of wall erosion in space and time, we can improve our understanding of channel widening processes and of the development of the channel planform, particularly the growth of bends. During the summer of 2012, the Da'an catchment experienced two large storm events, a meiyu (plum rain) event on June 10-13 that brought 800 mm of rain and a typhoon on August 1-3 that brought 650 mm of rain. The resulting floods had significant geomorphic effects on the Da'an gorge, including up to tens of meters of erosion in some sections of the gorge walls. We quantify these changes using Lidar surveys conducted on June 7, July 3, and August 30. Channel wall collapses also occur in the absence of large floods, and we use scans from August 23, 2011 and June 7, 2012 to quantify erosion during a period that included a number of small floods, but no large ones. This allows us to compare the impact of 9 months of normal conditions to the impact of short-duration extreme events. The observed variability of erosion in space and time highlights the need for 3D techniques such as terrestrial Lidar to properly quantify erosion in this

  16. Quantifying unsteadiness and dynamics of pulsatory volcanic activity

    NASA Astrophysics Data System (ADS)

    Dominguez, L.; Pioli, L.; Bonadonna, C.; Connor, C. B.; Andronico, D.; Harris, A. J. L.; Ripepe, M.

    2016-06-01

    , can also be described based on the log-logistic parameter s, which is found to increase from regular mafic systems to highly variable silicic systems. These results suggest that the periodicity of explosions, quantified in terms of the distribution of repose times, can give fundamental information about the system dynamics and change regularly across eruptive styles (i.e., Strombolian to Vulcanian), allowing for direct comparison and quantification of different types of pulsatory activity during these eruptions.
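
    Since the distribution of repose times is characterized with a log-logistic parameter, a sketch of fitting such a distribution to an inter-event-time sample is given below using scipy's Fisk (log-logistic) distribution; the data are synthetic and the fitted shape parameter is only loosely analogous to the paper's s, whose exact definition is not reproduced here.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative repose-time sample (minutes between explosions); synthetic data,
    # not observations from any of the volcanoes in the study.
    rng = np.random.default_rng(2)
    repose = stats.fisk.rvs(c=2.5, scale=4.0, size=500, random_state=rng)

    # Fit a log-logistic (Fisk) distribution to the repose times, fixing the
    # location at zero; the shape parameter describes how spread out the
    # inter-event times are around their typical value.
    c_hat, loc_hat, scale_hat = stats.fisk.fit(repose, floc=0)
    print(c_hat, scale_hat)
    ```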

  17. Comprehensive analysis of individual pulp fiber bonds quantifies the mechanisms of fiber bonding in paper.

    PubMed

    Hirn, Ulrich; Schennach, Robert

    2015-01-01

    The process of papermaking consumes substantial amounts of energy and wood, which contributes to its environmental costs. In order to optimize the production of papermaking to suit its many applications in material science and engineering, a quantitative understanding of bonding forces between the individual pulp fibers is of importance. Here we show the first approach to quantify the bonding energies contributed by the individual bonding mechanisms. We calculated the impact on the bonding energy of the following mechanisms necessary for paper formation: mechanical interlocking, interdiffusion, capillary bridges, hydrogen bonding, Van der Waals forces, and Coulomb forces. Experimental results quantify the area in molecular contact necessary for bonding. Atomic force microscopy experiments derive the impact of mechanical interlocking. Capillary bridges also contribute to the bond. A model based on the crystal structure of cellulose leads to values for the chemical bonds. In contrast to the general belief favoring hydrogen bonding, Van der Waals bonds play the most important role according to our model. Comparison with experimentally derived bond energies supports the presented model. This study characterizes bond formation between pulp fibers, leading to insight that could potentially be used to optimize the papermaking process while reducing energy and wood consumption. PMID:26000898

  18. Comprehensive analysis of individual pulp fiber bonds quantifies the mechanisms of fiber bonding in paper

    NASA Astrophysics Data System (ADS)

    Hirn, Ulrich; Schennach, Robert

    2015-05-01

    The process of papermaking consumes substantial amounts of energy and wood, which contributes to its environmental costs. In order to optimize the production of papermaking to suit its many applications in material science and engineering, a quantitative understanding of bonding forces between the individual pulp fibers is of importance. Here we show the first approach to quantify the bonding energies contributed by the individual bonding mechanisms. We calculated the impact on the bonding energy of the following mechanisms necessary for paper formation: mechanical interlocking, interdiffusion, capillary bridges, hydrogen bonding, Van der Waals forces, and Coulomb forces. Experimental results quantify the area in molecular contact necessary for bonding. Atomic force microscopy experiments derive the impact of mechanical interlocking. Capillary bridges also contribute to the bond. A model based on the crystal structure of cellulose leads to values for the chemical bonds. In contrast to the general belief favoring hydrogen bonding, Van der Waals bonds play the most important role according to our model. Comparison with experimentally derived bond energies supports the presented model. This study characterizes bond formation between pulp fibers, leading to insight that could potentially be used to optimize the papermaking process while reducing energy and wood consumption.

  19. Approach to quantify human dermal skin aging using multiphoton laser scanning microscopy

    NASA Astrophysics Data System (ADS)

    Puschmann, Stefan; Rahn, Christian-Dennis; Wenck, Horst; Gallinat, Stefan; Fischer, Frank

    2012-03-01

    Extracellular skin structures in human skin are impaired during intrinsic and extrinsic aging. Assessment of these dermal changes is conducted by subjective clinical evaluation and histological and molecular analysis. We aimed to develop a new parameter for the noninvasive quantitative determination of dermal skin alterations utilizing the high-resolution three-dimensional multiphoton laser scanning microscopy (MPLSM) technique. To quantify structural differences between chronically sun-exposed and sun-protected human skin, the respective collagen-specific second harmonic generation and the elastin-specific autofluorescence signals were recorded in young and elderly volunteers using the MPLSM technique. After image processing, the elastin-to-collagen ratio (ELCOR) was calculated. Results show that the ELCOR parameter of volar forearm skin significantly increases with age. For elderly volunteers, the ELCOR value calculated for the chronically sun-exposed temple area is significantly augmented compared to the sun-protected upper arm area. Based on the MPLSM technology, we introduce the ELCOR parameter as a new means to quantify accurately age-associated alterations in the extracellular matrix.
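
    The ELCOR parameter is, in essence, a ratio of the elastin autofluorescence signal to the collagen SHG signal extracted from co-registered image stacks. The sketch below shows one plausible way to compute such a ratio; the thresholding and summation steps are assumptions, as the published image-processing pipeline is not reproduced here.

    ```python
    import numpy as np

    def elcor(elastin_af_stack, collagen_shg_stack, threshold=0.0):
        """Elastin-to-collagen ratio (ELCOR) from co-registered 3D image stacks:
        ratio of above-threshold elastin autofluorescence signal to
        above-threshold collagen SHG signal (simplified illustration)."""
        e = np.asarray(elastin_af_stack, dtype=float)
        c = np.asarray(collagen_shg_stack, dtype=float)
        e_sum = e[e > threshold].sum()
        c_sum = c[c > threshold].sum()
        return e_sum / c_sum

    # illustrative synthetic stacks (depth x height x width)
    rng = np.random.default_rng(3)
    print(elcor(rng.random((20, 128, 128)), rng.random((20, 128, 128))))
    ```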

  20. Quantifying Land Use Impacts on Biodiversity: Combining Species-Area Models and Vulnerability Indicators.

    PubMed

    Chaudhary, Abhishek; Verones, Francesca; de Baan, Laura; Hellweg, Stefanie

    2015-08-18

    Habitat degradation and subsequent biodiversity damage often take place far from the place of consumption because of globalization and the increasing level of international trade. Informing consumers and policy makers about the biodiversity impacts "hidden" in the life cycle of imported products is an important step toward achieving sustainable consumption patterns. Spatially explicit methods are needed in life cycle assessment to accurately quantify biodiversity impacts of products and processes. We use the Countryside species-area relationship (SAR) to quantify regional species loss due to land occupation and transformation for five taxa and six land use types in 804 terrestrial ecoregions. Further, we calculate vulnerability scores for each ecoregion based on the fraction of each species' geographic range (endemic richness) hosted by the ecoregion and the IUCN assigned threat level of each species. Vulnerability scores are multiplied with SAR-predicted regional species loss to estimate potential global extinctions per unit of land use. As a case study, we assess the land use biodiversity impacts of 1 kg of bioethanol produced using six different feed stocks in different parts of the world. Results show that the regions with highest biodiversity impacts differed markedly when the vulnerability of species was included. PMID:26197362
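
    The species-loss step can be illustrated with a power-law SAR in which converted land contributes an affinity-weighted effective area; the sketch below follows that general Countryside-SAR logic with made-up areas, affinities and exponent, not the taxon- and ecoregion-specific parameters of the study. Multiplying the resulting regional loss by an ecoregion vulnerability score would then approximate global extinctions per unit of land use.

    ```python
    def countryside_sar_loss(s_org, a_org, a_remaining, converted, z=0.25):
        """Regional species loss under a countryside-SAR-style model.

        s_org       : original species richness of the taxon in the ecoregion
        a_org       : original natural habitat area
        a_remaining : natural habitat area left after occupation/transformation
        converted   : list of (area, affinity) pairs for each human land-use type,
                      where affinity (0-1) is the taxon's ability to use that land
        z           : SAR exponent (0.25 is a common illustrative default)
        """
        a_eff = a_remaining + sum(area * h for area, h in converted)
        return s_org * (1.0 - (a_eff / a_org) ** z)

    # illustrative numbers: 1000 species; 40% of habitat converted to cropland
    # (affinity 0.3) and 10% to urban land (affinity 0.05)
    loss = countryside_sar_loss(1000, 1.0, 0.5, [(0.4, 0.3), (0.1, 0.05)], z=0.25)
    print(loss)
    ```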

  1. Characterizing uncertainties for quantifying bathymetry change between time-separated multibeam echo-sounder surveys

    NASA Astrophysics Data System (ADS)

    Schmitt, Thierry; Mitchell, Neil C.; Ramsay, A. Tony S.

    2008-05-01

    Changes of bathymetry derived from multibeam sonars are useful for quantifying the effects of many sedimentary, tectonic and volcanic processes, but depth changes also require an assessment of their uncertainty. Here, we outline and illustrate a simple technique that aims both to quantify uncertainties and to help reveal the spatial character of errors. An area of immobile seafloor is mapped in each survey, providing a common 'benchmark'. Each survey dataset over the benchmark is filtered with a simple moving-average window and depth differences between the two surveys are collated to derive a difference histogram. The procedure is repeated using different length-scales of filtering. By plotting the variability of the differences versus the length-scale of the filter, the different effects of spatially uncorrelated and correlated noise can be deduced. The former causes variability to decrease systematically as predicted by the Central Limit Theorem, whereas the remaining variability not predicted by the Central Limit Theorem then represents the effect of spatially correlated noise. Calculations made separately for different beams can reveal whether problems are due to heave, roll, etc., which affect inner and outer beams differently. We show how the results can be applied to create a map of uncertainties, which can be used to remove insignificant data from the bathymetric change map. We illustrate the technique by characterizing changes in nearshore bed morphology over one annual cycle using data from a subtidal bay, bedrock headland and a banner sand bank in the Bristol Channel, UK.
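
    The benchmark procedure can be emulated in a few lines: smooth both surveys with moving-average windows of increasing size, difference them over the immobile area, and watch how the spread decays. A sketch with synthetic surveys is shown below; the noise levels, offset and window sizes are illustrative assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def difference_spread_vs_scale(depth_survey1, depth_survey2, window_sizes):
        """Standard deviation of depth differences over a common benchmark area
        after smoothing each survey with moving-average windows of increasing size.

        For purely uncorrelated noise the spread falls roughly as 1/sqrt(N),
        with N the number of soundings in the window; the residual spread at
        large windows indicates spatially correlated error."""
        spreads = []
        for w in window_sizes:
            s1 = ndimage.uniform_filter(depth_survey1, size=w)
            s2 = ndimage.uniform_filter(depth_survey2, size=w)
            spreads.append(np.std(s1 - s2))
        return spreads

    # illustrative synthetic surveys over an immobile benchmark (flat bed + noise)
    rng = np.random.default_rng(4)
    bed = np.zeros((200, 200))
    surv1 = bed + rng.normal(0, 0.10, bed.shape)
    surv2 = bed + rng.normal(0, 0.10, bed.shape) + 0.02   # small correlated offset
    print(difference_spread_vs_scale(surv1, surv2, [1, 3, 5, 9, 17]))
    ```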

  2. Quantifying local cerebral blood flow by N-isopropyl-p-(123I)iodoamphetamine (IMP) tomography

    SciTech Connect

    Kuhl, D.E.; Barrio, J.R.; Huang, S.C.; Selin, C.; Ackermann, R.F.; Lear, J.L.; Wu, J.L.; Lin, T.H.; Phelps, M.E.

    1982-03-01

    A model was validated wherein local cerebral blood flow (LCBF) in humans was quantified by single-photon emission computed tomography (SPECT) with intravenously injected N-isopropyl-p-(123I)iodoamphetamine (IMP) combined with a modification of the classic method of arterial input sampling. After intravenous injection of IMP in rat, autoradiograms of the brain showed activity distributions in the pattern of LCBF. IMP was nearly completely removed on first pass through monkey brain after intracarotid injection (CBF = 33 ml/100 g/min) and washed out with a half-time of approximately 1 hr. When the modified method of arterial input and tissue-sample counting was applied to dog brain, there was good correspondence between LCBF based on IMP and that based on microsphere injection over a wide flow range. In applying the method to human subjects using SPECT, whole-brain CBF measured 47.2 +/- 5.4 ml/100 g/min (mean +/- s.d., N = 5), stable gray-white distinction persisted for over 1 hr, and the half-time for brain washout was approximately 1 hr. Perfusion deficits in patients were clearly demonstrated and quantified, comparing well with results now available from positron ECT.

  3. VA-Index: Quantifying Assortativity Patterns in Networks with Multidimensional Nodal Attributes.

    PubMed

    Pelechrinis, Konstantinos; Wei, Dong

    2016-01-01

    Network connections have been shown to be correlated with structural or external attributes of the network vertices in a variety of cases. Given the prevalence of this phenomenon, network scientists have developed metrics to quantify its extent. In particular, the assortativity coefficient is used to capture the level of correlation between a single-dimensional attribute (categorical or scalar) of the network nodes and the observed connections, i.e., the edges. Nevertheless, in many cases a multi-dimensional, i.e., vector feature of the nodes is of interest. Similar attributes can describe complex behavioral patterns (e.g., mobility) of the network entities. To date, little attention has been given to this setting, and there has not been a general and formal treatment of this problem. In this study we develop a metric, the vector assortativity index (VA-index for short), based on network randomization and (empirical) statistical hypothesis testing that is able to quantify the assortativity patterns of a network with respect to a vector attribute. Our extensive experimental results on synthetic network data show that the VA-index outperforms a baseline extension of the assortativity coefficient, which has been used in the literature to cope with similar cases. Furthermore, the VA-index can be calibrated (in terms of parameters) fairly easily, while its benefits increase with the (co-)variance of the vector elements, for which the baseline systematically over- or underestimates the true mixing patterns of the network. PMID:26816262
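
    The core idea of combining network randomization with hypothesis testing can be sketched as a permutation test on a vector node attribute: compare an edge-wise similarity statistic against its null distribution under attribute shuffling. The code below illustrates that idea with cosine similarity and a random graph; it is not the published VA-index formula, and the statistic, graph and attribute dimensions are illustrative choices.

    ```python
    import numpy as np
    import networkx as nx

    def vector_assortativity_z(G, attrs, n_perm=1000, seed=0):
        """Permutation-test sketch in the spirit of the VA-index: z-score of the
        mean cosine similarity of vector attributes across edges relative to a
        null distribution obtained by shuffling attributes over nodes."""
        rng = np.random.default_rng(seed)
        nodes = list(G.nodes())
        X = np.array([attrs[v] for v in nodes], dtype=float)
        X = X / np.linalg.norm(X, axis=1, keepdims=True)       # unit vectors
        idx = {v: i for i, v in enumerate(nodes)}
        edges = np.array([(idx[u], idx[v]) for u, v in G.edges()])

        def edge_similarity(M):
            return np.mean(np.sum(M[edges[:, 0]] * M[edges[:, 1]], axis=1))

        observed = edge_similarity(X)
        null = np.array([edge_similarity(X[rng.permutation(len(nodes))])
                         for _ in range(n_perm)])
        return (observed - null.mean()) / null.std()

    # illustrative use on a random graph with 3-dimensional node attributes
    G = nx.erdos_renyi_graph(100, 0.05, seed=1)
    attrs = {v: np.random.default_rng(v).normal(size=3) for v in G.nodes()}
    print(vector_assortativity_z(G, attrs))
    ```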

  4. Quantifying the Short-Term Costs of Conservation Interventions for Fishers at Lake Alaotra, Madagascar

    PubMed Central

    Wallace, Andrea P. C.; Milner-Gulland, E. J.; Jones, Julia P. G.; Bunnefeld, Nils; Young, Richard; Nicholson, Emily

    2015-01-01

    Artisanal fisheries are a key source of food and income for millions of people, but if poorly managed, fishing can have declining returns as well as impacts on biodiversity. Management interventions such as spatial and temporal closures can improve fishery sustainability and reduce environmental degradation, but may carry substantial short-term costs for fishers. The Lake Alaotra wetland in Madagascar supports a commercially important artisanal fishery and provides habitat for a Critically Endangered primate and other endemic wildlife of conservation importance. Using detailed data from more than 1,600 fisher catches, we used linear mixed effects models to explore and quantify relationships between catch weight, effort, and spatial and temporal restrictions to identify drivers of fisher behaviour and quantify the potential effect of fishing restrictions on catch. We found that restricted area interventions and fishery closures would generate direct short-term costs through reduced catch and income, and these costs vary between groups of fishers using different gear. Our results show that conservation interventions can have uneven impacts on local people with different fishing strategies. This information can be used to formulate management strategies that minimise the adverse impacts of interventions, increase local support and compliance, and therefore maximise conservation effectiveness. PMID:26107284

  5. Quantifying the Short-Term Costs of Conservation Interventions for Fishers at Lake Alaotra, Madagascar.

    PubMed

    Wallace, Andrea P C; Milner-Gulland, E J; Jones, Julia P G; Bunnefeld, Nils; Young, Richard; Nicholson, Emily

    2015-01-01

    Artisanal fisheries are a key source of food and income for millions of people, but if poorly managed, fishing can have declining returns as well as impacts on biodiversity. Management interventions such as spatial and temporal closures can improve fishery sustainability and reduce environmental degradation, but may carry substantial short-term costs for fishers. The Lake Alaotra wetland in Madagascar supports a commercially important artisanal fishery and provides habitat for a Critically Endangered primate and other endemic wildlife of conservation importance. Using detailed data from more than 1,600 fisher catches, we used linear mixed effects models to explore and quantify relationships between catch weight, effort, and spatial and temporal restrictions to identify drivers of fisher behaviour and quantify the potential effect of fishing restrictions on catch. We found that restricted area interventions and fishery closures would generate direct short-term costs through reduced catch and income, and these costs vary between groups of fishers using different gear. Our results show that conservation interventions can have uneven impacts on local people with different fishing strategies. This information can be used to formulate management strategies that minimise the adverse impacts of interventions, increase local support and compliance, and therefore maximise conservation effectiveness. PMID:26107284

  6. Quantifying dynamic sensitivity of optimization algorithm parameters to improve hydrological model calibration

    NASA Astrophysics Data System (ADS)

    Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng

    2016-02-01

    It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying the influence is very complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it is capable of handling small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure the performance of SCE-UA. Results show that the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters in the search processes, including individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which has not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence to improve algorithm performance in the search processes.
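
    The sketch below illustrates the general ANOVA idea only, not the paper's implementation: the variance of a performance metric across runs is decomposed over two hypothetical SCE-UA parameter factors and their interaction, and each term's share of the total sum of squares is reported.

```python
# Sketch of variance decomposition over algorithm-parameter factors via ANOVA.
# Parameter names, levels and run results below are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for n_complexes in (2, 5, 10):              # hypothetical SCE-UA parameter levels
    for pts_per_complex in (5, 10, 20):
        for _ in range(5):                   # replicate calibration runs
            speed = 1.0 / n_complexes + 0.02 * pts_per_complex + rng.normal(0, 0.05)
            rows.append((n_complexes, pts_per_complex, speed))
runs = pd.DataFrame(rows, columns=["n_complexes", "pts_per_complex", "speed"])

fit = smf.ols("speed ~ C(n_complexes) * C(pts_per_complex)", data=runs).fit()
table = anova_lm(fit, typ=2)
print(table["sum_sq"] / table["sum_sq"].sum())   # share of variance per term
```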

  7. An integrated method for quantifying root architecture of field-grown maize

    PubMed Central

    Wu, Jie; Guo, Yan

    2014-01-01

    Background and Aims A number of techniques have recently been developed for studying the root system architecture (RSA) of seedlings grown in various media. In contrast, methods for sampling and analysis of the RSA of field-grown plants, particularly for details of the lateral root components, are generally inadequate. Methods An integrated methodology was developed that includes a custom-made root-core sampling system for extracting intact root systems of individual maize plants, a combination of proprietary software and a novel program used for collecting individual RSA information, and software for visualizing the measured individual nodal root architecture. Key Results Example experiments show that large root cores can be sampled, and topological and geometrical structure of field-grown maize root systems can be quantified and reconstructed using this method. Second- and higher order laterals are found to contribute substantially to total root number and length. The length of laterals of distinct orders varies significantly. Abundant higher order laterals can arise from a single first-order lateral, and they concentrate in the proximal axile branching zone. Conclusions The new method allows more meaningful sampling than conventional methods because of its easily opened, wide corer and sampling machinery, and effective analysis of RSA using the software. This provides a novel technique for quantifying RSA of field-grown maize and also provides a unique evaluation of the contribution of lateral roots. The method also offers valuable potential for parameterization of root architectural models. PMID:24532646

  8. Quantifying the effect of environment stability on the transcription factor repertoire of marine microbes

    PubMed Central

    2011-01-01

    Background DNA-binding transcription factors (TFs) regulate cellular functions in prokaryotes, often in response to environmental stimuli. Thus, the environment exerts constant selective pressure on the TF gene content of microbial communities. Recently a study on marine Synechococcus strains detected differences in their genomic TF content related to environmental adaptation, but so far the effect of environmental parameters on the content of TFs in bacterial communities has not been systematically investigated. Results We quantified the effect of environment stability on the transcription factor repertoire of marine pelagic microbes from the Global Ocean Sampling (GOS) metagenome using interpolated physico-chemical parameters and multivariate statistics. Thirty-five percent of the difference in relative TF abundances between samples could be explained by environment stability. Six percent was attributable to spatial distance but none to a combination of both spatial distance and stability. Some individual TFs showed a stronger relationship to environment stability and space than the total TF pool. Conclusions Environmental stability appears to have a clearly detectable effect on TF gene content in bacterioplanktonic communities described by the GOS metagenome. Interpolated environmental parameters were shown to compare well to in situ measurements and were essential for quantifying the effect of the environment on the TF content. It is demonstrated that comprehensive and well-structured contextual data will strongly enhance our ability to interpret the functional potential of microbes from metagenomic data. PMID:22587903

  9. Predicting Outcome in Comatose Patients: The Role of EEG Reactivity to Quantifiable Electrical Stimuli

    PubMed Central

    Liu, Gang; Su, Yingying; Liu, Yifei; Jiang, Mengdi; Zhang, Yan; Zhang, Yunzhou; Gao, Daiquan

    2016-01-01

    Objective. To test the value of quantifiable electrical stimuli as a reliable method to assess electroencephalogram reactivity (EEG-R) for the early prognostication of outcome in comatose patients. Methods. EEG was recorded in consecutive adults in coma after cardiopulmonary resuscitation (CPR) or stroke. EEG-R to standard electrical stimuli was tested. Each patient received a 3-month follow-up by the Glasgow-Pittsburgh cerebral performance categories (CPC) or modified Rankin scale (mRS) score. Results. Twenty-two patients met the inclusion criteria. In the CPR group, 6 of 7 patients with EEG-R had good outcomes (positive predictive value (PPV), 85.7%) and 4 of 5 patients without EEG-R had poor outcomes (negative predictive value (NPV), 80%). The sensitivity and specificity were 85.7% and 80%, respectively. In the stroke group, 6 of 7 patients with EEG-R had good outcomes (PPV, 85.7%); all of the 3 patients without EEG-R had poor outcomes (NPV, 100%). The sensitivity and specificity were 100% and 75%, respectively. Of all patients, the presence of EEG-R showed 92.3% sensitivity, 77.7% specificity, 85.7% PPV, and 87.5% NPV. Conclusion. EEG-R to quantifiable electrical stimuli might be a good positive predictive factor for the prognosis of outcome in comatose patients after CPR or stroke. PMID:27127529
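
    The pooled figures quoted above can be reproduced directly from the per-group counts given in the abstract (12 reactive EEGs with good outcome, 2 with poor outcome; 1 non-reactive EEG with good outcome, 7 with poor outcome), as in the short check below.

```python
# Worked check of the pooled figures, treating EEG reactivity as the "positive"
# test and a good 3-month outcome as the "positive" condition.
tp, fp = 12, 2   # reactive EEG: 12 good outcomes, 2 poor outcomes
fn, tn = 1, 7    # non-reactive EEG: 1 good outcome, 7 poor outcomes

sensitivity = tp / (tp + fn)   # 12/13 = 92.3%
specificity = tn / (tn + fp)   # 7/9  = 77.8%
ppv = tp / (tp + fp)           # 12/14 = 85.7%
npv = tn / (tn + fn)           # 7/8  = 87.5%
print(f"sens={sensitivity:.1%} spec={specificity:.1%} ppv={ppv:.1%} npv={npv:.1%}")
```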

  10. Quantifying DMS-cloud-climate interactions using the ECHAM5-HAMMOZ model

    NASA Astrophysics Data System (ADS)

    Thomas, M.; Suntharalingam, P.; Rast, S.; Pozzoli, L.; Feichter, J.; Lenton, T.

    2009-04-01

    The CLAW hypothesis (Charlson et al. 1987) proposes a feedback loop between ocean ecosystems and the earth's climate. The exact contribution of each process in their proposed feedback loop is still uncertain. Here we use a state-of-the-art general circulation model, ECHAM5-HAMMOZ, to assess changes in cloud microphysical properties arising from prescribed perturbations to oceanic dimethyl sulphide (DMS) emissions in a present day climate scenario. ECHAM5-HAMMOZ consists of three interlinked modules: the atmospheric model ECHAM5, the aerosol module HAM and the tropospheric chemistry module MOZ. This study focuses on the atmosphere over the southern oceans, where anthropogenic influence is minimal. We investigate changes in a range of aerosol and cloud properties to establish and quantify the linkages between them. We focus on changes in cloud droplet number concentration (CDNC), cloud droplet effective radii, total cloud cover and radiative forcing due to changes in DMS. Our preliminary results suggest that ECHAM5-HAMMOZ produces a realistic simulation of the first and second aerosol indirect effects over the Southern Ocean. The regions with higher DMS emissions show an increase in CDNC, a decrease in cloud effective radius and an increase in cloud cover. The magnitude of these changes is quantified with the ECHAM5-HAMMOZ model and will be discussed in detail.

  11. Quantifying Riverscape Connectivity with Graph Theory

    NASA Astrophysics Data System (ADS)

    Carbonneau, P.; Milledge, D.; Sinha, R.; Tandon, S. K.

    2013-12-01

    Fluvial catchments convey fluxes of water, sediment, nutrients and aquatic biota. At continental scales, crustal topography defines the overall path of channels, whilst at local scales depositional and/or erosional features generally determine the exact path of a channel. Furthermore, constructions such as dams, for either water abstraction or hydropower, often have a significant impact on channel networks. The concept of 'connectivity' is commonly invoked when conceptualising the structure of a river network. This concept is easy to grasp but there have been uneven efforts across the environmental sciences to actually quantify connectivity. Currently there have only been a few studies reporting quantitative indices of connectivity in river sciences, notably in the study of avulsion processes. However, the majority of current work describing some form of environmental connectivity in a quantitative manner is in the field of landscape ecology. Driven by the need to quantify habitat fragmentation, landscape ecologists have returned to graph theory. Within this formal setting, landscape ecologists have successfully developed a range of indices which can model connectivity loss. Such formal connectivity metrics are currently needed for a range of applications in fluvial sciences. One of the most urgent needs relates to dam construction. In the developed world, hydropower development has generally slowed and in many countries, dams are actually being removed. However, this is not the case in the developing world, where hydropower is seen as a key element of low-emissions power security. For example, several dam projects are envisaged in Himalayan catchments in the next two decades. This region is already under severe pressure from climate change and urbanisation, and a better understanding of the network fragmentation which can be expected in this system is urgently needed. In this paper, we apply and adapt connectivity metrics from landscape ecology. We then examine the
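
    The abstract names the family of metrics (graph-theoretic connectivity indices from landscape ecology) without specifying one, so the sketch below uses a deliberately simple stand-in: the fraction of reach pairs that remain mutually reachable before and after removing edges that represent hypothetical dams. The toy network and dam locations are assumptions for illustration.

```python
# Illustrative stand-in connectivity measure, not the indices used in the paper.
import networkx as nx

def connected_pair_fraction(G):
    n = G.number_of_nodes()
    total_pairs = n * (n - 1) / 2
    reachable = sum(len(c) * (len(c) - 1) / 2 for c in nx.connected_components(G))
    return reachable / total_pairs

river = nx.balanced_tree(2, 4)          # toy dendritic network of 31 reaches
dams = [(0, 1), (2, 6)]                 # hypothetical dam locations (edges)
before = connected_pair_fraction(river)
river.remove_edges_from(dams)
after = connected_pair_fraction(river)
print(f"connected pair fraction: before={before:.2f}, after={after:.2f}")
```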

  12. Quantifying Collective Attention from Tweet Stream

    PubMed Central

    Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

    2013-01-01

    Online social media are increasingly facilitating our social interactions, thereby making available a massive “digital fossil” of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of “collective attention” on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or “tweets.” Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attention, including one related to the Tohoku-oki earthquake. “Retweet” networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era. PMID:23637913
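
    The core measurement named in the abstract, the Jensen-Shannon divergence between a regular (circadian) and an irregular tweet-rate distribution, can be sketched as below; the hourly counts are synthetic, and note that SciPy's jensenshannon() returns the square root of the divergence, so it is squared here.

```python
# Minimal sketch of the core measurement (not the authors' pipeline): compare a
# regular circadian profile of hourly tweet counts with a bursty one.
import numpy as np
from scipy.spatial.distance import jensenshannon

hours = np.arange(24)
regular = 100 + 50 * np.sin(2 * np.pi * (hours - 6) / 24)   # smooth daily rhythm
bursty = regular.copy()
bursty[20] += 400                                           # sudden burst of tweets

p = regular / regular.sum()
q = bursty / bursty.sum()
js_divergence = jensenshannon(p, q, base=2) ** 2            # in bits
print(f"JS divergence = {js_divergence:.4f} bits")
```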

  13. Quantifying collective attention from tweet stream.

    PubMed

    Sasahara, Kazutoshi; Hirata, Yoshito; Toyoda, Masashi; Kitsuregawa, Masaru; Aihara, Kazuyuki

    2013-01-01

    Online social media are increasingly facilitating our social interactions, thereby making available a massive "digital fossil" of human behavior. Discovering and quantifying distinct patterns using these data is important for studying social behavior, although the rapid time-variant nature and large volumes of these data make this task difficult and challenging. In this study, we focused on the emergence of "collective attention" on Twitter, a popular social networking service. We propose a simple method for detecting and measuring the collective attention evoked by various types of events. This method exploits the fact that tweeting activity exhibits a burst-like increase and an irregular oscillation when a particular real-world event occurs; otherwise, it follows regular circadian rhythms. The difference between regular and irregular states in the tweet stream was measured using the Jensen-Shannon divergence, which corresponds to the intensity of collective attention. We then associated irregular incidents with their corresponding events that attracted the attention and elicited responses from large numbers of people, based on the popularity and the enhancement of key terms in posted messages or "tweets." Next, we demonstrate the effectiveness of this method using a large dataset that contained approximately 490 million Japanese tweets by over 400,000 users, in which we identified 60 cases of collective attention, including one related to the Tohoku-oki earthquake. "Retweet" networks were also investigated to understand collective attention in terms of social interactions. This simple method provides a retrospective summary of collective attention, thereby contributing to the fundamental understanding of social behavior in the digital era. PMID:23637913

  14. Aurora Boundaries Quantified by Geomagnetic Index

    NASA Astrophysics Data System (ADS)

    Carbary, J. F.

    2004-12-01

    Various operational systems require information on the location and intensity of the aurora. A statistical model of the aurora is given using global images from the Ultraviolet Imager (UVI) on the Polar satellite. The equatorward (EQ), poleward (PO) and peak (PK) boundaries of the auroral oval are determined using UVI images averaged into 1° x 1° spatial bins according to common geomagnetic indices such as Kp, AE, AL, and PCI. From these bin-averaged images, latitude intensity profiles at 1 hour MLT intervals are constructed by interpolation. A background is subtracted for each profile, and the EQ, PO, and PK boundary latitudes are found from the corrected profile. (The PK boundary is the maximum, and the EQ and PO boundaries are threshold locations of fixed irradiances such as 1, 2, or 4 photons/(cm2 s).) Several months of images during the winter and summer of 1997 were used to statistically quantify the boundaries at various levels of geomagnetic activity given by the several indices. As expected, the higher the level of activity, the wider and more expanded the oval. More importantly, the boundaries are functionally related to the indices at any local time. These functional relations can then be used to determine the auroral location at any level of geomagnetic activity given by the indices. Thus, given a level of geomagnetic activity, one can find the boundaries of the oval as defined on the basis of intensity. By monitoring the relevant geomagnetic index, an operational system can then easily compute the expected oval location and judge its impact on performance. The optimum indices that best define the oval will be discussed.
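
    A rough sketch of the boundary definitions described above, applied to a single synthetic, background-subtracted latitude profile: the peak (PK) is the profile maximum, and the equatorward (EQ) and poleward (PO) boundaries are the threshold crossings on either side of it. The profile and threshold are illustrative assumptions, not UVI data.

```python
# Toy boundary extraction from one latitude-intensity profile.
import numpy as np

lat = np.arange(55.0, 85.0, 0.5)                            # magnetic latitude grid
profile = 4.0 * np.exp(-0.5 * ((lat - 68.0) / 3.0) ** 2)    # synthetic auroral profile
threshold = 1.0                                             # e.g. 1 photon/(cm2 s)

pk_idx = int(np.argmax(profile))                            # peak boundary
above = profile >= threshold
eq_idx = int(np.argmax(above))                              # first latitude above threshold
po_idx = len(above) - 1 - int(np.argmax(above[::-1]))       # last latitude above threshold
print(f"EQ={lat[eq_idx]:.1f}  PK={lat[pk_idx]:.1f}  PO={lat[po_idx]:.1f}")
```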

  15. Quantifying landscape resilience using vegetation indices

    NASA Astrophysics Data System (ADS)

    Eddy, I. M. S.; Gergel, S. E.

    2014-12-01

    Landscape resilience refers to the ability of systems to adapt to and recover from disturbance. In pastoral landscapes, degradation can be measured in terms of increased desertification and/or shrub encroachment. In many countries across Central Asia, the use and resilience of pastoral systems have changed markedly over the past 25 years, influenced by centralized Soviet governance, private property rights and, recently, communal resource governance. In Kyrgyzstan, recent governance reforms were in response to the increasing degradation of pastures attributed to livestock overgrazing. Our goal is to examine and map the landscape-level factors that influence overgrazing throughout successive governance periods. Here, we map and examine some of the spatial factors influencing landscape resilience in agro-pastoral systems in the Kyrgyz Republic, where pastures occupy >50% of the country's area. We ask three questions: 1) Which mechanisms of pasture degradation (desertification vs. shrub encroachment) are detectable using remote sensing vegetation indices? 2) Are these degraded pastures associated with landscape features that influence herder mobility and accessibility (e.g., terrain, distance to other pastures)? 3) Have these patterns changed through successive governance periods? Using a chronosequence of Landsat imagery (1999-2014), NDVI and other VIs were used to identify trends in pasture condition during the growing season. Least-cost path distances as well as graph theoretic indices were derived from topographic factors to assess landscape connectivity (from villages to pastures and among pastures). Fieldwork was used to assess the feasibility and accuracy of this approach using the most recent imagery. Previous research concluded that low herder mobility hindered pasture use, thus we expect the distance from pasture to village to be an important predictor of pasture condition. This research will quantify the magnitude of pastoral degradation and test

  16. Quantifying human vitamin kinetics using AMS

    SciTech Connect

    Hillegonds, D; Dueker, S; Ognibene, T; Buchholz, B; Lin, Y; Vogel, J; Clifford, A

    2004-02-19

    Tracing vitamin kinetics at physiologic concentrations has been hampered by a lack of quantitative sensitivity for chemically equivalent tracers that could be used safely in healthy people. Instead, elderly or ill volunteers were sought for studies involving pharmacologic doses with radioisotopic labels. These studies fail to be relevant in two ways: vitamins are inherently micronutrients, whose biochemical paths are saturated and distorted by pharmacological doses; and while vitamins remain important for health in the elderly or ill, their greatest effects may be in preventing slow and cumulative diseases by proper consumption throughout youth and adulthood. Neither the target dose nor the target population are available for nutrient metabolic studies through decay counting of radioisotopes at high levels. Stable isotopic labels are quantified by isotope ratio mass spectrometry at levels that trace physiologic vitamin doses, but the natural background of stable isotopes severely limits the time span over which the tracer is distinguishable. Indeed, study periods seldom ranged over a single biological mean life of the labeled nutrients, failing to provide data on the important final elimination phase of the compound. Kinetic data for the absorption phase is similarly rare in micronutrient research because the phase is rapid, requiring many consecutive plasma samples for accurate representation. However, repeated blood samples of sufficient volume for precise stable or radio-isotope quantitations consume an indefensible amount of the volunteer's blood over a short period. Thus, vitamin pharmacokinetics in humans has often relied on compartmental modeling based upon assumptions and tested only for the short period of maximal blood circulation, a period that poorly reflects absorption or final elimination kinetics except for the most simple models.

  17. The Use of Micro-CT with Image Segmentation to Quantify Leakage in Dental Restorations

    PubMed Central

    Carrera, Carola A.; Lan, Caixia; Escobar-Sanabria, David; Li, Yuping; Rudney, Joel; Aparicio, Conrado; Fok, Alex

    2015-01-01

    Objective To develop a method for quantifying leakage in composite resin restorations after curing, using non-destructive X-ray micro-computed tomography (micro-CT) and image segmentation. Methods Class-I cavity preparations were made in 20 human third molars, which were divided into 2 groups. Group I was restored with Z100 and Group II with Filtek LS. Micro-CT scans were taken for both groups before and after they were submerged in silver nitrate solution (AgNO3 50%) to reveal any interfacial gap and leakage at the tooth restoration interface. Image segmentation was carried out by first performing image correlation to align the before- and after-treatment images and then by image subtraction to isolate the silver nitrate penetrant for precise volume calculation. Two-tailed Student’s t-test was used to analyze the results, with the level of significance set at p<0.05. Results All samples from Group I showed silver nitrate penetration with a mean volume of 1.3 ± 0.7 mm3. In Group II, only 2 out of the 10 restorations displayed infiltration along the interface, giving a mean volume of 0.3 ± 0.3 mm3. The difference between the two groups was statistically significant (p < 0.05). The infiltration showed non-uniform patterns within the interface. Significance We have developed a method to quantify the volume of leakage using non-destructive micro-CT, silver nitrate infiltration and image segmentation. Our results confirmed that substantial leakage could occur in composite restorations that have imperfections in the adhesive layer or interfacial debonding through polymerization shrinkage. For the restorative systems investigated in this study, this occurred mostly at the interface between the adhesive system and the tooth structure. PMID:25649496
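
    A simplified sketch of the quantification step (not the authors' code): subtract the registered before-treatment scan from the after-treatment scan, threshold the difference to isolate the silver nitrate infiltrant, convert the voxel count to a volume, and compare the two groups with a two-tailed t-test. The volumes and voxel size below are synthetic.

```python
# Synthetic stand-in for the image-subtraction and volume-quantification step.
import numpy as np
from scipy import stats

def leakage_volume(before, after, voxel_mm3, threshold):
    diff = after.astype(float) - before.astype(float)   # scans assumed registered
    return np.count_nonzero(diff > threshold) * voxel_mm3

rng = np.random.default_rng(42)
voxel_mm3 = 0.01 ** 3                       # hypothetical 10 um isotropic voxels

def simulate_sample(extra_infiltration):
    before = rng.random((50, 50, 50))
    after = rng.random((50, 50, 50)) + extra_infiltration
    return leakage_volume(before, after, voxel_mm3, threshold=0.8)

group1 = [simulate_sample(0.05) for _ in range(10)]   # stand-in for Group I
group2 = [simulate_sample(0.00) for _ in range(10)]   # stand-in for Group II
t, p = stats.ttest_ind(group1, group2)
print(f"t = {t:.2f}, p = {p:.4f}")
```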

  18. Quantifying Recent Changes in Earth's Radiation Budget

    NASA Astrophysics Data System (ADS)

    Loeb, N. G.; Kato, S.; Lyman, J. M.; Johnson, G. C.; Doelling, D.; Wong, T.; Allan, R.; Soden, B. J.; Stephens, G. L.

    2011-12-01

    The radiative energy balance between the solar or shortwave (SW) radiation absorbed by Earth and the thermal infrared or longwave (LW) radiation emitted back to space is fundamental to climate. An increase in the net radiative flux into the system (e.g., due to external forcing) is primarily stored as heat in the ocean, and can resurface at a later time to affect weather and climate on a global scale. The associated changes in the components of the Earth-atmosphere system, such as clouds, the surface and the atmosphere, further alter the radiative balance, leading to further changes in weather and climate. Observations from instruments aboard Aqua and other satellites clearly show large interannual and decadal variability in the Earth's radiation budget associated with the major modes of climate variability (e.g., ENSO, NAO, etc.). We present results from CERES regarding variations in the net radiation imbalance of the planet during the past decade, comparing them with independent estimates of ocean heating rates derived from in-situ observations of ocean heat content. We combine these two data sets to calculate that during the past decade Earth has been accumulating energy at a rate of 0.54±0.43 W m-2, suggesting that while Earth's surface has not warmed significantly during the 2000s, energy is continuing to accumulate in the sub-surface ocean. Our observations do not support previous claims of "missing energy" in the system. We exploit data from other instruments such as MODIS, AIRS, CALIPSO and CloudSat to examine how clouds and atmospheric temperature/humidity vary both at regional and global scales during ENSO events. Finally, we present a revised representation of the global mean Earth radiation budget derived from gridded monthly mean TOA and surface radiative fluxes (EBAF-TOA and EBAF-SFC) that are based on a radiative assimilation analysis of observations from Aqua, Terra, geostationary satellites, CALIPSO and CloudSat.

  19. Methods for quantifying uncertainty in fast reactor analyses.

    SciTech Connect

    Fanning, T. H.; Fischer, P. F.

    2008-04-07

    Liquid-metal-cooled fast reactors in the form of sodium-cooled fast reactors have been successfully built and tested in the U.S. and throughout the world. However, no fast reactor has operated in the U.S. for nearly fourteen years. More importantly, the U.S. has not constructed a fast reactor in nearly 30 years. In addition to reestablishing the necessary industrial infrastructure, the development, testing, and licensing of a new, advanced fast reactor concept will likely require a significant base technology program that will rely more heavily on modeling and simulation than has been done in the past. The ability to quantify uncertainty in modeling and simulations will be an important part of any experimental program and can provide added confidence that established design limits and safety margins are appropriate. In addition, there is an increasing demand from the nuclear industry for best-estimate analysis methods to provide confidence bounds along with their results. The ability to quantify uncertainty will be an important component of modeling that is used to support design, testing, and experimental programs. Three avenues of UQ investigation are proposed. Two relatively new approaches are described which can be directly coupled to simulation codes currently being developed under the Advanced Simulation and Modeling program within the Reactor Campaign. A third approach, based on robust Monte Carlo methods, can be used in conjunction with existing reactor analysis codes as a means of verification and validation of the more detailed approaches.

  20. Risk-Quantified Decision-Making at Rocky Flats

    SciTech Connect

    Myers, Jeffrey C.

    2008-01-15

    Surface soils in the 903 Pad Lip Area of the Rocky Flats Environmental Technology Site (RFETS) were contaminated with 239/240Pu by site operations. To meet remediation goals, areas where 239/240Pu activity exceeded the threshold level of 50 pCi/g needed to be accurately delineated from those below 50 pCi/g. In addition, the confidence for remedial decisions needed to be quantified and displayed visually. Remedial objectives needed to achieve a 90 percent certainty that unremediated soils had less than a 10 percent chance of 239/240Pu activity exceeding 50 pCi/g. Removing areas where the chance of exceedance is greater than 10 percent creates a 90 percent confidence in the remedial effort results. To achieve the stipulated goals, the geostatistical approach of probability kriging (Myers 1997) was implemented. Lessons learnt: Geostatistical techniques provided a risk-quantified approach to remedial decision-making and provided visualizations of the excavation area. Error analysis demonstrated compliance and confirmed that more than sufficient soils were removed. Error analysis also illustrated that any soils above the threshold that were not removed would be of nominal activity. These quantitative approaches were useful from a regulatory, engineering, and stakeholder satisfaction perspective.

  1. Quantifying complexity in translational research: an integrated approach

    PubMed Central

    Munoz, David A.; Nembhard, Harriet Black; Kraschnewski, Jennifer L.

    2014-01-01

    Purpose This article quantifies complexity in translational research. The impact of major operational steps and technical requirements (TR) is calculated with respect to their ability to accelerate moving new discoveries into clinical practice. Design/Methodology/Approach A three-phase integrated Quality Function Deployment (QFD) and Analytic Hierarchy Process (AHP) method was used to quantify complexity in translational research. A case study in obesity was used to demonstrate usability. Findings Generally, the evidence generated was valuable for understanding various components in translational research. Particularly, we found that collaboration networks, multidisciplinary team capacity and community engagement are crucial for translating new discoveries into practice. Research limitations/implications As the method is mainly based on subjective opinion, some argue that the results may be biased. However, a consistency ratio is calculated and used as a guide to subjectivity. Alternatively, a larger sample may be incorporated to reduce bias. Practical implications The integrated QFD-AHP framework provides evidence that could be helpful to generate agreement, develop guidelines, allocate resources wisely, identify benchmarks and enhance collaboration among similar projects. Originality/value Current conceptual models in translational research provide little or no clue to assess complexity. The proposed method aimed to fill this gap. Additionally, the literature review includes various features that have not been explored in translational research. PMID:25417380
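
    The consistency ratio mentioned above is the standard AHP check (Saaty); for a pairwise comparison matrix it can be computed as below. The 3x3 matrix is hypothetical and does not reproduce the paper's judgments.

```python
# Standard AHP priority vector and consistency ratio for a hypothetical matrix.
import numpy as np

A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])            # hypothetical pairwise comparisons

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                    # priority vector

n = A.shape[0]
lambda_max = eigvals.real[k]
ci = (lambda_max - n) / (n - 1)             # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index
cr = ci / ri                                # consistency ratio; < 0.10 is acceptable
print(weights, round(cr, 3))
```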

  2. Quantifying radionuclide signatures from a γ-γ coincidence system.

    PubMed

    Britton, Richard; Jackson, Mark J; Davies, Ashley V

    2015-11-01

    A method for quantifying gamma coincidence signatures has been developed, and tested in conjunction with a high-efficiency multi-detector system to quickly identify trace amounts of radioactive material. The γ-γ system utilises fully digital electronics and list-mode acquisition to time-stamp each event, allowing coincidence matrices to be easily produced alongside typical 'singles' spectra. To quantify the coincidence signatures a software package has been developed to calculate efficiency and cascade summing corrected branching ratios. This utilises ENSDF records as an input, and can be fully automated, allowing the user to quickly and easily create/update a coincidence library that contains all possible γ and conversion electron cascades, associated cascade emission probabilities, and true-coincidence summing corrected γ cascade detection probabilities. It is also fully searchable by energy, nuclide, coincidence pair, γ multiplicity, cascade probability and half-life of the cascade. The probabilities calculated were tested using measurements performed on the γ-γ system, and found to provide accurate results for the nuclides investigated. Given the flexibility of the method, (it only relies on evaluated nuclear data, and accurate efficiency characterisations), the software can now be utilised for a variety of systems, quickly and easily calculating coincidence signature probabilities. PMID:26254208

  3. Quantifying Disorder through Conditional Entropy: An Application to Fluid Mixing

    PubMed Central

    Brandani, Giovanni B.; Schor, Marieke; MacPhee, Cait E.; Grubmüller, Helmut; Zachariae, Ulrich; Marenduzzo, Davide

    2013-01-01

    In this paper, we present a method to quantify the extent of disorder in a system by using conditional entropies. Our approach is especially useful when other global, or mean field, measures of disorder fail. The method is equally suited for both continuum and lattice models, and it can be made rigorous for the latter. We apply it to mixing and demixing in multicomponent fluid membranes, and show that it has advantages over previous measures based on Shannon entropies, such as a much diminished dependence on binning and the ability to capture local correlations. Further potential applications are very diverse, and could include the study of local and global order in fluid mixtures, liquid crystals, magnetic materials, and particularly biomolecular systems. PMID:23762401
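
    As a toy illustration of the underlying idea (not the paper's estimator), the sketch below computes the conditional entropy of a lattice site's species given its right-hand neighbour for a well-mixed versus a fully demixed binary lattice; lower conditional entropy indicates more local order.

```python
# Toy conditional-entropy measure on a binary lattice.
import numpy as np

def conditional_entropy(lattice):
    x = lattice[:, :-1].ravel()            # site species
    y = lattice[:, 1:].ravel()             # right-neighbour species
    joint = np.zeros((2, 2))
    np.add.at(joint, (x, y), 1)
    joint /= joint.sum()
    p_y = joint.sum(axis=0)
    h_joint = -np.sum(joint[joint > 0] * np.log2(joint[joint > 0]))
    h_y = -np.sum(p_y[p_y > 0] * np.log2(p_y[p_y > 0]))
    return h_joint - h_y                   # H(X|Y) = H(X,Y) - H(Y)

rng = np.random.default_rng(0)
mixed = rng.integers(0, 2, size=(64, 64))
demixed = np.zeros((64, 64), dtype=int)
demixed[:, 32:] = 1                        # two fully separated domains
print(conditional_entropy(mixed), conditional_entropy(demixed))
```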

  4. Live Cell Interferometry Quantifies Dynamics of Biomass Partitioning during Cytokinesis

    PubMed Central

    Zangle, Thomas A.; Teitell, Michael A.; Reed, Jason

    2014-01-01

    The equal partitioning of cell mass between daughters is the usual and expected outcome of cytokinesis for self-renewing cells. However, most studies of partitioning during cell division have focused on daughter cell shape symmetry or segregation of chromosomes. Here, we use live cell interferometry (LCI) to quantify the partitioning of daughter cell mass during and following cytokinesis. We use adherent and non-adherent mouse fibroblast and mouse and human lymphocyte cell lines as models and show that, on average, mass asymmetries present at the time of cleavage furrow formation persist through cytokinesis. The addition of multiple cytoskeleton-disrupting agents leads to increased asymmetry in mass partitioning which suggests the absence of active mass partitioning mechanisms after cleavage furrow positioning. PMID:25531652

  5. Quantifying Age-dependent Extinction from Species Phylogenies.

    PubMed

    Alexander, Helen K; Lambert, Amaury; Stadler, Tanja

    2016-01-01

    Several ecological factors that could play into species extinction are expected to correlate with species age, i.e., time elapsed since the species arose by speciation. To date, however, statistical tools to incorporate species age into likelihood-based phylogenetic inference have been lacking. We present here a computational framework to quantify age-dependent extinction through maximum likelihood parameter estimation based on phylogenetic trees, assuming species lifetimes are gamma distributed. Testing on simulated trees shows that neglecting age dependence can lead to biased estimates of key macroevolutionary parameters. We then apply this method to two real data sets, namely a complete phylogeny of birds (class Aves) and a clade of self-compatible and -incompatible nightshades (Solanaceae), gaining initial insights into the extent to which age-dependent extinction may help explain macroevolutionary patterns. Our methods have been added to the R package TreePar. PMID:26405218

  6. Quantifying chaotic dynamics from integrate-and-fire processes

    SciTech Connect

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-15

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  7. Electron tunnelling through a quantifiable barrier of variable width

    NASA Astrophysics Data System (ADS)

    Wyatt, A. F. G.; Bromberger, H.; Klier, J.; Leiderer, P.; Zech, M.

    2009-02-01

    This is the first study of electron tunnelling through a quantifiable barrier of adjustable width. We find quantitative agreement between the measured and calculated tunnelling probability with no adjustable constants. The tunnel barrier is a thin film of 3He on Cs1 which it wets. We excite photoelectrons which have to tunnel through the barrier to escape. The image potential must be included in calculating the barrier and hence the tunnelling current. This has been a debatable point until now. We confirm that an electron has a potential of 1.0 eV in liquid 3He for short times before a bubble forms. We show that the thickness of the 3He is given by thermodynamics for films of thickness at least down to 3 monolayers.

  8. Quantifying chaotic dynamics from integrate-and-fire processes

    NASA Astrophysics Data System (ADS)

    Pavlov, A. N.; Pavlova, O. N.; Mohammad, Y. K.; Kurths, J.

    2015-01-01

    Characterizing chaotic dynamics from integrate-and-fire (IF) interspike intervals (ISIs) is relatively easily performed at high firing rates. When the firing rate is low, a correct estimation of Lyapunov exponents (LEs) describing dynamical features of complex oscillations reflected in the IF ISI sequences becomes more complicated. In this work we discuss peculiarities and limitations of quantifying chaotic dynamics from IF point processes. We consider the main factors leading to underestimated LEs and demonstrate a way of improving the numerical determination of LEs from IF ISI sequences. We show that estimations of the two largest LEs can be performed using around 400 mean periods of chaotic oscillations in the regime of phase-coherent chaos. Application to real data is discussed.

  9. Gradient approach to quantify the gradation smoothness for output media

    NASA Astrophysics Data System (ADS)

    Kim, Youn Jin; Bang, Yousun; Choh, Heui-Keun

    2010-01-01

    We aim to quantify the perception of color gradation smoothness using objectively measurable properties. We propose a model to compute the smoothness of hardcopy color-to-color gradations. It is a gradient-based method that can be determined as a function of the 95th percentile of the second derivative for the tone-jump estimator and the fifth percentile of the first derivative for the tone-clipping estimator. The performances of the model and of a previously suggested method were evaluated psychophysically, and their prediction accuracies were compared to each other. Our model showed a stronger Pearson correlation to the corresponding visual data, and the magnitude of the Pearson correlation reached up to 0.87. Its statistical significance was verified through analysis of variance. Color variations of the representative memory colors (blue sky, green grass and Caucasian skin) were rendered as gradational scales and utilized as the test stimuli.
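
    The two estimators named above can be sketched on a one-dimensional lightness ramp as below; the ramp, the injected tone jump and the clipped region are synthetic, the use of the absolute second derivative is an assumption, and the way the two numbers are combined into the final smoothness score is not reproduced here.

```python
# Synthetic demonstration of the tone-jump and tone-clipping estimators.
import numpy as np

rng = np.random.default_rng(3)
ramp = np.linspace(20.0, 80.0, 256)         # ideal lightness ramp (L*)
ramp[128:] += 2.0                            # a small tone jump in the print
ramp[240:] = ramp[240]                       # a clipped (flat) region
ramp += rng.normal(0.0, 0.1, ramp.size)      # measurement noise

d1 = np.gradient(ramp)
d2 = np.gradient(d1)
tone_jump = np.percentile(np.abs(d2), 95)    # 95th percentile of 2nd derivative
tone_clip = np.percentile(d1, 5)             # 5th percentile of 1st derivative
print(f"tone-jump estimator: {tone_jump:.3f}, tone-clipping estimator: {tone_clip:.3f}")
```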

  10. Quantifying Light, Medium, and Heavy Crude Oil Distribution in Homogeneous Porous Media

    NASA Astrophysics Data System (ADS)

    Ghosh, J.; Tick, G. R.

    2008-12-01

    Crude oil recovery is highly dependent upon the physical heterogeneity of media and the resulting distribution of the oil phase within the pore spaces. Factors such as capillary force, the geometry of the pore spaces, and interfacial tension between the oil blobs and water-wet porous media will ultimately control the recovery process. Pore scale studies were conducted to study the distribution and the morphology of various fractions of crude oil in increasingly heterogeneous porous media. In addition, experiments were also carried out to characterize the temporal changes in distribution and morphology of the oil phase after a series of surfactant flooding events. Specifically, columns were packed with three different porous media with increasing heterogeneity and distributed with three different fractions (light, medium, and heavy) of crude oil. The columns were imaged using synchrotron X-ray microtomography before and after a series of surfactant floods to quantify the resulting crude oil distributions over time. Preliminary results show that the light crude oil was more heterogeneously distributed than the medium fraction crude oil within the same porous media type both before and throughout the series of surfactant floods. It was also observed that approximately 95% of the medium fraction crude oil blob-size distribution was smaller (<0.0008 cu mm) than that of the light crude oil, encompassing a significant number of blob singlets. The lighter crude oil fraction has a median blob diameter approximately 20 times greater than that of the medium crude oil fraction. These results further reveal that oil extraction and recovery are highly dependent upon the oil fraction, the presence of small-sized blob singlets, and the resulting distributions present within porous media before and during surfactant flooding. This research will not only be helpful in understanding the factors controlling crude oil mobilization at the pore scale but also test the utility of

  11. Molecular Marker Approach on Characterizing and Quantifying Charcoal in Environmental Media

    NASA Astrophysics Data System (ADS)

    Kuo, L.; Herbert, B. E.; Louchouarn, P.

    2006-12-01

    Black carbon (BC) is widely distributed in natural environments including soils, sediments, freshwater, seawater and the atmosphere. It is produced mostly from the incomplete combustion of fossil fuels and vegetation. In recent years, increasing attention has been given to BC due to its potential influence in many biogeochemical processes. In the environment, BC exists as a continuum ranging from partly charred plant materials and charcoal residues to highly condensed soot and graphite particles. The heterogeneous nature of black carbon means that BC is always operationally defined, highlighting the need for standard methods that support data comparisons. Unlike soot and graphite, which can be quantified with well-established methods, it is difficult to directly quantify charcoal in geologic media due to its chemical and physical heterogeneity. Most of the available charcoal quantification methods detect unknown fractions of the BC continuum. To specifically identify and quantify charcoal in soils and sediments, we adopted and validated an innovative molecular marker approach that quantifies levoglucosan, a pyrogenic derivative of cellulose, as a proxy of charcoal. Levoglucosan is source-specific, stable and able to be detected at low concentrations using gas chromatography-mass spectrometry (GC-MS). In the present study, two different plant species, honey mesquite and cordgrass, were selected as the raw materials to synthesize charcoals. The lab-synthesized charcoals were made under controlled conditions to eliminate the high heterogeneity often found in natural charcoals. The effects of two major combustion factors, temperature and duration, on the yield of levoglucosan were characterized in the lab-synthesized charcoals. Our results showed that significant levoglucosan production in the two types of charcoal was restricted to relatively low combustion temperatures (150-350 °C). The combustion duration did not cause significant differences in the yield of

  12. Quantifying the Local Environment of Low Redshift Quasars

    NASA Astrophysics Data System (ADS)

    Brunner, R. J.; Bender, A.

    2003-12-01

    We report preliminary results from an ongoing project to understand the interplay between local environment and quasi-stellar objects (QSOs). Many models exist which account for various observed qualities of QSOs, but little is known about the relationship between QSOs, their host galaxy, and their local environment. We use data from the Sloan Digital Sky Survey (SDSS) DR1 to identify possible statistical excesses (both positive and negative) in the local density of galaxies surrounding each QSO. In order to explore the quasar-environment interaction, we quantify the relationships between the environment and the intrinsic properties of the QSOs. From these results we hope to learn more about the fueling processes of the supermassive black holes that are believed to lie at the center of QSOs. We would like to thank the NASA AISR Program for funding in part this work through grant NAG 5-12580.

  13. Quantifying distributed damage in composites via the thermoelastic effect

    SciTech Connect

    Mahoney, B.J.

    1992-01-01

    A new approach toward quantifying transverse matrix cracking in composite laminates using the thermoelastic effect is developed. The thermoelastic effect refers to the small temperature changes that are generated in components under dynamic loading. Two models are derived, and the theoretical predictions are experimentally verified for three types of laminates. Both models include damage-induced changes in the lamina stress state and lamina coefficients of thermal expansion, conduction effects, and epoxy thickness. The first model relates changes in the laminate TSA signal to changes in longitudinal laminate stiffness and Poisson's ratio. This model is based on gross simplifying assumptions and can be used on any composite laminate layup undergoing transverse matrix cracking. The second model relates TSA signal changes to longitudinal laminate stiffness, Poisson's ratio, and microcrack density for (0_p/90_q)_s and (90_q/0_p)_s cross-ply laminates. Both models yield virtually identical results for the cross-ply laminates considered. A sensitivity analysis is performed on both models to quantify the effects of reasonable property variations on the normalized stiffness vs. normalized TSA signal results for the three laminates under consideration. The results for the cross-ply laminates are very insensitive, while the (±45)_5s results are particularly sensitive to epoxy thickness and longitudinal lamina coefficient of thermal expansion. Experiments are conducted on (0_3/90_3)_s and (90_3/0_3)_s Gl/Ep laminates and (±45)_5s Gr/Ep laminates to confirm the theoretical developments of the thesis. There is a very good correlation between the theoretical predictions and experimental results for the Gl/Ep laminates.

  14. Quantifying Carbon Bioavailability in Northeast Siberian Soils

    NASA Astrophysics Data System (ADS)

    Heslop, J.; Chandra, S.; Sobczak, W. V.; Spektor, V.; Davydova, A.; Holmes, R. M.; Bulygina, E. B.; Schade, J. D.; Frey, K. E.; Bunn, A. G.; Walter Anthony, K.; Zimov, S. A.; Zimov, N.

    2010-12-01

    Soils in Northeast Siberia, particularly carbon-rich yedoma (Pleistocene permafrost) soils, have the potential to release large amounts of carbon dioxide and methane due to permafrost thaw and thermokarst activity. In order to quantify the carbon release potential in these soils, it is important to understand carbon bioavailability for microbial consumption in the permafrost. In this study we measured amounts of bioavailable soil carbon across five locations in the Kolyma River Basin, NE Siberia. At each location, we sampled four horizons (top active layer, bottom active layer, Holocene optimum permafrost, and Pleistocene permafrost) and conducted soil extracts for each sample. Filtered and unfiltered extracts were used in biological oxygen demand experiments to determine the dissolved and particulate bioavailable carbon potential for consumption in the soil. Concentrations of bioavailable carbon were 102-608 mg C/kg dry soil for filtered extracts and 115-703 mg C/kg dry soil for unfiltered extracts. Concentrations of carbon respired per gram of dry soil were roughly equal for both the DOC and POC extracts (P<0.001), suggesting that bioavailable soil carbon is predominantly in the dissolved form, or that an additional unknown limitation prevents organisms from utilizing carbon in the particulate form. Concentrations of bioavailable carbon were similar across the different sampling locations but differed among horizons. The top active layer (102-703 mg C/kg dry soil), Holocene optimum permafrost (193-481 mg C/kg dry soil), and Pleistocene permafrost (151-589 mg C/kg dry soil) horizons had the highest amounts of bioavailable carbon, and the bottom active layer (115-179 mg C/kg dry soil) horizon had the lowest amounts. For comparison, ice wedges had bioavailable carbon concentrations of 23.0 mg C/L and yedoma runoff from Duvyanni Yar had concentrations of 306 mg C/L. Pleistocene permafrost soils had similar concentrations of bioavailable carbon

  15. Quantifying the impacts of global disasters

    NASA Astrophysics Data System (ADS)

    Jones, L. M.; Ross, S.; Wilson, R. I.; Borrero, J. C.; Brosnan, D.; Bwarie, J. T.; Geist, E. L.; Hansen, R. A.; Johnson, L. A.; Kirby, S. H.; Long, K.; Lynett, P. J.; Miller, K. M.; Mortensen, C. E.; Perry, S. C.; Porter, K. A.; Real, C. R.; Ryan, K. J.; Thio, H. K.; Wein, A. M.; Whitmore, P.; Wood, N. J.

    2012-12-01

    The US Geological Survey, National Oceanic and Atmospheric Administration, California Geological Survey, and other entities are developing a Tsunami Scenario, depicting a realistic outcome of a hypothetical but plausible large tsunami originating in the eastern Aleutian Arc, affecting the west coast of the United States, including Alaska and Hawaii. The scenario includes earth-science effects, damage and restoration of the built environment, and social and economic impacts. Like the earlier ShakeOut and ARkStorm disaster scenarios, the purpose of the Tsunami Scenario is to apply science to quantify the impacts of natural disasters in a way that can be used by decision makers in the affected sectors to reduce the potential for loss. Most natural disasters are local. A major hurricane can destroy a city or damage a long swath of coastline while mostly sparing inland areas. The largest earthquake on record caused strong shaking along 1500 km of Chile, but left the capital relatively unscathed. Previous scenarios have used the local nature of disasters to focus interaction with the user community. However, the capacity for global disasters is growing with the interdependency of the global economy. Earthquakes have disrupted global computer chip manufacturing and caused stock market downturns. Tsunamis, however, can be global in their extent and direct impact. Moreover, the vulnerability of seaports to tsunami damage can increase the global consequences. The Tsunami Scenario is trying to capture the widespread effects while maintaining the close interaction with users that has been one of the most successful features of the previous scenarios. The scenario tsunami occurs in the eastern Aleutians with a source similar to the 2011 Tohoku event. Geologic similarities support the argument that a Tohoku-like source is plausible in Alaska. It creates a major nearfield tsunami in the Aleutian arc and peninsula, a moderate tsunami in the US Pacific Northwest, large but not the

  16. Quantifying the net slab pull force as a driving mechanism for plate tectonics

    NASA Astrophysics Data System (ADS)

    Schellart, W. P.

    2004-04-01

    It has remained unclear how much of the negative buoyancy force of the slab (FB) is used to pull the trailing plate at the surface into the mantle. Here I present three-dimensional laboratory experiments to quantify the net slab pull force (FNSP) with respect to FB during subduction. Results show that FNSP increases with increasing slab length and dip up to ~8-12% of FB, making FNSP up to twice as large as the ridge push force. The remainder of FB is primarily used to drive rollback-induced mantle flow (~70%), to bend the subducting plate at the trench (~15-30%) and to overcome shear resistance between slab and mantle (0-8%).

  17. Life cycle assessment of urban wastewater systems: Quantifying the relative contribution of sewer systems.

    PubMed

    Risch, Eva; Gutierrez, Oriol; Roux, Philippe; Boutin, Catherine; Corominas, Lluís

    2015-06-15

    This study aims to propose a holistic life cycle assessment (LCA) of urban wastewater systems (UWS) based on a comprehensive inventory including detailed construction and operation of sewer systems and wastewater treatment plants (WWTPs). For the first time, the inventory of sewer infrastructure construction includes piping materials and aggregates, manholes, connections, civil works and road rehabilitation. The operation stage comprises energy consumption in pumping stations together with air emissions of methane and hydrogen sulphide, and water emissions from sewer leaks. Using a real case study, this LCA aims to quantify the contributions of sewer systems to the total environmental impacts of the UWS. The results show that the construction of sewer infrastructure has an environmental impact (on half of the 18 studied impact categories) larger than both the construction and operation of the WWTP. This study highlights the importance of including the construction and operation of sewer systems in the environmental assessment of centralised versus decentralised options for UWS. PMID:25839834

  18. Quantifying Nanomolar Protein Concentrations Using Designed DNA Carriers and Solid-State Nanopores

    PubMed Central

    2016-01-01

    Designed “DNA carriers” have been proposed as a new method for nanopore based specific protein detection. In this system, target protein molecules bind to a long DNA strand at a defined position creating a second level transient current drop against the background DNA translocation. Here, we demonstrate the ability of this system to quantify protein concentrations in the nanomolar range. After incubation with target protein at different concentrations, the fraction of DNA translocations showing a secondary current spike allows for the quantification of the corresponding protein concentration. For our proof-of-principle experiments we use two standard binding systems, biotin–streptavidin and digoxigenin–antidigoxigenin, that allow for measurements of the concentration down to the low nanomolar range. The results demonstrate the potential for a novel quantitative and specific protein detection scheme using the DNA carrier method. PMID:27121643

  19. Quantifying Nanomolar Protein Concentrations Using Designed DNA Carriers and Solid-State Nanopores.

    PubMed

    Kong, Jinglin; Bell, Nicholas A W; Keyser, Ulrich F

    2016-06-01

    Designed "DNA carriers" have been proposed as a new method for nanopore based specific protein detection. In this system, target protein molecules bind to a long DNA strand at a defined position creating a second level transient current drop against the background DNA translocation. Here, we demonstrate the ability of this system to quantify protein concentrations in the nanomolar range. After incubation with target protein at different concentrations, the fraction of DNA translocations showing a secondary current spike allows for the quantification of the corresponding protein concentration. For our proof-of-principle experiments we use two standard binding systems, biotin-streptavidin and digoxigenin-antidigoxigenin, that allow for measurements of the concentration down to the low nanomolar range. The results demonstrate the potential for a novel quantitative and specific protein detection scheme using the DNA carrier method. PMID:27121643

  20. Quantifying fiber formation in meat analogs under high moisture extrusion using image processing

    NASA Astrophysics Data System (ADS)

    Ranasinghesagara, J.; Hsieh, F.; Yao, G.

    2005-11-01

    High moisture extrusion using twin-screw extruders shows great promise for producing meat analog products with vegetable proteins. The resulting products have well-defined fiber formations and resemble real meat in both visual appearance and taste sensation. Developing reliable non-destructive techniques to quantify the textural properties of extrudates is important for quality control in the manufacturing process. In this study, we developed an image processing technique to automatically characterize sample fiber formation using digital imaging. The algorithm is based on statistical analysis of the Hough transform. This objective method can be used as a standard method for evaluating other non-invasive methods. We have compared the fiber formation indices measured using this technique and a non-invasive fluorescence polarization method and obtained a high correlation.
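
    The sketch below illustrates the general approach only (Hough transform plus statistics of the detected line angles); the alignment index used here is an ad hoc stand-in, not the paper's fiber-formation index, and the toy image is synthetic.

```python
# Ad hoc fiber-alignment measure built from Hough-transform line angles.
import numpy as np
from skimage import feature, transform

def fiber_alignment_index(gray_image):
    edges = feature.canny(gray_image, sigma=1.0)
    hspace, angles, dists = transform.hough_line(edges)
    _, peak_angles, _ = transform.hough_line_peaks(hspace, angles, dists, num_peaks=40)
    # Small spread of detected line angles = strongly aligned fibers
    return 1.0 - np.std(peak_angles) / (np.pi / 2)

img = np.zeros((200, 200))
img[:, ::10] = 1.0                        # toy image of parallel vertical "fibers"
print(f"alignment index: {fiber_alignment_index(img):.2f}")
```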

  1. Quantifying the complexity of human colonic pressure signals using an entropy measure.

    PubMed

    Xu, Fei; Yan, Guozheng; Zhao, Kai; Lu, Li; Wang, Zhiwu; Gao, Jinyang

    2016-02-01

    Studying the complexity of human colonic pressure signals is important in understanding this intricate, evolved, dynamic system. This article presents a method for quantifying the complexity of colonic pressure signals using an entropy measure. As a self-adaptive non-stationary signal analysis algorithm, empirical mode decomposition can decompose a complex pressure signal into a set of intrinsic mode functions (IMFs). Considering that IMF2, IMF3, and IMF4 represent crucial characteristics of colonic motility, a new signal was reconstructed with these three signals. Then, the time entropy (TE), power spectral entropy (PSE), and approximate entropy (AE) of the reconstructed signal were calculated. For subjects with constipation and healthy individuals, experimental results showed that the entropies of reconstructed signals between these two classes were distinguishable. Moreover, the TE, PSE, and AE can be extracted as features for further subject classification. PMID:26043437
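
    Two of the three measures named above can be sketched as below on an already-reconstructed signal; the EMD step (available, e.g., through the PyEMD package) is omitted, and the sampling rate, test signal and parameter choices are conventional assumptions rather than the paper's settings.

```python
# Power spectral entropy and a basic approximate entropy, on a synthetic signal.
import numpy as np
from scipy.signal import welch

def power_spectral_entropy(x, fs):
    _, psd = welch(x, fs=fs)
    p = psd / psd.sum()
    return -np.sum(p[p > 0] * np.log2(p[p > 0]))

def approximate_entropy(x, m=2, r_factor=0.2):
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)
    def phi(m):
        templates = np.array([x[i:i + m] for i in range(len(x) - m + 1)])
        dists = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        counts = np.mean(dists <= r, axis=1)   # self-matches included, as is conventional
        return np.mean(np.log(counts))
    return phi(m) - phi(m + 1)

fs = 2.0                                       # hypothetical sampling rate, Hz
t = np.arange(0, 250, 1 / fs)
signal = np.sin(2 * np.pi * 0.01 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
print(power_spectral_entropy(signal, fs), approximate_entropy(signal))
```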

  2. Quantifying moisture transport in cementitious materials using neutron radiography

    NASA Astrophysics Data System (ADS)

    Lucero, Catherine L.

    It has been found through this study that small pores, namely voids created by chemical shrinkage, gel pores, and capillary pores, ranging from 0.5 nm to 50 µm, fill quickly through capillary action. However, large entrapped and entrained air voids ranging from 0.05 to 1.25 mm remain empty during the initial filling process. In mortar exposed to calcium chloride solution, a decrease in sorptivity was observed due to an increase in viscosity and surface tension of the solution, as proposed by Spragg et al. (2011). This work, however, also noted a decrease in the rate of absorption due to a reaction between the salt and the matrix, which results in the filling of the pores in the concrete. The results from neutron imaging can help in the interpretation of standard absorption tests. ASTM C1585 test results can be further analyzed in several ways that could give an accurate indication of the durability of the concrete. Results can be reported in depth of penetration versus the square root of time rather than mm³ of fluid per mm² of exposed surface area. Since a known fraction of pores are initially filling before reaching the edge of the sample, the actual depth of penetration can be calculated. This work is compared with an 'intrinsic sorptivity' that can be used to interpret mass measurements. Furthermore, the influence of shrinkage-reducing admixtures (SRAs) on drying was studied. Neutron radiographs showed that systems saturated in water remain "wetter" than systems saturated in 5% SRA solution. The SRA in the system reduces the moisture diffusion coefficient due to an increase in viscosity and decrease in surface tension. Neutron radiography provided spatial information of the drying front that cannot be achieved using other methods.

  3. Quantifying the effect size of changing environmental controls on carbon release from permafrost-affected soils

    NASA Astrophysics Data System (ADS)

    Schaedel, C.; Bader, M. K. F.; Schuur, E. A. G.; Bracho, R. G.; Capek, P.; De Baets, S. L.; Diakova, K.; Ernakovich, J. G.; Hartley, I. P.; Iversen, C. M.; Kane, E. S.; Knoblauch, C.; Lupascu, M.; Natali, S.; Norby, R. J.; O'Donnell, J. A.; Roy Chowdhury, T.; Santruckova, H.; Shaver, G. R.; Sloan, V. L.; Treat, C. C.; Waldrop, M. P.

    2014-12-01

    High-latitude surface air temperatures are rising twice as fast as the global mean, causing permafrost to thaw and thereby exposing large quantities of previously frozen organic carbon (C) to microbial decomposition. Increasing temperatures in high-latitude ecosystems not only increase C emissions from previously frozen C in permafrost but also indirectly affect the C cycle through changes in regional and local hydrology. Warmer temperatures increase thawing of ice-rich permafrost, causing land surface subsidence where soils become waterlogged, anoxic conditions prevail and C is released in the form of anaerobic CO2 and CH4. Although substrate quality, physical protection, and nutrient availability affect C decomposition, increasing temperatures and changes in surface and sub-surface hydrology are likely the dominant factors affecting the rate and form of C release from permafrost; however, their effect size on C release is poorly quantified. We have compiled a database of 24 incubation studies with soils from the active layer and permafrost from across the entire permafrost zone to quantify the effect size on C release of a) increasing temperatures and b) a change from aerobic to anaerobic soil conditions. Results from two different meta-analyses show that a 10°C increase in temperature increased C release by a factor of two in boreal forest, peatland and tundra ecosystems. Under aerobic incubation conditions, soils released on average three times more C than under anaerobic conditions, with large variation among the different ecosystems. While peatlands showed similar amounts of C release under aerobic and anaerobic soil conditions, tundra and boreal forest ecosystems released up to 8 times more C under oxic conditions. This pan-arctic synthesis shows that boreal forest and tundra soils will have a larger impact on climate change when newly thawed permafrost C decomposes in an aerobic environment compared to an anaerobic environment even when

  4. Radiative transfer modeling for quantifying lunar surface minerals, particle size, and submicroscopic metallic Fe

    NASA Astrophysics Data System (ADS)

    Li, Shuai; Li, Lin

    2011-09-01

    The main objective of this work is to quantify lunar surface minerals (agglutinate, clinopyroxene, orthopyroxene, plagioclase, olivine, ilmenite, and volcanic glass), particle sizes, and the abundance of submicroscopic metallic Fe (SMFe) from the Lunar Soil Characterization Consortium (LSCC) data set with Hapke's radiative transfer theory. The model is implemented for both forward and inverse modeling. We implement Hapke's radiative transfer theory in the inverse mode in which, instead of commonly used look-up tables, Newton's method and least squares are jointly used to solve the nonlinear equations. Although the effects of temperature and surface roughness are incorporated into the implementation to improve the model performance for application to lunar spacecraft data, these effects cannot be extensively addressed in the current work because of the use of lab-measured reflectance data. Our forward radiative transfer model results show that the correlation coefficients between modeled and measured spectra are over 0.99. For the inverse model, the derived particle sizes all fall within their measured ranges. The range of modeled SMFe for highland samples is 0.01%-0.5%, and for mare samples it is 0.03%-1%. The linear trend between SMFe and ferromagnetic resonance (Is) for all the LSCC samples is consistent with laboratory measurements. For quantifying lunar mineral abundances, the results show that the R-squared values for the training samples (Is/FeO ≤ 65) are over 0.65, with plagioclase having the highest correlation (0.94) and pyroxene the lowest (0.68). In future work, the model needs to be improved for handling more mature lunar soil samples.
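
    The inversion step described here (fitting compositional parameters to a measured spectrum by least squares rather than look-up tables) can be sketched generically. In the sketch below, the forward model is a placeholder linear mixture standing in for Hapke's radiative transfer model, and all names are illustrative.

```python
# Schematic of the inversion step only: fit endmember abundances to a measured
# spectrum by nonlinear least squares. `forward_model` is a placeholder.
import numpy as np
from scipy.optimize import least_squares

def forward_model(abundances, endmember_spectra):
    # Placeholder: abundance-weighted mixture of endmember spectra
    # (the study uses Hapke's radiative transfer model instead).
    return endmember_spectra.T @ abundances

def invert_spectrum(measured, endmember_spectra):
    k = endmember_spectra.shape[0]
    x0 = np.full(k, 1.0 / k)                       # start from equal abundances
    res = least_squares(
        lambda a: forward_model(a, endmember_spectra) - measured,
        x0, bounds=(0.0, 1.0))                     # abundances constrained to [0, 1]
    a = res.x
    return a / a.sum()                             # renormalize to sum to 1
```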

  5. Path Similarity Analysis: A Method for Quantifying Macromolecular Pathways

    PubMed Central

    Seyler, Sean L.; Kumar, Avishek; Thorpe, M. F.; Beckstein, Oliver

    2015-01-01

    Diverse classes of proteins function through large-scale conformational changes and various sophisticated computational algorithms have been proposed to enhance sampling of these macromolecular transition paths. Because such paths are curves in a high-dimensional space, it has been difficult to quantitatively compare multiple paths, a necessary prerequisite to, for instance, assess the quality of different algorithms. We introduce a method named Path Similarity Analysis (PSA) that enables us to quantify the similarity between two arbitrary paths and extract the atomic-scale determinants responsible for their differences. PSA utilizes the full information available in 3N-dimensional configuration space trajectories by employing the Hausdorff or Fréchet metrics (adopted from computational geometry) to quantify the degree of similarity between piecewise-linear curves. It thus completely avoids relying on projections into low dimensional spaces, as used in traditional approaches. To elucidate the principles of PSA, we quantified the effect of path roughness induced by thermal fluctuations using a toy model system. Using, as an example, the closed-to-open transitions of the enzyme adenylate kinase (AdK) in its substrate-free form, we compared a range of protein transition path-generating algorithms. Molecular dynamics-based dynamic importance sampling (DIMS) MD and targeted MD (TMD) and the purely geometric FRODA (Framework Rigidity Optimized Dynamics Algorithm) were tested along with seven other methods publicly available on servers, including several based on the popular elastic network model (ENM). PSA with clustering revealed that paths produced by a given method are more similar to each other than to those from another method and, for instance, that the ENM-based methods produced relatively similar paths. PSA applied to ensembles of DIMS MD and FRODA trajectories of the conformational transition of diphtheria toxin, a particularly challenging example, showed that
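
    The core quantity in this record is a distance between two trajectories treated as curves in configuration space. A minimal sketch of that idea, using the symmetric Hausdorff distance from SciPy, is shown below; PSA itself adds the Fréchet metric, clustering, and per-atom analysis on top of this, which are not reproduced here.

```python
# Minimal sketch: symmetric Hausdorff distance between two paths, each given
# as an (n_frames, 3N) array of configuration-space coordinates.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_path_distance(path_a, path_b):
    d_ab = directed_hausdorff(path_a, path_b)[0]   # farthest point of A from B
    d_ba = directed_hausdorff(path_b, path_a)[0]   # farthest point of B from A
    return max(d_ab, d_ba)
```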

  6. Quantifying Unnecessary Normal Tissue Complication Risks due to Suboptimal Planning: A Secondary Study of RTOG 0126

    SciTech Connect

    Moore, Kevin L.; Schmidt, Rachel; Moiseenko, Vitali; Olsen, Lindsey A.; Tan, Jun; Xiao, Ying; Galvin, James; Pugh, Stephanie; Seider, Michael J.; Dicker, Adam P.; Bosch, Walter; Michalski, Jeff; Mutic, Sasa

    2015-06-01

    Purpose: The purpose of this study was to quantify the frequency and clinical severity of quality deficiencies in intensity modulated radiation therapy (IMRT) planning in the Radiation Therapy Oncology Group 0126 protocol. Methods and Materials: A total of 219 IMRT patients from the high-dose arm (79.2 Gy) of RTOG 0126 were analyzed. To quantify plan quality, we used established knowledge-based methods for patient-specific dose-volume histogram (DVH) prediction of organs at risk and a Lyman-Kutcher-Burman (LKB) model for grade ≥2 rectal complications to convert DVHs into normal tissue complication probabilities (NTCPs). The LKB model was validated by fitting dose-response parameters relative to observed toxicities. The 90th percentile (22 of 219) of plans with the lowest excess risk (difference between clinical and model-predicted NTCP) were used to create a model for the presumed best practices in the protocol (pDVH{sub 0126,top10%}). Applying the resultant model to the entire sample enabled comparisons between DVHs that patients could have received to DVHs they actually received. Excess risk quantified the clinical impact of suboptimal planning. Accuracy of pDVH predictions was validated by replanning 30 of 219 patients (13.7%), including equal numbers of presumed “high-quality,” “low-quality,” and randomly sampled plans. NTCP-predicted toxicities were compared to adverse events on protocol. Results: Existing models showed that bladder-sparing variations were less prevalent than rectum quality variations and that increased rectal sparing was not correlated with target metrics (dose received by 98% and 2% of the PTV, respectively). Observed toxicities were consistent with current LKB parameters. Converting DVH and pDVH{sub 0126,top10%} to rectal NTCPs, we observed 94 of 219 patients (42.9%) with ≥5% excess risk, 20 of 219 patients (9.1%) with ≥10% excess risk, and 2 of 219 patients (0.9%) with ≥15% excess risk. Replanning demonstrated the
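
    The DVH-to-NTCP conversion named in this abstract follows the standard Lyman-Kutcher-Burman formulation (generalized EUD followed by a probit function). The sketch below shows that standard calculation; the parameter values are placeholders, not the fitted values from the study.

```python
# Sketch of the standard LKB NTCP calculation from a differential DVH.
import numpy as np
from scipy.stats import norm

def lkb_ntcp(doses, volumes, n=0.09, m=0.13, td50=76.9):
    """doses (Gy) and fractional volumes of the differential DVH bins."""
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                                   # normalize bin volumes
    geud = (v * np.asarray(doses, dtype=float) ** (1.0 / n)).sum() ** n
    t = (geud - td50) / (m * td50)
    return norm.cdf(t)                                # NTCP = Phi(t)
```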

  7. Quantifying the role of forest soil and bedrock in the acid neutralization of surface water in steep hillslopes.

    PubMed

    Asano, Yuko; Uchida, Taro

    2005-02-01

    The role of soil and bedrock in acid neutralizing processes has been difficult to quantify because of hydrological and biogeochemical uncertainties. To quantify those roles, hydrochemical observations were conducted at two hydrologically well-defined, steep granitic hillslopes in the Tanakami Mountains of Japan. These paired hillslopes are similar except for their soils; Fudoji is leached of base cations (base saturation <6%), while Rachidani is covered with fresh soil (base saturation >30%), because the erosion rate is 100-1000 times greater. The results showed that (1) soil solution pH at the soil-bedrock interface at Fudoji (4.3) was significantly lower than that of Rachidani (5.5), (2) the hillslope discharge pH in both hillslopes was similar (6.7-6.8), and (3) at Fudoji, 60% of the base cations leaching from the hillslope were derived from bedrock, whereas only 20% were derived from bedrock in Rachidani. Further, previously published results showed that the stream pH could not be predicted from the acid deposition rate and soil base saturation status. These results demonstrate that bedrock plays an especially important role when the overlying soil has been leached of base cations. These results indicate that while the status of soil acidification is a first-order control on vulnerability to surface water acidification, in some cases such as at Fudoji, subsurface interaction with the bedrock determines the sensitivity of surface water to acidic deposition. PMID:15519722

  8. Quantifying the Behavior of Stock Correlations Under Market Stress

    PubMed Central

    Preis, Tobias; Kenett, Dror Y.; Stanley, H. Eugene; Helbing, Dirk; Ben-Jacob, Eshel

    2012-01-01

    Understanding correlations in complex systems is crucial in the face of turbulence, such as the ongoing financial crisis. However, in complex systems, such as financial systems, correlations are not constant but instead vary in time. Here we address the question of quantifying state-dependent correlations in stock markets. Reliable estimates of correlations are absolutely necessary to protect a portfolio. We analyze 72 years of daily closing prices of the 30 stocks forming the Dow Jones Industrial Average (DJIA). We find the striking result that the average correlation among these stocks scales linearly with market stress reflected by normalized DJIA index returns on various time scales. Consequently, the diversification effect which should protect a portfolio melts away in times of market losses, just when it would most urgently be needed. Our empirical analysis is consistent with the interesting possibility that one could anticipate diversification breakdowns, guiding the design of protected portfolios. PMID:23082242
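
    The measurement behind this result is a local (windowed) mean pairwise correlation paired with a normalized index return for the same window. The sketch below illustrates that pairing with an equal-weight index proxy and a simple normalization; both are assumptions made for the example, not the paper's exact definitions.

```python
# Sketch: mean pairwise correlation per window vs. normalized index return.
# `prices` is a (T, n_stocks) array of daily closing prices.
import numpy as np

def local_correlation_vs_stress(prices, window=20):
    returns = np.diff(np.log(prices), axis=0)
    index_returns = returns.mean(axis=1)               # equal-weight index proxy
    stress, mean_corr = [], []
    for start in range(0, len(returns) - window + 1, window):
        r = returns[start:start + window]
        c = np.corrcoef(r, rowvar=False)                # n_stocks x n_stocks
        off_diag = c[~np.eye(c.shape[0], dtype=bool)]
        mean_corr.append(off_diag.mean())
        win_ret = index_returns[start:start + window].sum()
        stress.append(win_ret / index_returns.std())    # normalized index return
    return np.array(stress), np.array(mean_corr)
```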

  9. Identifying and quantifying interactions in a laboratory swarm

    NASA Astrophysics Data System (ADS)

    Puckett, James; Kelley, Douglas; Ouellette, Nicholas

    2013-03-01

    Emergent collective behavior, such as in flocks of birds or swarms of bees, is exhibited throughout the animal kingdom. Many models have been developed to describe swarming and flocking behavior using systems of self-propelled particles obeying simple rules or interacting via various potentials. However, due to experimental difficulties and constraints, little empirical data exists for characterizing the exact form of the biological interactions. We study laboratory swarms of flying Chironomus riparius midges, using stereoimaging and particle tracking techniques to record three-dimensional trajectories for all the individuals in the swarm. We describe methods to identify and quantify interactions by examining these trajectories, and report results on interaction magnitude, frequency, and mutuality.

  10. Quantifying Interparticle Forces and Heterogeneity in 3D Granular Materials.

    PubMed

    Hurley, R C; Hall, S A; Andrade, J E; Wright, J

    2016-08-26

    Interparticle forces in granular materials are intimately linked to mechanical properties and are known to self-organize into heterogeneous structures, or force chains, under external load. Despite progress in understanding the statistics and spatial distribution of interparticle forces in recent decades, a systematic method for measuring forces in opaque, three-dimensional (3D), frictional, stiff granular media has yet to emerge. In this Letter, we present results from an experiment that combines 3D x-ray diffraction, x-ray tomography, and a numerical force inference technique to quantify interparticle forces and their heterogeneity in an assembly of quartz grains undergoing a one-dimensional compression cycle. Forces exhibit an exponential decay above the mean and partition into strong and weak networks. We find a surprising inverse relationship between macroscopic load and the heterogeneity of interparticle forces, despite the clear emergence of two force chains that span the system. PMID:27610890

  11. Quantifying light exposure patterns in young adult students

    NASA Astrophysics Data System (ADS)

    Alvarez, Amanda A.; Wildsoet, Christine F.

    2013-08-01

    Exposure to bright light appears to be protective against myopia in both animals (chicks, monkeys) and children, but quantitative data on human light exposure are limited. In this study, we report on a technique for quantifying light exposure using wearable sensors. Twenty-seven young adult subjects wore a light sensor continuously for two weeks during one of three seasons, and also completed questionnaires about their visual activities. Light data were analyzed with respect to refractive error and season, and the objective sensor data were compared with subjects' estimates of time spent indoors and outdoors. Subjects' estimates of time spent indoors and outdoors were in poor agreement with durations reported by the sensor data. The results of questionnaire-based studies of light exposure should thus be interpreted with caution. The role of light in refractive error development should be investigated using multiple methods such as sensors to complement questionnaires.

  12. Quantifying the Impact of Unavailability in Cyber-Physical Environments

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Federick T.; Mili, Ali

    2014-01-01

    The Supervisory Control and Data Acquisition (SCADA) system discussed in this work manages a distributed control network for the Tunisian Electric & Gas Utility. The network is dispersed over a large geographic area that monitors and controls the flow of electricity/gas from both remote and centralized locations. The availability of the SCADA system in this context is critical to ensuring the uninterrupted delivery of energy, including safety, security, continuity of operations and revenue. Such SCADA systems are the backbone of national critical cyber-physical infrastructures. Herein, we propose adapting the Mean Failure Cost (MFC) metric for quantifying the cost of unavailability. This new metric combines the classic availability formulation with MFC. The resulting metric, so-called Econometric Availability (EA), offers a computational basis to evaluate a system in terms of the gain/loss ($/hour of operation) that affects each stakeholder due to unavailability.

  13. Quantifying adhesion energy of mechanical coatings at atomistic scale

    NASA Astrophysics Data System (ADS)

    Yin, Deqiang; Peng, Xianghe; Qin, Yi; Feng, Jiling; Wang, Zhongchang

    2011-12-01

    Coatings of transition metal compounds find widespread technological applications where adhesion is known to influence or control functionality. Here, by first-principles calculations, we propose a new way to assess adhesion in coatings and apply it to analyze the TiN coating. We find that the calculated adhesion energies of both the (1 1 1) and (0 0 1) orientations are small under no residual stress, yet increase linearly once the stress is imposed, suggesting that the residual stress is key to affecting adhesion. The strengthened adhesion is attributed to the stress-induced shrinkage of neighbouring bonds, which results in stronger interactions between bonds in TiN coatings. A further finite element simulation (FEM) based on the calculated adhesion energy reproduces the initial cracking process observed in nano-indentation experiments well, thereby validating the application of this approach in quantifying the adhesion energy of surface coating systems.

  14. Quantifying the effects of anagenetic and cladogenetic evolution.

    PubMed

    Bartoszek, Krzysztof

    2014-08-01

    An ongoing debate in evolutionary biology is whether phenotypic change occurs predominantly around the time of speciation or whether it instead accumulates gradually over time. In this work I propose a general framework incorporating both types of change, quantify the effects of speciational change via the correlation between species and attribute the proportion of change to each type. I discuss results of parameter estimation of Hominoid body size in this light. I derive mathematical formulae related to this problem, the probability generating functions of the number of speciation events along a randomly drawn lineage and from the most recent common ancestor of two randomly chosen tip species for a conditioned Yule tree. Additionally I obtain in closed form the variance of the distance from the root to the most recent common ancestor of two randomly chosen tip species. PMID:24933475

  15. Quantifying induced effects of subsurface renewable energy storage

    NASA Astrophysics Data System (ADS)

    Bauer, Sebastian; Beyer, Christof; Pfeiffer, Tilmann; Boockmeyer, Anke; Popp, Steffi; Delfs, Jens-Olaf; Wang, Bo; Li, Dedong; Dethlefsen, Frank; Dahmke, Andreas

    2015-04-01

    New methods and technologies for energy storage are required for the transition to renewable energy sources. Subsurface energy storage systems such as salt caverns or porous formations offer the possibility of hosting large amounts of energy or substance. When employing these systems, an adequate system and process understanding is required in order to assess the feasibility of the individual storage option at the respective site and to predict the complex and interacting effects induced. This understanding is the basis for assessing the potential as well as the risks connected with a sustainable usage of these storage options, especially when considering possible mutual influences. For achieving this aim, in this work synthetic scenarios for the use of the geological underground as an energy storage system are developed and parameterized. The scenarios are designed to represent typical conditions in North Germany. The types of subsurface use investigated here include gas storage and heat storage in porous formations. The scenarios are numerically simulated and interpreted with regard to risk analysis and effect forecasting. For this, the numerical simulators Eclipse and OpenGeoSys are used. The latter is enhanced to include the required coupled hydraulic, thermal, geomechanical and geochemical processes. Using the simulated and interpreted scenarios, the induced effects are quantified individually and monitoring concepts for observing these effects are derived. This presentation will detail the general investigation concept used and analyze the parameter availability for this type of model applications. Then the process implementation and numerical methods required and applied for simulating the induced effects of subsurface storage are detailed and explained. Application examples show the developed methods and quantify induced effects and storage sizes for the typical settings parameterized. This work is part of the ANGUS+ project, funded by the German Ministry

  16. Sodium borohydride/chloranil-based assay for quantifying total flavonoids.

    PubMed

    He, Xiangjiu; Liu, Dong; Liu, Rui Hai

    2008-10-22

    A novel sodium borohydride/chloranil-based (SBC) assay for quantifying total flavonoids, including flavones, flavonols, flavonones, flavononols, isoflavonoids, flavanols, and anthocyanins, has been developed. Flavonoids with a 4-carbonyl group were reduced to flavanols using sodium borohydride catalyzed with aluminum chloride. Then the flavan-4-ols were oxidized to anthocyanins by chloranil in an acetic acid solution. The anthocyanins were reacted with vanillin in concentrated hydrochloric acid and then quantified spectrophotometrically at 490 nm. A representative of each common flavonoid class including flavones (baicalein), flavonols (quercetin), flavonones (hesperetin), flavononols (silibinin), isoflavonoids (biochanin A), and flavanols (catechin) showed excellent linear dose-responses in the general range of 0.1-10.0 mM. For most flavonoids, the detection limit was about 0.1 mM in this assay. The recoveries of quercetin from spiked samples of apples and red peppers were 96.5 +/- 1.4% (CV = 1.4%, n = 4) and 99.0 +/- 4.2% (CV = 4.2%, n = 4), respectively. The recovery of catechin from spiked samples of cranberry extracts was 97.9 +/- 2.0% (CV = 2.0%, n = 4). The total flavonoids of selected common fruits and vegetables were measured using this assay. Among the samples tested, blueberry had the highest total flavonoid content (689.5 +/- 10.7 mg of catechin equiv per 100 g of sample), followed by cranberry, apple, broccoli, and red pepper. This novel SBC total flavonoid assay can be widely used to measure the total flavonoid content of fruits, vegetables, whole grains, herbal products, dietary supplements, and nutraceutical products. PMID:18798633

  17. The SEGUE K Giant Survey. III. Quantifying Galactic Halo Substructure

    NASA Astrophysics Data System (ADS)

    Janesh, William; Morrison, Heather L.; Ma, Zhibo; Rockosi, Constance; Starkenburg, Else; Xue, Xiang Xiang; Rix, Hans-Walter; Harding, Paul; Beers, Timothy C.; Johnson, Jennifer; Lee, Young Sun; Schneider, Donald P.

    2016-01-01

    We statistically quantify the amount of substructure in the Milky Way stellar halo using a sample of 4568 halo K giant stars at Galactocentric distances ranging over 5-125 kpc. These stars have been selected photometrically and confirmed spectroscopically as K giants from the Sloan Digital Sky Survey’s Sloan Extension for Galactic Understanding and Exploration project. Using a position-velocity clustering estimator (the 4distance) and a model of a smooth stellar halo, we quantify the amount of substructure in the halo, divided by distance and metallicity. Overall, we find that the halo as a whole is highly structured. We also confirm earlier work using blue horizontal branch (BHB) stars which showed that there is an increasing amount of substructure with increasing Galactocentric radius, and additionally find that the amount of substructure in the halo increases with increasing metallicity. Comparing to resampled BHB stars, we find that K giants and BHBs have similar amounts of substructure over equivalent ranges of Galactocentric radius. Using a friends-of-friends algorithm to identify members of individual groups, we find that a large fraction (˜33%) of grouped stars are associated with Sgr, and identify stars belonging to other halo star streams: the Orphan Stream, the Cetus Polar Stream, and others, including previously unknown substructures. A large fraction of sample K giants (more than 50%) are not grouped into any substructure. We find also that the Sgr stream strongly dominates groups in the outer halo for all except the most metal-poor stars, and suggest that this is the source of the increase of substructure with Galactocentric radius and metallicity.
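
    The grouping step mentioned here is a friends-of-friends algorithm: stars closer than a linking length in the chosen metric space are linked, and connected components become groups. The sketch below shows a minimal version of that idea; the 4distance metric, the linking-length choice, and the smooth-halo comparison used in the study are not reproduced.

```python
# Minimal friends-of-friends grouping sketch.
import numpy as np
from scipy.spatial import cKDTree

def friends_of_friends(coords, linking_length):
    """coords: (n_stars, d) array in the chosen position-velocity metric space."""
    tree = cKDTree(coords)
    pairs = tree.query_pairs(r=linking_length)      # all linked pairs
    parent = list(range(len(coords)))               # union-find over pairs
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in pairs:
        parent[find(i)] = find(j)
    return np.array([find(i) for i in range(len(coords))])   # group labels
```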

  18. Quantifying realized inbreeding in wild and captive animal populations.

    PubMed

    Knief, U; Hemmrich-Stanisak, G; Wittig, M; Franke, A; Griffith, S C; Kempenaers, B; Forstmeier, W

    2015-04-01

    Most molecular measures of inbreeding do not measure inbreeding at the scale that is most relevant for understanding inbreeding depression-namely the proportion of the genome that is identical-by-descent (IBD). The inbreeding coefficient FPed obtained from pedigrees is a valuable estimator of IBD, but pedigrees are not always available, and cannot capture inbreeding loops that reach back in time further than the pedigree. We here propose a molecular approach to quantify the realized proportion of the genome that is IBD (propIBD), and we apply this method to a wild and a captive population of zebra finches (Taeniopygia guttata). In each of 948 wild and 1057 captive individuals we analyzed available single-nucleotide polymorphism (SNP) data (260 SNPs) spread over four different genomic regions in each population. This allowed us to determine whether any of these four regions was completely homozygous within an individual, which indicates IBD with high confidence. In the highly nomadic wild population, we did not find a single case of IBD, implying that inbreeding must be extremely rare (propIBD=0-0.00094, 95% CI). In the captive population, a five-generation pedigree strongly underestimated the average amount of realized inbreeding (FPed=0.013quantifying inbreeding at the individual or population level, and we show analytically that it can capture inbreeding loops that reach back up to a few hundred generations. PMID:25585923
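
    The screen described in this abstract flags a genomic region as identical-by-descent when every SNP in it is homozygous. A minimal sketch of that check is below; the 0/1/2 genotype coding and the function names are assumptions for illustration, not the authors' pipeline.

```python
# Sketch of the IBD screen: a region is flagged when no SNP is heterozygous.
# Genotypes are assumed coded 0/1/2 (alt-allele count); 1 = heterozygous.
import numpy as np

def region_is_ibd(genotypes):
    """genotypes: 1-D array of 0/1/2 calls for one region in one individual."""
    g = np.asarray(genotypes)
    return bool(np.all(g != 1))          # no heterozygous call in the region

def prop_ibd(region_genotypes):
    """Fraction of screened regions that are completely homozygous."""
    flags = [region_is_ibd(g) for g in region_genotypes]
    return float(np.mean(flags))
```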

  19. Quantifying realized inbreeding in wild and captive animal populations

    PubMed Central

    Knief, U; Hemmrich-Stanisak, G; Wittig, M; Franke, A; Griffith, S C; Kempenaers, B; Forstmeier, W

    2015-01-01

    Most molecular measures of inbreeding do not measure inbreeding at the scale that is most relevant for understanding inbreeding depression—namely the proportion of the genome that is identical-by-descent (IBD). The inbreeding coefficient FPed obtained from pedigrees is a valuable estimator of IBD, but pedigrees are not always available, and cannot capture inbreeding loops that reach back in time further than the pedigree. We here propose a molecular approach to quantify the realized proportion of the genome that is IBD (propIBD), and we apply this method to a wild and a captive population of zebra finches (Taeniopygia guttata). In each of 948 wild and 1057 captive individuals we analyzed available single-nucleotide polymorphism (SNP) data (260 SNPs) spread over four different genomic regions in each population. This allowed us to determine whether any of these four regions was completely homozygous within an individual, which indicates IBD with high confidence. In the highly nomadic wild population, we did not find a single case of IBD, implying that inbreeding must be extremely rare (propIBD=0–0.00094, 95% CI). In the captive population, a five-generation pedigree strongly underestimated the average amount of realized inbreeding (FPed=0.013quantifying inbreeding at the individual or population level, and we show analytically that it can capture inbreeding loops that reach back up to a few hundred generations. PMID:25585923

  20. Land cover change and remote sensing: Examples of quantifying spatiotemporal dynamics in tropical forests

    SciTech Connect

    Krummel, J.R.; Su, Haiping; Fox, J.; Yarnasan, S.; Ekasingh, M.

    1995-06-01

    Research on human impacts or natural processes that operate over broad geographic areas must explicitly address issues of scale and spatial heterogeneity. While the tropical forests of Southeast Asia and Mexico have been occupied and used to meet human needs for thousands of years, traditional forest management systems are currently being transformed by rapid and far-reaching demographic, political, economic, and environmental changes. The dynamics of population growth, migration into the remaining frontiers, and responses to national and international market forces result in a demand for land to produce food and fiber. These results illustrate some of the mechanisms that drive current land use changes, especially in the tropical forest frontiers. By linking the outcome of individual land use decisions and measures of landscape fragmentation and change, the aggregated results show the hierarchy of temporal and spatial events that in summation result in global changes to the most complex and sensitive biome -- tropical forests. By quantifying the spatial and temporal patterns of tropical forest change, researchers can assist policy makers by showing how landscape systems in these tropical forests are controlled by physical, biological, social, and economic parameters.

  1. Quantifying Qualitative Data Using Cognitive Maps

    ERIC Educational Resources Information Center

    Scherp, Hans-Ake

    2013-01-01

    The aim of the article is to show how substantial qualitative material consisting of graphic cognitive maps can be analysed by using digital CmapTools, Excel and SPSS. Evidence is provided of how qualitative and quantitative methods can be combined in educational research by transforming qualitative data into quantitative data to facilitate…

  2. Quantifying Particle Numbers and Mass Flux in Drifting Snow

    NASA Astrophysics Data System (ADS)

    Crivelli, Philip; Paterna, Enrico; Horender, Stefan; Lehning, Michael

    2016-06-01

    We compare two of the most common methods of quantifying mass flux, particle numbers and particle-size distribution for drifting snow events, the snow-particle counter (SPC), a laser-diode-based particle detector, and particle tracking velocimetry based on digital shadowgraphic imaging. The two methods were correlated for mass flux and particle number flux. For the SPC measurements, the device was calibrated by the manufacturer beforehand. The shadowgraphic imaging method measures particle size and velocity directly from consecutive images, and before each new test the image pixel length is newly calibrated. A calibration study with artificially scattered sand particles and glass beads provides suitable settings for the shadowgraphic imaging as well as obtaining a first correlation of the two methods in a controlled environment. In addition, using snow collected in trays during snowfall, several experiments were performed to observe drifting snow events in a cold wind tunnel. The results demonstrate a high correlation between the mass flux obtained for the calibration studies (r ≥ 0.93) and good correlation for the drifting snow experiments (r ≥ 0.81). The impact of measurement settings is discussed in order to reliably quantify particle numbers and mass flux in drifting snow. The study was designed and performed to optimize the settings of the digital shadowgraphic imaging system for both the acquisition and the processing of particles in a drifting snow event. Our results suggest that these optimal settings can be transferred to different imaging set-ups to investigate sediment transport processes.

  3. Quantifying the Nonlinear, Anisotropic Material Response of Spinal Ligaments

    NASA Astrophysics Data System (ADS)

    Robertson, Daniel J.

    Spinal ligaments may be a significant source of chronic back pain, yet they are often disregarded by the clinical community due to a lack of information with regards to their material response, and innervation characteristics. The purpose of this dissertation was to characterize the material response of spinal ligaments and to review their innervation characteristics. Review of relevant literature revealed that all of the major spinal ligaments are innervated. They cause painful sensations when irritated and provide reflexive control of the deep spinal musculature. As such, including the neurologic implications of iatrogenic ligament damage in the evaluation of surgical procedures aimed at relieving back pain will likely result in more effective long-term solutions. The material response of spinal ligaments has not previously been fully quantified due to limitations associated with standard soft tissue testing techniques. The present work presents and validates a novel testing methodology capable of overcoming these limitations. In particular, the anisotropic, inhomogeneous material constitutive properties of the human supraspinous ligament are quantified and methods for determining the response of the other spinal ligaments are presented. In addition, a method for determining the anisotropic, inhomogeneous pre-strain distribution of the spinal ligaments is presented. The multi-axial pre-strain distributions of the human anterior longitudinal ligament, ligamentum flavum and supraspinous ligament were determined using this methodology. Results from this work clearly demonstrate that spinal ligaments are not uniaxial structures, and that finite element models which account for pre-strain and incorporate ligament's complex material properties may provide increased fidelity to the in vivo condition.

  4. Quantifying commuter exposures to volatile organic compounds

    NASA Astrophysics Data System (ADS)

    Kayne, Ashleigh

    Motor-vehicles can be a predominant source of air pollution in cities. Traffic-related air pollution is often unavoidable for people who live in populous areas. Commuters may have high exposures to traffic-related air pollution as they are close to vehicle tailpipes. Volatile organic compounds (VOCs) are one class of air pollutants of concern because exposure to VOCs carries risk for adverse health effects. Specific VOCs of interest for this work include benzene, toluene, ethylbenzene, and xylenes (BTEX), which are often found in gasoline and combustion products. Although methods exist to measure time-integrated personal exposures to BTEX, there are few practical methods to measure a commuter's time-resolved BTEX exposure which could identify peak exposures that could be concealed with a time-integrated measurement. This study evaluated the ability of a photoionization detector (PID) to measure commuters' exposure to BTEX using Tenax TA samples as a reference and quantified the difference in BTEX exposure between cyclists and drivers with windows open and closed. To determine the suitability of two measurement methods (PID and Tenax TA) for use in this study, the precision, linearity, and limits of detection (LODs) for both the PID and Tenax TA measurement methods were determined in the laboratory with standard BTEX calibration gases. Volunteers commuted from their homes to their work places by cycling or driving while wearing a personal exposure backpack containing a collocated PID and Tenax TA sampler. Volunteers completed a survey and indicated if the windows in their vehicle were open or closed. Comparing pairs of exposure data from the Tenax TA and PID sampling methods determined the suitability of the PID to measure the BTEX exposures of commuters. The difference between BTEX exposures of cyclists and drivers with windows open and closed in Fort Collins was determined. Both the PID and Tenax TA measurement methods were precise and linear when evaluated in the

  6. Using Accelerometer and Gyroscopic Measures to Quantify Postural Stability

    PubMed Central

    Alberts, Jay L.; Hirsch, Joshua R.; Koop, Mandy Miller; Schindler, David D.; Kana, Daniel E.; Linder, Susan M.; Campbell, Scott; Thota, Anil K.

    2015-01-01

    Context Force platforms and 3-dimensional motion-capture systems provide an accurate method of quantifying postural stability. Substantial cost, space, time to administer, and need for trained personnel limit widespread use of biomechanical techniques in the assessment of postural stability in clinical or field environments. Objective To determine whether accelerometer and gyroscope data sampled from a consumer electronics device (iPad2) provide sufficient resolution of center-of-gravity (COG) movements to accurately quantify postural stability in healthy young people. Design Controlled laboratory study. Setting Research laboratory in an academic medical center. Patients or Other Participants A total of 49 healthy individuals (age = 19.5 ± 3.1 years, height = 167.7 ± 13.2 cm, mass = 68.5 ± 17.5 kg). Intervention(s) Participants completed the NeuroCom Sensory Organization Test (SOT) with an iPad2 affixed at the sacral level. Main Outcome Measure(s) Primary outcomes were equilibrium scores from both systems and the time series of the angular displacement of the anteroposterior COG sway during each trial. A Bland-Altman assessment for agreement was used to compare equilibrium scores produced by the NeuroCom and iPad2 devices. Limits of agreement was defined as the mean bias (NeuroCom − iPad) ± 2 standard deviations. Mean absolute percentage error and median difference between the NeuroCom and iPad2 measurements were used to evaluate how closely the real-time COG sway measured by the 2 systems tracked each other. Results The limits between the 2 devices ranged from −0.5° to 0.5° in SOT condition 1 to −2.9° to 1.3° in SOT condition 5. The largest absolute value of the measurement error within the 95% confidence intervals for all conditions was 2.9°. The mean absolute percentage error analysis indicated that the iPad2 tracked NeuroCom COG with an average error ranging from 5.87% to 10.42% of the NeuroCom measurement across SOT conditions. Conclusions The i
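
    The agreement statistics named in this record (Bland-Altman bias with limits of agreement, and mean absolute percentage error) are standard and can be sketched directly; the function names below are illustrative and the inputs are assumed to be paired NeuroCom and iPad2 measurements.

```python
# Sketch of Bland-Altman limits of agreement and MAPE for paired measurements.
import numpy as np

def bland_altman(reference, test):
    diff = np.asarray(reference, dtype=float) - np.asarray(test, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    limits = (bias - 2 * sd, bias + 2 * sd)       # limits of agreement
    return bias, limits

def mean_absolute_percentage_error(reference, test):
    ref = np.asarray(reference, dtype=float)
    return float(np.mean(np.abs((ref - np.asarray(test, dtype=float)) / ref)) * 100.0)
```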

  7. Development and comparison of a quantitative TaqMan-MGB real-time PCR assay to three other methods of quantifying vaccinia virions

    PubMed Central

    Baker, Jonathon L.; Ward, Brian M.

    2013-01-01

    Plaque assays are a widely used method to quantify stocks of viruses. Although this method is well established for titrating viral stocks, it is time-consuming and can take several days to complete. In this study, the creation and validation of a quantitative real-time PCR (qPCR) assay for enumerating virions of vaccinia virus is reported. PCR primers and a minor groove-binding probe were designed to hybridize to the DNA polymerase gene (E9L) from a number of different orthopoxviruses. The number of viral genomes determined using qPCR was similar to the results obtained using OD 260 measurements and a direct count of fluorescent virions by microscopy, indicating that all three methods are comparable in their ability to quantify virions from a purified stock. In addition, this report describes methodologies to harvest and quantify, using the qPCR assay, three of the four types of vaccinia virions produced during morphogenesis: intracellular mature virions, cell-associated enveloped virions, and extracellular enveloped virions. Using these procedures, particle-to-plaque-forming-unit ratios of 61:1, 14:1, and 6:1 were calculated for IMV, CEV, and EEV, respectively. These results show that qPCR can be used as a fast and accurate assay to quantify stocks of vaccinia virus over several orders of magnitude from both purified and unpurified stocks and should be applicable to other members of the orthopoxvirus genus. PMID:24211297
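
    The quantification logic behind this assay is a log-linear standard curve (Ct versus log10 genome copies), after which the particle-to-PFU ratio follows by dividing genome copies by the plaque titer. The sketch below illustrates that logic; slope and intercept come from user-supplied standards, and the function names are illustrative.

```python
# Sketch: Ct -> genome copies via a standard curve, then particle:PFU ratio.
import numpy as np

def fit_standard_curve(log10_copies, ct_values):
    slope, intercept = np.polyfit(log10_copies, ct_values, 1)   # Ct = a*log10(N) + b
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    return 10 ** ((ct - intercept) / slope)

def particle_to_pfu(genome_copies, pfu):
    return genome_copies / pfu     # e.g. ratios on the order of those reported above
```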

  8. A LC-MS method to quantify tenofovir urinary concentrations in treated patients.

    PubMed

    Simiele, Marco; Carcieri, Chiara; De Nicolò, Amedeo; Ariaudo, Alessandra; Sciandra, Mauro; Calcagno, Andrea; Bonora, Stefano; Di Perri, Giovanni; D'Avolio, Antonio

    2015-10-10

    Tenofovir disoproxil fumarate is a prodrug of tenofovir used in the treatment of HIV and HBV infections: it is the most widely used antiretroviral worldwide. Tenofovir is a nucleotide HIV reverse transcriptase inhibitor that has shown excellent long-term efficacy and tolerability. However, renal and bone complications (proximal tubulopathy, hypophosphatemia, decreased bone mineral density, and reduced creatinine clearance) limit its use. Tenofovir renal toxicity has been suggested to be the consequence of drug entrapment in proximal tubular cells: measuring tenofovir urinary concentrations may be a proxy of this event and may be used as a predictor of tenofovir side effects. No method was previously available for quantifying tenofovir in this matrix; therefore, the aim of this work was to validate a new LC-MS method for the quantification of urinary tenofovir. Chromatographic separation was achieved with a gradient (acetonitrile and water with formic acid 0.05%) on an Atlantis 5 μm T3, 4.6 mm × 150 mm, reversed-phase analytical column. Detection of tenofovir and the internal standard was achieved by electrospray ionization mass spectrometry in the positive ion mode. Calibration ranged from 391 to 100,000 ng/mL. The limit of quantification was 391 ng/mL and the limit of detection was 195 ng/mL. Mean recoveries of tenofovir and the internal standard were consistent and stable, while the matrix effect was low and stable. The method was tested on 35 urine samples from HIV-positive patients treated with tenofovir-based HAARTs and did not show any significant interference with antiretrovirals or other concomitantly administered drugs. All the observed concentrations in real samples fell within the calibration range, confirming that the method is suitable for routine clinical use. If confirmed in ad hoc studies, this method may be used to quantify tenofovir urinary concentrations and help in the management of HIV-positive patients treated with tenofovir. PMID:25997174

  9. New primers for detecting and quantifying denitrifying anaerobic methane oxidation archaea in different ecological niches.

    PubMed

    Ding, Jing; Ding, Zhao-Wei; Fu, Liang; Lu, Yong-Ze; Cheng, Shuk H; Zeng, Raymond J

    2015-11-01

    The significance of ANME-2d as a methane sink in the environment has been overlooked, and no study had evaluated the distribution of ANME-2d in the environment. New primers thus needed to be designed for further research. In this paper, a pair of primers (DP397F and DP569R) was designed to quantify ANME-2d. The specificity and amplification efficiency of this primer pair were acceptable. PCR amplification with another pair of primers (DP142F and DP779R) generated a single, bright targeted band from the enrichment sample, but yielded faint, multiple bands from the environmental samples. Nested PCR was conducted using the primers DP142F/DP779R in the first round and DP142F/DP569R in the second round, which generated a bright targeted band. Further phylogenetic analysis showed that these targeted bands were ANME-2d-related sequences. Real-time PCR showed that the copies of the 16S ribosomal RNA gene of ANME-2d in these samples ranged from 3.72 × 10(4) to 2.30 × 10(5) copies μg(-1) DNA, indicating that the percentage of ANME-2d was greatest in a polluted river sample and least in a rice paddy sample. These results demonstrate that the newly developed real-time PCR primers could sufficiently quantify ANME-2d and that nested PCR with an appropriate combination of the new primers could successfully detect ANME-2d in environmental samples; the latter finding suggests that ANME-2d may be widespread in the environment. PMID:26300291

  10. Hyperspectral remote sensing tools for quantifying plant litter and invasive species in arid ecosystems

    USGS Publications Warehouse

    Nagler, Pamela L.; Sridhar, B.B. Maruthi; Olsson, Aaryn Dyami; Glenn, Edward P.; van Leeuwen, Willem J.D.

    2012-01-01

    Green vegetation can be distinguished using visible and infrared multi-band and hyperspectral remote sensing methods. The problem has been in identifying the non-photosynthetically active radiation (PAR) landscape components, such as litter and soils, and distinguishing them from green vegetation. Additionally, distinguishing different species of green vegetation is challenging using the relatively few bands available on most satellite sensors. This chapter focuses on hyperspectral remote sensing characteristics that aim to distinguish between green vegetation, soil, and litter (or senescent vegetation). Quantifying litter by remote sensing methods is important in constructing carbon budgets of natural and agricultural ecosystems. Distinguishing between plant types is important in tracking the spread of invasive species. Green leaves of different species usually have similar spectra, making it difficult to distinguish between species. However, in this chapter we show that phenological differences between species can be used to detect some invasive species by their distinct patterns of greening and dormancy over an annual cycle based on hyperspectral data. Both applications require methods to quantify the non-green cellulosic fractions of plant tissues by remote sensing even in the presence of soil and green plant cover. We explore these methods and offer three case studies. The first concerns distinguishing surface litter from soil using the Cellulose Absorption Index (CAI), as applied to no-till farming practices where plant litter is left on the soil after harvest. The second involves using different band combinations to distinguish invasive saltcedar from agricultural and native riparian plants on the Lower Colorado River. The third illustrates the use of the CAI and NDVI in time-series analyses to distinguish between invasive buffelgrass and native plants in a desert environment in Arizona. Together the results show how hyperspectral imagery can be applied to
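
    The two indices named in this chapter can be sketched directly from band reflectances. NDVI uses the standard red/near-infrared form; the CAI expression below follows the common three-band shortwave-infrared definition (reflectances near 2.0, 2.1 and 2.2 µm) and should be treated as an assumption rather than the authors' exact coefficients.

```python
# Sketch of NDVI and a common Cellulose Absorption Index formulation.
import numpy as np

def ndvi(red, nir):
    red, nir = np.asarray(red, float), np.asarray(nir, float)
    return (nir - red) / (nir + red)

def cellulose_absorption_index(r2000, r2100, r2200):
    # CAI ~ 100 * (0.5*(R2.0 + R2.2) - R2.1); positive values indicate litter.
    return 100.0 * (0.5 * (np.asarray(r2000, float) + np.asarray(r2200, float))
                    - np.asarray(r2100, float))
```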

  11. Quantifying and Generalizing Hydrologic Responses to Dam Regulation using a Statistical Modeling Approach

    SciTech Connect

    McManamay, Ryan A

    2014-01-01

    Despite the ubiquitous existence of dams within riverscapes, much of our knowledge about dams and their environmental effects remains context-specific. Hydrology, more than any other environmental variable, has been studied in great detail with regard to dam regulation. While much progress has been made in generalizing the hydrologic effects of regulation by large dams, many aspects of hydrology show site-specific fidelity to dam operations, small dams (including diversions), and regional hydrologic regimes. A statistical modeling framework is presented to quantify and generalize hydrologic responses to varying degrees of dam regulation. Specifically, the objectives were to 1) compare the effects of local versus cumulative dam regulation, 2) determine the importance of different regional hydrologic regimes in influencing hydrologic responses to dams, and 3) evaluate how different regulation contexts lead to error in predicting hydrologic responses to dams. Overall, model performance was poor in quantifying the magnitude of hydrologic responses, but performance was sufficient in classifying hydrologic responses as negative or positive. Responses of some hydrologic indices to dam regulation were highly dependent upon hydrologic class membership and the purpose of the dam. The opposing coefficients between local and cumulative-dam predictors suggested that hydrologic responses to cumulative dam regulation are complex, and predicting the hydrology downstream of individual dams, as opposed to multiple dams, may be more easily accomplished using statistical approaches. Results also suggested that particular contexts, including multipurpose dams, high cumulative regulation by multiple dams, diversions, close proximity to dams, and certain hydrologic classes are all sources of increased error when predicting hydrologic responses to dams. Statistical models, such as the ones presented herein, show promise in their ability to model the effects of dam regulation at

  12. QUANTIFYING DIFFUSIVE MASS TRANSFER IN FRACTURED SHALE BEDROCK

    EPA Science Inventory

    A significant limitation in defining remediation needs at contaminated sites often results from an insufficient understanding of the transport processes that control contaminant migration. The objectives of this research were to help resolve this dilemma by providing an improved...

  13. Experimental Drug for Rheumatoid Arthritis Shows Promise

    MedlinePlus

    ... news/fullstory_158076.html Experimental Drug for Rheumatoid Arthritis Shows Promise Baricitinib helped patients who failed other ... HealthDay News) -- An experimental drug to treat rheumatoid arthritis showed promise in a new six-month trial. ...

  14. Experimental Genital Herpes Drug Shows Promise

    MedlinePlus

    ... gov/medlineplus/news/fullstory_159462.html Experimental Genital Herpes Drug Shows Promise Drug lowered viral activity, recurrence ... News) -- An experimental immune-boosting treatment for genital herpes shows promise, researchers report. The drug, called GEN- ...

  15. Alzheimer's Gene May Show Effects in Childhood

    MedlinePlus

    ... https://medlineplus.gov/news/fullstory_159854.html Alzheimer's Gene May Show Effects in Childhood Brain scans reveal ... 2016 WEDNESDAY, July 13, 2016 (HealthDay News) -- A gene related to Alzheimer's disease may start to show ...

  16. Quantifying biodiversity and asymptotics for a sequence of random strings.

    PubMed

    Koyano, Hitoshi; Kishino, Hirohisa

    2010-06-01

    We present a methodology for quantifying biodiversity at the sequence level by developing the probability theory on a set of strings. Further, we apply our methodology to the problem of quantifying the population diversity of microorganisms in several extreme environments and digestive organs and reveal the relation between microbial diversity and various environmental parameters. PMID:20866445

  17. Visual Attention and Quantifier-Spreading in Heritage Russian Bilinguals

    ERIC Educational Resources Information Center

    Sekerina, Irina A.; Sauermann, Antje

    2015-01-01

    It is well established in language acquisition research that monolingual children and adult second language learners misinterpret sentences with the universal quantifier "every" and make quantifier-spreading errors that are attributed to a preference for a match in number between two sets of objects. The present Visual World eye-tracking…

  18. Shortcuts to Quantifier Interpretation in Children and Adults

    ERIC Educational Resources Information Center

    Brooks, Patricia J.; Sekerina, Irina

    2006-01-01

    Errors involving universal quantification are common in contexts depicting sets of individuals in partial, one-to-one correspondence. In this article, we explore whether quantifier-spreading errors are more common with distributive quantifiers each and every than with all. In Experiments 1 and 2, 96 children (5- to 9-year-olds) viewed pairs of…

  19. Quantifying terpenes in rumen fluid, serum, and plasma from sheep

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Determining the fate of terpenes consumed by browsing ruminants requires methods to quantify their presence in blood and rumen fluid. Our objective was to modify an existing procedure for plasma terpenes to quantify 25 structurally diverse mono- and sesquiterpenes in serum, plasma, and rumen fluid fr...

  20. Monitoring microemboli during cardiopulmonary bypass with the EDAC quantifier.

    PubMed

    Lynch, John E; Wells, Christopher; Akers, Tom; Frantz, Paul; Garrett, Donna; Scott, M Lance; Williamson, Lisa; Agnew, Barbara; Lynch, John K

    2010-09-01

    Gaseous emboli may be introduced into the bypass circuit both from the surgical field and during perfusionist interventions. While circuits provide good protection against massive air embolism, they do not remove gaseous microemboli (GME) from the bypass circuit. The purpose of this preliminary study was to assess the incidence of GME during bypass surgery and determine if increased GME counts were associated with specific events during bypass surgery. In 30 cases divided between 15 coronary artery bypass grafts and 15 valve repairs, GME were counted and sized at three locations on the bypass circuit using the EDAC Quantifier (Luna Innovations, Roanoke, VA). A mean of 45,276 GME were detected after the arterial line filter during these 30 cases, with significantly more detected (p = .04) post filter during valve cases (mean = 72,137 +/- 22,113) than coronary artery bypass graft cases (mean = 18,416 +/- 7831). GME detected post filter were significantly correlated in time with counts detected in the venous line (p < .001). Specific events associated with high counts included the initiation of cardiopulmonary bypass, heart manipulations, insertion and removal of clamps, and the administration of drugs. Global factors associated with increased counts post filter included higher venous line counts and higher post reservoir/bubble trap counts. The mean number of microemboli detected during bypass surgery was much higher than reported in other studies of emboli incidence, most likely due to the increased sensitivity of the EDAC Quantifier compared to other detection modalities. The results furthermore suggest the need for further study of the clinical significance of these microemboli and what practices may be used to reduce GME incidence. Increased in vitro testing of the air handling capability of different circuit designs, along with more clinical studies assessing best clinical practices for reducing GME activity, is recommended. PMID:21114224

  1. Quantifying Selective Pressures Driving Bacterial Evolution Using Lineage Analysis

    NASA Astrophysics Data System (ADS)

    Lambert, Guillaume; Kussell, Edo

    2015-01-01

    Organisms use a variety of strategies to adapt to their environments and maximize long-term growth potential, but quantitative characterization of the benefits conferred by the use of such strategies, as well as their impact on the whole population's rate of growth, remains challenging. Here, we use a path-integral framework that describes how selection acts on lineages—i.e., the life histories of individuals and their ancestors—to demonstrate that lineage-based measurements can be used to quantify the selective pressures acting on a population. We apply this analysis to Escherichia coli bacteria exposed to cyclical treatments of carbenicillin, an antibiotic that interferes with cell-wall synthesis and affects cells in an age-dependent manner. While the extensive characterization of the life history of thousands of cells is necessary to accurately extract the age-dependent selective pressures caused by carbenicillin, the same measurement can be recapitulated using lineage-based statistics of a single surviving cell. Population-wide evolutionary pressures can be extracted from the properties of the surviving lineages within a population, providing an alternative and efficient procedure to quantify the evolutionary forces acting on a population. Importantly, this approach is not limited to age-dependent selection, and the framework can be generalized to detect signatures of other trait-specific selection using lineage-based measurements. Our results establish a powerful way to study the evolutionary dynamics of life under selection and may be broadly useful in elucidating selective pressures driving the emergence of antibiotic resistance and the evolution of survival strategies in biological systems.

  2. Quantifying the sources of uncertainty in upper air climate variables

    NASA Astrophysics Data System (ADS)

    Eghdamirad, Sajjad; Johnson, Fiona; Woldemeskel, Fitsum; Sharma, Ashish

    2016-04-01

    Future estimates of precipitation and streamflow are of utmost interest in hydrological climate change impact assessments. Just as important as the estimate itself is the variance around the ensemble mean of the projections; this variance is defined as uncertainty in the context of this study. This uncertainty in the hydrological variables of interest is affected by uncertainty in upper air climate variables which are used in statistical downscaling of precipitation or streamflow. Here the extent of uncertainty in upper air climate variables has been assessed for a selection of commonly used atmospheric variables for downscaling, namely, geopotential height and its difference in the north-south direction, specific humidity, and eastward and northward wind speeds. In statistical downscaling, no consideration is usually given to the uncertainty of different individual variables, which can result in biases in future climate simulations. The approach of quantifying uncertainty presented here has the potential to enable modelers to better formulate downscaling approaches, leading to more accurate characterization of future precipitation and its associated uncertainty. Based on the spread of multiple-model outputs, an uncertainty measure called square root of error variance has been used to quantify the contribution of different sources of uncertainty (i.e., models, scenarios, and ensembles) in monthly future climate projections in the 21st century at the 500 hPa and 850 hPa pressure levels. It has been shown that the different climate variables and levels of the atmosphere have distinct patterns in terms of their total future uncertainty and the contributions from the three sources. Scenario and model uncertainties in general contribute reasonably evenly to total uncertainty, with smaller contributions from the initial condition ensembles.

  3. Quantifying the prevalence of frailty in English hospitals

    PubMed Central

    Soong, J; Poots, AJ; Scott, S; Donald, K; Woodcock, T; Lovett, D; Bell, D

    2015-01-01

    Objectives Population ageing has been associated with an increase in comorbid chronic disease, functional dependence, disability and associated higher health care costs. Frailty Syndromes have been proposed as a way to define this group within older persons. We explore whether frailty syndromes are a reliable methodology to quantify clinically significant frailty within hospital settings, and measure trends and geospatial variation using the English secondary care data set Hospital Episode Statistics (HES). Setting National English Secondary Care Administrative Data HES. Participants All 50 540 141 patient spells for patients over 65 years admitted to acute provider hospitals in England (January 2005 to March 2013) within HES. Primary and secondary outcome measures We explore the prevalence of Frailty Syndromes as coded by International Statistical Classification of Diseases, Injuries and Causes of Death (ICD-10) over time, and their geographic distribution across England. We examine national trends for admission spells, inpatient mortality and 30-day readmission. Results A rising trend of admission spells was noted from January 2005 to March 2013 (daily average admissions per month rising from over 2000 to over 4000). The overall prevalence of coded frailty is increasing (64 559 spells in January 2005 to 150 085 spells by January 2013). The majority of patients had a single frailty syndrome coded (10.2% vs total burden of 13.9%). Cognitive impairment and falls (including significant fracture) are the most common frailty syndromes coded within HES. Geographic variation in frailty burden was in keeping with the known distribution of prevalence of the English elderly population and the location of National Health Service (NHS) acute provider sites. Over time, in-hospital mortality has decreased (>65 years) whereas readmission rates have increased (especially >85 years). Conclusions This study provides a novel methodology to reliably quantify clinically significant frailty

  4. Quantifying higher-order correlations in a neuronal pool

    NASA Astrophysics Data System (ADS)

    Montangie, Lisandro; Montani, Fernando

    2015-03-01

    Recent experiments involving a relatively large population of neurons have shown a very significant amount of higher-order correlations. However, little is known of how these affect the integration and firing behavior of a population of neurons beyond the second order statistics. To investigate how higher-order input statistics can shape beyond-pairwise spike correlations and affect information coding in the brain, we consider a neuronal pool where each neuron fires stochastically. We develop a simple mathematically tractable model that makes it feasible to account for higher-order spike correlations in a neuronal pool with highly interconnected common inputs beyond second order statistics. In our model, correlations between neurons appear from q-Gaussian inputs into threshold neurons. The approach constitutes the natural extension of the Dichotomized Gaussian model, where the inputs to the model are just Gaussian distributed and therefore have no input interactions beyond second order. We obtain an exact analytical expression for the joint distribution of firing, quantifying the degree of higher-order spike correlations, truly emphasizing the functional aspects of higher-order statistics, as we account for beyond-second-order input correlations seen by each neuron within the pool. We determine how higher-order correlations depend on the interaction structure of the input, showing that the joint distribution of firing is skewed as the parameter q increases, inducing larger excursions of synchronized spikes. We show how input nonlinearities can shape higher-order correlations and enhance coding performance by neural populations.
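
    The q-Gaussian extension is specific to this paper, but the baseline it generalizes, the Dichotomized Gaussian model, can be sketched in a few lines: each neuron spikes when a shared-plus-private Gaussian input crosses a threshold, and the shared input alone already pushes the population spike-count variance above the independent-neuron prediction. A minimal Python sketch (all parameter values illustrative):

      import numpy as np

      rng = np.random.default_rng(0)
      n_neurons, n_trials = 50, 20000
      rho = 0.2          # pairwise input correlation (strength of the common input)
      threshold = 1.0    # firing threshold applied to the Gaussian input

      # Shared + private Gaussian inputs give each pair of neurons input correlation rho.
      shared = rng.standard_normal(n_trials)
      private = rng.standard_normal((n_trials, n_neurons))
      inputs = np.sqrt(rho) * shared[:, None] + np.sqrt(1.0 - rho) * private

      spikes = (inputs > threshold).astype(int)   # 1 = spike, 0 = silence
      counts = spikes.sum(axis=1)                 # population spike count per trial

      p = spikes.mean()
      print("firing probability per neuron:", round(p, 3))
      print("variance of population count:", round(counts.var(), 2))
      print("independent-neuron prediction:", round(n_neurons * p * (1 - p), 2))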

  5. Quantifying MCMC exploration of phylogenetic tree space.

    PubMed

    Whidden, Chris; Matsen, Frederick A

    2015-05-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  6. Quantifying the Ease of Scientific Discovery.

    PubMed

    Arbesman, Samuel

    2011-02-01

    It has long been known that scientific output proceeds on an exponential increase, or more properly, a logistic growth curve. The interplay between effort and discovery is clear, and the nature of the functional form has been thought to be due to many changes in the scientific process over time. Here I show a quantitative method for examining the ease of scientific progress, another necessary component in understanding scientific discovery. Using examples from three different scientific disciplines - mammalian species, chemical elements, and minor planets - I find the ease of discovery to conform to an exponential decay. In addition, I show how the pace of scientific discovery can be best understood as the outcome of both scientific output and ease of discovery. A quantitative study of the ease of scientific discovery in the aggregate, such as done here, has the potential to provide a great deal of insight into both the nature of future discoveries and the technical processes behind discoveries in science. PMID:22328796
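
    As a worked illustration of the kind of fit described above (synthetic data, not the article's), an exponential decay can be fitted to an "ease of discovery" series with scipy:

      import numpy as np
      from scipy.optimize import curve_fit

      rng = np.random.default_rng(1)

      # Synthetic stand-in for an "ease of discovery" proxy recorded by decade.
      years = np.arange(1800, 2000, 10)
      ease = 100.0 * np.exp(-0.015 * (years - 1800)) * (1 + 0.1 * rng.standard_normal(years.size))

      def decay(t, a, k):
          # Exponential decay of discovery ease with time.
          return a * np.exp(-k * (t - years[0]))

      (a_fit, k_fit), _ = curve_fit(decay, years, ease, p0=(100.0, 0.01))
      print(f"fitted decay rate: {k_fit:.4f} per year "
            f"(ease halves every ~{np.log(2) / k_fit:.0f} years)")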

  7. Quantifying tissue mechanical properties using photoplethysmography

    SciTech Connect

    Akl, Tony; Wilson, Mark A.; Ericson, Milton Nance; Cote, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young's Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance.
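
    The Windkessel idea referenced above can be illustrated with a minimal two-element model (compliance C and peripheral resistance R) driven by a pulsatile inflow; lowering C or raising R changes the rise and fall times of the simulated pressure pulse in the way the abstract describes. All values below are illustrative, not the authors':

      import numpy as np
      from scipy.integrate import solve_ivp

      def inflow(t, period=0.8, systole=0.3, q_max=400.0):
          # Half-sine ejection during systole, no flow during diastole (mL/s).
          phase = t % period
          return q_max * np.sin(np.pi * phase / systole) if phase < systole else 0.0

      def windkessel(t, p, R, C):
          # Two-element Windkessel: C dP/dt = Q_in(t) - P/R
          return (inflow(t) - p / R) / C

      R, C = 1.0, 1.5   # illustrative mmHg*s/mL and mL/mmHg
      t_eval = np.linspace(0.0, 4.0, 2000)
      sol = solve_ivp(windkessel, (0.0, 4.0), [80.0], t_eval=t_eval,
                      args=(R, C), max_step=0.005)

      pressure = sol.y[0]
      last_second = pressure[t_eval >= 3.0]
      print(f"simulated pulse pressure: {last_second.max() - last_second.min():.1f} mmHg")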

  8. Quantifying tissue mechanical properties using photoplethysmography.

    PubMed

    Akl, Tony J; Wilson, Mark A; Ericson, M Nance; Coté, Gerard L

    2014-07-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young's Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance. PMID:25071970

  9. Quantifying tissue mechanical properties using photoplethysmography

    PubMed Central

    Akl, Tony J.; Wilson, Mark A.; Ericson, M. Nance; Coté, Gerard L.

    2014-01-01

    Photoplethysmography (PPG) is a non-invasive optical method that can be used to detect blood volume changes in the microvascular bed of tissue. The PPG signal comprises two components; a pulsatile waveform (AC) attributed to changes in the interrogated blood volume with each heartbeat, and a slowly varying baseline (DC) combining low frequency fluctuations mainly due to respiration and sympathetic nervous system activity. In this report, we investigate the AC pulsatile waveform of the PPG pulse for ultimate use in extracting information regarding the biomechanical properties of tissue and vasculature. By analyzing the rise time of the pulse in the diastole period, we show that PPG is capable of measuring changes in the Young’s Modulus of tissue mimicking phantoms with a resolution of 4 KPa in the range of 12 to 61 KPa. In addition, the shape of the pulse can potentially be used to diagnose vascular complications by differentiating upstream from downstream complications. A Windkessel model was used to model changes in the biomechanical properties of the circulation and to test the proposed concept. The modeling data confirmed the response seen in vitro and showed the same trends in the PPG rise and fall times with changes in compliance and vascular resistance. PMID:25071970

  10. Quantifying MCMC Exploration of Phylogenetic Tree Space

    PubMed Central

    Whidden, Chris; Matsen, Frederick A.

    2015-01-01

    In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this article, we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so, we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. PMID:25631175

  11. NASA GIBS Use in Live Planetarium Shows

    NASA Astrophysics Data System (ADS)

    Emmart, C. B.

    2015-12-01

    The American Museum of Natural History's Hayden Planetarium was rebuilt in the year 2000 as an immersive theater for scientific data visualization to show the universe in context to our planet. Specific astrophysical movie productions provide the main daily programming, but interactive control software, developed at AMNH, allows immersive presentation within a data aggregation of astronomical catalogs called the Digital Universe 3D Atlas. Since 2006, WMS globe browsing capabilities have been built into a software development collaboration with Sweden's Linkoping University (LiU). The resulting Uniview software, now a product of the company SCISS, is operated by about fifty planetariums around the world with the ability to network amongst the sites for global presentations. Public presentation of NASA GIBS has allowed authoritative narratives to be presented within the range of data available in context to other sources such as Science on a Sphere, NASA Earth Observatory and Google Earth KML resources. Specifically, the NOAA-supported World Views Network conducted a series of presentations across the US that focused on local ecological issues that could then be expanded in the course of presentation to national and global scales of examination. NASA support for GIBS resources in an easy-access, multi-scale streaming format like WMS has enabled particularly facile presentations of global monitoring like never before. Global networking of theaters for distributed presentations broadens out the potential for impact of this medium. Archiving and refinement of these presentations has already begun to inform new types of documentary productions that examine pertinent, global interdependency topics.

  12. Everything, everywhere, all the time: quantifying the information gained from intensive hydrochemical sampling

    NASA Astrophysics Data System (ADS)

    Kirchner, J. W.; Neal, C.

    2011-12-01

    Catchment hydrochemical studies have suffered from a stark mismatch of measurement timescales: water fluxes are typically measured sub-hourly, but their chemical signatures are typically sampled only weekly or monthly. At the Plynlimon catchment in mid-Wales, however, precipitation and streamflow have now been sampled every seven hours for nearly two years, and analyzed for deuterium, oxygen-18, and more than 40 chemical species. This high-frequency sampling reveals temporal patterns that would be invisible in typical weekly monitoring samples. Furthermore, recent technological developments are now leading to systems that can provide measurements of rainfall and streamflow chemistry at hourly or sub-hourly intervals, similar to the time scales at which hydrometric data have long been available - and to provide these measurements for long spans of time, not just for intensive field campaigns associated with individual storms. But at what point will higher-frequency measurements become pointless, as additional measurements simply "connect the dots" between lower-frequency data points? Information Theory, dating back to the original work of Shannon and colleagues in the 1940's, provides mathematical tools for rigorously quantifying the information content of a time series. The key input data for such an analysis are the power spectrum of the measured data, and the power spectrum of the measurement noise. Here we apply these techniques to the high-frequency Plynlimon data set. The results show that, at least up to 7-hourly sampling frequency, the information content of the time series increases nearly linearly with the frequency of sampling. These results rigorously quantify what inspection of the time series visually suggests: these high-frequency data do not simply "connect the dots" between lower-frequency measurements, but instead contain a richly textured signature of dynamic behavior in catchment hydrochemistry.
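
    The Shannon-style calculation sketched in the abstract can be illustrated in a few lines: given a signal power spectrum and a measurement-noise spectrum, the information rate scales with the integral of log2(1 + S(f)/N(f)) up to the Nyquist frequency of the chosen sampling interval. The spectra below are synthetic placeholders, not the Plynlimon data:

      import numpy as np

      def information_rate(signal_psd, noise_psd, freqs):
          # Shannon-style information rate (bits per unit time) over the resolved band,
          # computed as a trapezoid integral of log2(1 + SNR(f)).
          integrand = np.log2(1.0 + signal_psd / noise_psd)
          return np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(freqs))

      freqs = np.linspace(1e-4, 0.5, 5000)        # cycles per hour, up to 1-hour Nyquist
      signal_psd = 1.0 / freqs                    # placeholder "red" catchment signal
      noise_psd = np.full_like(freqs, 5.0)        # placeholder flat measurement noise

      for dt in (24.0 * 7, 24.0, 7.0, 1.0):       # weekly, daily, 7-hourly, hourly sampling
          band = freqs <= 0.5 / dt                # Nyquist frequency for this interval
          rate = information_rate(signal_psd[band], noise_psd[band], freqs[band])
          print(f"sampling every {dt:>5.1f} h -> ~{rate:6.1f} bits per hour of record")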

  13. A field method to quantify exchange with less-mobile porosity in streambeds using electrical hysteresis

    NASA Astrophysics Data System (ADS)

    Briggs, M. A.; Day-Lewis, F. D.; Zarnetske, J. P.; Harvey, J. W.; Lane, J. W., Jr.

    2015-12-01

    Heterogeneous streambed materials may be expected to develop two general porosity domains: a more-mobile porosity dominated by advective exchange, and a less-mobile porosity dominated by diffusive exchange. Less-mobile porosity containing unique redox conditions or contaminant mass may be invisible to traditional porewater sampling methods, even using "low-flow" techniques, because these methods sample water preferentially from the mobile porosity domain. Further, most tracer breakthrough curve analyses have only provided indirect information (tailing) regarding the prevalence and connectivity of less-mobile porosity, typically over experimental flowpath scales between 1-10 meters. To address the limitations of conventional methods, we use electrical geophysical methods to aid in the inference of less-mobile porosity parameters. Unlike traditional fluid sampling, electrical methods can directly sense less-mobile solute and can target specific points along subsurface flowpaths. We demonstrate how the geophysical methodology developed for dual-domain groundwater transport can be scaled to the streambed through synthetic, laboratory column, and field experiments; further we show how previously-used numerical modeling techniques can be replaced by a more-simple analytical approach. The new analytical method is based on electrical theory, and involves characteristics of electrical hysteresis patterns (e.g. hinge point values) that are used to quantify (1) the size of paired mobile and less-mobile porosities, and (2) the exchange rate coefficient through simple curve fitting. Results from the analytical approach compare favorably with results from calibration of numerical models and also independent measurements of mobile and less-mobile porosity. Lastly, we demonstrate a method of focused solute streambed injection to quantify less-mobile porosity and explain redox zonation in contrasting stream environments.

  14. Quantifying Effects Of Water Stress On Sunflowers

    Technology Transfer Automated Retrieval System (TEKTRAN)

    This poster presentation describes the data collection and analysis procedures and results for 2009 from a research grant funded by the National Sunflower Association. The primary objective was to evaluate the use of crop canopy temperature measured with infrared temperature sensors, as a more time ...

  15. Quantifying Security Threats and Their Impact

    SciTech Connect

    Aissa, Anis Ben; Abercrombie, Robert K; Sheldon, Frederick T; Mili, Ali

    2009-04-01

    In earlier works, we present a computational infrastructure that allows an analyst to estimate the security of a system in terms of the loss that each stakeholder stands to sustain as a result of security breakdowns. In this paper we illustrate this infrastructure by means of a sample example involving an e-commerce application.

  16. Quantifying App Store Dynamics: Longitudinal Tracking of Mental Health Apps

    PubMed Central

    Nicholas, Jennifer; Christensen, Helen

    2016-01-01

    Background For many mental health conditions, mobile health apps offer the ability to deliver information, support, and intervention outside the clinical setting. However, there are difficulties with the use of a commercial app store to distribute health care resources, including turnover of apps, irrelevance of apps, and discordance with evidence-based practice. Objective The primary aim of this study was to quantify the longevity and rate of turnover of mental health apps within the official Android and iOS app stores. The secondary aim was to quantify the proportion of apps that were clinically relevant and assess whether the longevity of these apps differed from clinically nonrelevant apps. The tertiary aim was to establish the proportion of clinically relevant apps that included claims of clinical effectiveness. We also performed subgroup analyses using additional data from the app stores, including search result ranking, user ratings, and number of downloads. Methods We searched iTunes (iOS) and the Google Play (Android) app stores each day over a 9-month period for apps related to depression, bipolar disorder, and suicide. We performed additional app-specific searches if an app no longer appeared within the main search results. Results On the Android platform, 50% of the search results changed after 130 days (depression), 195 days (bipolar disorder), and 115 days (suicide). Search results were more stable on the iOS platform, with 50% of the search results remaining at the end of the study period. Approximately 75% of Android and 90% of iOS apps were still available to download at the end of the study. We identified only 35.3% (347/982) of apps as being clinically relevant for depression, of which 9 (2.6%) claimed clinical effectiveness. Only 3 included a full citation to a published study. Conclusions The mental health app environment is volatile, with a clinically relevant app for depression becoming unavailable to download every 2.9 days. This poses
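
    The turnover statistic described above (the time until 50% of an initial result list has changed) is straightforward to compute from daily snapshots of search results; the snapshots below are synthetic stand-ins for the scraped iTunes/Google Play listings:

      import numpy as np

      rng = np.random.default_rng(2)
      n_days, n_slots = 270, 100

      # Synthetic daily snapshots: each day's list of app ids, ~0.5% of slots churning per day.
      snapshots = [list(range(n_slots))]
      for _ in range(1, n_days):
          day = snapshots[-1].copy()
          for i in range(n_slots):
              if rng.random() < 0.005:
                  day[i] = int(rng.integers(100000, 999999))
          snapshots.append(day)

      day0 = set(snapshots[0])
      survival = [len(day0 & set(day)) / len(day0) for day in snapshots]
      half_turnover = next((d for d, s in enumerate(survival) if s <= 0.5), None)

      print(f"day-0 results still present on the final day: {survival[-1]:.0%}")
      print("days until 50% of the day-0 results had changed:",
            half_turnover if half_turnover is not None else f">{n_days}")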

  17. Complexity and approximability of quantified and stochastic constraint satisfaction problems

    SciTech Connect

    Hunt, H. B.; Stearns, R. L.; Marathe, M. V.

    2001-01-01

    Let D be an arbitrary (not necessarily finite) nonempty set, let C be a finite set of constant symbols denoting arbitrary elements of D, and let S be an arbitrary finite set of finite-arity relations on D. We denote the problem of determining the satisfiability of finite conjunctions of relations in S applied to variables (to variables and symbols in C) by SAT(S) (by SAT_c(S)). Here, we study simultaneously the complexity of, and the existence of efficient approximation algorithms for, a number of variants of the problems SAT(S) and SAT_c(S), and for many different D, C, and S. These problem variants include decision and optimization problems, for formulas, quantified formulas, and stochastically quantified formulas. We denote these problems by Q-SAT(S), MAX-Q-SAT(S), S-SAT(S), MAX-S-SAT(S), MAX-NSF-Q-SAT(S), and MAX-NSF-S-SAT(S). The main contribution is the development of a unified predictive theory for characterizing the complexity of these problems. Our unified approach is based on two basic concepts: (i) strongly-local replacements/reductions and (ii) relational/algebraic representability. Let k ≥ 2, and let S be a finite set of finite-arity relations on Σ_k with the following condition on S: all finite-arity relations on Σ_k can be represented as finite existentially quantified conjunctions of relations in S applied to variables (to variables and constant symbols in C). Then we prove the following new results: (1) The problems SAT(S) and SAT_c(S) are both NQL-complete and ≤_logn^bw-complete for NP. (2) The problems Q-SAT(S) and Q-SAT_c(S) are PSPACE-complete. Letting k = 2, the problems S-SAT(S) and S-SAT_c(S) are PSPACE-complete. (3) There exists ε > 0 for which approximating the problem MAX-Q-SAT(S) within ε times optimum is PSPACE-hard. Letting k = 2, there exists ε > 0 for which approximating the problem MAX-S-SAT(S) within ε times optimum is PSPACE-hard. (4

  18. Quantifying the Electrocatalytic Turnover of Vitamin B12-Mediated Dehalogenation on Single Soft Nanoparticles.

    PubMed

    Cheng, Wei; Compton, Richard G

    2016-02-12

    We report the electrocatalytic dehalogenation of trichloroethylene (TCE) by single soft nanoparticles in the form of Vitamin B12-containing droplets. We quantify the turnover number of the catalytic reaction at the single soft nanoparticle level. The kinetic data show that the binding of TCE with the electro-reduced vitamin in the Co(I) oxidation state is chemically reversible. PMID:26806226

  19. The Interpretation of Classically Quantified Sentences: A Set-Theoretic Approach

    ERIC Educational Resources Information Center

    Politzer, Guy; Van der Henst, Jean-Baptiste; Delle Luche, Claire; Noveck, Ira A.

    2006-01-01

    We present a set-theoretic model of the mental representation of classically quantified sentences (All P are Q, Some P are Q, Some P are not Q, and No P are Q). We take inclusion, exclusion, and their negations to be primitive concepts. We show that although these sentences are known to have a diagrammatic expression (in the form of the Gergonne…

  20. Quantifying and predicting Drosophila larvae crawling phenotypes.

    PubMed

    Günther, Maximilian N; Nettesheim, Guilherme; Shubeita, George T

    2016-01-01

    The fruit fly Drosophila melanogaster is a widely used model for cell biology, development, disease, and neuroscience. The fly's power as a genetic model for disease and neuroscience can be augmented by a quantitative description of its behavior. Here we show that we can accurately account for the complex and unique crawling patterns exhibited by individual Drosophila larvae using a small set of four parameters obtained from the trajectories of a few crawling larvae. The values of these parameters change for larvae from different genetic mutants, as we demonstrate for fly models of Alzheimer's disease and the Fragile X syndrome, allowing applications such as genetic or drug screens. Using the quantitative model of larval crawling developed here we use the mutant-specific parameters to robustly simulate larval crawling, which allows estimating the feasibility of laborious experimental assays and aids in their design. PMID:27323901

  1. Quantifying systematic uncertainties in supernova cosmology

    SciTech Connect

    Nordin, Jakob; Goobar, Ariel; Joensson, Jakob E-mail: ariel@physto.se

    2008-02-15

    Observations of Type Ia supernovae used to map the expansion history of the Universe suffer from systematic uncertainties that need to be propagated into the estimates of cosmological parameters. We propose an iterative Monte Carlo simulation and cosmology fitting technique (SMOCK) to investigate the impact of sources of error upon fits of the dark energy equation of state. This approach is especially useful to track the impact of non-Gaussian, correlated effects, e.g. reddening correction errors, brightness evolution of the supernovae, K-corrections, gravitational lensing, etc. While the tool is primarily aimed at studies and optimization of future instruments, we use the Gold data-set in Riess et al (2007 Astrophys. J. 659 98) to show examples of potential systematic uncertainties that could exceed the quoted statistical uncertainties.

  2. Quantifying Irregularity in Pulsating Red Giants

    NASA Astrophysics Data System (ADS)

    Percy, J. R.; Esteves, S.; Lin, A.; Menezes, C.; Wu, S.

    2009-12-01

    Hundreds of red giant variable stars are classified as “type L,” which the General Catalogue of Variable Stars (GCVS) defines as “slow irregular variables of late spectral type...which show no evidence of periodicity, or any periodicity present is very poorly defined....” Self-correlation (Percy and Muhammed 2004) is a simple form of time-series analysis which determines the cycle-to-cycle behavior of a star, averaged over all the available data. It is well suited for analyzing stars which are not strictly periodic. Even for non-periodic stars, it provides a “profile” of the variability, including the average “characteristic time” of variability. We have applied this method to twenty-three L-type variables which have been measured extensively by AAVSO visual observers. We find a continuous spectrum of behavior, from irregular to semiregular.
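
    Self-correlation, as described by Percy and Muhammed (2004), averages the absolute magnitude difference of every pair of observations as a function of their time separation; minima of that profile at multiples of a lag reveal the characteristic time scale even when the star is not strictly periodic. A minimal numpy sketch on a synthetic, irregularly sampled light curve:

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic light curve: ~100-day semi-regular cycle, irregular sampling, photometric noise.
      t = np.sort(rng.uniform(0.0, 1000.0, 400))                  # observation dates (days)
      mag = 0.4 * np.sin(2 * np.pi * t / 100.0) + 0.05 * rng.standard_normal(t.size)

      # Self-correlation profile: mean |delta mag| over all pairs, binned by time separation.
      i, j = np.triu_indices(t.size, k=1)
      dt, dmag = t[j] - t[i], np.abs(mag[j] - mag[i])
      bins = np.arange(0.0, 400.0, 10.0)
      which = np.digitize(dt, bins)
      profile = np.array([dmag[which == b].mean() for b in range(1, len(bins))])
      lags = bins[:-1] + 5.0

      # Skip the near-zero lags (noise floor) and look for the first deep minimum.
      search = (lags > 30.0) & (lags < 150.0)
      print("characteristic time scale ~", lags[search][np.argmin(profile[search])], "days")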

  3. Quantifying and predicting Drosophila larvae crawling phenotypes

    PubMed Central

    Günther, Maximilian N.; Nettesheim, Guilherme; Shubeita, George T.

    2016-01-01

    The fruit fly Drosophila melanogaster is a widely used model for cell biology, development, disease, and neuroscience. The fly’s power as a genetic model for disease and neuroscience can be augmented by a quantitative description of its behavior. Here we show that we can accurately account for the complex and unique crawling patterns exhibited by individual Drosophila larvae using a small set of four parameters obtained from the trajectories of a few crawling larvae. The values of these parameters change for larvae from different genetic mutants, as we demonstrate for fly models of Alzheimer’s disease and the Fragile X syndrome, allowing applications such as genetic or drug screens. Using the quantitative model of larval crawling developed here we use the mutant-specific parameters to robustly simulate larval crawling, which allows estimating the feasibility of laborious experimental assays and aids in their design. PMID:27323901

  4. Quantifying human response capabilities towards tsunami threats at community level

    NASA Astrophysics Data System (ADS)

    Post, J.; Mück, M.; Zosseder, K.; Wegscheider, S.; Taubenböck, H.; Strunz, G.; Muhari, A.; Anwar, H. Z.; Birkmann, J.; Gebert, N.

    2009-04-01

    besides others play a role. An attempt to quantify this variable under high uncertainty is also presented. Quantifying ET is based on GIS modelling using a Cost Weighted Distance approach. The basic principle is to define the best evacuation path from a given point to the next safe area (shelter location); the fastest path from that point to the shelter location has to be found. Thereby the impact of land cover, slope, population density, and population age and gender distribution is taken into account, as literature studies identify these factors as highly important. Knowing the fastest path and the distance to the next safe area, together with a spatially distributed pattern of evacuation speed, delivers the time needed from each location to a safe area. By considering the obtained time value for RsT, the coverage area of an evacuation target point (safe area) can be assigned. Incorporating knowledge of the people capacity of an evacuation target point, the respective coverage area is refined. Hence areas with weak, moderate and good human response capabilities can be detected. This allows calculation of the potential number of people affected (dead or injured) and the number of people dislocated. First results for Kuta (Bali) for a worst-case tsunami event give approximately 25 000 people affected when RT = 0 minutes (direct evacuation upon receiving a tsunami warning) to 120 000 when RT > ETA (no evacuation action until the tsunami hits the land). Additionally, the fastest evacuation routes to the evacuation target points can be assigned. Areas with weak response capabilities can be designated as priority areas in which to install, e.g., additional evacuation target points or to increase tsunami knowledge and awareness to promote a faster reaction time. In particular, analyzing the underlying socio-economic properties causing deficiencies in responding to a tsunami threat can yield valuable information and direct the planning of adaptation measures. Keywords: Community level, Risk and vulnerability assessment
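
    A cost-weighted-distance computation of the kind described can be sketched with scikit-image's minimum-cost-path routine: each cell carries a traversal time derived from an evacuation-speed surface, and the cumulative cost from every cell to the nearest shelter is compared against reaction time and tsunami arrival time. The grid, speeds, shelter locations, and times below are illustrative assumptions, not values from the study:

      import numpy as np
      from skimage.graph import MCP_Geometric

      cell_size = 10.0                                  # metres per grid cell
      speed = np.full((200, 200), 1.2)                  # evacuation speed (m/s) on open ground
      speed[80:120, 50:150] = 0.6                       # slower movement in a built-up strip
      cost = cell_size / speed                          # seconds to traverse each cell

      shelters = [(10, 10), (190, 180)]                 # grid indices of evacuation target points
      mcp = MCP_Geometric(cost)
      evac_time, _ = mcp.find_costs(starts=shelters)    # fastest time from every cell to a shelter

      reaction_time = 300.0                             # assumed reaction time (s)
      arrival_time = 1800.0                             # assumed tsunami arrival time, ETA (s)
      adequate = (evac_time + reaction_time) <= arrival_time
      print(f"share of cells with adequate response capability: {adequate.mean():.0%}")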

  5. Quantifying viruses and bacteria in wastewater - results, quality control, and interpretation methods

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Membrane bioreactors (MBR), used for wastewater treatment in Ohio and elsewhere in the United States, have pore sizes large enough to theoretically reduce concentrations of protozoa and bacteria, but not viruses. Sampling for viruses in wastewater is seldom done and not required. Instead, the bac...

  6. A graph-theoretic method to quantify the airline route authority

    NASA Technical Reports Server (NTRS)

    Chan, Y.

    1979-01-01

    The paper introduces a graph-theoretic method to quantify the legal statements in route certificate which specifies the airline routing restrictions. All the authorized nonstop and multistop routes, including the shortest time routes, can be obtained, and the method suggests profitable route structure alternatives to airline analysts. This method to quantify the C.A.B. route authority was programmed in a software package, Route Improvement Synthesis and Evaluation, and demonstrated in a case study with a commercial airline. The study showed the utility of this technique in suggesting route alternatives and the possibility of improvements in the U.S. route system.
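
    The graph formulation lends itself to a short sketch with networkx: authorized nonstop segments become weighted edges, enumerating simple paths recovers the authorized multistop routings, and a shortest-path query flags the fastest one. Cities and block times below are made up for illustration, not taken from the case study:

      import networkx as nx

      # Toy route-certificate graph: edges are authorized nonstop segments (block time, hours).
      G = nx.DiGraph()
      G.add_weighted_edges_from([
          ("ORD", "DEN", 2.5), ("DEN", "LAX", 2.3), ("ORD", "DFW", 2.4),
          ("DFW", "LAX", 3.0), ("ORD", "LAX", 4.3),
      ])

      # All authorized routings from ORD to LAX, nonstop and multistop.
      for path in nx.all_simple_paths(G, "ORD", "LAX"):
          hours = sum(G[u][v]["weight"] for u, v in zip(path, path[1:]))
          print(" -> ".join(path), f"({hours:.1f} h)")

      # The shortest-time routing, as a route-improvement analysis would flag it.
      best = nx.shortest_path(G, "ORD", "LAX", weight="weight")
      print("fastest authorized routing:", " -> ".join(best))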

  7. Quantifying oil filtration effects on bearing life

    NASA Technical Reports Server (NTRS)

    Needelman, William M.; Zaretsky, Erwin V.

    1991-01-01

    Rolling-element bearing life is influenced by the number, size, and material properties of particles entering the Hertzian contact of the rolling element and raceway. In general, rolling-element bearing life increases with increasing level of oil filtration. Based upon test results, two equations are presented which allow for the adjustment of bearing L10 or catalog life based upon oil filter rating. It is recommended that where no oil filtration is used, catalog life be reduced by 50 percent.

  8. Quantifying the origin of metallic glass formation

    PubMed Central

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a ‘nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless ‘fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*. PMID:26786966

  9. Quantifying data worth toward reducing predictive uncertainty

    USGS Publications Warehouse

    Dausman, A.M.; Doherty, J.; Langevin, C.D.; Sukop, M.C.

    2010-01-01

    The present study demonstrates a methodology for optimization of environmental data acquisition. Based on the premise that the worth of data increases in proportion to its ability to reduce the uncertainty of key model predictions, the methodology can be used to compare the worth of different data types, gathered at different locations within study areas of arbitrary complexity. The method is applied to a hypothetical nonlinear, variable density numerical model of salt and heat transport. The relative utilities of temperature and concentration measurements at different locations within the model domain are assessed in terms of their ability to reduce the uncertainty associated with predictions of movement of the salt water interface in response to a decrease in fresh water recharge. In order to test the sensitivity of the method to nonlinear model behavior, analyses were repeated for multiple realizations of system properties. Rankings of observation worth were similar for all realizations, indicating robust performance of the methodology when employed in conjunction with a highly nonlinear model. The analysis showed that while concentration and temperature measurements can both aid in the prediction of interface movement, concentration measurements, especially when taken in proximity to the interface at locations where the interface is expected to move, are of greater worth than temperature measurements. Nevertheless, it was also demonstrated that pairs of temperature measurements, taken in strategic locations with respect to the interface, can also lead to more precise predictions of interface movement. Journal compilation ?? 2010 National Ground Water Association.
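
    The underlying idea, that an observation is worth as much as the prediction-variance reduction it buys, has a compact linear (first-order) form: propagate the prior parameter covariance to the prediction, apply a Kalman/GLS update for a candidate observation, and compare. This is a generic sketch of that calculation with placeholder sensitivities, not the study's variable-density transport model:

      import numpy as np

      rng = np.random.default_rng(4)
      n_par = 6
      C_prior = np.diag(rng.uniform(0.5, 2.0, n_par))   # prior parameter covariance
      y = rng.standard_normal((1, n_par))               # prediction sensitivity to parameters
      H = rng.standard_normal((1, n_par))               # candidate observation sensitivity
      r = 0.1                                           # observation error variance

      def prediction_variance(C):
          return float(y @ C @ y.T)

      # Posterior parameter covariance after assimilating the candidate observation.
      K = C_prior @ H.T @ np.linalg.inv(H @ C_prior @ H.T + r)
      C_post = (np.eye(n_par) - K @ H) @ C_prior

      prior_var = prediction_variance(C_prior)
      post_var = prediction_variance(C_post)
      print(f"data worth (fractional reduction in prediction variance): {1 - post_var / prior_var:.2f}")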

  10. Quantifying Transmission Investment in Malaria Parasites

    PubMed Central

    Greischar, Megan A.; Mideo, Nicole; Read, Andrew F.; Bjørnstad, Ottar N.

    2016-01-01

    Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment. PMID:26890485

  11. Quantifying the origin of metallic glass formation.

    PubMed

    Johnson, W L; Na, J H; Demetriou, M D

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a 'nose temperature' T(*) located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless 'fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*. PMID:26786966

  12. Quantifying heat losses using aerial thermography

    SciTech Connect

    Haigh, G.A.; Pritchard, S.E.

    1980-01-01

    A theoretical model is described for calculating flat roof total heat losses and thermal conductances from aerial infrared data. Three empirical methods for estimating convective losses are described. The disagreement between the methods shows that they are prone to large (20%) errors, and that the survey should be carried out in low wind speeds, in order to minimize the effect of these errors on the calculation of total heat loss. The errors associated with knowledge of ground truth data are discussed for a high emissivity roof and three sets of environmental conditions. It is shown that the error in the net radiative loss is strongly dependent on the error in measuring the broad-band radiation incident on the roof. This is minimized for clear skies, but should be measured. Accurate knowledge of roof emissivity and the radiation reflected from the roof is shown to be less important. Simple techniques are described for measuring all three factors. Using these techniques in good conditions it should be possible to measure total heat losses to within 15%.
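
    The heat-loss budget described above splits into a net long-wave radiative term and a convective term; a minimal sketch of that split is below. The convective coefficient and sky irradiance are illustrative inputs only (the abstract stresses that the convective term and the incident broad-band radiation are the dominant error sources):

      SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

      def roof_heat_loss(t_roof_c, t_air_c, sky_irradiance, emissivity=0.95, h_conv=10.0):
          # Total flat-roof heat loss (W/m^2): net long-wave radiation plus convection.
          t_roof = t_roof_c + 273.15
          t_air = t_air_c + 273.15
          q_rad = emissivity * (SIGMA * t_roof**4 - sky_irradiance)   # net radiative loss
          q_conv = h_conv * (t_roof - t_air)                          # convective loss
          return q_rad, q_conv, q_rad + q_conv

      # Example: roof at 8 degC under a clear winter sky, air at 2 degC, low wind.
      q_rad, q_conv, q_total = roof_heat_loss(8.0, 2.0, sky_irradiance=250.0)
      print(f"radiative {q_rad:.0f} W/m^2, convective {q_conv:.0f} W/m^2, total {q_total:.0f} W/m^2")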

  13. Quantifying the origin of metallic glass formation

    NASA Astrophysics Data System (ADS)

    Johnson, W. L.; Na, J. H.; Demetriou, M. D.

    2016-01-01

    The waiting time to form a crystal in a unit volume of homogeneous undercooled liquid exhibits a pronounced minimum τX* at a `nose temperature' T* located between the glass transition temperature Tg, and the crystal melting temperature, TL. Turnbull argued that τX* should increase rapidly with the dimensionless ratio trg=Tg/TL. Angell introduced a dimensionless `fragility parameter', m, to characterize the fall of atomic mobility with temperature above Tg. Both trg and m are widely thought to play a significant role in determining τX*. Here we survey and assess reported data for TL, Tg, trg, m and τX* for a broad range of metallic glasses with widely varying τX*. By analysing this database, we derive a simple empirical expression for τX*(trg, m) that depends exponentially on trg and m, and two fitting parameters. A statistical analysis shows that knowledge of trg and m alone is therefore sufficient to predict τX* within estimated experimental errors. Surprisingly, the liquid/crystal interfacial free energy does not appear in this expression for τX*.

  14. Stretching DNA to quantify nonspecific protein binding

    NASA Astrophysics Data System (ADS)

    Goyal, Sachin; Fountain, Chandler; Dunlap, David; Family, Fereydoon; Finzi, Laura

    2012-07-01

    Nonspecific binding of regulatory proteins to DNA can be an important mechanism for target search and storage. This seems to be the case for the lambda repressor protein (CI), which maintains lysogeny after infection of E. coli. CI binds specifically at two distant regions along the viral genome and induces the formation of a repressive DNA loop. However, single-molecule imaging as well as thermodynamic and kinetic measurements of CI-mediated looping show that CI also binds to DNA nonspecifically and that this mode of binding may play an important role in maintaining lysogeny. This paper presents a robust phenomenological approach using a recently developed method based on the partition function, which allows calculation of the number of proteins bound nonspecifically to DNA from measurements of the DNA extension as a function of applied force. This approach was used to analyze several cycles of extension and relaxation of λ DNA performed at several CI concentrations to measure the dissociation constant for nonspecific binding of CI (~100 nM), and to obtain a measurement of the induced DNA compaction (~10%) by CI.

  15. Quantifying Transmission Investment in Malaria Parasites.

    PubMed

    Greischar, Megan A; Mideo, Nicole; Read, Andrew F; Bjørnstad, Ottar N

    2016-02-01

    Many microparasites infect new hosts with specialized life stages, requiring a subset of the parasite population to forgo proliferation and develop into transmission forms. Transmission stage production influences infectivity, host exploitation, and the impact of medical interventions like drug treatment. Predicting how parasites will respond to public health efforts on both epidemiological and evolutionary timescales requires understanding transmission strategies. These strategies can rarely be observed directly and must typically be inferred from infection dynamics. Using malaria as a case study, we test previously described methods for inferring transmission stage investment against simulated data generated with a model of within-host infection dynamics, where the true transmission investment is known. We show that existing methods are inadequate and potentially very misleading. The key difficulty lies in separating transmission stages produced by different generations of parasites. We develop a new approach that performs much better on simulated data. Applying this approach to real data from mice infected with a single Plasmodium chabaudi strain, we estimate that transmission investment varies from zero to 20%, with evidence for variable investment over time in some hosts, but not others. These patterns suggest that, even in experimental infections where host genetics and other environmental factors are controlled, parasites may exhibit remarkably different patterns of transmission investment. PMID:26890485

  16. Quantifying the effects of melittin on liposomes.

    PubMed

    Popplewell, J F; Swann, M J; Freeman, N J; McDonnell, C; Ford, R C

    2007-01-01

    Melittin, the soluble peptide of bee venom, has been demonstrated to induce lysis of phospholipid liposomes. We have investigated the dependence of the lytic activity of melittin on lipid composition. The lysis of liposomes, measured by following their mass and dimensions when immobilised on a solid substrate, was close to zero when the negatively charged lipids phosphatidyl glycerol or phosphatidyl serine were used as the phospholipid component of the liposome. Whilst there was significant binding of melittin to the liposomes, there was little net change in their diameter, with melittin binding reversed upon salt injection. For the zwitterionic phosphatidyl choline, the lytic ability of melittin is dependent on the degree of acyl chain unsaturation, with melittin able to induce lysis of liposomes in the liquid crystalline state, whilst those in the gel state showed strong resistance to lysis. By directly measuring the dimensions and mass changes of liposomes on exposure to melittin using Dual Polarisation Interferometry, rather than following the fluorescence of entrapped dyes, we attained further information about the initial stages of melittin binding to liposomes. PMID:17092481

  17. Research of a boundary condition quantifiable correction method in the assembly homogenization

    SciTech Connect

    Peng, L. H.; Liu, Z. H.; Zhao, J.; Li, W. H.

    2012-07-01

    The methods and codes currently used in assembly homogenization calculations mostly adopt reflection boundary conditions. The influences of real boundary conditions on the assembly homogenized parameters were analyzed. They were summarized into four quantifiable effects, and the corresponding mathematical expressions were obtained through a linearization hypothesis. Calculations on a test model showed that the result was close to the transport calculation result when the four quantifiable boundary effects were considered. This method would greatly improve the precision of a core design code that uses assembly homogenization methods, with little increase in computing time. (authors)

  18. Quantifying uncertainty in material damage from vibrational data

    SciTech Connect

    Butler, T.; Huhtala, A.; Juntunen, M.

    2015-02-15

    The response of a vibrating beam to a force depends on many physical parameters including those determined by material properties. Damage caused by fatigue or cracks results in local reductions in stiffness parameters and may drastically alter the response of the beam. Data obtained from the vibrating beam are often subject to uncertainties and/or errors typically modeled using probability densities. The goal of this paper is to estimate and quantify the uncertainty in damage modeled as a local reduction in stiffness using uncertain data. We present various frameworks and methods for solving this parameter determination problem. We also describe a mathematical analysis to determine and compute useful output data for each method. We apply the various methods in a specified sequence that allows us to interface the various inputs and outputs of these methods in order to enhance the inferences drawn from the numerical results obtained from each method. Numerical results are presented using both simulated and experimentally obtained data from physically damaged beams.

  19. Quantifying the biodiversity value of tropical primary, secondary, and plantation forests

    PubMed Central

    Barlow, J.; Gardner, T. A.; Araujo, I. S.; Ávila-Pires, T. C.; Bonaldo, A. B.; Costa, J. E.; Esposito, M. C.; Ferreira, L. V.; Hawes, J.; Hernandez, M. I. M.; Hoogmoed, M. S.; Leite, R. N.; Lo-Man-Hung, N. F.; Malcolm, J. R.; Martins, M. B.; Mestre, L. A. M.; Miranda-Santos, R.; Nunes-Gutjahr, A. L.; Overal, W. L.; Parry, L.; Peters, S. L.; Ribeiro-Junior, M. A.; da Silva, M. N. F.; da Silva Motta, C.; Peres, C. A.

    2007-01-01

    Biodiversity loss from deforestation may be partly offset by the expansion of secondary forests and plantation forestry in the tropics. However, our current knowledge of the value of these habitats for biodiversity conservation is limited to very few taxa, and many studies are severely confounded by methodological shortcomings. We examined the conservation value of tropical primary, secondary, and plantation forests for 15 taxonomic groups using a robust and replicated sample design that minimized edge effects. Different taxa varied markedly in their response to patterns of land use in terms of species richness and the percentage of species restricted to primary forest (varying from 5% to 57%), yet almost all between-forest comparisons showed marked differences in community structure and composition. Cross-taxon congruence in response patterns was very weak when evaluated using abundance or species richness data, but much stronger when using metrics based upon community similarity. Our results show that, whereas the biodiversity indicator group concept may hold some validity for several taxa that are frequently sampled (such as birds and fruit-feeding butterflies), it fails for those exhibiting highly idiosyncratic responses to tropical land-use change (including highly vagile species groups such as bats and orchid bees), highlighting the problems associated with quantifying the biodiversity value of anthropogenic habitats. Finally, although we show that areas of native regeneration and exotic tree plantations can provide complementary conservation services, we also provide clear empirical evidence demonstrating the irreplaceable value of primary forests. PMID:18003934

  20. Quantifying changes in groundwater level and chemistry in Shahrood, northeastern Iran

    NASA Astrophysics Data System (ADS)

    Ajdary, Khalil; Kazemi, Gholam A.

    2014-03-01

    Temporal changes in the quantity and chemical status of groundwater resources must be accurately quantified to aid sustainable management of aquifers. Monitoring data show that the groundwater level in Shahrood alluvial aquifer, northeastern Iran, continuously declined from 1993 to 2009, falling 11.4 m in 16 years. This constitutes a loss of 216 million m3 from the aquifer's stored groundwater reserve. Overexploitation and reduction in rainfall intensified the declining trend. In contrast, the reduced abstraction rate, the result of reduced borehole productivity (related to the reduction in saturated-zone thickness over time), slowed down the declining trend. Groundwater salinity varied substantially showing a minor rising trend. For the same 16-year period, increases were recorded in the order of 24% for electrical conductivity, 12.4% for major ions, and 9.9% for pH. This research shows that the groundwater-level declining trend was not interrupted by fluctuation in rainfall and it does not necessarily lead to water-quality deterioration. Water-level drop is greater near the aquifer's recharging boundary, while greater rates of salinity rise occur around the end of groundwater flow lines. Also, fresher groundwater experiences a greater rate of salinity increase. These findings are of significance for predicting the groundwater level and salinity of exhausted aquifers.
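
    A quick check of the storage figures quoted above follows from the relation storage loss = aquifer area x specific yield x head decline; the specific yield used below is an illustrative assumption, not a value from the study:

      head_decline = 11.4        # m, 1993-2009
      storage_loss = 216e6       # m^3, as reported

      area_times_sy = storage_loss / head_decline           # m^2 of (area x specific yield)
      print(f"implied area x specific yield: {area_times_sy / 1e6:.1f} km^2-equivalent")

      specific_yield = 0.05                                 # illustrative value only
      print(f"implied aquifer area at Sy = {specific_yield}: "
            f"{area_times_sy / specific_yield / 1e6:.0f} km^2")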

  1. Quantifying the multiple, environmental benefits of reintroducing the Eurasian Beaver

    NASA Astrophysics Data System (ADS)

    Brazier, Richard; Puttock, Alan; Graham, Hugh; Anderson, Karen; Cunliffe, Andrew; Elliott, Mark

    2016-04-01

    Beavers are ecological engineers with an ability to modify the structure and flow of fluvial systems and create complex wetland environments with dams, ponds and canals. Consequently, beaver activity has potential for river restoration, management and the provision of multiple environmental ecosystem services including biodiversity, flood risk mitigation, water quality and sustainable drinking water provision. With the current debate surrounding the reintroduction of beavers into the United Kingdom, it is critical to monitor the impact of beavers upon the environment. We have developed and implemented a monitoring strategy to quantify the impact of reintroducing the Eurasian Beaver on multiple environmental ecosystem services and river systems at a range of scales. First, the experimental design and preliminary results will be presented from the Mid-Devon Beaver Trial, where a family of beavers has been introduced to a 3 ha enclosure situated upon a first order tributary of the River Tamar. The site was instrumented to monitor the flow rate and quality of water entering and leaving the site. Additionally, the impacts of beavers upon riparian vegetation structure, water/carbon storage were investigated. Preliminary results indicate that beaver activity, particularly the building of ponds and dams, increases water storage within the landscape and moderates the river response to rainfall. Baseflow is enhanced during dry periods and storm flow is attenuated, potentially reducing the risk of flooding downstream. Initial analysis of water quality indicates that water entering the site (running off intensively managed grasslands upslope), has higher suspended sediment loads and nitrate levels, than that leaving the site, after moving through the series of beaver ponds. These results suggest beaver activity may also act as a means by which the negative impact of diffuse water pollution from agriculture can be mitigated thus providing cleaner water in rivers downstream

  2. No-Show Analysis. Final Report.

    ERIC Educational Resources Information Center

    Kalsbeek, William D.; And Others

    The National Assessment of Educational Progress Second Science Assessment No-Show Study assessed the magnitude and causation of nonresponse biases. A No-Show is defined as an individual who was selected as a sample respondent but failed to be present for the regular assessment of the 17-year-old group. The procedure whereby a sample of eligible…

  3. Effects of Talk Show Viewing on Adolescents.

    ERIC Educational Resources Information Center

    Davis, Stacy; Mares, Marie-Louise

    1998-01-01

    Investigates the effects of talk-show viewing on high-school students' social-reality beliefs. Supports the hypothesis that viewers overestimate the frequency of deviant behaviors; does not find support for the hypothesis that viewers become desensitized to the suffering of others; and finds that talk-show viewing was positively related, among…

  4. Acculturation, Cultivation, and Daytime TV Talk Shows.

    ERIC Educational Resources Information Center

    Woo, Hyung-Jin; Dominick, Joseph R.

    2003-01-01

    Explores the cultivation phenomenon among international college students in the United States by examining the connection between levels of acculturation, daytime TV talk show viewing, and beliefs about social reality. Finds that students who scored low on acculturation and watched a great deal of daytime talk shows had a more negative perception…

  5. The Physics of Equestrian Show Jumping

    ERIC Educational Resources Information Center

    Stinner, Art

    2014-01-01

    This article discusses the kinematics and dynamics of equestrian show jumping. For some time I have attended a series of show jumping events at Spruce Meadows, an international equestrian center near Calgary, Alberta, often referred to as the "Wimbledon of equestrian jumping." I have always had a desire to write an article such as this…

  6. The Language of Show Biz: A Dictionary.

    ERIC Educational Resources Information Center

    Sergel, Sherman Louis, Ed.

    This dictionary of the language of show biz provides the layman with definitions and essays on terms and expressions often used in show business. The overall pattern of selection was intended to be more rather than less inclusive, though radio, television, and film terms were deliberately omitted. Lengthy explanations are sometimes used to express…

  7. Quantifying the Consistency of Scientific Databases

    PubMed Central

    Šubelj, Lovro; Bajec, Marko; Mileva Boshkoska, Biljana; Kastrin, Andrej; Levnajić, Zoran

    2015-01-01

    Science is a social process with far-reaching impact on our modern society. In recent years, for the first time we are able to scientifically study science itself. This is enabled by the massive amounts of data on scientific publications that are increasingly becoming available. The data are contained in several databases, such as Web of Science or PubMed, maintained by various public and private entities. Unfortunately, these databases are not always consistent, which considerably hinders this study. Relying on the powerful framework of complex networks, we conduct a systematic analysis of the consistency among six major scientific databases. We find that identifying a single "best" database is far from easy. Nevertheless, our results indicate appreciable differences in the mutual consistency of different databases, which we interpret as recipes for future bibliometric studies. PMID:25984946

  8. Quantifying uncertainties in wind energy assessment

    NASA Astrophysics Data System (ADS)

    Patlakas, Platon; Galanis, George; Kallos, George

    2015-04-01

    The constant rise of wind energy production and its subsequent penetration into global energy markets during recent decades have resulted in the selection of new sites that present various types of problems. Such problems arise from the variability and uncertainty of wind speed. Studying the lower and upper tails of the wind speed distribution can support the quantification of these uncertainties. Approaches focused on extreme wind conditions or on periods below the energy production threshold are necessary for better management of operations. Towards this direction, different methodologies are presented for the credible evaluation of potential non-frequent/extreme values of these environmental conditions. The approaches used take into consideration the structural design of the wind turbines according to their lifespan, turbine failures, the time needed for repairs, as well as the energy production distribution. In this work, a multi-parametric approach for studying extreme wind speed values will be discussed, based on tools of Extreme Value Theory. In particular, the study focuses on extreme wind speed return periods and the persistence of no energy production, based on a 10-year hindcast dataset produced with a weather modeling system. More specifically, two methods (Annual Maxima and Peaks Over Threshold) were used for the estimation of extreme wind speeds and their recurrence intervals. Additionally, two different methodologies (intensity given duration and duration given intensity, both based on the Annual Maxima method) were applied to calculate the duration of extreme events, combined with their intensity as well as the event frequency. The results show that the proposed approaches converge, at least on the main findings, for each case. It is also remarkable that, despite the moderate wind speed climate of the area, several consecutive days of no energy production are observed.
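
    As an illustration of the Annual Maxima approach mentioned above, the sketch below fits a generalized extreme value (GEV) distribution to synthetic annual wind-speed maxima and computes return levels; the data, parameter values, and use of scipy are assumptions for illustration, not the authors' hindcast dataset or code.

      # Annual Maxima sketch: fit a GEV distribution to yearly wind-speed
      # maxima and estimate N-year return levels. Data are synthetic.
      import numpy as np
      from scipy.stats import genextreme

      rng = np.random.default_rng(0)
      annual_maxima = 20 + 4 * rng.gumbel(size=30)   # synthetic annual maxima (m/s)

      # scipy parameterizes the GEV with a shape c, location, and scale
      c, loc, scale = genextreme.fit(annual_maxima)

      def return_level(return_period_years):
          """Wind speed exceeded on average once every return_period_years years."""
          p_non_exceedance = 1.0 - 1.0 / return_period_years
          return genextreme.ppf(p_non_exceedance, c, loc=loc, scale=scale)

      for T in (10, 50, 100):
          print(f"{T:>3}-year return level: {return_level(T):.1f} m/s")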

  9. Quantifying defects in zeolites and zeolite membranes

    NASA Astrophysics Data System (ADS)

    Hammond, Karl Daniel

    Zeolites are crystalline aluminosilicates that are frequently used as catalysts to transform chemical feedstocks into more useful materials in a size- or shape-selective fashion; they are one of the earliest forms of nanotechnology. Zeolites can also be used, especially in the form of zeolite membranes (layers of zeolite on a support), to separate mixtures based on the size of the molecules. Recent advances have also created the possibility of using zeolites as alkaline catalysts, in addition to their traditional applications as acid catalysts and catalytic supports. Transport and catalysis in zeolites are greatly affected by physical and chemical defects. Such defects can be undesirable (in the case of zeolite membranes), or desirable (in the case of nitrogen-doped alkaline zeolites). Studying zeolites at the relevant length scales requires indirect experimental methods such as vapor adsorption or atomic-scale modeling such as electronic structure calculations. This dissertation explores both experimental and theoretical characterization of zeolites and zeolite membranes. Physical defects, important in membrane permeation, are studied using physical adsorption experiments and models of membrane transport. The results indicate that zeolite membranes can be modeled as a zeolite powder on top of a support---a "supported powder," so to speak---for the purposes of adsorption. Mesoporosity that might be expected based on permeation and confocal microscopy measurements is not observed. Chemical defects---substitutions of nitrogen for oxygen---are studied using quantum mechanical models that predict spectroscopic properties. These models provide a method for simulating the 29Si NMR spectra of nitrogen-defected zeolites. They also demonstrate that nitrogen substitutes into the zeolite framework (not just on the surface) under the proper reaction conditions. The results of these studies will be valuable to experimentalists and theorists alike in our efforts to understand the

  10. Quantifying the leakage of quantum protocols for classical two-party cryptography

    NASA Astrophysics Data System (ADS)

    Salvail, Louis; Schaffner, Christian; Sotáková, Miroslava

    2015-12-01

    We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer.

  11. Quantifying the leakage of quantum protocols for classical two-party cryptography

    NASA Astrophysics Data System (ADS)

    Salvail, Louis; Schaffner, Christian; Sotáková, Miroslava

    2014-12-01

    We study quantum protocols among two distrustful parties. By adopting a rather strict definition of correctness — guaranteeing that honest players obtain their correct outcomes only — we can show that every strictly correct quantum protocol implementing a non-trivial classical primitive necessarily leaks information to a dishonest player. This extends known impossibility results to all non-trivial primitives. We provide a framework for quantifying this leakage and argue that leakage is a good measure for the privacy provided to the players by a given protocol. Our framework also covers the case where the two players are helped by a trusted third party. We show that despite the help of a trusted third party, the players cannot amplify the cryptographic power of any primitive. All our results hold even against quantum honest-but-curious adversaries who honestly follow the protocol but purify their actions and apply a different measurement at the end of the protocol. As concrete examples, we establish lower bounds on the leakage of standard universal two-party primitives such as oblivious transfer.

  12. Quantified Energy Dissipation Rates in the Terrestrial Bow Shock. 1.; Analysis Techniques and Methodology

    NASA Technical Reports Server (NTRS)

    Wilson, L. B., III; Sibeck, D. G.; Breneman, A.W.; Le Contel, O.; Cully, C.; Turner, D. L.; Angelopoulos, V.; Malaspina, D. M.

    2014-01-01

    We present a detailed outline and discussion of the analysis techniques used to compare the relevance of different energy dissipation mechanisms at collisionless shock waves. We show that the low-frequency, quasi-static fields contribute less to ohmic energy dissipation, -j · E (the negative of the current density dotted with the measured electric field), than their high-frequency counterparts. In fact, we found that high-frequency, large-amplitude (greater than 100 millivolts per meter and/or greater than 1 nanotesla) waves are ubiquitous in the transition region of collisionless shocks. We quantitatively show that their fields, through wave-particle interactions, cause enough energy dissipation to regulate the global structure of collisionless shocks. The purpose of this paper, part one of two, is to outline and describe in detail the background, analysis techniques, and theoretical motivation for our new results presented in the companion paper. The companion paper presents the results of our quantitative energy dissipation rate estimates and discusses the implications. Together, the two manuscripts present the first study quantifying the contribution that high-frequency waves provide, through wave-particle interactions, to the total energy dissipation budget of collisionless shock waves.
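
    As a small illustration of the ohmic dissipation term quoted above, the sketch below evaluates -j · E sample by sample for two synthetic vector time series; the arrays, scalings, and sign convention follow the abstract's wording and are placeholders, not spacecraft data.

      # Ohmic dissipation rate term (-j . E): the negative dot product of the
      # current density and the measured electric field, evaluated per sample.
      import numpy as np

      rng = np.random.default_rng(1)
      j = rng.normal(size=(1000, 3)) * 1e-6    # current density [A/m^2], 3 components
      E = rng.normal(size=(1000, 3)) * 1e-2    # electric field [V/m], 3 components

      dissipation_rate = -np.einsum("ij,ij->i", j, E)   # [W/m^3] at each sample
      print(f"mean -j.E: {dissipation_rate.mean():.3e} W/m^3")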

  13. Quantifying Attachment and Antibiotic Resistance of Escherichia coli from Conventional and Organic Swine Manure.

    PubMed

    Zwonitzer, Martha R; Soupir, Michelle L; Jarboe, Laura R; Smith, Douglas R

    2016-03-01

    Broad-spectrum antibiotics are often administered to swine, contributing to the occurrence of antibiotic-resistant bacteria in their manure. During land application, the bacteria in swine manure preferentially attach to particles in the soil, affecting their transport in overland flow. However, a quantitative understanding of these attachment mechanisms is lacking, and their relationship to antibiotic resistance is unknown. The objective of this study is to examine the relationships between antibiotic resistance and attachment to very fine silica sand in Escherichia coli collected from swine manure. A total of 556 isolates were collected from six farms, two organic and four conventional (antibiotics fed prophylactically). Antibiotic resistance was quantified using 13 antibiotics at three minimum inhibitory concentrations: resistant, intermediate, and susceptible. Of the 556 isolates used in the antibiotic resistance assays, 491 were subjected to an attachment assay. Results show that isolates from conventional systems were significantly more resistant to amoxicillin, ampicillin, chlortetracycline, erythromycin, kanamycin, neomycin, streptomycin, tetracycline, and tylosin (p < 0.001). Results also indicate that E. coli isolated from conventional systems attached to very fine silica sand at significantly higher levels than those from organic systems (p < 0.001). Statistical analysis showed that a significant relationship did not exist between antibiotic resistance levels and attachment in E. coli from conventional systems but did exist for organic systems (p < 0.001). Better quantification of these relationships is critical to understanding the behavior of E. coli in the environment and preventing exposure of human populations to antibiotic-resistant bacteria. PMID:27065408

  14. A numerical analysis on the applicability of the water level fluctuation method for quantifying groundwater recharge

    NASA Astrophysics Data System (ADS)

    Koo, M.; Lee, D.

    2002-12-01

    The water table fluctuation (WTF) method is a conventional method for quantifying groundwater recharge by multiplying the specific yield by the water-level rise. Based on the van Genuchten model, an analytical relationship between groundwater recharge and the water-level rise is derived. The equation is used to analyze the effects of the depth to the water level and the soil properties on the recharge estimate obtained with the WTF method. The results show that the WTF method is reliable when applied to aquifers of fluvial sand provided the water table is below 1 m depth. However, if it is applied to silt loam with a water table depth ranging from 4 to 10 m, the recharge is overestimated by 30-80%, and the error increases drastically as the water table becomes shallower. A 2-D unconfined flow model with a time series of the recharge rate is developed. It is used to elucidate the errors of the WTF method, which is implicitly based on a tank model in which horizontal flow in the saturated zone is ignored. Simulations show that the recharge estimated by the WTF method is underestimated for observation wells near the discharge boundary. This is because the hydraulic stress resulting from the recharge is rapidly dissipated by horizontal flow near the discharge boundary. Simulations also reveal that the recharge is significantly underestimated with increasing hydraulic conductivity and recharge duration, and with decreasing specific yield.
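
    The core of the WTF method described above is the product of specific yield and water-level rise; the minimal sketch below applies it to a single recharge event with illustrative numbers (the specific yield and rise are assumptions, not values from the study).

      # Water table fluctuation (WTF) method for a single recharge event:
      # recharge ~ specific yield x water-level rise. Values are illustrative.
      specific_yield = 0.12          # assumed specific yield of a fluvial sand
      water_level_rise_m = 0.35      # observed rise during the event (m)

      recharge_m = specific_yield * water_level_rise_m
      print(f"Estimated recharge: {recharge_m * 1000:.0f} mm")   # ~42 mm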

  15. An iterative algorithm to quantify factors influencing peptide fragmentation during tandem mass spectrometry.

    PubMed

    Yu, Chungong; Lin, Yu; Sun, Shiwei; Cai, Jinjin; Zhang, Jingfen; Bu, Dongbo; Zhang, Zhuo; Chen, Runsheng

    2007-04-01

    In protein identification by tandem mass spectrometry, it is critical to accurately predict the theoretical spectrum for a peptide sequence. To date, the widely used database searching methods have adopted simple statistical models for this prediction. For some peptides, these models yield a theoretical spectrum that deviates significantly from the experimental one. In this paper, in order to derive an improved prediction model, we utilized a non-linear programming model to quantify the factors affecting peptide fragmentation. An iterative algorithm was then proposed to solve this optimization problem. On a training set of 1803 spectra, the results showed good agreement with known principles of peptide fragmentation, such as the tendency to cleave at the middle of the peptide and the preference of Pro for N-terminal cleavage. Moreover, on a test set of 941 spectra, comparison of the predicted spectra against the experimental ones showed that this method generates reasonable predictions. The results in this paper can offer help to both database searching and de novo methods. PMID:17589963

  16. Quantifying the Chemical Weathering Efficiency of Basaltic Catchments

    NASA Astrophysics Data System (ADS)

    Ibarra, D. E.; Caves, J. K.; Thomas, D.; Chamberlain, C. P.; Maher, K.

    2014-12-01

    The geographic distribution and areal extent of rock type, along with the hydrologic cycle, influence the efficiency of global silicate weathering. Here we define weathering efficiency as the production of HCO3- for a given land surface area. Modern basaltic catchments located on volcanic arcs and continental flood basalts are particularly efficient, as they account for <5% of sub-aerial bedrock but produce ~30% of the modern global weathering flux. Indeed, changes in this weathering efficiency are thought to play an important role in modulating Earth's past climate via changes in the areal extent and paleo-latitude of basaltic catchments (e.g., Deccan and Ethiopian Traps, southeast Asia basaltic terranes). We analyze paired river discharge and solute concentration data for basaltic catchments from both literature studies and the USGS NWIS database to mechanistically understand geographic and climatic influences on weathering efficiency. To quantify the chemical weathering efficiency of modern basalt catchments we use solute production equations and compare the results to global river datasets. The weathering efficiency, quantified via the Damköhler coefficient (Dw [m/yr]), is calculated from fitting concentration-discharge relationships for catchments with paired solute and discharge measurements. Most basalt catchments do not demonstrate 'chemostatic' behavior. The distribution of basalt catchment Dw values (0.194 ± 0.176 (1σ)), derived using SiO2(aq) concentrations, is significantly higher than global river Dw values (mean Dw of 0.036), indicating a greater chemical weathering efficiency. Despite high Dw values and total weathering fluxes per unit area, many basaltic catchments are producing near their predicted weathering flux limit. Thus, weathering fluxes from basaltic catchments are proportionally less responsive to increases in runoff than other lithologies. The results of other solute species (Mg2+ and Ca2+) are comparable, but are influenced both by
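
    As an illustration of fitting a Damköhler coefficient from paired concentration-discharge data, the sketch below uses a simple solute-production form C(q) = C_eq / (1 + q/Dw); the functional form, synthetic data, and parameter values are assumptions for illustration and may differ from the formulation used by the authors.

      # Fit a Damkohler coefficient Dw from synthetic concentration-discharge data
      # using the simple solute-production form C(q) = C_eq / (1 + q / Dw).
      import numpy as np
      from scipy.optimize import curve_fit

      def solute_production(q, c_eq, dw):
          """Concentration as a function of runoff q [m/yr]."""
          return c_eq / (1.0 + q / dw)

      rng = np.random.default_rng(2)
      runoff = np.linspace(0.1, 3.0, 50)                    # m/yr
      conc = solute_production(runoff, 350.0, 0.2)          # synthetic SiO2 (umol/L)
      conc *= 1 + 0.05 * rng.normal(size=runoff.size)       # measurement noise

      (c_eq_fit, dw_fit), _ = curve_fit(solute_production, runoff, conc, p0=(300.0, 0.1))
      print(f"Fitted C_eq = {c_eq_fit:.0f} umol/L, Dw = {dw_fit:.3f} m/yr")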

  17. Quantifying peak discharges for historical floods

    USGS Publications Warehouse

    Cook, J.L.

    1987-01-01

    It is usually advantageous to use information regarding historical floods, if available, to define the flood-frequency relation for a stream. Peak stages can sometimes be determined for outstanding floods that occurred many years ago, before systematic gaging of streams began. In the United States, this information is usually not available for more than 100-200 years, but in countries with long cultural histories, such as China, historical flood data are available at some sites as far back as 2,000 years or more. It is important in flood studies to be able to assign a maximum discharge rate and an associated error range to a historical flood. This paper describes the significant characteristics and uncertainties of four commonly used methods for estimating the peak discharge of a flood. These methods are: (1) rating-curve (stage-discharge relation) extension; (2) slope conveyance; (3) slope area; and (4) step backwater. Logarithmic extensions of rating curves are based on theoretical plotting techniques that result in straight-line extensions, provided that channel shape and roughness do not change significantly. The slope-conveyance and slope-area methods are based on the Manning equation, which requires specific data on channel size, shape, and roughness, as well as the water-surface slope for one or more cross-sections in a relatively straight reach of channel. The slope-conveyance method is used primarily for shaping and extending rating curves, whereas the slope-area method is used for specific floods. The step-backwater method, also based on the Manning equation, requires more cross-section data than the slope-area method, but has a water-surface-profile convergence characteristic that negates the need for a known or estimated water-surface slope. Uncertainties in calculating peak discharge for historical floods may be quite large. Various investigations have shown that errors in calculating peak discharges by the slope-area method under ideal conditions for
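
    Since the slope-conveyance and slope-area methods above both rest on the Manning equation, the sketch below shows a single-cross-section peak-discharge estimate, Q = (1/n) A R^(2/3) S^(1/2) in SI units; the channel geometry, roughness, and slope are illustrative assumptions, not data from any historical flood.

      # Manning-equation discharge for one cross-section (SI units).
      def manning_discharge(area_m2, wetted_perimeter_m, slope, n):
          """Discharge [m^3/s] from the Manning equation."""
          hydraulic_radius = area_m2 / wetted_perimeter_m
          return (1.0 / n) * area_m2 * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

      # Example: a roughly rectangular section 30 m wide, 2.5 m deep at the peak stage
      area = 30.0 * 2.5               # flow area [m^2]
      perimeter = 30.0 + 2 * 2.5      # wetted perimeter [m]
      peak_q = manning_discharge(area, perimeter, slope=0.002, n=0.035)
      print(f"Peak discharge ~ {peak_q:.0f} m^3/s")   # ~160 m^3/s with these assumptions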

  18. Comparison of Weather Shows in Eastern Europe

    NASA Astrophysics Data System (ADS)

    Najman, M.

    2009-09-01

    Television weather shows in Eastern Europe are in most cases of a high graphical standard. There is, though, a vast difference in duration and information content among the weather shows. There are a few signs and regularities by which the character of a weather show can be recognized. The main differences are caused chiefly by the income structure of the TV station: either it is a fully privately funded station relying on income from TV commercials, or it is a public-service station funded mainly by the national budget or a fixed fee/tax. There are vast differences in duration and even in the graphical presentation of the weather. The next important aspect is the supplier and/or processor of the weather information. In short, when the show is produced by the national met office, it contains more scientific terms, synoptic maps, satellite imagery, etc. If the supplier is a private meteorological company, the show is more user-friendly and lay-oriented, with fewer scientific terms. We are experiencing a massive shift in public weather knowledge and in the demand for information. In the past, weather shows consisted only of maps with weather icons. In today's world, even lay weather shows partly consist of numerical weather model outputs, which are of course designed to be understandable and graphically attractive. Outputs of numerical weather models used to be part of the daily life of professional meteorologists only; today they are a common part of the lives of ordinary people. Video samples are a part of this presentation.

  19. Feasibility of Quantifying Arterial Cerebral Blood Volume Using Multiphase Alternate Ascending/Descending Directional Navigation (ALADDIN)

    PubMed Central

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2016-01-01

    Arterial cerebral blood volume (aCBV) is associated with many physiologic and pathologic conditions. Recently, multiphase balanced steady state free precession (bSSFP) readout was introduced to measure labeled blood signals in the arterial compartment, based on the fact that the signal difference between labeled and unlabeled blood decreases with the number of RF pulses experienced, which is affected by blood velocity. In this study, we evaluated the feasibility of a new 2D inter-slice bSSFP-based arterial spin labeling (ASL) technique, termed alternate ascending/descending directional navigation (ALADDIN), to quantify aCBV using multiphase acquisition in six healthy subjects. A new kinetic model considering bSSFP RF perturbations was proposed to describe the multiphase data and thus to quantify aCBV. Since the inter-slice time delay (TD) and gap affected the distribution of labeled blood spins in the arterial and tissue compartments, we performed the experiments with two TDs (0 and 500 ms) and two gaps (300% and 450% of slice thickness) to evaluate their roles in quantifying aCBV. Comparison studies using our technique and an existing method termed arterial volume using arterial spin tagging (AVAST) were also separately performed in five subjects. At 300% gap or 500-ms TD, significant tissue perfusion signals were demonstrated, while tissue perfusion signals were minimized and arterial signals were maximized at 450% gap and 0-ms TD. ALADDIN has an advantage of visualizing bi-directional flow effects (ascending/descending) in a single experiment. Labeling efficiency (α) of inter-slice blood flow effects could be measured in the superior sagittal sinus (SSS) (20.8±3.7%) and was used for aCBV quantification. As a result of fitting to the proposed model, aCBV values in gray matter (1.4–2.3 mL/100 mL) were in good agreement with those from literature. Our technique showed high correlation with AVAST, especially when arterial signals were accentuated (i.e., when TD = 0 ms) (r = 0

  20. Feasibility of Quantifying Arterial Cerebral Blood Volume Using Multiphase Alternate Ascending/Descending Directional Navigation (ALADDIN).

    PubMed

    Kim, Ki Hwan; Choi, Seung Hong; Park, Sung-Hong

    2016-01-01

    Arterial cerebral blood volume (aCBV) is associated with many physiologic and pathologic conditions. Recently, multiphase balanced steady state free precession (bSSFP) readout was introduced to measure labeled blood signals in the arterial compartment, based on the fact that the signal difference between labeled and unlabeled blood decreases with the number of RF pulses experienced, which is affected by blood velocity. In this study, we evaluated the feasibility of a new 2D inter-slice bSSFP-based arterial spin labeling (ASL) technique, termed alternate ascending/descending directional navigation (ALADDIN), to quantify aCBV using multiphase acquisition in six healthy subjects. A new kinetic model considering bSSFP RF perturbations was proposed to describe the multiphase data and thus to quantify aCBV. Since the inter-slice time delay (TD) and gap affected the distribution of labeled blood spins in the arterial and tissue compartments, we performed the experiments with two TDs (0 and 500 ms) and two gaps (300% and 450% of slice thickness) to evaluate their roles in quantifying aCBV. Comparison studies using our technique and an existing method termed arterial volume using arterial spin tagging (AVAST) were also separately performed in five subjects. At 300% gap or 500-ms TD, significant tissue perfusion signals were demonstrated, while tissue perfusion signals were minimized and arterial signals were maximized at 450% gap and 0-ms TD. ALADDIN has an advantage of visualizing bi-directional flow effects (ascending/descending) in a single experiment. Labeling efficiency (α) of inter-slice blood flow effects could be measured in the superior sagittal sinus (SSS) (20.8±3.7%) and was used for aCBV quantification. As a result of fitting to the proposed model, aCBV values in gray matter (1.4-2.3 mL/100 mL) were in good agreement with those from literature. Our technique showed high correlation with AVAST, especially when arterial signals were accentuated (i.e., when TD = 0 ms) (r = 0